diff --git "a/PMC_clustering_501.jsonl" "b/PMC_clustering_501.jsonl" new file mode 100644--- /dev/null +++ "b/PMC_clustering_501.jsonl" @@ -0,0 +1,1848 @@ +{"text": "Missions were established in California in the eighteenth and nineteenth centuries to convert Native Americans to Christianity and enculturate them into a class of laborers for Californios (Spanish/Mexican settler). The concentration of large numbers of Native Americans at the Missions, along with the introduction of European diseases, led to serious disease problems. Medicinal supplies brought to California by the missionaries were limited in quantity. This situation resulted in an opportunity for the sharing of knowledge of medicinal plants between the Native Americans and the Mission priests. The purpose of this study is to examine the degree to which such sharing of knowledge took place and to understand factors that may have influenced the sharing of medicinal knowledge. The study also examines the sharing of medicinal knowledge between the Native Americans and the Californios following the demise of the California Missions.Two methods were employed in the study: (1) a comparison of lists of medicinal plants used by various groups prior to, during, and after the Mission period and (2) a close reading of diaries, reports, and books written by first-hand observers and modern authorities to find accounts of and identify factors influencing the exchange of medicinal information.A comparison of the lists of medicinal plants use by various groups indicated that only a small percentage of medicinal plants were shared by two or more groups. For example, none of the 265 taxa of species used by the Native Americans in pre-Mission times were imported into Spain for medicinal use and only 16 taxa were reported to have been used at the Missions. A larger sharing of information of medicinal plants took place in the post-Mission period when Native Americans were dispersed from the Missions and worked as laborers on the ranches of the Californios.Sharing of information concerning medicinal plants did occur during the Mission period, but the number of documented species\u00a0was limited. A number of possible factors discouraged this exchange. These include (1) imbalance of power between the priests and the Native Americans, (2) suppression of indigenous knowledge and medical practices by the Mission priests, (3) language barriers, (4) reduction of availability of medicinal herbs around the Mission due to introduced agricultural practices, (5) desire to protect knowledge of medicinal herbs by Native American shaman, (6) administrative structure at the Missions which left little time for direct interaction between the priests and individual Native Americans, (7) loss of knowledge of herbal medicine by the Native Americans over time at the Missions, and (8) limited transportation opportunities for reciprocal the shipment of medicinal plants between California and Spain. Three possible factors were identified that contributed to a greater sharing of information between the Native Americans and the Californios in the post-Mission period. These were (1) more one-to-one interactions between the Californios and the Native Americans, (2) many of the Californios were mestizos whose mothers or grandmothers were Native Americans, and (3) lack of pressure on the part of the Californios to suppress Native American beliefs and medicinal practices. 
Rubus (blackberries) were used to control diarrhea by people in Asia as well as by Native Americans living in different parts of North America [Rubus to combat diarrhea [Aesculus californica), a California endemic, to treat snakebites [The migration of people to North America began about 21,000-40,000\u2009years BP over a great land bridge between Siberia and Alaska . The immdiarrhea . Once inakebites . Variousakebites \u201319. ThesThe culture and economy of Native Americans were changed significantly beginning in 1769 with the European colonization of California. An integral part of the Spanish colonization process was the establishment of a system of Missions were assigned to care for the sick in these hospitals. The enfermeros used medicinal herbs and Spanish medicine to treat the neophytes. Medicinal herbs used by the Native Americans were collected from around the Missions [neophytes succumbing to both native and exotic diseases. At times of shortages of medical supplies, the priests and enfermeros exchanged knowledge of medicinal plants to broaden the supply of medicines to treat the sick [Neophytes were sometimes dispatched by the priests to collect medicinal plants from the wild (Engelhardt 1922).The California Missions were under the control of Spain from 1769 to 1821. During this time the Native Americans who were converted to Christianity at the Missions were known to as smallpox . Contagithe sick . Neophythuertas, were an essential part of the Mission landscapes. They provided growing space for food plants, as well as trees, flowers, and medicinal herbs. Plants grown in the huertas were used by both the priests and the Native Americans. The importation of seeds and other goods was curtailed after 1810 when shipping from Spain and the Spanish colonies in the New World was interrupted by the rebellion in Mexico [neophytes, but most neophytes were transferred to nearby ranches during the Mexican period (1821-1848) where they worked as laborers. Some Native Americans were paid modest salaries for their labor, while most worked for food and a place to live. Individual Native American families and extended families lived on the ranches. This was a striking contrast to the hundreds who had resided at the missions. The relocation of Native Americans to local ranches provided an opportunity for the sharing of information concerning medicinal plants between the Native Americans and the Californios.During the Mission period, seeds of plants for the mission gardens periodically arrived via ships from Europe, South America, and Mexico. Walled gardens, known as n Mexico . Mexico The secularization period ended in 1848 with the annexation of California by the USA following the war with Mexico. Following the annexation, most of the Missions were abandoned and began to fall into disrepair. Without active parishes to maintain the Missions, the old buildings fell prey to the weather. Their roofs gave way first, exposing the soluble adobe walls to the rain. Many of the old buildings were abandoned as unsafe or unsalvageable, and many were torn down. For many decades the decay of the buildings at the Missions continued until citizens began to take an interest in them and to propose their restoration. Old records, drawings, and photographs were studied to perform reconstruction of historic buildings, patios, and gardens. 
At several Missions, medicinal plants were incorporated into the restored gardens.The purpose of this study is to examine the exchange of medicinal plant information at the California Missions during the Mission and post-Mission periods. Specifically, the exchange between the Native Americans and the priests during the Mission period and the exchange between the Native Americans and the Californios during and following the secularization of the Missions. We hypothesize that an exchange of information on medicinal plants can be identified by comparing the numbers of taxa from Spain that were introduced into California and adopted for use by the Native Americans and the number of taxa from California that were introduced into Spain and adopted by Spanish citizens for medicinal purposes. Furthermore, the exchange of information concerning medicinal plants between the Native Americans and the Californios can be identified by the number of medicinal taxa from Spain and Mexico that were introduced into California and used by the Native Americans and the number of California taxa adopted for medicinal use by the Californios.Two methods were employed in this study: (1) comparison of lists of medicinal plants used by Native American in California before the Mission period, medicinal plants used in Spain, medicinal plants used in Mexico before it gained its independence from Spain, and medicinal plants used by Californios and Native Americans in the post-Mission period and (2) a close reading of diaries, journals, reports, and books written by (i) first-hand observers during the Mission and post-Mission periods and, (ii) modern anthropologists, ethnobotanists, and historians to find accounts of the sharing of information about medicinal plants and to identify reasons why an exchange of information may or may not have taken place.The lists of medicinal plants and their uses were assembled from a number of sources Table for the www.ipni.org).The data provided were grouped into 14 categories depending on the pathology they treated , 38, 44:www.floraiberica.es; www.fitoterapia.net [To determine if any California species were introduced in Spanish and/or European botanical gardens a literature review was carried conducted \u201354. Seveapia.net \u201357;.A comparison of the assembled lists identified medicinal plant taxa that were used in two different areas . If taxa native to California were reported to be used in present-day Spanish medicinal gardens, then we assumed information of the medicinal use of these plants had been shared between the Native Americans and the Spanish priest. Likewise, if taxa native to Spain were present in herb gardens at the Missions or reported to have been used by Native Americans during the post-Mission period, we assumed that sharing of knowledge had taken place.Rhamnus californica Eschsch.\u2014treat rheumatism); drinking water in which the plant material had been boiled ; application of a poultice prepared from the plant material , eating the plant or plant part Hayek\u2014treat liver ailments), bathing the skin with water in which to plant had been boiled Nutt.\u2014treat sores); rubbing dry ashes of a plant on the skin Steudel\u2014treat poison oak); chewing plant parts J. Coulter & Rose\u2014treat pain).A total of 822 taxa belonging to 136 botanical families were identified C.A. Paris, Anemopsis californica (Nutt.) Hook. & Arn, Artemisia ludoviciana Nutt, Baccharis glutinosa Pers., Cucurbita foetidissima Kunth, Equisetum arvense L., Larrea tridentata (DC.) 
Cov., Opuntia sp., Quercus sp., Rorippa nasturtium-aquaticum (L.) Hayek, Salvia sp. and Sambucus mexicana C. Presl. A significant power imbalance existed between the priests and the Native Americans.(2) Priests thought the Native Americans were savage heathens or children who knew nothing.(3) Language barriers to communication.(4) Reduction in the availability of medicinal herbs due to the elimination of Native American burning and the introduction of Spanish livestock.(5) Knowledge of medicinal plants was a source of power and income for the Native American shamans who did not want to share it.(6) Structural organization of the administration of Missions left little time for direct communication between priests and neophytes.(7) Knowledge of herbal medicine was lost at the Missions by the neophyte\u2019s children and grandchildren.(8) Transportation limitations during the Mission period may have limited reciprocal shipments of medicinal plants between Spain and California.The list of medicinal plants used both by Natives Americans and Californios indicates a much greater sharing of medicinal knowledge following the secularization of the Missions , 43. Theneophytes to collect food plants and herbs during times of shortages [The results of this study suggest limited sharing of information about medicinal plants occurred during the Mission Period. There are direct reports of the sharing of information such as the dispatching of hortages . Additiohortages ; Flora [hortages , 53, 81)hortages , 38. TheMuch more evidence was discovered in this study to suggest many possible factors contributed to constraining the sharing of information about medicinal plants. These factors and the sources of information about these factors are presented in Table neophytes hiding some information concerning medicinal plants and shaman treating neophytes out of sight of the priests [The priests maintained significant power over the Native Americans at the missions. Their power was enforced by corporal punishment and confinement of the neophytes who did not work or who behaved badly in the eyes of the priests , 82. Thi priests , 79. AnyMany of the priests regarded the Native Americans as pagan savages whose customs needed to be suppressed. Interest in or communication about native medicinal plants would have been considered a way of endorsing native beliefs that the priests were dedicated to eliminating.Language was also a barrier to communication between the priests and the Native Americans. Several quite distinct languages and dialects were spoken by Native Americans living along the California coast. Although the Mission priests were expected to learn the native languages and instruct the Native Americans in their native languages this was seldom the case . The lanThe use of land for farming and livestock grazing along with the elimination of Native American burning of the landscape resulted in fewer medicinal plants in the vicinity of the Missions , 62, 74.The power and income Native American shamans received from their use of medicinal herbs were values that they would not have wanted to give up. The shamans continued their treatment of sick Native Americans at the Missions, but not in situations where they would be observed by the priests shipments to California were for the most part halted shipment. The SpaA greater exchange of information occurred during the post-Mission Period. 
The high number of plants used for medicinal purposes might be explained by the closer working relationships that occurred on the local ranches between the Native Americans and the Californios. Furthermore, the Californios had fewer incentives to \u201cdeculturalize\u201d the Native Americans. Preparation of 46 of the herbal remedies reported by Garriga included ingredients that were not available to the Native Americans in pre-Spanish times . This suWe conclude from this study that there was a limited transfer of information on the medicinal use of plants between the Native Americans and Spanish priests during the Mission period. Many factors related to the obligations of the priests, their attitudes toward the Native Americans, language barriers, and cultural differences interfered with a more complete sharing of information. A primary factor in the lack of transfer of medicinal information between the Native Americans and the priests was the imbalance of power. This imbalance of power kept the Native Americans from sharing information. The fact that none of the 15 most commonly used California species were transported to Spain for medicinal uses presents an interesting question: were these plants not considered of superior value to the plants in Spain for the treatment of illnesses or did the Native Americans not share their knowledge of these plants with the priests? The magnitude of sharing of information about medicinal plants between the Native Americans and the Californios increased in the post-Mission Period. This increase was due to greater contact between the Native Americans and the Californios and a different relationship that existed between the two groups. Important aspects of this relationship were increased one-on-one communication, the mestizo background of the Californios, and the lack of responsibility on the part of the Californios to convert the Native Americans to Christianity."} +{"text": "In the current paper the implied interactions of the dendritic structure of the copper strand in terms of homogeneity at the cross-section of its electrical, mechanical and plastic properties, determined based on the samples taken parallelly and perpendicularly to the surface of the dendritic boundaries, were analysed. The obtained results were confronted with scanning electron microscopy (SEM) images of the fractures formed during the uniaxial tensile test. It has been observed that when the crystallites were arranged perpendicularly to the tensile direction the yield strength (YS) was lower and the fractures were brittle. On the other hand, when the crystallites were arranged parallelly to the tensile direction the fractures were plastic and elongated necking was observed along with the higher YS and total elongation values. The differences in values depend on the direction of the applied tensile force. A characteristic positioning of the Cu2O oxide particles inside the fracture depending on the crystallite alignment and the direction of the applied tensile force has been observed. The properties of copper in its solid state are strongly affected by the crystallization conditions of the liquid material. ETP grade copper (Electrolytic Tough Pitch Copper) contains oxygen, which causes Cu The authors in [2O eutectic forming along the grain boundaries in the casted strand may be a stress concentrator, which might lead to cracks at later stages of processing. 
Numerous gassings and porosities occurring in the strand when combined with Cu2O oxide constitute areas of increased risk of prospective material continuity loss and may also adversely affect the properties of copper such as electrical conductivity [2O oxide may be beneficial in terms of e.g., materials hardness [2O eutectic may also influence the type of fracture during tensile testing. However, oxygen presence in the copper structure has a numerous beneficial functions. A carefully controlled amount of oxygen throughout the manufacturing process of copper reduces the negative impact of impurities as it is involved in the formation of oxides or conglomerates of oxides of other less noble than copper elements, and therefore reducing their negative impact on material properties such as ductility and annealing by removing them from the solid solution into the form of precipitates [2O oxide at the fracture surface. The novelty of the conducted research in the current paper was also the determination of Cu2O oxide location in the casted strand and its coherence with the copper matrix. The mean shape and size of the oxide was specified as well. The identification of the initial size of the oxide will allow to analyse its evolution in further plastic working processes including but not limited to hot rolling (wire rod manufacturing) and wire drawing process of ETP grade copper. The latter being a cold working process is characterized with significant level of strengthening of copper matrix and high unit pressure value on the wall of the die approach angle.The mechanical properties such as ultimate tensile strength (UTS) and YS of the material depend on its structure, which may be controlled throughout the technological processes. However, appropriate control of the material structure is much more difficult when the processes is conducted continuously such as strand casting process of copper intended for direct processing into wire rod designated for electrical purposes (Cu-ETP). It is, in fact, a continuous casting process of copper strand directly subjected to the hot-rolling process. The strand is obtained via a mobile casting mould ,2 and dumaterial ,13. Regaobserved . Regardiobserved . Howeverc spaces ,17. The r matrix ,19. Accor matrix ,21 the Cthors in ,23 stateuctivity ,25, howehardness . One of hardness is that ipitates ,29. The 2 (60 \u00d7 120 mm) obtained via continuous casting process with average amount of oxygen equal to 200 ppm has been tested. The casted strand was obtained with the defined casting parameters which were collectively presented in An ETP grade copper strand with a cross section of 7200 mm2O oxide morphology and the type of Cu-Cu2O boundaries. The macrostructure of the copper strand is presented at 2O oxides) were performed and the spectra of characteristic X-rays were obtained.The strand was cut crosswise to the casting direction in several slices with thickness of 10 mm and was subjected to material properties analysis, in particular, macrostructural studies and micro-analysis of its chemical composition. Microstructural analysis of the copper strand using light microscope was also conducted with specific emphasis on the observations of CuElectrical conductivity test was carried out in order to determine the homogeneity of the structure and the correctness of the copper strand casting process. 
Conductivity measurements were conducted at a grid of 72 equal fields with dimensions of 10 \u00d7 10 mm marked at the cross section of the strand using SigmaTest model 2.069 , which is an eddy current device allowing accurate, non-destructive measurements of non-ferrous metals electrical conductivity based on the impedance of a measuring probe i.e., the relation between the voltage drop on the measured impedance and the current flowing in alternating current circuits. When measuring an unknown material the device converts the complex impedance value into an electrical conductivity value given in MS/m with a measuring range of the instrument determined by the calibration. The test was carried out at a constant ambient temperature of 20 \u00b0C. Absolute accuracy of the device is equal to \u00b10.5% of the measured value and the resolution is \u00b10.1% of measured value at 60 kHz frequency. Each of the 72 areas was measured three times and two different slices with 10 mm thickness were tested.2O oxides. 0) equal to approximately 60 mm2. The marked arrows indicate the direction of the tensile force during uniaxial tensile test.The strands\u2019 mechanical properties were assessed in the uniaxial tensile test using ProLine Z020 testing machine and the obtained values of UTS, YS, and elongation were in detail analysed in terms of their relation with the fracture observations concerning the morphology and concentration of Cu2O oxide undergoes segregation and according to the observations in 2O monocrystals located in the spaces between grains and dendrites, which was confirmed with light microscopy images. Characteristic is the distribution of oxide, which form lines (paths) of discrete division of structure elements or large clusters at the junction of several elements, which are flat images of spatially formed discrete clusters of oxide monocrystals and filling intracrystalline spaces/voids. The longitudinal section of the cast material, similarly to the cross section images, shows that the oxide crystallizes into large, regular clusters of monocrystals revealing various components of the copper matrix.During solidification Cu2O oxides inclusions an analysis of the microstructure was performed and the characteristic X-rays spectra were presented. In order to accurately recognize and confirm Cu2O oxides inclusions. Each of the analysed particles of Cu2O confirmed the atomic ratio of oxygen to copper to be approximately 1:3, which is in agreement with stoichiometry of the Cu2O compound. The shape has been identified to be axi-symmetrical and oval with the size of approximately 2\u20135 \u03bcm.EDX analysis of the material did not show any other than presented CuHomogeneity of the structure and therefore the correctness of the strand casting process of copper was determined by electrical conductivity test using eddy current device. The mean electrical conductivity of the material at the cross section was 56.77 MS/m with calculated standard deviation equal to 0.73. The tested material had similar electrical properties, which might indicate high homogeneity of the strand after the casting process. The crystallization front resulting from water cooling of the casted copper strand during the casting process is clearly visible at the cross section of the strand. Lower values of electrical conductivity being a consequence of oxygen macrosegregation at the axis of the crystallization fronts were recorded. 
It confirms the theoretical assumptions that at the place of contact of crystallization fronts the highest content of oxygen and interdendritic boundaries are present resulting in the reduction in electrical conductivity. Numerous accumulation of impurities in the material might lower its mechanical properties and may be the cause of materials discontinuity during its further processing.2O oxide is not wettable by copper causing the interfacial surfaces to be characterised with low mechanical strength, then in order to determine the places of Cu2O oxides crystallization and the bond strength of Cu-Cu2O at the interfacial surfaces and to define the mechanical properties of the copper strand a uniaxial tensile test was carried out using a testing machine with a maximum load of up to 2 tonnes. In order to perform the test from the prepared slices of strand samples were prepared as described in Since CuThe study focused on determining the mechanical properties of the copper strand in uniaxial tensile strength did not show significant differences in terms of UTS regardless of the direction of the applied force. When considering YS it may be stated that the measured values are higher when the dendrite orientation and tensile force are parallel and the direction of the tensile force is parallel to the shorter side of the strand and the fractures of these samples were characterized as ductile. However, when the direction of the tensile force is perpendicular to the shorter side of the strand the YS values did not show significant differences. When considering total elongation, the values are visibly higher when the direction of the tensile force is parallel to the shorter side of the strand. The average UTS was approximately 163.9 MPa and the mean YS was equal to 46.1 MPa. The divergence in elongation values of the tested samples was between 7 and 27%. Sample marked as 13 had an internal defect that made it impossible to correctly read the given values. In order to determine the cause of the differences in total elongation values SEM analyses of the fractures were conducted and the places of fracture were marked at Only one of the tested samples ruptured at the crystallization front (sample marked as 5). Prior to the conducted tests it was assumed that the crystallization front would be a place where numerous impurities in the material would accumulate and therefore the material would lose its cohesion and fracture at this area. Additionally, in both of the analysed cases it is easily noticeable that the rupture points are fairly symmetrical. The analysis of the stress-strain curves shows that the material, depending on the grain orientation had two types of characteristic courses of the curve. 2O oxides. Because of the observed and discussed previously symmetricity of the places where fractures occurred the analysis was carried out only for the first six samples in both parallel and perpendicular orientation of tensile force direction in relation to the shorter side of the strand. SEM analysis was conducted on the obtained fractures of the copper strand samples in order to determine the morphology and concentration of Cu2O oxide particles deeply embedded in the craters. In the case of samples marked as 3\u20136, 15 and 16 a higher ductile properties were observed as the crystallites were arranged parallelly to the direction of the tensile force. 
These results are in agreement with the studies conducted in [2O oxides were shallowly located in the fracture craters, however, not as numerous as in the former analysed case. Clearly visible are the surfaces of division of copper matrix confirming the location of deformation and necking formation.It may be stated that the crystallization zones strongly affect the macroscopic plastic properties of the material. Based on the obtained images presented in ucted in ,30,31. L2O oxides on the surfaces of dendrites (in interdendritic spaces) is present in the copper strand.As a result of the conducted research concerning microstructure analysis, chemical composition analysis, mechanical and electrical properties tests it was found that a discrete distribution of Cu2O oxides are not wettable by copper and therefore the interfacial surfaces are characterised with low strength properties. The analysis of own research conducted in this paper shows that the crystallographic orientation of the structure (dendrite orientation) has a huge impact on the behaviour of copper during deformation in the uniaxial tensile test.CuRegardless of the direction of the tensile force and orientation of the dendrites there were no visible changes in UTS values. However, when YS is considered it was confirmed that the measured values were higher when dendrite orientation and tensile force were aligned parallelly and the direction of the tensile force was parallel to the shorter side of the strand. The observed fractures of these samples were visibly ductile. On the other hand, when the direction of the tensile force was perpendicular to the shorter side of the strand the YS values did not show significant changes. Total elongation values were also higher when the direction of the tensile force was parallel to the shorter side of the strand.2O oxide particles is characteristic for specific fractures. Concerning brittle fractures, numerous oxides were located at the bottom of the craters while in the case of ductile fractures, the oxides were located shallowly (the oxides were pulled out of the copper matrix).The location of the Cu"} +{"text": "Eosinophil granulocytes (eosinophils) belong to the family of white blood cells that play important roles in the development of asthma and various types of allergy. Eosinophils are cells with a diameter of 12\u201317 \u00b5m and they originate from myeloid precursors. They were discovered by Paul Ehrlich in 1879 in the process of staining fixed blood smears with aniline dyes. Apoptosis (programmed cell death) is the process by which cells lose their functionality. Therefore, it is very important to study the apoptosis of eosinophils and their survival factors to understand how to develop new drugs based on the modulation of eosinophil apoptosis for the treatment of asthma and allergic diseases.In the past 10 years, the number of people in the Czech Republic with allergies has doubled to over three million. Allergic pollen catarrh, constitutional dermatitis and asthma are the allergic disorders most often diagnosed. Genuine food allergies today affect 6\u20138% of nursing infants, 3\u20135% of small children, and 2\u20134% of adults. These disorders are connected with eosinophil granulocytes and their apoptosis. Eosinophil granulocytes are postmitotic leukocytes containing a number of histotoxic substances that contribute to the initiation and continuation of allergic inflammatory reactions. 
Eosinophilia results from the disruption of the standard half-life of eosinophils by the expression of mechanisms that block the apoptosis of eosinophils, leading to the development of chronic inflammation. Glucocorticoids are used as a strong acting anti-inflammatory medicine in the treatment of hypereosinophilia. The removal of eosinophils by the mechanism of apoptosis is the effect of this process. This work sums up the contemporary knowledge concerning the apoptosis of eosinophils, its role in the aforementioned disorders, and the indications for the use of glucocorticoids in their related therapies. Eosinophils play a key role in fighting large multicellular pathogens, such as nematode parasites. Although eosinophils are capable of bactericidal phagocytosis in vitro, it is not possible to effectively prevent bacterial infection in vivo if the function of neutrophils is reduced, such as in the case of pharmaceutically-induced neutropenia or leukocyte adhesion deficiency syndrome . EosinopParagonimus westermani causing pulmonary or extrapulmonary paragonimiasis induce eosinophil apoptosis ..125].There is ever-increasing evidence that eosinophil apoptosis can by delayed in cases of allergic disorders. There are at least two mechanisms responsible for this: the increased expression of eosinophil survival factors and the disruption of death signals. The identification of molecules participating in anti-apoptotic pathways in eosinophils offer hope for the development of new drugs to reduce the number of eosinophils and, accordingly, eosinophilic inflammation in cases of allergic diseases."} +{"text": "The World Health Organization (WHO) has recommended this type of care in both developed and developing countries as soon as the premature neonate is clinically stabilized2. The virus spread rapidly and by March 2020 over 100 countries were affected3. Owing to the high contagion and fatality rate of the virus and the WHO declaration of COVID-19 as a pandemic, routine medical care was impacted and consequently the rate of KMC may have also suffered.The COVID-19 pandemic originated in the Hubei province of China in December 20194. Human milk is a unique dynamic nutrition source for the newborn during the first 6 months of life. Human milk directly contributes to innate immunity of the newborn by shaping gut microbiota and milk oligosaccharides5.Clinical evidence shows KMC could be effective in improving the newborn's neurodevelopment outcomes, stabilize preterm newborn's physiological function and decrease maternal distress following the birth. KMC is effective in the initiation of exclusive breastfeeding6.Recently, the WHO recommended that mothers and newborns should not be separated. The dyads should enable the practice of KMC even in cases of suspected or confirmed COVID-19 by using personal protective equipment and the disinfection of used surfaces7, and as such consider KMC in the neonatal wards, with the use of all related precautions.We urge clinicians, midwives and policy makers to keep neonatal care at the frontline"} +{"text": "In the Animals and housing subsection of the Materials and methods, there is an error in the sixteenth sentence of the first paragraph. 
The correct sentence is: After parturition, on post-partum day 1 (PPD 1), the size of litters with more than six puppies was adjusted by removing extra puppies by random selection to yield six puppies; the removed puppies were euthanized by intracardiac injection of sodium pentobarbital solution at 240 mg/mL and dose volume based on the standard operating procedures of the testing facility."} +{"text": "European Journal of Trauma and Emergency Surgery therefore focuses on recent aspects of chest trauma based on data from clinical and experimental studies, which underlines the relevance of bidirectional translational research.Chest trauma still represents one of the most frequent and devastating injuries after polytrauma. Besides the direct effects of the traumatic impact itself, lung integrity and function are also indirectly endangered by the systemic release of inflammatory mediators due to additional injuries of other body regions. Significant efforts have to be made to better understand the underlying pathomechanisms to improve diagnostic and therapeutic strategies. This issue of the The relevance of complications after chest trauma was a focus of the clinical studies of this issue. In a retrospective analysis, Huang et al. observed over an 8-year period that almDue to a significant heterogeneity of the trauma population, experimental studies are of utmost importance to investigate the underlying pathomechanisms of posttraumatic complications and to establish new strategies. A model was introduced by Stormann et al., who investigated contributing factors for the development of acute lung injury (ALI) in a murine double hit model . To induThese innovations in diagnosis and therapy are essential to reliably prevent the patient from further harm and to improve posttraumatic outcomes. In this context, reliable assessment of the patient status is also necessary. Caspers et al. conducted a clinical study focusing on the role of microparticles (MP) after severe trauma . The autIn conclusion, the articles presented here reflect the current problems in patients with chest trauma and offer promising experimental models to study the pathomechanisms associated with this trauma entity. Furthermore, interesting aspects regarding future directions in diagnostic and therapeutic options are illustrated.We hope you enjoy reading our selections of topics around the importance of chest trauma in the clinical and experimental settings."} +{"text": "There is an error in the author designations. Linda Gailite and Anna Miskova do not share first authorship of the work. Instead, Linda Gailite and Anna Miskova share senior authorship of the work. The publisher apologizes for the error."} +{"text": "This paper considers a mathematical model of infectious disease of SIS type. We will analyze the problem of minimizing the cost of diseases through medical treatment.Mathematical modeling of this process leads to an optimal control problem with a finite horizon. The necessary conditions for optimality are given. Using the optimality conditions we prove the existence, uniqueness and stability of the steady state for a differential equations system."} +{"text": "Concerns around assisted living (AL) quality have prompted the 2019 passage of legislation by the MN legislature, which provided funding for the development of an Assisted Living Report Card. We present results from the first two phases of this project. 
The first phase involved a national literature review of quality measures and technical advisory panels to understand the types of domains and indicators for AL quality that are measured. Nine quality domains were identified. The second phase focused on state-wide stakeholder engagement to determine priority rankings for nine AL quality domains and indicators identified. Quality of life, staff quality and resident safety were the top three domains across all stakeholder groups. The state will implement surveys of AL resident quality of life and family satisfaction as mandated by the legislature, but findings indicate that other aspects of quality such as staff-related measures and resident safety, are also important to address."} +{"text": "The pre-EMU pressure for structural adjustments and productivity gains would be restored.The coronavirus crisis has caused new distress in the European Economic and Monetary Union (EMU), as the southern part of the EMU has been hit stronger than the northern part. The common currency prevents nominal exchange rate adjustment in response to the asymmetric shock. Policymakers have therefore taken recourse to large-scale financial transfers. Based on the lessons from the German monetary union, this article proposes instead the introduction of parallel currencies to facilitate relative price changes. Parallel currencies in the south would allow an increase in competitiveness of the south via real depreciation. The introduction of a parallel currency in Germany would lead to capital inflows and a real appreciation of the"} +{"text": "Welding technology may be considered as a promising processing method for the formation of packaging products from biopolymers. However, the welding processes used can change the properties of the polymer materials, especially in the region of the weld. In this contribution, the impact of the welding process on the structure and properties of biopolymer welds and their ability to undergo hydrolytic degradation will be discussed. Samples for the study were made from polylactide (PLA) and poly (PHA) biopolymers which were welded using two methods: ultrasonic and heated tool welding. Differential scanning calorimetry (DSC) analysis showed slight changes in the thermal properties of the samples resulting from the processing and welding method used. The results of hydrolytic degradation indicated that welds of selected biopolymers started to degrade faster than unwelded parts of the samples. The structure of degradation products at the molecular level was confirmed using mass spectrometry. It was found that hydrolysis of the PLA and PHA welds occurs via the random ester bond cleavage and leads to the formation of PLA and PHA oligomers terminated by hydroxyl and carboxyl end groups, similarly to as previously observed for unwelded PLA and PHA-based materials. Biodegradable polymers are the most developed class of new sustainable materials, leading to numerous technological innovations. In recent years, it has become extremely important to design polymer packaging materials that will be safe for human health and the environment, and to find new areas in which their unique properties can be adopted . The larThese applications usually require different joining methods of the polymeric materials. Welding could be considered the most effective method for thermoplastic polymeric materials such as PLA and PHB. 
The process of welding includes activation of the two surfaces, usually by heating them to a temperature higher than the melting point of a polymer, and joining the melted surfaces through the use of force. The most popular welding methods used in practice are heated tool welding, resistance welding, and ultrasonic and high-frequency welding. Butt and overlapping welding are also multipurpose methods that can be used for PLA and PHB biopolymers ,16. KlinAlthough the influence of processing methods on the properties of the final products of PLA and PHA have previously been studied, the properties of welded products made of PLA and PHA polyesters have not been investigated in detail. Due to the fact that PLA and PHA are the most widely known biodegradable polyesters, and are currently produced on a large industrial scale, information on weld properties would be very important from an industrial application point of view.1H Nuclear Magnetic Resonance spectroscopy (NMR), Gel Permeation Chromatography (GPC), Differential Scanning Calorimetry (DSC), and Electrospray Ionization Mass Spectrometry (ESI-MS) have been used. Our approach is consistent with the concept of forensic engineering of advanced polymeric materials (FEAPM), which can help to understand the relationships between the structure of the biodegradable polymer material used, its properties, and behavior for practical applications . Th. ThTg waThe DSC trace of the starting PLA filament shows only a glass transition temperature that confirms the amorphous microstructure of this sample . The PLAmT value increase of the samples studied , the oligomers become water-soluble and diffuse into the surrounding medium. For all samples studied, a decrease of the pH values in the degradation media was observed, on account of the release of lactic or 3-hydroxybutyric acids and their low molar masses PLA and PHA oligomers terminated by hydroxyl and carboxyl end groups released from the tested samples into the water medium. This is accompanied by the disintegration of the tested solid samples (see further paragraphs). The smallest changes in pH were observed for sample PHA 3, due to the lower degradation rate of PHA than the PLA biopolymer. [gT from 1.7 to \u22122.8 \u00b0C , compared to other parts of PLA. After 84 days of incubation, PLA 1 completely lost its integrity and the degradation medium became turbid. In contrast, the surface of PLA 2 and PHA 3 and their welds remain unchanged for 14 days of hydrolytic degradation. After 42 days of incubation, the loss of weld cohesion of the weld was noticed in PLA 2; in the case of sample PHA 3, the weld bulges above the surface. However, it is worth noting that the parts of PLA 2 and PHA 3 in the vicinity of their welds did not disintegrate at that time. During further incubation, PLA 2 became swollen and prone to cracking, which resulted in the progression of disintegration of the sample and its disappearance after 182 days of degradation, similar to what was observed for PLA 1. In the case of PHA 3, the start of swelling of the weld was observed after 42 days of incubation while damages in the vicinity of the weld were only noticed after 182 days of incubation. The microscopic changes in PHA 3 were investigated using SEM analysis. As can be seen from the images of PHA 3 before degradation, weld cracks can be observed. This may be due to the internal tension or thermal treatment during welding. 
The visibly lighter color of the weld after 182 days of incubation indicates that the weld protrudes over the surface, which is in agreement with the macroscopic observation. The cracking process of the weld progresses during the incubation time, which may suggest faster degradation of the weld than the rest of PHA 3.The progress of the hydrolytic degradation process was also monitored by the analysis of the degradation media. It is well known that the hydrolysis of PLA and PHA occurs via random ester bond cleavage along the polyester chains which leads to the formation of shorter polyester chains. When the molar mass of the degraded PLA and PHA welded samples drops below a 700 g/mol, the oligomers become water-soluble and diffuse into the surrounding medium. To determine the chemical structure of water-soluble products formed during the hydrolytic degradation of PLA and PHA welded samples, mass spectrometry (ESI-MS) was used. The application of this technique in the structural studies of degradation products allowed us to detect even small amounts of the degradation products. Recently, ESI-MS techniques have been successfully used for the structural characterization at the molecular level of the degradation products of aliphatic polyesters ,35.The purpose of this research was to check whether the structure of degradation products released from welded PLA and PHA samples is the same as that previously observed in the case of unwelded PLA and PHA biopolyesters. Singly charged signals with a peak-to-peak mass increment of 72 Da, which is equal to the molar mass of the lactic acid repeating unit, were observed in this spectrum. The signals observed correspond to the sodium adducts of water-soluble lactic acid oligomers terminated by carboxyl and hydroxyl end groups was higher than for the sample PHA (3). In the case of the PLA and PHA samples, the degradation process started in the region of the weld. The structural studies of the products released to the water were carried out with the aid of ESI-MS spectrometry. The lactic acid or 3-hydroxybutyrate oligomers terminated by hydroxyl and carboxyl end groups were identified as water-soluble degradation products for samples PLA (1 and 2) and PHA (3), respectively. No changes were observed in the structure of products released to media during hydrolytic degradation of welded materials when compared with the degradation products of unwelded PLA and PHA. The current study demonstrated the need for precise evaluation of the structure, properties, and behavior of advanced biodegradable polymer packing materials in order to avoid potential failures of the commercial products to be manufactured from them."} +{"text": "The doubling of genomic DNA during the S-phase of the cell cycle involves the global remodeling of chromatin at replication forks. The present review focuses on the eviction of nucleosomes in front of the replication forks to facilitate the passage of replication machinery and the mechanism of replication-coupled chromatin assembly behind the replication forks. The recycling of parental histones as well as the nuclear import and the assembly of newly synthesized histones are also discussed with regard to the epigenetic inheritance. The doubling of genomic DNA occurring in the S-phase involves a timely regulated remodeling of the entire chromatin. 
Indeed, each replication site requires the displacement of nucleosomes in front of the fork to enable the progression of the replication machinery and the reformation of chromatin behind the replication fork ,2. The mThe formation of nucleoprotein structures between histones and DNA facilitates the organization of the eukaryotic genome within the nucleus. Moreover, the histones and, specifically, the amino-terminal tails of core histones are subjected to post-translational modifications, which are involved in genetic regulation ,12. HencImportantly, during the S-phase of the cell cycle, the replication implies the doubling of the amount of DNA within the nucleus . Hence, The complex mechanism of chromatin duplication has been extensively studied over the past few decades. Chromatin replication involves the remodeling of the entire genome once per cell cycle to facilitate the passage of replication machinery and the reassembly of nucleosomes behind the replication fork. The focus of this review is to summarize and discuss the current knowledge on the processes occurring in front of and behind the replication fork. Here, we emphasize the histone processing rather than the bevy of chaperones and factors involved .Over the past few decades, replication-coupled assembly of chromatin has been extensively studied to understand how the cell packages the genetic material after its doubling . AlthougIt is generally believed that chromatin assembly is a stepwise process wherein the H3/H4 tetramer is first loaded onto DNA, followed by the deposition of the two heterodimers of H2A/H2B . This moXenopus laevis egg extracts revealed that the fate of the parental nucleosome did not follow a unique mechanism [Even though the mechanism of the deposition of the histone octamer behind the replication fork remains unclear, in recent years, it became important to explore the mechanism of inheritance of the epigenetic marks. This has been investigated by several groups, who revealed a number of difficulties in its achievement. The synchronization of the cells is clearly a critical point, as with any experiments focused on replication studies. The analyses of the inheritance of epigenetic marks require also the discrimination of newly synthesized histones and parental ones, and neo-synthesized DNA and parental DNA. Different strategies have been exploited for studying the transmission of the histone modifications following replication, which do not ease the interpretation and comparison of the details ,82. Impoechanism . In thesechanism ,86. Thusechanism . Physarum polycephalum [While it is generally believed that histone modifications contribute to the epigenetic information, histone variants have been associated with chromatin activities and, like histone modifications, are part of the epigenetic information . Therefocephalum . More recephalum ,91.In the present overview of chromatin remodeling associated with replication, we pinpointed some concerns that have been the focus of extensive research over the past few decades. Understanding the fine details of how the chromatin states are transmitted from one generation to the next one has a broad range of applications in life sciences. This leads researchers to revisit basic questions of chromatin replication for improving our knowledge, even though the basic chromatin subunit assembled behind the replication fork is a mixture of new and parental histones, which remains elusive. 
With the increasing evidence of relationships between epigenetics and cell destiny, recent works have hinted at the inheritance of histone modifications of parental histones through replication. It would be also interesting to collect experimental data on the inheritance of the parental histone variants and determine whether the positioning of new histone variants synthesized in the S-phase results in chromatin rearrangement behind replication fork. We must keep in mind that besides the obvious dynamics of replication, the chromatin subunit is not static as histone complexes are susceptible to exchange at different rates in living cells. The perspective of experiments providing information on the kinematics of chromatin replication in the nucleus rather than snapshots offers an exciting glimpse into the mechanisms of eukaryotic genetic transmission. It is certain that the field of chromatin biosynthesis will remain a challenging topic for future investigations."} +{"text": "This research analyses straw degradation inside straw bale walls in the region and develops the prediction of degradation inside straw bale walls. The results show that the straw inside straw bale walls have no serious concerns of degradation in the high hygrothermal environment in the region with only moderate concerns of degradation in the area 2\u20133 cm deep behind the lime render. The onsite investigations indicate that the degradation isopleth model can only predict straw conditions behind the rendering layer, whereas the isothermal model fits the complete situation inside straw bale walls. This research develops the models for predicting straw degradation levels inside a straw bale building in a warm (humid) continental climate. The impact of this research will help the growth of low carbon energy efficient straw bale construction with confidence pertaining to its long-term durability characteristics both in the region and regions sharing similar climatic features globally. Straw bale construction uses agricultural co-products in the building industry . This buThe climatic regions of China are classified by the Ministry of Housing and Urban-Rural Development (MOHURD) in the GB50178-93 . The cliThe construction technique was initially introduced to northern China by the Adventist Development and Relief Agency (ADRA) in 1998 and the To assess the durability issues of straw bale construction in the warm (humid) continental climate, this research aims to analyse the straw degradation through both experiments using specimens of straw bale walls and the inspection of an experimental building constructed in northeast China. Therefore, the objectives are to develop proper predicting models for straw degradation regarding the warm (humid) continental climate and to expand the construction of straw bale buildings in wider regions globally. The research begins with both laboratorial research and onsite research of straw degradation following the analysis of predicting models for the degradation process in comparison with the laboratorial and onsite research results.There are two kinds of degradation processes within straw bale walls, namely aerobic degradation and anaerobic degradation . The avaEven though different types of rendering materials have various permeabilities for water and air transmittances, all the rendering approaches typically provide a relatively sealed environment for the straw bales . The relThe environment inside straw bale walls can protect straw from the hostile activities of microorganisms. 
Bacteria and yeasts can only duplicate in high moisture conditions, and other than this, yeasts also need light to process the biochemical reactions which are essential for the growth of microorganisms . ConsideThe hygrothermal environment within the straw bale walls and absolute moisture content of straw are two critical factors that lead to straw degradation. As such, this section reviews the existing methods for predicting the critical hygrothermal within straw bale walls and the critical moisture content of straw for straw degradation.The isopleth system was designed to describe specific reactions from growth of different mould species in different hygrothermal environments . By consKey to understanding the potential degradation is an understanding of the relationship between the relative humidity, temperature, and the moisture content of the material. The sorption isotherm describes the direct relationship between the moisture content of hygroscopic materials and the surrounding RH environment . Hedlin The equation cannot convert the surrounding RH environments to moisture contents without knowing air temperatures. The air temperature can change rapidly in straw bale walls in real conditions which results in a restricted use of the equation in predicting the moisture content of straw bales in walls. Str\u00f8mdahl used controlled climatic chambers to investigate the water absorption properties of four plant fibres in different temperatures . The sorn\u2019 from 44 to 54 to indicate a closer prediction of the sorption isotherm of straw in diurnal situations [Yin et al. further tuations .To assess the possibility of degradation induced by the high RH and high temperature conditions, an experimental investigation was designed to represent a typical wall build-up. The experimental results will be compared with onsite visits of one experimental building constructed in Changchun to access the degradation potential of straw within straw bale walls in northeast China.Changchun is the capital city of Jilin province and the The monthly humidity levels are from 63% to 72% in summer during which the highest temperature appears. The climatic features of Changchun represent three typical climatic characteristics in the severe cold region in China: Firstly, both the air temperature and humidity are at high levels in summer months. The daily high temperatures are around 30 \u00b0C in the summer months. Due to the features of the warm (humid) continental climate, rainfalls are expected mostly in summer months which lead to high air humidity. Secondly, as the lowest rain potentials and air humidity are expected in spring months, the spring can be identified as a \u201cdry\u201d season in northern China. However, as air temperatures begin to rise above freezing in March, melting snow increases humidity levels in the severe cold region. Comparing the air humidity levels in April and May, the monthly average humidity of March is significantly higher than the one in April and May. Thirdly, the air temperatures are expected to be below freezing during the whole winter months in the severe cold region and the highest monthly air humidity levels present during the same period. In contrast, the high air humidity levels do not result in the humid environment inside and outside buildings . As the An experimental straw bale single-story bungalow with a double pitched roof was constructed in the southeast of Changchun . 
The layout of the building is similar to the existing residential building in rural regions of Northeast China. The structural frame and foundation of the experimental building are made of steel reinforced concrete, which is conventional in the area familiar to local builders. For the same reason, the building is designed to have a cold roof, thus the insulation layer in the roof is laid beneath the roof frames and above the ceiling. The experimental building is constructed in an open field on the campus of Jilin Jianzhu University. The building is oriented on cardinal directions and there are no structures or other obstructions around the building. As the building is for experimental purposes only, there is no heating system included in the building.There are two bale stacking methods used to examine the impact of straw orientation has on various properties, including durability and hygrothermal variation within the experimental building. The west flank of the building has laid-flat straw bales and the on-edge bales are used in the east flank . Due to The investigation of the degradation potential of straw uses a climate-controlled chamber to replicate the summer hygrothermal environments in Changchun. The model of the climatic chamber is DY110(C) and can control the temperature to \u00b10.1 \u00b0C. Makhlouf et al., demonstrated the importance of measuring materials under representative environmental conditions within the lab , so the Straw bales and lime render are constructed in three transparent boxes to replicate walling constructions of straw bale buildings . The ricThe specimens were initially placed in a controlled climatic room (80% RH @ 20 \u00b0C) for one week to cure the lime render. The cured specimens are then placed in low temperature (40 \u00b0C) oven for one week to reach lower than 85% initial RH @ 30 \u00b0C before placing specimens in the climatic chamber. During the 12-week experimental period, the conditions of straw are visually checked once a week at the beginning of the four weeks and once a day for the following eight weeks.Due to different moisture adsorbing speeds of the straw, as demonstrated by Yin et al. , the limThe visual checks of the specimens do not identify any recognisable straw degradation both during the experimental process and at the end of the experiments . The surThe experimental results show that straw does not exhibit signs of notable degradation in the potential high hygrothermal environment in the severe cold regions of China. The property of low degradation potential of straw at high temperature has specific importance in building straw bale constructions in the severe cold region. Due to the low degradation potential of straw in the high hygrothermal environments behind lime render layers in the region, the straw bales in the buildings can be constructed in the summer months with a similar degree of degradation in other months in the climatic region.To examine the straw conditions behind the render in the experimental building, eight positions were exaFrom the onsite investigation, the thickness of the render did not visibly impact the conditions of the straw, with even the thinnest render (40 mm) seeming to be sufficient. The thickness of lime rendering varies significantly from the design thickness , with thThe monitored results compare favorably to the isothermal models as presented by Lawrence et al. 
and Yin While the overestimations of straw degradation by the isopleth model have been discussed in existing research, there has been little explanation for the inaccurate predictions of the isopleth model . In thisThe active pH range of the acidogenic and acetogenic bacteria is 6\u201310 and the methanogens have a smaller allowable pH range (7.5\u20138) . With thFirstly, the lime render would provide an unfavourable environment for the anaerobic decomposition of straw. The major component of lime is calcium hydroxide, which is a highly alkaline material . The pH Secondly, the calcium hydroxide reacts with the intermediates of the anaerobic digestion of straw. Calcium hydroxide requires carbon dioxide in the chemical reaction of achieving calcium carbonate during the curing stage of lime-based rendering, and acetic acid is also neutralised by the high pH environment provided by the calcium hydroxide. As a result, the lime render reduces the intermediates of the anaerobic decomposition of straw, thereby increasing the durability of straw bale walls. The effectiveness of lime rendering in increasing the durability of straw bale walls against anaerobic degradation remains uncertain .The degradation between the lime render and the straw bales indicates the effect of aerobic degradation of straw. Due to the relatively high breathability of the lime render, the oxygen concentration behind the lime render would not be as low as the one in the straw bales. As a result, the aerobic degradation happens in the area behind the lime render in straw bales. However, due to the alkaline environment provided by the lime render, the degradation of straw in contact with the lime render is not serious. The degradation of straw was identified 2\u20133 cm behind the lime render. Because the oxygen level inside the straw bales is much lower than in the area adjacent to the lime render, the degradation does not penetrate the straw bales . HoweverThe results of this research show moderate concerns of straw degradation between straw bales and the lime render regarding the climatic features in northern China. The experimental investigation shows that the hot and humid summer has insignificant impacts on the durability of straw bales within straw bale walls. The research has monitored a full-scale building in northeast China and compared it to laboratory scale results. This has resulted in a lower expectation of rice straw degradation inside straw bale walls than suggested by existing knowledge on the issue. In the small-scale, controlled conditions, rice straw showed no notable degradation with either bale alignment at 95% RH @ 35 \u00b0C for 12 weeks. The full-scale building investigation identified limited degradation behind the render, whereas rice straw is in good condition with the protection of lime render and deep inside the walls. Due to the relatively high oxygen levels behind the lime render, the aerobic degradation is far more rapid than the anaerobic degradation, and the straw conditions against the lime render would be a concern pertaining to the durability of straw bale walls. Therefore, the isopleth models are suitable for predicting the straw degradation behind the rendering layer whereas the unsaturated isothermal model is more suitable for predicting degradation inside straw bale walls.The work of this research builds on the understanding of the application of existing predicting models of straw degradation inside straw bale walls in the warm (humid) continental climate.
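To make the neutralisation argument above concrete, the two reactions implied by the discussion of render carbonation and acetic-acid buffering can be written as follows (standard lime chemistry, added here as an illustrative sketch rather than equations taken from the original study):

\[ \mathrm{Ca(OH)_2 + CO_2 \longrightarrow CaCO_3 + H_2O} \]
\[ \mathrm{Ca(OH)_2 + 2\,CH_3COOH \longrightarrow Ca(CH_3COO)_2 + 2\,H_2O} \]

The first reaction consumes carbon dioxide as the render cures to calcium carbonate, and the second neutralises acetic acid, one of the intermediates of the anaerobic digestion of straw, which is consistent with the limited degradation observed against the lime render.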
The research identifies a limited degradation risk of rice straw bales behind a lime-based render layer with both the flat laying and the on-edge laying methods in the climatic conditions of the warm (humid) continental climate. Further, straw bale buildings in this area will benefit from the render construction with high properties of moisture transmittance and low breathability. The results of the research justify the feasibility of straw bale building in resisting degradation with the climatic features of the warm (humid) continental climate. The impact of this research will be the growth in low-carbon energy efficient straw bale construction with confidence pertaining to its long-term durability characteristics."} +{"text": "Extracellular vesicles are evolutionarily conserved nano-sized phospholipid membraned structures released from virtually all types of cells into the extracellular space. Their ability to carry various molecular cargos from one cell to the other to exert functional impact on the target cells enables them to play a significant role in cell to cell communication during follicular development. As the molecular signals carried by extracellular vesicles reflect the physiological status of the cells of origin, they are expected to mediate any effect of environmental or metabolic stress on the follicular cells and the growing oocyte. Recent studies have evidenced that reproductive cells exposed to various environmental stressors (heat and oxidative stress) released extracellular vesicles enriched with mRNA and miRNA associated with stress response mechanisms. Moreover, the metabolic status of post-calving cows could be well-reflected in the follicular extracellular vesicles' miRNA profile, which signified the potential role of extracellular vesicle molecular signals in mediating the effect of metabolic stress on follicular and oocyte development. In the present review, the potential role of extracellular vesicles in mediating the effect of environmental and metabolic stress in various reproductive cells and oocytes is thoroughly discussed. Moreover, considering the importance of extracellular vesicles in shuttling protective or rescuing molecular signals during stress, their potential usage as a means of targeted delivery of molecules to mitigate the effect of stress on oocytes is addressed as the focus of future research. Recurrent environmental and metabolic stress poses a significant risk of disruption to reproductive physiology. Over the past five decades, the intensive selection practices of dairy cows for higher milk yield have resulted in tremendous success in increasing the net milk yield. However, the increase in milk production has resulted in a concomitant reduction in fertility traits , which ipathways . ExperimDuring the course of folliculogenesis, continuous bi-directional communication between the oocyte and its encircling cumulus cells, granulosa cells, and theca cells, to exchange oocyte and somatic cell factors, is indispensable for the ovulation of a developmentally competent oocyte that can undergo fertilization and the processes of embryogenesis . The gapThe formation of an antrum cavity filled with a serum-like fluid exudate called the follicular fluid is the defining phenomenon of the later stages of mammalian follicular development .
The folExtracellular vesicles (EVs) are evolutionarily conserved nano-sized, phospholipid bilayer membraned structures of varying sizes released from virtually all types of cells through the exocytosis process into the extracellular space . EVs arein vitro oocyte maturation . The CEEturation . In thatTo date, the role of EVs in shuttling the CEEF between the oocyte and the surrounding somatic cells is not reported. However, reports on the differential effect of EVs supplementation from the small and large follicles on cumulus expansion could indicate the potential role of EVs in shuttling the CEEFs between the ovarian somatic cells and gamete and vise versa . NeverthDue to its enclosed microenvironment, the follicle is convenient to study the EV-signaling mechanism, as the source of the EVs and recipient cells can easily be pinpointed. Another advantage of the follicular microenvironment is the remarkable stability of the RNA molecules in the follicular fluid and other reproductive biological fluid, irrespective of the higher nuclease activity, which could be attributed to the encapsulation in EVs . This maAmong the determinant factors that contribute to the decline of female fertility is exposure to both environmental and metabolic stresses. Among the environmental stressors, oxidative and heat2O2 has a longer cellular half-life and can enter into the nucleus . One of oduction . The ext process . Interes process .2O2 are enriched with NRF2 and antioxidant enzymes (CAT and TXN1), signifying the fact that EVs could partly reflect the cellular stress conditions considering the presence of stress-associated transcripts both in the cells and the released EVs to bovine granulosa cells at a dose as low as 150 mM induced apoptosis and reduced proliferation of cells, which signifies the toxic effects of NEFA accumulation during the post-partum period on the follicular cells in bovine (The intensive selection for high milk yield in the past decades has resulted in a significant increment in the amount of milk per lactation per cow. However, a concomitant reduction in the fertility traits of high yielding dairy cows was also observed . The decn bovine .We recently examined the miRNAs content of EVs derived from follicular fluid of cows under divergent metabolic status post-calving and results showed a massive down-regulation of miRNAs in follicular fluid EVs of cows under NEB . Based oOne of the fundamental aspects of EVs-mediated transfer of bioactive molecules is the balance between the molecules remained in the cells and those released into the extracellular space. EVs released into the body fluid could indicate the level of intracellular hemostasis . Similarin vitro production of embryos, which arises from the various stress-inducing factors under in vitro environment including the oxygen tension and culture media constituents compared to their in vivo counterparts.The importance of EVs as molecular cargo in mediating mammalian follicular development has been characterized in several reproductive biofluids. Nevertheless, the functional role of these EVs and their cargo molecules are not fully understood. Considering the important roles of EVs in shuttling the protective and rescuing signals in follicular cells, it would highlight the potential usage of EVs as a means of molecule delivery, which could be utilized for future applications to mitigate the effect of stress on oocytes and embryo development. 
The molecular characterization of the cargo of EVs with a potential impact on oocyte and embryo development could lead to the discovery of molecular markers for the development of stress-associated infertility treatment strategies. Therefore, characterizing the content of EVs released from granulosa cells and oviductal epithelial cells after exposure to environmental and metabolic stress could provide useful insight about the survival mechanisms of reproductive cells and possible usage of these EVs as supplementation into the oocyte maturation and embryo culture medium to enhance stress-coping mechanism during oocytes maturation and embryos development. This will be relevant to address the issue of the qualitative and quantitative decline in the outcome of the SG and DT: literature searches. SG: writing. AA: illustration. DT, AA, AG, and RP: editing. DT, AG, and RP: approving and submission. All authors contributed to the article and approved the submitted version.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "A new approach for droplet coalescence in microfluidic channels based on selective surface energy alteration is demonstrated. The proposed method involves patterning the surface of cyclic olefin copolymer (COC), a hydrophobic substrate attached to a polydimethylsiloxane hydrophobic microchannel, with graphene oxide (GO) using standard microfabrication techniques. Surface wettability and adhesion analyses confirmed the enhancement of the COC surface energy upon GO patterning and the stability of the GO film on COC. Three representative cases are illustrated to demonstrate the effectiveness of the method on the coalescence of droplets for different droplet flow regimes, as well as the effect of changing the size of the patterned surface area on the fusion process. The method achieves droplet coalescence without the need for precise synchronization. Coalescence of droplets in microfluidic systems offers many advantages such as high mixing rates, continuous separation of multiphase systems, and creating reaction-controlled nanoliter-sized individual reactors ,2,3. TheDroplets coalescence techniques in microfluidic channels can be classified into active and passive techniques. Active techniques utilize an external field to generate energy that destabilizes the interfaces of adjacent droplets leading to droplet fusion ,3. Such Recent studies of geometrically induced coalescence included inducing droplet coalescence in the merging zone of Y- and T-junction channels ,16, and In the present paper we report a new approach for droplet coalescence based on selective alteration of the surface wettability of the microfluidic channel using graphene oxide (GO). This approach involves the fabrication of a polydimethylsiloxane (PDMS) microchannel on a hydrophobic planar cyclic olefin copolymer (COC) substrate patterned with hydrophilic graphene oxide (GO) using standard microfabrication techniques.The patterning of COC substrates with GO was achieved using plasma-enhanced lift-off method ,21. BrieThe surface energy enhancement of the COC substrate was assessed by measuring the static contact angle of a water droplet on the COC substrate with and without GO deposition. The surface energy enhancement was further investigated by measuring the contact angle for different concentrations of GO dispersion deposited on COC wafers. 
Five GO dispersions were prepared with concentrations of 2, 4, 6, 8, and 10 mg/mL and were deposited on plasma-treated COC wafers using a spin coater at 4000 rpm. The stability of the GO film on the COC wafer was also studied using the JIS K-6744 boiling water test. A COC wafer was coated with GO at a concentration of 4 mg/mL, as discussed above, and was immersed in boiling water for an hour. Optical microscopic images were taken at several locations of the COC wafer before and after the test. The ability to selectively enhance the surface energy of COC by GO patterning is used to control the wettability inside microfluidic systems to achieve droplet coalescence. The droplet coalescence mechanism observed in the proposed device is composed of three steps: trapping, fusion, and detachment of the merged composite droplet. The coalescence of water droplets using GO bands that are patterned perpendicular to the flow or at an angle along the channel is investigated. The succeeding droplets continue to coalesce with the merged droplet as long as the adhesion forces, due to the patterned enhanced surface energy of the surface, dominate the viscous drag force of the fluid. As the magnitude of the viscous drag force exceeds the surface forces, the merged droplet detaches from the GO pattern. Several case studies have been conducted to examine the effect of the length and the orientation of the GO pattern on (1) the coalescence of droplets exhibiting different flow regimes, and (2) the detachment mechanism of the merged droplet. Three representative cases will be illustrated. The first case demonstrates the detachment process of the merged droplet created from disc or pancake-shaped droplets and its relation to the number of coalescing droplets, the second case demonstrates the coalescing of two slug droplets over a narrow band of GO (~10 \u00b5m), and the third case illustrates the effect of patterning GO for coalescing droplets and directing the motion of the merged droplet to a specific path.Disc-shaped droplets are the ones confined between the top and bottom sides of the channel with a size smaller than the width of the channel. Controlling the flow and the coalescence of such droplets permits the creation of microdroplet reactors, where merging droplets of different components is necessary at a selected site in the microchannel.Theoretically, the detachment of the merged droplet in this regime is initiated as the viscous drag force exerted by the continuous phase overcomes the force due to the enhanced surface energy . In the ed phase ) increasFrom the discussion above, the number of merging droplets depends on the competing forces of the increasing viscous drag force and the surface tension on the merged droplet. For example, an increase in the continuous flow velocity results in the merging of a smaller number of droplets compared to a lower velocity of the continuous phase. Similarly, merging of droplets with large diameters will result in the merging of a fewer number of droplets compared to merging of droplets with smaller diameters. The second case demonstrates the coalescing of two slug droplets over a narrow band of GO (~10 \u00b5m). Slug droplets are droplets that are confined by the four walls of the channel and that have a length that is mainly affected by the phase flow rate ratio of the inlet streams at the droplet generation stage .
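A rough order-of-magnitude sketch of the competing forces described above can be written as follows; the symbols and scalings are illustrative assumptions for a confined droplet on a wettability-patterned surface, not quantities reported in this work:

\[ F_{\mathrm{adh}} \sim \gamma\, w_{\mathrm{GO}} \left( \cos\theta_{\mathrm{GO}} - \cos\theta_{\mathrm{COC}} \right), \qquad F_{\mathrm{drag}} \sim C\, \mu_c\, U\, D, \]

where $\gamma$ is the interfacial tension, $w_{\mathrm{GO}}$ the width of the GO band, $\theta_{\mathrm{GO}}$ and $\theta_{\mathrm{COC}}$ the contact angles on the patterned and bare substrate, $\mu_c$ and $U$ the viscosity and velocity of the continuous phase, $D$ the effective diameter of the merged droplet, and $C$ a confinement-dependent prefactor. Coalescence continues while $F_{\mathrm{drag}} < F_{\mathrm{adh}}$; because $F_{\mathrm{drag}}$ grows with both $U$ and $D$, the balance $F_{\mathrm{drag}} \approx F_{\mathrm{adh}}$ sets the approximate size, and hence the number, of droplets that merge before detachment, in line with the trends described above.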
These fThe third case illustrates the effect of patterning GO for coalescing droplets and directing the motion of the merged droplet to a specific path. The coupled functions of coalescence and steering allow the construction of microfluidic systems that provide controlled reaction networks with no dispersion between the steered droplets . A narroIn this paper we present a new approach for droplet coalescence in microchannels by patterning the surface energy of a hydrophobic substrate with GO using standard microfabrication techniques. This approach is based on selective patterning of the surface energy of a hydrophobic surface for trapping dispersed droplets prior to their fusion. The GO patterned films on the COC were found to be stable at high thermal stresses without evidence of peeling off or deformation. Three representative cases were illustrated to demonstrate the effectiveness of the method on the fusion of droplets exhibiting different flow regimes and having different initial diameters, as well as the effect of changing the size of the patterned surface area on the fusion process. The results showed that the coalescence process depends on initial wetting of the GO with a thin film of water molecules prior to the trapping step. In addition, simultaneous coalescence and transport of the droplets inside the channel was achievable by manipulating the viscous drag forces in relation to the surface adhesion induced by the GO pattern. We believe this method will be useful for various micro-scale processes such as liquid\u2013liquid phase separation, micro-extraction, and reaction networks."} +{"text": "Neuropelveology is a new specialty in medicine that has yet to prove itself but the need for it is obvious. This specialty includes the diagnosis and treatment of pathologies and dysfunctions of the pelvic nerves. It encompasses knowledge that is for the most part already known but scattered throughout various other specialties; neuropelveology gathers all this knowledge together. Since the establishment of the International Society of Neuropelveology, this discipline is experiencing an ever-growing interest. In this manuscript, the author gives an overview of the different aspects of neuropelveology from the management of pelvic neuropathic pain to pelvic nerves stimulation for the control of pelvic organ dysfunctions and loss of functions in people with spinal cord injuries. The latter therapeutic option opens up new treatments but also widens preventive horizons not only in the field of curative medicine (osteoporosis and cardio-vascular diseases) but also in preventive medicine and anti-ageing, all the way to future applications in the \u201cMars mission\u201d project. In the 1990s, laparoscopy was introduced in the surgical treatment of pelvic cancers and deep endometriosis. The challenge then was to perform at least as well as in radically open surgery. The introduction of video-endoscopy allowed for perfect vision and a considerable improvement in the ergonomics of the laparoscopic surgeon, which was necessary for more complicated and longer procedures. Laparoscopic pelvic surgery has thus become an extensive and radical surgery with the consequence of the appearance of postoperative pain too often unexplained and neglected as well as often irreversible functional morbidities. Patients who presented to neurourologists and neurologists did not find much help, only neuroleptic treatments but without any effort to research or treat the cause of the symptoms. 
The term \u201cminimally invasive surgery\u201d thus became more and more paradoxical. The only possibility to reduce this morbidity seemed to be the in-depth study of the surgical anatomy of the pelvic nerves and their sparing as successfully as possible during interventions. However, although topographical anatomy is extensively described in anatomy textbooks, the operative functional anatomy of the pelvic nerves was, on the contrary, almost completely non-existent.Incidences of pelvic nerves pathologies are widely underestimated because of a lack of awareness that such lesions may exist, a lack of diagnosis and acceptance and a lack of declaration and reporting of such lesions. The most probable reasons for the omission of the pelvic nerves in medicine are the complexity of the pelvic nerve system, the difficulties of etiologic diagnosis and\u2014probably the overriding reason\u2014the limitations of access to the pelvic nerves for neurophysiological explorations and neurosurgical treatments. Neurosurgical procedure techniques are well established in nerve lesions of the upper limbs but pelvic retroperitoneal areas and surgeries to the pelvic nerves are still unusual for neurosurgeons. Few open-surgical approaches to the sacral plexus have been described by neurosurgeons for the treatment for traumatic pelvic plexopathies but these approaches are laborious and invasive, offer only limited access to the different pelvic areas and expose patients to the risk of severe vascular complications. Techniques of nerve neuromodulation to control pelvic pain syndromes and dysfunctions are for the same reasons limited to spinal cord and sacral nerves roots stimulation that considerably restrict their indications and effectiveness.The use of the endoscope in combination with neurofunctional surgical procedures to the pelvic nerves proved to be a decisive advantage in this development ,2,3,4 anNeuropelveology presents three consecutive aspects; the diagnostic stage followed by the therapeutic stage and the post-therapeutic follow-up of the patient. It covers four major areas:The diagnosis and treatment of pelvic neuropathic pain with particular new techniques of laparoscopic pelvic nerves decompression and neurolysis.The treatment of pelvic organ dysfunctions, in particular the stimulation of the genital nerves therapy).The technique of laparoscopic implantation of neuroprothesis to the pelvic nerves (LION procedure) for the recovery of the loss of functions in people with spinal cord injuries.The stimulation of the pelvic autonomic nervous system for the prevention and/or treatment of general medical conditions such as osteoporosis, some cardio-vascular disease or control of sarcopenia (process of ageing).The diagnostic stage uses its own instruments and an anamnesis covering many aspects from gynecology, urology, orthopedics, pelvic vessel pathology and psychology of the chronic patient and parapleology. The clinical examination combines the examination of the pelvic organs and their functions, the neurological examination of the musculoskeletal system with a neuropelveological examination and the palpation of the pelvic nerves by the vaginal or rectal route . As somaNeuropelveolgy encompasses various medical treatments and surgery of the pelvic nerves. 
The latter includes neurosurgical techniques ranging from decompression, neurolysis, reconstruction and even nerve resection to pelvic neurofunctional surgery.Chronic pelvic pain (CPP) is a common condition involving multiple, organ-specific medical specialties, each with its own approach to diagnosis and treatment. Its management requires a knowledge of the interplay between pelvic organ functions and neurofunctional pelvic anatomy and also of the neurological and psychological aspects. However, no current specialty field takes this approach into account. Neuropelveology is an emerging discipline focusing on the pathologies of the pelvic nervous system on a cross-disciplinary basis .The neuropelveological approach to pelvic neuropathies is primarily diagnostic with the application of neurological principles and an absolute knowledge of the pelvic neurofunctional anatomy. Patient history is the key, with a focus not simply on the pain location but also on pain history, irradiation, aggravating factors, and vegetative and somatic symptoms. The first step is to evaluate whether the pain is visceral or somatic .Visceral pain caused by a lesion of the hypogastric plexuses is recognized by the diffuse nature of pelveo-abdominal pain, irradiations proximal to the lower back and multiple vegetative symptoms including malaise, oppression, syncope, irritability, nausea, vomiting and fatigue. The clinical examination focuses on specific clinical details for vegetative disorders such as pupil dilation, salivation inhibition and tachycardia. In somatic pain, it is essential to adopt a neurological way of thinking since the location of the pain and the location of the etiology are mostly different. Somatic pain is located superficially at the skin and is described as allodynia or an electrical shock with a very specific location, caudal irradiations to the genito-anal areas or to the lower extremities (dermatomes) and a lack of vegetative symptoms. The neuropelveological workup scheme follows these six steps:(1)Determination of the nerve pathways involved in the relay of pain information to the brain.(2)Determination of the location of the neurological irritation/injury .(3)Determination of the type of nerve(s) lesion: irritation vs. injury (neurogenic neuropathy).(4)Neurological confirmation of the suspected diagnosis by clinical examination with, in particular, the transvaginal or transrectal palpation of the pelvic somatic nerves with the reproduction of the trigger pain and Tinel\u2019s sign (blockade).(5)Determination of a potential etiology based on patient history and diagnostic imaging.(6)Corresponding etiology-adapted therapy.It is absolutely crucial to understand which nerves are involved in the pain and then to assess whether it is a nerve irritation secondary to compression or whether it is an axonal nerve lesion. In the first instance, the neuropelveological treatment is based on laparoscopic exploration/decompression; in the second, on the neuromodulation of the affected nerves.Sacral radiculopathy by vascular or fibrotic entrapment [10,11].Compression of the sacral plexus by hypertrophy or atypical insertion of the piriform muscle.Deep infiltrating endometriosis of the sacral plexus and the sciatic nerve [12].Tumor of the sacral plexus [13,14].The intervention in the area of the pelvic somatic nerves, which is covered by large vessels and a dense network of lymph nodes, has hitherto been hindered by the lack of minimally invasive surgical methods.
However, developments in video-endoscopy enable the exploration of the retroperitoneal pelvic space with access to the lumbosacral plexus and possibilities for nerve decompression and neurolysis. The most frequent aetiologies treated in neuropelveology are:This endoscopic approach further allows in the case of an axonal lesion for the laparoscopic implantation of neuroprosthesis (LION procedure) where electrodes are selectively placed in contact with the injured pelvic nerves for the possible control of neuropathic pain .Post-therapy patient follow-up for pain management is essential. In nerve neuromodulation, the stimulation parameters must be calibrated at regular intervals. After laparoscopic nerve decompression, neuropathic pain first significantly increases while improvement usually does not set in until eight months after the operation. The follow-up of these patients is essential in order to adjust the medical treatment and to treat the pain-memory as successfully as possible. The latter, however, is much more difficult to direct.Various sites have been used for the implantation of electrodes to the pelvic nerves to treat pelvic organ dysfunctions. Sacral nerve stimulation was the first technique for pelvic nerves stimulation that typically involves the electrical stimulation of the nerve via a dorsal transformational technique of implantation. Sacral nerve stimulation (SNS) and pudendal nerve stimulation evolved as a widely used treatment for an overactive bladder (OAB) but does not completely resolve symptoms in the majority of patients. Both techniques are still unusual for most gynecologists so that the field of pelvic nerve stimulation is still extremely restricted in gynecology. There is definitively a need for a more suitable alternative for neuromodulative treatments; methods that cannot only be reserved for experts in this field but for all gynecologists dealing in daily practice with patients suffering from functional disorders of the bladder. This is why the LION procedure of the sacral plexus ,19 and tGenital nerves stimulation (GNS) is the surgical procedure developed for the stimulation of the DNP, an implantation technique adapted to the most classical surgical approach in gynecology, the vaginal approach. The procedure consists of two phases: a preoperative non-surgical test-phase and a second phase involving the surgical implantation of the neuroprothesis. In contrast to the classical technique of stimulation, the GNS-test-phase is the only one which does not require any interventional procedure. Due to the fact that the genital nerves are located just a few millimeters below the skin\u2019s surface, test-stimulation can be obtained using skin surface electrodes .The effect of the stimulation can be tested by the patient in their daily, family and professional environment or alternatively at the practice under urodynamic testing or, if required, other electrophysiological testing.\u00ae NeuroGyn AG, Baar, Switzerland) with a spear from below, behind the pubic bone according to the classical tension-free vaginal tape (TVT) procedure: A sagittal incision of about 2 mm in length is made approximately 1 cm below the external urethral meatus. The curve needle driver is inserted into the incision. The tip is oriented at an angle of 5\u201310\u00b0 from the midline towards the symphysis. The inserter tip is approximately in the 11 o\u2019clock position (1 o\u2019clock on the right side). 
The curve needle driver is advanced, contacting the inferior edge of the pubic ramus, until it transfixes the urogenital diaphragm, enters into the retropubic space and comes out through the skin in the suprapubic area . The first step of the procedure consists of the introduction of a hollow curve needle applicator and emerges through the first vulvar incision. After removing the spear, the electrode cable is inserted retrograde into the applicator again a\u2013c.To use the hollow needle driver for the retrograde introduction of the lead electrode enables the optimal placement of the lead electrode to the genital nerves without the need for any dissection, which, in turn, reduces considerably the risk of bleeding and nerve injury. The introduction of the curve needle driver from below belongs to standard urogynecology (TVT) procedure. As the DNP perforates the perineal membrane laterally to the external urethral meatus at an average distance of 2.7 cm (2.4\u20133.0 cm) and then runs along the bulbous spongy muscle for a distance of 1.9 cm (1.8\u20132.2 cm) before penetrating the pillars of the clitoris , the secThe last step is then the connection of the lead electrode to the generator, which is finally fixed behind the pubic bone through a suprapubic mini-laparotomy. The fixation of the generator behind the pubic bone protects from external traumas and dislocation.No X-ray screening, neurophysiological monitoring or stimulation with (Electromyography) EMG electrodes are mandatory during the procedure for a proper implantation. Due to the fact that the presented procedure does not need two surgical procedures for both the test and the final implantation but only one for the final implantation, the presented protocol allows a considerable cost reduction in comparison with the usual procedures for sacral or pudendal nerves stimulation.The endoscopic approach allows in case of axonal lesion or dysfunction of the nerves the selective laparoscopic implantation of neuroprothesis (LION procedure) for electrical stimulation of the nerves .This procedure has been used for the treatment of nerve damage and pelvic organ dysfunctions as reported previously but probably the most impressive indication of this technique is the implantation in people with spinal cord injuries for the recovery of some walking functions . In 2006Video: LION procedure in SCI The crucial discovery we made with the LION procedure in people with SCI was undoubtedly the fact that some patients experienced enough recovery of supra-spinal control for some leg movement or even standing and walking ,29. In tn = 8; AIS B: n = 9; AIS C: n = 2) could walk >10 m (67.8%); eight of them only at the bar (28.5%) and eleven of them with the aid of crutches/walker and without braces (40%).26 patients could get to their feet when the pacemaker was switched on (92.8%). Five patients could walk <10 m (17.85%) at the bar . NineteeThe precise mechanism at work in people with SCI to recover walking functions after the LION procedure is still unknown. 
There is increasing evidence to suggest that neuromagnetic/electrical modulation promotes neuroregeneration and neural repair by affecting signaling in the nervous system but our findings suggest that the information signals to the brain might use not only anatomical nerve pathways but also functional pathways activated by a continuous low frequency stimulation of the low-motor neurons below the spinal cord lesion.Beyond the psychological impact and the gaining of some autonomy, the benefits of locomotion include improvement of contractures, prevention of deep venous thrombosis and oedema and amelioration of spasticity . StandinThe LION procedure to the pelvic somatic nerves has been further reported for treating urinary dysfunctions and improving locomotion in multiple sclerosis patients .The development of new technologies to assist paraplegics with their common problems associated with inertia when confined to a wheelchair may find revolutionary applications in preventive medicine and even in the world of space missions in the future. The LION procedure enables a continuous and passive electrical nerve stimulation (ENS) without the need for an external stimulation system, while the neuroprothesis is located within the body: the in-Body-ENS. This capability of continuous in-body electrical nerve stimulation may open the door to a whole new area of humanity in which implanted electronics may help the human body to a better performance and a longer life. The process of ageing, also called sarcopenia, is characterized by muscle atrophy along with a reduction in muscle tissue quality characterized by such factors as the replacement of muscle fibers with fat and the degeneration of the neuromuscular junction leading to a progressive loss of muscle function and frailty. Prevention of the aging process mainly focuses on the control and treatment of such a muscle atrophy. Several therapies have been proposed for preventing the aging process such as mental activity, muscle training and high-protein diet. A crucial factor in this is sustaining a high individual strength capacity: The elderly need strength training more and more as they grow older to stay mobile for their everyday activities. The crucial factor in maintaining strength capacity is an increase in muscle mass. As continuous passive stimulation of the pelvic somatic nerves enables muscle training and may reduce the process of muscle atrophy, the in-Body-ENS may become an option in the future for slowing down the aging process by preserving body muscle mass. This technique may be appropriate in elderly people who are not capable of active muscle training because of pain, motoric limitations or subcortical pathologies but also in people confined to bed for long periods of time (prophylaxis of decubitus).As sympathetic trunks travel downward outside the spinal cord and first anastomose to the sacral plexus, which build the sciatic nerve, continuous low frequency/low energy sciatic nerve stimulation FES) permits neuromodulation of the sympathetic nervous system of the lower extremities and of the bottom. 
Due to the fact that there is further evidence of the role of the sympathetic innervation of bone tissue and of its role in the regulation of bone remodeling in humans, sympathetic nerve stimulation obtained by stimulation of the pelvic somatic nerves might also open new techniques for the prevention of osteoporosis not only in people with SCI as demonstrated in our study but also in elderly people ,35.In addition to this, the in-body-ENS may also find revolutionary applications in the world of space missions. Space is a dangerous, unfriendly place that requires daily exercise to keep muscles and bones from deteriorating. Calf muscle biopsies before flight and after a six month mission on the International Space Station show that even when crew members did aerobic exercise for five hours a week and resistance exercise three to six days per week, muscle volume and peak power both still deteriorated significantly. The in-body-ENS, by contrast, may allow muscle mass to be maintained even whilst the astronaut is at rest and provides an extremely effective and timesaving strength training program. During space flight, crew members also lose bone density; the calcium that is released ends up in the urine, which contributes to an increased calcium-stone forming potential. If the stone completely blocks the tube draining the kidney, the kidney could cease to function with catastrophic even life-threatening consequences for the astronaut. Due to the excruciating pain, affected astronauts could become incapacitated and missions may have to be aborted. Due to the fact that stimulation of pelvic sympathetic nerves may reduce this process of osteoporosis, as shown in our paraplegic study, in-Body-ENS may present a potential prophylactic for kidney stone formation in microgravity."} +{"text": "Following publication of the original article , it camePlease find the corrected names in the author list of this correction.The names have since been corrected in the published article."} +{"text": "Tactile intelligence has become increasingly important in the development of intelligent robots capable of dexterity skills that are comparable to those in humans. Recent advances in tactile sensors have fostered the implementation of artificial tactile sensation in robots, where tactile intelligence plays a key role in translating physical signals from tactile sensors to tactile percepts. Amid many possible approaches to the development of tactile intelligence, it is reasonable to translate the principles of information processing of the human nervous system into artificial robot intelligence. For instance, one may create artificial neural networks that mimic somatosensory nervous systems and integrate them with advanced machine learning techniques in order to have robots gain human-like dexterity skills. This Research Topic aims to highlight state-of-the-art research on the implementation of tactile intelligence in robots inspired by neural mechanisms of tactile information processing. It also emphasizes human studies on tactile perception to provide insight on robot learning for object manipulation tasks.Sun and Martius attempt to overcome this limitation by inferring tactile stimulations at virtual contacts from a limited number of strain-gauged sensor data using machine learning algorithms. They demonstrate the feasibility of reconstructing the location and force magnitudes of deformable objects at multiple contact points from sparse sensory configurations. 
They achieve this by leveraging machine learning algorithms for the inference of tactile information, clearly showing how robots gain the ability of inference from limited observations, akin to what human intelligence does.A number of studies contributed to the present Research Topic by providing the latest remarkable findings in tactile intelligence. One of the technical challenges in the implementation of tactile intelligence in robots is ensuring the robots can sense external mechanical stimulations with high fidelity. Unlike the widely distributed mechanoreceptors inside the human hand, the current tactile sensing technology in robotic hands often suffers from a sparseness of sensing points and a lack of spatial resolution. Richardson and Kuchenbecker contributing to this topic investigates more closely such connections between robot tactile sensing and human tactile perceptual attributes. They particularly focus on attribute intensity and perceptual variability in natural human tactile perception, which is not available in the current robot tactile intelligence. They collected haptic adjectives for a number of objects from human subjects as well as robotic tactile sensing data for the same objects. Then they successfully predicted the probability distribution of haptic adjectives from the tactile sensing data of an object using a machine-learning algorithm. The study demonstrates the possibility of modeling both the intensity and variability of human tactile perception using tactile intelligence in robots. This finding will enable artificial tactile intelligence to move closer to the human perceptual system.Another study by Seminara et al. reviewed sensorimotor coupling in the robotic control of hands and fingers with an emphasis on connections between human tactile perception and robotic tactile sensing. In particular, they highlight robotic behavior, goals, and tasks in active haptic exploration. This review offers a comprehensive review of the taxonomy of elements for the closed-loop sensorimotor control of robots and will be of great help to those who seek to design a robot with the ability to interact with various real environments adaptively and intelligently.Tactile intelligence will become increasingly important as robots begin to function in more open-ended environments, where they will have to adapt to the uncertainty of environmental states. In this regard, Beckerle et al. provides a critical review on the role of tactile perception of sensory feedback in the feeling of embodiment while using assistive robotic systems. Throughout the rigorous review, they suggest practical solutions to enhance embodiment by optimizing tactile feedback in human-robot interactions. This review will be of considerable value to those who aim to improve the usability of robotic assistive technology by providing real-time tactile feedback to the user.Closing the loop of motor control with sensory feedback can lead to the cognitive embodiment of external actuators in humans. This is especially important to the use of robots as an assistive technology in a daily life. Seminara et al. discusses the role of tactile intelligence in a closed sensorimotor loop for robotic control of hands and fingers, and the second review by Beckerle et al. deals with how such a sensorimotor loop could induce the feelings of embodiment when using robot-based assistive technology. 
So, the understanding of a sensorimotor loop using tactile feedback in the first review can provide a basis for the realization of embodiment in assistive technology in the second review. Based on these reviews, we believe that the upcoming issue in the field will be developing robotic upper limbs and tactile sensors that are equipped with the sophisticated model of the closed sensorimotor loop and practical applications of these advanced technologies to real-life situations, such as assistive environments.The first review by We believe that all the contributions to this topic will broaden our understanding of the neural underpinnings of tactile perception, foster the development of robots interacting with the world in a more intelligent way, and open a new avenue to integrate multiple disciplines in order to pave the way for next-generation human-robot interactions. We hope that readers will also find in this Research Topic useful insights and promising outlooks for their own research.S-PK edited and reviewed the contributions, and wrote the Editorial. CW and VD edited and reviewed the contributions. All authors contributed to the article and approved the submitted version.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "In an increasingly dynamic, competitive and global environment, organizations in the twenty-first-century are required to accommodate to rapid changes .Lavee and Pindek. The authors focus on delineating the informal resources that are invested as part of OCBs directed toward customers (OCBCs), as well as the costs associated for employees. This study contributes to better understanding OCBCs and the growing trend to \u201cdo more with less in the workplace.\u201dThe negative impact of OCBs is addressed in the qualitative study conducted by Shukla and Kark provide a theoretical framework which examines both positive and negative impacts of prosocial behavior. The authors introduce pro-social behavior as an antecedent of creative deviance and develop a multi-level model of the moderators of this relationship. The model suggests that prosocial motivation can increase creative deviance, which ultimately increases both positive and negative outcomes.Contrary to the former studies, Lavy paper also contributes to a more balanced examination of prosocial behavior. The paper examines the daily dynamics of prosocial behaviors among a group of teachers and focuses on the interplay of daily perceived supervisor and colleague support, OCB, and daily positive and negative emotional experiences. The study presents new findings supporting the dual role of social support and emotions as both antecedents and outcomes of OCB.Reizer et al.'s paper explores the general mechanisms and moderators that explain the bright and dark sides of prosocial behaviors. Overall, findings from their two studies suggest that performing OCB can enhance work-family facilitation (WFF), with the effect being stronger for workers with low avoidance levels. However, OCB can be harmful in terms of work family facilitation among individuals who are higher on attachment avoidance.Finally, This special issue offers a more comprehensive perspective of prosocial behavior by providing both the positive and negative sides of the phenomenon. 
Given the increased importance of prosocial research in today's volatile environment, the current set of papers answers critical questions on the direction and strength of the relationship between prosocial behaviors and workplace outcomes. We hope that the current set of papers will also inspire others to further explore these issues and contribute to developing a more integrative model of prosocial behavior in organizations.All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "The lesion of the accessory spinal nerve is often of iatrogenic origin. We report the case of an injury after a right jugulocarotid lymph node biopsy. A 30-year-old patient was referred for the treatment of right cervical lymphadenopathy suspected of tuberculosis. After the intervention and confirmation of the tuberculosis diagnosis, the patient presented functional impotence of the right shoulder and swarming of the right hand. The clinical examination found an active limitation of the shoulder, and a wasting of the upper bundle of the right trapezius muscle and the sternocleidomastoid. The EMG showed axonotmesis of the accessory spinal nerve and the MRI an amyotrophy of the trapezius with denervation edema. Simple rehabilitation was scheduled. Damage of the accessory spinal nerve most often occurs after local surgery. EMG is essential for diagnosis. Rehabilitation is the first therapeutic option. Surgery can be considered if it fails. Surgeons must consider the protection of the accessory spinal nerve in case of cervical lymph node surgery. The lesion of the accessory spinal nerve (ASN) is most often of iatrogenic origin [1]. It is responsible for a pure motor mononeuropathy. We report the case of an injury of the ASN after a cervical lymph node biopsy.A 30-year-old patient with no previous medical history was referred for a right-sided jugular-carotid lymphadenopathy, without fever or deterioration in general condition. A biopsy was performed in the ENT unit, confirming the diagnosis of lymph node tuberculosis. There were no other locations of tuberculosis. A 6-month anti-tuberculosis treatment was, therefore, started. In the postoperative period, the patient began to complain of progressive partial functional impotence in the right shoulder accompanied by swarming in the right hand. The clinical examination showed a clean anterolateral scar on the right side of the neck , fall ofMononeuropathy of the accessory spinal nerve is a rare iatrogenic motor disorder [1]. Idiopathic paralysis of this nerve has been described [2-4]. It affects the upper bundle of the trapezius and the sternocleidomastoid muscle. The lesion generates a muscular weakness of these two muscles, causing a deficit of elevation of the shoulder and of lateral rotation of the cervical spine without repercussions on the rest of the limb musculature. However, we would like to point out that this slight detachment of the scapula in our patient is secondary to the weakness of the stabilizing muscles of the scapula . This detachment is a consequence of the non-use of the limb and functional muscular overload on this musculature. This slight detachment of the scapula should not be attributed to damage affecting the long thoracic nerve.
This damage constitutes the differential diagnosis of the ASN lesion. The swarming sensation reported by our patient may be due to irritation of the brachial plexus during movement. Shoulder drop and muscle imbalance in the shoulder may be exacerbated by excessive stretching of the plexus [3]. The edema found on the MRI probably explains the slight pain on pinching the trapezius. To regain a trapezian function, this patient will be referred to rehabilitation for a protocol to strengthen the stabilizing and elevating muscles of the scapula. In the event of manifest handicap, and of failure of the rehabilitation, a reparative surgery is possible with a neurotisation of the accessory spinal nerve. It must be performed by an experienced surgeon with a perfect understanding of the cervical and scapular anatomy. Several procedures can be followed: the use of bundles of the upper trunk of the brachial plexus ; the use of the lateral pectoral nerve [7]; or the use of the nerve fascicles of the C7 nerve [8]. The best treatment is of course preventive, and surgeons must consider the protection of the accessory spinal nerve in case of cervical lymph node surgery.Mononeuropathy of the accessory spinal nerve is often an iatrogenic condition, but spontaneous rupture is not rare. It requires a careful clinical examination of the shoulder musculature to recognize it. The electromyogram is mandatory. Functional treatment is the main therapeutic option, surgery is possible in case of failure. The best treatment remains the prevention with a necessary protection of the ASN in the lymph node biopsies."} +{"text": "The assessment and prediction of cognitive performance is a key issue for any discipline concerned with human operators in the context of safety-critical behavior. Most of the research has focused on the measurement of mental workload but this construct remains difficult to operationalize despite decades of research on the topic. Recent advances in Neuroergonomics have expanded our understanding of neurocognitive processes across different operational domains. We provide a framework to disentangle those neural mechanisms that underpin the relationship between task demand, arousal, mental workload and human performance. This approach advocates targeting those specific mental states that precede a reduction of performance efficacy. A number of undesirable neurocognitive states are identified and mapped within a two-dimensional conceptual space encompassing task engagement and arousal. We argue that monitoring the prefrontal cortex and its deactivation can index a generic shift from a nominal operational state to an impaired one where performance is likely to degrade. Neurophysiological, physiological and behavioral markers that specifically account for these states are identified. We then propose a typology of neuroadaptive countermeasures to mitigate these undesirable mental states. A study of mental workload is fundamental to understanding the intrinsic limitations of the human information processing system. This area of research is also crucial for investigation of complex teaming relationships especially when interaction with technology necessitates multitasking or a degree of cognitive complexity.Mental workload has a long association with human factors research into safety-critical performance . Forty yThe lineage of mental workload incorporates a number of theoretical perspectives, some of which precede the formalization of the concept itself. 
Early work linking physiological activation to the prediction of performance was formResearch into the measurement of mental workload has outstripped the development of theoretical frameworks. Measures of mental workload can be categorized as performance-based, or linked to the process of subjective self-assessment, or associated with psychophysiology or neurophysiology. Each category has specific strengths and weaknesses and the There are a number of reasons that explain why mental workload is easy to quantify but difficult to operationalize. The absence of a unified framework for human mental workload, its antecedents, processes and measures has generated a highly abstract concept, loosely operationalized and supported by a growing database of inconsistent findings . The absFor the discipline of human factors, the study of mental workload serves two primary functions: (a) to quantify the transaction between operators and a range of task demands or technological systems or operational protocols, and (b) to predict the probability of performance impairment during operational scenarios, which may be safety-critical. One challenge facing the field is delineating a consistent relationship between mental workload measurement and performance quality on the basis of complex interactions between the person and the task. The second challenge pertains to the legacy and utility of limited capacity of resources as a framework for understanding those interactions.In the following sections, we detail some limitations of mental resources and advocate the adoption of a neuroergonomic approach for the The concept of resources represents a foundational challenge to the development of a unified framework for mental workload and prediction of human performance. The conception of a limited capacity for information processing is an intuitive one and has been embedded within several successful models, e.g., multiple resources . But thiFor example, the theory of limited cognitive resources predicts that exposure to task demands that are sustained and demanding can impair performance due to resource depletion via self-regulation mechanisms at the neuron-level is a brain structure often identified as the neurophysiological source of limited resources . The PFCThe existence of information processing resources can also be conceptualized as functional attentional networks in the brain. Michael Posner was the first to pioneer a network approach to the operationalization of resources in the early days of neuroimaging . His infThe capacity of the brain to monitor performance quality and progress toward task goals is another important function of the PFC during operational performance. The posterior medial frontal cortex (pMFC) is a central hub in a wider network devoted to performance monitoring, action selection and adaptive behavior . The pMFThis neuroergonomic approach provides a biological basis upon which to develop a concept of limited human information processing, with respect to competing neurological mechanisms, the influence of neuromodulation in the prefrontal cortex and antagonist directives between different functional networks in the brain. The prominence of inhibitory control coupled with competition between these neural networks delineate a different category of performance limitations during extremes of low vs. 
high mental workload, i.e., simultaneous activation of functional networks with biases toward mutually exclusive stimuli or contradictory directives .The previous sections have highlighted the complexity of those brain dynamics and networks that can introduce inherent limitations on human information processing. On the basis of this analysis, it is reasonable to target neurophysiological states and their associated mechanisms that account for impaired human performance see . This reThe rationale for considering the dimension of task engagement is that performance is driven by goals and motivation . Goal-orArousal makes an important contribution to the conceptual space illustrated in Secondly, attentional states, such as inattentional deafness and blindness, result from the activation of an attentional network involving the inferior frontal gyrus, the insula and the superior medial frontal cortex . These rThirdly, measures of arousal are used to characterize high engagement and delineate distinct mental states within the category of low task engagement . Heart rFinally, behavioral metrics such as ocular behavior can complement the detection of low and high levels of engagement . Hence, These metrics provide some relevant prospects to identify the targeted deleterious mental states for especially for field studies as long as portable devices are concerned. It is worth noting that the extraction of several features and the use of several devices is a way for robust diagnosis. Moreover, contextual information should be considered as well as actions on the user interface and system parameters if available so as to better quantify the user\u2019s mental state.This review has identified some undesired mental states that account for degraded performance . A crucial step is to design cognitive countermeasures to prevent the occurrence of these phenomena. The formal framework that we proposed see paves thms) and located information removal was an efficient mean to mitigate perseveration by forcing disengagement from non-relevant tasks. The first category of neuroadaptive countermeasure consists of triggering new types of notifications via the user interface to alert of impeding hazards. The design of these countermeasures is generally grounded on neuroergonomics basis so that these warning can reach awareness when other means have failed. Following this perspective, The second category of neuroadaptive countermeasure is the dynamic reallocation of tasks between humans and automation to maintain the performance efficacy of the operators . The undThe third and final category aims to warn the users of their mental state and \u201cstimulate\u201d neurological activity in order to augment performance. One of the most promising approach relies on the implementation of Neurofeedback see , mental The following illustration see depicts The three types of neuroadaptive solutions offer promising prospects to mitigate the onset and likelihood of undesirable neurocognitive states. However, they should be delivered in a transparent, meaningful, and timely manner so they are relevant and understood , otherwiper se. In both cases, explanations for performance breakdown are based upon neurological processes, such as dominance of specific neural networks or the heightened activity of specific mechanisms. We propose a two-dimensional framework of engagement and arousal that captures the importance of specific degraded mental sates associated with poor performance. 
The rationale for including the transactional concept of engagement in this scheme is to account for the goal-oriented aspect of cognition. The benefit of including the transactional concept of arousal is to make a distinction between two categories of disengagement, one that is accompanied by high arousal and low arousal (mind wandering) \u2013 and to link this conceptual distinction to known neurophysiological effects (see This paper has argued that the concept of a limited resource provides a limited explanation for the breakdown of operational performance. Our neurophysiological analysis describes a number of additional mechanisms, such as perseveration and effort withdrawal, which do not represent finite resources ects see . NonetheThis neuroergonomic framework encompasses operationali- zations of these undesirable states that can be monitored continuously in an objective fashion. Such considerations eventually lead to propose a typology of neuroadaptive countermeasures and open promising perspectives to mitigate the degradation of human performance. However, to the authors\u2019 very best knowledge, most of the neuroadaptive experimental studies have focused on human-machine dyad situations. We believe that recent research on hyperscanning , physiolAll authors have made a substantial and intellectual contribution to this review.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "The years around menopause are the time that associates not only with hormonal changes but with psychological and social transitions, and previous studies have consistently revealed the relationship between menopause and depression. The present study examined the moderating effect of perceived partner responsiveness (PPR) on the association between menopausal symptoms (MS) and depressed affect (DA). The sample was middle-aged climacteric women from the second wave of Midlife in the United States (MIDUS\u2161). Measurement for MS consisted of the frequency of five symptoms in the past 30 days . PPR was assessed using three items matched the core components of responsiveness . Results revealed that there were significant interactions between menopausal symptoms and PPR . Specifically, the level of elevation of DA in response to MS was smaller in women with higher levels of PPR than in those with lower levels of PPR . According to the region of significance analysis, the coefficients of MS on DA were significant within the -2SD to +2SD range of PPR, but it decreased as the PPR increased. Findings suggest that partners\u2019 careful responsiveness may mitigate the detrimental effects of MS on DA among climacteric women."} +{"text": "Four decades on from the publication of 'The Ageing Enterprise', this paper provides a critical review of the relationship between social theory and social policies for later life. To what extent do current theoretical perspectives in gerontology bear the influence of ideas laid out in that pioneering book? How has the \u2018ageing enterprise\u2019 fared given the dominant ideology of neo-liberalism and the precarious lives faced by people moving through the life course? The paper considers these questions in the context of globalization processes, and the imposition of austerity policies. 
The paper will consider the continuing importance of \u2018The Ageing Enterprise\u2019 by reviewing three main themes: first, assessing the changing relationship between the state and social policy; second, through examining current perspectives within critical gerontology; third, highlighting new forms of empowerment developing amongst older people, and the relationship of these to the values and ideas expressed in \u2018The Ageing Enterprise\u2019. Part of a symposium sponsored by the Women's Issues Interest Group."} +{"text": "Hydrothermal processes modify the chemical and mineralogical composition of rock. We studied and quantified the effects of hydrothermal processes on the composition of volcanic rocks by a novel application of the Shannon entropy, which is a measure of uncertainty and commonly applied in information theory. We show here that the Shannon entropies calculated on major elemental chemical composition data and short-wave infrared (SWIR) reflectance spectra of hydrothermally altered rocks are lower than unaltered rocks with a comparable primary composition. The lowering of the Shannon entropy indicates chemical and spectral sorting during hydrothermal alteration of rocks. The hydrothermal processes described in this study present a natural mechanism for transforming energy from heat to increased order in rock. The increased order is manifest as the increased sorting of chemical elements and SWIR absorption features of the rock, and can be measured and quantified by the Shannon entropy. The results are useful for the study of hydrothermal mineral deposits, early life environments and the effects of hydrothermal processes on rocks. Hydrothermal processes affect the chemical and mineralogical composition of rock by destabilizing and breaking down the primary rock mineralogy and by forming new secondary minerals ,2. The tEnrichment and depletion of elements and minerals play an important role in the formation of hydrothermal ore deposits. Enrichment of elements and minerals may lead to the formation of economic accumulations of metals and minerals . ChangesThe effects of hydrothermal processes on the composition of rocks are commonly studied by measuring the absolute or relative concentrations of chemical elements ,8 and byCHEM) and the spectral Shannon entropy (HSPEC), respectively. We will show that the Shannon entropy provides single quantitative measures of the effects of hydrothermal processes on the composition of rocks, and we will explain that the lowering of the Shannon entropy in the hydrothermally altered rocks is the result of sorting processes.In this study, we use a novel application of the Shannon entropy to study and measure the effects of hydrothermal processes on the composition of the rock. The Shannon entropy was originally developed in the context of digital transmission of information ,12. In iThree different rock sample sets were used in this study. One set represents the rocks that were altered by hydrothermal processes. The composition of the unaltered precursors could not be measured directly from these rocks because of the intense alteration. Therefore, two sample sets were used as analogs of the unaltered precursors to estimate the chemical and spectral mineralogical composition. The three sample sets are described below. 
The method used to calculate the Shannon entropy of these rock sets is also explained. The hydrothermally altered rocks are represented by a suite of 10 samples from volcanic lithologies of the Soansville greenstone belt of the East Pilbara Granite-Greenstone (EPGG) terrane in Western Australia. The chemical composition of precursor analogs is represented by the analyses of a suite of 176 volcanic rocks from several Archean greenstone belts of the EPGG terrane in Western Australia. The SWIR reflectance spectra of analogs of the unaltered precursor rocks are represented by spectra of a suite of 61 unaltered volcanics from the ASTER spectral library. The Shannon entropy (H) of a probability distribution (P) is defined as H = -\sum_{i=1}^{m} p_i \log_2 p_i (Equation (1)), where m is the number of possible outcomes and p_i is the probability of outcome i. The units of Shannon entropy are bits, because of the log2 term in the equation. Shannon entropies were calculated on the distributions of the major element chemical compositions, HCHEM, and on the SWIR reflectance spectra, HSPEC, of the hydrothermally altered rocks and their assumed precursors. The Shannon entropy is used as a measure of uncertainty in probability distributions. Maximum uncertainty, or maximum Shannon entropy, occurs in a distribution where all possible outcomes have equal probabilities. Such a distribution resembles maximum heterogeneity or randomness. Minimum uncertainty is reached when one of the possible outcomes has a probability of one. In that case, there is no uncertainty and the Shannon entropy becomes zero. Such a situation can be regarded as a form of minimum randomness. Chemical Shannon entropies (HCHEM) were calculated from the whole-rock chemical compositions of the hydrothermally altered and metamorphic volcanic rocks from the EPGG terrane. The following 10 major elements, expressed in weight percentages of their oxides, were included in the calculations and normalized to 100%: Si, Ti, Al, Fe, Mn, Mg, Ca, Na, K and P. Values below the detection limit were replaced by half of the detection limit following common practice. The element concentrations were first converted to molar percentages of the oxides and subsequently into molar percentages of the single elements. The chemical composition of each sample was then converted to a probability distribution by rescaling the concentrations from a sum of 100% to a sum of 1. The HCHEM of each sample was calculated using Equation (1). Trace and volatile elements were not included in the calculation. The Shannon entropies of the elemental chemical composition distributions of the metamorphic volcanic rocks from the EPGG were used to infer the HCHEM of the actual precursor of each altered rock. The relationship between the HCHEM and the Zr/TiO2 ratio of the metamorphic volcanic rocks was calculated by linear regression of the means of the Zr/TiO2 ratios (predictor) on the HCHEM (predicted) of the different volcanic lithologies. Since the number of rock samples across the lithological groups is highly unbalanced, the means of each lithological group were used for this regression instead of the samples themselves. The resulting model was applied to estimate the HCHEM from the Zr/TiO2 ratios of the hydrothermally altered volcanic rocks.
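To make the chemical entropy workflow of Equation (1) concrete, the following minimal Python sketch converts major element oxide weight percentages to molar element fractions and computes HCHEM in bits. It is a sketch of the procedure described above, not the authors' code: the oxide molar masses are standard values, while the choice of Fe2O3 as the iron oxide and the example composition are illustrative assumptions, not data from the study.

```python
import math

# Approximate molar masses (g/mol) and cations per formula unit for the ten
# major-element oxides named in the text. Reporting iron as Fe2O3 is an
# assumption here; the actual data set may use FeO or FeO(total).
OXIDES = {
    "SiO2":  (60.08, 1), "TiO2": (79.87, 1), "Al2O3": (101.96, 2),
    "Fe2O3": (159.69, 2), "MnO": (70.94, 1), "MgO":  (40.30, 1),
    "CaO":   (56.08, 1), "Na2O": (61.98, 2), "K2O":  (94.20, 2),
    "P2O5":  (141.94, 2),
}

def chemical_shannon_entropy(oxide_wt_percent):
    """H_CHEM in bits from major-element oxide weight percentages."""
    # wt% oxide -> moles of oxide -> moles of the single element (cation)
    moles = {ox: (wt / OXIDES[ox][0]) * OXIDES[ox][1]
             for ox, wt in oxide_wt_percent.items()}
    total = sum(moles.values())
    probs = [m / total for m in moles.values()]        # rescale to sum to 1
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Illustrative (invented) composition of a silica-rich altered rock:
example = {"SiO2": 78.0, "TiO2": 0.3, "Al2O3": 12.0, "Fe2O3": 1.5,
           "MnO": 0.02, "MgO": 0.5, "CaO": 0.2, "Na2O": 0.3,
           "K2O": 3.0, "P2O5": 0.05}
print(round(chemical_shannon_entropy(example), 3))
```

The inferred precursor HCHEM described above would then be obtained by a simple linear regression of group-mean HCHEM on group-mean Zr/TiO2, applied to the Zr/TiO2 ratios of the altered samples.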
The reflectance spectra of the two sample sets were first resampled to a 1 nm spectral sampling and equal wavelength ranges. The resulting spectra covered the 1300\u20132500 nm range in 1201 discrete bands. This wavelength range contains diagnostic vibrational absorption features of many hydrothermal alteration minerals. Shannon entropies were calculated from the SWIR spectra of the hydrothermally altered rocks and the unaltered rocks from the ASTER spectral library; this type of entropy was named the spectral Shannon entropy (HSPEC) and was calculated following Equation (1) for each rock sample. The Shannon entropies calculated from the major element chemical data (HCHEM) of the hydrothermally altered rocks are lower than those of the unaltered precursor rocks. A statistical test confirmed that the mean HCHEM values of the four lithological and two alteration groups are not similar. A pairwise comparison using Tukey\u2019s range test showed that the differences between the six groups are statistically significant for all pairs except for the rhyolite and chlorite-quartz pair. The spectral Shannon entropies (HSPEC) of the hydrothermally altered rocks are likewise lower than those of the unaltered precursor rocks. The differences in chemical and spectral Shannon entropy between the hydrothermally altered rocks and the unaltered precursor rocks (\u0394HCHEM and \u0394HSPEC) show that the quartz-sericite altered rocks have \u0394HCHEM and \u0394HSPEC values that range between 0.969 and 1.232 bits and between 0.034 and 0.074 bits, respectively. The chlorite-quartz altered rocks show less elevated \u0394HCHEM and \u0394HSPEC values, which range between 0.363 and 0.612 bits and between 0.001 and 0.002 bits, respectively. The decrease in Shannon entropy of the hydrothermally altered rocks represents a lowering of the uncertainty in the probability distributions of the major element composition and the SWIR reflectance spectra from which the Shannon entropy was calculated. We interpret this decrease in uncertainty as a form of information that was measured using the distributions of rock measurements, and imposed on the rock itself, by the hydrothermal processes, and which can be quantified using the Shannon entropy. The chemical Shannon entropy HCHEM represents the uncertainty in the type of chemical element measured when one atom of the rock is sampled. The uncertainty with respect to the type of selected atom is larger when the distribution has equal probabilities than when the distribution has more varying probabilities. The latter occurs in the suite of hydrothermally altered rocks, where selective enrichment and depletion resulted in distributions with high probabilities of Si and low probabilities of most of the other elements. A low chemical Shannon entropy is interpreted as the result of increased sorting of chemical elements in the altered volcanic rock. The spectral Shannon entropy HSPEC is interpreted as the uncertainty in the wavelength of absorbed IR radiation when one IR-ray that has been absorbed by the rock is sampled and measured. The uncertainty in the wavelengths at which absorption occurs decreases in SWIR spectra with deep absorption features in a few narrow wavelength ranges in bright rock. In these rocks, there is a dominance of absorption at a few narrow wavelength ranges. Flat horizontal spectra approach equal probabilities and produce high entropy values, indicating the absence of deep absorption features. The low spectral Shannon entropy is interpreted as the result of increased sorting of absorption features in the altered rock. The Shannon entropy thus provides quantitative estimates of the effects of sorting processes on the composition of rocks.
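A minimal sketch of how a spectral Shannon entropy of this kind can be computed is given below. The recovered text does not spell out how a reflectance spectrum is converted into a probability distribution; the sketch assumes that the per-band absorption (1 - reflectance) is normalized to sum to one, which matches the interpretation of HSPEC as the uncertainty in the wavelength of an absorbed IR ray, but the study's exact normalization may differ. The synthetic spectra are illustrative only.

```python
import numpy as np

def spectral_shannon_entropy(reflectance):
    """H_SPEC in bits for a SWIR reflectance spectrum (values in 0..1).

    Assumption: probability per band is proportional to absorption (1 - R),
    normalized to sum to 1; the actual study may use a different conversion.
    """
    absorption = np.clip(1.0 - np.asarray(reflectance, dtype=float), 0.0, None)
    p = absorption / absorption.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Synthetic illustration: 1201 bands covering 1300-2500 nm at 1 nm sampling.
wavelengths = np.arange(1300, 2501)                              # nm
flat = np.full_like(wavelengths, 0.5, dtype=float)               # featureless spectrum
altered = 0.8 - 0.6 * np.exp(-0.5 * ((wavelengths - 2200) / 15.0) ** 2)  # one deep feature

print(spectral_shannon_entropy(flat))     # equals log2(1201) ~ 10.23 bits (uniform)
print(spectral_shannon_entropy(altered))  # lower: absorption concentrated near 2200 nm
```

The flat spectrum illustrates the high-entropy end member, while the bright spectrum with a single deep absorption feature gives a lower value, consistent with the interpretation of sorting of absorption features described above.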
It is important to note that the Shannon entropy is somewhat subjective since the values depend on the type and number of the measured variables. We do not normalize the Shannon entropy, so the values are not confined to a fixed interval such as 0 to 1; the attainable maximum is log2(m) and therefore depends on the number of variables (about 3.32 bits for 10 elements versus about 10.23 bits for 1201 spectral bands). For quantitative comparisons between rocks analyzed in different batches or from different areas, the measurement parameters have to be standardized. This means that parameters such as the number and type of chemical elements or spectral bands, and the units in which they are measured, must be equal. Different measurement parameters will give different Shannon entropy values for the same rock. By placing our results in the wider geological context of the study area, we found a relationship between the change in Shannon entropy of the hydrothermally altered rock and the heat of a cooling magma that drove the hydrothermal system in which the rocks were altered. The altered rocks in this study were originally deposited in a submarine seafloor environment where heat provided by a coeval sub-volcanic intrusion drove hydrothermal fluids through the volcanic sequence. Circulation of predominantly seawater-derived fluids caused large-scale alteration of the volcanic rocks, where the type and composition of the altered rock depend on the fluid composition and physicochemical conditions. The hydrothermal fluids destabilized the primary volcanic minerals, such as volcanic glass and ferromagnesian minerals, and produced quartz-sericite and chlorite-quartz as alteration minerals. Alteration reactions caused the liberation of Na and Ca, which were subsequently removed by hydrothermal fluids, and the accumulation of Si. Further depletion of Fe and Mg occurred in zones of sericite-quartz alteration, whereas Fe and/or Mg in chlorite-quartz altered rock were retained in chlorite. The breakdown of precursor minerals and the formation of new minerals changed the chemical composition and increased the sorting of elements in the volcanic rock. The Shannon entropy is also formally similar to the statistical thermodynamic entropy of Boltzmann [24]. Sorting processes play an important role in the formation of mineral deposits, where selective enrichment and depletion may lead to the accumulation of elements or minerals. The Shannon entropy is a measure of the degree of sorting of chemical elements in rock and can, therefore, be used to detect these accumulations. The Shannon entropy is insensitive to the types of elements that are enriched and depleted. The method is complementary to conventional methods of rock composition analysis and does not replace them. Many mineralized environments are formed by hydrothermal processes, where hydrothermally altered rocks are associated with economic accumulations of elements or minerals. The Shannon entropy can act as a proxy for mineralization by enabling the identification of zones of intense wall-rock alteration, independent of the type of alteration. Hydrothermal environments are considered favorable for developing and sustaining early life [26]. We conclude that the hydrothermal processes described in this study present a natural mechanism for transforming energy from heat to increased order in rock. The relationship between heat and Shannon entropy is indirect and based on changes in the probability distributions of rock measurements. The increased order is manifest as increased sorting of chemical elements and SWIR absorption features of the rock and can be measured and quantified by the Shannon entropy.
The results are useful for the study of hydrothermal mineral deposits, early life environments and the effects of hydrothermal processes on rock."} +{"text": "The Mosquito: A Human History of Our Deadliest Predator details Anopheles,\u201d which delivered malaria to the Persians as they navigated swampy terrain, ultimately led to a victory by the Greeks during the Greco-Persian Wars. Mosquitoes aided the rise and the fall of the Roman Empire because the Pontine Marshes served as a barrier to enemies and a direct source of disease. Christianity spread across Europe and had a reputation as a healing religion that valued treating persons affected by the mosquitoborne diseases. Christians failed to capture the Holy Land during the Crusades partially because Plasmodium-infected mosquitoes attacked inexperienced Crusaders.This book describes how mosquitoes and their diseases have shaped the outcomes of war, the spread of religion, and the development of modern culture. Attacks from \u201cGeneral Winegard emphasizes the effect of mosquitoborne diseases on the development of the United States. European explorers delivered a lethal dose of mosquitoborne disease to the New World, contributing to the destruction of indigenous populations and the subsequent colonization of the Americas. Partial acquired and genetic immunity to vectorborne diseases drove the demand for enslaved persons from Africa, ensuring the productivity of plantation economies. Widespread malaria delayed the Union victory during the American Civil War, contributing to Abraham Lincoln\u2019s decision to focus on the elimination of slavery. Without malaria, a rapid Confederate defeat might not have led to the Emancipation Proclamation of 1863. Although mosquitoes probably were not the sole reason for these historical outcomes, they most likely contributed substantially to the progression of events.Winegard emphasizes that, despite modern scientific advancements, the mosquito\u2019s legacy to shape human history is not finished. The development of DDT and antimalarial drugs, such as atabrine and chloroquine during World War II, followed by the subsequent emergence of resistance to these treatments, provide evidence for the need to continue research of mosquitoborne diseases. This book also touches on the controversial topic of clustered regularly interspaced short palindromic repeats, an innovative technology that could genetically alter mosquitoes to prevent human diseases. Although Winegard describes the potential usefulness of this powerful tool, organisms and the environment may suffer unintended devastating consequences.This book is a fascinating account of the value of mosquitoes in shaping human culture and existence across time. Persons interested in the interplay between history and disease and future implications will learn much and enjoy the accumulation of knowledge and the exciting narrative presentation."} +{"text": "Functional disability leads to limitations in the older adults\u2019 personal activities and social participation. The purpose of this study was to examine the International Classification of Functioning Disability and Health model in which personal activities and social participation influence functional disability in older adults who live alone and have experienced falls. The study used a secondary data analysis of the 2017 National Survey of Older Koreans. A total of 501 study participants met the inclusion criteria. 
The results of multiple linear regression indicated that gender and the number of acquaintances were significantly related to the functional disability of social participation while overnutrition, depressed symptom and cognitive dysfunction were related to the functional disability of personal activities. Lastly, poor muscle strength, old age and economic status were predictors of the functional disabilities of both personal activities and social participation. The findings of the study revealed that it is important to comprehensively evaluate not only personal activities but also social participation of older adults who live alone and have experienced falls. In addition, the ICF model may be useful in the development of intervention programs for preventing functional disability in the population."} +{"text": "Pancreatic fistula (PF) remains the primary source of morbidity after distal pancreatectomy (DP). There is currently no optimal stump closure technique to reduce PF rates. We present a novel technique for pancreatic stump closure using Clip Ligation of the duct and Associated Suturing of Pancreas (CLASP). Five patients with a median age of 65 years underwent DP and splenectomy for pancreatic body or tail tumour using the CLASP technique. Four of those operations were done laparoscopically. Only one patient developed grade A PF. No other postoperative complications were noticed. The mean length of stay was 5.4 days. The CLASP technique was applicable in both laparoscopic and open distal pancreatectomy. The key points include mobilisation of the pancreatic body from the retroperitoneum and division of the parenchyma with energy device. The technique of pancreatic stump closure involves the isolation of the pancreatic duct (PD), application of a double ligaclip on the proximal duct, division of the PD and finally suturing of the pancreatic stump. The CLASP technique is an effective and safe alternative technique to the current traditional methods of pancreatic stump closure. Distal pancreatectomy (DP) was first performed by Billroth in 1884 and was further outlined by Mayo in 1913 . MultiplMany techniques have been proposed for the management of the pancreatic remnant to reduce the incidence of PF after DP. Stapler closure and handsewn closure of the pancreatic stump are the standard methods described in the literature ,6-8. SevWe present a novel technique for pancreatic stump closure using Clip Ligation of the duct and Associated Suturing of the Pancreas (CLASP). We retrospectively reviewed a prospective database of five patients who underwent DP and splenectomy (DPS) using the CLASP technique by a single surgeon (K.M.). Clinicopathological data and outcomes of the patients were recorded and analyzed. Postoperative complications were classified according to the Clavien-Dindo methodology, and PF was graded according to the International Study Group of PF -19.The initial standard steps that we follow for the DPS include the division of the gastrocolic ligament and short gastric vessels, the mobilisation of the inferior and superior pancreatic borders, and the tunnelling below the pancreatic body/neck, proximally to the lesion. The splenic artery and splenic vein are identified, isolated and divided separately at the level of the planned parenchyma transection.\u00a0The CLASP technique for the closure of the pancreatic stump is applicable for both open and laparoscopic surgery following the same principles and key points Figures -8. 
The p\u00a0Overall, five patients underwent DPS using the CLASP technique. Table The main effort of all the techniques for the closure of the pancreatic stump is to decrease the incidence of PF. It is well known that the pancreatic leak from the pancreatic parenchyma and small pancreatic duct branches is usually self-controlled and rarely causes clinically relevant PF (grade B or C). On the contrary, leak from the MPD is the primary source of morbidity after DP . There iOur novel technique could be an alternative way of closing the pancreatic stump after DP, comparable to the traditional ones that have been described previously. The focus is on two key points: firstly the division of the pancreatic parenchyma using an energy device that seals the small pancreatic duct branches and secondly the identification and isolation of the pancreatic duct followed by the precise and accurate application of two or three metal clips on the proximal MPD. The pancreatic cut surface is oversewn with a continuous 4.0 prolene suture, closing the lips of the pancreatic remnant. No further adjunct, such as omental plug, falciform ligament patch or fibrin glue sealant, is necessary. We believe that the incidence of leak is reduced when the MPD is identified and directly clipped.The CLASP technique is applicable to both open and laparoscopic surgery, based on the same principles. It is easily reproducible using an energy device and ligaclips that are routinely used in laparoscopic DPS. We believe that the CLASP technique can be routinely used to close all pancreatic stumps, but also has two specific indications. The first indication is for bulky and thick pancreas where the application of a stapler could crush the parenchyma, increasing the risk of pancreatic leak . The secThe present study has limitations, which are mainly inherent to the small number of patients who underwent DPS using the CLASP technique. Moreover, the results should be validated using large cohorts comparing all the methods of pancreatic stump closure taking into consideration the appearance of the pancreas, the size of the duct and the anatomical and pathological characteristics of the lesion.\u00a0The CLASP technique appears to be feasible, reproducible and safe alternative technique that can be used for the pancreatic stump closure, compared to the traditional methods of pancreatic stump closure. Particular indications include the presence of bulky pancreas and proximal pancreatic body or neck lesions. Further clinical studies including prospective randomised controlled trials could establish a standardised approach for the pancreatic stump closure."} +{"text": "One of the most frequent complications of the systematic lymph node dissection (SLND) is the injury of autonomic nervous system in the para-aortal region during the procedure. These injuries are supposed to be responsible for some of the postoperative bladder, bowel, and sexual dysfunctions. The poor anatomical understanding of the sympathetic nerves within the boundaries of an infra-renal bilateral template has limited the promulgation of a precise nerve-sparing surgery during such SLND. Therefore, the principal goal of the present study was to provide the first ever-comprehensive exposition of the anatomy of the female aortic plexus and superior hypogastric plexus and their variations. 
This exposition was achieved by strategic dissection of 19 human female cadavers and extrapolating the findings to develop a precise surgical technique for more accurate navigation into these structures during nerve-sparing SLND in 15 cervical cancer patients and 48 ovarian cancer patients.Whilst systematic lymph node dissection has been less prevalent in gynaecological cancer cases in the last few years, there is still a good number of cases that mandate a systematic lymph node dissection for diagnostic and therapeutic purposes. In all of these cases, it is crucial to perform the procedure as a nerve-sparing technique with utmost exactitude, which can be achieved optimally only by isolating and sparing all components of the aortic plexus and superior hypogastric plexus. To meet this purpose, it is essential to provide a comprehensive characterization of the specific anatomy of the human female aortic plexus and its variations. The anatomic dissections of two fresh and 17 formalin-fixed female cadavers were utilized to study, understand, and decipher the hitherto ambiguously annotated anatomy of the autonomic nervous system in the retroperitoneal para-aortic region. This study describes the precise anatomy of aortic and superior hypogastric plexus and provides the surgical maneuvers to dissect, highlight, and spare them during systematic lymph node dissection for gynaecological malignancies. The study also confirms the utility and feasibility of this surgery in gynaecological oncology. Dissection evaluation of pelvic and para-aortic lymph nodes has been an integral component of the surgical staging protocol for several gynaecologic malignancies for over a century ,2. HowevNevertheless, the significance of the prognostic value of the para-aortic lymph node status in locally advanced cervical cancer has been accentuated again in the new International Federation of Gynaecology and Obstetrics (FIGO) staging system for cervical cancer that classifies patients with a para-aortic lymph node involvement in stage FIGO IIIC2 .These developments in surgical management of gynaecological malignancies have given rise to austere diagnostic restrictions for a systematic lymph node dissection in such cases. Taking these facts and the potential additional treatment burden of a systematic lymph node dissection into account, all contentions are in place to avoid the complications of a lymph node dissection. One of the most frequent and vital complications of the para-aortic lymph node dissection that has perhaps never descended into the cognitive focus of many gynaecologic oncologists is the injury of autonomic nervous system in the para-aortal region during the procedure. The present study supposes that these nerve injuries are responsible for some of the postoperative bladder, bowel, and sexual dysfunctions.The present study also contends that a poor anatomical understanding of the sympathetic nerves within the boundaries of an infra-renal bilateral template has limited the promulgation of a precise nerve-sparing surgery during such systematic lymph node dissections. 
Since we already described the precise anatomy of pelvic autonomic nervous system (inferior hypogastric plexus) in our previous studies ,8,9, theThis exposition was achieved by strategic dissection of human female cadavers and extrapolating the findings to develop a precise surgical technique for more accurate navigation into these structures during nerve-sparing systematic lymph node dissections, especially in the para-aortic region.The abdominal aortic plexus is the sympathetic network of autonomic nerves overlying in the front and sides of the abdominal aorta in the infra-renal bilateral template and going along with the superior hypogastric plexus at the sacral promontory ,11. The The cadaver study has primarily shown that the easiest way to identify the lumbar splanchnic nerves is to prepare and dissect them from their caudal most part at the superior hypogastric plexus in the presacral area (the point of bifurcation of the common iliac arteries) and then to track carefully the deepest lumbar splanchnic nerve , to its point of origin at the second lumbar ganglion of the right sympathetic trunk. The study has also elucidated that it would be easy, in most cases , to identify a ganglion at the right side of the aorta around the inferior mesenteric artery or maximal 1 cm caudal from it (the inferior mesenteric ganglion). Successful identification of this ganglion allows the dissection to go laterocranial and dorsal to the vertebral column in the interaortocaval space above the middle lumbar vein and then behind the vena cava, at the level of the right lumbar artery in the study. This accessory nerve originates in our cases from the third lumbar ganglion of the right sympathetic chain and joins the inferior right lumbar splanchnic nerve before reaching the inferior mesenteric ganglion.The superior right splanchnic nerve will be easy to identify when dissecting along to its origin from the first lumbar ganglion of the right sympathetic chain behind and lateral of vena cava at the level where the left renal vein inserts into vena cava crossing over the right superior lumbar vein. . In all of these cases, the superior lumbar splanchnic nerve is identified as crossing another even smaller ganglion directly caudal from the origin of right ovarian artery from the aorta. .By resecting the lymph node in the paracaval region, there is truly little or no chance to injure the right lumbar splanchnic nerves or the right sympathetic chain during the surgery. The dissection and resection of lymph nodes in the intra-aortocaval region, on the other hand, always leads to an injury to the right lumbar splanchnic nerves, the inferior mesenteric ganglion, the prehypogastric ganglion, and the ovarian ganglion, if the surgeon does not pay sufficient attention to recognize, highlight, and carefully isolate these nervous components , and it obviously complicated the clear exposure of the first lumbar ganglion of the left sympathetic chain during systematic lymph node dissection is the postoperative impairment of the sexual function. Impairment of the sexual function is, in fact, the most enduringly compromised quality-of-life (QOL) issue encountered after treatment for gynaecologic cancers, affecting up to 50% of the patients ,25,26. 
TIn the recently published data of a sub-protocol of the prospectively randomized LION trial, comparing two cohorts that differed only in the performance of a lymphadenectomy, the role of radical surgery in the retroperitoneal space has been prospectively substantially evaluated with reference to the sexual function (sub-study LION-PAW). It is imperative to mention that, in the LION trial, there have been no requirements for nerve-sparing surgical techniques . The subThe anatomic dissections of two fresh and 17 formalin-fixed female cadavers were utilized to study, understand, and decipher the hitherto ambiguously annotated anatomy of the autonomic nervous system in the retroperitoneal para-aortic region. Rigorous dissection protocol was in place to interpret the retroperitoneal para-aortic nervous connections to and from the contiguous anatomical structures, with specific reference to lymph node dissection in gynae-oncology. The new anatomical know-how from this cadaver study was utilized to enhance and develop a superior technique for para-aortic lymph node dissection and was efficaciously employed in locally advanced cervical cancer (a cohort of 15) and in aWe aimed in this study to describe the whole components of abdominal aortic plexus anatomically and to develop a surgical technique to help us to spare them during systematic lymph node dissection (The video presents the technique of laparoscopic nerve-sparing lymph node dissection ). The foThe nerve-sparing systematic lymph node dissection is feasible in gynaecological malignancies by following the aforementioned anatomical depended surgical technique for dissection the aortic and superior hypogastric plexus. This might enhance the postoperative functional outcomes, especially the sexual complications after such surgeries."} +{"text": "The technological and scientific progress that we have experienced in recent years has contributed to characterization of the complex processes underlying human biology and evolution. In this regard, the studies performed on humans, both in pathological and physiological conditions, have been fundamental to improving knowledge of how genes, epigenetic modifications, aging, nutrition, drugs, and the microbiome affect the state of health and influence the onset of diseases . FurtherFrom a forensic field application point of view, the technological progress undergone by biology has allowed the development of innovative tools for scientific investigation. DNA analysis is no longer solely for comparative use but also for investigative use. In the last 30 years, alongside the development of increasingly sensitive techniques capable of typing biological samples consisting of even only a small number of cells, protocols and software have been developed for the interpretation of predictive markers of phenotypic, ancestral characteristics, and for the biostatistical evaluation of evidence ,3.Forensic Genetics and Genomics\u201d focuses on the latest scientific achievements in the field of forensic biology, from the introduction of new technologies allowing DNA analysis at crime scenes to the use of big data from genome sequencing studies and the study of population genetics for the development of protocols with investigative and phylogenetic purposes.The Special Issue \u201cThis Special Issue features eleven high-impact scientific articles. One of the most up-to-date issues is the possibility of analyzing DNA directly at the crime scene. 
A drawback of current forensic DNA technology is the need for cumbersome equipment, making it difficult to operate outside the laboratory environment. Transitioning forensic DNA analysis from the laboratory to the scene of an incident should deliver wide-ranging benefits in terms of the speed of result delivery, reduced contamination risk, and more efficient staff training ,5. OxforNanopore Sequencing of a Forensic STR Multiplex Reveals Loci Suitable for Single-Contributor STR Profiling explores the possibility of using this new technology in detail, highlighting its current limitations though also proposing short-term solutions that could allow partial rapid application [The article lication . While tlication . Comparative Analysis of ANDE 6C Rapid DNA Analysis System and Traditional Methods compares the rapid analysis of DNA using the ANDE 6C with that of capillary electrophoresis [Continuing the topic of on-site genetic analysis, the work of Ragazzo M et al. In the field of new technologies, the article by Ragazzo et al. describes the application of NGS technology for DNA typing and for personal identification in the case of mixed biological evidence . The resA number of papers on population genetics have been published in this Special Issue ,10,11,12Ancestry Prediction Comparisons of Different AISNPs for Five Continental Populations and Population Structure Dissection of the Xinjiang Hui Group via a Self-Developed Panel selected 30 novel AISNPs able to discriminate between African, European, East Asian, and South Asian populations and developed a multiplex analysis based on the NGS platform, comparing the resulting ancestry resolutions with those provided by the other published AISNPs [The paper d AISNPs ,13,14,15d AISNPs . The resd AISNPs .Joint Genetic Analyses of Mitochondrial and Y-Chromosome Molecular Markers for a Population from Northwest China reports an interesting study of the Chinese population [The paper pulation . China cpulation . The genpulation . The happulation . Thus, mpulation . Howeverpulation . Characterizing Y-STRs in the Evaluation of Population Differentiation Using the Mean of Allele Frequency Difference between Populations describes the use of the mean of allele frequency differences (mAFD) from the Yfiler set and Yfiler Plus to determine any population sub-division [The paper division . These rdivision .Genetic Reconstruction and Forensic Analysis of Chinese Shandong and Yunnan Han Populations by Co-Analyzing Y Chromosomal STRs and SNPs reports a comparative study through the use of Y-STRs and low-resolution Y-SNPs in two Chinese populations\u2014Shandong Han and Yunnan Han\u2014to characterize the patrilineal patterns within these populations [The article ulations . As a reulations .A Highly Polymorphic Panel Consisting of Microhaplotypes and Compound Markers with the NGS and Its Forensic Efficiency Evaluations in Chinese Two Groups evaluates the use of a selection of 29 compound markers that are a combination of one InDel and one SNP in a genomic region [The paper c region . In partc region . This prc region .The STRidER Report on Two Years of Quality Control of Autosomal STR Population Datasets reports the two-year experience of STRidER, the STRs for the Identity ENFSI Reference Database [The paper Database . It is aDatabase . The repDatabase . It is wDatabase . 
Data acDatabase .Autosomal STR Profiling and Databanking in Malaysia: Current Status and Future Prospects details the progress of DNA profiling and DNA databanking in Malaysia [Criminal DNA databases are expanding around the world to support the activities of criminal justice systems. The paper Malaysia . The artMalaysia . Challenges in Human Skin Microbial Profiling for Forensic Science: A Review reports on the possible applications of microbiome analysis in the field of forensic science. It is well known that humans have an extremely diverse microbiome that can be useful in inferring ethnicity and personal identification [The paper fication . The forfication .In conclusion, the heterogeneity and quality of the works presented in this Special Issue represent the constant advancement of knowledge in the field of forensic genetics.The development of technologies that allow the analysis of huge portions of the genome or the whole genome is already promoting the transition between forensic genetics and forensic genomics. Alongside the classic typing of STR markers for personal identification and for the genetic characterization of biological evidence, we are witnessing the development of methods that analyze every element of the genome, the transcriptome, and the epigenome. The new achievements of omic sciences can be used in forensic sciences as long as technological evolution allows the validation and standardization of the results. We are likely at the dawn of a new forensic genomics era whose potential applications are limited only by the imagination of researchers."} +{"text": "Following the West Africa Ebola virus disease (EVD) outbreak (2013\u20132016), WHO developed a preparedness checklist for its member states. This checklist is currently being applied for the first time on a large and systematic scale to prepare for the cross border importation of the ongoing EVD outbreak in the Democratic Republic of Congo hence the need to document the lessons learnt from this experience. This is more pertinent considering the complex humanitarian context and weak health system under which some of the countries such as the Republic of South Sudan are implementing their EVD preparedness interventions.We identified four main lessons from the ongoing EVD preparedness efforts in the Republic South Sudan. First, EVD preparedness is possible in complex humanitarian settings such as the Republic of South Sudan by using a longer-term health system strengthening approach. Second, the Republic of South Sudan is at risk of both domestic and cross border transmission of EVD and several other infectious disease outbreaks hence the need for an integrated and sustainable approach to outbreak preparedness in the country. Third, a phased and well-prioritized approach is required for EVD preparedness in complex humanitarian settings given the costs associated with preparedness and the difficulties in the accurate prediction of outbreaks in such settings. Fourth, EVD preparedness in complex humanitarian settings is a massive undertaking that requires effective and decentralized coordination.Despite a very challenging context, the Republic of South Sudan made significant progress in its EVD preparedness drive demonstrating that it is possible to rapidly scale up preparedness efforts in complex humanitarian contexts if appropriate and context-specific approaches are used. 
Further research, systematic reviews and evaluation of the ongoing preparedness efforts are required to ensure comprehensive documentation and application of the lessons learnt for future EVD outbreak preparedness and response efforts. The highly contagious and lethal nature of the Ebola virus disease (EVD) coupled with the negative impact that outbreaks have on the health system, social, cultural and economic development of affected communities ranks the disease as one of the most complex, dreaded and dramatic public health phenomena in recent time. Outbreaks of the disease have become more intense in Africa in terms of frequency, magnitude, duration and impact . As of 1The Republic of South Sudan (RSS) is considered to be at high risk of potential cross border importation from the current outbreak due to its proximity to the epicentres of the outbreak in North Kivu and Ituri provinces . The MinThe importance of instituting effective EVD outbreak preparedness measures for timely detection and containment of outbreaks cannot be overemphasized. The highly effective EVD alert system and preparedness measures instituted in one of the high-risk countries, Uganda ensured that it was able to timely detect and contain two cross border transmissions of the disease which were reported on 11 June and 29 August 2019 , 9. ThisThis article highlights the progress made so far in the EVD preparedness efforts in RSS, the associated challenges, describes the key lessons learnt and proposes recommendations for using these to improve preparedness for the current and future outbreaks in the country and other complex humanitarian settings.RSS is the World\u2019s newest nation, emerging from an almost 50-year-long civil war with its northern neighbour to attain independence in 2011. Unfortunately, the new country soon plunged into a civil war in December 2013 which resulted in the internal and external displacement of over four million South Sudanese thus triggering a major humanitarian crisis. The renewed armed conflict in the country disrupted an already fragile healthcare system culminating in a severe shortage of healthcare workers, a disrupted supply management system for essential medicines and medical supplies, inadequate health financing, weak health governance and oversight system. These continue to constrain access to good quality, sustainable and affordable healthcare services in the country.The health indicators in the country are poor; out of pocket spending on health is 54% while life expectancy at birth is 59\u2009years . The matFollowing the designation of the country as high risk for cross border EVD transmission in August 2018, the Ministry of Health and its partners instituted several EVD preparedness interventions. 
These include the designation of seven frontline states as high-risk, the establishment of a national EVD Incident Management System which comprise national and state coordination task forces, establishment of EVD surveillance at major points of entry, communities and health facilities in the high-risk states, activation and training of rapid response teams, establishment of laboratory capacity for diagnosis of EVD at the National Public Health Laboratory (NPHL), risk communication and preventive vaccination of 2974 frontline workers in the high-risk states.Two external joint monitoring missions were conducted in November 2018 and March 2019 to monitor the country\u2019s preparedness for EVD; the results showed improvement in the level of EVD preparedness in the country from 17% in November 2018 to 61% in March 2019 .Despite these achievements, several challenges which continue to constrain timely implementation of EVD preparedness activities persist in the country. Table\u00a0Against the backdrop of the complex humanitarian setting, achievements and challenges highlighted in Table\u00a0The appreciable improvements which were observed in the preparedness level for EVD in RSS despite a challenging environment have shown that effective EVD outbreak preparedness is feasible in complex humanitarian settings. This achievement is premised on some success factors which served as guiding principles in the implementation of EVD preparedness interventions. First, the preparedness efforts were based on the principles of bridging the humanitarian-development nexus thus emphasis was on a two-pronged approach. This approach prioritized the rapid establishment of critical outbreak response capacities that did not exist in the country previously while at the same time strengthening the health system capacity for longer-term management of disease outbreaks in general and EVD in particular.For example, instead of depending on a regional referral laboratory (the Uganda Virus Research Institute in Entebbe) for confirmation of the Ebola virus using the reverse transcription polymerase chain reaction (RT-PCR) method, installation of an RT-PCR machine , supporting infrastructure and training of national laboratory technologists to conduct the confirmatory test at the NPHL significantly improved the turnaround time for confirmation of suspected EVD samples in the second half of 2019 (unpublished WHO data). Aside from EVD, this machine is also used to test for other viral diseases such as influenza, Yellow fever, Marburg, Rift Valley fever, and other emerging pathogens. This capacity was rapidly adapted to test for the coronavirus disease 2019 (COVID-19) in early 2020. Likewise, the construction of semi-permanent points of entry screening points and installation of thermal scanners at Juba International Airport and Nimule land crossing border point within the EVD preparedness framework is contributing to screening for other diseases such as the newly detected COVID-19 and longer-term strengthening of the National Port Health System. Furthermore, the semi-permanent all-purpose infectious disease management unit which was originally constructed for EVD preparedness has been adapted and is being used for isolation of suspected and confirmed cases of COVID-19.Second, development and implementation of innovative approaches including the use of appropriate technology were critical in identifying and addressing the key gaps in preparedness. 
For example, the development of a real-time and health system based integrated health facility supervision tool using digital health technology ensured timely identification of gaps in infection prevention and control and surveillance at the health facility level. This system provides valuable information for decision making for EVD preparedness to the state and national EVD taskforce teams as well as for the broader infection prevention and control in health facilities in the country.Adapting this lesson for use in future EVD preparedness and response efforts requires the incorporation of strategies aimed at bridging the humanitarian-development nexus into EVD preparedness and planning processes in humanitarian settings right from the onset. This will ensure definition, prioritization and implementation of both immediate and long-term health system strengthening interventions for each preparedness and response pillar and phase.Aside from the risk of cross border transmission of EVD from neighbouring countries such as the DRC and Uganda, the southern part of RSS sits in one of the ecological zones of the disease and jointly (with DRC) recorded the first-ever outbreak of EVD in 1976 . The duaAdoption of this lesson requires some critical actions. First, an integrated and sustainable approach to outbreak preparedness in the country which uses the National Action Plan for Health Security (NAPHS) as a platform to strengthen national capacity for surveillance, detection, diagnosis and management of not only EVD but all disease outbreaks is required . Cross bThird, in communities where trust has not been established and the context is volatile, negotiation of the security of EVD preparedness and response assets and access to EVD affected and high-risk areas should be integrated into and implemented as integral part of EVD preparedness and response interventions in all settings similar to RSS and the DRC.Fourth, innovative approaches to overcome the logistic challenges of timely detection, investigation and response to potential outbreaks should be developed as early as possible. Such innovations may include the use of appropriate digital health technologies where feasible, decentralization of preparedness and response interventions and task shifting.From August 2018 to December 2019, an estimated USD 30.5 million was expended on EVD preparedness interventions in RSS . This trThe experiences during the ongoing preparedness efforts in RSS conveyed the challenges associated with predicting the location and timing of outbreaks. Unpublished data from the EVD alert management system showed that many of the alerts were detected outside the four locations where isolation units were sited. Transportation of patients from the site of an outbreak to any of these units would be logistically difficult in the complex humanitarian contexts like RSS. On the other hand, constructing a new EVD treatment centre in an epicentre would take a considerable number of days, during which the absence of proper isolation and infection prevention and control facilities could propagate the outbreak. This lesson calls for more cost-effective and all-hazard approaches to future EVD preparedness.Addressing these lessons requires some key actions. First, evidence-based geographic and programmatic prioritization and phasing of EVD preparedness interventions are required. Each phase should have a clearly defined set of public health interventions. 
Such interventions should be prioritized according to those required nationwide and those required in the high-risk areas. For instance, preparedness interventions such as EVD risk communication and surveillance are required throughout the country during all phases of preparedness and response while interventions such as case management may be limited in geographic scope to specific locations where the risks of EVD importation are highest. The lessons learnt from neighbouring Uganda on their recent response experience to the importation of cases from the DRC to Kasese district conveyed an innovative approach used to overcome the challenge with predicting the location of a subsequent EVD confirmed case along their western border. A low-cost alternative (less than USD 30 000 per unit) similar to the CUBE which was developed by The Alliance for International Medical Action (ALIMA) came in Second, continuous assessment and mapping of the EVD transmission risks would limit the number of interventions and partners required to implement them thus reducing costs.The complexities and the large number of partners required to prepare for EVD transmission into complex humanitarian settings such as RSS is associated with some challenges. The meetings of the national EVD taskforce which was the strategic decision-making body were often long and inconclusive due to limited functionality of its technical working groups which should have discussed and resolved technical issues before presentation to the task force for approval. Inadequate guidance of the several partners most of who were clustered in the capital and one of the high-risk states, Gbudwe resulted in poor adherence to EVD norms and standards in the training of EVD staff and implementation of interventions. For instance, the inability of EVD partners to reach a consensus on the design, specifications and quantities of personal protective equipment, EVD isolation units, and content of EVD training modules and infection prevention and control standards and procedures often compromised the quality and timeliness of essential EVD preparedness interventions. Furthermore, the concentration of most implementing partners in the capital resulted in weak coordination in the high-risk states.The introduction of two streams of coordination namely humanitarian and public health coordination addressed some of these challenges up to an extent. However, the new way of coordination came with its challenges. There was an initial lack of clarity about the allocation of tasks and responsibilities and linkage between the two streams of coordination. This was further complicated by duplication of roles and responsibilities in the national incident management system.Addressing these coordination challenges requires a unified, strong and decentralized coordination platform for EVD preparedness in future. Such a platform should ensure that there is a generic architecture and terms of reference for various streams of EVD preparedness coordination and decentralization of the coordination mechanisms to the high-risk states and potential epicentres of outbreaks . FurtherDespite a very complex humanitarian context, RSS made significant progress in its EVD preparedness efforts which demonstrate that EVD preparedness interventions can be rapidly scaled up in such contexts if appropriate and context-specific approaches are used. 
Despite the progress, the foregoing lessons show several gaps which should be addressed in the current and future preparedness efforts.Moving forward, the gains made so far in the preparedness efforts for EVD and other outbreaks in RSS should be consolidated and used as an opportunity to build longer-term capacity for national and sub-national outbreak preparedness and response. In this regard, existing plans and systems such as the NAPHS, the Integrated Diseases Surveillance and Response System and the Early Warning, Alert and Response Network should be nurtured to maturity and used as sustainable, integrated and health system-based platforms for outbreak preparedness and response. This has become more pertinent given the emergence of new public health threats such as COVID-19.Furthermore, the lessons highlighted above should be used as evidence to revise the WHO EVD preparedness guidelines. Lastly, further research, systematic reviews, monitoring and evaluation of the ongoing EVD preparedness efforts are required to ensure systematic documentation and application of the lessons learnt from previous and current outbreaks to future EVD preparedness and response efforts. The recently established Lancet Infectious Diseases Commission is a platform for implementing the foregoing recommendations ."} +{"text": "Out-of-hospital cardiac arrest patients with pulseless electrical activity are treated by paramedics using basic and advanced life support resuscitation. When resuscitation fails to achieve return of spontaneous circulation, there are limited evidence and national guidelines on when to continue or stop resuscitation. This has led to ambulance services in the United Kingdom developing local guidelines to support paramedics in the resuscitative management of pulseless electrical activity. The content of each guideline is unknown, as is any association between guideline implementation and patient survival. We aim to identify and synthesise local ambulance service guidelines to help improve the consistency of paramedic-led decision-making for the resuscitation of pulseless electrical activity in out-of-hospital cardiac arrest.A systematic review of text and opinion will be conducted on ambulance service guidelines for resuscitating adult cardiac arrest patients with pulseless electrical activity. Data will be gathered direct from the ambulance service website. The review will be guided by the methods of the Joanna Briggs Institute (JBI). The search strategy will be conducted in three stages: 1) a website search of the 14 ambulance services; 2) a search of the evidence listed in support of the guideline; and 3) an examination of the reference list of documents found in the first and second stages and reported using the Preferred Reporting Items for Systematic Reviews and Meta-analyses. Each document will be assessed against the inclusion criteria, and quality of evidence will be assessed using the JBI Critical Appraisal Checklist for Text and Opinion. Data will be extracted using the JBI methods of textual data extraction and a three-stage data synthesis process: 1) extraction of opinion statements; 2) categorisation of statements according to similarity of meaning; and 3) meta-synthesis of statements to create a new collection of findings. Confidence of findings will be assessed using the graded ConQual approach. 
In the United Kingdom (UK), ambulance services resuscitate over 28,000 adults each year following cardiac arrest. A number of studies have attempted to validate termination of resuscitation rules. However, these studies have reported conflicting results due to differences in local strategy and the small number of patients who survived when the termination of resuscitation criteria were met. This review aims to explore and evaluate local clinical guidelines for terminating resuscitation in PEA. However, given the lack of high-quality evidence, it is necessary to utilise evidence derived from clinical expertise and opinion. Therefore, this systematic review will comprise text and opinion. The objectives of this review are to: (1) summarise the current variation in treatments and (2) summarise the evidence cited in support of such treatments. To develop the review protocol, a question was formed using the population, phenomenon of interest and context (PPC) criteria. The population of interest will have suffered a pre-hospital cardiac arrest and present with the non-shockable rhythm PEA; this review will consider the local clinical guidelines that manage patients over 18 years old. This review will be guided by the methods of the Joanna Briggs Institute (JBI). An initial Google search for local guidelines was conducted, and two relevant documents were found, one of which was the resuscitation policy of the Yorkshire Ambulance Service NHS Trust. This review will consider local pre-hospital clinical guidelines, and the evidence cited within them, which underpin the resuscitative management of PEA cardiac arrest. Local clinical guidelines are of interest as there is a paucity of national clinical guidance or consensus surrounding the topic area. This review will consider local clinical guidelines from the 14 ambulance services in the UK. The geographical location will capture local-level guidelines, which will contribute to broadening the evidence base and informing future steps towards a national perspective. This review is concerned with local clinical guidelines and the evidence cited in support of them. Only UK clinical guidelines will be considered, as the emergency medical systems in other countries differ, ranging from physician-led to community-led resources, and are therefore not comparable to UK-based practice. Local clinical guidelines published from 2015 will be considered. These time parameters reflect the most recent published guidelines from the UK Resuscitation Council and the Joint Royal Colleges Ambulance Liaison Committee. If a guideline is found to precede 2015 or is not available via the ambulance service website, a written request will be made to the ambulance service to ensure the most up-to-date guideline is included. Clinical guidelines that consist of qualitative and quantitative evidence will be considered for synthesis, as complex health interventions often draw on both methodologies. The search strategy will focus on local clinical guidelines and the evidence cited in support of the guideline. It is possible that local guidelines may draw upon a range of sources, including expert opinion, national guidelines and published research. Therefore, reference searches will encompass text, publications and research studies. 
The search will be conducted in three stages. The first stage will focus on the 14 UK ambulance services: East Midlands Ambulance Service NHS Trust; East of England Ambulance Service NHS Trust; Isle of Wight NHS Trust; London Ambulance Service NHS Trust; North East Ambulance Service NHS Trust; North West Ambulance Service NHS Trust; South Central Ambulance Service NHS Foundation Trust; South East Coast Ambulance Service NHS Foundation Trust; South Western Ambulance Service NHS Foundation Trust; West Midlands Ambulance Service NHS Trust; Yorkshire Ambulance Service NHS Trust; Northern Ireland Ambulance Service; Scottish Ambulance Service; and the Welsh Ambulance Service. Ambulance service websites will be searched. Where local guidelines are unavailable or not found, a written request for the guideline will be made to the National Ambulance Research Steering Group and the National Ambulance Lead Paramedic Group. The use of unpublished literature has caused concern due to the uncontrollable amount and lack of quality assessment. The second stage aims to identify the evidence listed in support of the local guideline. The third stage will examine the reference lists of documents found in the first and second stages of the search. This hand search will reduce publication bias by ensuring all documents found are included in the review. Retrieved documents will be managed in Mendeley (https://www.mendeley.com) and duplicates will be removed. The selected documents will be extracted and uploaded to the System for the Unified Management, Assessment and Review of Information (SUMARI) from JBI. The documents will be screened against the eligibility criteria. To increase the credibility of this review, each document will be screened by two independent reviewers. Disagreement will be resolved by discussion or by introducing a third reviewer. All documents will be subjected to critical appraisal, with a focus on genuine opinion, driving motivation, the location of the documents, and related conflicts of interest, to ensure transparency. The JBI SUMARI extraction tool will be used to transfer the main conclusions found within the text. For the local guidelines, the data extraction table headings will include: Ambulance Trust (context); year of publication; review date; type of text; population presented; the panel who developed the guideline; the sources cited to inform the guideline; conclusions relevant to the objectives of this review; and reviewer notes. For evidence cited in support of the local guidelines, the data extraction table headings will include: type of text; those represented; setting; geographical content; cultural content; logic of argument; conclusion; and reviewer comments. The table format aims to reduce error and to record an accurate account of categorisation and synthesis. Each document will be carefully read to identify accurate statements which encompass the main conclusions. The data will be extracted consistently and standardised by using the extraction tool to meet the main outcomes of this review: identify when to continue or cease resuscitation; identify any other related guidance; identify the local strategies for terminating resuscitation for cardiac arrest patients with PEA for each ambulance service; and identify the evidence cited in support of such treatments. The quality of evidence will be assessed using the Joanna Briggs Institute Critical Appraisal Checklist for Text and Opinion. The extracted data will be gathered and synthesised using JBI SUMARI. The summary of findings table, which contains the main elements of the review, will be graded using the ConQual approach. 
ConQual aims to establish the quality and confidence of synthesis for qualitative reviews. A ConQual score will be provided for each synthesised finding and from the type of research which informs it .The reviewer would like to thank their supervisors for this collaboration and acknowledge the National Institute of Health Research and University of Plymouth for providing the opportunity and funding to complete the JBI systematic review module.AC is the main author of the protocol. RE and SB have made substantial contributions to the protocol search strategy, conducted a critical analysis of the protocol and drafted and approved the final version for submission to the Joanna Briggs Institute and journal publication. SJ proofread, amended and made substantial contributions to the protocol background. RE acts as the guarantor for this article.None declared.Not required.The JBI systematic review training module was undertaken as part of the funded National Institute of Health Research integrated clinical academic award.PROSPERO 2019 (CRD42019138731)."} +{"text": "The extraordinary diversity, variability, and complexity of cell types in the vertebrate brain is overwhelming and far exceeds that of any other organ. This complexity is the result of multiple cell divisions and intricate gene regulation and cell movements that take place during embryonic development. Understanding the cellular and molecular mechanisms underlying these complicated developmental processes requires the ability to obtain a complete registry of interconnected events often taking place far apart from each other. To assist with this challenging task, developmental neuroscientists take advantage of a broad set of methods and technologies, often adopted from other fields of research. Here, we review some of the methods developed in recent years whose use has rapidly spread for application in the field of developmental neuroscience. We also provide several considerations regarding the promise that these techniques hold for the near future and share some ideas on how existing methods from other research fields could help with the analysis of how neural circuits emerge. Today, neuroscientists of all stripes, including those working on the development of the nervous system, are taking advantage of the breadth of new methods and technologies that Don Santiago could have only dreamt about. These methods accelerate our capacity to collect and analyse biological information in large and complex specimens. For instance, we can now reconstruct in three dimensions (3D) the complete peripheral nervous system of a cleared lizard embryo2 or obtain a transcriptomic map of gene expression at the single cell (or nucleus) resolution from almost any tissue and species, including humans. Science and technology have been interconnected always, and advances in one historically translate into important progress in the other. Today, the number of available advanced techniques can be overwhelming. In this review, we discuss some recently developed techniques that are currently becoming common in laboratories studying neural development.Long gone are the times when budding neuroscientists would picture themselves working like Don Santiago Ram\u00f3n y Cajal using only a simple microscope and a shelf full of chemical reagents3, these techniques were very slow and do not scale well to larger embryos or postnatal tissues. This situation began to change with the appearance of light sheet fluorescence microscopy (LSFM). 
The main advantage of LSFM is the high speed of acquisition and the ability to image large sample sizes that were unpractical to image with conventional microscopes. LSFM was initially used in the field of colloidal chemistry4, and about 30 years ago it was adapted to biology to visualise guinea-pig cochleas in 3D5. LSFM combines the speed of wide-field imaging with optical sectioning and low photobleaching. In conventional fluorescence microscopy, the entire thickness of the sample is illuminated in the same direction as the detection optics, and, as such, the regions outside the detection focal plane of the objective are potentially damaged by extraneous out-of-focus light that increases the photobleaching. In contrast, in LSFM, the sample is illuminated from the side, perpendicular to the direction of observation, thereby placing the excitation light only where it is required. Therefore, this technique enables the visualisation of tissue samples by shining a sheet of light through the specimen, generating a series of images that can then be digitally reconstructed thanks to the development of sophisticated algorithms and huge improvement in the capacity of computers to store and analyse data6. In developmental biology, LSFM was used for the first time to visualise the transparent tissues of zebrafish and Drosophila embryos in 3D in vivo. About 8 years ago, Tomer and colleagues were able to visualise the development of the Drosophila ventral nerve cord for the first time7, and Ahrens and co-authors measured the activity of single neurons in the brain of larval zebrafish embryos8 using in vivo light sheet microscopy.Our capacity to document the 3D organisation of the embryonic brain to understand the basic mechanisms underlying circuit formation has been limited until very recently. Studies on the development of the nervous system of most vertebrates have traditionally relied on histological sectioning methods or open book preparations that enable the visualisation of two-dimensional organisation of the axon tracts in the samples under epifluorescence or confocal microscopes. These approaches, which are based on the observation of selected slices or planes of observation (a process that can inherently introduce biases), provide only partial information about the sample. Even though 3D imaging of small embryos had been performed for many years using wide-field and confocal microscopy11, it is worth mentioning the variants of the CUBIC and DISCO series because their excellent results and easy performance ultimately exalted them as the most widespread methods for brain clearing (see https://idisco.info and http://cubic.riken.jp).However, what eventually enabled LSFM to be used for the analysis of the nervous system was the remarkable improvement in brain clearing techniques. The rapid optimisation of clearing protocols has expanded the application of LSFM in the field of developmental neuroscience in the last 4 to 5 years. Since then, a myriad of different approaches to perform tissue clearing have been developed; these approaches vary based on the type of chemical reagents used and depend on the size of the samples. Although exhaustive reviews about the diversity of clearing protocols have been published12. In axon guidance studies, this combination approach is proving to be extremely useful for visualising neuronal axons growing across the whole embryo and for detecting pathfinding defects in mutants of different members of the main families of axon guidance molecules13. 
It has been possible to visualise for the first time the development of the peripheral nervous system and the innervation patterns of human embryos14. Now, the power of combining these approaches with axonal tracings15 or antibody staining after functional manipulations holds the promise of interesting times ahead . In fact, combinations of transcriptomics plus epigenomics or transcriptomics plus proteomics in single cell analyses are rapidly emerging27. Here, we focus our attention on one of the most frequently used modalities in neural development, the transcriptomic characterisation of single cells.The complexity and diversity of cell types is one of the most remarkable characteristics of the mature nervous system. Corticospinal neurons that connect the brain with the spinal cord, sensory neurons that detect and conduct touch information from the skin surrounding our bodies to the central nervous system, or glial cells that modulate neural activity are different examples of the richness and huge variability in cell types that make up the nervous system. Cell specification occurs during development and, until recently, researchers had very limited ways to quantify cell diversity. For many years, the quantification of diversity was constrained by the use of pooling approaches and techniques that require either harvesting cells from the same tissue or combining cells from different individuals to obtain enough material for downstream analysis. For example, in bulk RNA sequencing (RNAseq) approaches, the transcriptomic expression level of a particular gene is not measured from an individual cell but rather as the average level of expression of that gene over many cells present within the same sample. The revolution in the molecular analysis of individual neural progenitors started when the ability to sequence DNA or RNA at the single-cell level became possible. In 2013, the journal 28, and the myriad of protocols that have been developed since then have quickly transformed several research fields and the way developmental studies are performed. The key step in scRNAseq protocols consists of tagging all transcripts inside each cell in such a way that RNA molecules coming from the same cell are easily identifiable and quantifiable29. scRNAseq enables transcriptomic cell types in the sampled tissue to be defined through the analysis of differentially expressed genes in each cell.The first protocol to perform single cell RNAseq (scRNAseq) was published in 200930 as well as the prefrontal cortex of human embryos31 or the temporal changes in the transcriptional landscape of apical progenitors and their successive cohorts of daughter neurons in the cortex32. In general, developing tissues are characterised by the presence of a mix of cells in different stages of differentiation . These stages are captured at the time of scRNAseq processing, thus resulting in a continuous representation of cellular states transitioning from one to another. These transitional stages may be modelled computationally by recapitulating the probable trajectory of the cells through a representation called pseudotime33, which defines the order of the cells through development. This representation therefore enables mapping of particular cell types to different states of the developmental trajectory34.Nowadays, commercialisation of droplet-based sequencing, for example the 10x Genomic Chromium platform, has enabled the widespread use of scRNAseq. 
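To make the preceding description concrete, the short sketch below shows the kind of analysis that is typically run on a droplet-based scRNAseq count matrix to define transcriptomic cell types from differentially expressed genes. It uses the open-source Scanpy toolkit purely as an illustration (the review does not prescribe any particular software), and the input path, filtering thresholds and clustering resolution are hypothetical placeholders that would need tuning for a real developmental dataset.

```python
# Illustrative sketch only: clustering a 10x-style scRNAseq count matrix with
# Scanpy to obtain candidate transcriptomic cell types. Paths and parameter
# values are placeholders, not settings taken from any study cited here.
import scanpy as sc

adata = sc.read_10x_mtx("filtered_feature_bc_matrix/")   # hypothetical path

# Basic quality control: drop near-empty barcodes and rarely detected genes
sc.pp.filter_cells(adata, min_genes=200)
sc.pp.filter_genes(adata, min_cells=3)

# Normalise library size and log-transform so cells become comparable
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)

# Restrict to highly variable genes and reduce dimensionality
sc.pp.highly_variable_genes(adata, n_top_genes=2000)
adata = adata[:, adata.var.highly_variable].copy()
sc.pp.pca(adata, n_comps=50)

# Neighbourhood graph, graph-based clustering and a 2D embedding
sc.pp.neighbors(adata, n_neighbors=15)
sc.tl.leiden(adata, resolution=1.0)      # clusters = candidate cell types
sc.tl.umap(adata)

# Differentially expressed genes per cluster, used to annotate cell types
sc.tl.rank_genes_groups(adata, groupby="leiden", method="wilcoxon")
sc.pl.umap(adata, color="leiden")
```

The same annotated object can then be handed to trajectory-inference tools (for example Scanpy's diffusion pseudotime) to order transitional cellular states along a pseudotime axis, in the spirit of the pseudotime analyses described above.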
In the field of developmental neuroscience, scRNAseq has been used to profile the entire developing mouse brain and spinal cord39, with the latest iteration of the high-definition spatial transcriptomics (HDST) method40 reaching a spatial resolution of 2 \u00b5m , thereby yielding cell types that are more representative of the original tissue and less affected by transcriptional artefacts. A potential disadvantage of using nuclei is that lower quantities of messenger RNA are obtained from the nuclei than from whole cells, and consequently fewer genes are typically detected. Nonetheless, recent studies have shown that single nuclei and single cell approaches identify similar cell typesThe heading of this section might sound as if it were taken from a sci-fi movie. However, combining the computational power of modern processors and graphics processing units (GPUs) is exactly what high-throughput methodologies such as those described above require. These techniques generate huge quantities of data that need to be analysed. The simple generation of sequencing results or imaging data by itself does not provide new insights that can advance our understanding of how the nervous system develops. Intense developments in the field of machine learning have generated algorithms that may now be used to deconstruct the complexity of such data.9. Navigating your way through such an enormous amount of information to draw conclusions quickly becomes a dead-end, both in time and in computing requirements. Working with such vast quantities of bytes imposes a heavy processing burden on a lab\u2019s computing capability but also makes analysing such datasets a time-consuming task. To solve this problem, the development of software capable of handling huge quantities of data becomes an urgent requirement; it becomes just as important as the need for the hardware that generates the dataset itself.The imaging of a mouse brain by high-resolution LSFM generates between 20 gigabytes and up to several terabytes of data depending on the resolutionCell Profiler from the Broad Institute, which can easily segment nuclei in dense tissue images, and its more powerful sibling, Cell Profiler Analyst, which makes use of machine learning algorithms to recognise defined cell types from large imaging datasets44. Cell Profiler has been used, for example, to quantify the differences in neuronal numbers between the sulci and gyri of the cortex of Flrt3 mutant mouse embryos45 and to help elucidate the role of PTPRD in neurogenesis46. More recently, the Pachitariu lab47 released a complementary approach for cell segmentation called Cellpose. This is a generalist algorithm for cellular segmentation and is based on the use of a neural network that is trained on thousands of images from different microscope modalities combined with non-biological images of similar structure. The system creates a platform capable of recognising cells from a wide array of image types. It also enables the generation of researcher-defined custom models by training the algorithm on specific types of images.Paradoxically, even though a researcher spends just a few minutes to image a whole mouse embryo in 3D using LSFM, the quantification of such datasets often relies on tools and systems that require manual and time-consuming annotations. Fortunately, in the last few years, an unparalleled development of informatics tools has begun to help researchers quickly and accurately analyse big datasets in a short period of time. 
Some examples of these programs are 57. Beyond solving imaging tasks, machine learning approaches may be used in many other applications within the field of neurodevelopment. The quantity of data generated during sequencing experiments such as scRNAseq face the same challenges as those derived from large imaging experiments. Newer and more refined technologies yielding an ever-increasing number of sequenced cells quickly translate not only into larger datasets but also into a higher number of dimensions that need to be non-linearly reduced to define particular cell types. Several packages that allow the processing of sequencing data and perform efficient dimensionality reduction or help to identify defined cell types of interest within the datasets have been released59. While processing of image and sequencing data are both examples from the blooming field of computational biology that are useful for studying neural development, many more developments and applications are predicted to emerge60.The use of machine learning, particularly neural networks trained to recognise structures of interest such as nuclei, cells, blood vessels, noise, etc. in images, has exploded in the last few years, and it is quickly becoming the go-to solution for many biomedical research problems61. Community crowd-sharing of resources such as those mentioned, antibody-related optimisations, and tested protocol modifications and reagents will form the basis for advancing current and future protocols, likely to the point that many antibodies will work for 3D immunostaining applications. Concurrently with advancements in staining methods, parallel development of LSFM will likely enhance the imaging resolution of transparent samples while also reducing the time required for acquisition63.Although transformational technological revolutions are constantly occurring in science, the advances that have been made in the last few years have been spectacular. Here we have highlighted what, in our opinion, are very relevant and novel approaches for investigating the developmental processes that control the formation of neural circuits. It is our belief that we will experience amazing changes in the years to come that will dwarf what we know today. We envision that tissue clearing technologies will evolve into fully applicable methods that will no longer be limited by antibody compatibility. The recent publication of CUBIC-HistoVision points in that direction, as it describes a systematic interrogation of the properties and conditions that preserve antigens and facilitate antibody penetration into fixed animal tissues64. The development of effective and high-throughput approaches in single cell proteomics will aid the quest to fully characterize cells, their functional and developmental states, and the mechanisms involved in transitioning from one state to another. Matched single cell genetic, transcriptomic, and proteomic data will help to elucidate the mechanisms behind the formation of a fully developed nervous system.The most important missing piece for single cell approaches is the development of high-throughput proteomics to individually measure protein content in each cell with enough depth to cover the whole proteome. Beyond basic estimation per cell, single cell DNA or RNA technologies are incapable of measuring the abundance and activity of proteins, which are regulated by both post-translational modifications and degradation. 
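Returning to the segmentation tools named earlier in this passage, the fragment below sketches how a pretrained generalist model such as Cellpose is commonly called from a short Python script. It is illustrative only: the image file is a placeholder, the exact class and argument names differ between Cellpose releases, and nothing here is taken from the studies cited in this review.

```python
# Illustrative sketch: counting nuclei in a single 2D image with a pretrained
# Cellpose model. The call follows the classic (v1/v2-style) API; newer
# releases rename some classes, so treat the details as indicative only.
from skimage import io
from cellpose import models

img = io.imread("embryo_section.tif")            # hypothetical input image

model = models.Cellpose(model_type="nuclei")     # pretrained generalist model
masks, flows, styles, diams = model.eval(
    img,
    diameter=None,       # let Cellpose estimate the typical object size
    channels=[0, 0],     # single-channel (grayscale) input
)

# The mask image labels each segmented object 1..N (0 is background)
print(f"Detected {int(masks.max())} nuclei")
```

Researcher-defined custom models, mentioned above, are produced by retraining such a network on a small set of manually annotated images of the tissue of interest.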
Although single cell proteomic approaches are already available, most of them currently rely on antibodies to detect the proteins of interest; this imposes an important throughput limitation. Methods to quantify thousands of proteins in hundreds of cells through the use of mass spectrometry (MS) are emerging, and improvements in MS are expected to increase the sensitivity of single cell proteomics65, such as those using neural networks trained to identify different types of tumours based on their location and composition in cleared whole mouse bodies66, highlight the possibilities of gathering current computing power so that it can be applied to other fields such as neural development. Similar applications of machine learning algorithms could aid in the recognition of changing mRNA/protein expression patterns in brain development. Labour-intensive tasks commonly used to study the developing nervous system could also greatly benefit from the implementation of tools developed in other neuroscience-related areas. For example, the automated identification and tracking of migrating neurons should be easily adopted following the lead of algorithms like DeepLabCut that behavioural neuroscience labs are using to track the position of different parts of the mouse body without the use of markers67. ClearMap is another algorithm that maps cells automatically in the mouse brain of LSFM datasets, which could be applied to neonate brains24. Another very promising avenue is the algorithm Trailmap, which was recently developed in the Luo lab to automatically identify and extract axonal projections in 3D image volumes68 and may be easily implemented to improve the quantification of axon guidance studies. Adoption of such neural networks will probably require re-training and optimisation to the specific use-case scenario and dataset, which highlights the need for fast-training computational strategies in order to facilitate the broader use of these techniques.The application of machine learning in the field of developmental neuroscience is still in its infancy but will likely explode in the near future. Examples stemming from cancer researchin situ at the same time. Such datasets would contain information about what are considered the main determinants of cell identity while maintaining the intact structure, shape, and form of the tissue. This \u201cfantasy technical improvement\u201d could be pictured even one step further by introducing the fourth dimension into account and analyse datasets of embryos at different stages of development to provide the most detailed description of development progression to date. However, writing a \u201cwhat will the future look like\u201d piece is bound to fail. History has demonstrated that both the imagination and the driving force of scientists are many orders of magnitude beyond what can be anticipated. As such, while this review will likely become obsolete shortly, it will be a good sign that developmental neuroscience maintains its exponential progression in advancing our understanding of the assembly of neural circuits.Therefore, despite the impressive amount of state-of-the-art technologies developed in the last few years, there is still room for improvement of some of the latest methods available to study the developing nervous system. 
We could envision a not-so-distant day when 3D embryonic brain imaging will be combined with single-cell technologies to elucidate the chromatin, mRNA, and protein signatures of each cell"} +{"text": "In the article titled \u201cAssessment of the Levels of Level of Biomarkers of Bone Matrix Glycoproteins and Inflammatory Cytokines from Saudi Parkinson Patients\u201d , multiplThese should supplement the following sentences in the article text:Osteopontin (OPN) was revealed to be elaborate in inflammatory and degenerative mechanisms of the neurons . OPN plaThe error was introduced during the production process of the article, and Hindawi apologises for causing this error in the article."} +{"text": "Disparities in diabetes care are prevalent, with significant inequalities observed in access to, and outcomes of, healthcare. A population health approach offers a solution to improve the quality of care for all with systematic ways of assessing whole population requirements and treating and monitoring sub-groups in need of additional attention.Collaborative working between primary, secondary and community care was introduced in seven primary care practices in one locality in England, UK, caring for 3560 patients with diabetes and sharing the same community and secondary specialist diabetes care providers. Three elements of the intervention included 1) clinical audit, 2) risk stratification, and 3) the multi-disciplinary virtual clinics in the community.This paper evaluates the acceptability, feasibility and short-term impact on primary care of implementing a population approach intervention using direct observations of the clinics and surveys of participating clinicians.Eighteen virtual clinics across seven teams took place over six months between March and July 2017 with organisation, resources, policies, education and approximately 150 individuals discussed. The feedback from primary care was positive with growing knowledge and confidence managing people with complex diabetes in primary care.Taking a population health approach helped to identify groups of people in need of additional diabetes care and deliver a collaborative health intervention across traditional organisational boundaries. Diabetes is a serious condition that can result in significant morbidity and mortality 23. The c2Diabetes care in the UK compares reasonably well in comparison with other European nations and beyond . However23This paper discusses a project designed to address these specific issues. Firstly, it reflects on the split between primary and specialist diabetes care and the impact this has on the quality of diabetes care in populations. Secondly, it proposes a population health approach as a way of improving organisation of diabetes care through specific interventions integrating primary and specialist care. Thirdly, it evaluates the acceptability, feasibility and short-term impact on primary care of implementing a population approach intervention.In England, care of people with diabetes has been long divided between primary and specialist care. Finding a balance between routine management and specialist input has been an issue since the 1960, up until which time all diabetes care was provided by the specialist services. The growing number of people with T2D prompted the relocation of those suitable for routine management from hospital to primary care 67. The d68912The challenge of developing an optimal model for people with diabetes has been ongoing. 
In response to the growing prevalence of diabetes and moving care closer to home to support continuity of care, primary care has been tasked with providing both routine and more complex diabetes care. There is clearly a risk of adverse outcomes for people with diabetes if transfer of responsibility to general practices happens without adequate support 1516 and 15171820A population health approach has the potential to improve the quality of care of individuals by introducing solutions targeting groups and sub-groups at risk of developing complications from diabetes 232425. T2324A good indication of general health disparities can be provided by clinical audits which systematically review the structure, processes and outcomes of care against explicit criteria . There a262728While a clinical audit indicates the effectiveness of diabetes healthcare on a national, regional, or service level, it does not enable context informed patient level interventions. The latter is provided by a risk stratification process, which identifies those who do not have their diabetes care processes (key diabetes monitoring measurements) done or treatment targets achieved and classify them as being at high, medium or low risk of poor outcomes. 3132.31multidisciplinary virtual clinics in the community are one of the options for joint working. The model is for primary care staff to be supported by specialists through virtual clinics to manage the care of patients with T1D who do not attend the outpatient clinics and of patients with T2D who do not achieve expected treatment outcomes when receiving usual care. \u2018Virtual clinic\u2019 refers to face-to-face case conferences between health professionals from primary care and specialists to discuss an individual\u2019s care without their being present [It has been increasingly recognised that a good diabetes care pathway addresses the needs of the local service and is underpinned by a multidisciplinary team working between generalists and specialists, across professional groups, and with specialists reaching into the community . The mul present as oppos present 35, or \u2018i present .In diabetes research, joint working is underrepresented 37. The v4433423338394041434647484950515253The project started in 2014 with the aim of improving diabetes care by developing a fully integrated diabetes service across primary and specialist care in Oxfordshire. The objectives were to find a sustainable solution to the growing demand for diabetes care, provide more person-centred care, improve the outcomes of people with diabetes, and reduce the variation in quality and outcomes of diabetes care across the region while at the same time improving its efficiency. The population health approach was chosen as the ideal way of addressing the unmet needs of the studied population.In the studied diabetes service, the main route to specialist advice was via the referral system with patients being moved to secondary or community care and discharged back to primary care for routine management. The recognition of the need for bringing diabetes expertise into primary care has been ongoing with previous interventions including specialist outpatient clinics in primary care and primary care health care professional (HCP) education delivered by the local specialists. This was helping some patients but not all. A move towards more inclusive diabetes care to reach those who were not benefiting from the referral system was needed. 
The population health approach offered tools to help identify those who were slipping through the net.One locality, a locality being a local group of practices led by a GP, piloted the population health approach and the interventions: 1) the diabetes audit , 2) risk stratification \u2013 the screening programme to identify patients at risk of developing complications from diabetes, 3) the virtual clinics to plan action in response to issues identified using the audit data and patients\u2019 lists. In the North East Locality, in the time of testing the intervention , there were 365 people with Type 1 diabetes and 3,195 people with Type 2 and other types of diabetes . The outThe programme adopted the principles of quality improvement in diabetes care aiming to make care a) safe , b) effective (bringing diabetes expertise to primary care), c) patient-centred, d) timely, e) efficient, and f) equitable (available to everybody) 57.The programme was a) theory driven , b) process oriented (focus on how change happens), c) participatory , d) multidisciplinary and multi-method , and e) meticulously detailed (detailed descriptions of context and processes) .The process of developing the virtual clinics was guided by the principles of co-design in service improvement and continual improvement process. The specialists in diabetes began regular engagement with one surgery in early 2016. Initially the meetings were case discussions with cases identified by primary care healthcare professionals, this was later complemented with the discussions of cases identified in systematic searches of the patients\u2019 list. The systematic searches were refined within a few months. The clinics also included discussions of the surgery\u2019s performance in the National Diabetes Audit Core Audit (NDA). Simultaneously, the interested surgeries across the locality were visited to build engagement and collect their feedback on proposed interventions.This was followed by a survey (August 2016) seeking comments on the individual practices participation in the NDA, factors contributing to the results achieved, and the best way forward; this information was important to understand the primary care perceptions of barriers and facilitators to good quality diabetes care and the contextual factors impacting on the outcomes. Ten participants (eight GPs and two practice managers) completed the survey representing all seven practices in the locality. All participants supported the idea of closer working between generalists and specialists with an easy access to specialist advice and more presence of specialists in the community. The findings from the survey were used to prepare an action plan informed with respondents\u2019 views to feed into the strategy for integrated diabetes care in the pilot locality. The practice managers were approached to schedule the clinics.The use of the National Diabetes Audit was intended as an indicator of the quality of diabetes care in the pilot locality and the individual practices and to help target resources whether at individual practice level (access to practice nurses with expertise) or across locality (access to conveniently timed patient education). The shortcomings of using the audit data were identified by the primary care staff, namely the lack of local indicators of quality of care and a delay in reporting outcomes. This feedback triggered development of a monthly diabetes dashboard which included local indicators of quality of care as well as national indicators. 
This enabled almost real-time population data monitoring. The suggestion from primary care was to discuss the results in the context of the practice resources and its population. The shortcomings are the same as with any clinical audit – the data depend on the quality of coding and the transparency of the analysis – and these were discussed at the meetings to build trust and mutual understanding of the audit. Patient lists were screened using criteria agreed by the clinical team to identify adult patients at risk of developing complications from diabetes: (1) patients with HbA1c greater than 9% (75 mmol/mol); (2) patients with HbA1c lower than 6.5% (48 mmol/mol) on insulin or a sulphonylurea; and (3) patients with diabetes and eGFR <30 ml/min (a minimal illustrative search based on these criteria is sketched below, after the evaluation details). The lists of patients meeting the above criteria were prepared in readiness for the virtual clinics. The clinics were organised around three main items on the agenda: (1) the results of the audit; (2) the care of patients identified in the predefined searches (risk stratification); and (3) the care of other complex patients as requested by the primary healthcare professionals. An integral part of joint working between the clinics was case management, with the discussion of a patient's care and decisions made at the virtual clinic, followed by a review of the care plan with the patient, and then a follow-up at the next virtual clinic. The GPs and practice nurses also had access to the consultant-led email advice line and telephone line between the virtual clinics. The aim of this work was to validate the population health approach and the interventions proposed. A mixed-methods evaluation, including observations of the clinics and a survey of those involved in the delivery of the virtual clinics, was conducted with the aim of assessing the acceptability and feasibility of the interventions and the initial impact of the intervention on primary diabetes care. Acceptability was defined as the willingness of primary care healthcare professionals and practice managers to engage with all three interventions. Feasibility was defined as the ability and capacity to deliver them. The impact was defined as any changes in knowledge and confidence in managing diabetes care in primary care, changes in patterns of communicating with and referring to specialist diabetes services, and any changes introduced to diabetes care in primary care due to the interactions with the specialists. The clinics were observed by a researcher with doctoral training in qualitative methodologies, who assessed whether the interventions were provided in accordance with the protocol and delivered consistently, whether the participants adhered to the protocol, and what modifications were made, and who explored participants' views of and satisfaction with the intervention. In addition, information about each case discussion was recorded to identify how many patients were discussed per clinic, what aspects of their care were discussed, who contributed, what aspects of care were changed, how the decision was made, and whether tasks were assigned. After the first clinic, the following was also recorded: whether the actions were communicated to all interested parties, whether the actions were completed, and what the barriers to completing the actions were. Recruitment: All practice managers were approached with a request to complete the survey and to send the invitation to the GPs and practice nurses in their practices who participated in the clinics. 
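As noted above, the following fragment sketches the kind of registry search behind the risk-stratification lists. It assumes, purely for illustration, that an extract of the practice diabetes register is available as a CSV with the column names shown; in the pilot the equivalent searches were run directly on the practices' patient lists. The thresholds follow the criteria listed earlier (the NGSP-to-IFCC conversion HbA1c[mmol/mol] ≈ 10.929 × (HbA1c[%] − 2.15) is what makes 9% ≈ 75 mmol/mol and 6.5% ≈ 48 mmol/mol).

```python
# Illustrative only: flagging adults at risk of diabetes complications using
# the three agreed criteria. File name and column names are hypothetical;
# the real searches were run within the practices' own clinical IT systems.
import pandas as pd

reg = pd.read_csv("diabetes_register_extract.csv")
# assumed columns: patient_id, age, hba1c_mmol_mol, egfr_ml_min,
#                  on_insulin (bool), on_sulfonylurea (bool)

adults = reg[reg["age"] >= 18].copy()

high_hba1c  = adults["hba1c_mmol_mol"] > 75          # HbA1c > 9%
tight_on_rx = (adults["hba1c_mmol_mol"] < 48) & (
    adults["on_insulin"] | adults["on_sulfonylurea"]
)                                                     # hypoglycaemia risk
low_egfr    = adults["egfr_ml_min"] < 30              # renal risk

adults["flag_high_hba1c"] = high_hba1c
adults["flag_low_hba1c_on_rx"] = tight_on_rx
adults["flag_egfr_lt_30"] = low_egfr

at_risk = adults[high_hba1c | tight_on_rx | low_egfr]
at_risk.to_csv("virtual_clinic_worklist.csv", index=False)
print(f"{len(at_risk)} patients flagged for discussion at the virtual clinic")
```

A worklist of this kind maps directly onto the second agenda item of the virtual clinics (care of patients identified in the predefined searches).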
The clinical lead for specialist diabetes nurses was directly invited to participate and asked to invite specialist diabetes nurses participating in the pilot.Method: A survey consisted of a mix of Likert scales to determine the degree of perceived change to the GPs and primary care nurses\u2019 knowledge about diabetes management, confidence in managing patients with diabetes, and behaviours in clinical practice, and open-ended questions to collect further details of experiences and impact of participating in the clinics. Separate surveys with open-ended questions were conducted with the practice managers, specialist diabetes nurses, and consultants in diabetes to collect their feedback on the changes to the management of diabetes care in primary care and processes involved in the clinics. The survey was conducted over three weeks seven months after the pilot concluded (January 2018).Eighteen integrated virtual community diabetes clinics took place between March and July 2017 across seven practices in the North East Locality. Each practice held at least two clinics with one practice holding four. Each clinic was attended by at least one GP, one practice nurse, one specialist diabetes nurse, and one consultant in diabetes. Five clinics were attended by one or more mental health specialists.Approximately 150 patients were discussed, on average eight patients per 60 minute clinic, with the number depending on the complexity of cases, the level of preparation of the case by primary care, the availability of information from the IT system and speed of operating it, the experience of working together and any previous discussions of patients. The majority of patients had type 2 diabetes, 23 had type 1 diabetes, and 3 had a type yet to be confirmed.The interventions changing the course of diabetes care included:change of medicationintroducing new medicationintroducing new intervention, e.g. diet, lifestyle change, referral for consideration of bariatric surgerychange of medication dosereferring to diabetes specialist teams in secondary or community carereferring to allied specialities including dieticians and mental health specialistsreferring to patient structured educationagreeing on delivering specialist care in primary careagreeing on complex care pathway with primary care delivering initial intervention, and if not effective, scaling up to the specialist servicereassuring primary care healthcare professionals about appropriateness of interventions implementedThe feedback from primary care voiced during the virtual clinics was positive with the following noted:the clinics were seen as educational sessions with useful guidelines and adviceknowledge gained at the clinics was applicable and transferable to patient cases not discussed at the virtual clinicsthe clinics increased understanding of diabetes services available and the referral systemThirteen participants responded to the survey including 5 GPs (out of at least 7), 2 primary care nurses (out of at least 7), 4 practice managers (out of 7) and 2 diabetes specialist nurses (out of 3). Five out of 7 participating practices were represented.The self-reported knowledge of diabetes management and referral system gains were reported by all primary care healthcare professionals (GPs and primary care nurses) in all or some aspects of diabetes care following the clinics. The self-reported change in confidence in managing diabetes varied and was different depending on the type of diabetes. 
Table All primary care healthcare professionals (GPs and primary care nurses) reported an increase in their contacts with the diabetes consultants and majority with the specialist diabetes nurses (DSNs). The impact was noticed in terms of the number of referrals to different specialist services and Table It is unclear if the increase or decrease of referrals is a positive or negative change. The increased awareness of diabetes issues and number of patients identified as in need of further attention increased the referral rates as reported by one GP, while education provided at the clinics reduced referrals to the DSNs as observed by another GP. The DSNs confirmed that the frequency of seeking their advice increased as well as requests for face to face assessment; though the increase was seen as a positive change, there were concerns that the referrals were not always appropriate.Across the practices, the primary care staff reported changes on the practice level that have been introduced following the clinics:adopting the Year of Care approach promoted at the clinicscontinuing with risk stratification and searching the patient lists for patients at riskfocusing more on blood pressure in patients setting up a new alert for those who did not have microalbuminuria checked in last yearchecking more often whether a low HbA1c could mean regular hypospicking up the issues of unrecognised hypoglycaemia in patients on insulin and gliclazidebringing people with high HbS1c with diabetes in more frequentlyaltering insulin dosesactively tracking and approaching disengaged patientsThere was a shared feeling that the pilot should be extended with further work to refine it. The feedback coming from all participants was positive and the face-to-face format of the meetings was appreciated by both primary care HCPs and DSNs. In overall, all primary care HCPs reported following the decisions made at the clinics but some experienced problems in implementing them due to poor patients\u2019 engagement or internal administration issues. As the practice managers emphasised, the clinics increased focus on diabetes in the practices. Together with other training in diabetes provided by the consultants and DSNs the interventions complemented each other.The key outputs of the pilot, identified from the observations, voiced feedback, and survey, included changes in the processes related to management of diabetes in primary care; identification of the gaps in knowledge of diabetes and its management among GPs and primary care nurses; new ways of working between GPs, primary care nurses and diabetes specialists; and raising awareness of diabetes research. In particular, in each of the areas, the outputs included:management of diabetes in primary careplans to continue with the virtual clinics beyond the pilot; all practices expressed a willingness to do soidentifying groups of patients in need of intervention but previously not perceived as such by primary careproviding primary care practices with a tool (searches) to systematically screen their population and identify patients in need for interventionproducing a protocol for the virtual clinics to be used across Oxfordshiredesigning a new format of the outpatient letter to primary care including relevant information (e.g. 
care processes delivered) presented in a systematic way
improving recording of information in primary care for the National Diabetes Audit by exchanging information during the virtual clinics (patient structured education)
education of primary care healthcare professionals
challenging diabetes treatments and management not aligned with national or local guidelines
raising awareness of a range of diabetes treatments and interventions available within primary care
changing a narrative about people with high HbA1c \u2013 shifting of blame in poorly controlled diabetes from patients to the complexity of the condition with its physical and mental health aspects, unresponsiveness of the health services and gaps in the service
developing a community of well-linked practitioners
improving the linkage between services with new referrals being made at the virtual clinics to secondary care, community care, patient structured education, mental health specialists, and community type 1 diabetes clinics
planning together the location and level of care in consideration of patients\u2019 needs and not primary and secondary care boundaries
research
increasing the number of people recruited into clinical trials
The pilot confirmed the acceptability and feasibility of the population health approach and interventions in primary care in an environment with limited previous experience of joint working. Key gaps in knowledge and confidence in managing patients with diabetes and knowledge of referral practices were identified and addressed through the virtual clinics. The clinics provided a space to explore and address the benefits and problems of joint working.
One of the key benefits of the project was the development of a monthly local clinical audit which extracted NDA data from GP surgeries and was able to show improvements in a short period of time. Essential to the success of this audit tool was a learning, not blaming, atmosphere. The screening of their population in search of those at risk of poor outcomes enabled the practices to look at individual patients as part of bigger groups with shared problems and to plan for care sharing primary care and specialist resources. The study highlights the importance of focusing on the different processes involved in changing healthcare across a number of different organisations, and acknowledging the multicomponent and multifactorial nature of such interventions. The interventions tested in the pilot continue to be used in the piloted locality and across the county. The context has changed as the pilot was replaced with a voluntary paid service with practices hosting specialists twice a year. The service has been sustained through the NHS England Diabetes Transformation Funding programme and has been shown to improve not only healthcare professional confidence but also the key care processes delivered to patients. It is currently being refined along the lines of the new Primary Care Networks and is now a part of the Locally Commissioned Service for Diabetes within Oxfordshire. Further work is required to refine the processes involved in the virtual clinics, including investigating video interactions, a process which has been accelerated by the Covid-19 outbreak, and ongoing review of high-risk patients within the GP surgery in between the virtual clinics. 
A broader impact assessment on the wider health economy, including admissions as well as morbidity arising from diabetes, is required now that the pilot has been rolled out across the county.
This work would benefit from a more complex evaluation of the intervention, without relying on self-reports only when assessing changes in knowledge and confidence of healthcare professionals. It would also benefit from more insights from the participating HCPs on why some of the interventions did not meet their needs and what further improvements are needed. Unfortunately, the reasons for it not happening (e.g. no increase in knowledge or confidence in managing diabetes) for some were not explored further. It is too early to assess the impact of changes in referral patterns following HCP education on the services available for people with diabetes.
The implementation of virtual clinics successfully piloted a population health approach in diabetes care, focusing on population screening, risk stratification and assessment (identifying patients at high risk of complications), reviewing patient cases (identifying solutions that are applicable across the population), and improving individual patients\u2019 care as well as care at practice and population level. The multidisciplinary virtual clinics in the community enabled the service to:
discuss the outcomes of audit, taking into consideration the characteristics of the population, and plan for improvement,
proactively identify groups of patients at risk of complications from diabetes, and
plan their care together.
Continuing diabetes education in primary care focused on building expertise and skills rather than the dissemination of guidelines. Unnecessary referrals were avoided by encouraging shared decision making and shared responsibility for treatment changes. The intervention had an educational value, improved confidence in primary care in managing diabetes, and improved communication between primary care and specialists. It made the diabetes care for the whole population more cohesive in the piloted area.
This project developed and delivered a geographically based intervention to improve care for people with diabetes, starting from a baseline of a traditional model of care delivery. Factors which enabled successful adaptation and delivery of a high-quality service included the establishment of working relationships and trust developed from engaging with each practice individually to discuss their specific circumstances, allowing for flexibility in the organisation of virtual clinics, working in primary care settings, appreciating primary care familiarity with patients and specialist subject area expertise, and collecting. All of this was underpinned by ongoing reporting of data and acting on ongoing feedback from primary care.
Local engagement is essential in the development of the model
Regularly updated, trusted data are essential to show improvement and understand what changes are effective
A culture of learning and support is essential rather than blame and performance management
Face-to-face engagement between GPs, primary care nurses, and specialists outside of the hospital and in the community can transform relationships and break down barriers to joint working"} +{"text": "Underground water-bearing structures and the ecosystem are being contaminated through seepage of the plumes emanating from the mixtures of the industrial waste materials (IWM), made of moist cemented soil with municipal solid wastes (MSW), dumped at the site. 
The distribution of the contaminant hazardous plumes emanating from the waste materials' mixtures within the subsurface structural lithological layers was clearly mapped and delineated within the near-surface structures, using the triplicate technique to collect samples of the soil\u00a0with the waste mixtures, and the water analysis for the presence of dissolved ions. The deployed method helped to monitor the seepage of the contaminant leachate plumes to the groundwater aquifer units via the ground surface, through the subsurface stratum lithological layers, and hence, the waste materials' volume could be approximated to be 312,000 m3.
\u2022The novel method is transferable, reproducible, and most importantly, it is an unambiguous technique for the quantification of environmental, industrial and municipal waste materials.
\u2022It helps to map the distribution of the plumes emanating from the waste materials' mixtures within the subsurface structural lithological layers that was clearly delineated within the near-surface structures underlying the study site.
\u2022The procedure helped in the monitoring of leachate contaminant plume seepages into the surface water bodies and the groundwater aquifer units, via the ground surface, through to the porous subsurface stratum lithological layers.
In summary, the novel method adopted is as presented below:
Specifications table
The method deployed for recording subsurface parameters from the RES2D geophysical survey of the area was adequately distributed within the subsurface stratum, taking the approximate depth to the contaminant plumes as the height, to be 15 m. The distributions of the subsurface lithologic layers' depths, recorded corresponding to the resistivity distributions, were used to quantify the volume of the waste materials by means of a rectangular prism model generated using the 3-D Oasis Montaj technique. The process was accomplished through estimation of the volume covered by the survey area. The study was undertaken due to the harmful effects of the dissolved ions emanating from the hazardous materials deposited at the site on the ecosystems, the environment, and human lives. Most importantly, the growing population around the dumpsite area calls for urgent action. 
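The prism-based volume estimate described above reduces to multiplying the plan area covered by the survey by the depth to the contaminant plumes. A minimal sketch of that arithmetic follows; the survey footprint dimensions are hypothetical placeholders (chosen only so that the product reproduces the reported figure), while the 15 m depth and the approximately 312,000 m3 result come from the text.

# Minimal sketch of the rectangular-prism volume estimate described above.
# The footprint dimensions below are hypothetical placeholders; only the
# 15 m depth and the ~312,000 m3 result come from the text.

def prism_volume(length_m: float, width_m: float, depth_m: float) -> float:
    """Volume of a rectangular prism: plan area times height (depth)."""
    return length_m * width_m * depth_m

if __name__ == "__main__":
    depth_to_plume_m = 15.0           # approximate depth to contaminant plumes (from the text)
    length_m, width_m = 160.0, 130.0  # hypothetical survey footprint, 160 m x 130 m
    volume = prism_volume(length_m, width_m, depth_to_plume_m)
    print(f"Estimated waste volume: {volume:,.0f} m3")  # 312,000 m3 for this footprint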
Assessment of novel research work of this magnitude on IWM and MSW is most essential to the determination of the characteristics of these hazardous ions and the movement of contaminant plumes within the subsurface strata which house the groundwater bodies. A close observation was made of the huge number of different research methodologies that have been reported in the literature on environmental wastes, management of hazardous contaminant plumes, monitoring of leachable contaminants, delineation of zones susceptible to potential environmental hazards (PEH), and estimation and quantification of contaminant plume flows and risks to humans and ecosystems. Results from the method deployed to acquire the RES2D ERT geophysical survey, recorded along the six profiles evenly distributed across the study site, together with the GPS readings for each electrode position, were integrated using the 3-D Oasis Montaj Software, which helped in the clear demarcation of the subsurface lithological layers. The novelty of this work lies in the capability of integrated geophysical evaluation of the subsurface depths and accurate quantification of the municipal and industrial waste materials within the study area with the invented 3-D standard rectangular prism. The method deployed for the study is faster and more cost-effective. The study is significant to the discontinuation and prevention of potential environmental hazards and threats to humans, the environment and the ecosystems around the study site, due to the pollutant fumes from the leachate plumes flowing from the mixtures of industrial and municipal waste materials. Knowledge of the hazards associated with landfill contaminant plumes is very relevant to the safety of lives, the ecosystems and subsurface structural features. It is noteworthy to consider the effects of these hazardous elements no matter how meager their quantity could be. The devastating long-term health effects are not to be permitted in the society.
In the prescribed geochemical analysis of the collected soil and water samples that enclosed the mixtures of the IWM and MSW, values of the various hazardous dissolved ions were acquired using the triplicate technique to support the findings from the geoelectrical tools deployed to delineate the zones of the contaminant plumes within the subsurface lithological units. The novel method invented to study the hazards associated with landfill contaminant plumes' effects on the ecosystems, and threats to humans, the environment and the subsurface structural features underlying the dump site area, confirmed the presence of these potentially hazardous dissolved ions, except for the recorded values of mercury in the soil samples, which were below the detection level (bdl). The technique for geochemical analysis and assessment of the soil and water samples collected at the study site followed the laid-down world standard provided by the 23rd Edition of Waste Water, published in 2017. 
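The triplicate sampling and the below-detection-level (bdl) reporting mentioned above can be summarised with a simple calculation. The sketch below is illustrative only: the ion names, concentrations and detection limits are hypothetical placeholders; only the use of triplicates and the bdl flag (as reported for mercury) come from the text.

# Minimal sketch of handling triplicate geochemical measurements, assuming
# hypothetical ion names, concentrations and detection limits.
from statistics import mean, stdev

DETECTION_LIMITS_MG_L = {"Pb": 0.01, "Hg": 0.001}  # hypothetical detection limits

def summarise_triplicate(ion, replicates):
    """Report mean +/- sd of a triplicate, or 'bdl' if below the detection limit."""
    avg = mean(replicates)
    if avg < DETECTION_LIMITS_MG_L[ion]:
        return f"{ion}: bdl (< {DETECTION_LIMITS_MG_L[ion]} mg/L)"
    return f"{ion}: {avg:.3f} +/- {stdev(replicates):.3f} mg/L"

print(summarise_triplicate("Pb", [0.042, 0.039, 0.044]))     # quantified ion
print(summarise_triplicate("Hg", [0.0002, 0.0001, 0.0003]))  # reported as bdl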
Determination of the samples' pH used the HACH Standard Method 8000, with a DR 3900 VIS Spectral Photometer used for the Chemical Oxygen Demand (COD) analysis, certified under the Malaysian Industrial Standard MS ISO 17025 at an accredited laboratory, Fakulti Sains dan Teknologi, Universiti Kebangsaan Malaysia, 43600 Bangi, Selangor, Malaysia. The results are as reported in the main article. A generated standard rectangular prism-shaped block model of the subsurface geophysical characteristics, incorporated into the geological situation of the study area and produced with the aid of the 3-D Oasis Montaj modelling, allows the quantification of the contaminant plumes' volume.
The invented novel methods adopted for the generation and quantification of leachate contaminant plumes, in the form of a standard rectangular prism-shaped block model of the subsurface geophysical characteristics, present a widely applicable guide for rapid implementation in any part of the world irrespective of the terrain. The soil and water samples were collected at the same spot with known standardization using the triplicate technique of sample collection. Considering the economic gains, the novel method for leachate contaminant plume quantification is less stressful and time-saving, and does not require huge financial costs in comparison with other known traditional methods, e.g., the use of borehole wells. However, another methodological concern for intending future users is the technical know-how required for the RES2DINV and Oasis Montaj software packages that were deployed to quantify the contaminant plumes from leachate flows, approximated to be about 312,000 m3. The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper."} +{"text": "The initialization of complex cyber-physical systems often requires the interaction of various components that must start up with strict timing requirements on the provision of signals. In order to safely allow an independent development of components, it is necessary to ensure a safe decomposition, i.e. the specification of local timing requirements that prevent later integration errors due to the dependencies. We propose a high-level formalism to model local timing requirements and dependencies. We consider the problem of checking the consistency (existence of an execution satisfying the requirements) and compatibility (absence of an execution that reaches an integration error) of the local requirements, and the problem of synthesizing a region of timing constraints that represents all possible correct refinements of the original specification. We show how the problems can be naturally translated into a model checking and synthesis problem for timed automata with shared variables. Exploiting the linear structure of the requirements, we propose an encoding of the problem into SMT. We evaluate the SMT-based approach using MathSAT and show how it scales better than the automata-based approach using Uppaal and nuXmv."} +{"text": "Of all the various forms of adversity experienced during childhood, childhood maltreatment is shown to have the largest impacts on mental health and well-being. Yet we still have a limited understanding of why some victims of early maltreatment suffer immense mental health consequences later on in the life course, while others are able to cushion the blow of these early insults. 
Using two waves of data from the National Survey of Midlife Development in the United States (MIDUS), this study considers change in religiosity as a buffer across three dimensions for victims of childhood abuse: religious importance, attendance, and the specific act of seeking comfort through religion. Results suggest that increases in religious comfort during adulthood are positively associated with adult mental health for victims of abuse, while decreases in religious comfort over time were associated with worse mental health. Changes in religious attendance and religious importance were not significantly associated with mental health for victims of abuse. Taken together, my results show that the stress-moderating effects of religion for victims of childhood maltreatment are contingent on the stability of, or increases or decreases in, religiosity over the life course, which has been overlooked in previous work."}
The second part examines the links between state transformation frameworks and concepts of governance rescaling to conceptualize the ISPO in the context of global trade, followed by an explanation of the data collection methods and evidence. Part three looks at the development of Indonesia\u2019s palm oil industry, especially in the post-1998 democratic transition and liberalization era. Part four examines the establishment of the RSPO as a response from competing networks of actors to the rapid expansion of the-* palm oil sector and the continuous tension within the RSPO. Part five examines the establishment of the ISPO by framing it as part of the continuous transformation of the Indonesian state. The authors conclude that the internationalization of the state apparatus leads to the further fragmentation of the Indonesian state bureaucracy that renders the ISPO an ambiguous and contentious policy that is ineffective on two fronts: sustainability as well as industry protection.There are divergent views on the relationship between the RSPO and the ISPO. By some measures, the establishment of the ISPO in 2011 indicates the proliferation and influence of global sustainability norms. EU normative power influences regulatory standards for global environmental and trade governance in line with renewable energy targets, though there are forms of institutional and networked opposition from Indonesia urges the government to use the ISPO as a diplomatic instrument to overcome trade barriers against the commodity GAPKI . At the Studies about the nature of the ISPO and its connection to different public and private actors in Indonesia and beyond have touched upon these ambiguities directly and indirectly, although tend not to explicitly identify the ambiguity of the ISPO and the start of a new era of export risk and contract cancellations. These written sources provide the plot and storyline to be connected and framed through theories of state transformation and governance rescaling.Evidence for this study comes from interviews and observations conducted periodically in Indonesia from 2015 to 2020, crosschecked with a wide range of written sources including statements, in-house publications, minutes of meetings, press releases and position papers from the RSPO, the ISPO, palm oil companies, trade associations such as GAPKI and the palm oil growers\u2019 association (APKASINDO), government sources such as the Ministry of Agriculture, the Ministry of Trade, the Ministry of Foreign Affairs and the Office of the President, as well as civil society actors. The authors analysed selective Indonesian news coverage of the palm oil industry from 2009 to 2020, mainly from sources such as The authors conducted in-depth interviews during different stages of fieldwork in Indonesia and the UK from 2015 to 2020. Interviewees were purposively chosen based on their expert knowledge of the palm oil industry and agribusiness in Indonesia, especially on the evolution of the RSPO and the ISPO. Semi-structured anonymized interviews were conducted so that participants could express their opinions freely. To obtain a comprehensive picture, the authors sampled elite informants as well as farming communities in Riau, Indonesia\u2019s main palm oil producing province in the Indonesian palm oil sector. To make it easier to follow, the storyline is presented in a semi-chronological manner.While industry actors often proclaim that palm oil is \u2018a gift from God for Indonesia\u2019 emissions being released annually. 
The report accuses global companies that source palm oil from Indonesia such as Unilever, Nestl\u00e9 and Procter & Gamble of being complicit with environmental crimes ran a lipstick from the Rainforest awareness campaign, followed shortly after by a Greenpeace campaign to hold the palm oil industry to account. A 2007 Greenpeace report entitled As consumer awareness about the environmental impacts of the palm oil industry rises in developed countries, so does the pressure placed on global brands which use palm oil in their products. In May 2008, Unilever responded to pressure by declaring its commitment to clean up the company\u2019s supply chain Unilever . In MarcIntensive NGO campaigns and market pressures create a dilemma for producers and the companies that use palm oil in their products. Companies need a steady supply of CPO for their business. Replacing palm oil with rapeseed or soybean oil produced in Europe is challenging because palm oil is generally a more cost-effective and high yielding flex-crop. The question for companies is how to convince conscientious consumers to buy their products while maintaining a steady supply of palm oil. In this context, transnational actors such as international NGOs and multinational companies based in developed countries, especially in Europe, agreed to initiate the Roundtable on Sustainable Palm Oil (RSPO).Transnational private governance in the form of the RSPO has emerged as a significant benchmark for standards of sustainable palm oil, setting out key principles and criteria and developing a system for trade in certified sustainable palm oil Hai , 22. TheThe dominant role of Western NGOs and downstream actors in the palm oil supply chain in the RSPO is evident in the standards-making process. The RSPO works by establishing a standard for sustainable palm oil and creating a system to ensure compliance. As a roundtable, the standard conceptually should come from all the members. While formally this is true as all members have a voice in the process, the standard itself is set by developed countries. The proposed sustainability standard and adaptation strategy that were discussed in the preparatory meetings for the establishment of the RSPO in September 2002 in London were shaped by previous initiatives in Europe. In 1998 for instance, Unilever started to develop the indicators for sustainable palm oil, while in 2000 Migros, with the help of the WWF and Proforest, created supply chain standards that would form most of the RSPO\u2019s principles during President Yudhoyono\u2019s two terms in office (2004 to 2014) was a powerful agency that pushed for reform in the industry and worked closely with domestic and international NGOs. A similar role, but with a different approach, is played by the Presidential Staff Office (KSP) under the current President Joko Widodo. While the Yudhoyono-era UKP4 approached sustainable palm oil production mostly in the context of international norms to protect the environment, and thus engaged in various international schemes such as reducing emissions from deforestation and forest degradation (the REDD project), the Widodo-era KSP focuses more on land conflicts and agrarian reform.The UKP4 is the agency responsible for the issuance of Presidential Instruction No. 10/2011 on the moratorium on forest conversion, a regulation that was heavily criticized by the palm oil industry. 
Despite this backlash, the moratorium was extended by President Yudhoyono in 2013 and retained under President Joko Widodo\u2019s administration. UKP4 members intervened in the sector by revising some Ministry of Agriculture regulations, particularly those relating to the issuance of plantation permits Zuhri . This inMutually reinforcing factors of state transformation such as fragmentation and the internationalization of the state apparatus were seen in Yudhoyono\u2019s environmental diplomacy. Indonesia\u2019s high-profile environmental diplomacy was on display at the 2015 Paris climate conference when the country became one of the first developing countries to pledge to cut carbon dioxide emissions by 26%. Indonesia\u2019s voluntary national contribution is incongruent with the realities of unsustainable practices on the ground and ever-growing palm oil production targets Taufik . The conThrough various policy commitments, reformist officials seek to mobilize international resources to support reform initiatives in the forestry sector. Examples include the Australia Forest Carbon Partnership (AU$ 70 million pledged), the German-sponsored emission reduction (REDD) pilot project (EUR 32.4 million), the UN-REDD initiative (US$ 5.6 million), the Korea-sponsored project for the adaptation and mitigation of climate change in forestry and Australia\u2019s agricultural research fund , manifested in the 2004 Roundtable on Sustainable Palm Oil (RSPO), in order to sustain their perceived privilege in the national scale of governance. Indonesian trade associations and industry players exploit their connections with relevant government agencies such as the Ministry of Agriculture and the Ministry of Trade. At the same time, reformists within the state bureaucracy internalize global norms of sustainability and attempt to push reforms in the governance of the palm oil industry. This jockeying for power and influence resulted in the establishment the ISPO, an ambiguous policy that internalizes RSPO standards and mimics global and EU norms while simultaneously creating a rival form of governance to protect the palm oil industry from external pressures.The inherent ambiguity of the ISPO causes practical dilemmas on two fronts. The ISPO is generally ineffective in its implementation, to the disappointment of reformists within the Indonesian state, and at the same time is unable to serve as a legitimate alternative to the RSPO in the global market, to the disappointment of protectionists. The low level of acceptance of the ISPO by international markets explains why Indonesian palm oil companies tend to remain in the RSPO despite their criticisms of membership, representation and the interventionist nature of transnational private governance.Active campaigning by industry players with close political connections has elevated palm oil to the status of a strategic priority in Indonesian foreign policy. In October 2019, for instance, during her first speech as Indonesia\u2019s Foreign Minister, Retno Marsudi specifically mentioned palm oil as a diplomatic priority. 
The foreign minister contends that palm oil is a strategic national commodity and the growth of the sector \u2018is a fundamental matter because it is related to the livelihoods of more than 16 million people, especially small-scale farmers and their families\u2019 (CNN Indonesia"} +{"text": "Neurodegenerative diseases such as Alzheimer's (AD), Huntington's (HD), and Parkinson's diseases (PD) are a group of progressive disorders that feature degeneration of the structure and function of the human nervous system. Impaired mitochondrial function, excessive oxidative stress in human brain, genetic factors, and malfunction in human brain metabolism contribute to the progression of neurodegenerative diseases . MultitaWe received articles from across the globe featuring interesting research in this area. A case-control study in this special issue demonstrated the potential role of peripheral immune disorders in the pathological progression of late-onset Parkinson's disease (LOPD). Moreover, development of phytomedicines as potential therapeutics for AD is discussed as well. The role of gut microbiota in progression of neurological disorders and approaches for regulation of this effect is featured. In addition, the guest editorial team contributed a detailed review on rational design of MTDLs for neurodegenerative diseases with special focus on Alzheimer's disease. This review explored a large number of promising MTDLs obtained based on different target combinations strategies.In conclusion, there is a growing interest from researchers from different disciplines to identify efficient multitargeted strategies for neurodegenerative diseases. Comprehension of ongoing efforts in development of multitargeted strategies would enable scientists to identify the most successful approaches in the field and eventually lead to discovery of efficient therapeutics. The multidisciplinary nature of research in this area is evident in this special issue as it features research from various disciplines. The guest editorial team hopes that this special issue will help in featuring the multidisciplinary aspect of this research area and encourage future collaborative efforts."} +{"text": "The modern industry allows synthesizing and manufacturing composite materials with a wide range of mechanical properties applicable in medicine, aviation, automotive industry, etc. Construction production generates a substantial part of budgets worldwide and utilizes vast amounts of materials. Nowadays, the practice has revealed that structural applications of innovative engineering technologies require new design concepts related to the development of materials with mechanical properties tailored for construction purposes. It is the opposite way to the current design practice where design solutions are associated with the application of existing materials, the physical characteristics of which, in general, are imperfectly suiting the application requirements.The efficient utilization of engineering materials is the result of achieving the solution of the above problem. The efficiency can be understood in a simplified and heuristic manner as the optimization of the performance and the proper combination of structural components, leading to the utilization of the minimum volume of materials; consumption of the least amount of natural resources should ensure the development of more durable and sustainable structures. 
The solution of the eco-optimization problem, based on the adequate characterization of the materials, enables implementing principles of environment-friendly engineering when the efficient utilization of advanced materials guarantees the structural safety required.The identification of fundamental relationships between the structure of advanced composites and the related physical properties was the focus of the completed Special Issue. The research team from the Democritus University of Thrace achieved outstanding results in the development and analysis of fibrous reinforcement, improving the mechanical performance of structural components ,2,3. TheThe articles ,5 expandThe articles included in the Special Issue explore the development of sustainable composites with valorized manufacturability for a breakthrough from conventional practices and corresponding to the Industrial Revolution 4.0 ideology. The publications revealed that the application of nano-particles improves the mechanical performance of composite materials ; advanceThe publication investigThe application of advanced manufacturing technologies is the research focus of articles ,17. SupeThe articles ,19,20 inMaterials continues the successful publication series, aiming to combine the innovative achievements of experts in the fields of materials and structural engineering to raise the scientific and practical value of the gathered results of the interdisciplinary research.The published works demonstrate that the choice of construction materials has considerable room for improvement from a scientific viewpoint, following heuristic approaches. At the closure of the Special Issue, altogether, the manuscripts included in the Special Issue were cited 51 times. That highlights the essential impact of the reported outcomes on the research community and their valuable contributions to materials science. The onward Special Issue \u201cAdvanced Composites: From Materials Characterization to Structural Application (Second Volume)\u201d of"} +{"text": "There is a rich literature from The United States looking at the importance of religion and spirituality in the lives of older adults where it is positively linked with wellbeing. Despite the increased interest in wellbeing in the UK comparatively little interest has been show in the role of religion and spirituality in promoting wellbeing including quality of life, life satisfaction and loneliness. In this paper we explore these issues using three data sets: the European Social Survey (ESS), the English Longitudinal Study of Ageing (ELSA) and the IDEAL cohort of people with dementia and their carers to examine (a) the variation in religious practice by older adults, those aged 50+, across Europe; (b) the epidemiology of religious practice among older adults within England and (c) using both ELSA and IDEAL consider the relationship between religion and wellbeing in later life."} +{"text": "Lymantria monarcha in central Europe [Most revues consider the work on l Europe , at the l Europe . Since tViruses: the first on 2011, edited by Dawn Gundersen-Rindal and Robert L. Harrison; the second in 2015, edited by John Burand and Madoka Nakai.Two previous Special Issues addressing insect viruses have been published in In 2011, the Issue covered all aspects of insect viruses. 
Among the contributions, a review paper discussed the future importance of massive sequencing for the discovery of new insect viruses .In 2015, the Special Issue was entitled \u201cInsect viruses and their use for microbial pest control\u201d. It presented 10 contributions, including two reviews on the use of viruses for the control of insect pests in Latin America and ChinThe increasing questioning of the negative environmental impacts of agriculture promoted the promulgation of objectives of reducing chemical insecticide use. One of the suitable alternatives is biological control, and viruses have proven their efficacy. This is the second Special Issue concerning the use of insect viruses in pest control.In this Issue, 20 contributions are published. The majority of these contributions address the potential of the virus to fight insect pests, but some consider the importance of the viruses of beneficial insects (honey bees). The generalization of massive sequencing confirmed that multiple infections are more common than previously expected. Viruses remain in host populations for a long time without apparent effect on the hosts (covert infections).The field resistance of codling moth to specific CpGV genotypes highlighted the importance of genotypic diversity in the virus populations and the role of multiple infections, which are addressed in various contributions. Taking advantage of this diversity might be one of the keys to ensuring the long-term efficiency of virus control.I hope the reviews and research articles of this Special Issue will fruitfully contribute to developing the knowledge and use of insect viruses in pest control."} +{"text": "This paper considers the critical role of social infrastructure in building age-friendly communities. Drawing on two neighbourhood projects, the paper explores the benefits which different types of social connections bring for older people and the types of spaces in which these connections are produced. It provides support for the importance of \u2018natural neighbourhood networks\u2019 by demonstrating how everyday encounters help promote informal networks of support. Following Klinenberg\u2019s (2018) analysis of the importance of social infrastructure, the paper argues that the decline of local high streets, closure of libraries, and cuts to the maintenance of green spaces, reduce opportunities for face-to-face social interactions. The paper presents findings from two studies illustrating the importance of social infrastructure in supporting new forms of community action amongst older people. The paper concludes that that the value of social interactions that occur in everyday mundane spaces needs greater emphasis in public policy."} +{"text": "Extracellular matrix (ECM) structures within skeletal muscle play an important, but under-appreciated, role in muscle development, function and adaptation. Each individual muscle is surrounded by epimysial connective tissue and within the muscle there are two distinct extracellular matrix (ECM) structures, the perimysium and endomysium. Together, these three ECM structures make up the intramuscular connective tissue (IMCT). There are large variations in the amount and composition of IMCT between functionally different muscles. 
Although IMCT acts as a scaffold for muscle fiber development and growth and acts as a carrier for blood vessels and nerves to the muscle cells, the variability in IMCT between different muscles points to a role in the variations in active and passive mechanical properties of muscles. Some traditional measures of the contribution of endomysial IMCT to passive muscle elasticity relied upon tensile measurements on single fiber preparations. These types of measurements may now be thought to be missing the important point that endomysial IMCT networks within a muscle fascicle coordinate forces and displacements between adjacent muscle cells by shear and that active contractile forces can be transmitted by this route . The amount and geometry of the perimysial ECM network separating muscle fascicles varies more between different muscle than does the amount of endomysium. While there is some evidence for myofascial force transmission between fascicles via the perimysium, the variations in this ECM network appears to be linked to the amount of shear displacements between fascicles that must necessarily occur when the whole muscle contracts and changes shape. Fast growth of muscle by fiber hypertrophy is not always associated with a high turnover of ECM components, but slower rates of growth and muscle wasting may be associated with IMCT remodeling. A hypothesis arising from this observation is that the level of cell signaling via shear between integrin and dystroglycan linkages on the surface of the muscle cells and the overlying endomysium may be the controlling factor for IMCT turnover, although this idea is yet to be tested. Intramuscular connective tissue plays a critical role in the development and growth of muscle tissue and its quantity and distribution vary greatly between muscles with different functional properties. Yet surprisingly, relatively little is known about the properties and adaptation of IMCT in comparison with the knowledge of muscle function and plasticity . There hThis article reviews the structure and roles of connective tissue structures within skeletal muscle tissues, with an emphasis on recent developments and remaining questions. This subject has a rich history; connective tissue structures surrounding individual muscle fibers muscle were first described by There are numerous comprehensive reviews of the structure of IMCT in the literature. The general structure of IMCT is summarized in It is a common assumption that muscle fibers typically run the entire length of a muscle fascicle, inserting onto tendons by myotendinous junctions at both ends. However, numerous studies on a wide variety of species have shown that many muscles have muscle fibers that do not span then entire fascicle, Muscles with non-spanning or intrafascicularly terminating muscle fibers are actually quite common . HijikatEach muscle fiber (cell) is bounded by its plasmalemma (sarcolemma) and, external to this, a 50 nm thick basement membrane layer comprized of non-fibrous type IV collagen and laminin in a proteoglycan matrix. Lying between the two basement membranes of two adjacent muscle fibers, the fibrous network layer of the endomysium forms a continuum between the two basement membranes. The perimysium is described as a well-ordered criss-cross lattice of two sets of wavy or crimped collagen fiber bundles in a proteoglycan matrix, with each of the two parallel sets of wavy fibers at angle symmetrically disposed about the muscle fiber axis . 
These cThe epimysium is a thick connective tissue layer that is composed of coarse collagen fibers in a proteoglycan matrix. The epimysium surrounds the entire muscle and defines its volume. The arrangement of collagen fibers in the epimysium varies between muscles of different shapes and functions. For instance, the collagen fibers in the relatively thin epimysium of the long strap-like M. sternomandibularis in the cow has two sets of collagen fibers running at \u00b1 55 to the muscle long axis , whereasAlthough the various IMCT structures are often described as sheaths that separate individual fibers (endomysium) fascicles (perimysium) and whole muscles (epimysium), in reality these structures form continuous networks that connect and coordinate the muscle elements within them. The endomysium clearly forms a continuous network structure within a fascicle and perimysium clearly forms another continuous network within the whole muscle As the perimysium approaches the surface of the muscle it merges seamlessly with the epimysium . At the The main components of IMCT are the fibrous collagen types I and III in a matrix of proteoglycans, with the non-fibrous type IV present in the basement membrane of the muscle cells. Small amounts of fibrous type V collagen and several of the fiber-associated collagens are also present. The endomysium and perimysium have distinct proteoglycan and collagen compositions, as detailed by Intramuscular connective tissue has a wide range of functions. At the most mundane level, it organizes and carries the neurons and capillaries that service each muscle cell. Especially at the level of the perimysium, it provides the location of intramuscular deposits of fat. It patterns muscle development and innervation, as proliferation and growth of muscle cells is stimulated and guided by cell\u2013matrix interactions. These roles have been discussed previously . This reTwo papers published in the mid 1980\u2019s characterize two very different streams of thought about the contribution of IMCT to the mechanical functioning of muscle. The second stream of thought about the functioning of connective tissue within muscle was generated by the observation by A general recognition that force transmission can readily occur between adjacent muscle fibers has been followed by evidence that myofascial force transmission can occur between fascicles even between adjacent muscles, as summarized by in vivo, except to reinforce the point that perimysium, like endomysium, is easily deformed in tension at resting muscle lengths.Tensile tests on small sheets of perimysium isolated by careful dissection from muscle are possible and show the obvious non-linear stress-strain behavior expected of a compliant network that suffers reorientation at finite strains and a straightening of initially wavy or crimped collagen fiber bundles. An example if given in It is clear that the majority of muscles undergo shape changes as they contract ; fusiforIt has been postulated that variations in the size and shape of fascicles, and therefore in the spatial distribution of perimysium, was related to variations in the shear strains that need to be accommodated in differently shaped muscles as they contract , 2010. TModels of the shear properties of perimysium and endomysium at different sarcomere lengths to direcin vivo, Various computational models models) have been used to explore the possible role of IMCT in the active and passive mechanical properties of muscles. 
These models are very attractive in that they can represent the complex three-dimensional architecture of the tissue at various levels. A full anisotropic set of (non-linear) moduli and the anisotropic poisons ratios for both muscle fiber elements and IMCT elements, together with a detailed representation of their shape and spatial distribution would be an ideal starting point for building such complex computational models. However, a full set of extensional, shear and poison\u2019s ratio parameters for muscle fibers and either the endomysium of perimysium, or both, is not available. So, while a great number of parameters can be estimated from experimental studies, inevitably there needs to be some assumed range of values for many of the parameters, not least the shear properties of the IMCT components . Thus, While actual measurements of the shear properties of endomysium through experiments such as those proposed above in relation to in vivo. Supersonic shear imaging (SSI) studies have provided estimates of the macroscopic shear modulus of muscles in vivo, including shear strains. As well as noting a great inhomogeneity in longitudinal strains between muscle fibers, in vivo movements of this muscle.If the shear properties of the endomysium are designed to keep adjacent muscle fibers closely aligned and coordinated with efficient force transfer between them, whereas the shear properties of the perimysium are designed to facilitate large shear strains in a working muscle, then it follows that (a) computational models should not use the same estimates of shear properties for both endomysium and perimysium, and (b) the mechanisms for adaptive growth or degradation of endomysium and perimysium may be differently regulated, or may respond to different ranges of stimuli.Part of the adaptation of muscle to exercise, disuse and overload injury involves changes to the IMCT . There aMaintenance and regeneration of IMCT is a balance of synthesis by fibroblasts versus degradation. Matrix metalloproteinases (MMPs) are the principal proteolytic enzymes of IMCT together with the family of metalloendopeptidases described as a disintegrin and metalloproteinase (ADAMs), which act as sheddases, cleaving off the extracellular portions of integrins at the muscle cell surface . In theiMyoblasts and myotubes can produce their own basement membrane collagens , normallWhile there has been assumption that high rates of muscle growth must be accompanied by a high rate of turnover (degradation and resynthesis) of IMCT , investiMuscle immobilization also results in changes in IMCT, and these responses may also shed light on its role. Mechanotransduction refers to the signally pathways by which cells sense and respond to mechanical stimuli by changes in their expression. Mechanotransduction in muscle is a well-understood concept that has been reviewed extensively . It is gFrom the viewpoint of regulating and adapting both muscle fiber volume and properties and IMCT structures to functional demands on muscles, there are two aspects of current muscle mechanotransduction research worth noting. Firstly, while the exact nature of the intracellular signaling pathways and resulting changes in expression have been studied with great precision, the exact nature of the mechanical stimuli that stimulates these is generally less well defined. While The second point to note is that most studies of mechanotransduction in muscle are concerned with transfer of mechanical information from the ECM to the muscle cells. 
But connections between muscle cells and ECM also transmit forces into the ECM. Regulation of IMCT due to mechanical signaling from contractile forces is far less studied. This is an important aspect if we want to know what signals could control the deposition or remodeling of IMCT beyond passive stretching of the muscle. Mechanical signals for IMCT deposition or degradation can reasonably be expected to be either strains or stresses experienced by fibroblasts, and indeed collagen and proteoglycan synthesis is primarily the function of these cells. Our current understanding of muscle function envisages relatively small shear strains within a fascicle, i.e., efficient force transmission through an endomysium with a relatively high shear stiffness, whereas perimysial boundaries allow large shear strains between fascicles. Given these differences, it is not unreasonable to suggest that the amplitude or nature of the signals controlling the growth or degradation of these two different IMCT structures are likely to be different. The control of degradation by MMPs may also be via signaling to fibroblasts, although it is more likely that signaling within muscle cells also has a large contribution to MMP expression and activity, as discussed above. Any study of the effects of stimuli on the production of MMPs by muscle cells is complicated by the fact that MMPs function as intracellular signaling molecules as well as extracellular proteases . A perinAs discussed above, the tight linkage of mechanotransduction at the muscle cell-ECM interface focuses attention on endomysial-muscle cell interactions, but the perimysium is only sporadically connected to the endomysium of muscle cells at PJPs. While the mechanotransduction pathway afforded by PJP structures may be pass external mechanical information into the muscle cells, it is more difficult to postulate that the muscle cells also regulate turnover of the perimysium, unless perimysial fibroblasts are differentiated from endomysial fibroblasts and respond differently to signals coming from the muscle cells so as to synthesize and degrade this separate and distinct IMCT structure separately.A final consideration on the control of IMCT turnover and deposition centers on the scaling of stresses at muscle fiber surface with respect to muscle fiber growth. A normal assumption is that the force produced by a fiber will increase in proportion to its cross-sectional area, i.e., radius squared, on the basis that the number of myofibrils per muscle cell should scale with CSA. As the muscle fiber contracts, the generated surface shear stress would be proportional to the force divided by the surface area of the fiber (which is a linear function of radius). This would imply that, as a fiber grows in radius through hypertrophy, for a given force output per myofibril, the shear stress at the surface of the fiber would be increasing. Above a limiting value, increased signaling to the muscle cell could then trigger remodeling by release of MMPs/ADAMs, and paracrine signaling from muscle cells to fibroblasts could affect collagen synthesis. At least in cardiac muscle, the sheddase activity of ADAMS is thought to reduce integrin-mediated signaling with the ECM during hypertrophy . An incrin vivo sarcomere lengths, so freely allowing the length changes needed in actively contracting and passively stretching muscle fibers. But, the shear linkages through the thickness of the endomysium keeps adjacent fibers in register and laterally transmits force. 
While shear deformations in IMCT can be interpreted in terms of its contribution to the effective tensile stiffness of muscle in the muscle fiber direction, the field is moving toward a clear understanding that the shear properties of IMCT networks are different from tensile properties, and most probably the growth and turnover of IMCT structures are sensitive to shear parameters. While measures of tensile properties of endomysium as a passive elastic element in single fiber experiments has provided many insights into muscle properties, it is arguable that what the field needs most is detailed measurements of the translaminar shear properties of the endomysial and perimysial networks.Our understanding of the role of IMCT in normal muscle functioning and its role in muscle adaptation and response to injury has undergone considerable revision and is continuing to evolve. It is arguably a legacy of the Hill three-element model that a great deal of thinking on the mechanical roles of IMCT and experimental approaches designed to measure these have centered in the past on tensile properties, in relation to the \u201cpassive elastic element.\u201d However, we must remember that the Hill model is just a representation of the macroscopic mechanical behavior of muscle, rather than a mechanistic representation with insight into molecular or structural mechanisms. Although PP is the sole author of this work therefore solely designed, wrote and submitted this manuscript.The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Approximately 30% of men and women in the United States have experienced age discrimination . Experiencing age discrimination may lead to increased risk of depressive symptoms among older adults. Although positive social environments are known to buffer depressive symptoms, it is unknown to what extent a positive social environment may buffer the association between age discrimination and depressive symptoms for older adults in the US. The purpose of this study is to examine the association between perceived age discrimination and depressive symptoms among older adults, and to explore whether this association varies by two aspects of the social environment: social support and neighborhood environment. We explore this topic with data on 5,439 adults aged 50 and older in a sample drawn from the Psychosocial Module of the Health and Retirement Study . Our results show a clear association between age discrimination and increased risk of depressive symptoms, net of a range of covariates. Older adults who receive more positive social support and rate their neighborhood environment more positively also report lower depressive symptoms. Finally, we find statistically significant interactions between age discrimination and both measures of the social environment, which suggest that social support and a positive neighborhood environment may buffer the negative impact of age discrimination on depressive symptoms. We discuss these findings in light of the prevalence of age discrimination in the US and cross-nationally, and consider potential mechanisms for improving the social environment of older adults, particularly in the post-COVID era."} +{"text": "High-density polyethylene (HDPE) geomembrane is often used as an anti-seepage material in domestic and industrial solid waste landfills. 
To study the interfacial shear strength between the HDPE anti-seepage geomembrane and various solid wastes, we performed direct shear tests on the contact interface between nine types of industrial solid waste or soil and a geomembrane with a smooth or rough surface in Guizhou Province, China. Friction strength parameters like the interfacial friction angle and the apparent cohesion between the HDPE geomembrane and various solid wastes were measured to analyze the shear strength of the interface between a geomembrane with either a smooth or a rough surface and various solid wastes. The interfacial shear stress between the HDPE geomembrane and the industrial solid waste increased with shear displacement and the slope of the stress-displacement curve decreased gradually. When shear displacement increased to a certain range, the shear stress at the interface remained unchanged. The interfacial shear strength between the geomembrane with a rough surface and the solid waste was higher than for the geomembrane with a smooth surface. Consequentially, the interfacial friction angle for the geomembrane with a rough surface was larger. The geomembrane with a rough surface had a better shear resistance and the shear characteristics fully developed when it was in full contact with the solid waste. A high-density polyethylene (HDPE) geomembrane is a waterproof barrier material with a high strength made of a polyethylene resin with a specific formula and further processing . It is wPresently, the experimental researches on the interfacial strength between the HDPE geomembrane and various types of solid waste mainly focus on direct shear, drawing and triaxial or torsional shear. Ling et al. measuredIn this study, we used a modified direct shear apparatus to carry out direct shear tests on smooth and rough HDPE geomembranes and nine types of industrial solid waste or soil to simulate their interaction in an actual engineering structure and measure the friction strength parameters like the interfacial friction angle and the apparent cohesion of the contact interface between both types of geomembrane and the solid waste. We selected enough types (nine kinds) of industrial solid wastes or soil samples to verify that the conclusions obtained are applicable to the vast majority of industrial solid wastes. At the same time, we used the modified fracture shear seepage coupling system to carry out the test, which could keep the shear area of the interface between geomembrane and solid wastes unchanged during the test and the data of the stress and strain could be recorded per second accurately. The shear characteristics of the contact interface between both types of HDPE geomembrane and various industrial solid waste or soil samples were evaluated and analyzed to provide a theoretical reference for the construction of anti-seepage systems in solid waste disposal sites.(1)The smooth (or rough) geomembrane was laid on a rigid horizontal base in the lower part of the shear box. The front end was clamped in front of the shear area and fixed in place with 4 bolts. The surface of the geomembrane remained flat without any folding and there was no relative sliding between the specimen and the base. Then, the shear box was installed, and the solid waste was used as filler. 
The solid waste was filled into the shear box and the contact surface between the solid waste and the geomembrane and the upper surface of the solid waste were kept flat.(2)The pressure plate was installed, and the solid waste was applied a normal pressure of 50 kPa.(3)The horizontal load was applied to obtain a relative displacement between the upper and lower shear boxes with a speed of 1 mm/min. The instrument automatically recorded the shear force and the corresponding relative displacement at an interval of 1 s until the horizontal load did not increase anymore, which meant that the geomembrane was sheared out. If the horizontal load kept increasing slowly, the test was carried out until 16.5% of the length of the shear plane was reached.(4)The geomembrane and the solid waste were removed from the geomembrane surface. It was inspected whether the geomembrane was elongated, folded or damaged.(5)Re-assembled the geomembrane and repeated steps (1)\u2013(4) and measured the friction characteristics of the contact interface for three other normal pressure values .According to the coulombic shear criterion, four geomembrane specimens were used to measure the shear stress when the contact interface between the geomembrane and the soil was damaged under four levels of vertical load. Then, the friction angle and the apparent cohesion of the contact interface between the geomembrane and the soil were determined. Based on the relevant specifications , specifi2).We calculated the shear displacement and the shear stress of the contact surface between the geomembrane and the solid waste according to Equations (1) and (2):The evolution of the shear stress with the shear displacement was plotted systematically. The peak value of the shear stress on the curve was defined as the maximum shear stress. As can be seen from 2 was less than 0.95), then we would carry out the test again until the required result was obtained. The angle between the straight line and the abscissa corresponds to the interfacial friction angle between the geomembrane and the solid waste. The intercept of the straight line on the vertical axis determines the apparent cohesion of the interface between the geomembrane and the solid waste. The maximum shear stress of the contact interface was obtained from the evolution of the shear stress with the shear displacement and the relationship between the maximum shear stress and the corresponding normal stress was determined. First, four stress points were defined with the normal stress as the abscissa and the corresponding shear stress as the ordinate. Then, the best fit for each point was determined by linear regression. If the error of the fitted straight line was large and (4) show that the shear strength of the contact interface between geomembrane and solid waste is determined by the friction angle and apparent cohesion. To better analyze and compare the friction strength of the contact interface between the smooth and rough geomembrane and different solid wastes under different normal stresses, the friction ratio between the smooth and the rough geomembrane is introduced. 
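Before the friction ratio is formally defined in the next paragraph, the linear-regression step just described can be illustrated with a short Python sketch. The four (normal stress, peak shear stress) pairs below are hypothetical placeholders rather than measured values, and the script simply fits the Coulomb envelope and checks the coefficient of determination against the 0.95 criterion mentioned above.

```python
import numpy as np

# Hypothetical peak shear stresses (kPa) at the four normal stress levels;
# replace with the values recorded for a given geomembrane/solid-waste pair.
normal_stress = np.array([50.0, 100.0, 200.0, 300.0])   # sigma_n, kPa
peak_shear    = np.array([38.0, 65.0, 118.0, 170.0])     # tau_max, kPa

# Least-squares fit of the Coulomb envelope: tau = sigma_n * tan(phi) + c
slope, intercept = np.polyfit(normal_stress, peak_shear, 1)
phi = np.degrees(np.arctan(slope))   # interfacial friction angle (degrees)
cohesion = intercept                 # apparent cohesion (kPa)

# Coefficient of determination; the test is repeated if R^2 < 0.95
pred = slope * normal_stress + intercept
ss_res = np.sum((peak_shear - pred) ** 2)
ss_tot = np.sum((peak_shear - peak_shear.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

print(f"friction angle = {phi:.1f} deg, "
      f"apparent cohesion = {cohesion:.1f} kPa, R^2 = {r_squared:.3f}")
```

The sketch returns the envelope parameters for one geomembrane texture; repeating it for the other texture provides the inputs needed for the friction ratio introduced above.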
It is defined as the ratio of the shear strength of the contact interface between the smooth geomembrane and a given solid waste with that for the rough geomembrane for the same normal stress:The friction of the contact interface between various solid wastes or soil samples and the smooth and rough geomembrane was calculated from Equation (5), as shown in (1)When the shear displacement increases, the interfacial shear stress between the HDPE geomembrane and the industrial solid waste did not increase linearly, but parabola. With the increase of shear displacement, the shear stress first increased in a straight line; then the rate of increase gradually decreased. When the shear displacement reached a certain value, the interfacial shear stress remained stable. This means the change of the shear stress between the geomembrane and the solid waste must be carefully considered when designing an anti-seepage structure at a solid waste disposal site.(2)The interfacial shear strength between the geomembrane with a rough surface and the solid waste was close to that for the smooth geomembrane for a small normal stress. The interfacial shear strength between the rough geomembrane and the solid waste was significantly higher than for the smooth geomembrane for larger normal stresses. This is because when the normal stress was large, the solid waste particles was in closer contact with the surface of the geomembrane; the lateral friction resistance between the bumps on the surface of the rough geomembrane and the solid waste particles was more fully developed.(3)The shear strength of the interface between geomembrane and solid waste soil was determined by the friction angle and apparent cohesion. With the increase of the normal stress, the shear strength of the interface was mainly determined by the interface friction angle. The interface friction angle of rough geomembrane was higher than that of smooth geomembrane, therefore, the geomembrane with a rough surface had a better shear resistance and a better tensile crack resistance.In this work, a series of direct shear tests for the friction failure were carried out on the interface between seven types of main industrial solid waste, two types of soil in the Guizhou Province of China and an HDPE geomembrane. The friction strength of the interface was measured between a smooth and a rough HDPE geomembrane and various solid wastes and The friction strength of the interface was measured between a smooth and a rough HDPE geomembrane and the shear strength characteristics of the interface were analyzed:"} +{"text": "Family caregivers of persons living with dementia (PLWD) provide disproportionately high levels of care over a long and variable disease course, yet an understanding of the trajectory of care hours provided over time and the contributions of individual family members to overall care is lacking. This study used longitudinal data from the nationally representative Health and Retirement Study in order to compare the hours of care that spouses, children, and other family caregivers provide to those with and without dementia. During the last 10 years of life, family caregivers of PLWD provided nearly three times as many total care hours as compared to others . While care hours provided to PLWD increased steadily in each of the last 10 years of life , care hours provided to others remained low and then nearly tripled in the last year of life to 22 hours/week on average. 
Adult children of PLWD provided 50% of total care hours, while adult children of others provided 41% of care hours. This study provides important insight into the high levels of year-over-year caregiving provided to PLWD by their family caregivers in general and by adult children in particular. Policies to support these caregivers must shift from short-term, episodic support to sustained assistance in acknowledgment of the key role family caregivers play in the long-term services and supports of PLWD."} +{"text": "Information theory provides a spectrum of nonlinear methods capable of grasping an internal structure of a signal together with an insight into its complex nature. In this work, we discuss the usefulness of the selected entropy techniques for a description of the information carried by the surface electromyography signals during colorectal cancer treatment. The electrical activity of the external anal sphincter can serve as a potential source of knowledge of the actual state of the patient who underwent a common surgery for rectal cancer in the form of anterior or lower anterior resection. The calculation of Sample entropy parameters has been extended to multiple time scales in terms of the Multiscale Sample Entropy. The specific values of the entropy measures and their dependence on the time scales were analyzed with regard to the time elapsed since the operation, the type of surgical treatment and also the different depths of the rectum canal. The Mann\u2013Whitney U test and Anova Friedman statistics indicate the statistically significant differences among all of stages of treatment and for all consecutive depths of rectum area for the estimated Sample Entropy. The further analysis at the multiple time scales signify the substantial differences among compared stages of treatment in the group of patients who underwent the lower anterior resection. The comprehensive knowledge about the information hidden in the complex surface electromyographic (sEMG) signals of the external anal sphincter (EAS) could significantly contribute to the proper assessment of the activity of this specific muscle group in the context of patients after multimodal rectal cancer therapy. Colorectal cancer (CRC) is one of the most frequent cancers worldwide and nowadays represents a significant part of the major public health problems . IncreasIt is also documented that the surgery, especially the level of anastomosis in conjunction with neoadjuvant radiotherapy, could increase the risk of postoperative complications associated with fecal incontinence . DespiteThe distribution of innervation zones (IZ) shows a large discrepancy in the studied groups and relatively high level of individual patient asymmetry ,10. ThusPrevious studies characterize the coaxial needle technique as an effective tool for investigating the neural control of EAS in the patient with defecation disorders . HoweverTo get a more profound insight into information hidden in sEMG, the use of proper analytical methods which can cope with the complex character of the examined phenomena is required . The infThe concept of entropy for a characterization of the measured data was first proposed in 1948 by Shannon in the form of the logarithmic dependence on a probability density function . Further in 2007 .A key limitation of these techniques is that they do not take into account multiple time scales. Biosignals often exhibit different behaviors depending on an actual scale. 
Nonlinearity, long memory or sensitivity to small disturbances are among the phenomena for which the description limited to a single time scale may not be sufficient. Although there exists a variety of entropy measures, the most widely used method in the context of a physiological signal\u2019s dynamics is Multiscale Entropy algorithm proposed by Costa et al. ,29,30,31The Sample Entropy (y (ApEn) . There ails, see ,38.N data points requires a prior determination of the two parameters: (i) the embedding dimension m which characterizes the length of vectors to compare; and (ii) the tolerance threshold r referred to as a similarity criterion or the distance threshold for two template vectors. The latter is usually chosen from the range between The calculation of plitudes . In the m consecutive values of series, starting with the ith pointThe procedure starts with the definition of a set of vectors Next, the Euclidean distance between the r i.e., In the next step, the probability r of each otherThis value is averaged over all possible pattern vectors m points remain similar for the Finally, the For the above calculations, The estimation of Multiscale Entropy (MSE) consists of two main steps. The first part implements the coarse-graining procedure of resampling the series to explore different time scales of a signal . The mulThe second stage concerns the calculation of the sample entropy which was just presented in the previous The examined time series were recorded at three stages of treatment, before the surgical procedure and 13 male, age range 48\u201385 (average 69.6 \u00b1 10.04 years), diagnosed with a rectal cancer and qualified for surgery. All underwent open, transabdominal resection. Based on the distance of colorectal stapled anastomosis, the study group of patients was divided according to the decision of the operating surgeon. For surgeons to make decision on the type of the procedure (AR vs. LAR in this case), a localization of the tumor is crucial. It is common to decide that tumors localized in the upper third of the rectum require AR, those in middle portion LAR and those lower than that in some cases need LAR, ultra low LAR or abdomino-perineal resection. The patients with anastomosis at or below 6 cm from the dentate line were included in the Low Anterior Resection (LAR) group. Those with higher anastomosis were included to the Anterior Resection (AR) group. Indirectly, a level of anastomosis implies also the extent of mesorectal excision with all patients in LAR group undergoing Total Mesorectal Excision while in the case of AR group mesorectum was excised minimum 5 cm below the lower margin of tumor. For the detailed information on the surgical landmarks of rectum, see . 
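To make the SampEn and coarse-graining steps outlined above concrete, a compact Python sketch is given below. It follows the standard formulation (template length m, tolerance r taken as a fraction of the standard deviation of the original series, Chebyshev distance between templates, SampEn = -ln(A/B)) and is an illustrative reference implementation rather than the code used for the analyses reported here; the white-noise example at the end is a surrogate signal, not an sEMG recording.

```python
import numpy as np

def coarse_grain(x, scale):
    """Average consecutive, non-overlapping windows of length `scale` (Costa's procedure)."""
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def sample_entropy(x, m, tol):
    """SampEn(m, tol) = -ln(A/B), using an absolute tolerance and Chebyshev distance."""
    x = np.asarray(x, dtype=float)
    n = len(x)

    def count_matches(length):
        # The same number of templates (n - m) is used for lengths m and m + 1,
        # so the ratio A/B is well defined; each pair is counted once.
        templates = np.array([x[i:i + length] for i in range(n - m)])
        count = 0
        for i in range(len(templates) - 1):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist <= tol)
        return count

    b = count_matches(m)        # pairs similar over m points
    a = count_matches(m + 1)    # pairs still similar over m + 1 points
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, m=2, r=0.15, max_scale=10):
    """MSE curve: SampEn of the coarse-grained series at scales 1..max_scale.
    The tolerance is fixed once, relative to the SD of the original series."""
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)
    return [sample_entropy(coarse_grain(x, s), m, tol) for s in range(1, max_scale + 1)]

# Example on a surrogate white-noise signal, not on the recorded sEMG data
rng = np.random.default_rng(0)
print(np.round(multiscale_entropy(rng.standard_normal(2000), max_scale=5), 3))
```

Applied channel by channel to the coarse-grained sEMG series, the same routine would yield the scale-dependent entropy curves analysed in the following paragraphs.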
The groThe standard protocol for the proper evaluation of motor units activity of the EAS muscle group recommends a minimum sampling frequency of about 2 kHz ,43 and oThe results of The significantly greater values of The in-depth description of the examined data is given by Quite a different result is found for the MVC where the AR group is characterized by the reduced values of entropy in comparison to the LAR for In addition, Considering the different stages of treatment, the middle case, A general comparison of the mean MSE curves characterizing the distinct states of EAS tension indicatep-values, calculated at the selected significance level (The normality test of the entropy functions calculated via Shapiro\u2013Wilk formula does not allow us to confirm the hypothesis about the normal distribution for the majority of analyzed cases. Thus, to characterize the differences between compared stages the non-parametric statistical tests were used. The comparison between AR and LAR group are presented in Next, the statistical differences within the individual groups were also investigated. The results of Anova Friedman statistics, a widely known non-parametric analog of the one-factor analysis of variance for the repeated measurements, indicates a statistically significant difference (anal see .This work presents an application of the selected entropy-based techniques to study the variability of information within sEMG signals at the different stages of the rectal cancer treatment. To distinguish the groups of patients due to the type of surgery as well as to compare of signals recorded at the various postoperative periods, both single and multiscale sample entropy algorithms were implemented. The statistically significant differences identified among all of the compared stages of treatment (Definitely the most valuable information is provided by the analysis of reatment ,45. In aIt is shown that the information carried out by the sEMG signals measured one year after the surgery rst year ,47. Probweighted innervation zones. Some of the innervation zones may have a greater importance than the others because of the different sizes of motor units [The main limitations of this study are due to the problem of inter-subject variability. The large diversity in distribution of EAS innervation zones, mainly caused by the high level of the individual asymmetries, significantly affects the differences between the compared groups. This phenomenon is further strengthened by the diversity of signals within a single subject. The values of the entropy for the time series detected at one of three separated rings, which consist of 16 channels each, indicate relatively high variability over these channels. That discrepancy consists of many factors including the concept of or units . We are"} +{"text": "Hybrid nanotube composite systems with two different types of fillers attract considerable attention in several applications. The incorporation of secondary fillers exhibits conflicting behaviors of the electrical conductivity, which either increases or decreases according to the dimension of secondary fillers. This paper addresses quantitative models to predict the electrical performance in the configuration of two dimensional systems with one-dimensional secondary fillers. 
To characterize these properties, Monte Carlo simulations are conducted for percolating networks with a realistic model with the consideration of the resistance of conducting NWs, which conventional computational approaches mostly lack from the common assumption of zero-resistance or perfect conducting NWs. The simulation results with nonperfect conductor NWs are compared with the previous results of perfect conductors. The variation of the electrical conductivity reduces with the consideration of the resistance as compared to the cases with perfect conducting fillers, where the overall electrical conductivity solely originates from the contact resistance caused by tunneling effects between NWs. In addition, it is observed that the resistance associated with the case of invariant conductivity with respect to the dimension of the secondary fillers increases, resulting in the need for secondary fillers with the increased scale to achieve the same electrical performance. The results offer useful design guidelines for the use of a two-dimensional percolation network for flexible conducting electrodes. Conducting polymer composites have become essential components for various modern electronic applications including wearable sensors, flexible displays, batteries, and solar cells, etc. ,2,3,4. IFor additional improvement of the performance, the incorporation of particulate fillers can be considered to form the excluded volume that effectively leads to a segregated CNT network ,16,17,18articles ,20,21,22This work addresses the computational characterization of the effect of the dimensional properties of secondary particulate fillers on the electrical conductivity of CNT polymer composites. In the previous research, to clarify the contribution of the size effect of secondary particulate fillers on the electrical conductivity of the nanocomposite network, the perfect conductor is assumed for the electrical property of the CNT filler, i.e., the resistivity of the CNT is set to zero to conduct Monte Carlo (MC) simulation for the prediction of percolating behaviors of the CNT network in CNT/silica composites . Thus, tThe conflicting electrical behaviors caused by the addition of secondary particulate fillers can be briefly described with Swiss cheese model. A random configuration of secondary particulate fillers on a polymer matrix can be modeled in a shape of Swiss cheese with many pores. A pore represents a single secondary filler which excludes conducting fibers, and the volume of \u201ccheese\u201d is the space that conducting fibers can occupy. A network of conducting fillers presents the collection of all conducting paths in the polymer matrix with particulate fillers. The CNT network is separated from the conducting fiber composite to evaluate the effective conductance of conducting paths. The local conductance of a point within the volume of Swiss cheese model depends on the density of CNT fibers placed around the corresponding region. To assess the degree and formation of an electrical network and to calculate the total conductance, an available configuration of the effective conducting network is obtained from the joint consideration of the conducting path and geometry of the particulate fillers. Thus, the total conductance can be evaluated by considering the continuum percolation property of the effective network. 
The transition of the electrical conductivity is observed in accordance with the excluded volume caused by secondary particles, i.e., the excluded volume improves or prevents the conductivity of CNT composites depending on the particle size. The network morphology of CNT composites depends on the size of the particulate fillers, resulting in the variation of the electrical conductivity. From this model, a numerical characterization of conflicting conducting behaviors is carried out based on a unified framework. Since most of available NW composite networks are created with conducting but nonzero resistance fillers, understanding of the impact on the electrical properties of nonzero resistance fillers is necessary. This work develops quantitative models of the electrical conductivity of two-dimensional NW composite networks with one-dimensional nonperfect conducting fillers via MC based computation. Based on computational models, the change of the network sheet resistance involved with the consideration of the NW resistivity is investigated with respect to various design factors for performance improvement. The results are compared with the perfect conducting NW percolating network. According to the results, the network conductance exhibits a relatively small change with the change of the filler size variation, resulting in the change of linear slopes, which suggests the prediction of the network electrical conductivity with an enhanced accuracy. The corresponding performance is also demonstrated in a computational way.The MC simulation is conducted to identify percolating behaviors of the CNT network in CNT/silica composites under various size and concentration conditions of composite components. In order to predict the sheet conductance derived for the network with a 1V voltage source applied at both ends of the domain by considering an equivalent circuit network with two types of the resistance arising from inherent NW resistance and contact resistance between two NWs. KCL is applied at all contact points in the cluster to formulate a system of linear equations with respect to the voltage drop at every contact point in the conducting path. The solution of the system of linear equations, which can be obtained using linear algebra software packages. The total current flowing across the square domain is calculated from the solution. From the assumption of the 1V voltage source, the obtained total current corresponds to the overall conductance 2\u00d710\u22122\u03a9 . The conchnology . The conThe size effect of secondary particulate fillers on the electrical resistivity of the nanocomposites can be observed with random instances of the CNT/silica composites generated from the simulation presented in On the other hand, nano-scale secondary fillers prevent NWs from stretching toward the right side of the domain, and densely dispersed nano-sized particulate fillers cause the resulting system an abrupt increase with severe NW kinks. As shown in In order to assess a quantitative impact of the filler size, the electrical conductivity is monitored with varying secondary particulate filler contents with the CNT content fixed to 4%, which corresponds to the above electrical percolation threshold of the nanocomposite system. tal data ,27,28,29tal data . Thus, tThis section provides the discussion about qualitative interpretations and underlying intuitions of quantitative results obtained from the MC simulation. 
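Before turning to those qualitative interpretations, the equivalent-circuit solve described earlier (Kirchhoff's current law at every contact point, with a 1 V source across the domain) can be made concrete with a toy example. The network topology and conductance values below are illustrative assumptions, not outputs of the MC procedure; each edge conductance stands for the combined contact and wire-segment contribution between two contact points.

```python
import numpy as np

# Edges of a toy contact network: (node_i, node_j, conductance in siemens).
# Node 0 is the 1 V electrode, node 4 the grounded electrode; nodes 1-3 are
# NW-NW contact points. All values are placeholders for illustration only.
edges = [(0, 1, 2.0), (0, 2, 1.0), (1, 2, 0.5),
         (1, 3, 1.5), (2, 3, 1.0), (3, 4, 2.5)]
n_nodes, v_source, source, ground = 5, 1.0, 0, 4

# Assemble the weighted graph Laplacian (KCL at each node)
lap = np.zeros((n_nodes, n_nodes))
for i, j, g in edges:
    lap[i, i] += g
    lap[j, j] += g
    lap[i, j] -= g
    lap[j, i] -= g

# Known voltages at the electrodes, unknown voltages at interior contacts
v = np.zeros(n_nodes)
v[source] = v_source
interior = [k for k in range(n_nodes) if k not in (source, ground)]

# Reduced linear system L_ii * v_i = -L_ib * v_b for the interior nodes
rhs = -lap[np.ix_(interior, [source, ground])] @ v[[source, ground]]
v[interior] = np.linalg.solve(lap[np.ix_(interior, interior)], rhs)

# Total current delivered by the 1 V source equals the network conductance here
current = sum(g * (v[source] - v[j]) for i, j, g in edges if i == source) + \
          sum(g * (v[source] - v[i]) for i, j, g in edges if j == source)
print(f"network conductance = {current:.3f} S")
```

In the actual simulations the same linear-algebra step is applied to the much larger percolating clusters generated by the MC procedure, with separate contact and NW-segment resistances populating the matrix.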
A unified formalism is addressed for the analysis of the normalized conductivity change observed concerning the size of secondary particulate fillers. It can be established via the combination of discrete percolation theory and Swiss cheese model. Hybrid NW systems with particular secondary fillers can be viewed as a novel extension of Swiss cheese model. Swiss cheese model is indeed a porous medium percolation model where insulating particles occupy pores, while continuum such as fluid is allowed to flow for percolation. However, this nanocomposite system is nontrivial in that NWs which act as conducting paths which electrical current can flow through are overlaid in Swiss cheese model originally intended for the continuum percolation. In ordinary Swiss cheese model, the conductivity of a conducting path is characterized by the width of a neck, which is defined as a narrow path between surfaces of two neighboring spherical fillers. The conducting property of this system differs from ordinary Swiss cheese model, where the total conductance depends on the narrowest neck width since only a neck having conducting NWs can act as a valid conducting path. The density of NW clusters passing across the neck determines the electrical conductance of the associated neck. Thus, only a neck with nonzero conductance is a valid conducting path of the percolating network. For example, if a small number of NWs exist in the nanocomposite system, most necks do not contribute to the percolating network, resulting in low network conductance.This framework is extended to the analysis of the nanocomposite system with micro- and nano-scale fillers. The nanocomposite system\u2019s total conduct depends on two network parameters of the numbers of parallel conducting paths and the conductance of individual paths. A conducting path is formed by interconnecting active necks that contain conducting NWs so that it traverses the simulation domain, and the number of such conducting paths connected in parallel determines the amount of the total electrical current flowing across the domain. In addition, a conducting NW can be decomposed into two components in a horizontal direction toward the right end of the domain and its perpendicular vertical direction. The effective length of a conducting NW is defined as the length of the horizontal component of the conducting NW. A large effective length of conducting NW leads to a small number of contacts contained in a conducting path traversing the domain. The contact resistance of the resulting conducting path becomes small. On the other hand, the overall NW resistance does not differ among conducting paths since its value is proportional to the length of the conducting path, which is mostly similar to the length of the domain. Thus, the contact resistance has a dominant contribution to the overall conductance of a conducting path. The addition of micro-scale particulate fillers raises the density of conducting NWs along necks, improving the conductance of conducting paths, and increases the effective length of conducting NWs, leading to the reduction of the contact resistance of conducting paths. Therefore, the overall impact on the system is the increase in the conductance of the percolating network. The addition of nano-scale silica fillers, however, causes the decrease of the conductance of the percolating network, since nano-scale fillers block conducting NWs toward the end of the domain and make them twisted. 
The kinks of the conducting path increase the traversing distance across the domain, resulting in an increase in the NW resistance, and reduce the effective length of conducting NWs so that the contact resistance also grows. Note also that, since the addition of secondary fillers causes the increase and decrease of the network conductance, it is expected to exist a configuration of filler size that makes the conductance almost unchanged and can be estimated from computational results. Moreover, the aspects of the change in the network conductance for increasing silica content are also explained in a consistent way. According to the experiments and simulation results, the network conductance increases as the content of the micro-scale fillers increases. Additional content of the micro-scale filler increases the density of conducting NWs across necks, thereby increasing the number of current paths. For the increase of the nano-scale silica content, the network conductance, in contrast, diminishes as a large density of nano-scale fillers further increases the kinks of the NWs so that the effective length of conducting NWs shrinks. For the CNT/silica composites with silica fillers of a certain value of the diameter, the nanocomposite system\u2019s total conductance remains almost fixed with increasing silica content. The effects of the introduction of additional current paths and the average conductance decrement are comparable, canceling out these opposite effects.The consideration of nonzero NW resistance introduces several new behaviors of the electrical conductivity of the nanocomposite system. Nonzero NW resistance contributes to the total network resistance additively along with the contact resistance. Since the normalized conductance change is the ratio between the contact resistances, the sum of the contact resistance, and the NW resistance, its value is always less than one. The overall NW resistance indeed does not change very much since it depends on the length of the conducting path, which is given by the domain length and is mostly similar among conducting paths. On the other hand, the contact resistance of the nanocomposite system is created by the percolation phenomena. Furthermore, it normally has an exponential relationship with the secondary filler content. Meanwhile, the topology of two cases with and without nonzero NW resistance are identical, and the resulting contact resistance is also identical. Furthermore, the value of the NW resistance is relatively small as compared to the contact resistance, and the resulting normalized conductivity change is strictly less than one. For the increase of the silica content, the contribution of the NW resistance to the overall network conductance remains almost similar since there is little change in the effective length of the domain. By contrast, the contribution of the contact resistance to the network conductance grows since the addition of silica content changes the topology of the conducting network. Therefore, the resulting normalized conductance change becomes relatively small since the numerator and denominator both depend dominantly on the contact resistance.Note here that the secondary filler configurations leading to invariant conductance are different. The case with nonperfect conducting NW requires a larger size of the particulate filler, as shown in Finally, the discussion about the conductivity change caused by the CNT content change is ensured. 
Note that all configurations have the same geometrical topology with the percolating network. The nanocomposite system with larger WN resistance has a larger value of the normalized conductivity change, as in In summary, we have shown that nonzero values of the electrical conductivity of NWs affect the conductivity variation of CNT composite networks containing secondary fillers along with the filler size. The overall network resistance increases via the addition of nonzero NW resistance, which has been ignored in the most of previous studies on the electrical property analysis of hybrid nanocomposite systems. In addition, the resulting slope of the change becomes less steep, in a linear change. Furthermore, the secondary filler configuration leading to an invariant network conductivity requires additional current so that it causes the increase with the filler size for nonzero NW resistance. The computational technique developed in this work can handle the electrical properties of comprehensive nanocomposite systems consisting of 1-D conducting wires and insulating particulate fillers such as silica-CNT, ZnO-CNT, and This work has developed predictive computational models of the electrical conductivity of the two-dimensional random network with nonzero resistance conducting fillers and suggested strategies designed to enhance the electrical property. The electrical conductance is of a proportional relationship with the size of the secondary filler while showing a consistent decrease with the increasing NW coverage. The consideration of nonzero resistance NWs reduces the impact of the size of the secondary filler on the electrical conductivity of the percolating network, thereby causing linearized changes in the electrical conductivity of the network compared to the case of the perfect conductor. This relationship provides the resistance robustness of the overall network to allow a simplified guideline for the control of electrical properties for hybrid nanocomposite systems. For future research, this work can be extended to study the electrical property of three-dimensional bulk nanocomposites for computational characterization with improved accuracy."} +{"text": "Ghana has one of the most liberal abortion laws in sub-Saharan Africa. The Ghanaian abortion law of 1985 permits abortion in cases of rape, incest or the defilement of a female idiot if the life or health of the woman is in danger; or if there is risk of fetal abnormality. A woman is also legally allowed to obtain an abortion for mental reasons [The more affordable safe abortion fees are, the less likely pregnant adolescents shall turn to unsafe abortion practices , 2. Inde"} +{"text": "In the 3.1 Going westward: The Aegean Sea and mainland Greece subsection of the 3. Results section, there is an error in the first sentence of the second paragraph. The authors did not analyze materials from the 1997 excavation. The authors received permission to study the 1957\u201360 and 1969\u201370 assemblage via permit KNO-12 issued by the British School of Athens. 
The correct sentence is: The flaked stone assemblage recovered from the 1957\u201360 and 1969\u201370 excavation campaigns by John Evans in the Palace of Knossos has been fully analysed in the frame of this research (S1 Table), offering the earliest evidence of the use of harvesting tools in the Aegean."} +{"text": "This article contains the average daily electric load profile (for 24\u00a0h of the day) for the five categories of residential buildings in three Local Government Areas (LGAs) of the state of Lagos, Nigeria. In each of the LGAs, 10 buildings per residential building type were surveyed for the collection of data with the aid of a questionnaire. In each surveyed household, a household member completed the energy audit section of the questionnaire with the assistance of the questionnaire administrator while the section of the questionnaire designed as a time-of-use diary was left with the household for completion. For each building surveyed, the data retrieved from the completed time-of-use diary was used in Microsoft Excel for computing the hourly electricity load profile for the seven days of the week. In order to obtain the hourly energy load (in watts) for each building, the power rating of the appliances used during each of the 24\u00a0h of the day was summed and the result in watts was converted to kWh by dividing by 1000. Each dwelling's daily load profile was obtained as an average of the load profile for the seven days of the week. The article as well provides data on the solar photovoltaic systems\u2019 components designed to supply electricity to the building and the levelized cost of electricity (LCOE) of the systems for the base case scenario and different sensitivity cases obtained from simulations using HOMER Pro. The load profile data provided in this article can be reused by other researchers in the design of solar photovoltaic systems for residential buildings. Specifications Table\u2022The data provides an estimate of residential electricity loads for different categories of buildings in Nigeria which could also be adopted for different developing countries.\u2022The load profile data could be used by other researchers interested in designing off-grid renewable energy systems for residential buildings.\u2022The data on the LCOE of systems designed for the different building categories could be used as a benchmark for further research in renewable energy applications in residential buildings.1This article includes data on the daily electric load profiles and corresponding solar PV components for the different categories of residential buildings in Lagos, Nigeria. 2Residential buildings from three Local Government Areas (LGAs): Kosofe, Oshodi and Alimosho in Lagos Metropolitan Area, Lagos State of Nigeria were surveyed. The survey was conducted using a structured questionnaire and entailed purposive sampling. Lagos is divided into five Administrative Divisions which are further divided into 20 Local Government Areas (LGAs) and 37 Local Council Development Areas (LCDAs). In each of the LGAs, 10 buildings per residential building type Nigeria as identified by Jiboye For each building type per LGA, load profiles representing the maximum and minimum building load were used in the HOMER Pro software for modelling the PV systems. The software modelled the system configuration's behaviour for each hour of the year so as to determine the life cycle cost and the technical feasibility of the system. 
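Before turning to the optimisation settings, the aggregation of the time-of-use diaries into a daily load profile can be sketched as follows. The appliance ratings and usage pattern below are hypothetical placeholders (the study performed this step in Microsoft Excel); the script only illustrates the summation of appliance wattages per hour, the conversion to kWh by dividing by 1000, and the averaging over the seven diary days.

```python
import numpy as np

# Hypothetical time-of-use diary for one dwelling: for each of the 7 days,
# the appliances switched on in each hour and their rated power in watts.
appliance_rating_w = {"fridge": 150, "tv": 80, "fan": 60, "bulbs": 40}
diary = {day: {hour: (["fridge"]
                      + (["tv", "bulbs"] if 18 <= hour <= 22 else [])
                      + (["fan"] if hour < 6 else []))
               for hour in range(24)}
         for day in range(7)}

# Hourly load per day: sum of appliance ratings (W), converted to kWh (/1000)
hourly_kwh = np.array([[sum(appliance_rating_w[a] for a in diary[day][hour]) / 1000.0
                        for hour in range(24)]
                       for day in range(7)])

# Average daily load profile over the seven diary days (24 hourly values, kWh)
average_profile = hourly_kwh.mean(axis=0)
print(np.round(average_profile, 3))
```

The resulting 24-value profile is the kind of input that was supplied, per building category, to the PV system modelling described next.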
This involves optimization of the system through the simulation of several system configurations with the aim of identifying the system that meets the technical constraints at the lowest life cycle cost. The calculation for the base case scenario was conducted as per the following parameters: 2% and 5% inflation rate and discount rate respectively, a 25-year PV-system lifetime, maximum annual capacity shortage of 0% and 40% minimum battery state of charge (SOC).Sensitivity analysis was conducted with HOMER Pro based on five variables: inflation and discount rates, lifetime of PV system, maximum annual capacity shortage and minimum battery state of charge. The sensitivity analysis was conducted in order to investigate the effect of the variables on the LCOE of the systems."} +{"text": "Aedes aegypti. The occurrence of arboviral diseases with COVID-19 in the Latin America and the Caribbean (LAC) region presents challenges and opportunities for strengthening health services, surveillance and control programs. Financing of training, equipment and reconversion of hospital spaces will have a negative effect on already the limited resource directed to the health sector. The strengthening of the diagnostic infrastructure reappears as an opportunity for the national reference laboratories. Sharing of epidemiological information for the modeling of epidemiological scenarios allows collaboration between health, academic and scientific institutions. The fear of contagion by COVID-19 is constraining people with arboviral diseases to search for care which can lead to an increase in serious cases and could disrupt the operation of vector-control programs due to the reluctance of residents to open their doors to health personnel. Promoting intense community participation along with the incorporation of long lasting innovations in vector control offers new opportunities for control. The COVID-19 pandemic offers challenges and opportunities that must provoke positive behavioral changes and encourage more permanent self-care actions.The coronavirus disease of 2019 (COVID-19) pandemic challenges public health systems around the world. Tropical countries will face complex epidemiological scenarios involving the simultaneous transmission of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) with viruses transmitted by We identify that this will be an additional challenge for the health systems and the economy of the countries of Latin America and the Caribbean (LAC). Contrary to what will happen in temperate countries, a large population contingent from tropical countries will experience complex epidemiological scenarios that will involve the simultaneous or intercalated transmission of SARS-CoV-2 with viruses of dengue, Zika and Chikungunya transmitted by Another notable aspect is the potential exhaustion of the surveillance systems to investigate and report arboviral cases in a timely manner. 
In March 2020, the Pan American Health Organization (PAHO) issued an alert calling attention to the trend of increasing dengue cases in at least 27 countries of the Region compared to 2019.Aedes-transmitted diseases (ATDs) mainly affect vulnerable populations living in poor urban or rural areas and in houses with limited access to sewerage and drinking water services.,Despite having limited information on the direct impact on human health of the interaction of arboviral diseases with COVID-19,Health seeking behavior has been dramatically modified by COVID-19, driven by fear of contagion in the population, but also by messages from the health care authorities who recommend staying at home until severe symptoms (breathing problems) develop. In contrast, dengue cases are encouraged to present to health care units for close and early clinical monitoring . Clinical management and rapid diagnosis of both diseases in the context of appropriate health facility triage and case management must be developed. For example, maintaining separate care facilities when possible for COVID-19 and arboviral diseases is highly recommended.On the other hand, the fear of COVID-19 contagion could also disrupt the operation of vector-control programs. Residents may be reluctant to open their doors to health personnel and health brigades may not want to visit SARS-CoV-2 high risk areas because of lack of personal protection equipment and potential exposure to the virus. The evaluation of peridomestic settings arises as an alternative by vector control managers but avoiding the interaction with the community.Challenges and opportunities - The simultaneous occurrence of Aedes-transmitted diseases and COVID-19 in the LAC region presents us with very important challenges but also offers opportunities for strengthening health services, surveillance and control of epidemics indicates that temperature may not be as limiting for transmission as some hypothesize, however, the differential diagnosis of dengue, Zika and Chikungunya will need to incorporate COVID-19 as a diagnostic possibility, although its confirmation triggers very different control actions. The strengthening of the diagnostic infrastructure reappears as an opportunity to prepare the capacity of the regional, nation and state reference laboratories to diagnose a very wide spectrum of infections agents for which the technical inputs, equipment or trained personnel are not available. Thus, interaction between academic and public health institutions is mandatory as the pandemic has demonstrated.Similarly, proliferation and real-time virtual epidemiological information platforms developed to monitor the pandemic must be available to monitor arboviral diseases and other infectious diseases with the same intensity and frequency with which they have been occurring in mapping the progress of the COVID-19 pandemic. The effort to share and integrate epidemiological information sources for the construction and modeling of epidemiological scenarios becomes an imperative for regional health that allows collaboration between health entities with academic and scientific institutions. Examples of such synergy include timely situational analysis, the generation of adequate risk scenarios for the design and selection of more effective control interventions as well as the use of information for education and communication campaigns with the population. 
An area of opportunity would be the incorporation of communication technologies (TICs) in the test and tracing of contacts as well as mobility surveys in support of the early warning surveillance systems.Traditionally, the countries of the American region based their control activities with the involvement of the community for the removal and control of domestic larval habitats The development of protective kits or tools so family members can perform the application of insecticide by themselves and the free distribution or purchase by the family of appropriate consumer products like spatial repellents, insecticide treated materials, or screens on doors and windows are interventions that should be encouraged and tested on a wider scale. Given the new circumstances, home visits could be interrupted by physical distancing and it is essential to start incorporating new control strategies that do not depend so much on home visits by health personnel and are better supported by the promotion of domiciliary interventions that may be developed inside the house by the family nucleus. An additional opportunity is the more efficient use of resources that is based on risk stratification targeting areas in urban settings that are responsible for more than 50% of historical cases.Aedes aegypti surveillance and control, the possibility is opened for promoting the incorporation of innovations that do not require and/or can reduce the constant presence of health personnel.,In the case of Of course, the nature of ATDs will also require action in response to outbreaks. The responsive actions are well-known in each country and must be carried out in a timely manner and have the human, consumables, and financial resources for it. Recommendations established in the operational guides must be followed. Chemical control (larvicides and adulticides) should be properly applied and selected based on evidence of susceptibility of the local vector populations to guarantee its effectiveness.In the field of risk communication, the COVID-19 pandemic leaves us with lessons, challenges and opportunities that must lead to better information campaigns. Risk communication strategies that increase positive behavioral changes that combat disinformation and encourage people to incorporate permanent self-care actions and not only in crisis situations. At the same time, the risks of transmission of COVID-19 and its dispersion throughout the territory limit the full development of the activities that require the action of health agents, but it also creates a great opportunity to promote effective participation of population with the incorporation of protective habits of prevention and maintenance of the domestic environment free of risk factors for the presence of vectors. Vector control personnel must be considered essential, and their activities must continue to support the actions required for the effective prevention and control of VTDs, even within the contingency imposed by COVID-19.Given the possible scenario of simultaneous transmission of the ATDs and COVID-19 agents, it is important that the planning of the actions be integrated, combined with the effective multisectoral and population participation where the public and private sectors, schools, the media, tune into the common strategy to deal with health problems. 
Recently, the countries of the Region of the Americas, with the support of the Pan American Health Organization (PAHO), unanimously approved the Plan of Action on Entomology and Vector Control 2018-2023,"} +{"text": "This presentation shares the methodology and early findings from a policy scan conducted to understand and assess the impact of COVID-19 policies on dementia care in the community for diverse populations in the province of Nova Scotia, Canada. The scan provided baseline information on: 1) Provincial legislative and regulatory policies related to dementia care in the community; 2) Orders and legislation enacted in response to COVID-19 that potentially impact those policies. Information was obtained from publicly accessible databases and government websites. Searches were also conducted using Google. 135 Acts were collected and reviewed. A specific aim of the scan was to generate knowledge about the impact of these layered policies in the context of a public health crisis from the perspective of local socially and geographically marginalized communities. A Sex and Gender Based Analysis Plus analytical approach was used to assess potential health equity impacts of COVID-19 policies on dementia care in the community. Information was organized using an adapted Health Equity Impact Assessment tool and Systems Health Equity Lens. Strengths and limitations of the approach and tools are discussed."} +{"text": "We present a formalized proof of the regularity of the infinity predicate on ground terms. This predicate plays an important role in the first-order theory of rewriting because it allows to express the termination property. The paper also contains a formalized proof of a direct tree automaton construction of the normal form predicate, due to Comon."} +{"text": "Presence of two primary malignancies is rare and occurs in 3-5% of the cancer patients. As per our extensive internet research, this is the only reported case of a synchronous sino-nasal embryonal rhabdomyosarcoma with squamous cell carcinoma-tongue. The case report is important because of the rare diagnosis and the challenge we faced in the diagnosis and treatment of the patient because of the paucity of literature available on management adult rhabdomyosarcoma.We present a very rare case of an adult male with a sino-nasal mass diagnosed to be an embryonal type rhabdomyosarcoma. The patient also had a moderately differentiated squamous cell carcinoma-tongue for the past 8 months. Radiological investigations were done to see the extent of the sino-nasal mass and the extent of tongue lesion, which was seen to be involving the base of the tongue. The patient was referred for chemoradiotherapy but succumbed to the disease after 2 weeks of treatment.Occurrence of rhabdomyosarcoma in synchronous malignancies is extremely rare as the most common first as well as second primary malignancy in a diagnosed case of head and neck cancer is squamous cell carcinoma. A multidisciplinary approach to the treatment of adult rhabdomyosarcoma has been recommended. The combined use of chemoradiotherapy and surgery has improved treatment in the recent past but RMS in adults is still a rare head and neck tumour that carries a poor prognosis despite aggressive therapy. Presence of two primary malignancies is rare and occurs in 3-5 % of the cancer patients. In a case of head and neck cancer, the second primary tumour mostly develops in the head and neck region . 
In headA 34-year-old male presented with a complaint of left nasal cavity mass, nasal obstruction for the past 6 months . The patient also had a tongue ulcer on the left lateral aspect of the tongue for the past 8 months. The patient was already a diagnosed case of carcinoma tongue at the previous hospital 6 months back. The histopathology was moderately differentiated squamous cell carcinoma. The patient neglected tongue ulcer and did not take any treatment. There was no history of any substance abuse .On examination, the oral cavity showed an ulcerative lesion over the left lateral border of the tongue (anterior two-thirds) of the size 3x1 cm. On palpation, ulcer base was indurated with induration extending posteriorly to involve left side tongue base. On nasal examination, widening of the nasal bridge along with left-sided periorbital swelling was present, more so in the region of the left medial canthus.Reddish polypoidal mass was seen protruding from the left nostril which was covered with necrotic slough. The septum was pushed towards the opposite side. Contrast-enhanced computed tomography (CECT) of the nose and paranasal sinus showed heterogenous mass with mild enhancement completely filling the left nasal cavity with opacification of the left maxillary sinus and anteriorly extending out of the nasal cavity. Medially, the mass was causing bowing of the nasal septum towards the right. Postero-superiorly, it was extending to the sphenoid sinus and causing widening of the on the left spheno-ethmoidal recess. Erosion of lamina papyracea was also seen on the left side Magnetic resonance imaging (MRI) face showed an irregular heterogeneously hypointense lobulated soft tissue mass (95 mm x31mm x48mm) in the left nasal cavity extending up to the left ethmoid sinuses superiorly, medial canthus of the left orbit antero-superiorly and into the left half of nasopharynx through left choana with associated mucosal thickening of the left maxillary and bilateral frontal sinuses. An enhancing lesion involving the base of tongue was also seen. .Biopsy from nasal cavity mass showed elongated spindle-shaped cells with features of embryonal rhabdomyosarcoma . The biopsy sample was also subjected to immunohistochemistry with myogenin marker and the diagnosis was confirmed.Owing to the pathology and the extent of the sino-nasal tumour, the case was discussed with the Department of Radiotherapy and a multimodality treatment was planned. The patient was referred for chemoradiotherapy (CRT) initially with surgical excision of the nasal mass later on kept as an option if required post-chemoradiotherapy. Unfortunately, the patient succumbed to the disease after 2 weeks of chemoradiotherapy due to the advanced stage of the disease and poor health.Second primary malignancy (SPM) represents the leading cause of mortality in head and neck squamous cell carcinomas (HNSCC), responsible for approximately one-third of HNSCC deaths that is three times the deaths due to distant metastasis . 
Hong et al. in 1990, building on the original criteria given by Warren and Gates in 1932, proposed the following criteria for the definition of a second primary malignancy (SPM): the tumours have to be histologically confirmed as malignant; if they are of identical histological type, there must be an interval of at least three years between the two malignancies and/or at least 2 cm of unchanged mucosa between the index tumour and the second primary tumour; and the possibility that the second tumour is a metastasis of the index tumour must be excluded. In the present case, the extent of the sino-nasal tumour, which was unresectable, made CRT a better option. Also, the paucity of literature available on the management of synchronous malignancies, one of which is a rhabdomyosarcoma, posed difficulty in deciding the management protocol. The combined use of chemoradiotherapy and surgery has improved treatment in the recent past, but RMS in adults is still a rare head and neck tumour that carries a poor prognosis despite aggressive therapy."} +{"text": "Older adults over the age of 75 are severely underrepresented in many of the clinical trials used to justify the continued use of medications for chronic disease prevention in advanced age. 
The gaps in evidence in this population have fueled an interest in research to better understand the potential benefits and harms associated with the continued use of medications with uncertain benefit in advanced age. Deprescribing, the intentional reduction or discontinuation of medications, has recently gained traction as an important component of the prescribing process, but raises questions about the safety of stopping medications. This presentation will provide an overview of the evolution of deprescribing research and how this has shaped my career as a geriatric health services researcher. Specifically, I will address early studies that defined the field, challenges and opportunities for studying deprescribing in older adults, and future directions and priorities in deprescribing research."} +{"text": "The discovery that transcription is pervasive with the vast majority of the genome encoding transcripts not translated into proteins, has transformed our understanding of the basic unit of genetic information are powerful tools for gene expression profiling and can sequence large numbers of DNA fragments in parallel producing millions of short reads in a single run Metzker, . These mHoldt et al. report on the latest findings on circRNA in CVDs, the potential therapeutic approaches based on either modulation of native circRNAs by therapeutic knockdown or by ectopic expression and the prospect of engineering non-native circRNAs. The major hurdles for therapeutic strategies targeting circRNA in terms of design, delivery, and side effects are considered.Bioinformatic analysis of RNA-seq data also revealed a novel class of lncRNA transcripts, the circular RNAs (circRNAs) that emerge by RNA \u201cbacksplicing,\u201d whereby the spliceosome fuses a splice donor site in a downstream exon to a splice acceptor site in an upstream exon . Their interaction with RNA binding proteins (RBPs) is thought to be crucial for very diverse cellular functions and lncRNAs and the large body of evidence demonstrating their pleiotropic effects in pathological processes in vascular diseases. The potential of ncRNAs as effectors and biomarkers in vascular pathology is critically evaluated and insights into the technical limitations in establishing a standard protocol to ensure robust reproducibility for circulating ncRNAs as biomarkers in vascular diseases are provided.Apart from diabetic vasculopathy, ncRNAs have a profound effect in the vasculature at baseline and in disease. Hobu\u00df et al. summarize the function of lncRNAs in the development and progression of cardiac diseases with a particular emphasis on their molecular mode of action in pathological tissue remodeling. They also examine the challenges that have to be overcome to establish lncRNA based therapies and effective intervention strategies in the heart. In addition, the prognostic and diagnostic value of lncRNAs in biological fluids as a novel class of circulating biomarkers for heart diseases and the prospect of using these molecular fingerprints to replace protein-based indicators of disease is discussed.In the heart, an increasing number of studies highlight the critical regulation of lncRNAs in cardiac disorders. Laina et al. highlight the two main designs to target RNA and modulate gene expression the double-stranded small interfering RNAs (siRNAs) and single stranded antisense oligonucleotides (ASOs) their advantages and limitations. 
The review also summarizes results from the clinical trials of RNA-targeting interventions and elaborates on the advances and hurdles for RNA based therapeutic applications. The future prospect of RNA therapeutics to empower precision medicine implementation and fulfill the promise of patient specific therapeutics is also evaluated.Extending beyond ncRNA, RNA therapeutics focus on RNA as a prime target for therapeutic applications. in vivo delivery of targeting agents are required to bring RNA based therapeutics closer to the clinic.In conclusion, this Research Topic elucidates the current understanding about the mechanisms of function of ncRNA and its role in CVDs and highlights the major advancements and promising developments in RNA therapeutics. Despite the significant progress this is a new field of research and several challenges remain. Better understanding of the mechanisms of function of ncRNAs and integration of innovative approaches to enhance target binding affinity, cellular uptake and efficient AZ and LM wrote and revised the manuscript.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "The aim of this study was to investigate the effects of clinical information on the accuracy, timeliness, reporting confidence and clinical relevance of the radiology report.A systematic review of studies that investigated a link between primary communication of clinical information to the radiologist and the resultant report was conducted. Relevant studies were identified by a comprehensive search of electronic databases . Studies were screened using pre\u2010defined criteria. Methodological quality was assessed using the Joanna Briggs Institute (JBI) Critical Appraisal Checklist for Quasi\u2010Experimental Studies. Synthesis of findings was narrative.\u00a0Results were reported according to the Preferred Reporting Items for Systematic Reviews and Meta\u2010Analyses (PRISMA) guidelines.There were 21 studies which met the inclusion criteria, of which 20 were included in our review following quality assessment. Sixteen studies investigated the effect of clinical information on reporting accuracy, three studies investigated the effect of clinical information on reporting confidence, three studies explored the impact of clinical information on clinical relevance, and two studies investigated the impact of clinical information on reporting timeliness. Some studies explored multiple outcomes. Studies concluded that clinical information improved interpretation accuracy, clinical relevance and reporting confidence; however, reporting time was not substantially affected by the addition of clinical information.The findings of this review suggest clinical information has a positive impact on the radiology report. It is in the best interests of radiologists to communicate the importance of clinical information to reporting via the creation of criteria standards to guide the requesting practices of medical imaging referrers. Further work is recommended to establish these criteria standards. The aim of this review was to investigate the effects of clinical information on the radiology report. We conducted a systematic review of studies that investigated a link between primary communication of clinical information to the reporting radiologist and the resultant report. The findings of this review suggest that clinical information is beneficial to radiology reporting. 
It is in the best interests of radiologists to communicate the importance of clinical information on reporting via the creation of criteria standards to guide the requesting practices of medical imaging referrers. Further work is recommended to establish these criteria standards. It is common practice for radiologists to interpret imaging examinations and formulate a report using clinical information communicated to assist with this process. Clinical information refers to all information detailing the patient's clinical situation and can include the current problem, co\u2010existing and past medical history, current medications, allergies, fasting status, suspected diagnosis and clinical question to be answered.For all medical imaging examinations in Australia to be performed, a request must be completed by a referrer.When the patient presents to the referrer, they are medically assessed and a request for imaging is completed, using information about the patient's medical history and current presentation. This request can take one of two paths from the referrer to the radiologist, via the radiographer, who completes the imaging before sending it along with the request to the radiologist; or the request is transmitted directly to the radiologist who then reviews the clinical information and selects the imaging protocol to be performed, before transferring it to the radiographer. The radiologist is also able to review clinical information in the request when interpreting imaging and formulating their report.Loy & Irwig'sThis review followed the methods described in a published protocol in the PROSPERO register CRD42019138509).9138509.1Studies were included if they were as follows: (1) primary studies, published in peer\u2010reviewed journals, (2) related to diagnostic imaging for any population of human patients and (3) investigated a relationship between primary communication of clinical information to the radiologist and the resultant radiology report. This review defined primary communication as any method of communication given directly to the radiologist, such as clinical information accompanying imaging , clinical information received in patient charts or verbal communication between referrer and radiologist. Studies published in languages other than English were excluded. Conference proceedings, reviews, case reports, study protocols, commentary and letters to the editor were also excluded.After duplicates were removed, titles and abstracts of studies were screened by two reviewers (CC and TS) to determine eligibility for inclusion. Screening of full text of publications was performed if the abstract provided insufficient information to judge eligibility. Disagreement or uncertainty of study eligibility was resolved by consensus discussion. The reference lists of all included studies were interrogated and subjected to the same screening process.The full text of included studies was read by two reviewers (CC and LC). Data were extracted on study characteristics , interobserver agreement, outcome measures and results summary related to the research question. Data extraction was performed by one reviewer (CC), with validation by a second reviewer (LC). Disagreements were resolved through discussion.The Joanna Briggs Institute (JBI) Critical Appraisal Checklist for Quasi\u2010Experimental StudiesWhilst some included studies shared commonalities in design, heterogeneity of methodologies, interventions and statistical analysis rendered them difficult to compare statistically. 
Therefore, a narrative synthesis was conducted to contextualise findings relevant to the review question, these being reporting accuracy, confidence, timeliness and clinical relevance.The data extraction process allowed us to categorise study characteristics into consistent fields across included studies. The data extraction and categorisation facilitated narrative synthesis by allowing us to examine the context of each study. All authors met regularly during the process and using the extracted data, discussed and subsequently refined the narrative.\u00a0Results were reported according to the Preferred Reporting Items for Systematic Reviews and Meta\u2010Analyses (PRISMA) guidelines.We identified 21 studies that met our inclusion criteria, and after quality assessment, 20 studies were included in our review. The excluded studySixteen studiesX\u2010ray examinations were the diagnostic test in 12 (57%) of included studies.The size of data sets and the number and consistency of reviewers varied throughout studies. Data set sizes ranged from sevenA total of 16 of 20 studies used a similar method involving a sample set of images, assessed twice by a group of reviewers.Additional information available to readers varied significantly between studies. Whilst many included all clinical information available to referrers at the time of reporting in the second read, others tried to demonstrate effect of an intervention to evaluate any change to reporting. These interventions included patient questionnaires,The JBI quality score ranged from 2 to 7 out of a possible 9 points with a median score of 4 curves to quantify the average difference in improvement in accuracy. Results ranged from minimal improvementThree studies described an impact on overall accuracy of reporting.Three studies described accuracy in terms of influencing change to the original radiologist report. Two studiesThree studies investigated the effect of clinical information on the confidence of reporting, each in a different way.The importance of the inclusion of a specific clinical question in the imaging request was investigated in three of the included studies. Aubin et alThe impact of clinical information on radiologist reporting time was investigated in two studies.The majority of included studies support the notion that clinical information has a positive effect on the reporting process. Studies demonstrated improved interpretation accuracy, clinical relevance and reporting confidence. The addition of clinical information was found not to substantially affect reporting time. These findings were based on studies of moderate quality, with a median quality and risk of bias assessment score of 4 out of 9.These results are in keeping with Loy & Irwig'sOne of the studies investigated the impact of the timing of when clinical information is introduced. Berbaum et alOther studies, which were outside the scope of this review, have investigated the effect of prevalence expectation on diagnostic performance of radiologists. Littlefair et al'sAnother study by Littlefair et alOur study was limited by the number of eligible studies specific to the research question. Whilst 21 articles were deemed eligible for inclusion, not all of these studies solely focused on the effect of clinical information on the radiology report. Similarly, the broad range of publication dates of included studies may be perceived as a limitation. 
We found this difficult to restrict as there was no existing review on the effects of clinical information on all aspects of reporting. However, the broad range of publication dates may demonstrate the issue of inadequate clinical information communicated to radiologists has persisted over several decades.The rationale of three of the most recently published included studiesWhilst many of the included studies shared similar elements of design, it was clear there was no gold standard or standardisation of requirements for clinical information. This made results difficult to compare, as many studies relied on the expert opinion of radiologists to determine whether clinical information was deemed important or useful when reporting. This measurement of usefulness of clinical information varied across studies, as radiologists taking part in studies would have had different training, skills and specialisations.In contrast, both Cooperstein et alGiven the findings of this review regarding clinical information and its effect on the accuracy, confidence, clinical relevance and timeliness of reporting, Qureishi et al'sIt is clear the lack of clinical information in requests is an issue affecting reporting quality. One of the possible causes for this may be a lack of awareness or education of referring clinicians on what constitutes relevant clinical information. It may be in the best interests of radiologists to seek to educate referrers on the effect of clinical information on diagnostic performance, including the rationale behind providing high\u2010quality clinical information.The findings of this review indicate that clinical information communicated to the radiologist has a positive impact on the radiology report. These results are relevant to the main consumers of medical imaging, those being referrers and by extension their patients. These results are also relevant to radiologists, as they demonstrate the potential improvement that the communication of clinical information can have on the quality of reporting. It is in the best interests of radiologists to communicate the importance of clinical information for reporting via the creation of criteria standards to guide the requesting practices of medical imaging referrers."} +{"text": "The COVID-19 pandemic has altered the course of events globally. Enforcement of lock down orders to curtail the spread of the pandemic had untoward consequences on the economy and health of the citizenry. In Nigeria, access to renal care was reduced by restriction of movement; inability to afford care due to economic downturn; suspension of transplant programs; uncertainties about dialysis guidelines; anxiety and reduced motivation of health care workers (HCWs) due to lack of government\u2019s commitment to their welfare and increasing rate of COVID-19 infection among HCWs. Formulation and implementation of policies to improve HCWs welfare and ease the burden of CKD patients should be prioritized in order to ensure optimal care of renal patients during the present pandemic. The coronavirus disease-19 (COVID-19) pandemic has altered the course of events globally since its outbreak in late 2019. Globally, over 8,184,867 cases and 443,872 deaths have been recorded across the world as at 18th June, 2020 . The ongChronic kidney disease patients are at higher risk of having severe COVID-19 and succumbing to the disease due to higher prevalence of cardiovascular risks such as diabetes, hypertension, hypoalbuminaemia and anaemia compared to the general population . 
Also, rMajority of renal transplant programmes were suspended therefore denying renal patients who could afford transplant, better quality of life. The suspension was due to uncertainties such as fear of shortage of personnel to monitor patients especially in the early postoperative period; increased susceptibility to COVID-19 due to high dose of immunosuppressants during the induction period of transplant; issues with methodology and sensitivity of COVID-19 testing that may lead to inadvertent transplant of donor kidneys infected with SARS-CoV-2 ,8. The tThe rate of infection of health care workers (HCWs) with COVID-19 across the world and particularly, the increasing rate of infection of HCWs in the African region is of concern. Number of infected HCWs has consistently increased in Nigeria which has recorded the highest number of infected HCWs in the African region in the last few weeks. HCWs in renal centers are thus concerned about risk of transmission of COVID-19 to them by renal patients during treatment especially with limited supplies of personal protective equipment (PPE). The morale of HCWs in Nigeria to fight the COVID-19 pandemic is dampened because the government had not shown enough commitment to their welfare. Presently, the monthly hazard allowance of HCWs is less than 15USD and majority are not under life insurance cover by the government. The payment of the newly approved hazard allowance that was occasioned by the COVID-19 pandemic is still yet to be implemented.There were also uncertainties about guidelines that suit renal units in order to protect other kidney disease patients and HCWs while attending to patients with COVID-19 requiring dialysis. The recently published guidelines for the prevention, detection and management of the renal complications of COVID-19 by the African Association of Nephrologists has been able to clarify gray areas and douse anxiety among renal health care workers . This waIt is evident and highly imperative that both Nigerian government and corporate organizations need to show greater and genuine commitment to the welfare of the HCWs especially those on the frontline in this present pandemic. Policy formulation and implementation that will ease the burden of CKD patients should be prioritized in order to ensure optimal care delivery to renal patients during this present pandemic."} +{"text": "Stem cells from adipose tissue (ADSCs) and platelet-rich plasma (PRP) are innovative modalities that arise due to their regenerative potential. The aim of this study was to characterize possible histological changes induced by PRP and ADSC therapies in photoaged skin. A prospective randomized study involving 20 healthy individuals, showing skin aging. They underwent two therapeutic protocols . Biopsies were obtained before and after treatment (4 months). PRP protocol showed unwanted changes in the reticular dermis, mainly due to the deposition of a horizontal layer of collagen (fibrosis) and elastic fibers tightly linked. Structural analyses revealed infiltration of mononuclear cells and depot of fibrotic material in the reticular dermis. The ADSC protocol leads to neoelastogenesis with increase of tropoelastin and fibrillin. There was an improvement of solar elastosis inducing an increment of macrophage polarization and matrix proteinases. These last effects are probably related to the increase of elastinolysis and the remodeling of the dermis. 
The PRP promoted an inflammatory process with an increase of reticular dermis thickness with a fibrotic aspect. On the other hand, ADSC therapy is a promising modality with an important antiaging effect on photoaged human skin. The aging skin is a complex biological phenomenon that is composed of intrinsic and extrinsic processes. Facial aging is a degenerative process that affects the skin and deep structures resulting from the action of the intrinsic factor (age) associated with an extrinsic factor related to exposure to ultraviolet radiation \u20135. The cIn recent years, there is an increasing interest in the use of the regenerative properties of adipose tissue , 8. In tex vivo expansion, with the ability to differentiate into several cell lineages regeneration of loss of oxytalanic and elaunin fiber networks in the papillary dermis and [2] replacement of pathological deposits of actinic elastin with a normal fibrillary structure, elastic fibers in the deep dermis. This remodeling action of the ECM at the elastic system level was demonstrated by the increased fiber density of the smaller diameter elastic system at the expense of new fibers in zone 1 (under DEJ). The density increase of this zone probably occurred due to the emergence of new oxytalanic/elaunin fibers shown in In ADSC-treated skin, inflammatory cells may be implicated in the resolution of the ECM repair and reorganization process by polarizing the macrophage population to M2 . These fRegarding the aspect of biosafety in tissue and cell transfer, despite the short time interval of the present study (4 months), between the application of ADSCs and their effects on tissue samples submitted to the action of ADSCs, no dysplastic or oncogenic changes in the skin of the population studied were observed as found in the literature . The mesThe limitations of this study were the difficulty of accurately quantifying skin photoaging, morphofunctional analysis of all elements involved in the regenerative process after treatment with PRP and ADSCs, and knowledge of all involved biological events and their mechanisms.The advantages of using isolated stem cells as antiaging skin therapy over PRP, fat grafting, and/or ADSC-matching therapies are as follows: there is no risk of cyst formation and fibrosis, use of small injection volumes with high regenerative potential, precise application, possibility of reapplication after reexpansion of stored stem cells, and/or cryopreserved cell reuse.The future of this research line is aimed at creating new possibilities in regenerative therapy not only in skin diseases but in other clinical applications in the case of organs and tissues with reduction and/or alteration in the elastic system , with a better understanding of the mechanisms involved and the control of these processes.It was concluded that the action of PRP when injected on aged human skin induces an inflammatory process, contributing to the increase of collagen fiber deposits and the increase of reticular dermis thickness with a fibrotic aspect, not bringing any significant tissue regenerative role. 
On the other hand, expanded ADSC therapy in photoaged skin is related to ECM remodeling, increased production of new elastic fibers, and degradation of elastotic material deposited in the dermis (elastosis), inducing an important regenerative effect that could be considered a promising skin rejuvenation therapy."} +{"text": "The current understanding of radical hysterectomy is more centered on the uterus and little is discussed regarding the resection of the vaginal cuff and the paracolpium as an essential part of this procedure. The anatomic dissections of two fresh and 17 formalin-fixed female pelvis cadavers were utilized to understand and decipher the anatomy of the pelvic autonomic nerve system (PANS) and its connections to the surrounding anatomical structures, especially the paracolpium. The study mandates the recognition of the three-dimensional (3D) anatomic template of the parametrium and paracolpium and provides herewith an enhanced scope during a nerve-sparing radical hysterectomy procedure by precise description of the paracolpium and its close anatomical relationships to the components of the PANS. This enables the medical fraternity to distinguish between direct infiltration of the paracolpium, where the nerve sparing technique is no longer possible, and the affected lymph node in the paracolpium, where nerve sparing is still an option. This study gives rise to a tailored surgical option that allows for abandoning the resection of the paracolpium by FIGO stage IB1, where less than 2 cm vaginal vault resection is demanded. While the radical hysterectomy, first described by Ernst Wertheim , has beeA cadaver study was undertaken in addition to our long experience with nerve-sparing radical hysterectomy to arrivThe current understanding of radical hysterectomy is more centered on the uterus, especially the uterine cervix, and little or nothing is discussed on the significance of the resection of the vaginal cuff and the paracolpium as an essential part of a radical hysterectomy ,9,13. SiFurther, the ventral and lateral parametrium (and of course the paracolpium) have extensive vascularization for the blood supply of the uterus , populated with the numerous local lymph nodes and lymphatic supply. It is worth mentioning the contribution of Girardi and BeneThe cadaver studies and their interpretation in surgical steps primarily mandate the recognition of the three-dimensional (3D) anatomic template proposed as early as 2011 for paraA precise dissection of the area reveals that whilst there is no clear border between the stated structures, the dorsal paracolpium is denser, connects the vaginal part beneath the cervix to the lateral mesorectum and is located in the same level in between the rectal fascia and the pelvic nerve-vessel guiding plate laterally . TherefThe lateral paracolpium is where the vaginal vascular supply originates from (artery) and discharges into (vein) the internal iliac artery and vein beneath the ureter.In this way, it is possible to label the ureter as an anatomical marker that splits the lateral parametrium into the lateral parametrium above the ureter, containing the uterine artery and vein, and the lateral paracolpium beneath the ureter, containing the vaginal artery and vein . These tHowever, it is imperative to mention that the vaginal artery goes undetected laterally in about 14/33 hemi-pelvises in a cadaver study (42.4%) and in 37/84 hemi-pelvises in a clinical study (44%). 
This is due to the fact that this artery in such instances is construed to come in as a branch of the uterine artery, crossing directly into the ventral parametrium above the ureter and travelling caudally to the anterolateral side of the vagina. Both of these variations could be noticed in the same patient in 23/42 patients from a clinical study (55%) and in 1Hence, the ventral paracolpium is nothing but the deep layer of the vesicouterine ligament and theThe pelvic splanchnic nerves have been found to run directly from the sacral roots in front of the common trunk of the internal pudendal and inferior gluteal vessels at the dorsal edge of the lesser sciatic foramen, and then medial from the inferior vesical vein cranially to merge in the inferior hypogastric plexus.The inferior hypogastric plexus lies on the endopelvic fascia at the lateral vaginal and rectal sidewall. The bladder branches of the inferior hypogastric plexus leave the plexus medial from the medial vesicovaginal vein ca. 2 cm beneath the vaginal vault .This study also shows that most of the injuries to the inferior hypogastric plexus or the components of the autonomic nerve system in the radical hysterectomy procedure occur during the resection of the vaginal vault. This element has been reported and substantiated by published literature, which conclude that the more extensive the vaginal and surrounding tissue ablation, the greater the resultant bladder denervation . This suTherefore, it is essential to emphasize the accurate identification of all the components of the autonomic nerve system, especially while attempting a nerve-sparing radical hysterectomy procedure, complying with only the resection of the uterine branches and the dissection of the endopelvic fascia directly at the level of the first bladder branch of the inferior hypogastic plexus ,12,14 F.During a radical hysterectomy, the resection of vaginal length that is deemed appropriate for the disease stage has to be calibrated according to the tumor size and infiltration into the vagina. Furthermore, cutting the vaginal cuff without highlighting and isolating the endopelvic fascia (at the pelvic nerve-vessel guiding plate) would lead to injuries of the inferior hypogastric plexus from its medial facets . The endLateral and cranial from this endopelvic fascia is the vascular supply of the vagina and cervix, consisting of the lateral and ventral paracolpium and parametrium. The vaginal vessels cross over the inferior hypogastric plexus to run lateral to the vaginal sidewall; therefore, the last step of a radical hysterectomy will be the resection of the descending vaginal vessels at the lateral side of the vagina. The inferior vesical artery arises from the common trunk of the internal pudendal artery and the inferior gluteal artery and the inferior vesical vein discharges into the internal iliac vein. The inferior vesical vessels are chiefly engaged in supplying the lower wall of the retrovesical ureter ,26.The implications of this comprehensive anatomy identification of the parametrium and paracolpium have a fundamental relevance not only in the nerve-sparing radical hysterectomy procedure, but more so with the empowerment of the surgeon to modify the radical hysterectomy according to the tumor size and infiltration. 
The study outcomes provide an enhanced scope during a nerve-sparing radical hysterectomy procedure by precise identification, isolation and sparing of the inferior vesical vessels to avoid ureteral ischemia at the distal part of the ureter, which usually is prevalent in 6.4% of cases after a radical hysterectomy procedure .Since the ventral and lateral paracolpium are closely connected to the pelvic autonomic nerve system and the vaginal vessels pretty close to the pelvic splanchnic nerves running above the pelvic nerve-vessel guiding plate medially along the lateral vaginal sidewall , the preThe study also gives rise to a tailored surgical option that allows for abandoning the resection of the paracolpium if the tumor is restricted to the cervix, and less than 2 cm in size .This will allow for revising the classification of radical hysterectomy ,9 to redThe type C radical hysterectomy will then be the radical hysterectomy with parametrium and paracolpium resection for stage IB1 (the old FIGO classification ) with deThe anatomic dissections of 2 fresh and 17 formalin-fixed female pelvis cadavers were utilized to study, understand and decipher the hitherto ambiguously annotated anatomy of the pelvic autonomic nerve system and its connections to the surrounding anatomical structures, especially the paracolpium, with reference to radical hysterectomy. Since medical students had previously dissected the formalin-fixed cadavers, we excluded five hemi-pelvises where the pelvic sidewall was grossly over-dissected.The new anatomical knowledge from a cadaver study used to enhance and develop the Muallem technique for nerve-sparing radical hysterectomy was described in a previous study . All datAt the end of this study, PubMed and PMC were searched for studies pertaining to radical hysterectomy, radical hysterectomy and anatomic complications, anatomy of the pelvic autonomic nerve system and its connections to surrounding anatomical structures, especially the paracolpium and parametrium, and surgical challenges in radical hysterectomy for a complete literature search for comparison and reference.The comprehensive anatomy of the parametrium, paracolpium and pelvic autonomic nerve system as presented in this study is a contradictory concise paradigm for radical hysterectomy, which provides a precise concept of the nerve-sparing radical hysterectomy that shall enable surgeons to perform such a complex procedure without the ensuing common ureter related complications. 
The key steps are to distinguish between direct (continuous) and lymphatic infiltration of the paracolpium, rethink the needed radicality and devise tailored surgical procedures to suit individual cases."} +{"text": "Aggressive behaviour is a major problem in clinical practice of mental health care and can result in the use of coercive measures.Coercive measures are dangerous for psychiatric patients and international mental healthcare works on the elimination of these interventions.There is no previous review that summarizes the attitude of nursing staff towards coercive measures and the influence of nursing staff characteristics on attitude towards and the use of coercive measures.The attitude of nurses shifted from a therapeutic paradigm (coercive measures have positive effects on patients) to a safety paradigm .Nurses express the need for less coercive interventions to prevent seclusion and restraint, but their perception of intrusiveness is influenced by how often they use specific coercive measures.The knowledge from scientific literature on the influence of nursing staff on coercive measures is highly inconclusive, although the feeling of safety of nurses might prove to be promising for further research.There is need for increased attention specifically for the feeling of safety of nurses, to better equip nurses for their difficult work on acute mental health wards.The use of coercive measures generally has negative effects on patients. To help prevent its use, professionals need insight into what nurses believe about coercion and which staff determinants may influence its application. There is need for an integrated review on both attitude and influence of nurses on the use of coercion.To summarize literature concerning attitude of nurses towards coercive measures and the influence of staff characteristics on the use of coercive measures.Systematic review.The attitude of nurses changed during the last two decades from a therapeutic to a safety paradigm. Nurses currently view coercive measures as undesirable, but necessary to deal with aggression. Nurses express the need for less intrusive interventions, although familiarity probably influences its perceived intrusiveness. Literature on the relation between staff characteristics and coercive measures is inconclusive.Nurses perceive coercive measures as unwanted but still necessary to maintain safety on psychiatric wards. Focussing on the determinants of perception of safety might be a promising direction for future research.Mental health care could improve the focus on the constructs of perceived safety and familiarity with alternative interventions to protect patients from unnecessary use of coercive interventions. We describe the full search strategy in Data 3.3We performed the first selection based on title and abstract. We subsequently retrieved the full text of the included studies for the final assessment of eligibility. Two reviewers (PD and JV) performed the selection independently and settled disagreements through discussion. In case of disagreement, the reviewers consulted a third reviewer (CL).We selected studies based on inclusion and exclusion criteria. Inclusion criteria concerning study design were cohort studies, case\u2013control studies, case series, cross\u2010sectional studies, surveys and qualitative studies on the attitude of nursing staff towards coercive measures and/or the influence of nursing staff characteristics on the use of one or more coercive measures . 
We included studies performed in acute mental health inpatient services or psychiatric facilities in general or academic hospitals that cared for psychiatric patients with primary diagnosis of axis I or II of the DSM\u2010IV\u2010TR tool performed the data extraction with a standardized form. Studies that described the attitude of nurses were mostly qualitative or survey studies, and the results were not suitable for statistical pooling. We carefully read the studies and extracted important themes from these studies independently. Thereafter, we discussed the interpretation of the qualitative findings. Subsequently, we extracted descriptive themes from the analysis of the qualitative studies based on consensus between the reviewers and combined these with the results from the surveys. We observed that literature on nursing staff characteristics had high levels of heterogeneity, which made it unlikely that performing a meta\u2010analysis would be appropriate. We summarized the most important results of the included studies. We extracted data on the research question, design, sample size, population, setting and outcome measures from the included studies.44.1The initial search resulted in 7,517 references. After the selection process, we included 84 publications Figure . A crossn\u00a0=\u00a013 nurses . The av4.2In our study of the included literature on the attitudes of nurses towards coercive measures, we observed two major themes: (a) the discrepancy between treatment paradigm and safety paradigm; and (b) the need for less intrusive alternative interventions.4.2.1We observed a paradigm shift in the attitude towards coercive measures from a treatment paradigm to a safety paradigm. The belief that patients experience therapeutic benefits from the use of coercive measures characterizes the treatment paradigm. Distinctive for the safety paradigm is the belief that the patient undergoing coercive measures experiences negative consequences, but coercive measures are necessary to maintain safety for patients and staff members.Tooke and Brown were theAfter 2010, reports that supported the therapeutic paradigm became scarce, although it seems clear that a minority of nurses still view coercive measures as calming for specific types of patients on the use of and attitude towards coercive measures.4.3.1Gender of the nurse is the most reported nursing staff characteristic associated with use of and attitude towards coercive measures, although findings are inconsistent. Several studies reported that the presence of male nurses was associated with more use of coercive measures, such as seclusion and an increase in the use of coercive measures showed less use of coercive measures compared to the clusters with lowest scores (n\u00a0=\u00a056).A higher score on the subscale programme clarity of the Ward Atmosphere Scale Moos, , indicatOther authors found no association between ward atmosphere and frequency of use of coercive measures : \u201cundesOur second aim was the influence of nursing staff factors on the use of coercive measures and on the attitude of nurses towards coercive measures. The results in literature were remarkably inconclusive. For example, we found twelve studies that investigated the association of gender of the nurse and the use of coercion. 
Five of them concluded that male nurses were more prone to use coercion (Bowers et al.). When combining the findings of the perceived necessity of coercive measures for safety reasons and the inconsistency in the influence of nursing staff characteristics, we want to emphasize the possible importance of the feeling of safety of nurses. Despite the difficulties of measuring this trait, some authors suggest that the feeling of safety of nurses may be associated with less use of coercive measures (Goulet & Larue). This current systematic review is, to the best of our knowledge, the first to explicitly combine a review on the attitude of nurses and the influence of nursing staff characteristics. The strengths are that we performed an extensive literature search in several databases and attended to several forms of coercive measures, instead of focussing on seclusion and restraint. There are also some limitations. Summarizing qualitative studies inevitably entails de\u2010contextualisation of qualitative findings, because of the dependency of qualitative research findings on the particular context, time and group of participants (Thomas & Harden). The attitude of nurses towards coercive measures has changed over the years from a therapeutic paradigm to a safety paradigm. The current attitude towards use of coercive measures is not to treat patients, but to protect patients and staff from violence. Nurses consider coercive measures as necessary interventions and express the need for less intrusive alternatives. Although nurses recognize the negative consequences for patients, the frequent use of a specific coercive measure may decrease the value that nurses give to the negative consequences associated with that measure. The research on the influence of nursing staff characteristics is highly inconclusive. However, the feeling of safety of nurses may be a key concept in the prevention of coercive measures.We propose that mental health care could improve the focus on the constructs of safety and danger to protect patients from unnecessary use of coercive interventions. Lack of attention to the feeling of safety of nurses working at psychiatric wards can threaten further reduction in the use of coercive measures. Using coercive measures has been common practice in mental health care for centuries, as has the debate on reducing them (Yellowlees). The use of coercion is associated with adverse events. Nurses have influence on the decision to use coercive measures. Attitude of nurses towards coercion and nursing staff characteristics influence these decisions. This review summarizes the literature on the influence of attitude of nurses and nursing characteristics on the use of coercive measures. Our findings indicate, based on the attitude towards coercive measures and some evidence on perception of safety, the importance of attention to the feeling of safety of nurses by clinicians, researchers and policymakers. This might be a more relevant road towards better quality of care than a focus on nursing characteristics.The authors declare no conflicts of interest.All authors listed meet the authorship criteria according to the latest guidelines of the International Committee of Medical Journal Editors. All authors agree with the manuscript."} +{"text": "Numerous studies exist that define polypharmacy and its impact on health. Additionally, the literature is rich in studies documenting the benefits of care provided by nurse practitioners. 
A gap in research exists at the intersection of the value of nurse practitioners in caring for older adults and their management of polypharmacy. Coinciding with a growth of America\u2019s older adult population and the need for adequate care, the purpose of this study was to explore the experiences of nurse practitioners caring for older adults experiencing polypharmacy. A qualitative descriptive study was conducted using a purposive sampling of nurse practitioners who care for older adults. Interviews were conducted and data was analyzed for themes. Four themes emerged: defining polypharmacy, communicating and collaborating, clinical judgement of nurse practitioners in relation to polypharmacy, and medication issues of older adults. Major themes emerged that depict the complexity of medication management in older adults as well as the important role of NPs in providing care to older adults. The significance of the study findings to future practice includes improving communication and collaboration of prescribing health care providers, better identification and management of polypharmacy, and improving the health care delivered to older adults. Safe and effective prescribing for older adults requires NPs consider the unique needs of each older adult while utilizing technology to support collaboration and decision making."} +{"text": "Current and future NAGE policy-related activities will be the focus of this presentation. The Geriatric Academic Career Awards (GACAs), which support the career development of junior faculty clinician educators in geriatrics, were reinstituted by HRSA in 2019 after a 13-year absence. We will discuss the role of this award in the broader context of geriatrics education and GWEPs, how GACA awardees have been integrated into NAGE, and the need for expansion of the GACA program to support both the GWEP and geriatric education pipelines. Areas for future NAGE engagement will be focused on advocacy efforts to support: permanent GWEP reauthorization by Congress; expanding current level of $40.737 million to $51 million to enable HRSA to increase the number of GWEPS to further extend their reach; increasing funding for GACA awardees; and strengthening the synergies between the GACA and GWEP programs to support development of future GWEP leadership."} +{"text": "Meloidogyne incognita induced galls is the reorganization of the vascular tissues. During the interaction of the model tree species Populus and M. incognita, a pronounced xylem proliferation was previously described in mature galls. To better characterise changes in expression of genes possibly involved in the induction and the formation of the de novo developed vascular tissues occurring in poplar galls, a comparative transcript profiling of 21-day-old galls versus uninfected root of poplar was performed. Genes coding for transcription factors associated with procambium maintenance and vascular differentiation were shown to be differentially regulated, together with genes partaking in phytohormones biosynthesis and signalling. Specific signatures of transcripts associated to primary cell wall biosynthesis and remodelling, as well as secondary cell wall formation were revealed in the galls. Ultimately, we show that molecules derived from the monolignol and salicylic acid pathways and related to secondary cell wall deposition accumulate in mature galls.One of the most striking features occurring in the root-knot nematode Meloidogyne spp.) 
are obligate sedentary parasites that enter the root of the plant host at the second juvenile stage. The larvae penetrate the root elongation zone, migrating intercellularly to the root tip and entering the vascular cylinder where they induce the trans-differentiation of parenchyma cells into multinucleate feeding cells . Alongside, neighbouring cells divide contributing to the development of root swellings, named galls. An extensive network of xylem cells enfolding GCs is a typical anatomical feature in galls induced by RKN .Uninfected roots and 21 dai galls model was computed with the lm function in which weighing was necessary due to the loss of one root sample.Galls in poplar result from the proliferation of xylem and phloem tissues to support the feeding of the nematode through the GCs. These biological processes rely on the preferential expression of several TFs and downstream biosynthetic genes, notably those involved in SCW formation. Nematode is therefore able to manipulate plant transcriptional regulation to mimic a developmental differentiation process favouring its own life cycle, at the expense of plant metabolism. However, the identification of the nematode molecular signals steering this programme will require further investigations. It seems that phytohormones such as cytokinins, GA and SA shape the development of galls. The targeted analysis of metabolites related to monolignols and salicylated products confirmed the reaction of the plant after nematode infection, which consists in the biosynthesis of monolignol intermediates and phenolic glycosides as lignin building blocks for xylem SCW."} +{"text": "To assess the perception and attitude of HCPs and health-related science colleges\u2019 students regarding the clinical pharmacists\u2019 roles and responsibilities in providing better pharmaceutical care to patients in Taif, Saudi Arabia and to detect its impact on management of cancer. This study was conducted in four randomly selected hospitals in Taif and three health-related science colleges in Taif University. A questionnaire was distributed to HCPs and another questionnaire to students of health-related science colleges. Three quarters of students perceived that the clinical pharmacist is an important part of the healthcare team. Two-thirds of HCPs expressed confidence in the ability of clinical pharmacists to minimize medication errors. Although two-thirds of HCPs reported that they did not have clinical pharmacists in their institutions, there was substantial willingness among HCPs to cooperate with the clinical pharmacists. Most HCPs expressed the view that the clinical pharmacist is an important integral part of the healthcare team and has a positive impact on cancer management. HCPs and students of health-related science colleges valued the role of clinical pharmacists in healthcare delivery and management of cancer. However, new developments in clinical pharmacy services in Taif hospitals are recommended to improve perception and attitudes towards the clinical pharmacy services. Also, well-organized programs should be conducted to students of health-related science colleges to improve their perceptions and attitudes towards the clinical pharmacy services which may have a positive impact on cancer management. Cancer is a group of diseases characterized by unregulated cell growth and differentiation. 
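The methods fragment above mentions that the comparison of uninfected roots and 21 dai galls was modelled with the lm function, with weighting required because one root sample was lost. As a hedged illustration only — the authors' actual code, variable names and weights are not given in the text — a weighted linear fit of this kind might look as follows in Python:

```python
# Illustrative weighted linear model comparing galls vs. uninfected roots.
# Column names, values and weights are hypothetical, not the study's data.
import pandas as pd
import statsmodels.formula.api as smf

data = pd.DataFrame({
    "expression": [4.2, 3.9, 4.5, 1.1, 1.3],        # e.g. a transcript or metabolite level
    "tissue": ["gall", "gall", "gall", "root", "root"],
    # Up-weight the root group, which lost one replicate, so that both
    # conditions contribute comparably to the fit.
    "weight": [1.0, 1.0, 1.0, 1.5, 1.5],
})

fit = smf.wls("expression ~ tissue", data=data, weights=data["weight"]).fit()
print(fit.summary())
```

Re-weighting the unbalanced group is only one of several reasonable ways to handle a lost replicate; the exact weighting scheme used by the authors is not specified in the text.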
Management of cancer necessitates effective cooperation between various healthcare services and healthcare providers, including the physicians, nurses and the clinical pharmacists. Clinical pharmacy is a health science whereby pharmacists provide patient care that optimizes medication therapy and promotes health, wellness and disease prevention. This field of pharmacy practice focuses on patient-oriented rather than drug product-oriented service. This science emerged as a result of dissatisfaction with old practice norms and the pressing need for a competent health professional with comprehensive knowledge of the therapeutic use of drugs. Clinical pharmacists are a primary source of scientifically valid information and advice regarding the safe, appropriate, and cost\u2013effective use of medications. Already, the level of interaction between the physicians and pharmacists in the developed world is high, resulting in safer, more effective and less costly drug therapy.In 1959, the first college of pharmacy in Saudi Arabia was established and baccalaureate degrees in pharmacy were granted. In the mid-1970s, clinical pharmacy practice was introduced in the country. Thirteen years later, the pharmacy department at King Khalid University Hospital (KKUH) implemented the first clinical pharmacy program outside the United States of America (USA). After the initial trial, about 15 clinical pharmacists were employed, and the service was then expanded to cover all hospitals. In Saudi Arabia, clinical pharmacy is relatively well developed. There are currently 28 academic centers that grant pharmacy degrees; some offer a Bachelor in Pharmacy (B-Pharm) and others offer Master-level (MSc) or Doctor of Pharmacy (PharmD) degrees. Most universities are heading towards clinical pharmacy, which hints that the B-Pharm degree will eventually be phased out. In some universities, students start in the B-Pharm program. If they meet specific criteria during their studies, they will progress into the PharmD program. The aim of this study was to assess the perception and attitude of HCPs and health-related science colleges\u2019 students regarding the clinical pharmacists\u2019 roles and responsibilities in providing better pharmaceutical care to patients in Taif, Saudi Arabia and to detect its impact on the management of cancer. Subjects and methods: A cross-sectional study was conducted in four randomly selected hospitals and three health-related science colleges in Taif University (including the college of nursing). This study was conducted according to the National Research Council Guidelines and was approved by the ethics committee of Taif University, Saudi Arabia (Code 40-35-0181). The participants were randomly selected from lists provided by their facility administrators. Written consent was obtained from the participants before being included in this study. Inclusion criteria: 1. Male and female students of health-related science colleges in Taif University. 2. Male and female HCPs, including physicians, nurses and technicians working in physiotherapy, laboratory and radiology departments in King Faisal Hospital, King Abdul-Aziz Hospital, Pediatrics Hospital and Prince Mansour Military Hospital in Taif, Kingdom of Saudi Arabia (KSA). Exclusion criteria: 1. Male and female students of other colleges in Taif University. 2. 
Male and female HCPs in departments other than the above-mentioned.MethodologyThe questionnaires were distributed to HCPs and students of health-related science colleges. The participants were approached directly to arrange a 15 minutes interview with the researchers at a convenient time. The questionnaires were completed by the participants under the supervision of the researchers in order to improve clarity and limit response bias. The questionnaire consists of a series of questions prepared by the researchers with one version targeted at HCPs and the other at students. To ensure face validity, the questionnaire was sent to three academics and three physicians with a wide range of professional experience. Their views and comments were considered and then incorporated, where appropriate, into the final version of the questionnaire. To assess test-retest reliability, the questionnaire was administered on two occasions to 12 randomly selected HCPs. The second testing took place two weeks later. Test-retest reliability was calculated using Spearman\u2019s correlation coefficient (r). The rho-value was 0.82, which implies acceptable test-retest reliability . Respondents were asked to answer a question using the options \u201cyes\u201d or \u201cno\u201d, or to rate their response using the options \u201cagree\u201d, \u201cneutral\u201d, or \u201cdisagree\u201d. The study was carried out over a period of three months (October to December 2018).Statistical analysisThe statistical analysis of the results was carried out using Minitab 16 . Descriptive analysis was used to calculate the proportion of each group of respondents who agreed/disagreed with each statement in the questionnaire. Chi-square test was used to identify any significant difference among the participants\u2019 responses regarding certain statements in the questionnaire. P-values less than 0.05 were considered statistically significant.Demographic data of the studentsStudents\u2019 perception and attitude Interestingly, Healthcare providers\u2019 demographic dataHealthcare providers\u2019 perception and attitudeInterestingly, In the last years, significant changes took place in the practice of pharmacy which necessitate changes in procedures and training of pharmacists . Moreover, these changes require more imaginative use of pharmacy skills and involvement of the clinical pharmacists at the healthcare process at all stages . However, there is still a wide gap between the clinical pharmacists and other HCPs which may deprive the society from getting the benefits of clinical pharmacy in the healthcare process . In the present study, there was a strong belief among students of health-related science colleges that clinical pharmacists represent an important integral part of the clinical team and are able to minimize medication errors in the hospital settings. The present study found that only about half of the students heard about the clinical pharmacy in their institutions during their study period. This may be due to absence of patient-oriented PharmD programs . Also, a large proportion of the students agreed that HCPs will accept the pharmacists to provide additional services within the framework of clinical pharmacy. This may be attributed to the presence of pharmacists or physicians in the families of these students who gave them this concept .In the present study, about 40% of the students reported that there is increased interest in clinical pharmacy as a profession in KSA. 
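The statistical analysis described above rests on two standard procedures: Spearman's correlation for the test-retest reliability of the questionnaire (rho = 0.82 across 12 HCPs) and the Chi-square test for differences between response frequencies, with p < 0.05 taken as significant. The study used Minitab 16; a minimal sketch of the same two computations in Python, on hypothetical data rather than the study's actual responses, is shown below:

```python
# Hypothetical re-creation of the two analyses; the numbers are illustrative,
# not the study's actual questionnaire responses.
from scipy.stats import spearmanr, chi2_contingency

# Scores given by 12 HCPs to one item on two occasions, two weeks apart.
first_round  = [4, 5, 3, 4, 2, 5, 4, 3, 5, 4, 2, 3]
second_round = [4, 4, 3, 5, 2, 5, 4, 3, 4, 4, 3, 3]
rho, p_rho = spearmanr(first_round, second_round)
print(f"Test-retest Spearman rho = {rho:.2f} (p = {p_rho:.3f})")

# Contingency table of profession (rows) vs. agree / neutral / disagree (columns).
table = [[40, 10, 5],    # e.g. physicians
         [30, 20, 10]]   # e.g. nurses
chi2, p_chi, dof, expected = chi2_contingency(table)
print(f"Chi-square = {chi2:.2f}, dof = {dof}, p = {p_chi:.3f}")
```

A Chi-square p-value below the 0.05 threshold would indicate that profession and response category are associated, mirroring the group comparisons reported in the study.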
This may attract the attention of the healthcare authorities to the strong need to apply and expand the role of the clinical pharmacists in the healthcare team to provide the patients with better pharmaceutical care . This may be in the same line with a previous study in KSA which reported that there is shortage of clinical pharmacists services in most ministry of health hospitals, which necessitates the presence of clinically-oriented training programs for pharmacists and pharmacy students to overcome this shortage and to improve the pharmacists\u2019 role in providing the patients with healthcare in the hospital settings .In the present study, the college of the respondents significantly affected the responses of the participants to the statement that the clinical pharmacist is an important and integral part of the medical team. Also, it significantly affected the responses of the students to the statements that the clinical pharmacist can acquire training in certain medical areas enabling them to perform patient counseling and that doctors and other healthcare staff will accept the involvement of the clinical pharmacists in patient management and providing extra services within the framework of clinical pharmacy. This may reflect that the students of the college of medicine have more perception and attitude towards the participation of the clinical pharmacy services in the patient\u2019s healthcare than the students of the other health-related science colleges. This may be attributed to the content of the curricula of the subjects given to these students during the college years which may give them a positive attitude towards the participation of the clinical pharmacists in the healthcare team .In the present study, HCPs in Taif hospitals showed a high perception of the role of the clinical pharmacists in improving the therapeutic outcomes of the patients. Also, HCPs in the present study expected that clinical pharmacists can play an important role in direct patient care, particularly in patient counseling and education. Moreover, nearly about three-quarters of HCPs in the present study stated that clinical pharmacists can be of great help to improve the quality of patient care in the hospital settings. This was in agreement with the results of a previous study which indicated that clinical pharmacists working in hospitals managed by international institutions had the ability to provide more efficient clinical pharmacy services because they are involved in review of patient medications and practice an active role in therapy management .In the present study, HCPs particularly physicians and nurses supported the participation of the clinical pharmacists in the clinical ward team work. They expected that in the near future, they will have the ability to regularly seek advice with regard to patient medications from the clinical pharmacists in their institutions. These findings reinforce the importance of implication of the role of the clinical pharmacists in various aspects of healthcare and prove that the expected resistance of HCPs to implication of the role of the clinical pharmacists should not stand as an obstacle to instituting clinical pharmacy services in the hospital settings .In the present study, Less than 30% of HCPs believed that the clinical pharmacist has fulfilled his/her role in KSA. 
This may be due to lack of the clinical pharmacists in the hospitals which was supported by the findings of the present study where only 31.4% of the participants reported that they have a clinical pharmacist in their institutions. This lack is not limited to the hospitals of the ministry of health but also extends to the private hospitals. Private hospitals should improve the patient\u2019s care services provided by them by incorporating the clinical pharmacists to become a part of their clinical ward teams . Also, findings from the present study supported the hypothesis that the clinical pharmacists should perform specific duties that may have a great impact on the healthcare services such as education of the patients and minimizing medication errors .In the present study, the profession of the HCPs significantly affected their response to the statement that HCPs are willing to cooperate with the clinical pharmacist. Also, it significantly affected the responses of HCPs to the statements that the clinical pharmacist can improve the quality of patient care in the hospital settings and the clinical pharmacist can acquire training in certain medical areas to perform patient counseling. This was in agreement with Bondesson et al., (2012) and Sabry and Farid (2014) who reported there is a high level of acceptance by the physicians more than other HCPs to the involvement of the clinical pharmacists in the healthcare team. However, Sabry and Farid (2014) suggested that greater efforts are needed in the Arab countries to increase the physicians\u2019 awareness and knowledge of the importance of the clinical pharmacists in the hospital settings and to promote the benefits from the clinical pharmacy services.In the present study, about three quarters of the students and 60% of HCPs agreed to the statement that implication of the clinical pharmacy services will have a positive impact on management of cancer. This was in the same line with Delpeuch et al., (2015) and Yokoyama et al., (2018) who reported that patients who received medical care from a collaborative team that included a clinical pharmacist showed significant improvement in the most important key indicators of cancer management. On the other side, some studies reported resistance of HCPs especially the physicians to implication of the role of the clinical pharmacists in the hospital settings . This resistance may be attributed to the lack of direct contact between the physicians\u2019 and the clinical pharmacists participating in the clinical activities . Also, lack of knowledge with the importance of the clinical pharmacists in healthcare and deficiency of well-trained and efficient clinical pharmacists in the hospitals may be contributing factors . In order to overcome this resistance, continuous improvement and incorporation of courses related to inter-professional relationships between HCPs and the clinical pharmacists in the curricula of the medical, pharmaceutical and nursing education should be applied to encourage the collaboration between the clinical pharmacists and other HCPs in provision of patient care .In conclusion, HCPs and the students of health-related science colleges in this study valued the role of the clinical pharmacists in healthcare delivery and management of cancer. However, new developments in clinical pharmacy services in Taif hospitals are recommended to improve the perceptions and attitudes towards the services of clinical pharmacy and improve the healthcare provided to the patients. 
Also, well-organized programs should be included in the curricula of health-related science colleges to improve students' perception of and attitude towards clinical pharmacy services. Further studies are needed to investigate the impact of improving clinical pharmacy services on the outcomes of cancer management."} +{"text": "The webbed penis is a common genital abnormality consisting of penoscrotal transposition of varying degree, a skin fold tethering the ventral penile shaft to the scrotum with loss of the penoscrotal angle, and an abnormally short ventral shaft. In addition, a stenotic ring of distal prepuce (phimosis or paraphimosis) is frequently found. In this video we illustrate the steps of this common procedure, which is associated with an excellent cosmetic result and improved self-esteem.Surgery consists of treating the penoscrotal transposition, when present, with two inverted scrotal V-shaped skin flaps so that the penis is brought down to its natural position. The ventral penile shaft is detached from the scrotum by excising or dividing the fibrotic and fatty tissue. The skin is dissected and the penis degloved proximally, almost to the pelvic floor, releasing the penile shaft and increasing its apparent length. The ventral penile skin is then sutured at the lowest level of dissection with two 3-0 Vicryl sutures anchored to Buck's fascia, one at each side of the urethra. Subsequently, circumcision is performed and the scrotum is reconstructed, with removal of redundant skin when necessary.Surgery improved the ventral surface of the penis and produced a better cosmetic appearance without any local complication.The webbed penis is an abnormality frequently under-recognized by pediatricians, but a major cause of anxiety for parents. This technique can be regarded as a suitable alternative for most patients with webbed penis."} +{"text": "Development of low-cost robots to assist older adults requires the input of end users: older adults, paid caregivers and clinicians. This study builds on prior work focused on the task investigation and deployment of mobile robots in a Program of All-inclusive Care for the Elderly. We identified hydration, walking and reaching as tasks appropriate for the robot and helpful to the older adults. In this study we investigated the design specifications for a socially assistive robot to perform the above tasks. Through focus groups of clinicians, older adults and paid caregivers we sought preferences on the design specifications. Using conventional content analysis, the following four themes emerged: the robot must be polite and personable; science fiction or alien like; depends on the need of the older adult; and multifaceted to meet the needs of older adults. These themes were used in the design and deployment of the Quori robot."} +{"text": "An adaptive dynamic sliding mode control via a backstepping approach for a microelectromechanical system (MEMS) vibratory z-axis gyroscope is presented in this paper. The time derivative of the control input of the dynamic sliding mode controller (DSMC) is treated as a new control variable for the augmented system, which is composed of the original system and the integrator. This DSMC transfers the discontinuous terms to the first-order derivative of the control input and effectively reduces the chattering. 
An adaptive dynamic sliding mode controller based on a backstepping approach is derived to estimate the angular velocity and the damping and stiffness coefficients in real time, and the asymptotic stability of the designed system can be guaranteed. Simulation examples are investigated to demonstrate the satisfactory performance of the proposed adaptive backstepping sliding mode control. Microelectromechanical system (MEMS) gyroscopes measure angular velocity for inertial navigation and guidance systems and are widely used in aviation, aerospace, marine and positioning applications. However, parameter uncertainties, external disturbances, manufacturing errors, and the influence of ambient temperature decrease the accuracy and sensitivity of the micro gyroscope. Besides the manufacturing errors and external conditions, which are the main factors degrading the accuracy and sensitivity of the gyro system, the nonlinear effects in the applied model are also of great importance. The impact of such nonlinearity has been discussed in previous studies. Dynamic sliding mode control (DSMC) schemes have likewise been investigated in earlier work. However, an adaptive backstepping scheme combined with a dynamic sliding mode controller has not yet been applied to a MEMS gyroscope. The backstepping method is a powerful design tool for dynamic systems in pure or strict feedback form. The gyroscope equations can be transformed into an analogous cascade system that is readily handled by the backstepping method. This work is an extended version of the authors' earlier 2013 work. In this paper, an adaptive dynamic sliding mode controller based on backstepping control is designed to realize position tracking and effectively reduce the chattering problem. The advantages of the proposed controller can be summarized as follows:(1) Adaptive control, DSMC and backstepping control are combined and applied to a MEMS gyroscope. DSMC using the derivative of the switching function is utilized to eliminate the chattering and attenuate the model uncertainties and external disturbances, while adaptive control is derived to estimate the dynamics of the micro gyroscope. Hence, dynamic sliding mode control not only removes some of the fundamental limitations of the traditional approach but also provides improved tracking accuracy under sliding mode.(2) The proposed DSMC adds additional compensators to achieve system stability, thereby obtaining the desired system properties. An integrator is added at the front end to transform the original system into an augmented system, with the derivative of the original control input as the new system input. The integrator therefore also filters out high-frequency noise.(3) The backstepping design relaxes the matching condition and avoids cancellation of useful nonlinearities. The backstepping procedure develops the controller recursively by regarding some of the state variables as \u201cvirtual controls\u201d and deriving control laws for them to improve robustness.The paper is organized as follows. 
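The tracking objective described above can be made concrete with a small numerical sketch. The Python snippet below is illustrative only: it simulates the commonly used lumped two-axis non-dimensional gyroscope model (mass-normalized dynamics with damping, stiffness and Coriolis coupling, which is assumed here rather than taken from the paper's equations) and applies a conventional boundary-layer sliding mode tracking controller with the plant parameters treated as known. It therefore does not reproduce the paper's adaptive backstepping DSMC, which estimates the angular velocity and the damping and stiffness terms online and uses the derivative of the control input as the new control variable. All numerical values are placeholders.

```python
import numpy as np

# Illustrative (non-dimensional) parameters -- placeholders, not the paper's values.
wx2, wy2, wxy = 355.3, 532.9, 70.99      # stiffness terms (omega_x^2, omega_y^2, omega_xy)
dxx, dyy, dxy = 0.01, 0.01, 0.002        # damping terms
Oz = 0.1                                 # true angular rate (unknown to an adaptive scheme)

D = np.array([[dxx, dxy], [dxy, dyy]])
K = np.array([[wx2, wxy], [wxy, wy2]])
Om = np.array([[0.0, -2 * Oz], [2 * Oz, 0.0]])   # Coriolis coupling (2*Omega)

lam, eta, phi = 5.0, 50.0, 0.05          # surface slope, switching gain, boundary layer

def reference(t):
    """Reference oscillator q_m(t) and its first two derivatives."""
    A1, A2, w1, w2 = 1.0, 1.2, 6.17, 5.11
    qm = np.array([A1 * np.sin(w1 * t), A2 * np.sin(w2 * t)])
    dqm = np.array([A1 * w1 * np.cos(w1 * t), A2 * w2 * np.cos(w2 * t)])
    ddqm = np.array([-A1 * w1**2 * np.sin(w1 * t), -A2 * w2**2 * np.sin(w2 * t)])
    return qm, dqm, ddqm

def smc_control(q, dq, t):
    """Boundary-layer sliding mode tracking control (plant model assumed known here)."""
    qm, dqm, ddqm = reference(t)
    e, de = q - qm, dq - dqm
    s = de + lam * e                               # sliding surface
    sat = np.clip(s / phi, -1.0, 1.0)              # smoothed sign(s) to limit chattering
    # Feedback-linearizing term plus switching term:
    return (D + Om) @ dq + K @ q + ddqm - lam * de - eta * sat

# Simple Euler simulation of ddq = u - (D + Om) dq - K q
dt, T = 1e-4, 2.0
q, dq = np.zeros(2), np.zeros(2)
for k in range(int(T / dt)):
    t = k * dt
    u = smc_control(q, dq, t)
    ddq = u - (D + Om) @ dq - K @ q
    q, dq = q + dt * dq, dq + dt * ddq

print("final tracking error:", q - reference(T)[0])
```

With this choice of control, the sliding variable obeys s_dot = -eta*sat(s/phi), so the tracking error is driven into a small neighborhood of zero; the boundary layer plays the chattering-reducing role that the paper instead achieves by moving the discontinuity to the derivative of the control input.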
In The typical MEMS vibratory gyroscope depicted in z axis.We assume that the table where the proof mass is mounted is moving with a constant velocity; the gyroscope is rotating at a constant angular velocity Referring to , with thFabrication imperfections result in the asymmetric spring and damping terms, Define non-dimensional time On both sides of the Equation (1) divide by the mass Equation (3) is a mathematical model of the MEMS gyroscope under ideal conditions. Considering the presence of model uncertainties and external disturbances of a MEMS gyroscope under the actual conditions, ignoring the superstar for the convenience of notation, then rewriting non-dimensional model (3) in matrix form yieldsSuppose an ideal oscillator generates a reference trajectory and the control objective is to make the trajectory of the MEMS gyroscope follow that of the reference model. The reference model is defined asThe tracking error is defined asThe virtual controller is defined asIn the backstepping control, the introduction of virtual control is essentially a static compensation idea. The front subsystem must achieve stabilization purposes through the virtual control of the back subsystem.In this section, an adaptive DSMC method based on backstepping design is developed for the trajectory tracking and system identification of a MEMS gyroscope as shown in We select the first Lyapunov function as follows:The time derivative of the When Define the second Lyapunov function as followsThinking about Equations (4) and (6), the sliding surface is defined asSubstituting Equation (8) into Equation (12) yieldsReferring to Equations (4) and (13), we can obtainThe derivative of the sliding surface isThe time derivative of the To make Substituting Equation (17) into Equation (16) yieldsTo make Substituting Equation (19) into Equation (18) yieldsIt is assumed that With the choice of In this section, based on the backstepping design, an adaptive DSMC strategy is designed for the trajectory tracking and system identification of the MEMS gyroscope. The parameters of the micro gyroscope sensor are described as:The reference trajectory is chosen to be The tracking trajectory and tracking error are shown in The parameters of the MEMS gyroscope are in In this study, an adaptive DSMC strategy with a backstepping approach was successfully applied to a MEMS gyroscope for the trajectory tracking. The derivative of the switching function is employed to differentiate classical sliding surface and transfer discontinuous terms to the first-order derivative of the control input, and effectively decrase the chattering. The asymptotical stability of the closed loop system can be guaranteed with the proposed DSMC strategy. Moreover, the proposed adaptive dynamic sliding mode control can estimate the system parameters online. Simulation studies are conducted to demonstrate the good performance of the proposed dynamic sliding mode control methods."} +{"text": "Hydrocephalus is the most common cause of pediatric surgical intervention . We are aware only of the single aforementioned study by Waybright et al. for proteomic analysis of the CSF of hydrocephalic infants.Clearly, none of the previous studies affords comparison with the comparative proteomics we undertook were detected in the CSF of hydrocephalic patients are produced in the CNS, including the choroid plexus (Tseng et al., Two of the overabundant proteins are the serpins angiotensinogen and PEDF . AngioteFew previous studies had analyzed the diagnostic potential of the CSF. 
Developmental defects originating from attenuated cycling of germinal matrix cells in the brains of hydrocephalus-harboring rats had been attributed to alterations in the composition of the CSF relative to healthy subjects, but the putative difference between the CSF of healthy and hydrocephalic rats had not been identified (Owen-Lynch et al.). The interpretation of the overabundance of ECM proteins, complement factors, and apolipoproteins present in the subgroup of 18 proteins reported by Yang et al. as biomarkers of brain injuries is supported by previous studies (McAllister). The combined overabundance of all five cytokine-binding proteins present in the above subgroup suggests that IGF signaling is a major autoregulatory pathway employed by the brain, possibly to counteract the neuronal injury caused by ventriculomegaly (Johnston et al.). AH initiated the proteomics data analysis. All authors contributed to the writing of the manuscript.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "A five-year-old male patient presented to the outpatient department with complaints of multiple swellings over the chest and upper extremities. Clinical examination revealed multiple bilateral masses, each about 3-4 cm, over the chest, upper extremities and forehead. The development of multiple benign osteocartilaginous masses (exostoses) typically begins in relation to the ends of the long bones of the lower limbs, such as the femurs and tibias, and of the upper limbs, such as the humerus and forearm bones, as well as the chest region. The patient also complained of difficulty in eating, impaired finger grip, and loss of functional independence. On clinical examination the patient was diagnosed with a rare case of hereditary multiple osteochondromas (HMO)."} +{"text": "Seismicity pattern changes that are associated with strong earthquakes are an interesting topic with potential applications for natural hazard mitigation. As a retrospective case study of the Ms7.3 Yutian earthquake, an inland normal faulting event that occurred on 21 March 2008, the Region-Time-Length (RTL) method is applied to the seismological data of the China Earthquake Administration (CEA) to analyze the features of the seismicity pattern changes before the Yutian earthquake. The temporal variations of the RTL parameter at the earthquake epicenter showed that a quiescence anomaly of seismicity appeared in 2005. The Yutian main shock did not occur immediately after the local seismicity recovered to the background level, but with a time delay of about two years. The spatial variations of seismic quiescence indicated that an anomalous zone of seismic quiescence appeared near the Yutian epicentral region in 2005. This result is consistent with that obtained from the temporal changes of seismicity. The above spatio-temporal seismicity changes prior to the inland normal faulting Yutian earthquake showed features similar to those reported for some past strong earthquakes with inland strike-slip or thrust faulting. This study may provide useful information for understanding the seismogenic evolution of strong earthquakes. The Q parameter defined by Equation (3) quantitatively describes the spatial map of the quiescence anomaly of seismicity in a time window with respect to the background seismicity, computed from the start time of a complete earthquake catalog to the ending time tB. 
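The RTL calculation referred to here can be sketched numerically. The snippet below is only a schematic illustration of the general Region-Time-Length approach as it is commonly described in the literature: prior earthquakes contribute distance, time and rupture-size weights, the three factors are detrended against a long-term background and multiplied, and a Q-type quiescence value is obtained by averaging RTL over the analysis window. The paper's exact Equation (3), its background correction, the characteristic scales r0 and t0, and the magnitude-rupture-length relation used here are assumptions for illustration, not the authors' published definitions.

```python
import numpy as np

def rupture_length_km(mag):
    # Assumed empirical magnitude-rupture length scaling (placeholder coefficients).
    return 10.0 ** (0.5 * mag - 1.8)

def rtl_series(cat, site, times, r0=50.0, t0=1.0):
    """Schematic RTL time series at one grid node.

    cat   : array of (x_km, y_km, t_yr, mag) rows for the earthquake catalog
    site  : (x_km, y_km) of the node under analysis
    times : analysis epochs in years
    """
    x, y, t, m = cat.T
    r = np.hypot(x - site[0], y - site[1])
    raw = []
    for tc in times:
        # Only prior events within the space-time influence zone contribute.
        sel = (t < tc) & (t > tc - 2.0 * t0) & (r < 2.0 * r0)
        R = np.sum(np.exp(-r[sel] / r0))                                   # distance weighting
        T = np.sum(np.exp(-(tc - t[sel]) / t0))                            # time weighting
        L = np.sum(rupture_length_km(m[sel]) / np.maximum(r[sel], 1e-3))   # rupture-size weighting
        raw.append((R, T, L))
    raw = np.array(raw)
    # Detrend each factor by its long-term mean, normalize by its standard deviation,
    # and take the product; negative RTL values indicate seismic quiescence.
    z = (raw - raw.mean(axis=0)) / raw.std(axis=0)
    return z[:, 0] * z[:, 1] * z[:, 2]

def q_value(rtl, times, tA, tB):
    # Q-type quiescence value for a window [tA, tB]: average RTL over the window.
    win = (times >= tA) & (times <= tB)
    return rtl[win].mean()

# Toy usage with a random synthetic catalog (for illustration only).
rng = np.random.default_rng(1)
cat = np.column_stack([rng.uniform(0, 400, 2000), rng.uniform(0, 400, 2000),
                       rng.uniform(1990, 2008, 2000), rng.uniform(3.0, 5.5, 2000)])
times = np.arange(1995.0, 2008.0, 0.1)
rtl = rtl_series(cat, site=(200.0, 200.0), times=times)
print("Q over a six-month window in 2005:", q_value(rtl, times, 2005.0, 2005.5))
```

In this sketch the six-month averaging window mirrors the tB - tA = 6 months interval used in the study; mapping Q over a grid of nodes, as described next, simply repeats the same calculation at every node.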
Some case studies indicated that the Q parameter is effective in quantifying the spatial map of quiescence anomaly of seismicity was chosen as six months in this study, i.e., tB \u2212 tA = 6 months. Although the selection of this interval was somehow empirical ) with respect to the background seismicity from the start time of the adopted complete earthquake catalog to the ending time tB. The temporal changes of the Q-map can be obtained by sliding the calculated ending time of the earthquake catalog . The temporal change of seismicity in Yutian area showed that the most significant quiescence appeared at around late 2005 indicated that an anomalous zone of seismic quiescence appeared in 2005 near the Yutian epicentral area, consistent with the features revealed by the temporal variation of the RTL parameters. The size of the anomalous quiescence zone is about 200 km, several times of the rupture length of the main shock. The retrospective case study on the Yutian earthquake showed a combination of a temporal change and a spatial map of seismicity would provide useful information for seismic risk assessments."} +{"text": "The most pronounced effects of a given inflammatory stimulus on both systems are the induction of fever and the manifestation of inflammatory pain, which is an increased sensitivity of the nociceptive system due to the inflammatory response. Traditionally, the distinct functional components of the afferent somatosensory system have been investigated in vivo in conscious or anesthetized experimental animals. Most frequently, an inflammatory stimulation of the thermoregulatory system to evoke fever was achieved by administration of bacterial lipopolysaccharide (LPS) and continuous recording of body core temperature . InflammIn this issue of Pfl\u00fcgers Archiv \u2013 European Journal of Physiology, Leisengang et al. introduc"} +{"text": "Background: In order to tackle the public health threat of antimicrobial resistance, improvement in antibiotic prescribing in primary care was included as one of the priorities of the Quality Premium (QP) financial incentive scheme for Clinical Commissioning Groups (CCGs) in England. This paper briefly reports the outcome of a workshop exploring the experiences of antimicrobial stewardship (AMS) leads within CCGs in selecting and adopting strategies to help achieve the QP antibiotic targets. Methods: We conducted a thematic analysis of the notes on discussions and observations from the workshop to identify key themes. Results: Practice visits, needs assessment, peer feedback and audits were identified as strategies integrated in increasing engagement with practices towards the QP antibiotic targets. The conceptual model developed by AMS leads demonstrated possible pathways for the impact of the QP on antibiotic prescribing. Participants raised a concern that the constant targeting of high prescribing practices for AMS interventions might lead to disengagement by these practices. Most of the participants suggested that the effect of the QP might be less about the financial incentive and more about having national targets and guidelines that promote antibiotic prudency. Conclusions: Our results suggest that national targets, rather than financial incentives are key for engaging stakeholders in quality improvement in antibiotic prescribing. 
Clinical Commissioning Groups (CCGs) were established in the English National Health Service (NHS) in April 2013 as the statutory bodies responsible for the planning and commissioning of health care services for their local area ,5,6,7, tThis paper briefly reports the key findings from a workshop with antimicrobial stewardship (AMS) leads, which aimed to explore the experiences of selecting and adopting strategies within CCGs to help achieve the QP targets on improving antibiotic prescribing. We have also reported a concept mapping activity by the CCGs to develop a conceptual framework for modelling the mechanism of QP effect on antibiotic prescribing.Participants were sent an invitation letter via email with details of the study. We invited 80% of the AMS leads for the 191 CCGs in England (as of May 2019) [Participants were assigned to one of three small groups, and each group had a researcher to observe and record discussions. In assigning participants to small working groups, we aimed to have diverse perspectives and experiences in each group in relation to regions and prescribing behaviour of the CCGs represented by the participants. This was important to facilitate comparison of experiences and drive creativity and inclusiveness in the development of the conceptual framework. Discussions within each small group started with participants identifying and summarising the interventions and strategies adopted in their CCGs towards achieving the QP antibiotic prescribing targets. The second part of the workshop was an interactive activity to build a conceptual model to demonstrate the pathways from the QP to antibiotic prescribing through the identified AMS interventions/strategies , and associations between the identified potential mediators, if any.The modelling activity was followed by a whole-group discussion on key observations about how the QP had been implemented, building on earlier contributions by the participants in the three small groups. This discussion was useful for comparing and summarising the models, looking at triangulation of the data and exploring how the discussed experiences compared between groups and individuals .Detailed notes on discussions and observations within the small and whole groups were taken by PA, AB, CC and other researchers involved in the workshop. The notes were combined and analysed by PA and CC. We conducted a thematic analysis to identify themes across the dataset. The notes were descriptively coded to summarise the key concepts of statements and observations from the group discussions. We reviewed the codes for patterns. Codes that relate to specific fundamental concepts on the experiences of the participants were linked to form themes ,12. To iWe recruited 10 AMS leads, three GPs, and one Nurse Prescriber from a diverse range of CCGs and practices in relation to antibiotic prescribing rates and geographical location. Four of the participants who expressed interest did not attend the workshop.The findings were organised into three main themes, which constituted constructs linking the various aspects of participants\u2019 experiences (discussed below). Increased CCG engagement with practices on prudent antibiotic prescribing: Practice visits, needs assessments, peer feedback and audits were identified as a set of strategies integrated towards increasing the level of engagement between CCGs and practices, and within practices. 
Some CCGs used practice visits as an avenue to assess what prescribers need to help them reduce their antibiotic prescribing. Audits to assess antibiotic prescribing in practices against national guidelines were commonly adopted by CCGs alongside feedback to practices to motivate change in prescribing to help meet the QP antibiotic targets. Heavy workload in CCGs and practices was identified as the main barrier to regular antibiotic audit in practices. Participants also reported benchmarking local prescribing data against the national and regional averages as a social norm strategy.Local financial incentive schemes: The use of other local financial incentives by CCGs to encourage practices to reduce their antibiotic prescribing was widely discussed with most participants recognising the existence of local incentive schemes even before the QP. Some of the local incentives were integrated with the QP targets from 2015.Other strategies: Other strategies described to help with QP antibiotic targets included the use of AMS training resources\u2014in particular those available on the Treat Antibiotics Responsibly, Guidance, Education, Tools (TARGET) toolkit-, and C-reactive Protein point-of-care testing (CRP POCT).The participants stressed that the identified strategies were not adopted in isolation. For instance, some of the participants described the integration of audit, feedback, practice visits, and a CCG-level incentive scheme. A practice visit by CCG AMS leads (or Prescribing Advisors) was recognised as a strategy used to conduct an antibiotic audit and offer feedback to practices. Additionally, local financial incentives were sometimes set up to encourage practices to conduct self-audit.Increased availability of prescribing data for CCGs and practices: Increased surveillance, availability and feedback of prescribing data following the introduction of QP were perceived to be associated with increased engagement with the QP scheme. This was also reported to be related to increased adoption of strategies like antibiotic audit and benchmarking.National guideline on antibiotic prescribing: Participants also reported that having a national guideline helps in developing local strategies towards the QP antibiotic targets by providing a framework to underpin their AMS activities. In discussing the available guidelines on antibiotic prescribing, most of the participants indicated a preference for guidelines that were less frequently updated given the limited time allocated to AMS duties. In addition, most of the participants shared the view that the effect of the QP might be less about the financial incentive and more about having national targets and guidelines that promote antibiotic prudency. Even when their CCGs expected not to receive the QP financial payments , some participants reported that they still worked towards the QP antibiotic targets.One of the challenges to meeting the QP antibiotic targets reported by the participants was balancing the goal of AMS whilst ensuring access to antibiotics for those patients who needed them .Another reported challenge was engaging high antibiotic prescribers. AMS interventions such as practice visits, audits and feedback were often targeted by the CCGs specifically to high prescribing practices. 
Participants raised a concern about the possibility of disengagement by high prescribers due to continuously labelling and targeting them for most AMS interventions.Our paper summarises the strategies and activities reported by a selection of CCGs in England to facilitate antibiotic stewardship in primary care practices towards achieving the QP antibiotic targets. Evidence on the mechanism of impact of financial incentive schemes to improve the quality of care in primary care is insufficient given the limited number of rigorous studies on this . We demoOur findings are important to healthcare policymakers and quality improvement agencies in the planning and systemised implementation of programs towards antimicrobial stewardship. The participants reported some of the ways in which AMS strategies such as national guidelines, audits and feedbacks can be integrated to optimise their individual effects. The co-developed conceptual framework can inform a user-led investigation of the mechanism of the impact of financial incentive schemes on antibiotic prescribing in primary care practices. Given that the Quality Premium is one of the first national financial incentive schemes towards improvement in antibiotic prescribing in England, the developed framework depicting pathways for maximising the potentials of such a scheme can be useful in the development of future AMS financial incentive interventions.The adoption of some of the AMS strategies identified in our workshop was also reported by a survey of CCGs in England . PreviouWe recognise the limitation posed by the small sample size with regards to the representativeness of our findings. However, the diversity of the workshop participants was important in capturing different perspectives and experiences, contributing to the credibility of our findings."} +{"text": "Substantial progress has been achieved in the last two decades with the implementation of measles control strategies in the African Region. Elimination of measles is defined as the absence of endemic transmission in a defined geographical region or country for at least 12 months, as documented by a well-performing surveillance system. The framework for documenting elimination outlines five lines of evidence that should be utilized in documenting and assessing progress towards measles elimination. In March 2017, the WHO regional office for Africa developed and disseminated regional guidelines for the verification of measles elimination. As of May 2019, fourteen countries in the African Region have established national verification committees and 8 of these have begun to document progress toward measles elimination. Inadequate awareness, concerns about multiple technical committees for immunization work, inadequate funding and human resources, as well as gaps in data quality and in the implementation of measles elimination strategies have been challenges that hindered the establishment and documentation of progress by national verification committees. We recommend continuous capacity building and advocacy, technical assistance and networking to improve the work around the documentation of country progress towards measles elimination in the African Region. The WHO global vaccine action plan 2011-2020 outlines a goal for the elimination of measles and rubella in at least 5 WHO regions by 2020 . 
As of AElimination of measles is defined as the absence of endemic transmission in a defined geographical region or country for at least 12 months, as documented by a well-performing surveillance system. The 3 criteria for verifying measles and rubella elimination include: i) the documentation of the interruption of endemic measles and rubella virus transmission for a period of at least 36 months from the last known endemic case; ii) the presence of a high-quality surveillance system; iii) measles virus genotyping information that supports interruption of endemic transmission , 8. The The framework for documenting elimination outlines five lines of evidence that should be utilized in documenting and assessing progress towards measles elimination: 1) a detailed description of the epidemiology of measles and rubella since the introduction of measles and rubella vaccine in the national immunization program; 2) population immunity, presented as a birth cohort analysis with the addition of evidence related to any marginalized and migrant groups; 3) quality of epidemiological and laboratory surveillance systems for measles and rubella; 4) sustainability of the national immunization program, including resources for interventions to sustain elimination; 5) genotyping evidence that measles and rubella virus transmission has been interrupted , 8.When evaluating the lines of evidence, NVCs and RVCs are expected to review all the available data at both national and subnational levels that can be assessed to determine whether elimination has been achieved. The five lines of evidence facilitate a comprehensive evidence-based assessment of population immunity at all levels, immunization program performance and the capacity to sustain elimination.The WHO African regional standards for case-based measles surveillance have been in place since 2004, with an update in 2015 to include an optional elimination-standard surveillance which is recommended for countries with confirmed measles incidence approaching or less than 1 per million population. Elimination standard surveillance is expected to improve the sensitivity of measles surveillance by employing a broader suspect case definition requiring detailed active investigation of all suspected cases. As countries approach the elimination threshold, it will be critical to investigate each confirmed case of measles to determine sources of infection and reasons for lack of immunity. It will also be crucial to collect throat swab samples for viral genotyping, in addition to the serum specimens collected for serological confirmation. Elimination standard surveillance requires robust surveillance and laboratory capacity, as well timely and intensive investigation of sporadic as well as outbreak cases and is expected to be more costly to implement .The sensitivity of measles surveillance and the quality of data generated is critically important in the verification process. Without adequate surveillance sensitivity consistently attaining the performance indicators including characterization of circulating viral genotypes, it is difficult to generate evidence required to verify elimination. For example, NVCs in some countries in the WHO European region have been unable to determine whether disease transmission remained endemic or was interrupted. 
Reasons included inadequate surveillance systems with low sensitivity producing incomplete surveillance data that could not be clearly interpreted to demonstrate evidence in support of elimination; as well as inadequate or incomplete evidence of population immunity . In ordeThe measles verification framework shares similarities with the polio-free certification process. For a region to be certified polio free, the Regional Polio Certification Commission (RCC) will consider the following: i) the absence of wild poliovirus for at least 3 consecutive years from any source, in the presence of high quality, certification-standard AFP surveillance; ii) high routine immunization coverage with the third dose of oral polio vaccine (OPV3); iii) the completion of phase 1 poliovirus containment activities; iv) country readiness to respond to any poliovirus importation; v) the presence of a functional National Certification Committee to critically review, endorse and submit complete documentation to the RCC \u201314.In March 2017, the WHO Regional office for Africa developed and disseminated regional guidelines for the verification of measles elimination. Official communication was sent from the WHO regional office to 32 of the 47 countries in the region between May 2017 and February 2019, requesting them to establish an NVC and to commence the work of documenting progress towards elimination according to the regional guidelines and documentation template. WHO offered technical and financial assistance to establish NVCs. Not all countries were invited to establish NVC at the same time for several reasons. First, there is a limited number of technical staff from the WHO regional and sub-regional offices available to conduct briefings of the newly established NVCs. Second, countries were selected based on their relative progress towards the measles elimination targets in those countries nearing the elimination targets, and the potential advocacy value of NVCs to advance the implementation of elimination strategies in countries requiring significant improvement in their national immunization performance to advance towards measles elimination. A staged implementation of NVCs also allowed lessons to be learned from the initial country experiences.The global framework and guidelines outline the process and requirements for the documentation of measles and rubella/CRS elimination. At present, in the absence of a formal regional goal of rubella/CRS elimination the African regional guidelines are limited to the verification of measles elimination. The regional verification framework, the process and the role of the verification structures was presented and discussed in various annual meetings of national immunization program managers\u2019 in 2018 and 2019. Additionally, an initial workshop was conducted in March 2018 to orient the members of the RVC. The first five countries to submit documentation of progress to the RVC were reviewed in May 2019. The status of establishment and functionality of NVCs as of April 2019 is summarized in Despite the creation of NVCs and the organization of briefings for the NVC members, as of May 2019, only 8 countries in the region have begun to document progress toward elimination. A summary of the most common impediments in establishing NVCs and documenting country progress is detailed below.Inadequate awareness: national immunization program managers do not fully understand the purpose and function of NVCs. 
The justification and terms of reference for NVCs as well as the process of documentation of progress were presented in annual program meetings. However, misconceptions persist including the opinion that countries need to establish NVCs only when they get closer to claiming measles elimination status. Actually, the process of documenting progress with NVC oversight is expected to help weak performing countries to critically review their data, improve program performance and benefit from the advocacy of the NVC with national authorities and partners.Multiplicity of committees: discussions with various national immunization program managers have revealed concern about the existing multiplicity of national committees and advisory groups to support immunization. There is a limited pool of dedicated and available scientists and experts to engage in such voluntary work, especially in the smaller countries. WHO AFRO has indicated that countries may opt to utilize the expertise in the current national polio certification committees for the purpose of measles verification if practical. However, it is necessary to amend the terms of reference and nomenclature of the committee and conduct a technical briefing of the committee members.Availability of technical experts: WHO recommends that the membership of NVCs include specialists from various fields who will participate in the committee on a voluntary basis. However, in smaller countries, the available pool of high-level expertise from academic, research and clinical settings is often limited. In addition, available experts often have multiple professional responsibilities and engagements, and often are already engaged as members of NITAG, National Polio expert committee, national polio certification committee, or national polio containment taskforces.Prioritization of verification work: national immunization program staff handle numerous programmatic priorities and are fully engaged in a multitude of activities, including the development of annual and multiannual plans, development of GAVI application documentations, new vaccine introductions, SIAs, program assessments and appraisals, outbreak response activities and responding to the effects of civil conflict and natural emergencies. The NVCs require the attention, time and dedicated support of the national immunization program team, and the WHO country office immunization team to be fully functional.Inadequate human resources at regional level: there is a limitation of program staff in the WHO regional and sub-regional offices responsible for the overall coordination of measles and rubella elimination work. For this reason, it was not possible to quickly scale up and establish NVCs in multiple countries, conduct initial briefings and provide continuous to support the work of the NVC including associated work with data management and regular follow-up of the verification documentation at country level.Inadequate funding to support country level NVC activities: WHO provides catalytic funding for the establishment and functioning of the NVCs at country level. These funds cover costs related to the organization of technical meetings, joint working sessions to analyze data and prepare the country progress reports, supply stationery material and cover costs related to in-country travel when necessary. 
Currently, the WHO Regional office has limited committed funding to support NVC activities, requiring prioritization in the support to countries to establish NVCs.Data quality: in many countries in the African region, vaccination administrative data overestimates the levels of population immunity as compared to survey and WHO UNICEF estimates of coverage. This discrepancy also exists in data at the subnational level. As a result, unless there are recent coverage surveys done to estimate subnational levels of coverage, it is often difficult to assemble accurate information regarding population immunity levels [y levels , 16. They levels .Incomplete implementation of measles elimination strategies: as of April 2019, only 27 of the 47 countries in the region have introduced MCV2 in their routine immunization schedule. For countries having MCV2 for more than 3-5 years, the drop-out rate between MCV1 and MCV2 is more than 10% in 17 out of the 26 countries for 2017. This is a major programmatic weakness having a bearing on the documentation of one of the lines of evidence [evidence . In the Surveillance funding gaps: forty four out of 47 countries in WHO African region have been implementing measles case based surveillance since at least 2006, with the support of a network of national and regional referral measles serological laboratories. However, over the past five years, the quality of case-based surveillance has not been improving across the region despite the fact that countries are approaching the 2020 target date for elimination [mination . This isStock out of lab test kits: the regional serological measles laboratory network consists of 49 national and subnational laboratories in 44 countries across the region. The network is supported by WHO to implement standardized testing methods, utilizes similar test kits and is supported with periodic external quality assurance and accreditation exercises. In the period from 2015 to 2017, nearly all the laboratories in the regional measles laboratory network experienced prolonged periods of stock-out of laboratory test kits as a result of delays in resupplying attributed to inadequate funding. This has seriously limited the surveillance system\u2019s sensitivity and its ability to generate high quality information for the purpose of verification [Lack of genotypic data: despite the availability of services in the regional reference laboratories to perform molecular characterization of measles and rubella viruses, many countries have not yet made full use of this opportunity and therefore lack the baseline data required to assess endemic transmission patterns and distinguish them from importations that is important for the verification of elimination [mination .Inadequate data on CRS occurrence: CRS sentinel surveillance is established in only 9 countries across the region. However, several countries have some documentation from retrospective case reviews. CRS is often not recognized commonly as a clinical condition, and requires more specialized clinical skills and diagnostic equipment for initial case detection, there is lack of adequate documentation at country level [ry level .Previous experience with polio certification: countries across the region already have extensive experience with the process of preparing polio eradication progress reports and national certification documentation. 
The lessons from African regional certification of polio eradication are being utilized to ensure that the NVCs and the RVC establish robust processes from the outset [e outset , 14.Functional regional commission: the regional director of the WHO African regional office has officially nominated the members of the Regional Verification Commission. The commission received its introductory briefing in March 2018. The second RVC meeting in May 2019 was used to review the progress reports from 5 countries. The RVC review of country documentation has helped to identify the strengths and weaknesses in country programs with regards to documenting the lines of evidence. The lessons from this exercise will be used to assist other countries, to use the opportunity to critically review their program data and the implementation of measles elimination strategies.Advocacy value of verification committees: while the main objective of the NVCs and the RVC is to support countries to develop high quality documentation of progress towards elimination along the five lines of evidence, the terms of reference of the NVCs were designed to include advocacy as one of the key functions in their respective countries and at regional level for the RVC. The members of the committees are prominent clinicians, academicians and researchers whose professional reputations can garner support, visibility and influence policy makers in favor of measles elimination.External technical assistance: to advance the work of verification of measles elimination, the WHO regional office received support from the US Centers for Disease Control (CDC) to complete a detailed analysis of programmatic data in Seychelles and Rwanda to compile their initial documentation submitted to the RVC. This work has helped to critically examine data quality and availability issues, as well as to refine the documentation template.Country by country verification: the verification of measles elimination is assessed country-by-country, unlike the polio eradication program, where certification is done only on a regional basis. Such a country-focused approach gives high performing countries the opportunity to get official recognition for their progress and motivates others to strive to attain the elimination targets. 
In addition, when countries are presenting their progress report to the Regional Verification Commission, other NVCs and national immunization program managers are invited to participate and learn from the other country experiences.In order to address these challenges and strengthen the ability of NVCs to document progress towards measles elimination, the following priority actions will need to be taken at regional and country levels.Raise awareness: utilizing all opportunities to communicate to the national authorities and immunization program managers regarding the value NVCs can provide to assist countries with documenting progress towards measles elimination and advocating for better government ownership and partner support.Document and disseminate progress: scaling-up the documentation of progress towards measles elimination among the high performing countries to help them verify elimination as early as possible and to document the advocacy work of NVCs.Technical assistance: develop a regional pool of consultants that can assist countries in preparing the initial documentation of progress for review by NVCs.Capacity building: WHO will continue to build the technical capacity and broader programmatic understanding of NVC members by engaging them as participants in immunization program technical meetings.Networking: create opportunities and platforms for better networking and experience sharing among NVCs.Funding: WHO and partners to allocate predictable funding to support the work of NVCs.Sub-national documentation: in large countries, explore the possibility of NVCs monitoring and documenting progress toward measles elimination sub-nationally by province/State/Region level with their own documentation exercise. This will be a resource intensive exercise to be done in one or two countries, making sure not to burden national programs and in such a way as to carefully document lessons.The authors declare no competing interests."} +{"text": "When patient-reported measures are translated and cross-culturally adapted into any language, the process should conclude with cognitive interviewing during pretesting. This article reports on translation and cross-cultural adaptation of the Disabilities of the Arm, Shoulder and Hand (DASH) questionnaire into Afrikaans (for the Western Cape). This qualitative component of a clinical measurement, longitudinal study was aimed at the pretesting and cognitive interviewing of the prefinal Afrikaans (for the Western Cape) DASH questionnaire highlighting the iterative nature thereof. Twenty-two females and eight males with upper limb conditions were recruited to participate at public health care facilities in the Western Cape of South Africa. Cognitive interviews were conducted as a reparative approach with an iterative process through retrospective verbal probing during a debriefing session with 30 participants once they answered all 30 items of the translated DASH questionnaire. The sample included Afrikaans-speaking persons from low socioeconomic backgrounds, with low levels of education and employment (24 of 30 were unemployed). Pragmatic factors and measurement issues were addressed during the interviews. This study provides confirmation that both pragmatic factors and measurement issues need consideration in an iterative process as part of a reparative methodology towards improving patient-reported measures and ensuring strong content validity. 
The DASH, developed in 1996 by the Institute of Work and Health (IWH) in the context of Canada, has been translated into many languages from around the world with 12 language versions for developing countries , p, pbest eThe iterative nature of the process is considered a strength. A limitation of this study is that the PI had no prior experience with CIs.This study highlights the process of CI employed in the pretesting and cognitive interviewing of the prefinal version of the Afrikaans for the Western Cape DASH-PAV. It is recommended that pragmatic factors experienced during CIs be addressed with adaptations such as the availability of reading glasses or through experience with the target population and/or similarities between the interviewer and the target population. In addition, measurement issues have to be addressed and evaluated in subsequent interviews. A further recommendation is to consider stopping CIs once it is evident that pragmatic and measurement issues were addressed as part of a reparative methodology with an iterative approach following translation and cross-cultural adaptation of the DASH. This study also provides evidence for content validity of the newly translated and cross-culturally adapted Afrikaans for the Western Cape DASH questionnaire."} +{"text": "The early delivery of effective analgesia is considered to be an important component of pre-hospital trauma care; however, the provision of analgesia by pre-hospital clinicians is often inadequate. While a number of studies have explored the underpinning attitudes and barriers to the paramedic administration of analgesia in various patient groups, specific barriers which prevent the paramedic administration of pre-hospital analgesia to adult trauma patients in the UK are poorly defined. The aim of this small study was to identify and define potential barriers to analgesia administration in this specific population. In doing so, this study will increase awareness and will form an important basis for future work aimed at improving the delivery of pre-hospital analgesia to adult trauma patients in the UK.Twenty paramedics employed in an urban ambulance service in the UK volunteered to participate and were recruited into this study. Semi-structured interviews were conducted with participants during 2018 in order to explore the potential barriers to analgesia administration. An inductive thematic analysis was undertaken on interview transcripts and potential barriers were defined from the themes identified, by reviewing the coded data within each theme.All of the participants completed the study which identified 12 potential barriers to the paramedic administration of pre-hospital analgesia to adult trauma patients. These barriers were diverse in nature and related to factors including the patient presentation, paramedics\u2019 perceptions of pain, the cautious use of analgesics, paramedics\u2019 scope of practice, organisational procedures and clinical guidelines, as well as factors relevant specifically to the pre-hospital environment. 
Some of these barriers may be modifiable through improved paramedic education and training or by widening of the paramedic scope of practice, whereas others may not be modifiable.There were a number of limitations to this study including the use of volunteer self-selection which may present a source of self-selection bias, the recruitment of paramedics from a single system which reduces the generalisability of the results and the absence of a second researcher.The identification of these potential barriers should form a basis for future work aimed at improving the paramedic delivery of pre-hospital analgesia to adult trauma patients. While many of the barriers identified may be present across pre-hospital care systems, some may be specific to the system in which this study was undertaken, to other urban systems or to paramedic practice in the UK."} +{"text": "Plant sexual systems play an important role in the evolution of angiosperm diversity. However, large-scale patterns in the frequencies of sexual systems and their drivers for species with different growth forms remain poorly known. Here, using a newly compiled database on the sexual systems and distributions of 19780 angiosperm species in China, we map the large-scale geographical patterns in frequencies of the sexual systems of woody and herbaceous species separately. We use these data to test the following two hypotheses: (1) the prevalence of sexual systems differs between woody and herbaceous assemblies because woody plants have taller canopies and are found in warm and humid climates; (2) the relative contributions of different drivers to these patterns differ between woody and herbaceous species. We show that geographical patterns in proportions of different sexual systems differ between woody and herbaceous species. Geographical variations in sexual systems of woody species were influenced by climate, evolutionary age and plant height. In contrast, these have only weakly significant effects on the patterns of sexual systems of herbaceous species. We suggest that differences between species with woody and herbaceous growth forms in terms of biogeographic patterns of sexual systems, and their drivers, may reflect their differences in physiological and ecological adaptions, as well as the coevolution of sexual system with vegetative traits in response to environmental changes. The sexual systems of plant species play a significant role in the evolution of angiosperm diversity , variatiThe evolution of plant sexual systems has been widely hypothesized to be closely linked to the evolution of plant growth forms the geographical patterns in sexual system composition differ between woody and herbaceous species; (2) the relative contributions of different drivers to these patterns differ between woody and herbaceous species. Dioecious species are more common in humid areas and in floras with older and more woody species, whereas hermaphroditic species are more common in temperate arid areas of northwestern China and in floras with younger and more herbaceous species.Flora of China (Flora Republicae Popularis Sinicae (126 issues of 80 volumes), Seeds of Woody Plants in China (http://efloras.org/). The Tree of Sex and journal publications , Higher Plants of China, The Atlas of Woody Plants in China (http://www.nsii.org.cn/), and screened. NSII included over fifteen million specimen records that are published recently. 
In total, our dataset consists the distributions of 19,780 Chinese plant species with data for sexual systems.Data on the distributions of plant species were compiled from all national, provincial and local floras, including Species distributions were rasterized to a grid with a spatial resolution of 100 \u00d7 100 km to eliminate the potential bias of area on subsequent analyses. We removed the grid cells with less than 50% of their area along the country borders and coastal areas. In all, 949 grid cells were included in the analyses reported below.Flora of China during the flowering period , and henTo evaluate the effect of evolutionary time on biogeographical patterns in sexual system compositions, we used the average genus age derived from three phylogenies including Flora of China . As lower and upper limits of mature height are normally reported for most species in floras, we used the average of lower and upper limits to represent the mature height of species. Species without erect stems were excluded from our analyses following Plant mature height and longevity of plant species are expected to be positively associated . Here weFirst, based on the data of sexual systems, growth forms, and distributions of all species, we calculated the proportions of species with different sexual systems within each grid cell for all species together and for each growth form separately. We used Pearson correlation coefficients to evaluate the similarity between the geographical patterns in the proportions of different sexual systems of woody and herbaceous species and used Dutilleul\u2019s t test to evaluate the significance of these correlations due the influences of spatial autocorrelation on significance tests of the GLM model. Modified t tests were used to correct for the effect of spatial autocorrelation on p values with quasi-Poisson residuals to evaluate the explanatory power of each predictor on the proportions of sexual systems per grid cell and this was conducted separately for each of the three species groups . The climate variable , genus age, and plant height were used as predictors and the proportions of each sexual systems per grid cell were used as response variables. The explanatory power of each variable was estimated as the adjusted Rp values .To evaluate the influences of growth forms on the relationships between the proportion of sexual systems and different predictors, we first built spatial linear models (SLM) for the combined dataset of sexual system for both growth forms together using spatial simultaneous autoregressive error (SAR) models. SLMs could account for the effects of residual spatial autocorrelation on the significance tests of regression slopes . The cliAll analyses were performed in R 3.4.3 .P > 0.05; The spatial patterns in the proportions of hermaphroditic and monoecious species per grid cell were consistent between woody and herbaceous growth forms by improving the efficiency in pollen and seed dispersal, which further promote the maintenance and continuation of species populations tend to exhibit short life-cycles, fast rates of population but limited resource accumulation was higher in southern China than in other regions. This finding supports the hypothesis that dioecy is more common in tropical and subtropical floras where climate is warm and humid (Rhamnus and Acer) is higher in Northeast China with relatively higher AET and MPWQ compared to other regions in China. 
These results reveal that the differences in requirements for hydrothermal conditions between woody and herbaceous growth forms may influence their geographical patterns in proportions of sexual systems, consistent with our hypothesis. Generally, woody species often have deep roots, and herbaceous species have relatively shallow roots. Therefore, herbaceous species have lower efficiencies in soil water usage and are more sensitive to reduced water supply compared with woody species , which might be due to the higher population turnover rate and faster adaptation to climate of herbaceous species than those of woody species , the Strategic Priority Research Program of Chinese Academy of Sciences (#XDB31000000), and National Natural Science Foundation of China .The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Helicobacter pylori in the etiology of gastric cancer has been elucidated , based on changes to the fecal microbiota in hepatocellular carcinoma patients. This dysbiosis index may have the potential to diagnose those with the disease and may in the future determine prognosis at different stages of hepatocellular carcinoma .One area where microbiome profiling could potentially improve patient outcomes is as a diagnostic tool to identify those at high risk of malignant transformation or to identify the stage of cancer development. The diagnostic potential of the gut microbiome is highlighted by Fusobacterium spp. in the development of OSCC . The influence of the oral microbiome may also extend beyond the oral cavity, as shown by Chen X-H. et al., who demonstrate that gastric cancers exhibit increased abundance of taxa normally associated with the oral microbiome. The presence of H. pylori in these gastric samples also impacts upon the composition of these communities, suggesting that H. pylori may have a central role in the development of dysbiosis. The involvement of H. pylori the development of gastric biofilms is further explored by Rizatto et al., who discuss the potential role and mechanisms of biofilm formation in the gastric mucosa and identify future directions for this research area.The collection continues with several studies of the oral microbiome. The oral cavity is one of the most diverse areas of the human GIT and an increasing number of studies are demonstrating the increased abundance of F. nucleatum and Streptococcus gallolyticus subsp. Gallolyticus. Ma et al. present a timely and comprehensive review of the impact of the gut microbiome on cancer chemotherapy. Continuing this topic, Cong, Zhu, Zhang et al. present novel data showing the impact of chemotherapy on microbial networks in the intestinal microbiota, opening up the possibility to explore how these changes impact upon chemotherapeutic outcomes. The same authors also investigate the effect of surgical interventions on the gut microbiome of patients with CRC . These data show that surgical interventions have a strong impact on the microbiome resulting in increased levels of Klebsiella spp., which was significantly linked with lymphatic invasion.The intestinal microbiota is under intense investigation for its roles in regulating metabolism, immunity, and the interaction with cancer cells. In addition, specific pathogens have been implicated in carcinogenesis including Wu et al. 
present exciting data showing that chitooligosaccharides protect mice from colorectal carcinomas by reducing the levels of Escherichia-Shigella, Enterococcus, and Turicibacter and by promoting the growth of butyrate-producing bacteria; such targeted interventions to restore healthy or beneficial microbiomes are still in their infancy. In summary, this collection offers an insight into the developing area of microbiome research in oncology and highlights the growing potential of these scientific tools for improving the diagnosis and treatment of these devastating diseases. The manuscript was written and prepared by GM and NA-H. All authors contributed to the article and approved the submitted version. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "The Food Safety Commission of Japan (FSCJ) was requested by the Ministry of Health, Labour and Welfare (MHLW) to conduct a risk assessment of cattle meat and offal imported from the United States of America (U.S.A.), Canada and Ireland. FSCJ assessed the potential influence on bovine spongiform encephalopathy (BSE) risks to human health of changing the age of cattle from which meat and offal may be imported from the three countries, from the current limit of under 30 months of age to no age limit, in line with the international standards for mitigating BSE risks. FSCJ judges that the control measures regarding \u201crisks related to slaughtering and meat processing\u201d are appropriately implemented in the three countries. FSCJ concludes that the potential variation of BSE risks to human health caused by removing the age limit on cattle meat and offal, excluding specified risk materials (SRMs), imported from the three countries in line with the international standards is negligible. Using reference materials and documents submitted by the MHLW regarding the BSE situation in the three countries, FSCJ assessed the risk of the BSE agent in cattle meat and offal in relation to border measures such as the slaughter age limit. The cases of classical BSE continue to decrease and have scarcely been reported worldwide in recent years. Owing to the decreased number of cattle exposed to the BSE prion, the historical and existing risk factors are expected to be reduced to a large extent in the three countries currently evaluated. Therefore, even in countries where typical BSE has occurred in cattle born within the last 11 years, the risk of developing typical BSE is estimated to be extremely low as long as the control measures related to the historical and existing risk factors of BSE in live cattle are properly implemented.
In addition, assuming the control measures are maintained at the same level as at present, the frequency of occurrence is estimated to remain below the current level. Under these circumstances, FSCJ decided to assess the risk of meat and offal through the verification of control measures such as specified risk material (SRM) removal and ante-mortem inspection of cattle at the slaughterhouses of these countries. While the international standard (the OIE Terrestrial Animal Health Code) does not set an age limit for trade in beef, FSCJ decided to evaluate whether or not the risk of variant Creutzfeldt-Jakob disease (vCJD) associated with intake of the BSE prion has reached an extremely low level. The results of the risk assessment are summarized below. In the three countries, indigenous classical BSE cases have not been confirmed in the U.S.A., and few cases of indigenous classical BSE are confirmed in Canada and Ireland at present. Thus, the control measures against the historical and existing risk factors (the Terrestrial Code) are recognized to be effective toward preventing the incidence of classical BSE in these countries. Therefore, FSCJ judges the incidence of classical BSE to be continually quite unlikely and/or to remain below the current level. According to the data on the body distribution of abnormal prion protein (PrPSc) in classical BSE-infected cattle, the amount distributed in tissues other than SRMs is extremely low. Epidemiological information on vCJD cases supports this observation. Therefore, it is assumed that the removal of SRMs ensures, if any, negligible intakes of PrPSc through meat and offal. Moreover, ante-mortem inspections remove cattle manifesting clinical signs. Accordingly, vCJD is considered highly unlikely to develop in association with consumption of cattle meat and offal (excluding SRMs) imported from the three countries after the age limit is removed, assuming that current risk control measures are continuously implemented as mentioned above. For atypical BSE, the previous assessment on \u201cBSE counter measures applied to domestic cattle\u201d concluded that vCJD is highly unlikely to develop in association with consumption of classical BSE prions through cattle meat and offal (excluding SRMs) under the continuous implementation of current control measures against BSE, and no new findings affecting this conclusion are available. Considering thoroughly the available evidence, FSCJ reached the following conclusion on the risk of the BSE agent in cattle meat and offal imported from the U.S.A., Canada and Ireland when the age limit is raised from the current 30 months of age in line with the international standards: FSCJ concludes that the potential variation of BSE risks to human health from removing the age limit on cattle meat and offal (excluding SRMs) imported from the three countries in line with the international standards is negligible. FSCJ drew this conclusion assuming that current risk control measures are continuously implemented. Therefore, risk management organizations should continuously collect related information, particularly regarding feed regulation, surveillance, ante-mortem inspection at slaughter and control of SRM removal."} +{"text": "The upper cervical region is a complex anatomical structure. Myodural bridges between the posterior suboccipital muscles and the dura might be important in explaining conditions associated with upper cervical spine dysfunction, such as cervicogenic headache.
This cadaver study explored the upper cervical spine and evaluated the myodural bridges along with position of spinal cord in response to passive motion of upper cervical spine.A total of seven adult cadavers were used in this exploratory study. The suboccipital muscles and nuchal ligament were exposed. Connections between the Rectus Capitis Posterior major/minor and the Obliquus Capitis minor, the nuchal ligament, posterior aspect of the cervical spine, flavum ligament and the dura were explored and confirmed with histology. The position of the spinal cord was evaluated with passive motions of the upper cervical spine.In all cadavers connective tissues attaching the Rectus Capitis Posterior Major to the posterior atlanto-occipital membrane were identified. In the sagittal dissection we observed connection between the nuchal ligament and the dura. Histology revealed that the connection is collagenous in nature. The spinal cord moves within the spinal canal during passive movement.The presence of tissue connections between ligament, bone and muscles in the suboccipital region was confirmed. The nuchal ligament was continuous with the menigiovertebral ligament and the dura. Passive upper cervical motion results in spinal cord motion within the canal and possible tensioning of nerve and ligamentous connections. The relationship between the upper cervical spine and headaches has been established . MusculoAny structure in the upper cervical spine such as the intervertebral disc, spinal ligaments, muscles, articulation, dura, and nerve roots could be a generating pain source and causing musculoskeletal dysfunction leading to cervicogenic headaches . MovemenIn order to prevent compression of the dura mater during motion of the spinal column, the dura is anchored in the vertebral canal by multiple fibrous connections . These mPosterior to the spinal column is the nuchal ligament, the cephalic extension of the supraspinous ligament. It extends from the C7 spinous process and attaches to the cervical spinous processes and midsagittally along the occipital bone, terminating at the external occipital protuberance . The majIn the upper cervical region the presence of fascial connections between the Rectus Capitis Posterior Major, Rectus Capitis Posterior Minor, and the Obliquus Capitis Inferior and the dura have been demonstrated . These cA total of seven adult cadavers , all over the age of 50 were used in this exploratory study. All cadavers were acquired from the Anatomical Board of the State of Florida, following standard protocols according to Anatomical Board regulations and Florida State Law; all of them had been embalmed with a formalin-based solution. No medical history was known about any of the cadavers. Students of the Doctor of Physical Therapy program at Florida Gulf Coast University had previously performed dissections of the cadavers, though leaving the suboccipital region intact. The skin and superficial fascia over the posterior neck region was removed. Following this the trapezius and the splenius capitis were detached from the occiput, as was the semispinalis capitis. Following this the suboccipital muscles and the nuchal ligament were exposed . The conAfter posterior exploration was completed, three specimens underwent a transverse cut between the occipital condyles and atlas, with an anterior reflection of the skull and pharynx, allowing for a direct transverse view of the spinal canal, dura and meningovertebral connections . 
To determine what type of fibrous tissue connects the nuchal ligament to the dura, we decided to analyze this connection further. In order to provide histological information, we cut the C1 and C2 segments, including all soft tissues, of one of these three specimens and decalcified it in an aqueous solution of sodium citrate and formic acid, until test samples of this solution added to an ammonium oxalate solution did not form any precipitate (the sample was checked weekly). Once decalcified, the specimen was embedded in a paraffin matrix and thin sectioned with a microtome to a thickness of five \u03bcm. Sections were mounted on glass slides and stained using Masson\u2019s Trichrome staining protocol to identify collagen fibers in the final sections. Finally, specimens were imaged using transmitted light with a Zeiss Axioskop II microscope and a Lumenera Infinity digital camera. Our tissue samples are shown in the accompanying figures. The dissection of our specimens confirmed that there is a deep fascial connection between the Rectus Capitis Major, Minor, and the Obliquus Capitis Inferior, and this appears to blend with the dura in the spinal canal. The myodural bridge in our samples appeared fairly wide and covered a large portion of the posterior arch of the atlas on each side. In the sagittal dissection we observed that the nuchal ligament was continuous with the dura between the occipital bone and the atlas and between the atlas and the axis. The dural sac and spinal cord moved within the spinal canal as a result of motion of the upper cervical spine. Flexion and extension resulted in forward and backward movement of the spinal cord, respectively. Rotation resulted in homolateral movement and concurrent tensioning of the contralateral spinal nerve. The aim of this cadaver study was to explore the upper cervical region for soft tissue connections between the suboccipital musculature and ligaments and the dura mater. The superior cervical vertebral column is a very complex anatomical region. There appears to be a clear anatomical relationship between muscular, ligamentous and other soft tissues and the dura mater in the high cervical region. Dissection in both the sagittal and the transverse plane demonstrates clearly that the epidural space of the upper cervical spine contains many connective tissues and fascia layers that merge and blend together regardless of their origination. It was observed that the nuchal ligament has a direct connection with the dura and blends with the meningovertebral ligament to merge with the dural sac. This observation concurs with previous reports about the nuchal ligament dural connection. The secondary aim of this study was to identify how the position of the spinal cord within the vertebral foramen of C1 changed during passively induced motion of the upper cervical spine. When the cervical spine was extended in our cadavers, it was observed that the spinal cord appeared to move forward within the spinal canal. When C1 was rotated on C2 in our specimens, we observed that there was an ipsilateral movement of the spinal cord within the spinal canal. This cadaver study cannot be used to determine the clinical consequences of the collagenous connections between the nuchal ligament and the dural sac and of the myodural bridges (with and without movement). However, these connections might help explain cervical-cephalic pain and conditions such as cervicogenic and tension-type headaches.
Cervical pain patients can present with a variety of musculoskeletal myofascial syndromes in the upper quadrant and reduced active and passive movements . The preA possible drawback of this study was the use of formalin-embalmed specimens were used. It is well-known that the fixation and desiccation effects of formalin solution can cause tissue dehydration and shrinkage and might have affected the presentation and density of the tissues we observed . This miOur cadaver study confirmed the presence of a large network of tissue connections between ligament, bone and muscles in the suboccipital region. There was a fairly wide myodural bridge between the Rectus Capitis Major, Minor and the Obliquus Capitis Inferior and dura sac in the spinal canal. We observed that the nuchal ligament was continuous with the meningovertebral ligament between C2 and the dural sac. Histology revealed this connection was high in collagenous tissue similar to that of the nuchal ligament and the dura. Additionally, a clear connective tissue strand with the same tissue formation was identified between the posterior arch of C1 and the dura. It was demonstrated that passive motion of the upper cervical spine and C1 results in a change of position of the spinal cord within the spinal canal and resulting in tightening of the myodural, meningovertebral ligaments, and spinal nerves. Future studies should further evaluate this relationship and determine the clinical significance.10.7717/peerj.9716/supp-1Supplemental Information 1Click here for additional data file."} +{"text": "Surgery is an important part of the management of patients diagnosed with DFO. It consists in some selected patients, to remove all or part of the infected bone(s) or even to amputate all or part of the foot. Despite the use of sophisticated imaging techniques, it is however difficult to remove all the infected tissue while respecting the principles of an economical surgery. Bone biopsy performed at the margins of the resection permits to identify residual osteomyelitis and to adjust the post-surgical antibiotic treatment. Some recent studies have reported the way to perform bone margin biopsies and have assessed the impact of the bone results on the patient's outcome. However, the real impact of a residual osteomyelitis on the risk of recurrent DFO is still debated and questions regarding the interpretation of the results remain to be solved. Similarly, the consequences in terms of choice and duration of the antimicrobial treatment to use in case of positive bone margin are not clearly established. Osteoarticular infections occur in 20 to 60% of diabetic foot infections and profoundly worsen the outcome of the patients The main limitation of the surgical part of the treatment of DFO is the uncertainty about the persistence of residual osteomyelitis following bone resection. Indeed, if all the infected bone tissues have been removed, the infection is no longer an osteomyelitis (or an osteitis) and can therefore be treated as a soft tissue infection (except periarticular structures such as tendons and ligaments). This is of importance since the prognosis of non-bone infections is better and the antibiotic therapy is easier regarding the choice of the antibiotic regimens and their duration than that of DFO The aim of the present narrative literature review is to provide readers with up-to-date knowledge on the different surgical approaches for DFO and the consequences of bone examination results in terms of antibiotic treatment. 
The summary of the recommendations/current state of knowledge regarding surgical bone biopsy in patients treated for DFO is presented in Table Surgical removal of the entire infected bone has been considered in the past and even recently as the standard treatment Surgery is the unique means to drastically reduce the amounts of bacteria present in bones and sometimes in contiguous tissues . Surgery consists in these settings in draining pus and removing economically all necrotic tissues et al. showed that limited resection of the infected phalanx or metatarsal bone under the wound, together with removal of the ulcer site, was effective to obtain complete wound healing et al. reported a consecutive series of 185 diabetic patients with osteomyelitis of the foot and histopathological confirmation of bone involvement, all treated surgically, including 91 conservative surgical procedures The IWGDF guidance on diabetic foot infection proposes to favour a surgical approach of DFO in case of systemic signs of infection, substantial cortical destruction, osteolysis, macroscopic bone fragmentation (sequestration), an exposed bone within a forefoot ulcer, open or infected joint space and when the patient has prosthetic heart valves Conservative surgery does not entirely the risk of transfer syndrome as shown by Aragon-Sanchez et al. who reported new episodes of osteomyelitis in 16.9% of the patients Relapsing osteomyelitis episodes observed in patients operated for a DFO are not univocal. The new osteomyelitis may be in relation with a new episode of infection of the initial foot ulcer or at the adjacent rays in relation with the transfer syndrome. Another cause may be the absence of sterilization of the initial infected site although remission of DFO can be obtained despite the complete excision of the infected bone. The rate of remission may however be lower when there is a residual bone infection In a series of 66 cases of amputations defined as surgical removal of bone for DFO, 39 (59%) had remission at 12-month follow-up The dorsal and plantar approaches to metatarsal head resections in patients with DFO have been compared regarding the recovery time and the development of complications Two different types of bone resection should be considered, one is the amputation of a ray that can be at a metatarsal or phalangeal level creating one bone stump and the other is a resection of a joint in which there are two bone stumps. During the amputation, a bone margin biopsy is recommended while it is less clear that a biopsy should be done at each bone stump in case of conservative surgery. A bone biopsy performed at each stump appears to be ideal as there is no apparent reason which supports that the infection would only spread to proximal or distal direction. On the other hand, a bone biopsy at both stumps is likely to complicate the intervention. In case of a trans-articular amputation, it seems more appropriate to remove the cartilage which is avascular and therefore less able to defend itself against the pathogens. Whatever the type of surgery, it is important to take a bone sample with proper precautions to avoid the contamination of the samples i.e., by changing gloves, using sterile surgical instruments and a no-touch technique during the biopsy. 
External needle biopsy does not seem adequate since the level of amputation is not known with precision before the intervention and is not suitable for obtaining a bone margin biopsy.While it is recommended to use both histology and culture to affirm the diagnosis of osteomyelitis We are unaware of any studies that assessed the concordance of pathogens between the amputated bone and the residual stump microbiology. Another point in the daily practice is the difficulty to interpret the culture results especially when bacteria from the skin flora have been identified which may indicate a possible contamination of the bone samples. This issue has been addressed in a recent paper from Mijuskovic et al. who reported a series of 51 consecutive patients operated for toe or forefoot amputation White et al. prospectively studied a series of 25 patients with a suspicion of DFO who had combined histologic and microbiologic evaluation of percutaneous bone biopsies In their recent study, Schmidt et al. sent the bone margin specimens to two board-certified pathologists with expertise in bone pathology who both used a strict classification of histopathology definitions for the diagnosis of osteomyelitis Data on the optimal antibiotic therapy to administer after bone resection of foot amputation in these patients are lacking. Current guidelines recommend treating with antibiotics for up to 6 weeks if the culture demonstrates pathogen(s) or if the histology demonstrates osteomyelitis The optimal duration of DFO is a difficult subject as only one randomized controlled study has addressed this question In the study from Johnson et al. the mean \u00b1 SD antibiotic duration was 37.6 days \u00b1 24.1 versus 17.7 days \u00b1 29.6 (p=0.001) in patients with positive and negative bone margins, respectively The presence of bacterial biofilms may be of importance regarding the choice of the antimicrobial treatment given the differences in efficacy of antibiotics against planktonic and slow-growth bacteria (e.g. rifampicin versus beta-lactams which only work against multiplying bacteria) As many of the bone biopsies contain bacteria embedded into a biofilm, this may explain why rifampicin and fluoroquinolone combination regimens are likely to be associated with a better outcome of the patients Bone resection and minor even major amputations are unlikely to remove all the bacteria involved and it seems therefore important to obtain data on the persistence of infected tissues including bone. Most but not all the studies have reported a negative effect of residual osteomyelitis on the outcome of patients operated for DFO. A lot of questions about the examination of the bone specimens have not yet been solved. Some studies have pointed out the discrepancies between culture and histology of bone margin biopsies. These studies suggest that some of the positive cultures of bone margin biopsies are in fact in relation with contaminated specimens. Histology may help interpret the results of bone margin biopsies but the delay for obtaining the results in the daily practice appears to be a serious limitation in its routine implementation. In addition, the reliability of histology in these settings is an issue as suggested in some studies. 
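As a purely illustrative aside, agreement between paired culture and histology results for margin specimens is commonly summarized as percent agreement and Cohen's kappa; the sketch below uses entirely hypothetical counts (not data from the studies cited here) to show how such a concordance figure could be computed.

```python
# Illustrative only: agreement between culture and histology of bone margin
# biopsies from a hypothetical 2x2 table of paired results.

def agreement_and_kappa(both_pos, culture_only, histo_only, both_neg):
    """Percent agreement and Cohen's kappa for paired dichotomous results."""
    n = both_pos + culture_only + histo_only + both_neg
    observed = (both_pos + both_neg) / n
    # Expected agreement under chance, from the marginal frequencies.
    p_culture_pos = (both_pos + culture_only) / n
    p_histo_pos = (both_pos + histo_only) / n
    expected = p_culture_pos * p_histo_pos + (1 - p_culture_pos) * (1 - p_histo_pos)
    kappa = (observed - expected) / (1 - expected)
    return observed, kappa

# Hypothetical example: 25 margin specimens with both tests performed.
obs, kappa = agreement_and_kappa(both_pos=8, culture_only=6, histo_only=2, both_neg=9)
print(f"Observed agreement: {obs:.2f}, Cohen's kappa: {kappa:.2f}")
```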
The type and duration of the antibiotic treatment to consider in patients with a persistent osteomyelitis on the bone margins are currently unknown.In conservative surgery, bone specimens should probably be taken at both proximal and distal margins which, however, complicates the procedure, especially if both histology and culture are performed systematically. The role of molecular genetic techniques in the detection of bacteria in bone margin biopsies is not clearly established. These techniques do not provide information on the antimicrobial susceptibility profile of the bacterial strains identified. In the absence of indication regarding the pathogenic role of the organisms identified by these techniques, no recommendations have been made so far on their use in these settings. Data on the choice and duration of the antibiotic treatment to prescribe following bone margin biopsy are lacking. The first question is about the use of empirical post-biopsy antibiotic therapy while waiting for the results. Given that a bone and joint resection or even an amputation of all or part of the foot has just been performed in a context of a possible infection, it seems logical to consider an empirical antibiotic therapy started intraoperatively after the tissue samples have been taken and continued until the results of culture are available. As some of these patients are operated while receiving antibiotics a negative result should be interpreted with caution. If a bacterial documentation has been obtained prior to the surgery and histology is positive antibiotics should probably be administered and should be stopped if histology is negative. This raises the potential interest of histology examination in these settings. In the cases of negative culture and positive histology without previous documentation, antibiotics should be chosen taking account of the antibiotics which led to a negative culture. The duration is another difficult as the optimal duration of antibiotic treatment for DFO is not known. The last edition of the IWGDF recommends consider treating DFO with antibiotic therapy for no longer than 1-2 weeks if all infected bone has been surgically removed Additional clinical studies are necessary to answer some unsolved questions about surgical bone biopsy in the setting of DFO. Some standards for further clinical studies should be reported such as the description and gradation of soft tissue infection, the morphology and location of the tissue defect, details on concomitant peripheral artery disease, on severity of diabetic neuropathy, on surgical technique of amputation and wound closure, and on pre- and postoperative antibiotic treatment. These studies should address (i) which clinical intraoperative findings warrant the retrieval of a bone biopsy, (ii) how should a bone biopsy be retrieved , (iii) which histology criteria for the presence of acute or chronic osteomyelitis should be used and (iv) how should the biopsy be handled for microbiology assessment."} +{"text": "We report the case of a patient who presented for back pain with paresthesia, and the CT showed vertebral lysis of aneurysmal origin. The aneurysm of the thoracic aorta compresses the anterior surface of the dorsal vertebrae and by mechanical effect is responsible for the destruction of the opposite bone. The knowledge of this cause is very important considering the frequency of other tumoral and infectious causes of this affection. 
In the majority of cases, pseudotumoral vertebral lysis is a radiological sign pointing to a tumoral or infectious (tuberculosis) origin. The aneurysmal origin is a cause only rarely described in the literature; chest CT angiography plays a key role in the diagnosis, and the diagnostic range must include this etiology to prevent serious complications. We report the case of a 50-year-old patient with no medical or surgical history who had progressively worsening dorsalgia, with recent paresthesia of the limbs and chest pain. The clinical examination revealed back pain at the first dorsal vertebra with paresthesia of the limbs. The biological examination was normal. A radiograph of the dorsal and lumbar spine showed lysis of the vertebral bodies of D2, D3, D4, and D5, and CT confirmed the lysis of the vertebral bodies of D2, D3, D4, and D5. An aneurysm is a permanent and localized dilatation of an artery of more than 50% compared with the normal diameter, with a loss of parallelism of the edges. Vertebral erosion secondary to an aortic aneurysm is most often located in the anterior region of the vertebral body; the suggested physiopathological mechanism is repetitive mechanical pressure causing relative bone ischemia, leading to lysis and bone destruction. The main signs that reveal the diagnosis are spinal pain and neurological deficits (paraparesis or paraplegia); however, infection and inflammation are not uncommon. The pseudotumoral aspect of bone erosion can point, as in our case, towards a tumoral or infectious (tuberculosis) origin. Thoracoabdominal CT angiography is the reference examination; it allows study of the whole thoracoabdominal aorta and its branches and assessment of the relationship with the adjacent structures. It also makes it possible to identify signs of aneurysmal rupture or predictive signs of rupture, such as rupture of the continuity of parietal calcifications, the crescent sign, or the sign of the draped aorta; the latter is of great diagnostic value and is considered positive when the posterior wall of an aortic aneurysm is draped or molded on the anterior surface of the vertebra, with loss of the fat planes located between the aneurysm and the vertebra. Its presence indicates a weakening of the aortic wall and an imminent risk of rupture. Magnetic resonance imaging may be indicated in stable patients for comparative monitoring of lesions. Spinal pain associated with vertebral osteolysis is most often attributed to tumoral or infectious etiologies. An aneurysm of the aorta is a rare etiology but must be considered in order to avoid the evolution of serious complications."} +{"text": "Following publication of the original article, it was necessary to update the authors' competing interests statement. Dr. Finnell formerly held a leadership position with the now dissolved TeratOmic Consulting LLC. He also receives travel funds to attend editorial board meetings of the Journal of Reproductive and Developmental Medicine, published out of the Red Hospital of Fudan University."} +{"text": "Primary and secondary lung cancers are the most common clinical conditions that thoracic surgeons have to deal with: primary lung cancer, in fact, is one of the most frequently diagnosed cancers and is the leading cause of cancer-related death worldwide.
Pulmonary metastasectomy has gradually become a frequently performed procedure among thoracic surgeons, particularly after the publication of the encouraging results of the International Registry of Lung Metastases and of several other retrospective studies concerning pulmonary metastasectomies."} +{"text": "Human interventions on coastal areas always cause environmental impact; however, in most cases the inventories of those interventions are poorly structured and lack a specific standard. The raw data presented here result from an exhaustive and systematic review of satellite images covering 1700\u202fkm of the Caribbean coast of Colombia, where 2743 human interventions were identified. These interventions are classified into 38 categories in order to assess their environmental impact at a regional scale. The filtered data show the environmental impact obtained for each category and the values allotted to each of the four parameters used for this evaluation. Moreover, the data are filtered for each of the five environmental coastal units into which the Caribbean coast of Colombia is divided by national regulations. Finally, the filtered and processed data show the analysis done to obtain the graphical results of a previous paper (An evaluation of human interventions in the anthropogenically disturbed Caribbean Coast of Colombia).\u2022This dataset of human interventions allows several additional and derived analyses of the environmental impact caused on Colombian coastal zones, with emphasis on the 1700\u202fkm of the continental Caribbean seafront.\u2022The calculation used to obtain the simplified environmental impact assessment is of great interest to researchers and technicians looking for quick and reliable EIA examples.\u2022This dataset shows step by step how to identify and register human interventions in coastal areas using an open source tool such as Google Earth. It also shows how to process, calculate and graphically represent the environmental impact in a simple way, which could be very useful for professors in environmental and marine sciences.\u2022The dataset is formed by three spreadsheets, which allow future researchers and practitioners to repeat the same process at three levels of complexity: raw data for the inventory of human interventions, filtered and processed data for the calculation of environmental impact, and analysed data for statistical and graphical representations.\u2022The dataset can be used as a baseline for long-term monitoring of the human interventions on the Caribbean coast of Colombia and their environmental impact on coastal and marine ecosystems. The dataset contains five files: three spreadsheets in MS Excel format (xlsx) and two geographical files in Google Earth format (kmz), which are presented as supplementary material. The first spreadsheet (DiB_Intervencoast_tables_Raw) includes the raw data of all 2743 human interventions found on the Caribbean coast of Colombia and is used to register an inventory of 1700\u202fkm of coastline. This raw data file has 40 datasheets, of which the first shows the seven categories and 38 types of human interventions used, with their codes, descriptions and quantity of data. The second spreadsheet (Intervencoast_tables_filtered.xlsx) has five datasheets with consolidated, filtered and processed data. The first datasheet includes the frequency of the 38 human intervention typologies in each ECU.
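A minimal sketch of how the filtered workbook described above could be queried is given below; the file name comes from the dataset description, while the sheet layout and the column names ("ECU", "Typology", "Frequency") are assumptions that may differ from the actual files.

```python
# Minimal sketch of exploring the filtered spreadsheet with pandas.
# Sheet layout and column names are assumptions, not the published schema.
import pandas as pd

# Load the first datasheet of the filtered workbook.
freq = pd.read_excel("Intervencoast_tables_filtered.xlsx", sheet_name=0)

# Hypothetical layout: one row per (ECU, typology) pair with a frequency count.
table = freq.pivot_table(index="Typology", columns="ECU",
                         values="Frequency", aggfunc="sum", fill_value=0)

# Rank typologies by their total number of interventions along the coast.
totals = table.sum(axis=1).sort_values(ascending=False)
print(totals.head(10))                 # ten most frequent intervention typologies
print(table.loc[totals.index[:10]])    # their distribution across the five ECUs
```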
This seIntervencoast_tables_filtered.xlsx has the filtered data used to graph the main frequency patterns of human interventions on the Caribbean coast of Colombia. The second datasheet of Intervencoast_tables_filtered.xlsx shows the same data of the first one but filtered to the 29 typologies found in the study area. These filtered data were those used by the article The third datasheet of Intervencoast_tables_boxplot.xlsx) includes the data filtered and organised to obtain the graphs 4, 5A and 5B of the article The third spreadsheet (The two Google Earth files (kmz) that complement the dataset show the geographical location of each position mark describing the human interventions in the study area, which comprise the complete inventory. Those two files have the same information, but organised in a different manner, in order to make easy their consultation and manipulation. One of the kmz files groups the 3957 position marks for the 38 typologies of human interventions. Meanwhile, another file groups the position marks within the five ECU. These two files are of the utmost importance for any researcher or practitioner interested to see some specific human intervention or geographical sector, because the software of Google Earth allows to navigate virtually on the study area .Fig. 4Ex22.1Colombia has officially three coastal zones, according to Decree 1120 of 2013: Continental Caribbean Coast, Insular Caribbean Coast and Pacific Coast. The dataset shown in this article covers the first of them. In the same Decree, five Environmental Coastal Units (ECU) are defined for the study area: La Guajira peninsula (GUAJIRA); the northern slope of the Sierra Nevada of Santa Marta (VNSMR); Magdalena Delta and Canal del Dique (MAGDIQUE); Sinu Delta (SINU); and Darien Gulf (DARIEN). Their boundaries are shown in Sierra Nevada de Santa Marta massif and the southernmost end (Panama border) correspond to more resistant igneous and metamorphic rocks The approximately 1700\u202fkm shoreline of the study area alternates between deltaic plains and low coasts with high coasts of mountainous segments According to National Statistics Institute 2.2The inventory of human intervention in the study area was compiled using the structure of coastal uses and activities proposed by Botero The instrumentation for data collection relied on the software Google Earth because it provides easy access to numerous satellite images of the study area with adequate horizontal and vertical resolution to observe the earth relief and identify geomorphological units, both natural and anthropogenic ,12. The 2.3The environmental impact assessment was calculated from a simplified version of the Conesa The authors declare that they have no known competing financial interests or personal relationships that might have appeared to influence the work reported in this paper."} +{"text": "Foods journal has been published.The Mediterranean diet is now well known worldwide and recognized as a nutrition reference model by the World Health Organization. Virgin olive oil, prepared from healthy and intact fruits of the olive tree only by mechanical means, is a basic ingredient, a real pillar of this diet. Its positive role in health has now been a topic of universal concern. 
The virtues of natural olive oil, and especially of extra virgin olive oil, are related to the quality of the fruits, the employment of advanced technologies, and the availability of sophisticated analytical techniques that are used to control the origin of the fruits and guarantee the grade of the final product. With the aim of enriching the recent multidisciplinary scientific information that orbits around this healthy lipid source, a new special issue of This Special Issue collected specific articles from different areas of research. To produce a high-quality oil it is necessary to start from the right selection of cultivars in the olive production region. Therefore, the Special Issue dedicates an article to the varietal authentication of extra virgin olive oils . The proVirgin olive oil is a phytocomplex, a product containing hundreds of non-glyceridic substances that are responsible for the coloring of the product or contribute to the hedonistic value. They may also have antioxidant or other properties beneficial to health. An important class of such compounds is the pigments. The methods for determining them were the subject of a specific article that describes how to quantify the total amount of carotenoids and chlorophyll derivatives .The wide range of molecules present in olives and virgin olive oil also includes polar lipids such as glycerophospholipids and glycolipids. A specific article describes findings on the identification and characterization of such polar lipids, their potentiality as markers of the identity and traceability of olive oil, and their potential impact on nutrition and health .The best oils of all time are the oils produced in the last 20 years thanks to the improvement of hygiene standards and the development of technological innovations. The application of ultrasound to olive paste is among the most relevant innovations. It started from research laboratories and reached the market. The strength of this innovative process lies in the change of traditional milling. The device that combines ultrasound with heat exchange significantly increases the quantity of oil produced and the level of biologically important phenols .Studies of the favorable effects of olive oil bioactive ingredients are continuously opening new paths for medical and pharmaceutical research. A special chapter of this Issue discusses pharma-nutritional evidence that indicates how EVOO bio-phenols might exert important physiological actions that bring about cardioprotection, chemoprevention, and a lower incidence of neurodegeneration. The study focuses on recent findings that elucidate their molecular mechanisms of action .n = 1128) and the MEDiterranean Islands Study (MEDIS) (n = 2221 adults from various Greek islands) studies. The use of olive oil in food preparation and the bio-clinical characteristics of the Greek participants were investigated in relation to successful aging. It is suggested that primary public health prevention strategies to promote healthy aging and longevity should encourage the enhanced adoption of practices based on the exclusive use of olive oil for culinary purposes [The consumption of dietary fats, which occur naturally in various foods, poses an important impact on health. 
The last contributions reports the results of the adults from Athens metropolitan area (ATTICA) (purposes ."} +{"text": "The WISDOM Personalized Breast Cancer Screening Trial: Simulation Study to Assess Potential Bias and Analytical Approaches by Martin Eklund et al., the authors describe a simulation based on the Women Informed to Screen Depending on Measures of Risk (WISDOM) (In the article (WISDOM) , 2. The (WISDOM) . This evA primary study objective of WISDOM is to test whether personalized screening is safe, as measured by the noninferiority in the proportion of stage IIB or higher cancers found in the personalized and annual screening arms . While tThe purpose of screening is to detect early, asymptomatic disease that is potentially curable . ClinicaTo properly evaluate screening strategies for breast cancer, a natural history model of this disease should consider growth and development from a size below the detectability threshold of the technologies under investigation. This allows for the incremental benefits of earlier detection with more screening to become apparent. The omission of early-stage disease in this model\u2019s design is fundamentally limiting and masks differences in disease stage between personalized and annual screening strategies.The authors have no conflicts of interest to disclose."} +{"text": "Indian communities have the ancient cultural practice of gentle oil massage for infants which has been shown to play a beneficial role in neuro-motor development. The concept of incorporating nanosized liposomes of micronutrients in the body oil leverages this practice for transdermal supplementation of essential micro-nutrients. This paper describes the experience of developing an intervention in the form of body oil containing nanosized liposomes of iron and micro-nutrients built on the social context of infant oil massage using a theory of change approach. The process of development of the intervention has been covered into stages such as design, decide and implement. The design phase describes how the idea of nanosized liposomal encapsulated micronutrient fortified (LMF) body oil was conceptualized and how its feasibility was assessed through initial formative work in the community. The decide phase describes steps involved while scaling up technology from laboratory to community level. The implementation phase describes processes while implementing the intervention of LMF oil in a community-based randomized controlled study. Overall, the theory of change approach helps to outline the various intermediate steps and challenges while translating novel technologies for transdermal nutrient fortification to community level. In our experience, adaptation in the technology for large scale up, formative work and pilot testing of innovation at community level were important processes that helped in shaping the innovation. Meticulous mapping of these processes and experiences can be a useful guide for translating similar innovations. Globally, almost 50% of under-five children suffer from hidden hunger due to deficiencies in essential nutrients, with majority of these children being from developing countries . In IndiThis paper describes the journey of developing an innovative intervention for prevention of micronutrient deficiencies in children. The intervention involved use of a nanotechnology platform in the form of nanosized liposomes for fortification of body oil which is traditionally used for body massage in Indian infants. 
The intervention of nanosized liposomal encapsulated micronutrient fortified body oil (LMF body oil) was developed as part of a proof of concept randomized controlled study conducted among rural Indian infants to evaluate the effects of such an intervention on micronutrient deficiencies and neurodevelopmental outcomes.In order to develop a complex intervention for healthcare delivery, it is important to understand the social context, acceptability and final sustainability which may have a significant effect on the outcomes , 8. DeveUsing the ToC approach, we describe various steps in the process of development and implementation of the intervention: (i) design (the construction of an intervention); (ii) decide (the decision making processes while developing the intervention); (iii) implement and monitor .The intervention of LMF body oil was developed and implemented for a proof of concept community-based clinical study in rural Indian children. This was double-blind placebo controlled randomized study in 444 infants to evaluate whether use of this intervention during 1st year of life can improve nutritional anemia and neuromotor development. The intervention of LMF oil was developed by researchers at Indian Institute of Technology Bombay (IITB), Mumbai, India and the study was implemented at in population of 22 villages of Vadu health and demographic surveillance (Vadu HDSS) by reseaThe causal pathway for ToC was developed through discussion with stakeholders involved in the development and implementation of the intervention. The initial pathway was presented in a workshop on \u201cEffective Delivery of Integrated Interventions in Early Childhood\u201d organized by Saving Brains Learning Platform, amongst other peer groups of various projects and was modified through feedback from peers and reviewers. The final pathway was developed after iterative inputs from the key stakeholders involved in the study i.e., investigators from partner institutes and implementers of the intervention. As this was a proof of concept project, the pathway focusses on the design, decide and implementation part of the process and does not largely cover monitoring and evaluation part of the ToC cycle. The pathway was developed through backward mapping of the activities that led to certain evidence-based decisions and changes in the work plan.Newborn oil massage is an ancient cultural practice in South Asian countries with some reported health benefits , 11. Thein vitro and in vivo animal models for transdermal delivery of nutrients at supplemental doses that can penetrate through the intact skin while massaging with the fortified body oil. This concept of LMF body oil emerged through initial meetings amongst scientists from KEMHRC and IITB. It was perceived that as the concept was based on the traditional practice of infant oil massage, it was likely to receive greater compliance and acceptability in the Indian community and can potentially overcome barriers associated with non-compliance to oral micronutrient supplementation for proof of concept evaluation of LMF body oil in a community-based project for improvement in neurodevelopment, prevention of iron deficiency anemia and vitamin D deficiency in rural Indian children. This was an interdisciplinary collaboration between technology scientists and medical as well as public health scientists. 
The proposal received funding from Saving Brains Platform of Grand Challenges Canada .Before implementation of the study, a formative study was conducted in the study area for the community-based clinical trial . This waThe caregivers in the household were asked by field research assistants about whether they practiced oil massage for their infants, age of onset for oil massage, till what age was the massage practiced, frequency of massage, choice of oil and perceived benefits of oil massage. The survey results supported the available evidence on oil massage as more than 90% of the caregivers practiced oil massage for their infants, believed that it was useful and also showed willingness to switch to a new oil formulation if that contained micronutrients. Majority of the households (about 65%) used marketed oils for massage. In all the households, infants were massaged in the morning before bathing. In 85% households, massage was initiated within the first 2 weeks after birth and in 94% households the massage was continued up to at least 9 months of age. This information demonstrated feasibility for implementation of study oil massage throughout the 1st year of life, especially in the second half of infancy when the prevalence of nutritional deficiencies rises.The composition of LMF body oil was finalized through review of literature and discussion with subject experts. The composition was devised taking into consideration the physiological requirements of the micronutrients with reference to the available national nutritional guidelines such that doses of none of the nutrients exceed 100% dietary requirement for the given age .The physiological requirements of iron extensively rise after initial 6 months of life, whereas the iron requirements are minimal in the first 6 months due to redistribution of iron from fetal hemoglobin and supply of highly bioavailable iron through breastmilk . PhysiolResults from the formative study indicated that the local community was not in practice of using any particular natural oil, as majority of households used some form of marketed body oil. Based on review of literature regarding the choice of oil for infant massage, it was decided to use sunflower seed oil as this is a rich source of essential fatty acids and has demonstrated benefits in reducing morbidity and mortality in preterm newborns .The regular practice of oil massage in the local community involved massaging infants in the morning, although some households practiced massaging at bedtime as well. The formative study demonstrated that the infants were bathed soon after this massage. It was felt that this practice may not provide adequate time for absorption of the liposomes due to wash off during bathing. Hence, we planned to implement the LMF body oil massage intervention at night before bedtime .The LMF body oil was planned to be used throughout 1st year of life, during which the baby grows in size and body surface area changes which can lead to change in the volume of oil used for body massage. Also, there may be variability in the volume of oil used by each caregiver for a given size of a baby. It was important to finalize and fix the unit volume of oil to be used daily which would deliver the required dose of nutrients. A pilot study was carried out in 10 infants of various ages which demonstrated that sunflower seed oil in the volume of 2.5 mL was adequate to massage torso and extremities in first 6 months and was just adequate to massage extremities in the 6\u201312 month old babies . 
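The dose bookkeeping implied by this design (no nutrient exceeding 100% of the daily dietary requirement at a fixed 2.5 mL application) can be sketched as below; the concentrations, absorption fraction and daily requirement used here are hypothetical placeholders, not the study's formulation.

```python
# Illustrative dose check, not the actual LMF oil formulation: verify that a
# hypothetical nutrient concentration, applied as the fixed 2.5 mL daily
# massage volume with an assumed transdermal absorption fraction, stays at or
# below the daily dietary requirement.

DAILY_VOLUME_ML = 2.5  # fixed application volume from the pilot described above

def percent_of_requirement(conc_mg_per_ml, absorbed_fraction, requirement_mg):
    """Delivered dose as a percentage of the daily dietary requirement."""
    delivered_mg = conc_mg_per_ml * DAILY_VOLUME_ML * absorbed_fraction
    return 100.0 * delivered_mg / requirement_mg

# Hypothetical numbers purely for illustration.
for conc in (1.0, 2.0, 4.0):  # mg of nutrient per mL of oil
    pct = percent_of_requirement(conc_mg_per_ml=conc,
                                 absorbed_fraction=0.5,   # assumed uptake
                                 requirement_mg=5.0)      # assumed daily need
    flag = "OK" if pct <= 100 else "exceeds 100% of requirement"
    print(f"{conc:.1f} mg/mL -> {pct:.0f}% of daily requirement ({flag})")
```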
Hence, The patented nanotechnology for preparation of micronutrient nanoparticles was available at laboratory scale and used organic solvents (methanol) for preparation of the liposomes. Additionally, the technique used (thin film hydration) was difficult to scale up for large scale manufacturing , 29. We The storage stability of the LMF body oil was also evaluated. Suitable packaging was used for providing monthly supply of LMF with instructions for storage away from light and heat. Calibrated cups were provided for measurement of accurate volumes of LMF. As the oil was to be used for a double blind randomized controlled study, test and placebo oils as well as low and high strengths of the oil were color-coded without disclosing the composition.We decided to evaluate the tolerability of LMF body oil in healthy adults and subsequently in healthy children before proceeding with the randomized controlled study in infants to rule out any irritation potential of the LMF body oils. The need for conducting these studies was perceived after discussion with subject experts as there is some earlier evidence on role of iron in oxidative stress in skin tissue and furtThe first study was conducted in 26 healthy adult volunteers using single application of closed patch test (containing the product) under occlusion for 24 h. Subsequently, tolerability studies were conducted in 15 healthy adults and 15 healthy children aged 8\u201324 months where predefined quantity of highest strength of the LMF body oil was applied locally on the extremities for 15 days. In all the above studies, skin erythema and dryness were measured before and after the study using Draize score . The higBefore proceeding for the use of randomized controlled study, acceptability of using LMF body oil was assessed amongst the participants of the two tolerability studies. Acceptability of the oil was assessed for its appearance, texture, fragrance, ease of use and overall evaluation using visual analog scale of 0 to 5 where 0 indicates poor acceptability and 5 indicates best acceptability (The community-based study consisted of administration of LMF body oil and a placebo oil to 222 infants respectively starting from age of 4\u20137 weeks till completion of 12 months. The implementation at the manufacturer level involved manufacturing of about 500 liters of oil including both test and placebo oil following the good manufacturing practices. At field level, a team of field research assistants, clinical coordinators, laboratory coordinator and project manager were hired and trained for the study protocol. Study documents including case record forms, participant information sheet, informed consent forms, logs for study activities and compliance cards were developed by the implementation team. The study team was trained for completing all the study activities as per the decided timeline.During implementation of the project, the caregivers were trained about massage with LMF body oil at baseline visit in the study clinic. The field research assistants were trained to supply oil bottles on a monthly basis and monitor the intervention through fortnightly visits as well as through use of compliance cards. Additionally, during implementation, we found that checking the quantity of unused oil was a good measure of compliance. 
For non-compliant households, despite regular follow up by field research assistants, additional visits and counseling by a senior project staff were required and was found to facilitate compliance in majority of the households.There were certain challenges encountered during the various phases of work. The laboratory technique available for preparation of nano-encapsulated liposomal micronutrients needed to be changed and a green technique needed to be developed and optimized for further scale up. Identification of the service provider for scaling up an innovative technology was a challenging task. Transdermal supplementation of nutrients is an innovative mode of delivery. Although technology has been tested in preclinical and clinical studies for efficacy and tolerability, exact proportion of absorption of encapsulated nutrients through pediatric skin is not known. Lastly, there were various challenges during implementation which have been depicted in in vivo animal models of anemia has been reported in the literature (The intervention of a body oil that contains nanosized liposomes of micronutrients is a completely innovative intervention that was developed from laboratory-based technology to a proof-of-concept community level project. Transdermal iron replenishment using biophysical techniques such as microneedle patches and iontophoresis in terature \u201335. Howeterature . There iterature . Thus, tDuring development and implementation of LMF body oil, various aspects of the intervention needed to be addressed. These included feasibility of scaling up the intervention and implementing it, assessing acceptability by the local population and overall safety of the innovative intervention as well as developing measures to improve compliance to the intervention. This being a complex health intervention, lack of effect can be due to failure of implementation rather than genuine ineffectiveness . Hence, Our experience of using a causal pathway for ToC for development and implementation of LMF oil has several learnings . The cauWhile developing the innovation through the collaboration of scientists from technology and public health domains, the causal pathway helped to map steps where collaborative inputs from all stakeholders were useful in scaling up and evaluation . This alOne of the limitations while developing this causal pathway was that it was developed retrospectively. Secondly, although the initial design and decide phase of the work was not dependent upon the external funding, it was carried out only after funding for the proof of concept study was available, thus leaving less time to carry out certain study related activities. Despite the limitations, the ToC approach for this project describes how a technology innovation can be based on a traditional practice and can be translated into a viable public health intervention.ex vivo studies and studies in human volunteers and also evaluate the efficacy of LMF oil in anemic children. We plan to scale up the manufacturing of LMF oil for larger public health use through transition to scale programs which will include monitoring and evaluation part of ToC cycle.In future, we plan to mechanistically evaluate the transdermal penetration of each ingredient of the oil through Use of the causal pathway helped to clearly understand the intermediate steps taken and barriers faced during development of an innovation for transdermal delivery of nutrients, leveraging a traditional practice and its pilot implementation at community level. 
In our experience, adaptation in the technology for large scale up, formative work and pilot testing of innovation at community level were important processes that helped in shaping the innovation. Meticulous mapping of these processes and experiences can be a useful guide for translating similar innovations.The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.The studies involving human participants were reviewed and approved by Institutional Ethics Committee, KEM Hospital Research Centre, Pune. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin.AA, AB, SJ, and RB were involved in conceptualization of the research idea and writing of proposal. AA, AB, and RB were involved in designing of innovation. AA, MK, and RB were involved in technology scale up of the innovation. AA, HL, AB, and SJ were involved in implementation of the innovation at community level. AA attended the ToC workshop organized by Saving Brains Learning Platform and wrote the first draft of the manuscript. All authors reviewed and provided inputs on the causal pathway developed and read and agreed with the manuscript and conclusions.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "The paper presents the optimization of stacking sequence (the lamination angles in subsequent composite layers) of the composite cylinder in order to simultaneously maximize the values of the first natural frequency In many engineering fields composite materials are used more and more often, e.g., in aircraft, mechanical, environmental or civil engineering ,2. CompoMultilayer composite structures have a remarkable ease of forming various shapes, while each change of the composite structure topology may significantly change their dynamic behavior and/or buckling properties . In caseOptimization is one of the important stages in the design process. The optimization of static and/or dynamic parameters of a composite structure requires repeatedly calculating the value of the so-called objective function describing the distance of parameters being optimized from their desired values. Real-life engineering problems are typically characterized by more than one objective conflicting with each other. For this reason, an appropriate trade-off between these objective functions should be made using Multi-Objective Optimization (MOO). The computing power demand and time consumption can be reduced if zero-order optimization algorithms are applied (no derivatives of the objective functions are necessary) and modern metamodels of a considered structure are used. The application of nature-inspired metaheuristic algorithms, such as Genetic Algorithms GAs, supported by the use of Neural Networks (NNs) can meet these assumptions ,11.Many works have been done on the vibration, buckling and optimization of cylindrical shells. In , the autAs the design variable the layer fiber orientation may be used, and the multi-objective optimization may also be formulated as the weighted combination of the considered objective functions, dealing e.g., with frequency and buckling force under external load. 
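To make the weighted-combination formulation concrete, the following sketch shows one way such a scalarized objective could be coded for a 16-layer stacking sequence. The frequency and buckling-force predictors, the reference values used for scaling and the weights are all illustrative stand-ins, not the FEM model, metamodels or coefficients used in the paper.

```python
import numpy as np

# Illustrative scalarization of two objectives (first natural frequency f1 and
# first buckling force P1) for a 16-layer stacking sequence. The predictors and
# reference values are toy stand-ins, not the paper's FEM model or metamodels.

def predict_f1(angles_deg):
    """Stand-in for a metamodel returning the first natural frequency (Hz)."""
    a = np.radians(np.asarray(angles_deg))
    return 100.0 + 20.0 * np.mean(np.cos(2 * a))      # toy analytic surrogate

def predict_P1(angles_deg):
    """Stand-in for a metamodel returning the first buckling force (kN)."""
    a = np.radians(np.asarray(angles_deg))
    return 50.0 + 15.0 * np.mean(np.sin(2 * a) ** 2)  # toy analytic surrogate

def scalarized_objective(angles_deg, w1=0.5, w2=0.5, f1_ref=1.0, P1_ref=1.0):
    """Weighted combination of the two scaled objectives, to be maximized.

    f1_ref and P1_ref play the role of scaling factors: 1.0 means no scaling,
    or they can be set to the values of a reference laminate or to the maxima
    found in single-objective runs.
    """
    return w1 * predict_f1(angles_deg) / f1_ref + w2 * predict_P1(angles_deg) / P1_ref

angles = [45, -45, 0, 90] * 4  # example 16-layer stacking sequence
print(scalarized_objective(angles, f1_ref=120.0, P1_ref=65.0))
```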
In , a multithe application of two separate metamodels for the prediction of two parameters being optimized ,application of an internal feedback (loop) for the metamodels refinement,three different approaches to scaling of the objective function arguments,full multi-objective approach vs. scalarization approach leading to single-objective approach,novel definition of the optimal result of multi-objective optimization problem (Nadir point of the Pareto front).In this paper the properties of a composite structure are optimized through the changes of the values of basic topological parameters (lamination parameters). The proposed optimization procedure involves nature-inspired optimization algorithms such as The analysis of the problems leads finally to a proposition of composite material design by the optimization approach.Once the numerical model of a considered structure is known, the so called initial buckling problem can be described by:KL+\u03bcK\u03c3K\u03a6=M3.The investigated structure is a multilayer composite cylinder. The radius of the cylinder middle surface is The finite element model of the investigated structure, shown in The boundary conditions are defined on the shell edges by fixing the translation in all directions (XYZ) at the clamped end of the cylinder. The values of natural frequencies and buckling forces see . The leaHigher accuracy of the metamodel defined by Equation is relatapes see . The DNNn stands for the number of patterns divided by 1000).The diagram presented in The vector The number of the mode shapes selected to create the The first buckling force In all the above discussed cases instead of deep networks other data-driven models could be applied, DNNs were chosen since they work very well with huge learning sets and the description of 16-dimensional space of lamination angles needs at least a few thousands examples.The decision of applying the vector The objective function definition requires prior identification of the mode shapes and corresponds to the maximization of the lowest of the natural frequencies matching the selected mode shapes.The main optimization (either MOO or SM) is preceded by the creation of two DNN metamodels, which predicted the model natural frequencies (the The whole optimization procedure (either MOO or SM), together with the metamodels creation phase, is presented in In order to provide a clear description of the metamodels creation procedure a new symbol of o training (coarse grid): patterns) FEM examples for DNNo training: A deep network is trained to map o should be capable of working as function that returns vector DNNo optimization: single-objective GA optimization , it would be advisable to interrupt GA optimization before it reaches any sharp minimum of DNNo approximation of the objective function .GA+DNNK times), a vector of parameters being predicted FEM verification (fine grid around the optimum): For each o optimization following additional DNNo training (retraining).Stop criteria: The usual stop criteria are verified; if the criteria are not fulfilled, the procedure returns to GA+DNNo additional training: The DNNo already trained in the second step of the procedure is trained again (retrained) with the original set of patterns o should then be more precise in the area of the expected global minimum.DNNThe idea, proposed by Miller and Ziemia\u0144ski in for maxii in CLi . 
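The interplay between the GA, the DNN metamodel and the FEM verification outlined above can be summarized as a surrogate-assisted optimization loop. The sketch below follows that logic with deliberately simplified stand-ins: a scikit-learn MLP regressor instead of the paper's deep networks, a toy analytic function instead of the FEM solver, and a minimal mutation-only evolutionary search instead of the full GA; the population sizes, loop counts and noise levels are arbitrary choices for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
N_LAYERS = 16

def fem_solver(angles_deg):
    """Stand-in for the expensive FEM analysis (returns a scalar response)."""
    a = np.radians(angles_deg)
    return 100.0 + 20.0 * np.mean(np.cos(2 * a), axis=-1)

def random_laminates(n):
    """Random stacking sequences with angles drawn from -90..90 degrees."""
    return rng.uniform(-90.0, 90.0, size=(n, N_LAYERS))

def evolve(metamodel, generations=30, pop_size=60):
    """Very small mutation-only evolutionary search over the metamodel."""
    pop = random_laminates(pop_size)
    for _ in range(generations):
        scores = metamodel.predict(pop)
        parents = pop[np.argsort(scores)[-pop_size // 2:]]        # keep the best half
        children = parents + rng.normal(0.0, 5.0, parents.shape)  # Gaussian mutation
        pop = np.clip(np.vstack([parents, children]), -90.0, 90.0)
    return pop[np.argmax(metamodel.predict(pop))]

# 1) Coarse training set from the "FEM" stand-in and initial metamodel training.
X = random_laminates(2000)
y = fem_solver(X)
metamodel = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0).fit(X, y)

# 2) Curriculum-learning-style refinement: optimize on the metamodel, verify the
#    candidate with the "FEM", add it to the training set and retrain.
for loop in range(3):
    best = evolve(metamodel)
    verified = fem_solver(best[None, :])       # fine verification around the optimum
    X = np.vstack([X, best[None, :]])
    y = np.concatenate([y, verified])
    metamodel.fit(X, y)                        # retraining with the enriched pattern set
    print(f"CL loop {loop}: metamodel {metamodel.predict(best[None, :])[0]:.2f}, "
          f"FEM check {verified[0]:.2f}")
```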
Since two metamodels are applied, it is possible to refine them separately.The iterative searching for a precise metamodel, described above in Point 6 is called Curriculum Learning (CL). The number of CL loops applied is denoted by o metamodels environment , the comAccording to the procedure shown in Equation ). The sun shows the number of patterns divided by 1000. The improvement of the maximization results with the increase of the number of employed metamodel training patterns is clearly visible; however, over 8000 of patterns the increase disappears. For the main task\u2014the simultaneous maximization of Each of boxes in The maximal value of Curriculum Learning loop\u2014presented in In the second initial step to the main optimization procedure the metamodel For the main task, the simultaneous maximization of The maximal value of The metamodels Equation . After if GA see ). The op(a)no scaling at all: (b)scaling to an arbitrarily selected lamination angles case: (c)scaling to maximal values of Three different variants of scaling factors pairs (see Equations and 16)16)) haveAs a result of each of the above described 250 repetitions of MOO a cloud of points is obtained, each of them being an element of one of 250 so called Pareto Fronts (PFs) obtained in each of the repetitions. Pareto front is a border line between the feasible and the infeasible results, given in the optimized parameters\u2019 space, here in UP, see ,31) at t at t(f1\u22c6Although the points in (a)scaling to an arbitrarily selected lamination angles case: (b)scaling to maximal values of Here the normalization factors in the objective space are This approach to the selection of NP prefers one or the other of the minimized values , another approach is thus proposed. The analysis of The same metamodels (Equation . After if GA see ). The opThe results, obtained for scaling factors The best values of The same results are shown (in green) also in The results obtained from MOO and SM optimization are gathered in The differences between scaling scenarios are negligible, there is however subtle but clear advantage of SM over MOO approach.The above described maximization of The results gathered in m of the cylinder. The horizontal axis of m is an overall mass and The analysis of two separate metamodels,CL loops for metamodels refinement,multi-objective optimization with two objective functions, orscalarization method approach, where the only scalar objective function is a linear combination of two objective functions involved in the previous approach.The paper presents the optimization of stacking sequence (the lamination angles in subsequent composite layers) of the composite cylinder in order to maximize simultaneously values of the first natural frequency no scaling at all: scaling to an arbitrarily selected lamination angles case: scaling to maximal values of Moreover, three different scaling of the input data for the optimization procedure are verified:New proposition of ND selection is also proposed.In the presented examples the scalarization method gives slightly better results, while the three investigated scaling approaches are barely distinguishable.The two neural network metamodels substitute very time- and resource-consuming FE calculations. 
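For the cloud of points produced by the repeated MOO runs discussed earlier, extracting the non-dominated set and locating the Nadir point of the resulting Pareto front takes only a few lines of array code. The routine below is a generic sketch for two maximized objectives; the synthetic (f1, P1) cloud is a stand-in for real results, and reporting the Nadir point as shown is only one plausible convention, not claimed to reproduce the authors' exact selection rule.

```python
import numpy as np

# Extract the Pareto front from a cloud of candidate results (two objectives,
# both maximized, e.g. f1 and P1) and report its Nadir point: the component-wise
# worst value among the non-dominated points. The random cloud below only stands
# in for the outcomes of repeated MOO runs.

def pareto_front(points):
    """Return the non-dominated subset of `points` (shape (n, 2), maximization)."""
    pts = np.asarray(points, dtype=float)
    keep = np.ones(len(pts), dtype=bool)
    for i, p in enumerate(pts):
        if not keep[i]:
            continue
        # p is dominated if another point is >= in both objectives and > in at least one
        dominated_by = np.all(pts >= p, axis=1) & np.any(pts > p, axis=1)
        if dominated_by.any():
            keep[i] = False
    return pts[keep]

def nadir_point(front):
    """Component-wise worst (minimum) objective values over the Pareto front."""
    return np.min(np.asarray(front), axis=0)

rng = np.random.default_rng(1)
cloud = rng.uniform(size=(250, 2)) * np.array([130.0, 70.0])  # synthetic (f1, P1) results

front = pareto_front(cloud)
print("Pareto-optimal points:", len(front))
print("Nadir point (f1, P1):", nadir_point(front))
```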
The metamodels are created using examples obtained through FEM, but once the metamodels are ready they are able to assess the values of The applied metamodels enable the precise tuning of the investigated structure parameters, it is possible to obtain such a values of the design parameters that the value of the fundamental natural frequency reaches a value close to its maximum, simultaneously with the buckling force also being near its maximum. In fact in every considered case the final solution gives the values of both Genetic algorithms and DNN are very suitable tools to find global optimal solution in the analyzed problems, where laminated composite is used.The presented approach allows to design cylinder composite material through optimization approach.other parameters\u2014like overall mass and/or stiffness\u2014should be taken into account,wider range of control variables, also some geometric and/or material properties should be considered,CL approach on the level of the whole MOO procedure should be applied, not only on the level of metamodel creation.The research should be carried out further, the following problems should be addressed:"} +{"text": "The outbreak of novel coronavirus pneumonia (coronavirus disease 2019 (COVID-19)), declared as a \u2018global pandemic\u2019 by the World Health Organization (WHO), is a public health emergency of international concern (PHEIC). The outbreak in multiple locations shows a trend of accelerating spread around the world. China has taken a series of powerful measures to contain the spread of the novel coronavirus. In response to the COVID-19 pandemic, in addition to actively finding effective treatment drugs and developing vaccines, it is more important to identify the source of infection at the community level as soon as possible to block the transmission path of the virus to prevent the spread of the pandemic. The implementation of grid management in the community and the adoption of precise management and control measures to reduce unnecessary personnel movement can effectively reduce the risk of pandemic spread. This paper mainly describes that the grid management mode can promote the refinement and comprehensiveness of community management. As a management system with potential to improve the governance ability of community affairs, it may be helpful to strengthen the prevention and control of the epidemic in the community. Since the outbreak of novel coronavirus pneumonia (coronavirus disease 2019 (COVID-19)), the community is the most important place to prevent the spread of the epidemic[The identities of community members are complex and the mobility of personnel is high, so there are many uncertainties. It is difficult to implement various prevention and control measures in the traditional and decentralised community management mode. The novel coronavirus pneumonia has the characteristics of long incubation period and diversified infectious pathways . At presThe community grid management is to divide the urban area into several community unit grids according to the scientific standards such as geographical layout, convenient management and integrity of management objects, and each community unit grid is divided into sub-grids again depending on the community-level information platform. 
Using the electronic information and data platform to locate the community grid units accurately and quickly is conducive to the orderly distribution of prevention and control materials and the timely resolution of public health crises during epidemic prevention and control. Through active inspection of the epidemic situation in each unit grid, prevention and control measures can be applied with early intervention, rapid response and efficient treatment, greatly improving execution efficiency within the unit grid. The anti-epidemic data of each community grid unit are shared and interconnected in a timely manner. Neighbouring community grid units should establish a regional integration and cooperation mechanism, complement each other in anti-epidemic materials and human resources and implement joint prevention and control measures. The community-level information platform announces the progress of anti-epidemic measures in a timely manner. Residents' opinions are collected through Internet application software and dedicated telephone lines, and residents' concerns are addressed promptly. Community grid management should establish a team of epidemic prevention and control personnel in each community grid unit. This team is composed of volunteers who are familiar with the actual situation of the community and residents who have some experience in community management. The epidemic prevention materials for grid prevention and control personnel should be fully guaranteed, and the personnel should be trained in epidemic prevention knowledge. The tasks of grid prevention and control are clearly divided and carried out in an orderly manner to ensure that work responsibilities are assigned to specific prevention and control personnel. At the community level, the knowledge and measures of epidemic prevention and control should be publicised to obtain support and cooperation from residents, and all residents in the community should be fully mobilised to implement the various measures of community grid management. It is suggested that the community grid prevention and control personnel be divided into several groups, such as the entrance and exit control group, the material assurance group, the disinfection group and the grid inspection group. The staff of the entrance and exit control group are responsible for the pre-check and identity verification of people entering and leaving the community, and for registering the movement tracks of those people. All people entering and leaving the community need to have their temperature checked and wear masks, and the entry of outsiders into the community is strictly restricted. The staff of the material assurance group are responsible for purchasing protective clothing, masks, infrared thermometers, disinfectant and other anti-epidemic supplies, arranging dedicated personnel for registration and distribution, helping community residents purchase daily necessities, coordinating material procurement channels and storage places and providing logistics support such as emergency supplies storage. The staff of the disinfection group are responsible for the regular disinfection of public places and of surfaces with a high risk of virus transmission (such as handrails and elevator buttons).
The staff of the grid inspection group are responsible for dispersing people who gather within the community grid unit. During the epidemic prevention and control period, and relying on the big data information platform, data are shared with disease control departments, medical institutions and anti-epidemic departments to improve prevention and control efficiency. Emergency patients in urgent need of medical treatment are transferred to medical institutions by specially assigned persons to reduce the risk of epidemic spread caused by residents going out for medical treatment."}
These solutions had very limited applications due to the weight of the insulating layer and corrosive properties of cement mixtures which require the initial protection of steel.Good effects in protection of steel structures against fire are achieved through the use of intumescent paints capable of producing a sintered layer during a fire, isolating the steel structure from high temperature. The best protection of the structure can be obtained when the sintered layer is formed in a controlled manner with the formation of a layer with an appropriate thickness from 1 mm to 10 cm . The basUnfortunately, it should be noted that typical organic binders used to make intumescent paints have a number of disadvantages. First of all, organic binders undergo thermal decomposition with the release of toxic gaseous products. It also causes a deterioration of the thermal insulation properties of the coating because the sintered layer is cracked and has insufficient cohesion. An additional factor causing the imperfections of the sintered layer is a softening point that is too low and thermal decomposition of organic binders, which disturbs the formation of this layer . All theAt this point, a second inorganic type of intumescent coatings based on alkali silicate should aThe aim of this review is to present the most important information concerning silicone fire-retardant intumescent paints based on a literature review made using the following keywords: intumescent paint, silicone resin, fire protection of steel or aluminum structures, silicone intumescent paint application. Literature review was made using the following databases: Web of Knowledge, Scopus, and Google Scholar. The research also covered Espacenet, Patentscope, and Google Patents, which resulted in the presentation of selected patents relevant to the subject. The overview is divided into the following main sections including discussion focused on the effect of silicone resin structure on its thermal stability, the effect of silicone resin structure, and the type of filler used in these paints on the properties of the char formed during the thermal decomposition of the intumescent paint and most important innovative applications of these paints.The properties of silicone resins differ significantly from those of linear polysiloxanes. The main factors causing these differences are their branched structure and the presence of various types of organic substituents attached to silicon atoms by a Si\u2212C bond. In the synthesis of polysiloxanes, monomers with different degrees of branching are used see .The term \u2018\u2018branched structure\u2019\u2019 means that the polysiloxane contains T or Q units in its chain. The measure of the degree of branching is the ratio of organic groups to silicon atoms (R/Si). The lower R/Si ratio is the higher content of the T units and the branched extent. In the thermogravimetric study of branched and linear polysiloxanes, it was found that thermal stability of branched polysiloxanes was higher as compared to linear ones . The exaAs shown above, the thermostability of methylphenyl silicone resins depends on their structure expressed by the degree of branching R/Si and on the content of phenyl groups. 
This gives a very good opportunity to select the resin with the most appropriate parameters necessary to create a good-quality sintered layer created by intumescent paint.The selection of the appropriate type of resin is of great importance for obtaining the desired final effect of fireproof intumescent paints because the process of a protective layer formation is complex and multi-stage, as detailed in 3PO4 enables further esterification reaction of the hydroxyl groups present in char formers and the polymer binder. A further increase in temperature causes decomposition of esters with the formation of carbon, free acid, water, and carbon dioxide. The decomposition of esters is accompanied by the release of a significant amount of inert gases from the decomposition of blowing agents. The composition and properties of the individual components of the intumescent paint should be selected in such a way that inert gases are released. This causes the protective layer to swell and then harden into char , and sepiolite) , zirconi2 , carbon 2 , nano -Schitosan . Yasir echitosan conducteThe significant progress in the technology of intumescent paints related to the introduction of new binders with increased thermal resistance, such as, for example, silicone resins capable of reacting with properly selected fillers, contributes to the constant trend of increasing the range of applications of these paints. This trend concerns not only the size of the market, but is also related to the development of new types of paints that give good effects not only on steel but also on other substrates requiring very good fire protection, such as plastics, fabrics, cellulose or wooden products. According to the report Research and Markets the markIn a Goldstein Research report pIt should be noted that the intumescent fireproof coating is one of the easiest and most effective methods of protecting materials, used not only for metal surfaces, but also for plastics, steel, wood, electric cables and polymer composites. This method of protection does not cause chemical modification of the substrate, but rather the formation of a protective layer that changes the heat flux acting on the substrate and may inhibit the temperature of its degradation, ignition or combustion . Even inThe stability of the carbonized protective layer is crucial to ensuring fire safety in high-rise buildings. It was found that the addition of non-purified fullerene-containing soot with different structure and graphite\u2019s microparticles allows to obtain a protective layer with increased strength . The useIntumescent paints are also used to protect plastic surfaces. Beaugendre et al. investigGood durability of the coatings and adhesion between all types of coatings and the substrate were found, indicating the possibility of using intumescent paints to protect glass fiber-reinforced epoxy composites. It can be stated that the protective layers formed from coatings obtained from intumescent paints are increasingly used not only to protect steel structures but also other materials such as building materials and plastics. There are a number of original recipes of these compositions enabling the protection of buildings, e.g., tunnels, vehicle protection and even flexible laminates used in displays in modern electronics.The use of silicone resins as binders or their components in intumescent paints is important to obtain the desired parameters of the fireproof layer. 
The variety of structures of silicone resins and the degree of their branching enables the selection of a binder with appropriate thermal stability so that its thermal degradation takes place at the appropriate temperature at which the sinter layer is formed and that the paint layer does not soften too early, causing it to run off. Based on the literature review, it has also been shown that the parameters of the protective layer are also influenced by the type of organic substituents in the structure of the silicone resin, especially the appropriate content of phenyl groups increasing the thermal stability of the binder. The selection of appropriate fillers has a significant impact on the parameters of the protective layer. Many studies have found that fillers such as chalk, organoclay or expanded graphite have the ability to integrate into the structure of the protective layer formed with the use of silicone resin as a paint binder. This is due to the presence of reactive silanol groups capable of reacting with the functional groups present in the fillers. The properties of the protective layer are also influenced by the structure of the fillers, especially fibrous or layered, which strengthens the mechanical properties of the protective layer. These properties significantly affect the insulation parameters. Based on the examination of many factors influencing the insulating properties of the intumescent layer, it was found that the net heat flux through this layer and the layer structure, mainly in terms of its porosity, as well as the multilayer structure seem to have a significant effect. The material on which the protective layer is placed and their compatibility are also important. The test results correlate well with the results of modelling the insulation of protective layers. Taking into account so many different factors, it seems reasonable to use mathematical modelling, which is a good engineering tool. However, due to the constantly developing area of covering construction and building materials, it is necessary to further develop this tool, taking into account also the scope of fire safety.The unique properties of intumescent paints, which enable adequate protection of not only steel structures, but also other building materials and plastics, contribute to the constant increase in the use of these paints, thus increasing the safety of people and property during a fire by extending the time necessary to evacuate endangered objects.Summarizing the information presented and taking into account the architectural tendencies towards increasingly daring, tall forms of steel structures, it should be assumed that the market of intumescent firestop paints will continue to grow intensively. High aesthetics of protective paint coatings and their low weight are advantages inherent in the construction technology of such objects, which are difficult to obtain in other ways. New publications gradually reveal new technical solutions in this field, introducing systems of hybrid paints, intumescent paints with the addition of retardants, new, much more flexible paint binders, as well as new paints using the experience of advanced nanotechnology, allowing for better effectiveness of the fireproof barrier of the coating at lower coating thicknesses."} +{"text": "All three subfamilies of herpesviruses undergo a maturation phase during replication that initiates in the nucleus of the infected cell with encapsidation of viral DNA to form nucleocapsids. 
These nucleocapsids are transported across the nuclear membrane into the cytoplasm utilizing a sophisticated nuclear egress complex (NEC). The cytoplasmic phase of virus maturation includes tegumentation and envelopment of nucleocapsids. Herpesvirus tegument proteins play important roles in maintaining the stability of capsids and directing the acquisition of the virus envelope. The maturation concludes with the envelopment of nucleocapsids, which occurs at modified host membranes in the cytoplasm and exploits host vesicular trafficking. The entire process of virus maturation is orchestrated by protein-protein interactions and enzymatic activities of virus and host origin. Close et al. focus on the contribution of the endocytic recycling compartment to human cytomegalovirus (HCMV) maturation and egress. They make a strong case that cellular transport systems are engaged by HCMV in a systemic manner for successful maturation and egress. They utilize tools in advanced transcriptomics and proteomics married with computational analysis to come to these conclusions. Earlier studies by the same group as well as other groups have alluded to this engagement process. Close et al. argue that the composition and behavior of endosecretory organelles change during the biogenesis of the cytoplasmic virion assembly complex (cVAC). Infection-associated changes in gene expression suggest shifts in the balance between endocytic and exocytic recycling pathways, leading to the formation of a secretory trap within the cVAC. Also, a shift toward outbound secretory vesicle trafficking indicates a potential role of the cVAC in virion egress. The analysis of signaling pathways leads them to model the behavior of the endocytic recycling compartment (ERC) during HCMV replication. Herpesvirus maturation remains one of the least studied yet most complicated parts of the virus life cycle, and numerous host and viral factors appear to influence the outcome. The current collection contains three articles contributed by nine authors. These articles have already gathered nearly 5,000 views and continue to gather attention from scientists and non-scientists alike who are willing to delve deep into the mysteries of herpesvirus maturation. The first article, by Grosche et al., discusses HSV-1 replication in monocyte-derived dendritic cells (DCs). HSV-1 completes its gene expression profile in immature as well as mature DCs; however, lytic infection only occurs in immature DCs, and mature DCs fail to release infectious progeny into the supernatant, pointing toward differences in virus maturation and release. This article provides a commentary on viral as well as host factors responsible for these differences. The article by Heilingloh and Krawczyk focuses on the significance of the production of non-infectious L-particles during HSV infection and discusses their biological functions. These particles are mostly composed of viral tegument proteins and are devoid of viral DNA and capsids. L-particles seem to undergo maturation and egress similar to those of infectious particles, and their generation is therefore interesting to investigate in order to understand the process of virus maturation. We believe that maturation of herpesviruses will be an area of continued interest in virology as new virus and host players continue to emerge. The dramatic reorganization of the host endosecretory system leading to the formation of the cVAC has mainly been appreciated in HCMV-infected cells and deemed the site of virion assembly.
The debate is still on the floor whether this cVAC is a functional component of the virus maturation pathway or is merely an after-effect of virus replication. Although a typical cVAC is not seen during infection with other herpesviruses, a perinuclear site of virus maturation in cytoplasm has been recognized in most. Future studies in this area should focus on the identification of functional interactions among host and viral factors that determine virus maturation as well as structural characterization of the site of virus maturation in cells using advanced techniques in fluorescence and electron microscopy.All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Leonardo da Vinci (1452\u20131519) presented the first morphological drawing of the coronaries of animals in his famous anatomical sketches (c 1485 Florence).2 In Padua, Vesalius (1514\u20131564) published his classical work De Humani Corporis Fabrica (1543) with the first anatomical drawing of the coronaries from autopsy studies of the human body.3 Also, at the illustrious University of Padua, William Harvey discovered the functional system of blood circulation in 1628 laying the foundations of modern experimental medicine.4 On 21 July, 1768 William Heberden delivered a lecture before the Royal College of Physicians of London on a disorder of the breast,5 a form of chest pain which he describes as follows: \u201cBut there is a disorder of the breast marked with strong and peculiar symptoms, considerable for the kind of danger belonging to it, and not extremely rare, which deserves to be mentioned more at length. The seat of it and the sense of strangling and anxiety with which it is attended, may make it not improperly be called angina pectoris\". After his masterly description of the chest disorder which Heberden called angina pectoris, it took more than 150 years before the syndrome was linked to its true etiology.The coronary circulation was discovered in the 13th century by Ibn al Nafis (1213\u20131288).6 For a long period, this led to the hypothesis of Jenner that calcifications were the origin of angina pectoris as has been published in 1793 (London).7 But as a result of the animal experiments of John Erichsen in 1842 (London),8 in which he ligated the proximal coronary arteries it became clear that occlusion of the coronary arteries may cause a fatal cardiac event: \u201c\u2026that the arrest of the coronary circulation produces a speedy cessation of the heart\u2019s action. Secondly, that an increase in the quantity of blood sent into, or retained in the muscular fibre of the heart, produces a corresponding increase in the activity of that organ.\u201dIn the meantime Morgagni published on calcifications in the coronary arteries of an old male.9This was the beginning of coronary ligation experiments for physiological studies of myocardial perfusion to the present day. In 1907, the visualisation of the coronary arteries by X-ray was published in an atlas composed from the analysis of human cadavers by Jamin and Merkel.10 This publication triggered the interest to visualise the coronary arteries in the living human body. 
Rosthoi performed the first left ventriculography and coronary visualisation in an animal experiment in 1933.11 Radner from Sweden realized the first \u201cin vivo\u201d coronary angiogram by direct sternal puncture of the ascending aorta in the year 1945.12 After this very drastic approach followed a period when catheters were inserted in the arterial system from the femoral artery and contrast was injected in the aorta at the level of the aortic root, while selective coronary contrast injection in the coronaries was regarded as a potential cause of iatrogenic mortality and was avoided. On 30 October, 1958, during such an angiographic procedure performed by Sones, the catheter accidentally stuck in the main coronary stem and the first selective cine frame coronary arteriogram was recorded.13 To the astonishment of the medical community, the patient survived without any sequalae. This positive outcome promoted diagnostic coronary angiography for the diagnosis of morphological coronary changes as a proof for the origin of chest pain, because those changes were regarded to be the key etiology for hypoxemia of the myocardium. Diagnostic coronary angiography and percutaneous transluminal angiography were developed through the 1960s and 1970s by the radiological research of Dotter14 with diminishing procedure related complication and mortality rates. As radiologist Gruentzig performed the first percutaneous transluminal coronary angioplasty on 16 September, 1977 in Zurich Switzerland.15 The nonsurgical treatment of coronary stenosis by angioplasty unleashed the medical specialty of the interventional cardiologist focusing on the treatment of coronary stenosis by angioplasty and stenting as the solution to prevent myocardial ischemia and hypoxemia. However, this monocausal approach led to substantial overdiagnosis and overtreatment of coronary artery stenosis.After this long period of hypothesising and speculating by the medical scientific community on the origin of angina pectoris, in 1928 Chester Keefer and William Resnik stated that from all available evidence angina pectoris is caused by anoxemia of the myocardium.16 In 2001, 4-Multi Detector CT Systems became available with rotation times lower than 500 ms enabling the noninvasive visualisation of the coronaries.17 It took almost 20 years of further development of coronary CT angiography before it could establish a major position in the diagnostic work-up of patients with chest pain. Systematic population research with CT examinations showed that the majority of people will develop coronary artery calcification and stenosis with ageing without any consequences for the oxygenation of the myocardium. This led to the insight that the majority of coronary stenoses do not need treatment, because there is no effect on the perfusion of the myocardium by flow redistribution and collateral flow mechanisms. This in turn resulted in a sharp rise of noninvasive diagnostic CT evaluation of the coronaries in patients with chest pain to rule out the many different pathologies that were already described by Heberden in 1768. Moreover, noninvasive coronary CT angiography is now used to diagnose and select the coronary pathologies that need (immediate) revascularization therapy by percutaneous coronary intervention or coronary artery bypass grafting. 
We reached the point that invasive diagnostic coronary angiography is hardly ever indicated and invasive procedures should be reserved for coronary treatment procedures alone.In 1988, the first continuously rotating CT systems became clinically available with rotation times lower than 1s.18 a new era of noninvasive coronary imaging opens up. The study proved that replacing the diagnostic work-up by noninvasive coronary CT imaging in patients with chest pain, as a first-line modality, results in a substantially lower mortality rate compared to a diagnostic strategy without this noninvasive diagnostic modality. Only 6 months after the SCOT-HEART publication, the final evidence was published that noninvasive myocardial perfusion MR can rule out all patients with significant coronary pathology but without any impact on myocardial bloodflow.19With the recent publication of the SCOT-HEART study,BJR special feature is dedicated to these landmarks and aims to open up the horizons of the many new applications that will dramatically change the current medical practice. The special feature includes an overview of the SCOT-HEART trial and its impact on CCTA for patients with stable ischemic heart disease20, as well as a review of cost-effectiveness for imaging stable ischemic disease21. Furthermore, the collection gives a detailed review of the potential for functional coronary and cardiac CT imaging beyond the evaluation of the coronary artery lumen22, the potential role of noncardiac findings in risk stratification23 and the role of machine learning to drive forward enhanced imaging data analysis.24,25 In addition, the role of MRI for the assessment of chest pain is of great importance26\u201328, as is the role of imaging in the evaluation of heart valve disease29. Lastly, novel methods for the evaluation of heart viability and coronary artery disease30, with a focus on developing ways to assess vulnerable plaque31,32, are coming to the fore and are reviewed.This BJR special feature marks the moment of publication of the first hard evidence that noninvasive coronary CT imaging in patients with chest pain saves lives compared to current medical practice and at the same time is a lot less harmful for the patient, costs less and is more effective. The special feature appears also at the moment of 125 years of publishing radiological research in BJR. BJR is the oldest radiology journal in the world and this excellent collection of articles demonstrates that the journal remains at the cutting edge of radiological research publishing. This fascinating collection will be of value to any medical professionals interested in stable chest pain. The Guest Editors would like to thank all of the authors for contributing their work as well as the expert reviewers who helped review it. We hope our readers enjoy the collection!This"} +{"text": "Testicular androgens during the perinatal period play an important role in the sexual differentiation of the brain of rodents. Testicular androgens transported into the brain act via androgen receptors or are the substrate of aromatase, which synthesizes neuroestrogens that act via estrogen receptors. The latter that occurs in the perinatal period significantly contributes to the sexual differentiation of the brain. The preoptic area (POA) and the bed nucleus of the stria terminalis (BNST) are sexually dimorphic brain regions that are involved in the regulation of sex-specific social behaviors and the reproductive neuroendocrine system. 
Here, we discuss how neuroestrogens of testicular origin act in the perinatal period to organize the sexually dimorphic structures of the POA and BNST. Accumulating data from rodent studies suggest that neuroestrogens induce the sex differences in glial and immune cells, which play an important role in the sexually dimorphic formation of the dendritic synapse patterning in the POA, and induce the sex differences in the cell number of specific neuronal cell groups in the POA and BNST, which may be established by controlling the number of cells dying by apoptosis or the phenotypic organization of living cells. Testicular androgens in the peripubertal period also contribute to the sexual differentiation of the POA and BNST, and thus their aromatization to estrogens may be unnecessary. Additionally, we discuss the notion that testicular androgens that do not aromatize to estrogens can also induce significant effects on the sexually dimorphic formation of the POA and BNST. Sex differences in the structures of the brain are considered to underlie sex-specific functions of the brain and brain functions that differ between sexes or genders. The mechanisms by which the brain is sexually differentiated have not yet been completely elucidated; however, they have long been studied using animal models, especially rodents. Based on accumulated data, androgens secreted from the testes during the perinatal period are converted to estrogens in the brain, wherein the neuroestrogens masculinize and defeminize the brain. Neuroestrogens are essential but not sole factors in the sexual differentiation of the brain. There are other factors that significantly contribute to the brain sexual differentiation. The processes of brain sexual differentiation require sex chromosome genes\u2019 expression in the brain and gonaThe POA and BNST show morphological sex differences that are related to sex-specific brain functions . The numIn the POA of rats and mice, there are two sexually dimorphic nuclei that have been identified to date. The sexually dimorphic nucleus of the POA (SDN-POA) exhibits male-biased sex differences in volume and the number of neurons , 1980. TAnother sexually dimorphic nucleus in the POA is the anteroventral periventricular nucleus (AVPV), which is larger and contains more neurons in females than in males . The AVPThe principal nucleus of the BNST (BNSTp) is a subnucleus of the BNST showing male-biased sex differences in size and neuron number , 1992. LNeuroestrogens originating from testicular androgens affect the POA and BNST in the perinatal period to organize sexually dimorphic structures in a variety of modes of action . As mentBax gene did not affect the number of Calb neurons in the CALB-SDN of mice in both sexes and AR-kPerinatal testicular androgens induce masculinizing and defeminizing effects on the POA and BNST of rodents through binding to ER after conversion to estrogens in the brain rather than by binding to AR directly, resulting in sex differences in glial and immune cells, dendritic synapse patterning, and specific neuronal cell groups. Interactions among immune cells, glial cells, and neuronal cells under the influence of neuroestrogens is a prerequisite for producing the sex difference in dendritic synapse patterning in the POA. Sex differences in specific neuronal cell groups in the SDN-POA, AVPV, and BNSTp may be established by controlling the number of dying cells by apoptosis or phenotypic organization of living cells that are influenced by neuroestrogens. 
Neuroestrogens binding to ER modulate the expression of the target genes at the transcriptional level, but also modulate gene expression by epigenetic regulation, which ensures the long-lasting effects of neuroestrogens beyond the perinatal period. Testicular androgens in the peripubertal period also contribute to the sexual differentiation of the POA and BNST, but aromatizing them to estrogens may not be necessary. Thus, peripubertal testicular androgens can act via AR directly to masculinize the sexually dimorphic nuclei.MM and ST prepared the manuscript. Both authors contributed to the article and approved the submitted version.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Metal-organic frameworks (MOFs) offer a unique variety of properties and morphology of the structure that make it possible to extend the performance of existing and design new electrochemical biosensors. High porosity, variable size and morphology, compatibility with common components of electrochemical sensors, and easy combination with bioreceptors make MOFs very attractive for application in the assembly of electrochemical aptasensors. In this review, the progress in the synthesis and application of the MOFs in electrochemical aptasensors are considered with an emphasis on the role of the MOF materials in aptamer immobilization and signal generation. The literature information of the use of MOFs in electrochemical aptasensors is classified in accordance with the nature and role of MOFs and a signal mode. In conclusion, future trends in the application of MOFs in electrochemical aptasensors are briefly discussed. The scheme of the reactions is outlined in These protocols are based on the fact that specific aptamer\u2013analyte interactions result in the implementation of bulky inert molecules preventing the access of redox-active species to the electrode surface and hence decreasing their signal recorded by voltammetry. Alternatively, increased charge transfer resistance can be recorded in the presence of the 3\u2212/4\u2212 redox indicator is applied so that the sensitivity of both methods is expected to be close to each other. However, voltammetry often shows slightly lower limits of detection (LOD) than EIS contrary to the common behavior of similar immune- and DNA sensors.For this reason, the electrodes are additionally treated after the aptamer immobilization with inert organic thiols, preventing such adsorption. 6-Mercaptohexanol is used in many protocols due to its hydrophilicity and easy attachment to the naked part of the electrode. In both EIS and DPV measurements, a 3\u2212/4\u2212 indicator remains the most frequently used in aptasensors, other species have found increasing application for signal generation. Some of them denoted as labels are attached to the carrier or electrode surface while other ones (redox indicators) are diffusionally free and can be accumulated in the surface layer due to analyte binding. In case of MOFs, they can serve as specific sorbents for redox-active species. Their activity is then used for the signal measurement. Thionine 3\u2212/4\u2212 to a rather small extent if common approaches of the biosensor assembling are used.All of the above mentioned is applicable for electrochemical aptasensors utilizing MOFs to enhance surface square, immobilize aptamers, and generate redox signals. 
The use of bulky and electrochemically inactive aptamer molecules makes the aptasensor behavior sensitive toward the conditions of electron transfer. Contrary to electroconductive carbonaceous materials, MOF particles offer a shuttle mechanism of the electron exchange with participation of both free and immobilized mediators, including the metals as counterparts of the MOF structure. Such an electron transfer path is easily broken by the inclusion of analyte molecules. For this reason, aptamers based on the MOF structures often show a record sensitivity of detection especially for small analyte molecules that affect the transfer of conventional redox indicators like [Fe(CN)Although most of the articles devoted to the electrochemical aptasensors based on MOF materials do not provide a comprehensive comparison of the characteristics with conventional analogs, it can be concluded that introduction of MOF particles can decrease the LOD values by at least one order of magnitude against the application of similar heterogeneous mediators of electron transfer. The difference might be even higher if MOF particles are used as parts of indicators attached to the electrode interface via target biochemical interactions with analytes. These benefits follow from the size of the particles and multiplication of the signal due to the much higher number of redox sites involved in the electron transfer. Appropriate examples are presented in Summarizing the modern progress in the development of aptasensors based on the MOF materials, one can conclude that further achievements can be expected from the broadened application of hybrid MOF-on-MOF and MOF@MOF particles that combine the advantages of several metals and linkers and can be easily adapted to a particular measurement mode or signal detection requirements. Thus, the use of such hierarchic materials with Ru bipyridine complex shows high efficiency of electrochemilumenscent measurements . Such hyThe variety of hybrid particles can also be extended by variation of the organic linkers and conditions of the synthesis, which can involve chemical and electrochemical steps and variation of the solvents and temperatures applied. A similar conclusion can be made about MOF-derived materials, where the initially synthesized MOF particles are then calcinated at high temperature or treated with chemical reagents. The products do not formally belong to the MOF family but retain the porosity, specific shape, and high adsorption capacity typical for precursors. Although many such derived materials have an alternative path of synthesis, the use of MOF precursors might be beneficial from the point of view of simplicity, and labor and time intensiveness.The following trends in the further progress of MOF-based aptasensors can be mentioned.Faster progress is expected in the multiplex assay and simultaneous determination of several analytes based on the number of aptamers introduced in the aptasensor structure.Growing interest will be directed to the signal-on aptasensors, where an increased concentration of the analyte increases their response. 
This kind of sensor seems more beneficial both from the point of view of metrological assessment of the results and application of the aptasensors in the real sample assay.Although the term \u2018MOF\u2019 is mostly related to the use of the linkers with carboxylic binding groups, the variety of the organic part of the MOFs will be increased by the introduction of new structures and precursors derived from imidazole fragments and some others exerting an unusual architecture of the internal space of the particles and their morphology.The interest in the synthesis of new MOF materials will be shifted to their implementation in supramolecular structures consisting of supramolecular polymers and aptamer molecules.In general, the high variability of the MOF structures and their easy accessibility as building blocks of aptasensors make it possible to rely on their broad application in medicine, the food industry, and environmental pollution monitoring."} +{"text": "First Person is a series of interviews with the first authors of a selection of papers published in Biology Open, helping early-career researchers promote themselves alongside their papers. Jing Liu is first author on \u2018 ...\u2019. What is your scientific background and the general focus of your lab? While studying for my doctorate, I explored the correlation between oxidative stress and endometriosis under the guidance of Dr Quan and Dr Song. I tried to investigate whether advanced oxidation protein products (AOPPs), as the products and triggers of oxidative stress, could affect the biological behaviours of endometrial cells and further promote the development of endometriosis pathogenesis. Over the years, Dr Quan has focused on the pathogenesis of female infertility and the mechanism of repeated implantation failure in in vitro fertilization. Dr Song pays close attention to the effects of oxidative stress on female reproductive health. How would you explain the main findings of your paper to non-scientific family and friends? In the body, oxidation and anti-oxidation are kept in balance. When the balance breaks down, the organism falls ill. AOPPs are the products of oxidative stress and can also trigger oxidative stress. Our research found that overabundant AOPPs could change the biological behaviours of rat endometrial epithelial cells (rEECs), namely proliferation, migration and apoptosis. At the same time, the oxidative stress products ROS and nitrites in rEECs both increased, and signalling pathways were also activated. We also confirmed in animal models that AOPPs could promote ectopic endometrial tissue growth by inducing endometrial cell proliferation and migration. What are the potential implications of these results for your field of research? Abnormal cell behaviours are closely related to the occurrence of various diseases. Increased proliferation and migration of endometrial epithelial cells could contribute to the progression of endometriosis. Previous studies have stated that the accumulation of AOPPs in body fluids was associated with endometriosis. In our research, AOPPs could induce rEEC proliferation and migration by activating ERK/P38 signalling pathways, and promote the growth of ectopic endometrial tissue.
These findings showed that the accumulation of AOPPs could contribute the development of endometriosis pathogenesis by activating ERK/P38 signalling pathways.What changes do you think could improve the professional lives of early-career scientists?When we focus on our professional research, we should keep a sharp eye on the newest research progress and research techniques in various fields. New research techniques can provide us with creative experiment design. To open our eyes to different fields would help us with active thinking. In the future, multidisciplinary research will be the mainstream and the birthplace of great breakthroughs.\u201cIn the future, multidisciplinary research will be the mainstream and the birthplace of great breakthroughs.\u201dWhat's next for you?I plan to study the correlation between AOPPs and human endometrial cells further, and try my best to explore the effects of AOPPs on the endometriosis-related infertility. As a clinical doctor, I also need improve my clinical skills in assisted reproductive technology and combine basic research with clinical problems to help more women that are infertile."} +{"text": "Objective. The current paper presents an interesting case of facial reconstruction after the excision of a giant basal cell carcinoma located in the orbitofrontal region. Methods. Performing the excision while securing the appropriate oncologic safety margin has determined the appearance of a soft tissue defect that required a complex reconstruction using three regional flaps: frontal, temporal fascial and temporal muscle flaps.Results. After the excision and reconstruction in a single surgical stage, the postoperative result was favorable, the 12 months assessment showing that the patient was satisfied with the aesthetic aspect.Conclusion. Including the orbital exenterations in the excisional treatment of giant neoplasms located in the facial region requires a complex reconstructive plan. The surgical team has to consider the relief of the anatomical structures that are targeted, as well as the necessity of achieving satisfactory aesthetic results while ensuring oncological radicality. The excision of tumors located in the facial region requires the entire plastic surgeon\u2019s armamentarium, considering the margins of oncological safety and subsequently reconstructing the anatomical regions affected by the neoplastic process. Regarding the cephalic region, the patients\u2019 degree of tolerance related to the aesthetic result is very low compared to tumor pathology with another location.1]. The close collaboration between the plastic surgeon and the ophthalmologist during the entire duration of the treatment is essential in order to obtain good and stable results over time [2].The treatment of patients with giant tumors located in the orbitofrontal region should be performed by multidisciplinary teams that include, besides the plastic surgeon, the ophthalmologist, neurosurgeon, pathologist, and oncologist. The necessity of performing the orbital exenteration greatly increases the difficulty of the surgical intervention, contributing equally to the increase of the overall time of the surgery, as well as of the rate of complications associated with it . Performing the intraoperative histopathological examination is essential for the good course of the treatment, the complexity of the subsequent surgical interventions depending largely on the dimensions of the postexcisional defect [4]. 
In situations in which the patient\u2019s pathological personal history is favorable, performing the reconstruction of large soft tissue defects using free tissue transfer flaps is the best solution.The treatment of facial neoplasms extended in the intra-orbital structures is extremely complex, being tolerated by patients with difficulty, in the conditions in which ensuring oncological radicality often requires orbital exenteration [5]. The histopathological analysis is a fundamental element during the therapeutic protocol of neoplastic processes, establishing the surgical indications, the profile of the therapeutic protocol and the postoperative behavior [6]. The international scientific literature dedicated to understanding the particularities and evolution of giant basal cell carcinomas showed that these tumors express neuroactive mediators, also presenting an accelerated rate of cell multiplication [7].The histopathological examination of the giant tumors located in the fronto-orbital region showed that the cell typology was associated with the accelerated growth rate of the neoplastic processes, also having a significant impact on the postoperative prognosis [8].The frontal flap remains the main reconstructive option for soft tissue defects in the nasal region, especially in situations in which large radical excisions are performed [9]. In order to achieve this goal, the temporal muscle flap has proven to be a very good solution, ensuring the premise of rapid healing, with stable results over time.The reconstruction of the upper orbital region is extremely difficult in cases in which total excision of the soft tissue is required, with large areas of bone exposure. These situations require the reconstruction based on muscle and fascia flaps, the rich vascular supply being an essential element in accelerating the healing and integration processes of skin grafts [10]. The temporal fascial flap represents without a doubt a true \u201cseat belt\u201d in terms of reconstruction of the facial region in patients with giant tumors.The reconstruction of the orbital floor was performed with a temporal fascial flap, thus creating the premises for a rapid integration of the skin graft. This type of flap has the advantage of great flexibility, having the particularity of maintaining its viability even under very high degrees of rotation [11], because not infrequently, the initial therapeutic protocol is changed intraoperatively due to the neoplastic extension to the neighboring structures.Knowing in detail the remaining reconstructive resources following the excision is essential to cover the soft tissue defects of large dimensions. Regardless of the anatomical region involved, reconstructive options need to be analyzed from a wide perspective [12]. Giant tumors located at the level of the cephalic region are often associated with mutilating scarring [13]. The invasion of the orbital region by the neoplastic processes contributes in a significant manner to the outline of the mutilation nature of this pathology.The postoperative aesthetic aspect is an extremely important element that must be discussed in detail with the patient [14]. However, it should be mentioned that basal cell carcinoma is associated with an increased healing rate and consequently with a higher postoperative prognosis, compared to neoplastic pathology, such as squamocellular carcinoma or melanoma [15]. 
Regarding the presented case, the postoperative aesthetic result was considered satisfactory by the patient.Performing the orbital exenterations to ensure the radicality of the oncological treatment is the main factor that contributes to the decrease of the patients\u2019 satisfaction regarding the postoperative aesthetic result [Ensuring the radical excision of the tumor is the main component that influences the therapeutic algorithm.Postexcisional reconstruction is more difficult as the size of the defect is larger. In these situations, the reconstructive protocol should include an overview of all available options and their possible associations.Performing complex reconstructions involving the use of the temporal muscle flap associated with the temporal fascial flap and the frontal flap is possible and ensures good and stable results over time.Conflict of interestThe authors declare no conflict of interest.All authors agree with the publication of this manuscript.This clinical investigation complies with the standards of the Ethics Committee of the \u201cBagdasar-Arseni\u201d Clinical Emergency Hospital and adheres to the principles of the Declaration of Helsinki.The patient was informed and signed the informed consent to participate in this study."} +{"text": "Macroscopic fields such as electromagnetic, magnetohydrodynamic, acoustic or gravitational waves are usually described by classical wave equations with possible additional damping terms and coherent sources. The aim of this paper is to develop a complete macroscopic formalism including random/thermal sources, dissipation and random scattering of waves by environment. The proposed reduced state of the field combines averaged field with the two-point correlation function called single-particle density matrix. The evolution equation for the reduced state of the field is obtained by reduction of the generalized quasi-free dynamical semigroups describing irreversible evolution of bosonic quantum field and the definition of entropy for the reduced state of the field follows from the von Neumann entropy of quantum field states. The presented formalism can be applied, for example, to superradiance phenomena and allows unifying the Mueller and Jones calculi in polarization optics. It is generally believed that low frequency waves appearing in the macroscopic world such as various types of mechanical waves including acoustic ones, magnetohydrodynamic, radio-frequency electromagnetic or gravitational waves can be successfully described using classical wave equations with external sources . This isThe aim of this paper is to propose a novel approach to macroscopic description of fields based on the notions of single-particle density matrix and averaged field . This formalism allows computing all additive quantities including the entropy of the macroscopic field also introduced here. The definition of entropy involves maximum entropy principle applied to the von Neumann entropy of quantum field. A new family of quantum Markovian master equations is proposed, generalizing the previous framework of quasi-free quantum dynamical maps and semigroups ,3,4,5. IFirst Quantization interpretation, where the classical field is treated as a quantum wave function of the corresponding (quasi)particle. The further step called Second Quantization allows describing irreversible processes in terms of Markovian Master Equations for density matrices acting of the corresponding bosonic Fock space. 
In the final step, we develop the reduced state of the field formalism and apply it to the case of thermal sources and polarization optics.To illustrate in the simplest way the quantum features of classical fields, we begin with the discussion of light polarization in terms of Stokes parameters and its quantum-mechanical interpretation. Then, we present a complete description of a quantum field in terms of modes and its In optics, polarized light can be described using the Jones calculus, while the partially polarized one is treated using Mueller calculus .Jones vector and contains both amplitudes and phases of two orthogonal components of the wave electric field. The relevant parameter is the normalization of the Jones vectorI, and, using quantum picture of light, to the averaged photon number N. All those interpretations of Stokes matrixStokes parameters and form a four-dimensional Stokes vectorConsider first a monochromatic plane wave of light propagating along Axis 3 in a Cartesian frame with the basis complete description of polarization state of the monochromatic light beam. This assumption lies behind the Mueller calculus, which describes the action of any linear optical device by Stokes anticipated that those parameters provide a Poincare sphere. Therefore, a classical mixed state of polarization corresponding to a partially or completely polarized monochromatic light beam with a given intensity should be described by a probability measure on the Poincare sphere. Hence, the set of all mixed states is an infinite-dimensional simplex of all probability measures on the Poincare sphere generated by extreme points\u2014the Dirac measures concentrated on all points of the sphere. On the other hand, in Stokes formalism, any mixed state of polarization is given by the three-dimensional vector with the length smaller or equal to Bloch sphere. Therefore, one can say that Stokes was the first who discovered the quantum nature of light, but similarly to Columbus was not aware of the meaning of his discovery [Although the Stokes matrix/vector is constructed from classical correlations between components of the classical electric fields, the above completeness assumption is not consistent with the classical probabilistic scheme. Namely, by treating polarization as a classical dynamical variable, each fully polarized state of light satisfying the equality in Equation with a fiscovery .Jones calculus.The equivalence mentioned above sheds a new light on the Jones and Mueller calculi, which are useful tools in polarization optics. Namely, using the well-known results from the quantum theory of open systems ,8,9, we Equation is equivThe general Mueller map is completely positive but not trace preserving as The natural question, related to the thermodynamic properties of polarized light, is the definition of entropy for a monochromatic light beam with a given polarization state described by x is the position vector and k denotes a discrete index. The modes evolve in time according to the formulaIn this paper, we restrict ourselves to the classical field occupying a finite volume and hence described by the set of complex modes first-quantization modes, (1)(2)the classical energy of the field mode In the picture of From now on, we identify the classical field configuration represented by the linear superposition of modes with the corresponding vector in the Hilbert space of the first quantization. 
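As a worked illustration of the Jones/Stokes correspondence and of the Mueller–Jones maps mentioned above, the following Python sketch builds the Stokes vector of a pure Jones vector as S_i = Tr(sigma_i rho) and a Mueller–Jones matrix as M_ij = (1/2) Tr(sigma_i J sigma_j J^dagger). These are the standard textbook relations in one common sign convention; they are a numerical companion to the discussion, not a reproduction of the paper's own equations.

```python
import numpy as np

# Pauli-type basis used for Stokes parameters; the ordering/sign convention
# (sigma_1 = diag(1, -1)) is one common choice and may differ from the paper's.
SIGMA = [
    np.eye(2, dtype=complex),                      # sigma_0
    np.array([[1, 0], [0, -1]], dtype=complex),    # sigma_1
    np.array([[0, 1], [1, 0]], dtype=complex),     # sigma_2
    np.array([[0, -1j], [1j, 0]], dtype=complex),  # sigma_3
]

def stokes_from_jones(e):
    """Stokes vector S_i = Tr(sigma_i rho) of a pure Jones vector e = (E_x, E_y)."""
    rho = np.outer(e, e.conj())                    # 2x2 coherency (Stokes) matrix
    return np.real([np.trace(s @ rho) for s in SIGMA])

def mueller_from_jones_matrix(J):
    """Mueller-Jones matrix M_ij = (1/2) Tr(sigma_i J sigma_j J^dagger)."""
    M = np.zeros((4, 4))
    for i in range(4):
        for j in range(4):
            M[i, j] = 0.5 * np.real(np.trace(SIGMA[i] @ J @ SIGMA[j] @ J.conj().T))
    return M

# Fully polarized 45-degree linear state: S = (1, 0, 1, 0) in this convention.
e45 = np.array([1, 1], dtype=complex) / np.sqrt(2)
S = stokes_from_jones(e45)

# Ideal horizontal linear polarizer as a Jones matrix.
J_pol = np.array([[1, 0], [0, 0]], dtype=complex)
M = mueller_from_jones_matrix(J_pol)

print("Stokes vector of 45-deg light:", np.round(S, 3))
print("Mueller matrix of x-polarizer:\n", np.round(M, 3))
print("Output Stokes vector:", np.round(M @ S, 3))
```

Running the sketch reproduces the familiar result that 45-degree light passed through a horizontal polarizer emerges horizontally polarized with half of the input intensity, which is the Mueller-calculus statement of the corresponding Jones-calculus projection.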
Therefore, the only mathematical difference between classical field and the first-quantization interpretation is the chosen normalization. In the first case, we normalize field to the given energy or intensity, while in the second case to one treating classical field as a wave function of a single particle.second-quantization formalism describes the quantum field in terms of bosonic Fock space The vacuum stateThe Fock space is spanned by the vectors obtained by application of all monomials in creation operators acting on the coherent state, which is a joint eigenvector of all annihilation operatorsFor any vector The coherent state can be explicitly written asThe coherent state lifting procedures applied to operators acting on the single-particle Hilbert space.In the following, we restrict ourselves to two classes of operators acting on the Fock space obtained by two different additive observable on the Fock space The single particle observable In particular, for the Hamiltonian, we havemultiplicative unitaryAny unitary operator The action of single-particle density matrixFor the non-interacting field with dynamics governed by linear field equations with possible external classical and coherent sources, the fundamental measurable quantities, such as energy, momentum and angular momentum are additive observables. Therefore, instead of the full density matrix Here, the trace The explicit form of a single-particle density matrix is given byaveraged fieldThe additional information about the phases of the field is contained in the correlation matrix given by the formulaThe definitions in Equations and 20)20) implyreduced description in terms of the pair reduced state of the field contains sufficient information about the most important properties of the macroscopic field interacting with environment. The reduced state of field is called pure if The In phenomenological thermodynamics of equilibrium systems, entropy is a function of a macroscopic state, which is characterized by a small number of controlled external parameters and temperature. Already in this case we have a certain freedom in selecting those external parameters related to our ability of controlling the system. The situation is more complicated for non-equilibrium systems where, typically, thermodynamic parameters including temperature become position-dependent and their choice is determined by the relevant time-scales of local equilibration processes.free energyU is the internal energy, T is the temperature, and S is the entropy), which determines the amount of work extractable from the system. Obviously, both extractable work and entropy depend on the available means of control over the system.A similar problem appears when the notion of entropy is discussed within classical or quantum statistical mechanics. The proper choice of the definition depends on the selected level of complexity of our theoretical framework. This level is determined by the set of accessible observables of the system, which can be measured and/or controlled. Again, this choice is also related to relevant time-scales. Such \u201csubjectivity\u201d in the definition of entropy does not lead to any inconsistencies. 
Namely, the basic thermodynamical quantity with direct physical interpretation depending on entropy is the N identical particles in a finite volume.The complete microscopical and statistical description of the state of such system is given by the N-particle probability distribution To illustrate the problem of selection of complexity level, we consider the classical gas of The natural choice for the entropy of such state is the Gibbs expressionHowever, the typical means of control over the gas are based on additive observables which does not involve correlations between individual particles. Therefore, for practical purposes, the statistical description of gas in terms of marginal single-particle probability distribution Equation in the cN-particle probability distributions with the same marginal Maximum Entropy Principle applied to the single-particle reduced description [Among the Equation can be tcription .We follow the analogous reasoning for the case of macroscopic field described in terms of the reduced state of the field quasi-free state on the Fock space, generated by the additive observable Consider first the One can easily compute the reduced state of the field corresponding to the state in Equation obtaininTo include also macroscopic coherence, one can apply the Weyl unitary transformation to produce the new stateThe reduced state of the field corresponding to the state in Equation is now 36)) are rmalism) .In the absence of coherent source, the equations become decoupled. Notice that, only in the absence of random source Although this type of equations is quite frequently used, its applicability is limited to classical coherent sources, zero-temperature environment and the absence of random scattering.To illustrate the presented formalism of reduced kinetic equations for reduced states of the field (Equations and 36)36)), we The reduced kinetic equations proposed above with possible nonlinear and time-dependent generalizations provide phenomenological tools to deal with macroscopic field interacting with an environment. There exist examples where special classes of the reduced kinetic equations can be derived from the underlying Hamiltonian models of the field interacting with the thermal bath and using appropriate approximations. The most popular approximation scheme combines Born, Markovian and secular ones and leads to the operators decoherence rateThe rates den Rule . The ranquations and 36)\u03b3\u2191(k),\u03b3\u2193.Equations and 41)41) can bsuperradiance conditionsuperradiance and can be studied for various physical implementations: from Hawking radiation of rotating black holes to ocean wave generation by wind [The quantum phenomenon of stimulated emission becomes particularly interesting for moving heat baths interacting with the macroscopic field. The case of rotating heat baths has been discussed in details in where quEquation intoVThe above inequality is a typical condition for shock wave generation, i.e., the velocity of the source must be higher than the critical velocity given by the phase velocity The method for reduced description of quantum field presented above can be applied to the polarization degrees of freedom in the case of linear optics devices. 
Namely, one can consider a light beam consisting of photons occupying the modes with a narrow band of frequencies around the central frequency q denotes the wave vectors of the beam modes, which are concentrated in the vicinity of the leading wave vector (i)(ii)(iii)Here, Equation with theFrom the discussion in the previous sections, it follows that the reduced description of a quantum field involves also the averaged field as an observable object. For polarization of a light beam, this is a two-dimensional complex vector The equation of motion for the averaged Jones vector is decoupled from Equation , but condoubly contracting. In terms of the explicit representationIntegrating the equation of motion (Equation ) betweenS^\u2032=\u03a6(S^)The Kraus decomposition in Equation is not uSimilarly, the corresponding In the most general case, the only condition which connects The condition implies that the difference of two completely positive maps Mueller\u2013Jones maps.Summarizing, in contrast to a general belief that Jones and Mueller calculi refer to physically different situations, we argue that the complete description of the polarization state of light beam needs a pair The set of Mueller\u2013Jones maps form a semigroup with the composition of two elementsFinally, we can settle the question of entropy for polarization of a light beam using the definition in Equation , which nThe mathematically consistent formalism of reduced description of quantum fields presented in this paper has a potentially wide range of applications. It is rather surprising that quantum features, in particular quantum statistics, have such an influence on the macroscopic behavior of wave-like excitations. Even for such macroscopic objects such as ocean waves generated by wind or magnetohydrodynamic waves in stellar atmospheres, the stimulated emission processes characteristic for bosons may lead to various macroscopic phenomena such as superradiance or creation of shock waves. The description in the form of two mathematical objects\u2014averaged field and population numbers \u2014can be seen as a macroscopic manifestation of particle\u2013wave duality in the quantum world. Namely, for coherent sources, zero temperature environment and absent random scattering, the description in terms of wave equations with sources and pure damping is sufficient. When random/thermal effects prevail, the averaged field tends rapidly to zero and kinetic equations for (quasi)particle population numbers govern the evolution of the relevant observables. This is clearly visible for the case of moving sources . When the source velocity approaches the critical value incoherent production of (quasi)particles rapidly grows. At this moment, the classical wave description breaks down, which is interpreted as the creation of shock waves. However, the full macroscopic formalism of the reduced state of the field is valid. The averaged field part becomes irrelevant and the physical phenomena are described by the dynamics of the single-particle density matrix. Such \u201cwave\u2013particle transition\u201d in the modeling of physical phenomena may explain the physical origin of singularities in purely classical theories such as hydrodynamics or general relativity. 
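As a small numerical companion to the reduced-state formalism, the sketch below evaluates the pair (single-particle density matrix, averaged field) for a multimode coherent state, for which the standard relations <a_k> = alpha_k and rho_{kl} = <a_l^dagger a_k> = alpha_k * conj(alpha_l) hold. The amplitudes are invented, and the example only illustrates the bookkeeping of the reduced state, not the dissipative dynamics discussed in the paper.

```python
import numpy as np

# Hypothetical complex mode amplitudes alpha_k of a multimode coherent state
# (purely illustrative values).
alpha = np.array([1.0 + 0.5j, 0.3 - 0.2j, 0.0 + 0.8j])

# Averaged field: <a_k> = alpha_k for a coherent state.
averaged_field = alpha

# Single-particle density matrix rho_{kl} = <a_l^dagger a_k> = alpha_k * conj(alpha_l).
rho = np.outer(alpha, alpha.conj())

# Its trace is the mean total particle (photon) number in the beam.
n_total = np.trace(rho).real
print("mean photon number:", round(n_total, 3))

# A coherent state gives a pure reduced state: rho is rank one, so
# rho @ rho equals n_total * rho, which can be checked numerically.
print("rank-one check:", np.allclose(rho @ rho, n_total * rho))
```

When random or thermal sources dominate, the averaged field decays towards zero while the diagonal of rho (the mode populations) keeps evolving, which is the numerical counterpart of the wave-to-particle transition described above.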
One can speculate that in general relativity when velocities of matter approach the speed of light, classical field description breaks down and the kinetic equations for graviton gas should be applicable."} +{"text": "Following publication of the original article , the autThe Abstract of the PDF has since been corrected in the published article.The publisher apologizes for any inconvenience caused."} +{"text": "This paper reports on a qualitative study on the impact of marriage and civil partnerships for lesbian, gay and bisexual (LGB) couples. Drawing on data from 50 dyad interviews in the UK, US and Canada, the paper investigates the ways in which couples make sense of spirituality in the context of a stigmatised sexuality. For some, the task of arranging a wedding or civil partnership ceremony provided a powerful reminder of their exclusion from mainstream religious denominations. This sense of stigma was also present in later life, when the lack of social esteem granted to same-sex relationships gave rise to a sense of disenfranchised grief . Whereas some participants tended to frame sexuality and spirituality as a kind of binary choice, others resisted this marginalisation from religious and spiritual activities, even if this meant finding a personal sense of spirituality beyond the confines of organised religion."} +{"text": "At the end of a very long life, older adults often experience a significant decline in cognitive function. However, there are older adults who have maintained high levels of cognition and physical health. The purpose of this symposium is to illuminate interdisciplinary findings of cognitive engagement with late-life benefits of cognitive functioning and physical health. Components of cognitive reserve include sociodemographic variables , psychosocial variables and physical and genetic reserve . Based on three major research studies , we highlight important aspects of building cognitive reserve and the implications for cognitive and physical health. The first presentation evaluates the importance of work complexity as a predictor of cognitive and physical health among participants of the SONIC study. Multiple group analyses yielded strong associations of occupational complexity with cognitive functioning for men. The second presentation reports logistic regression findings from the HAAS including education, strength and genetic markers, as well as mental health and their relatedness to cognitive abilities and physical health. The final presentation evaluates a structural equation model from the GCS, highlighting the interrelationship of cognitive reserve components with cognitive and physical health in very late life. We will summarize and integrate the findings for their theoretical and practical implications and provide future directions."} +{"text": "Social isolation and perceived loneliness are major issues as they may place older adults at greater risks for health problems. The objective status of social isolation and the subjective perception of loneliness may have distinct meanings, and their longitudinal reciprocal relationship remains unclear. The purposes of this study were to examine the reciprocal effects of social isolation and loneliness among U.S. adults aged 50 and above and to explore the moderating effect of solitary activities by using the data from three waves of the Health and Retirement Study (HRS) collected in the year 2008, 2012 and 2016. 
The index of social isolation was created by summing five commonly used indicators, including marital status, living arrangement, and three types of social contact. Loneliness was assessed by a summary score of 11 items. Solitary activities included 13 activities with limited or no social interaction. The results estimated by the cross-lagged effects model showed positive reciprocal relationship of social isolation and perceived loneliness across waves: respondents with a higher level of social isolation were predicted to have increased loneliness, and more perceived loneliness was significantly associated with a higher level of social isolation in the following waves. The results also indicated that solitary activity had a direct effect on decreasing loneliness. This study improves the understanding of reciprocal effects of social isolation and perceived loneliness over years and indicates that practice needs to address the issues of social isolation and perceived loneliness at the early stage and provide more opportunities of solitary activities."} +{"text": "Genetic heterogeneity is a well-recognized feature of hepatocellular carcinoma (HCC). The coexistence of multiple genetic alterations in the same HCC nodule contributes to explain why gene-targeted therapy has largely failed. Targeting of early genetic alterations could theoretically be a more effective therapeutic strategy preventing HCC. However, the failure of most targeted therapies has raised much perplexity regarding the role of genetic alterations in driving cancer as the main paradigm. Here, we discuss the methodological and conceptual limitations of targeting genetic alterations and their products that may explain the limited success of the novel mechanism-based drugs in the treatment of HCC. In light of these limitations and despite the era of the so-called \u201cprecision medicine,\u201d prevention and early diagnosis of conditions predisposing to HCC remain the gold standard approach to prevent the development of this type of cancer. Finally, a paradigm shift to a more systemic approach to cancer is required to find optimal therapeutic solutions to treat this disease. In the last decade, numerous studies have provided several and accurate details on the genetic alterations associated with hepatocellular carcinoma (HCC). In particular, the familial genetic alterations responsible for the development of cirrhosis and associated with HCC formation have been entirely unraveled . MoreoveAs shown in Table Based on these considerations, we can assert that the pharmacological principles of potential therapeutic agents targeting genetic alterations have been probably based on the wrong paradigm . First, The failure of these therapeutic strategies reinforces the importance of conventional therapies , which used alone or in combination, remain the only widely accepted clinical treatments for HCC \u201327. TherAn entirely different approach to the cure of HCC could consist of the prevention of the disease through the identification and targeting of the early genetic alterations that anticipate the neoplastic transformation. Unfortunately, this strategy seems also not to succeed in the difficulty to target/correct familial genetic alteration and oncogenic mutations. 
In this context, the HBV transgenic mouse model offers the opportunity to study the process of tumor initiation and progression related to the HBV infection. Another important aspect regarding the development of new potential therapeutic agents against HCC that should be taken into account is the role of metabolic-induced modifications acting as environmental modifiers with the ability to influence a susceptible genetic or epigenetic background."} +{"text": "We report a case of volar fourth and fifth carpometacarpal (CMC) joint dislocation complicated by a hamate hook fracture. The CMC joint was reduced in a closed fashion and temporarily fixed with Kirschner wires. Using intraoperative computed tomography, the displaced fracture of the hamate hook was reduced by open reduction and internal fixation and fixed with a screw. We suggest that this rare injury was caused by overcontraction of the flexor carpi ulnaris and avulsion force from the ligamentous structure around the pisiform, hamate, and metacarpal bones. Dislocation of the carpometacarpal (CMC) joint is a relatively rare injury, and volar dislocation of the CMC joint is less common than dorsal CMC joint dislocation. Nalebuff reported two types of volar dislocations of the fifth CMC joint: volar radial or volar ulnar, depending on the type of ruptured ligament. A 60-year-old man fell from a staircase and landed on the floor with his right wrist hyperextended. He had no history of hand or wrist injury. He presented to a nearby clinic complaining of severe pain and swelling of the right hand. A diagnosis of fourth and fifth CMC joint dislocation was made by radiography. Closed reduction of the dislocated CMC joints was performed by longitudinal traction under sedation. The patient underwent open reduction and internal fixation of the hamate hook and percutaneous fixation of the CMC joint on the following day. The surgery was performed under general anesthesia. Surgical exposure was achieved with a longitudinal skin incision made between the hamate hook and pisiform, which was prolonged proximally at the palmar crease in a zigzag fashion. Guyon's canal was released for the exposure and protection of the ulnar artery and nerve. The pisiform and hamate hook were identified. The fracture site of the hamate hook was located using a longitudinal incision of the palmar carpal ligament. Three months after the operation, CT revealed a gap of the fracture site at the hamate hook. Several studies have reported volar joint dislocation of the fifth CMC joint, mostly without accompanying fracture of the hamate hook [3, 4]. In our case, the exact mechanism of the hamate hook fracture associated with the volar dislocation of the fourth and fifth CMC joint was unclear; however, we suggest that this injury was an avulsion injury, based on previously published studies. Garcia-Elias et al. reported a hamate hook fracture with volar dislocation of the fifth CMC joint and pointed out that this injury was caused as an avulsion injury from the ligamentous structure around the pisiform. To provide an anatomical and biomechanical background for this hypothesis, Pevny et al. showed that the pisohamate and pisometacarpal ligaments are much thicker and stronger than other soft-tissue attachments around the pisiform.
Rayan eIn terms of treatment of hamate hook fractures, most injuries of this type, which are typically caused by repetitive microtrauma or blunt trauma to the palm of the hand from playing golf or baseball, are treated by hook excision. In the report by Gunther et al., the fracture was left untreated, resulting in nonunion without symptoms . AnotherIn our case, open reduction and internal fixation of the fractured hook was performed, taking into account the mechanism of this injury. In contrast to the usual type of hamate hook fracture, we noted a gap between the hook and body of the hamate. An intraoperative three-dimensional CT scan was helpful for obtaining anatomical reduction of the fracture site. The fracture resulted in nonunion, which may have been related to the initial displacement and residual instability around the hamate hook, since the HCS screw length did not exceed the more than half of the hamate body.This is the first report to describe volar dislocation of the fourth and fifth CMC dislocation with a hamate hook fracture and its relationship to the ligamentous structure around the hamate hook. Our case indicates the importance of traction force in the strategic treatment of this injury."} +{"text": "Chemical transformations by heterogeneous catalysis enabled fundamental research and chemical industry to support the technical development and stabilize the prosperity of the human community for \u2013 at least \u2013 the last hundred years. The Haber-Bosch process providing vast amounts of cheap ammonia and its importance for a wide spectrum of industrial branches might serve as a vivid example. Complex changes in the human society and the relation of the latter to nature, especially the protection of the environment, require the optimization of known and development of new chemical processes yielding the targeted products with minimal possible damage to the pristine nature.A deep understanding of the ongoing processes on the interface between the participating phases in heterogeneous catalysis results in optimized materials and enables the development of urgently needed new processes, e.g. in the context of renewable energy storage. In the simplest case, heterogeneous catalysis involves only three general steps \u2013 adsorption of the reactants, reaction of the adsorbed reactants and desorption of the products \u2013 all happening on the active sites located on the accessible surface of the catalysts. One of the key issues in understanding heterogeneous catalysis is, therefore, the question of the formation and atomic decoration of the active sites as well as the role of their electronic structure. The nucleus of active sites is often a transition metal. In the traditional catalytic processes, such active sites may be located on a catalytically non-active support or are indeed a part of the support, most commonly an oxide. The so-obtained catalyst is obviously very complex, one may take as an example the classical Cu/zinc oxide/aluminum oxide catalyst for the methanol synthesis. The understanding of the catalytic mechanism on an atomic-molecular level in such systems is strongly hindered simply by the large number of reactions and processes possible. The use of unsupported catalysts based on metallic alloys reduces the number of possible elementary processes on the surface. 
The large number of different atomic configurations on the surface due to the presence of extended solid solutions in alloys still does not allow for an easy development of more transparent and provable reaction mechanisms \u2013 especially when severe segregation of one of the components is occurring. A way out of this dilemma may be achieved employing intermetallic compounds as unsupported catalysts. Their ordered crystal structures are caused by the special bonding feature \u2013 a coexistence of two- and multi-center atomic interactions \u2013 and contain atomic arrangements markedly deviating from the closest packing patterns characteristic for alloys. For the same bonding reasons, the electronic structure of intermetallic compounds also differs strongly from the ones of elemental metals or their alloys. Thus, the use of intermetallic compounds as catalysts allows to influence both \u2013 geometrical and electronic \u2013 factors controlling the general steps of heterogeneous catalysis mentioned above. Further typical but not general advantages of the intermetallic compounds are their high thermodynamic stability , chemical resistivity, mechanic stability, etc., which may play a key role considering the practical manufacturing of heterogeneous catalysts.All this gives a rough idea about the huge research field of heterogeneous catalysis on intermetallic compounds. The special issue of Science and Technology of Advanced Materials offers to the reader current results obtained by research groups around the world on this topic.This Special Issue was initiated together with our honorable colleague \u2013 Professor An Pang Tsai from the Tohoku University in Sendai, Japan. Much to our regret, An Pang Tsai passed away during the work on this project. The Editors of the Special Issue and the authors of the included publications dedicate this work to the memory of An Pang Tsai."} +{"text": "This poster presents the results of an intervention study exploring how engagement in telemedicine at home affects chronic patients\u2019 perceptions of usability and acceptability of the employed equipment, perceptions of its psychosocial impact, and intention of future use in the context of population aging. A purposively selected sample of 103 patients (mean age: 58 years) with chronic conditions (diabetes and/or hypertension) recruited in a community health center in Slovenia tested a home telemedicine system (TMS). After three months of utilization, an assessment of the relative importance of the usability and acceptance of TMS as factors influencing the patients\u2019 self-reported psychosocial perception of TMS and intention of future use was performed based on a proposed structural equation model explaining these interdependencies. The results confirmed four of eight tested hypotheses. Notably, the intensity of TMS use was found to affect the evaluation of its usability, the perception of its psychosocial impact, and the intent of future use. Usability was found to be the main factor directly influencing acceptability, perception of psychosocial impact and intent for future use, whereas acceptability did not significantly affect either the perception of the psychosocial impact of TMS or the intent of future use."} +{"text": "The originality of this paper lies in the presentation of a new, innovative method for manufacturing medical screws with a cylindrical head of 316 LVM. This method is unique on a global scale, and its assumptions have been granted patent protection. 
The paper presents selected results of theoretical and experimental research on the developed process of forming of medical screws based on new technology. In the first part of the study a review of the types of screws used in the medical industry is made and the previous methods of their manufacture are described. The second part of the paper presents the assumptions and analysis of the elaborated process of metal forming of medical screws with a cylindrical head and ring thread made of 316 LVM austenitic steel. The theoretical analysis of the new process of forming a screw selected for testing was performed on the basis of numerical simulations. The experimental verification of the proposed theoretical solutions was carried out on the basis of laboratory tests, industrial research and qualitative research. The positive results obtained from computer simulations and experiments confirmed the effectiveness of the developed technology and the validity of its use in future in industrial practice. Austenitic steel of type 316 LVM X2CrNiMo18-15-3) is characterised by high metallurgical purity and is dedicated to medical applications for implants . In orde-15-3 is Austenitic stainless steel obtains significantly higher mechanical properties after cold forming, where the appropriate degree of crushing allows to control the increase in mechanical properties with a decrease in forming properties ,13. The In medical applications 316 LVM steel is used for implantation into orthopaedic connecting wires, pins, skinning nails, bone nails, femoral heads, vertebrae, acetabulum bones, hip joints, knee joints, bones and nail plates, catheters, internal fastening devices, dental implants or staples ,15,16,17At present, medical screws are produced mainly by machining, additive processing and injection moulding methods .The application of machining for the production of medical screws is described in the article ,24. The Among the methods of manufacturing medical screws, one can distinguish the additive processing described in the article ,26. In cOne of the ways of producing medical screws is also the injection moulding method developed by researchers from the Fraunhofer IFAM Institute for Production Engineering and Research of Applied Materials in Bremen. This technology involves the production of biodegradable medical screws made from polylactic acid, hydroxyapatite and medical stainless steel by injection moulding. Depending on the chemical composition of the screws, they can be biodegradable within 24 months. These screws can be manufactured precisely using conventional injection moulding methods, which means that there is no need for additional cavity processing. The resulting medical screws have similar properties to real bone. Their compressive strength is greater than 130 MPa, while a real bone can withstand 130\u2013180 MPa.Despite the existence of these technologies, new solutions are still being sought. Particularly noteworthy are the metal forming processes, which allow forming products with better mechanical and functional properties. 
Replacing the previously used technologies of medical screws production with the new methods of metal forming processing would allow forming products with better quality and at the same time reduce their manufacturing costs .It should be mentioned that the research subject matter undertaken is a response of the scientific world to the needs reported by medical producers for the development of an effective technology for the production of medical screws made of austenitic steel in terms of obtaining products of the highest quality. Therefore, the Lublin University of Technology has undertaken to develop a new method of forming medical screws using metal forming processes. This method is unique on a global scale, and its assumptions have been granted patent protection ,28. ThisThe material used for the test was 316 LVM austenitic steel in the annealed condition with the chemical composition and properties shown in Experimental verification of the assumed process was performed in laboratory conditions in the laboratory of the Department of Computer Modelling and Metal Forming Technology at the Lublin University of Technology and in industrial conditions in the Eurowkret Production Company on industrial machines. In laboratory tests, a specially designed and manufactured thread rolling device a and setExperimental verifications under industrial conditions at Eurowkret Production Company were carried out on industrial machines i.e.,-forging of screw heads was performed on a machine produced by the Zabierz\u00f3w Machinery Factory\u2019s model Pazm 6 using a set of dies shown in -thread rolling of the screw forgings was carried out on the Chun Zu CPR6S cross-wedge rolling machine using the dies shown in -Making cuts on abrasive disc cutting machines and water cooling;-Embedding preparations in the resin;-Grinding on abrasive discs with grits 80, 220 with water cooling;-Three-step polishing using a diamond suspension of 9 \u00b5m, 3 \u00b5m and colloidal silica 0.05 \u00b5m; and-Electrolytic etching in a 10% oxalic acid solution, at 3V.In order to analyse the correctness of the deformation process and to verify the simulation results, macro and microstructural tests were carried out, including a quantitative analysis of the microstructure of the products. Additionally, in order to check the degree of strengthening in selected areas, micro hardness tests were carried out using the Vickers method. Macro and microstructural analysis was performed for screw forgings and raw material with the use of optical microscopy and the observations were made in a bright field. The metallographic specimens for micrIn selected areas of the samples where microstructures were analysed, microhardness tests were also performed using the Vickers method in accordance with PN-EN ISO 6507-1:2006. The tests were performed on the HV 0.5 scale using the Futuretech FM 800 microhardness meter . Additionally, in selected areas, grain size analysis was carried out in accordance with the ASTM E112 standard.The results of the simulation confirmed the possibility of metal forming of medical screw forgings according to the proposed technology. The On the basis of the simulation, more important information was obtained about the analysed process of forming of the forging of medical screw, including the distribution of the intensity of strains, stresses, or the criterion of destruction and forming forces. 
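For readers unfamiliar with the Vickers scale used in the microhardness tests above, the following minimal Python sketch applies the standard HV relation HV = 2*F*sin(136 deg/2)/d^2, with F in kgf and d the mean indentation diagonal in mm, to a few invented HV 0.5 readings; the diagonals shown are placeholders and are not the values measured on the 316 LVM screws.

```python
import math

def vickers_hv(load_kgf, d1_mm, d2_mm):
    """Vickers hardness HV = 2*F*sin(136deg/2)/d^2 with F in kgf and d in mm."""
    d_mean = 0.5 * (d1_mm + d2_mm)          # mean indentation diagonal
    return 2.0 * load_kgf * math.sin(math.radians(136.0 / 2.0)) / d_mean**2

# Illustrative HV 0.5 readings (0.5 kgf load); the diagonals are invented.
load = 0.5
readings = [(0.0560, 0.0572), (0.0531, 0.0544), (0.0515, 0.0522)]

for d1, d2 in readings:
    print(f"d = {0.5 * (d1 + d2) * 1000:.1f} um  ->  HV {vickers_hv(load, d1, d2):.0f}")
```

Smaller diagonals at the same load correspond to higher HV values, which is how the local strain hardening in the heavily deformed regions of the forging shows up in the measurements.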
The results of the laboratory experiment and in real conditions in the Eurowkret Production Company confirmed the correctness of the modelled process of forming medical screw forgings. During the research, the forces needed to form the screw head and roll the screw thread were determined. A comparison of forces measured experimentally and simulated is shown in The microstructure of the drawn wire used to produce the screw consistsOn the basis of macrostructural observations of etching metallographic specimen , areas 1The microstructures of formed screws shown in The microstructure of the analysed screw is characterised by considerable heterogeneity due to differences in plastic deformation values for individual areas. The structural changes are mainly related to the appearance of a significant number of twins, grains cut by the slip bands and deformations in the direction of the material flow.It is therefore necessary to consider the use of appropriately selected post-rolled heat treatment processes in order to homogenise mechanical properties and ensure proper corrosion resistance.The hardness measurements obtained in -Increasing the efficiency of product manufacturing;-Reduction of production costs. This advantage results from lower material, time, labour and energy consumption of the new process. It is possible to reduce process waste by up to 45% compared to machining. A screw produced by machining takes several minutes to several tens of minutes to produce, depending on the shape. In the case of metal forming processes for producing screws: forging the screw head takes a few seconds and rolling the screw thread also takes a few seconds. Due to the reduction of the manufacturing time of the screws according to the new technology, this will result in a reduction of the labour and energy intensity of the process;-Increase in product quality. Higher product quality is associated with a more favourable structure and high surface smoothness as a consequence of the application of plastic working processes, which results in better strength and service properties;-Pro-environmentality. The new process\u2019s environmental friendliness results from production that is less harmful to the environment, i.e., low waste by reducing material losses.This paper presents new possibilities of manufacturing medical screws using metal forming processes. On the basis of theoretical and experimental research, it has been found that in the assumed forming process it is possible to obtain forgings of medical screws with cylindrical head and ring threads of the correct shape. The advantage of the developed technology of metal forming screws over the hitherto applied machining lies in the following:The advantage of the new technology is its versatility. This method can be used to form screws from various metal biomaterials.The numerical analyses carried out confirm the advantages of using computer simulations when developing new technologies . Thanks The laboratory and industrial tests carried out made it possible to determine the optimum technological parameters for the correct implementation of the process of metal forming of screw forgings and to produce technology demonstrators with qualitative tests. It should be mentioned that due to the adopted scheme of the process, the microstructure of the obtained screw forgings is characterized by heterogeneity related to the occurrence of different deformation values in individual areas of the product. 
The quantification of structural change expressed by increase in hardness as a result of hardening and the change in the average grain diameter confirm the results of numerical analyses in terms of the expected deformations in individual areas. It is, therefore, necessary to consider the application of appropriately selected heat treatment processes after the screw thread rolling operation in order to homogenize the structure and mechanical properties and ensure proper corrosion resistance."} +{"text": "Most molecular maturity parameters of the oil samples suggest a maturity level equivalent to the onset of the peak of the oil generative window.The organic geochemistry of six oil samples from the offshore Block 17 was studied by a combination of classical biomarker and extended diamondoid analyses to elucidate source rock facies, the extent of biodegradation, and thermal maturity. Based on molecular data, oils are interpreted as depicting a mixture of two pulses of hydrocarbon generation probably from the Bucomazi and Malembo formations. Geochemical results also gave evidence of mixing of a lacustrine siliciclastic-sourced oil charge and a second more terrestrially derived oil type in the samples analyzed. A single genetic oil family was identified through hierarchical cluster analysis; however, two groups of oils were identified on the basis of their biodegradation levels using the Peters/Moldowan scale. Lower and upper Malembo oils have a slight depletion and a notable absence of Angola is one of the largest crude oil producers in Sub-Saharan Africa. The country produces about 1.8 million barrels per day after a boom from 2002 to 2008 as its deepwater fields began to take off. Currently, crude oil production comes almost entirely from offshore fields off the coast of Cabinda and deepwater fields in the Lower Congo Basin (LCB). Angola began producing oil in 1956, when the company Petrofina discovered the first accumulation near Benfica in the Kwanza Basin, and has estimated crude oil reserves of 13 billion barrels. Most of the proved reserves are located in the offshore parts of the Lower Congo and Kwanza basins, which developed during the Late Jurassic and Neocomian times on the conjugated margins of Africa and Brazil. The study area is located within the sedimentary LCB a, that i2 and is located around 230 km northwest of Luanda and ceased around the late Barremian and early Aptian 127\u2013117 Ma) )Analyzed oil samples from six fields in the Angolan Block 17 originated from the same source rocks consist of a mixture of two generation pulses of hydrocarbons. Biomarker data indicate that the first pulse originated from the Bucomazi Formation lacustrine facies, whilst the second one was generated from the more terrestrially derived facies of the Malembo unit. All the study oils belong to the same genetic type. Differences in the PM levels of biodegradation within the study oils could be explained by factors such as reservoir temperature and/or microbial communities."} +{"text": "The emergence of antibiotic-resistant bacteria presents a major challenge in terms of increased morbidity, mortality, and healthcare costs. The World Health Organization (WHO) states that antibiotic resistance is one of the biggest threats to global health, food security, and development. One promising alternative to treat infections caused by antibiotic-resistant bacteria is phage therapy, where bacteriophages (phages) are used to combat the pathogens. 
Phage therapy has a 100-year history, but it has been largely ignored in Western countries after the commercialization of antibiotics. Now, as antibiotic-resistant bacteria are becoming more common, a renewed interest in phage therapy is emerging. This Special Issue "Phage Therapy: A Biological Approach to Treatment of Bacterial Infections" covers different aspects of phage therapy. The issue altogether includes nine papers, four out of which are research papers, four are reviews, and one is a case study. Two of the published articles concentrate on the isolation and characterization of phages and on setting up a phage depository. Styles et al. describe the analysis of phages infecting Acinetobacter baumannii, which were isolated earlier in hospital wastewater in Thailand. They show that each of the analyzed five phages can infect 22–28% of 150 multiresistant A. baumannii strains and that one of the phages clearly reduces the cytotoxic effect of A. baumannii on human cells in vitro. A second research article shows that Flavobacterium columnare becomes phage resistant in a one-month co-culture of the phage and the bacterium in lake water and describes the isolation and analysis of an evolved phage that can overcome the phage resistance in some of the bacterial mutants resistant to the original phage. Interestingly, three articles in the Special Issue concern phage therapy in aquaculture. A review article by Zaczek et al. discusses different aspects of phage application in aquaculture niches from both theoretical and practical points of view. They handle topics such as microbial spoilage of seafood, phage abundance in aquatic systems, and therapeutic application of phages in the fish industry in practice. The two other aquaculture contributions are original research articles. There are three review articles in the Special Issue that cover human phage therapy from different perspectives. The article by Steven Abedon gives a thorough summary of published information about the combination therapy of phages and antibiotics, and the remaining two reviews address, among other topics, phage therapy against pathogens such as Escherichia coli. The last article of the Special Issue is a case report by Rubalskii et al., describing the phage therapy of eight patients having severe bacterial infections related to cardiothoracic surgery. The authors report the eradication of the pathogenic bacteria in seven out of eight patients for whom antibiotic treatments had earlier failed. This Special Issue represents a multifaceted collection of research and review articles that together cover several aspects of phage therapy: setting up a phage collection and the isolation and analysis of phages, the application of phages in the food industry (as represented by aquaculture), and the treatment of different types of human infections. We hope that the articles in this Special Issue will reach a wide audience and provide new information for professionals working on different aspects of phage research."} +{"text": "The performance of copper selenide and the effectiveness of chemical catalytic reactors are dependent on an inclined magnetic field, the nature of the chemical reaction, the introduction of a space heat source, and changes in both the distributions of temperature and concentration of nanofluids. The significance of increasing radius of nanoparticles, energy flux due to the concentration gradient, and mass flux due to the temperature gradient in the dynamics of the fluid subject to an inclined magnetic field is presented in this report. 
The non-dimensionalization and parameterization of the dimensional governing equations were achieved by introducing suitable similarity variables. Thereafter, the numerical solutions were obtained through a shooting technique together with a fourth-order Runge–Kutta scheme and the MATLAB in-built bvp4c package. It was concluded that, at all levels of energy flux due to the concentration gradient, the reduction in the viscosity of the water-based nanofluid associated with a larger radius of the copper nanoparticles causes an enhancement of the velocity. The emergence of both energy flux and mass flux due to gradients in concentration and temperature affects the distribution of temperature and concentration at the free stream. Conservation of energy is capable of unravelling the nature of the outward flux of energy. Meanwhile, Hort et al.8 once remarked that concentration currents and heat are two driving forces of fluctuations in temperature in the context of conservation of energy. The effect of energy flux due to the concentration gradient on six different gases has also been examined9; it was remarked that kinetic theory is useful for determining the Dufour coefficient of each gas and the temperature difference within the domain10. The dynamics of peristaltic flow through a channel of width 2a subject to energy flux due to the concentration gradient and mass flux due to the temperature gradient was closely examined by Hayat et al.11. It was shown that the temperature distribution is an increasing property of both the Dufour and Soret effects, although the increase in the temperature distribution is more pronounced near the surface. The reverse is the observed effect of the Dufour and Soret phenomena on the concentration, as both cause it to decrease. The roles of energy and mass fluxes due to temperature and concentration gradients are major in electrochemical processes, the dynamics of gases, chemical catalytic reactors, and the production of copper selenide, sequel to the pioneering work by Louis Dufour. A subsequent study12 led to the conclusion that a larger Dufour number is recommended for gaseous mixtures in which energy flux due to the concentration gradient is significant. Following the suggestion by Partha et al.15, the Lewis number is more appropriate than the Schmidt number in characterizing heat and mass transfer when energy flux due to the concentration gradient and mass flux due to the temperature gradient are significant. This led to a robust analysis of the interconnectedness of the Lewis number, Dufour number, and Soret number. A related study13 examined the most important source of heat conduction within regions where magnetic fields are absent or weak, and further associated the increase in effective thermal conductivity with growth in the Dufour effect. The major reason why temperature increases at higher magnetic strength was associated with the fact that heat conduction in the perpendicular direction diminishes whenever the magnetic field is intensified; these conclusions are not affected by the correction published as Ref.14. Further examinations of energy and mass fluxes were reported by Linz16, Narayanan and Rakesh17, and Lin and Yang18. Intrinsic magnetic properties of magnetite vary as its diameter changes (i.e. with a higher ratio of surface to volume). As the size of nanoparticles becomes smaller, even the superparamagnetic nature of magnetic nano-sized particles changes19. Ashraf et al.20 remarked that changes in the radius of nanoparticles affect the characteristics of both the interphase and the nanoparticles. 
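The solution strategy summarised above (similarity reduction to a boundary value problem, then a shooting iteration wrapped around a fourth-order Runge–Kutta march, cross-checked with MATLAB's bvp4c) can be illustrated on a generic two-point boundary value problem. The sketch below is not the study's code: the model equation y'' = -y, the boundary values and all names are placeholder assumptions chosen only to show the shooting/RK4 mechanics in Python.

```python
import numpy as np

# Illustrative two-point BVP:  y'' = -y,  y(0) = 0,  y(1) = 1.
# (Placeholder model equation; the paper's similarity equations are longer,
#  but the shooting + 4th-order Runge-Kutta mechanics are the same.)

def rhs(x, state):
    """First-order system: state = [y, y']."""
    y, dy = state
    return np.array([dy, -y])

def rk4_integrate(rhs, x0, x1, state0, n_steps=200):
    """Classical 4th-order Runge-Kutta march from x0 to x1."""
    h = (x1 - x0) / n_steps
    x, state = x0, np.asarray(state0, dtype=float)
    for _ in range(n_steps):
        k1 = rhs(x, state)
        k2 = rhs(x + h / 2, state + h / 2 * k1)
        k3 = rhs(x + h / 2, state + h / 2 * k2)
        k4 = rhs(x + h, state + h * k3)
        state = state + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        x += h
    return state

def shoot(slope_guess):
    """Residual at the far boundary for a guessed initial slope y'(0)."""
    y_end, _ = rk4_integrate(rhs, 0.0, 1.0, [0.0, slope_guess])
    return y_end - 1.0  # target boundary value y(1) = 1

# Secant iteration on the unknown initial slope (the 'shooting' step).
s0, s1 = 0.5, 2.0
for _ in range(20):
    r0, r1 = shoot(s0), shoot(s1)
    s0, s1 = s1, s1 - r1 * (s1 - s0) / (r1 - r0)
    if abs(shoot(s1)) < 1e-10:
        break

print(f"converged initial slope y'(0) = {s1:.6f}")
print(f"exact value 1/sin(1)          = {1 / np.sin(1):.6f}")
```

For this linear test problem the secant update converges in one step; for the coupled momentum, energy and concentration equations of the study, the same loop is simply run over several unknown initial slopes at once, or handed to a collocation solver such as bvp4c.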
According to Yapici et al.21, a comparative analysis of ethylene glycol conveying SiO2 indicates that the viscosity of a nanofluid can be greatly influenced by particle size and by the nature of energy transfer between the fluid's layers and the surface of the particles22. There also exist significant changes in the melting point of nanoparticles due to changes in the radius. This conclusion was based on the observed dependence of the melting temperature on particle size, with a significant rate of decrease reported between spherical and other nanoparticles; see Antoniammal and Arivuoli23. Enhancement in the transfer of heat across fluid flow has made experts in thermal engineering embrace the efficiency of nanofluids. The earlier mentioned advancement is based on the nature of the base fluid and the nanoparticles. Nanoparticle concentration and temperature effects on the ratio of mass to density and on viscosity are some of the physical properties. In addition, thermal conductivity and specific heat capacity at different levels of nanoparticle concentration, nanoparticle size, and temperature are some of the thermal properties. The concentration of nanoparticles, pressure drop, friction factor, and nanoparticle radius are also some of the outlined characteristics of nanofluids, as pointed out by Mohamoud Jama. The analysis of Namburu et al.24 indicates that increasing the diameter of silicon dioxide nanoparticles in ethylene glycol and water corresponds to a decrease in the nanofluid's viscosity. It is worth remarking that the observed decrease in the fluid's viscosity is more significant when the nanofluid is cold (negative temperatures). At a larger temperature, the decrease in viscosity with particle size disappears. In the case of a nanofluid parallel to a moving stretchable sheet, the platelet shape of molybdenum disulfide (MoS2) nanoparticles was found by Hamid et al.25 to produce a unique heat transfer. In another study, Sheikholeslami et al.26 illustrated the movement of multiple-wall carbon nanotubes and iron(III) oxide nanoparticles in a typical water-based nanofluid through a porous medium when the Lorentz force is predominant. The dynamics of water conveying alumina and copper nanoparticles over a split lid-driven square cavity was examined by Khan et al.27, and it was shown that a higher Nusselt number, proportional to the heat transfer rate, is attained at the point where both lids meet. In another related report by Khan et al.28, carbon nanotubes and heating of the wall are two factors capable of boosting the local Nusselt number in the case of water-based carbon nanotubes over a right-angle trapezoidal cavity. Meanwhile, the local Nusselt numbers are an increasing property of the solid volume fraction of carbon nanoparticles; Hamid et al.29. 
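As a concrete illustration of how the bulk nanofluid properties discussed above are typically assembled before any flow analysis, the sketch below evaluates a Cu–water mixture with the classical Maxwell expression for effective thermal conductivity and the Brinkman relation for effective viscosity. Both closure models and the tabulated property values are standard textbook choices used here for illustration only; they are not the specific radius-dependent correlations adopted in the study summarised later.

```python
# Effective properties of a Cu-water nanofluid from standard mixture rules.
# Maxwell (conductivity) and Brinkman (viscosity) closures are illustrative
# assumptions; the study itself uses radius-dependent correlations that are
# not reproduced in this sketch.

WATER = {"rho": 997.1, "cp": 4179.0, "k": 0.613, "mu": 8.9e-4}   # base fluid
COPPER = {"rho": 8933.0, "cp": 385.0, "k": 401.0}                # nanoparticles

def effective_properties(phi, bf=WATER, np_=COPPER):
    """Return effective density, heat capacity, conductivity and viscosity
    for a nanoparticle volume fraction phi (0 <= phi < 1)."""
    rho = (1 - phi) * bf["rho"] + phi * np_["rho"]
    rho_cp = (1 - phi) * bf["rho"] * bf["cp"] + phi * np_["rho"] * np_["cp"]
    # Maxwell (Maxwell-Garnett) effective thermal conductivity
    kf, ks = bf["k"], np_["k"]
    k = kf * (ks + 2 * kf - 2 * phi * (kf - ks)) / (ks + 2 * kf + phi * (kf - ks))
    # Brinkman effective dynamic viscosity
    mu = bf["mu"] / (1 - phi) ** 2.5
    return {"rho": rho, "cp": rho_cp / rho, "k": k, "mu": mu}

for phi in (0.01, 0.02, 0.05):
    p = effective_properties(phi)
    print(f"phi={phi:.2f}: k={p['k']:.3f} W/m.K, mu={p['mu']:.2e} Pa.s")
```

Mixture rules of this kind feed directly into the dimensionless groups of the boundary-layer equations, which is why small changes in particle loading or size can shift the predicted velocity and temperature profiles appreciably.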
The analysis of seven different hybrid nanofluids by Nehad et al.30 shows that an optimal Nusselt number is achievable when suction and stretching are significantly large and less dense nanoparticles, such as silicon dioxide and multi-wall CNTs, are used. Sequel to the aforementioned reviews of related literature, in the presence of Joule heating and space-dependent heating, it is noteworthy to examine the significance of increasing nanoparticle radius and an inclined magnetic field on the dynamics of chemically reactive water conveying copper nanoparticles through a porous medium. The outcome of such a study would be very helpful to experts dealing with the performance of copper selenide and the effectiveness of chemical catalytic reactors. This study was designed to provide answers to the following related research questions: (1) when energy flux due to the concentration gradient and mass flux due to the temperature gradient are negligible, what is the significance of increasing the radius of nanoparticles on the local skin friction coefficient, heat transfer rate, and mass transfer rate? and (2) at various levels of energy flux and mass flux due to gradients in concentration and temperature, respectively, how does an increasing radius of copper nanoparticles influence the transport phenomena of Cu-water nanofluids? When energy flux due to the concentration gradient and mass flux due to the temperature gradient are significant, the governing equations suitable for investigating the dynamics of water conveying copper nanoparticles over a horizontal surface subject to an inclined magnetic field were formulated following earlier studies [governing equations not reproduced in this extract]. The model of Refs.31,32 (Gosukonda et al.) for the ratio of the dynamic viscosity of the nanofluid to the dynamic viscosity of the base fluid, expressed in terms of the inter-particle spacing h, was adopted. The effective nanofluid properties follow Ref.33 on the thermal conductivity of nanofluids, and the model proposed by Maxwell34 was adopted to incorporate the enhancement in the thermal conductivity of the Cu-water nanofluid. Suitable similarity variables were then introduced; the magnetic field parameter M, porosity parameter P, and heat source/sink parameter S are defined as in Refs.35–39 (Saidi & Karimi36, Khoshvaght-Aliabadi & Hormozi37, Wan et al.38, and Bachok et al.39), and the dimensionless local skin friction, heat transfer rate, and mass transfer rate are defined accordingly. A shooting approach40 was used to obtain the corresponding system of first-order initial value problems for the governing equations. The observed trend is consistent with Ref.24, in which a larger particle diameter corresponds to a decrease in the nanofluid's viscosity. Not only that, the outcome of an examination of (i) an ethylene glycol and alumina nanoparticle mixture and (ii) a water and CuO nanoparticle mixture by Pastoriza-Gallego et al.49 shows that higher viscosity is bound to occur as particle size diminishes. It is worth deducing from the corresponding figures that: at all levels of energy flux due to the concentration gradient, the reduction in the viscosity of the water-based nanofluid due to a higher radius of copper nanoparticles causes an enhancement of the velocity; a significant decrease in the distribution of temperature across the domain due to an increasing radius of copper nanoparticles is achievable when the energy flux due to the concentration gradient is sufficiently large in magnitude; and, when the mass flux due to the temperature gradient is highly negligible, the optimal temperature is also observable when the energy flux due to the concentration gradient is sufficiently large. 
Reverse is the case when mass flux due to temperature gradient is sufficiently large as optimal temperature is ascertained at all levels of energy flux due to concentration gradient.reduction in the mass transfer rate the emergence of both energy flux and mass flux due to gradients in concentration and temperature affect the distribution of temperature and concentration at the free stream.Attempt had been made to examine the significance of increasing radius of nanoparticles, energy flux due to concentration gradient, and mass flux due to temperature gradient in the dynamics of chemically reactive fluid subject to suction and inclined magnetic strength. Based on the analysis, it is worth concluding that"} +{"text": "Policy dialogue for health policies has started to gain importance in recent years, especially for complex issues such as health financing. Moroccan health financing has faced several challenges during the last years. This study aims to document the Moroccan experience in developing a consolidated health financing strategy according to the policy dialogue approach. It especially considers the importance of conceptualising this process in the Moroccan context.We documented the process of developing a health financing strategy in Morocco. It concerned four steps, as follows: (1) summarising health financing evidence in preparation of the policy dialogue; (2) organising the health policy dialogue process with 250 participants ; (3) a technical workshop to formulate the strategy actions; and (4) an ultimate workshop for validation with decision-makers. The process lasted 1 year from March 2019 to February 2020. We have reviewed all documents related to the four steps of the process through our active participation in the policy debate and the documentation of two technical workshops to produce the strategy document.The policy dialogue approach showed its usefulness in creating convergence among all health actors to define a national shared vision on health financing in Morocco. There was a high political commitment in the process and all actors officially adopted recommendations on health financing actions. A strategy document produced within a collaborative approach was the final output. This experience also marked a shift from previous top-down approaches in designing health policies for more participation and inclusion. The evidence synthesis played a crucial role in facilitating the debate. The collaborative approach seems to work in favouring national consensus on practical health financing actions.The policy dialogue process adopted for health financing in Morocco helped to create collective ownership of health financing actions. Despite the positive results in terms of national mobilisation around the health financing vision in Morocco, there is a need to institutionalise the policy dialogue with a more decentralised approach to consider subnational specificities. Policy dialogue approaches legitimise the adoption of a health financing strategy.Participation in designing health financing strategies is important for a shared vision on health financing.Policy dialogue for health financing strategies needs to be institutionalised and cover sub-national levels.Universal Health Coverage (UHC) is an opportunity to strengthen health systems and promote equity . CountriBy definition, UHC is the capacity to provide all people with access to health services of sufficient quality, while also ensuring that the use of these services does not expose the user to financial hardship . 
Health financing is one of the six building blocks of the health system. Policy dialogues have been described as "… part and parcel of policy and decision-making processes, where they are intended to contribute to developing or implementing policy change following a round of evidence-based discussions/workshops/consultations on a particular subject". For complex issues such as health financing, the institutionalisation of dialogue is crucial for the continuity and maintenance of institutional intelligence. The path to universality in Morocco was designed around three components, as follows: a compulsory health insurance scheme for formal employees of both private and public sectors (it covers 34% of the population); the scheme for the poor and vulnerable (Régime d'Assistance Médicale or RAMED), which has contributed to improving the coverage rate to reach 62%, according to the latest figures; and the scheme for the self-employed, whose law was passed by the parliament on June 23, 2017. The latter is the most challenging in terms of implementation as it concerns the informal sector and highly heterogeneous categories of the population with less capacity to contribute. Different studies have highlighted the challenges of health financing in Morocco. Whilst the financing of the compulsory health insurance scheme presents fewer challenges, the main difficulties include the need to extend the coverage, the need to link the benefits package definition to health financing capacity, and the fragmentation of pooling. To deal with these challenges, Morocco decided to formulate a consolidated health financing strategy using a national dialogue approach. This paper documents the conceptualisation and application of the policy dialogue on health financing in Morocco, an exercise that is also relevant to the Global Action Plan initiative for Healthy Lives and Well-being for All. The main data sources for this documentation are (1) mission reports and presentations of the first scoping mission on health financing organised by WHO, (2) the detailed reports of each workshop prepared by the teams of rapporteurs during the national conference as well as the reports of both follow-up workshops, (3) the presentations that were made by national and international experts and by decision-makers from other countries (ministers and experts), (4) the final report of the policy dialogue recommendations, (5) notes from our observation as participants and organisers, and (6) the health financing documentation review prepared as the background of the policy dialogue. All reports and documents were centralised and categorised according to each step of the process. The analysis was performed in such a way as to extract details about the whole process of the policy dialogue, from evidence gathering to the finalisation of the strategy draft. Based on the detailed reports of sessions and workshops, data were charted using a grid that follows the different phases of the policy dialogue. We then used a coding system to chart the data in a way that reconstructs each step of the process. This qualitative coding allowed structuring of the information and its analysis according to the specificity of each phase. As most of the policy dialogue steps were documented, the quality of findings was checked by sharing and cross-checking the results among the researchers, who themselves contributed and participated in the whole process. When missing parts were identified, we went back to the source documents and completed the analysis. A stepwise approach was used to structure the process of developing national recommendations for a health financing strategy in Morocco. 
The stepwise approach allowed understanding of the current status of health financing in Morocco and the consensus of national stakeholders on future visions and directions. This stepwise approach is described in Fig.\u00a0Participation in the different steps of the policy dialogue was organised based on criteria related to the involvement in health issues, either directly or indirectly, through actions on health determinants. An inclusive list was established within the organisation committee and from other non-governmental departments).Prime minister, other ministers, former ministers and government representativesParliament representatives of the social commissionNational observatories, Ministry of PlanningTechnical and financial partners, international experts and foreign ministers of health National professional councils, national experts from research centres and national institutes and representatives of medical schoolsRepresentatives of the private sector associationsTrade union representativesCivil society organisations with an active role in healthRepresentatives of the health insurance agency and health insurance fundsRepresentatives of hospitals, including teaching hospitalsProfessional associationsMedia and analysts in the social fieldsThe policy dialogue involved the following participants:The first follow-up workshop involved 22 experts from national departments concerned with health financing to translate the outputs of the policy dialogue into feasible actions. The second follow-up workshop involved decision-makers in the field of health financing , including technical and financial partners as a commitment to support the implementation phase of the strategy.The authors of this paper participated and contributed to the whole process . All preparatory meeting minutes, documents and workshop reports were analysed to redesign the framework of the policy dialogue for further learning. Each step of the health financing debate in Morocco was structured and organised to ensure the participation and inclusion of all actors.During the preparatory phase of the policy dialogue, technical staff from the MoH and WHO reviewed and summarised essential documents on health financing to facilitate the debate. These were research studies and consultant reports produced in Morocco between 2013 and 2018. The objective of this review was to provide access to available evidence on health financing in Morocco to all stakeholders involved in the policy dialogue process.Before the conference, a series of discussions with the main actors in the Moroccan health system allowed the mapping of the challenges around which the national debate would be organised. During these discussions, additional documents were also identified for the review. The preparatory phase was also an opportunity to identify the main actors to be invited to the national forum.The policy dialogue benefited from the collaboration between WHO, the European Union delegation in Morocco, the African Development Bank and the World Bank. 
Each of these organisations had, in recent years, produced evidence and provided technical assistance on different components of health financing in Morocco The collaboration allowed sharing of the evidence and technical expertise and the mobilising of experts and representatives from different countries to share country experiences and best practices during the dialogue process.The MoH steered the entire dialogue process, including the collaboration of partners and the selection, synthesis and dissemination of available information.The national conference on health financing was held on June 18\u201319, 2019, in Rabat; around 250 participants contributed to this event. The participation covered all actors concerned by the health system (see above). Different ex-ministers from other countries and international experts in the field of health financing shared their experiences. Two plenary sessions focused on sharing national and international expertise on health financing and health financing governance. Three parallel workshops were held to deepen the dialogue on health financing functions . The parallel workshops were designed to give more time for debate and participation. The organisation committee designated a general rapporteur who was responsible for consolidating the overall proceedings of the conference as well as rapporteur teams and moderators for each session. Templates were used for rapporteurs to document each step of the national event, including guiding questions for facilitators. Box 1 gives an example of the guiding questions for the resource mobilisation workshop.As a follow-up to the national conference, two workshops were organised to translate the health financing recommendations adopted during the debate into vision, strategic directions and feasible actions. Members of the research team participated in these workshops and documented the discussions. These two workshops were held by WHO to identify the main challenges and explore all current and future windows of opportunity to define concrete actions for implementation. The first workshop was technical, with the participation of professional staff and managers representing all departments involved in health financing. The second workshop was aimed at decision-makers\u2019 validation of the health financing actions and to discuss how partners could support the implementation phase.Box 1 Guiding questions for the resource mobilisation workshopWhat is the situation in Morocco in terms of resource mobilisation for health? How does Morocco compare to other countries?\u2022 What are the existing constraints for an optimised resource mobilisation process for health?\u2022 What opportunities exist in terms of fiscal space for mobilising resources for health?\u2022 How can civil society organisations and parliamentarians contribute to the debate and enrich the discussion towards a national consensus on a vision on resource mobilisation for health?\u2022 What are the existing possibilities for additional mobilisation of resources for health, in particular, innovative financing, partnership, etc.?\u2022 What recommendations can each stakeholder make in this regard? 
What is the point of view of experts in relation to the proposals for the case of Morocco?\u2022 What concrete actions can be taken for the short-, medium- and long-term vision in terms of mobilising resources for health?The results of this paper are structured around key components of the policy dialogue process, namely a summary of existing evidence on the current situation in Morocco, political commitment, participation, and exchange of experiences, the health financing functions and their governance. The following sections present the findings following each step of the policy dialogue.A summary of existing evidence on the current situation in Morocco was shared with participants. This summary was organised into three groups. The first synthesis provided an overview of the Moroccan health system using the latest data and health indicators. The health system overview also presented the organisational aspects and main challenges that Morocco is experiencing . The second synthesis was about the health financing analysis and indicators as well as its main challenges based on national health accounts and recent studies produced at the national level. In this synthesis, the financing of UHC schemes was examined by stressing areas of success but also pitfalls. Recent studies on the fragmentation and payment mechanisms were summarised, emphasising the difficulties that the purchasing function is facing. The third synthesis was about studies on the benefits package and access to health services, especially for the vulnerable population and former strategies and reforms. These syntheses covered financial access barriers and the quality of care and its link to health financing. International experiences were summarised and covered health financing strategies in general and, more specifically, areas of strategic purchasing in different countries . These international experiences contributed to enrich the debate by stressing the implementation challenges for each strategy. These syntheses were presented at the beginning of each workshop to give participants insights on what worked well in other settings, including a summary of technical recommendations for health financing.The health financing dialogue organised in June 2019 benefited from the support of His Majesty, the King, through the Royal patronage. The Head of Government (the Prime Minister) as well as other ministers from the government supported the national debate through their active participation. The importance of the dialogue was reflected in massive national media attention . The debFormer ministers of health of other countries that achieved excellent results in terms of health financing came to present their successes during the national conference but also stressed the challenges faced in implementation and equity . Other examples included France and Ghana and emphasised strategic aspects in the area of health financing. International examples focused on the importance of promoting primary healthcare through incentives from health financing and the convergences of UHC schemes to ensure more equity in the health system. The presentation of countries\u2019 path to UHC gave different perspectives for Moroccan participants to think of the diversity of approaches of a health financing strategy. The discussions also covered the roles, commitment of actors, central place of the benefits package and how they link to health financing. 
The financial protection measures adopted in each country were essential elements in the sharing of these experiences. International experts and resource persons also addressed the role of the private sector in the path to UHC and strategic purchasing as a lever to improve overall health system performance through health financing payment mechanisms. Former ministers of health discussed the complexity of policy design and implementation and shared their specific approaches with Moroccan policy-makers. The Moroccan health system and its main challenges and achievements were presented during this session to give a background for international experts. The resource mobilisation workshop tackled the main challenges in Morocco and debated the following elements: (1) the situation regarding the resource mobilisation function for health in Morocco; (2) the mobilisation of extra-budgetary financial resources for the health sector; (3) the main successful mechanisms to expand fiscal space for health; (4) taxation as an innovative source of financing in the health sector; and (5) practical recommendations to improve the mobilisation of resources for health. Participants acknowledged the effort of the state in terms of mobilising resources for health. The dialogue around the pooling of resources in Morocco focused on reducing fragmentation between existing schemes. National experts presented the Moroccan experience and discussed its details with participants. Multiple insurance schemes cover portions of the population in Morocco, including private-sector employees, civil servants, students, self-employed workers, internal plans for some large companies, and the RAMED subsidies. The workshop related to the Purchasing function raised the issue of the benefits package and the difficulties in quantifying its services. In Morocco, the benefits package is defined in broad terms and mainly through a description of activities. It was highly recommended to consider the needs of vulnerable groups in the design of the benefits package but also through contracting mechanisms. The objective is to ensure that no one is left behind and that the Purchasing function enables the implementation of incentives that support operationalising this value in the health system. Box 4 presents the strategic recommendations for the Purchasing function. Governance entities such as the National Agency of Health Insurance (Agence Nationale de l'Assurance Maladie) and the inter-ministerial committee were identified as structures that should be strengthened. The role of evidence for improving governance was also discussed, leading to a recommendation to improve the UHC information system. The autonomy of regions and empowering them with regional policy dialogues was mentioned as a determinant of better governance. Another element concerned the creation of spaces and channels for dialogue and participation of stakeholders, including the population, to develop recommendations for decision-makers. These spaces of discussion should be inclusive of all actors involved in health. To foster a long-term vision on health financing, participants and experts recommended the development of a national charter on health financing. Box 5 presents the strategic recommendations for the governance function. The plenary session devoted to health financing governance started by confronting the Moroccan experience with other countries' and with the analysis of experts from WHO and the European Union. 
Discussions revolved around the importance of working on the unification of all health insurance schemes, mainly through the same reimbursement rates and benefits package, as well as the creation of an independent fund for the management of RAMED resources. Creating regional committees for decisions on health financing was highlighted as a requirement to improve the governance function. The separation of functions was stressed as a strategic requirement for UHC in general and particularly for RAMED to be sustained in the future. An essential part of the debate focused on governance entities and their roles, such as the National Agency of Health Insurance. In each workshop, the moderator launched the general questions and opened the debate. Moderators were given instructions to allocate more time for discussions and allow participants to express their views. However, the presence of health financing experts provided arbitration that reduced political divergence, especially when feasibility was analysed based on real international experiences. Despite that, disagreements concerning specific points were avoided by adjusting the recommendation, for instance through adopting a progressive merging of funds in the long term rather than an immediate unification. Another example is related to the fact that some actors suggested increasing reimbursement rates for health insurance schemes. The political sensitivity of this recommendation was mitigated by suggesting a revision of rates based on health technology assessment analysis so the increase could be objective. In this way, purchasing becomes strategic. The role of moderators was crucial to move from suggestions to adopted recommendations by confronting proposals with the expert point of view and with invited resource persons from other countries. Facilitators were given the guidance to try to give the floor to all categories of participants and ensure the inclusion of all ideas. To continue the coproduction process started before, within and after the national conference, the MoH, with the support of WHO, organised two workshops to identify concrete actions. The starting point was the conference recommendations. The participation of all technical departments in the coproduction process gave strength to the proposed measures. The objectives of the follow-up workshops were to sustain stakeholder involvement and increase the ownership of proposed actions, beyond the recommendations of the conference. The draft strategy was developed along the following axes: (1) actions related to resources mobilisation, (2) the pooling of resources, (3) improving the purchasing function, (4) improving equity and ensuring the financial protection of vulnerable groups, (5) improving health financing governance, and (6) multisectoral actions related to health financing such as benefits package definition and health in all policies. 
Figure\u00a0 the poolBox 2 Main strategic recommendations for resources mobilisation in Morocco\u2022 Adopt and implement new financing mechanisms \u2022 Mobilise and prioritise resources for primary healthcare\u2022 Accelerate the implementation of health insurance for self-employed workers with stable funding\u2022 Increase fiscal space for health by strengthening tax revenue dedicated to health and strengthening the budget of the Ministry of HealthBox 3 Strategic recommendations for the Pooling function\u2022 Gradually work for the unification of compulsory health insurance schemes\u2022 Ensure equity in funding by adopting the same contribution rate for all insureds and the same benefit package for a harmonisation of schemes and their convergence and fight against the fragmentation\u2022 Encourage the pooling of risks, promote prepayment in order to reduce direct payments\u2022 Establish a system of solidarity between schemesBox 4 Strategic recommendations for the Purchasing function\u2022 Redefine a benefit package to be made available to the entire population, according to the needs of the population and evaluate feasibility, including funding, to deliver it to the whole population\u2022 Define a body in charge of updating the benefits package and its mechanism based on priority-setting and health technology assessment tools\u2022 Strengthen partnerships for processes of strategic purchasing\u2022 Develop hospital autonomy by strengthening their management according to performance and self-financing\u2022 Implement separation of financing and service provision functions\u2022 Strengthen and set up an information system for monitoring the performance of healthcare structures \u2022 Develop strategic purchasing methods based on performance contracting\u2022 Orient the purchasing function to develop family medicine\u2022 Develop the partnership to mobilise resources for health and encourage private investment in health\u2022 Encourage the private sector to invest according to the need for health\u2022 Strengthen the targeting system to allow financial protection of target and vulnerable groups\u2022 Ensure that the service package and funding integrate the needs of vulnerable groupsBox 5 Strategic recommendations for the Governance functionAgence Nationale de l\u2019Assurance Maladie) as the UHC regulatory authority\u2022 Strengthen the governance missions of the National Agency of Health Insurance \u2022 Establish a national health charter\u2022 Create governance bodies: inter-ministerial committee and regional health committees and institutional spaces for dialogue through the high Health Council\u2022 Create regional and national dialogue forumsThe objective of this article was to stress the central role of policy dialogue for developing a health financing strategy in Morocco and share this experience as a learning opportunity for other countries. In the past years, the development process of health strategies in Morocco adopted mostly a top-down approach with less involvement of all concerned actors. Former strategies on health financing were not implemented because of the lack of participation in their process and design and, thus, the lack of political legitimacy and technical ownership. In 2015, a strategy on health financing was developed by experts with suggestions of actions. This strategy could not find its way to becoming a formulated policy. 
The lack of involvement of parliamentarians, civil society, and the Ministry of Finance and the lack of high-level endorsement weakened the strategy adoption. Since 2011, the Arab spring social movement has introduced a culture of participation as a way of legitimising policy decisions. The importance of dialogue in policy design appears to show its added value in other contexts, including LMICs. A case study from Liberia showed the role of policy dialogue in mobilising all stakeholders to support health policy design. The literature on the benefits package definition shows the importance of involving stakeholders through debate and dialogue to set national priorities in terms of designing its content and the process to perform further adaptation. The draft strategy proposed institutionalised dialogues at national but also at regional levels. Existing examples of policy dialogue at the subnational level have proven effective, for instance in France (conférences régionales de santé), and other national experiences point in the same direction. This policy dialogue was the first experience in the area of health financing and is far from being perfect. It provides a first attempt to document this complex process and prepare the field for a more in-depth qualitative analysis of policy dialogues. The management of parallel sessions was a challenge because of the difficulty of simultaneously ensuring freedom for participants to choose the sessions while keeping a representative level of attendance in each session. Fortunately, the number of participants was more than expected (250 instead of 200), which helped guarantee a good level of participation in each workshop. A prior pre-registration in workshops that respects an acceptable level of distribution and representativeness could perhaps solve the problem in the future. The complexity of the process did not allow us to perform individual interviews to analyse the contextual determinants of the policy debate. It might have been interesting to explore the psychological influence of this experience on health actors and to what extent they are willing to support the implementation of the strategy in the long term. With this article, we tried to provide a structured analysis of a fruitful policy dialogue on health financing. The paper presents an approach to crafting health strategies with more legitimacy through participation. Although implemented in other countries, this approach is fairly new in Morocco. We hope that further qualitative research will explore policy commitments for health financing and draw a picture of their determinants. The documentation of policy processes, especially those involving a participatory approach, will enhance the overall learning for UHC in Morocco and internationally."} +{"text": "Agricultural waste mapping is an indispensable tool for the development and adoption of sustainable waste management practices in the agricultural sector. Current practices of agricultural plastic waste (APW) management in countries with large agricultural areas, and thus high generation volumes of APW, include uncontrolled disposal in fields or near water sources, or uncontrolled burning of the waste. These practices lead to irreversible deterioration of the natural environment through land and soil contamination, contamination of freshwater resources, and air pollution, and also pose public health issues. 
Given these negative effects on the environment, spatial prediction of APW generation becomes significant in sustainable agriculture. This dataset consists of the coordinates of the agricultural plots identified in the Republic of Cyprus and the area in square meters covered by agricultural greenhouses. The dataset has been used to perform APW generation mapping and predict the national generation quantities of waste low-density polyethylene (LDPE). The collection of the data is included in sixteen tables separated per geographical area-cluster. The agricultural plastic waste (APW) generation mapping was conducted with the use of up-to-date statistics from the Cyprus Agricultural Payments Organization (CAPO), a geographic information system (GIS) and satellite imagery. The agricultural structures were distinguished into the following categories: (1) greenhouses of more than 3 m height; (2) greenhouses of average height 2–3 m; (3) tunnels of more than 3 m height; and (4) tunnels of less than 2 m height. The mapping of the greenhouse APW in Cyprus was conducted using primary data and information obtained from the records of CAPO on the latest statistics of recent years. In particular, the records of CAPO specify which agricultural plots enclose greenhouses and tunnels; the most recent records are for the year 2016. Accordingly, the number of plots enclosing greenhouses, their specific geographic locations and size are information that was deduced from the records. Given the exact coordinates of the agricultural plots, the plots have been located with the use of ArcGIS. The validation of the dataset and the characterization of the robustness of the data have been implemented through a comparative assessment with previous findings of research initiatives on quantities of agricultural plastics. Among the tasks under the research project 'Design of a common agrochemical plastic packaging waste management scheme to protect natural resources in synergy with agricultural plastic waste valorisation – AgroChePack', the area of greenhouses and plastic generated in the districts of Cyprus have been determined. The findings of AgroChePack indicate a total mass of 985,656 tonnes of generated plastics, a value which, in comparison to the total mass of APW calculated in this work (919,707 tonnes), lies within the acceptable limits of ±10% (the relative difference is approximately 6.7%). The geographic locations of the greenhouses in Cyprus, distinguished into the defined clusters and investigated in this work, are illustrated in the corresponding figure. No ethical issues are associated with this work. The authors declare that they have no known competing financial interests or personal relationships which have, or could be perceived to have, influenced the work reported in this article."} +{"text": "The GSA Public Policy Advisor will facilitate a discussion about the 2020 reauthorization of the Older Americans Act with key stakeholders from Washington, DC. Also, the presentation will include perspective on GSA's active role in policy development and the legislative process."} +{"text": "Understanding the mechanisms that regulate atherosclerotic plaque formation and evolution is a crucial step for developing treatment strategies that will prevent plaque progression and reduce cardiovascular events. Advances in signal processing and the miniaturization of medical devices have enabled the design of multimodality intravascular imaging catheters that allow complete and detailed assessment of plaque morphology and biology. 
However, a significant limitation of these novel imaging catheters is that they provide two-dimensional (2D) visualization of the lumen and vessel wall and thus they cannot portray vessel geometry and 3D lesion architecture. To address this limitation, computer-based methodologies and user-friendly software have been developed. These are able to off-line process and fuse intravascular imaging data with X-ray or computed tomography coronary angiography (CTCA) to reconstruct coronary artery anatomy. The aim of this review article is to summarize the evolution in the field of coronary artery modeling; we thus present the first methodologies that were developed to model vessel geometry, highlight the modifications introduced in revised methods to overcome the limitations of the first approaches, and discuss the challenges that need to be addressed so these techniques can have broad application in clinical practice and research. Invasive coronary angiography is the reference standard for assessing the extent and severity of coronary artery disease (CAD), which is a leading cause of death in the developed and developing world. Over the last few decades intravascular imaging catheters have been designed which can be advanced into the coronary arteries to obtain high-resolution cross-sectional images of the vessels. This enables comprehensive visualization of the lumen and plaque pathology. Intravascular ultrasound (IVUS) and optical coherence tomography (OCT) were the first invasive imaging techniques that were used to study plaque pathobiology and provided unique insights about the mechanisms that regulate plaque evolution. Validation studies using histology as the gold standard have demonstrated the advantages but also the limitations of IVUS and OCT in assessing plaque characteristics and led research toward the development of novel invasive imaging modalities that were able to overcome the drawbacks of the first techniques. Roelandt et al. were the first who attempted to reconstruct coronary artery anatomy from IVUS imaging data. Their approach assembled the acquired cross-sections into a volumetric representation without accounting for the three-dimensional course of the vessel. A similar approach was also introduced to reconstruct coronary artery anatomy from OCT data, with comparable reconstructions obtained. Despite the undoubted role of these reconstruction methodologies in the clinical arena, they have significant limitations as they are unable to portray coronary artery geometry, evaluate the distribution of the plaque onto the vessel, and accurately quantify plaque volume, especially in the case of increased curvature, where neglecting vessel curvature can lead to an underestimation of the plaque volume by 5%. In 1992 Klein et al. for the first time suggested fusion of IVUS and X-ray angiography for a more reliable assessment of vessel architecture. In vivo validation of the developed methodology using X-ray angiography as the gold standard demonstrated that it provides accurate coronary modeling; however, the time-consuming protocol and increased radiation required for coronary reconstruction did not allow this method to have applications in the clinical arena. Fusion of OCT and X-ray angiography was proposed by Athanasiou et al. and validated; over the following years this approach was used in several studies assessing the ESS distribution. 
Although this approach appears superior to others, previously reported in the literature, it does have limitations as it makes two assumptions; more specifically: (1) it implements an interpolation technique to place on the lumen centreline the OCT frames between those portraying side branches; this assumption cannot correct the error caused by the longitudinal motion of the OCT catheter that is increased at the beginning of the systole, and (2) it uses the origin of the side branches, which cannot be always accurately assessed in two angiographic projections, to estimate the rotational orientation of the OCT contours.The above OCT-based reconstruction methodology may have significant applications in the research arena but it also has two significant limitations: (1) it is not able to correct the geometrical error caused by the longitudinal movement of the OCT catheter within the vessel during the cardiac cycle , 56 and Moreover, this approach has not been thoroughly validated and therefore it is unclear what is the effect of the above limitations on vessel reconstruction and ESS computation.The methodologies developed to reconstruct the coronary artery anatomy from X-ray angiography and intravascular imaging data rely on the extraction of the lumen centerline or the catheter path from two angiographic projections. The angle difference between the two projections as well as the presence of vessel foreshortening in these projections can affect the efficacy of these approaches in assessing vessel geometry. Moreover, as it was stated above, the origin of the side branches is likely to not be well visible in the projections used for coronary artery reconstruction and this can affect the accurate estimation of the absolute orientation of the intravascular images on the 3D model. These limitations can be overcome by the use of CTCA which provide 3D images of the coronary artery tree.In 2010 van der Giessen et al. were the first to propose the fusion of CTCA and IVUS imaging to reconstruct the coronary arteries . This apThis approach has been used to evaluate the role of ESS on vessel wall healing following bioresorbable scaffold implantation and investigate the role of multidirectional ESS on the development of advanced atherosclerotic plaques in pig models , 60. TheTo enhance the research applicability of the 3D reconstruction methodologies mentioned above, user\u2014friendly systems have been developed that operate in a user-friendly environment and expedite coronary reconstruction . The firANGIOCARE was the IVUSAngio tool is the only freely available software for the fusion of IVUS and angiographic imaging data . The sofUniversity of Leiden and Medis Medical Imaging Systems BV have recently developed a novel software for coronary reconstruction . The sofFusion of intravascular imaging data and X-ray angiography or CTCA imaging has enabled accurate reconstruction of coronary artery geometry and the generation of 3D models that can be processed with CFD techniques to evaluate vessel flow patterns and examine the effect of the hemodynamic forces on plaque evolution. 
These models have been enriched our understanding about the mechanisms that regulate atherosclerotic disease progression enabling more accurate prediction of lesions that are likely to progress and cause events , 37, 38.The PREDICTION study which was the first prospective clinical study that used at scale fusion of intravascular imaging and X-ray angiography to assess the ESS distribution and highlighted the prognostic value of ESS in predicting plaque progression, thereby creating hope that an invasive assessment of plaque morphology combined with CFD analysis could enable accurate detection of vulnerable plaques . The finThe developed user-friendly software that included established data fusion algorithms are expected to facilitate research in the field and allow more complex simulations and complete assessment of vessel physiology. Cumulative data have highlighted the role of PSS on plaque destabilization and its value in predicting vulnerable lesions \u201372. The Recent reports have highlighted the need to refine the methodologies for the reconstruction of vessel architecture especially in stented segments \u201377. AdvaFusion of intravascular imaging and angiographic or CTCA imaging data allows generation of 3D models that accurately portray the vessel geometry and enable evaluation of plaque composition. These approaches have been extensively used to examine the implications of flow patterns on atherosclerotic disease progression and stent/scaffold thrombosis. Further advances in intravascular imaging, catheter design and the development of methodologies that will allow estimation of the distribution of different plaque components on the 3D plaque, and accurate reconstruction of stent architecture are expected to provide a complete and detailed evaluation of luminal geometry and coronary artery pathology. They are also expected to permit more precise quantification of the local hemodynamic forces, better prediction of plaque evolution, and optimization of focal therapies developed for the treatment of culprit or vulnerable lesions.All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. The reviewer, AK, declared a past co-authorship with one of the authors, CB, to the handling editor."} +{"text": "An accurate assessment of the adequacy of prenatal care utilization is critical to inform the relationship between prenatal care and pregnancy outcomes. This systematic review critically appraises the evidence on measurement properties of prenatal care utilization indices and provides recommendations about which index is the most useful for this purpose.MEDLINE, EMBASE, CINAHL, and Web of Science were systematically searched from database inception to October 2018 using keywords related to indices of prenatal care utilization. No language restrictions were imposed. Studies were included if they evaluated the reliability, validity, or responsiveness of at least one index of adequacy of prenatal care utilization. We used the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) checklist. 
We conducted an evidence synthesis using predefined criteria to appraise the measurement properties of the indices.From 2664 studies initially screened, 13 unique studies evaluated the measurement properties of at least one index of prenatal care utilization. Most of the indices of adequacy of prenatal care currently used in research and clinical practice have been evaluated for at least some form of reliability and/or validity. Evidence about the responsiveness to change of these indices is absent from these evaluations. The Adequacy Perinatal Care Utilization Index (APNCUI) and the Kessner Index are supported by moderate evidence regarding their reliability, predictive and concurrent validity.The scientific literature has not comprehensively reported the measurement properties of commonly used indices of prenatal care utilization, and there is insufficient research to inform the choice of the best index. Lack of strong evidence about which index is the best to measure prenatal care utilization has important implications for tracking health care utilization and for formulating prenatal care recommendations. Routine prenatal care is a series of regular contacts between a health care provider and a pregnant woman at scheduled intervals that occur between the confirmation of pregnancy and the initiation of labour. The primary goal of these encounters is to deliver effective screening, preventive (education), and treatment interventions that seek to improve health outcomes for both the mother and the newborn. Prenatal care also aims to address behavioural risk factors, support women\u2019s medical, social and psychological needs, and coordinate actions for labour and delivery .The American College of Obstetrics and Gynecology (ACOG) recommends visiting every 4\u2009weeks for the first 28\u2009weeks of pregnancy followed by bi-weekly visits up to 36\u2009weeks. After 36\u2009weeks, weekly visits are advised \u20137. EvideA variety of methods have been used in past research to determine adequacy of prenatal care in low-risk pregnancies. Over the last two decades, several scoring systems for prenatal care utilization have been developed, each employing different algorithms: the Kessner Index , the KotThe comparability of the different prenatal care utilization indices for low-risk pregnancies has not been completely explored. To date, no systematic review has incorporated a comprehensive analysis of the methods by which prenatal care utilization indices have been developed, nor appraised their relative value. A systematic evaluation of the measurement properties of these indices, their relative strengths and weaknesses, and the quality of the evidence that support their use is an essential step to inform the selection of these indices for research and clinical practice. To fill these knowledge gaps, we completed a systematic review of the scientific literature to assess and compare the measurement properties of prenatal care utilization indices.CRD42017067110). Comprehensive electronic searches of MEDLINE, EMBASE, CINAHL, and Web of Science were conducted from database inception to October 2018 for studies evaluating the measurement properties of prenatal care utilization indices. An information specialist designed and executed the search strategy using selected subject headings and keywords related to prenatal care utilization indices and measurement properties. 
The MEDLINE search strategy is available in Additional\u00a0File\u00a0The systematic review was conducted and reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement . The rev. There were no restrictions on the study design; however, book chapters, editorials, letters, and in vitro or animal studies were excluded.Indices evaluating prenatal care utilization were defined as quantitative tools that evaluated both the initiation of prenatal care and the frequency at which a pregnant woman attends prenatal care services in low-rThe search strategy generated a list of articles that two reviewers screened independently for relevance. Titles and abstracts that were identified as relevant or those that provided insufficient information were pursued for further assessment. The full text of considered articles was again independently reviewed for inclusion , with disagreements resolved through consensus. The final reason for the exclusion of an article was documented in the PRISMA flow chart Fig.\u00a0.Fig. 1PTwo reviewers [SR and MO or IM and MO] independently evaluated the methodological quality of studies assessing the measurement properties of indices of adequacy of prenatal care utilization using the COnsensus-based Standards for the Selection of health Measurement INstruments (COSMIN) checklist . DisagreThe COSMIN checklist is frequently used in systematic reviews of indices and measurement instruments, and it is currently the only validated and standardized tool available for this purpose . For theor in one study of excellent methodological quality); moderate ; limited ; conflicting ; and unknown .For each index of prenatal care utilization, the overall levels of evidence on each measurement property were synthesized using the data on measurement properties reported in the included studies. If several studies informed the measurement properties of one index, findings were combined based on their number and methodological quality, and the consistency of the results. The level of evidence for the measurement properties of each index was classified according to the following criteria : strong Information on authors, publication year, study design, population characteristics, data sources, and measurement properties evaluated in individual studies were first extracted by one reviewer and verified for accuracy and completeness by a second reviewer [MO]. Discrepancies between data extraction and verification were sorted through consensus.The search strategy identified 2664 citations of which 712 duplicates were removed. Titles and abstracts of the remaining 1952 citations were screened for relevance, yielding to 308 articles judged as potentially relevant for the review. After applying the eligibility criteria to the full text of these and examining redundant publications, 13 unique studies were included in the review whiReliability was evaluated for eight indices: the APNCUI , 25, thePredictive validity was evaluated for the APNCUI , 26, 27,Concurrent validity based on head-to-head comparisons across indices was evaluated for the APNCUI , 22\u201325, This systematic review identified 13 studies that reported on the measurement properties of 12 indices of prenatal care utilization. The APNCUI and the Kessner Index were described the most while others were evaluated in only one or two articles per index, which weakens the level of evidence for the results. 
We used the COSMIN checklist to evaluate their methodological quality and the level of evidence informing their uptake. The scientific literature has not comprehensively reported the measurement properties of commonly used indices of prenatal care utilization, and there is insufficient research to inform the choice of the best index. Most of the indices of prenatal care utilization currently used in research and clinical practice have been evaluated for at least some form of reliability and/or validity. Evidence about the responsiveness to change of these indices is absent from these evaluations. The indices of prenatal care utilization supported by the strongest evidence regarding their measurement properties were the APNCUI and the Kessner Index followed by the PHS/EPPC and the GINDEX. Moderate evidence informs the reliability, predictive and concurrent validity properties of the APNCUI and the Kessner Index. Decisions about their use should be supported on recommendations promoted by local prenatal care clinical practice guidelines (CPG). Both APNCUI and the Kessner Index have similar criteria for optimal timing of initiation of prenatal care and number of prenatal care visits during pregnancy and seem to align with current CPG recommendations made by ACOG. However, they have different category responses of prenatal care adequacy, with the APNCUI having an extra category of \u201cIntensive\u201d care during pregnancy that the Kessner Index does not consider. The discrepancy within the literature prevents a consensus being formed about the strongest index to measure the adequacy of prenatal care.The most important strength of this systematic review is the use of the COSMIN taxonomy to evaluate the measurement properties of the proposed indices of prenatal care utilization based on the methodological qualities of the individual studies and the strength of the body of evidence that informs the use of each index. The use of COSMIN by two independent reviewers provided a consistent approach to assess the measurement properties of all indices.One limitation of this review is that we did not include indirect evidence from studies in which the indices were actually applied either to measure prenatal care utilization as a predictor of pregnancy or birth outcomes, or as an outcome of any other risk factor. One important use of the indices of prenatal care utilization has been to evaluate policy or public interventions seeking to improve the organization and evaluation of prenatal care services. In such situations, the indices can be used to evaluate the changes in levels of prenatal care utilization of such interventions. It is yet to be determined if the utilization of prenatal care services translates into improvements in birth outcomes for the mother and child however, a number of these indices may be useful in examining population utilization levels.Despite lingering uncertainty of the effectiveness of prenatal care and what adequacy entails, prenatal care has been proposed as a vital strategy to reduce the risk of adverse outcomes at delivery/birth . SeveralAdditionally, the indices are typically based on visit recommendations for average or low risk pregnancies and do not establish a recommended visit pattern for high risk women or for women with specific medical conditions. This may result in underestimating the prenatal care needs of high risk women and overestimating adequate utilization of prenatal care in the total population . 
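To make the preceding discussion of visit-based scoring more concrete, the sketch below illustrates how an APNCU-style index combines the month of initiation of care with an observed-to-expected visit ratio to produce adequacy categories, including the extra "Intensive"/"Adequate Plus" category mentioned above. It is a simplified illustration only: the expected-visit schedule and the category cut-offs (initiation by month 4; ratios of 110%, 80% and 50%) are assumptions for this sketch, not the exact algorithm evaluated in the included studies.

```python
# Simplified, illustrative APNCU-style classification.
# The expected-visit schedule and cut-offs are assumptions for this sketch,
# not the exact published algorithm.

def expected_visits(weeks_at_delivery: int, month_care_began: int) -> int:
    """Rough ACOG-style expectation: roughly monthly to 28 weeks, biweekly to 36,
    weekly thereafter, counted only from the month prenatal care began."""
    schedule_weeks = list(range(8, 29, 4)) + list(range(30, 37, 2)) + list(range(37, 43))
    start_week = month_care_began * 4  # crude month-to-week conversion
    return sum(1 for w in schedule_weeks if start_week <= w <= weeks_at_delivery)

def apncu_category(month_care_began: int, observed_visits: int, weeks_at_delivery: int) -> str:
    """Combine timing of initiation with the observed/expected visit ratio."""
    expected = max(expected_visits(weeks_at_delivery, month_care_began), 1)
    ratio = observed_visits / expected
    if month_care_began > 4 or ratio < 0.5:
        return "Inadequate"
    if ratio < 0.8:
        return "Intermediate"
    if ratio < 1.1:
        return "Adequate"
    return "Adequate Plus (Intensive)"

# Hypothetical pregnancy: care began in month 2, 11 visits, delivery at 39 weeks.
print(apncu_category(month_care_began=2, observed_visits=11, weeks_at_delivery=39))
```

A Kessner-style index uses the same two inputs (initiation and visit count adjusted for gestational age) but with different cut-offs and only three categories, which is one reason head-to-head comparisons of the two indices can classify the same pregnancy differently.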
BroaderPrenatal care utilization indices included in this review focus on quantifying the timing and amount of care used and therefore, they do not assess the quality or content of the prenatal services delivered . BecauseDifferences remain in the scientific literature and in CPG regarding recommendations about the timing and the frequency of prenatal care , 2. HoweMost commonly used indices of prenatal care utilization have moderate to limited evidence informing their validity and reliability. Current choices of a preferred index to measure prenatal care utilization can differ depending on the measurement properties that have priority to the users of the index. Important measurement properties such as criterion and predictive validity and responsiveness to change should be further evaluated for all the indices using sound research methodology.Additional file 1. MEDLINE Search Strategy."} +{"text": "Eimeria. Coccidiosis is the most prevalent disease in broilers with worldwide economic losses estimated to be up to US$ 3 billion per year. Understanding the relationship between nutrition and Eimeria infection is an important component of the growth and sustainability of poultry worldwide. Recognizing the role of nutrition in coccidial infections as part of new technologies to control disease pathogenesis should be a component of future control strategies. The papers presented in this collection address these findings and other new research in the control of coccidiosis.Poultry coccidiosis is a parasitic intestinal infection, caused by several species of Eimeria relationships during the initial steps of infection. L\u00f3pez-Osorio et al. presented new data describing the development of pathogenicity and immune response triggered by the early stages of the Eimeria life cycle .The first step in discovering new control measures against coccidiosis is understanding the host:Eimeria challenge are of primary importance. The I See Inside (ISI) methodology was recently developed and described by Sanches et al. to evaluate basal and infectious enteritis in chickens. The results showed the feasibility of ISI as an in-situ method that describes the pathogenic mechanisms of inflammation associated with losses in performance during subclinical Eimeria and necrotic enteritidis infections.More accurate methods to measure the different parameters involved in the performance of the birds under Eimeria infections. However, ionophores are classified as antibiotics and current political and societal pressure has resulted in the reduction and/or the banning of the use of these molecules in livestock production. Therefore, the use of dietary nutrients and feed additives as a means of controlling Eimeria infection and improving the broiler gut health were discussed and focused on answering some essential questions in the field. Exploring this area, Bortoluzzi et al. focused on how micro-minerals and their absorption by the gastrointestinal tract (GIT) could have an important role in the modulation of the intestinal physiology, immunity, and microbiology of broiler chickens during Eimeria infection. They showed the beneficial effects, increased performance and attenuated the inflammatory response when feeding Zn to broilers infected with coccidia and Clostridium perfringens. Additionally, Zn modulates the ileal microbiota thus improving the gut health of broilers. 
Further, Cu and Mn provided as feed additives during Eimeria infection were also shown to improve feed conversion and modulate the immune response during the infection.Administration of ionophores has long been employed as the best effective method to control Kiarie et al. focused on reviewing the role of feed enzymes and yeast derivatives in modulating coccidial infection. The authors discussed the use of these additives as an alternative and/or complementary strategy for the control of the infection. They cited several papers in which the use of whole yeast or its derivatives as nucleotides or cell wall-associated with feed enzymes enhanced cellular and humoral immunity which would be of utmost importance for increasing the effectiveness of coccidial vaccines.Similarly, Stefanello et al. discussed the use of a blend of protected organic acids and essential oils to improve the health of broilers challenged with Eimeria spp. and with Clostridium perfringens. The blend of protected organic acids and essential oils improved growth performance, nutrient digestibility, and intestinal health in the treated animals better than growth promoter (AGP) and could be an excellent alternative in AGP-free programs. Similarly, Calik et al. showed the improvement in the immune response followed by a modification in the intestinal morphology and increase in the performance of broiler chickens challenged with coccidiosis when fed with direct feed microbial (DFM) dietary additive. Chickens treated with the additive showed an increase in body weight gain and a reduction in the lesions at the duodenum and jejunum and an increment of the ileal villus area. An increase in the expression of IFN-\u03b3 and IL-1\u03b2 were also observed in the ileum, showing that the feed with microorganisms could be a promisor technique to improve the health of broilers.Two original research papers about feed additives were also published here. Soutter et al.. Further, the development of standard protocols to evaluate vaccine efficacy and the improvement of performance and nutrient utilization by birds is required. Gautier et al., describing the timing and magnitude of coccidiosis vaccination and its influence of growth performance and nutrient utilization during the various stages of Eimeria cycling. Coccidial vaccination impaired the feed conversion ratio (FCR) but did not change the body weight gain (BWG) and the feed intake (FI).Vaccination as the preferred method to prevent the infection was also discussed in a review of Taken together the papers published in this Research Topic showed several important results that could be used for the avian industry and highlight the importance of new studies in the exciting area of poultry coccidiosis.All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "The problem of transmission of intestinal microorganisms to tissues occurs when intestinal epithelial cells do not adhere tightly (tight junction), which is caused by improper nutrition, usually associated with poor mucosal status. The impact on maintaining its proper condition in the case of animals also depends on the proper preparation and fragmentation of the ingredients of the feed. 
Intestinal microbiota disorders are increasingly indicated as one of the causes of many autoimmune, neurodevelopmental and metabolic diseases. However, there are no studies indicating damage to the intestinal barrier of animals resulting in the penetration of microorganisms from the gastrointestinal tract directly into the bloodstream which may result in the development of chronic inflammation.Neovison vison) farm with a foundation stock of 4,000 females, abscesses were observed in the head, followed by progressive deaths. Antibiotic treatment with amoxicillin and clavulanic acid added to the animals\u2019 feed was not successful. Macroscopic and microscopic changes indicated local suppurative inflammation of the skin and subcutaneous tissue with the presence of purulent fistulas. Microbiological analysis showed a significant increase in Escherichia coli in all samples taken from the abscesses. The results indicate the migration of intestinal bacteria through disturbance of the permeability of the intestinal barrier and their transfer to the blood. Symptoms were alleviated in all animals following changes in the feed components and in feed particle size.On a mink (It is necessary to take into account the possibility of transmission of intestinal bacteria in the etiology of inflammatory diseases in animals. Conducting more research in this field will improve the understanding of the relationship between intestinal microbes and the health of the body as a whole. Enterobacteriaceae (e.g. Escherichia coli) are the cause of intestinal infections when the body\u2019s immune defences are weakened [The diet of animals is the most important factor affecting their productivity. In the case of carnivorous fur animals, this means obtaining optimal reproduction rates and skins with high quality parameters. The energy value of the feed rations should be adapted to the feeding period, and the feed should be preserved to protect it against the development of pathogenic microorganisms. Incorrect feeding of mink can lead to metabolic disorders, which are often imperceptible in the short production cycle of these animals. When selecting feed components, especially of animal origin, care should be taken about their microbiological status, freshness, high biological value and degree of homogenization , 2. In aweakened .Neovison vison).The aim of the study is to present the problem of the occurrence of purulent skin lesions on the head and neck of farmed American mink and by special methods: periodic acid-Schiff (PAS), Gomori methenamine-silver (GMS) and Ziehl-Neelsen acid-fast (ZN) staining. Before the organ samples were taken, the lesions in the head of the animals were cut and samples were taken for microbiological testing. The material was plated on an agar medium with 5% sheep blood, MacConkey agar, and Sabouraud agar with chloramphenicol, and then incubated for 24 hours at 37\u00a0\u00b0C. The resulting colonies were identified using bioMerieux API biochemical assays.2 to about 12\u00a0cm2, covered with yellow-brown crusts, were observed on the scalp and dorsal surface of the neck .Escherichia coli in all samples taken from the abscess material. The results indicate the migration of intestinal bacteria through disturbance of the permeability of the intestinal barrier and their transfer to the blood. The interview with the breeder indicated that the feed ration was correctly balanced in terms of energy. 
Attention was drawn to the introduction of a feed component in the form of a 30% share of ground turkey bones, which coincided with the onset of disease among the animals and could have caused mechanical damage to the intestinal barrier.The microbiological analysis showed a very large increase in E. coli strains in the colonic mucosa of dogs with granulomatous colitis [Disturbances of intestinal permeability have been demonstrated to play a role in the pathogenesis not only of gastrointestinal diseases, but also of nervous, immune and reproductive disorders . Translo colitis \u201314.To alleviate the symptoms in all animals and prevent remission of the disease, it was recommended that the ground turkey bones should be eliminated from the diet, as they were believed to have caused the damage to the intestinal mucosa. For economic reasons and in order to maintain the proper structure and energy value of the feed, the breeder chose to replace the turkey bones with poultry breast bones, but ground much more finely. This resulted in a significant reduction in the incidence of new cases of disease. Complete remission of the lesions was observed in the animals a few weeks after the formula and means of preparing the feed had been changed.Maintaining the proper state of intestinal epithelium depends on many factors, including the proper preparation of animal feed. Intestinal microorganisms that can enter the blood and tissue of animals through the intestinal epithelium should be considered in the etiological differentiation of disease lesions. A thorough understanding of bacterial translocation mechanisms will allow effective therapy of the source of infection without re-emission of inflammatory processes."} +{"text": "The research results presented in this article were obtained by joint scientific research on creatingcement materials with reduced impedance. It is known that functional additives added to impart electrically conductive properties have a negative impact on physical and mechanical characteristics of the material. This study suggests using the multiwall carbon nanotubes in the amount of 7% from binder mass as a functional additive. The results obtained prove that the addition of this amount of the modifier does not lead to a significant decrease of strength characteristics. Calcium nitrate in the amount of 1\u20137% was added in order to level the strength loss and to ensure the effective stable electrical conductivity. The multifunctionality of using this salt has been proven, which is manifested in the anti-frost and anticorrosive effects as well in enhancement of electrical conductivity. The optimal composition of the additive with 7% of carbon nanotubes and 3% of calcium nitrate ensures a reduced electrical impedance of cement matrix. The electrical conductivity was 2440 Ohm, while the decrease of strength properties was within 10% in comparison tothe control sample. The nature of changes in the microstructure were studied to determine the influence of complex modifications that showed significant changes in the morphology of the hydration products. The optimum electrical characteristics of cementitious materials are provided due to the uniform distribution of carbon nanotubes and the formation of a network of interconnected micropores filled with the solution of calcium nitrate that provides additional and stable electrical conductivity over time. 
Research on technological methods of imparting electrical conductivity to cementitious concretes is carried out to expand the functional properties of conventional building materials ,2. TodayElectrically conductive cement materials are used to prevent icing and snow accumulation in the structures of road surfaces, automobile bridges, parking lots, sidewalks and runways due to heat emission ,7. ElectIn addition, electrically conductive cement materials are used in the production of antistatic floors and as electromagnetic reflectors to protect against electromagnetic interference . At the The production of functional cement concrete is possible by mixing the traditional components with electrically conductive components providing stable electrical properties .Today, composite materials based on Portland cement are widely used ,4,11. FuAs noted in studies ,12, the In view of this, it can be said that the minerals of Portland cement clinker have a disordered crystal structure with vacant sites in the lattice points and also contain a variety of ions with significantly weaker bonds located in the interstices. The presented features of structure of cement minerals influence the physical and chemical properties of the material including the ability to transmit electrical current. The joint motion of the thermal and electric field can lead to the fact that one field can become a current carrier, which determines predominantly ionic conductivity of cement matrices. The value of this type of electrical conductivity depends on the degree of orderliness of ions in the crystal lattice .Factors such as the water-to-cement ratio , the volIn particular, the relationship between the conductivity of cement gel as a colloidal system and the presence of moisture in it was stated in studies ,5,11. ThThe effect of the crystallization degree on the increase of resistivity was also confirmed by studies of low-basic calcium hydrosilicates ,5,11 andTraditional cement materials based on Portland cement have increased resistivity, the values of which vary from 6.54 to 11 kOhm ,3,10,16.Along with this, it is necessary to take into account that the amount of conductive components should not exceed certain limits to provide the percolation effect for improving the electrical properties of cement material ,18 as weAccording to paper , the useThe mechanism of reduction of electrical resistance of a material is considered from the standpoint of optimal structure formation in which the electrically conductive particles forms cluster bonds contribuHowever, there are factors that determine the instability of electrical properties. They are associated with increased humidity due to operating conditions, with the blocking of electrically conductive ions by hydration products as well as with the issues of ensuring high initial strength and density of materials based on cement binders. This significantly limits the development of an efficient electrically conductive material due to the use of one functional modifier .A decrease of crystallization degree of structure of cement matrix is possible due to the use of various salts, in particular calcium nitrate, as shown by laboratory studies and the Today, calcium nitrate is used as an accelerator of the setting time of hardening cement paste and an inhibitor of electrochemical corrosion. The use of calcium nitrate solves the problem of decreasing strength, both at the initial hardening stage and when gaining strength within the project time. 
It should also be noted that calcium nitrate exclusively affects the morphology of secondary crystalline hydrates of the cement matrix ,32. BaseThe aim of the paper is to determine the effect of complex modification including multiwall carbon nanotubes and calcium nitrate on the electrical properties of materials based on Portland cement.Portland cement CEM I 42.5 N was used as the main binder. The chemical and mineralogical composition of the clinker is represented by the following minerals, %: tricalcium silicate C3S-64.6, dicalcium silicate C2S-10.7, tricalcium aluminate C3\u0410-7.0, tetracalciumaluminoferrite C4AF-14.7, MgO-1.4. Chemical and mineralogical compositions of cement have been confirmed by manufacturer\u2019s product bulletin provided by Eurocement Group. Polyfractional natural quartz sand conforming to EN 196-1 was used as fine aggregate. The sand grains were predominantly rounded. The silica dioxide amount in sand was more than 98%.Control mortar (CSM) had the ratio of cement-to-sand equal to 1:3 by mass. Modification of the mortar was done by adding Fulvek 100 multiwall carbon nanotubes in the amount of 7% by cement mass . The particular amount of multiwall carbon nanotubes had been defined by previous studies ,34. FresCarbon nanotubes (CNTs) were added into 150 mL of water to obtain an aqueous modifying suspension, which was then treated with a Hielscher UP200 ultrasonic homogenizer for 5 min at 150 W, a frequency of 26 kHz and a maximum amplitude of 70 \u03bcm. Additionally, calcium nitrate in the amount from 1% to 7% by cement mass was added into the mixing water with a step of 1%.At the age of 7, 14 and 28 days the flexural strength and compressive strength tests were performed for control and modified samples using standard beam-shaped samples and a hydraulic press. The coefficient of variation was 13.5%.The measuring diagram is shown in Fig.\u00a0. The MNIPI E7-20 device was used to determine changes in the electrical conductivity of layers and resistivity. The operating principle of the device is based on the voltmeter-ammeter method. The voltage of the operating frequency from the generator is fed through the measured object to the converter that forms two sinusoidal voltages . Voltages are converted into digital form.An IR Fourier spectrometer \u201cIRAffinity-1\u201d in the frequency range 4000 \u00f7 400 cm\u22121 in transmitted light was used to analyze the samples by infrared spectroscopy.The analysis of the microstructure of the samples was conducted using a ThermoFisher Scientific Quattro S scanning electron microscope. Imaging was carried out in the low vacuum mode at 20 kV, without deposition, at a pressure of 50 Pa.The physical and mechanical properties of the compositions were determined to assess the effect of modifying additives. The flexural strength at the age of 28 days of the modified sample exceeds the corresponding parameter of the control sample by 7% and the compressive strength decreases slightly within 10% . 
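The resistance values discussed in the following paragraphs follow from the voltmeter-ammeter principle described above: the specimen resistance is obtained from the measured voltage and current, and resistivity is then normalized by the specimen geometry. A minimal numerical sketch is given below; the specimen dimensions, voltage and current are hypothetical placeholders chosen only so that the resistance comes out near the 2440 Ohm order of magnitude reported later, and are not measurements from the study.

```python
# Illustrative sketch of the voltmeter-ammeter estimate of resistance and resistivity.
# All numerical values below are hypothetical placeholders, not data from the study.

def resistance_from_vi(voltage_v: float, current_a: float) -> float:
    """Ohm's law: R = U / I."""
    return voltage_v / current_a

def resistivity(resistance_ohm: float, area_m2: float, length_m: float) -> float:
    """Resistivity of a prismatic specimen: rho = R * A / L."""
    return resistance_ohm * area_m2 / length_m

if __name__ == "__main__":
    # Hypothetical beam-shaped specimen, 40 x 40 x 160 mm.
    area = 0.04 * 0.04    # electrode cross-section, m^2
    length = 0.16         # distance between electrodes, m
    r = resistance_from_vi(voltage_v=2.44, current_a=0.001)  # -> 2440 Ohm
    print(f"R = {r:.0f} Ohm, rho = {resistivity(r, area, length):.1f} Ohm*m")
```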
The addThe changes of electrical characteristics over time can be justified by the process of structure formation and the formation of the matrix framework that is gradually filled with calcium hydrosilicates. The volume of the liquid phase is significantly reduced contributing to the resistance increase. Multiwall carbon nanotubes also contribute to the processes of structure formation. During hardening they are gradually covered with hydration products that block charge transfer which leads to an insignificant difference in the resistance values of the control and modified samples at the age of 28 days.IR spectral analysis and microstructure analysis were performed to determine the mechanism of the influence of carbon particles on the structure and composition of cement matrix.Several characteristic groups should be distinguished when comparing the obtained spectra in The morphology of hydration products is changed byadding the carbon modifiers; namely crystalline hydration products, including calcium hydroxide, are formed in much larger amounts . The cryThe functional role of carbon nanotubes in the cement matrix can be assumed based on the localization of hydration products . The funIt is known that calThus, calcium nitrate, depending on the amount, added provides the increase of compressive strength up to 20\u201330% as shown in It is known that theUniform distribution of multiwall carbon nanotubes in the material providesThe creation of a network of connected micropores that are filled with salt solutions is proposed in paper in orderThe role of carbon nanotubes in the formation of the structure of hydration products can be defined by analysis of the microstructure of a calcium hydroxide crystal with carbon nanotubes . Under tThe noted structural features of the modified cement matrix make it possible to explain the increase of its electrical conductivity by the presence of uniformly distributed carbon nanotubes in hydration products that provide a decrease of electrical resistance. At the same time the electrical conductivity of the cement matrix continues to decrease in period from 3 to 28 days due to the formation of new hydration products that screen the surface of nanotubes as well as due to the transition of calcium nitrate from the liquid phase to the structure of secondary hydration products. Secondary hydration products are formed on free surface of pores with the formation of lamellar aggregates including calcium hydroxide .The morphology of the cement matrix with the combined addition of carbon nanotubes and calcium nitrate is compacted with the formation of lamellar crystalline hydrates. The modified structure of the cement matrix provides the conductivity of charged particles and a decrease of electrical resistance of the material while the strength characteristics increase in comparison to the control composition.-the slight decrease of compressive strength is observed when using multilayer carbon nanotubes in the amount of 7% as an impedance-reducing modifier; at the same time the decrease of electrical resistance of samples was 12% at the age of 28 days of hardening compared to the control additive-free sample;-the increase of the electrical conductivity of the composition with 7% of carbon nanotubes should be noted in the period of structure formation from 3 to 28 days due to changes in the morphology of hydration products. 
The influence of microstructure features on the electrical conductivity of the cement matrix should be indicated since amorphous hydration products in the control sample determine the unsatisfactory electrical properties;-the positive effect of the complex modification with calcium nitrate and carbon nanotubes has been determined. It consists of sufficient strength characteristics while reducing the electrical resistance. The composition with the content of 7% carbon nanotubes in combination with 3% calcium nitrate ensures the compressive strength of 28 MPa and the electrical conductivity of 2440 Ohm.The following conclusions can be drawn by analyzing the results obtained:"} +{"text": "Traumatic brain injury remains a growing public health concern and represents the greatest contributor to death and disability globally among all trauma-related injuries. There are limited clinical data regarding biomarkers in the diagnosis and outcome prediction of TBI. The lack of real effective treatment for recovery calls for research of TBI to be shifted into the area of prevention, treatment of secondary brain injury and neurorehabilitation. The neuropeptide pituitary adenylate cyclase activating polypeptide (PACAP) has been reported to act as a hormone, a neuromodulator, a neurotransmitter and a trophic factor, and has been implicated in a variety of developmental and regenerative processes. The importance of PACAP in neuronal regeneration lies in the upregulation of endogenous PACAP and its receptors and the protective effect of exogenous PACAP after different central nervous system injury. The aim of this minireview is to summarize both the therapeutic and biomarker potential of the neuropeptide PACAP, as a novel possible target molecule presently being investigated in several human conditions including TBI, and with encouraging results in animal models of TBI. Traumatic brain injury (TBI) is caused by an external force and is oAnother clinically challenging aspect of TBI is the poor outcome and limited therapeutic possibilities. Based on animal studies hundreds of candidates have emerged as potential treatment option to reduce the brain damage. However, only a few have real translational value. The aim of this review is to summarize both the therapeutic and biomarker potential of the neuropeptide PACAP (pituitary adenylate cyclase activating polypeptide), as a novel possible target molecule presently being investigated in several human conditions including TBI, and with encouraging results in animal models of TBI .PACAP is a neuropeptide that was first isolated in 1989 from ovine hypothalamic extract . After iRegarding the biomarker value of PACAP, dozens of recent studies have investigated presence and changes of the neuropeptide in various human conditions. The presence of PACAP has been described in several human tissue samples and biological fluids . Among tThe first proof of PACAP being protective in traumatic brain injury came from observations by Farkas et al. . They ex+T cells and decreased CD8+. The authors concluded that possibly PACAP inhibits the expression of IL-12 thereby preventing T cell proliferation, and PACAP inhibited FasL expression suppressing the apoptosis of CD4+T cells [Another model that is often used to mimic diffuse TBI is the central fluid percussion head injury model. In this rat model, K\u00f6vesdi et al. examined the axonoprotective effect of PACAP in the brainstem . 
They ad+T cells .Soon after the discovery of PACAP, it was shown by RIA measurements that PACAP occurs at highest concentrations in the brain . SeveralThese results are in accordance with several other observations showing upregulated PACAP levels after traumatic nerve injuries . In a moHuman brain tissues were investigated from medico-legal autopsy cases by van Landeghem et al. in humanIn the acute management of TBI patients, standard medical and surgical interventions play a significant role. There is a lack of truly effective treatment for recovery; this calls for TBI research to be shifted into the area of prevention, treatment of secondary brain injury and neurorehabilitation. The importance of PACAP in neuronal regeneration lies in the upregulation of endogenous PACAP and its receptors and the protective effect of exogenous PACAP after different central nervous system injuries. Animal models can not only help us understand the pathophysiology of TBI, but also allow us to develop interventions for preventing secondary injury, enhancing brain repair and improving recovery after TBI . The resRegarding the biomarker value of PACAP, an increasing amount of evidence suggests the high translational potential of PACAP as a diagnostic and/or prognostic biomarker, especially in subprocesses such as the extent of blood-brain barrier disruption or the state of the systemic inflammatory response syndrome. The expanding number of publications in the last few years dealing with the role of PACAP as a novel biomarker shows that it is a rapidly developing, hot and promising topic. We believe that future studies will contribute to a better understanding of the possible role(s) of PACAP in human TBI and could serve as a good source for multi-center clinical trials which involve this topic."} +{"text": "Digital technologies help to improve the work of psychiatric services through the use of modern approaches.The use of telepsychiatry (TP) during war allows people with psychiatric disorders to receive quality treatment that would otherwise be unavailable.TP and other digital technologies are an important resource for providing psychiatric care to internally and externally displaced persons affected by war.As our experience shows, the conditions for effective use of TP are the availability of a legislative, technical and staffing base. The services are implemented according to the protocol, which defines the methods for evaluating treatment effectiveness.The presentation will provide methodological approaches to the use of TP and other digital tools.None Declared"} +{"text": "We read with great interest the recently published SuDDICU trial and the After reading the articles, the Sepsis Group of the Department of Medicine, Surgery, Dentistry of the University of Salerno met to discuss how to implement SDD in our clinical practice. We wanted to share our thoughts on translating the results of the SuDDICU trial into efficacious real-life interventions. However, further relevant information regarding the study should be considered prior to applying the results of the study to our patients. The discussion led us to recognize that we were missing some information that needed to be included. The first is how to relate the low levels of antimicrobial resistance in the SuDDICU trial to the higher prevalence of antimicrobial resistance in our academic ICU. 
Moreover, we noted the different percentages of surgical patients, which in our ICU will be greater than the 27.6% of the SuDDICU trial series, and our different rates of cardiac surgery patients, which represent only 3% of the whole population of the SuDDICU. Differences may be relevant, as a previous retrospective study showed that cardiosurgical patients preoperatively receiving tobramycin and polymyxin orally did not show beneficial effects on the clinical outcomes [Even if the possibility of the emergence of anti-microbial-resistant organisms is very unclear in our setting and undoubtedly scary, the chance that the use of SDD in patients receiving mechanical ventilation in the ICU may reduce the incidence of ventilator-associated pneumonia and new positive blood cultures is still very appealing. Our analysis is ongoing, and we look for hints and suggestions to understand how to apply this relevant research to our everyday practice."} +{"text": "The architecture of retrorectal fasciae is complex, as determined by different anatomical concepts. The aim of this study was to examine the anatomical characteristics of the inferomedial extension of the urogenital fascia (UGF) involving the pelvis to explore its relationship with the adjacent fasciae. Furthermore, we have expounded on the clinical application of UGF.For our study, we examined 20 adult male pelvic specimens fixed in formalin, including 2 entire pelvic specimens and 18 semipelvic specimens. Our department has performed 466 laparoscopic rectal cancer procedures since January 2020. We reviewed the surgical videos involving UGF preservation and analyzed the anatomy of the UGF.The bilateral hypogastric nerves ran between the visceral and parietal layers of the UGF. The visceral fascia migrated ventrally at the fourth sacral vertebra, which formed the rectosacral fascia together with the fascia propria of the rectum; the parietal layer continually extended to the pelvic diaphragm, terminating at the levator ani muscle. At the third to fourth sacral vertebra level, the two layers constituted the lateral ligaments.The double layers of the UGF are vital structures for comprehending the posterior fascial relationship of the rectum. The upper segment between the fascia propria of the rectum and the visceral layer has no evident nerves or blood vessels and is regarded as the \u201choly plane\u201d for the operation. Colorectal surgeons must understand the anatomical characteristics of the related fascia surrounding the rectum during rectal cancer surgery. Moreover, anal, urinary, and sexual functions may be preserved if surgery is performed on the correct planes \u20133. In miAn increasing number of studies have suggested that the fascia covers the hypogastric nerves (HGNs). Researchers have conducted studies on large histological slices and suggested the term \u201cprehypogastric nerve fascia\u201d (pre-HGN fascia) for the fascia overlying the HGN . SimilarUnderstanding the anatomy of the UGF has specific clinical guiding significance in colorectal cancer surgery. In our previous anatomy study, we also proposed that the UGF derives from the renal fascia and includes the visceral and parietal layers, namely pTherefore, we conducted an in-depth study of the anatomy of the UGF in the pelvis and explored its anatomical relationships with surrounding fasciae, nerves, and organs in male formalin-fixed cadavers. 
We also described the clinical relevance of the UGF while performing mid-low laparoscopic TME.The study was conducted in strict accordance with protocols approved by the Biomedical Ethics Committee of Xi\u2019an Jiaotong University (Ethics Permit Number: 2014\u2009\u2212\u20090303). The format of the informed consent form was in line with the guidelines of the China Organ Donation Administrative Center. Furthermore, this anatomical study followed the CACTUS guidelines . The incThe pelvis was dissected along the midsagittal plane in 18 cadavers to observe the hemipelvis, and the whole pelvis was examined in the remaining cadavers. The dissections were conducted by two experienced colorectal surgeons from our department utilizing standard surgical instruments to avoid complicated visualization techniques.Observing the whole pelvis: The cadavers were cut transversely along the plane of the fourth lumbar vertebra and the perineal plane to observe the entire pelvis.According to the laparoscopic TME procedure, dissection was performed in the retrorectal space and reached the level of the levator ani muscle. Next, after identifying the peritoneal reflection, Denonvilliers\u2019 fascia was dissected, and its anterior space was observed. Then, dissection extended toward the lateral ligaments with special consideration to maintain the integrity of the fascial continuity.2.Preparation of the hemipelvis: A hacksaw was used to dissect 18 cadavers along the midline of the sagittal plane from the lower lumbar vertebral spinous process to the coccyx tip to obtain the hemipelvis.The operation was performed in the sacral promontory between the fascia propria of the rectum and the visceral layer of the UGF. Next, dissection continued between the parietal layer of the UGF and the sacrum.The main observations are summarized below: (1) the distribution of the UGF in the pelvic cavity and its relationship with surrounding fasciae and (2) the distribution of the UGF with HGNs, ureters, and pelvic plexus.Originating from the renal fascia, the visceral and parietal layers of the UGF extended along the sacrum to the pelvic floor, constituting the presacral fascia. The ureter and the bilateral HGNs ran between the two layers of the UGF Fig.\u00a0 13]. Th. Th13]. Presacral venous were exposed when the parietal pelvic fascia was lifted from the sacrum Fig.\u00a0. LateralOur study provides a comprehensive description of the UGF and its features. The UGF, originating from the renal fascia, comprises the visceral and parietal layers and extends downward to the pelvic floor along the sacrum. The UGF envelops the bilateral HGNs and ureters and extends to the pelvis. At the level of the fourth sacral vertebra, the visceral fascia inverts ventrally and forms the rectosacral fascia. However, the parietal layer continues along the sacrum and ends at the levator ani muscle. The parietal pelvic fascia is located on the surface of the sacrum.The UGF is known to originate from the renal fascia and envelop the ureters and HGNs. However, there are distinct discrepancies in opinions on the composition of the UGF. In a previous study, the ureters and genital vessels were found to be located in the two layers of the UGF, whereas the HGNs were not mentioned . NeverthUnderstanding the UGF may be instrumental in comprehensively understanding the terminology used for different anatomical regions located posterior to the rectum, as it comprises visceral and parietal layers. 
For example, different designations exist for the fascia covering the HGNs \u201321. ThesIn other studies, researchers have shown multiple layers of fasciae in the posterior rectum. Based on macroscopic dissection and histology, a previous study detailed that the parietal pelvic fascia comprises 2 lamellae ensheathing the autonomic pelvic nerves . NotablyThe mesentery-like structures formed by the UGF and the pelvic plexus in the lateral wall help us to comprehend the lateral rectal ligament Fig.\u00a0. PreviouAccording to the UGF\u2019s characteristics, we propose that the retrorectal space is divided into upper and lower segments by the visceral layer of the UGF. On the cranial side of the fourth sacral vertebra, the space between the fascia propria of the rectum and the visceral layer may be called the upper segment of the retrorectal space. Correspondingly, the lower segment of the retrorectal space lies between the visceral and parietal layers on the caudal side of the fourth sacral vertebra. The presacral space is located between the parietal layer of the UGF and the piriformis fascia . Based oBased on the above perspectives, the optimal operation plane of the retrorectal space is the upper segment of the retrorectal space between the fascia propria of the rectum and the visceral layer. This plane corresponds to the \u201choly plane\u201d of TME. Notably, after the operation reaches or crosses the rectosacral fascia, separation is performed in the lower segment of the retrorectal space Fig.\u00a0. HoweverThis study is the first to propose the double layers UGF as a theoretical guide in TME and expound on the relevant noteworthy details. Comprehending the characteristics of UGF and discerning the relevant, distinct fasciae are fundamental for colorectal surgeons to distinguish the surgical-related plane better, avoid nerve injury or bleeding, and conduct TME faster at the criteria plane. Establishing the retrorectal space based on the characteristics of UGF also provides a guide for separating the lateral rectal mesentery and discerning Denonvilliers\u2019 fascia adjacent to the UGF . MoreoveThis study has two limitations. The study of formalin-fixed cadavers instead of fresh cadavers is an inherent limitation, as findings could be ascribed to postmortem degenerative changes. In addition, we did not perform histological examinations in the present study. Such examinations will be conducted in future anatomical studies.The double layers of the UGF are vital structures for understanding the posterior fascial relationship of the rectum. The parietal and visceral layers of the UGF divide the retrorectal space into upper and lower segments. Furthermore, the upper segment between the fascia propria of the rectum and the visceral layer has no evident nerves or blood vessels and is the \u201choly plane\u201d for conducting the operation."} +{"text": "Extrahepatic biliary tract and gallbladder neoplastic lesions are relatively rare and hence are often underrepresented in the general clinical recommendations for the routine use of ultrasound (US). 
Given the need for an updated, summarized review of the current literature to guide clinicians, this paper presents an updated position of the Italian Society of Ultrasound in Medicine and Biology (SIUMB) on the use of US and contrast-enhanced ultrasound (CEUS) in extrahepatic biliary tract and gallbladder neoplastic lesions such as extrahepatic cholangiocarcinoma, gallbladder adenocarcinoma, gallbladder adenomyomatosis, dense bile with polypoid-like appearance and gallbladder polyps. This document represents the results of the Italian Society of Ultrasound in Medicine and Biology (SIUMB) guideline committee\u2019s research concerning the use of conventional and contrast-enhanced ultrasound (CEUS) in neoplastic lesions of the gallbladder and extrahepatic biliary tract.In 2016, we started collecting data from the literature published over the past 10\u00a0years about the role of ultrasound (US) and CEUS in neoplastic lesions of the gallbladder and extrahepatic biliary tract. Recommendations were formulated on the basis of the analyzed data. Further, they were assessed by a panel of Italian physicians, experts in the use of ultrasound in neoplastic lesions of the gallbladder and extrahepatic biliary tract, at the \u201cConsensus\u201d that took place in Rome, on 16 November 2021, during the last national conference.The results of the expert committee\u2019s work were presented to SIUMB members on 17 November 2021, and the text, including recommendations, was then approved by the SIUMB executive bureau on 20 January 2022.This paper is the summary of the SIUMB\u2019s position concerning the use of US and CEUS in neoplastic lesions of the gallbladder and extrahepatic biliary tract. The aim is to present recommendations that define the cases in which it is appropriate to apply a more sophisticated ultrasound imaging technique, such as CEUS, and those in which other imaging techniques need to be used.The importance of ultrasound, and in particular the use of ultrasound contrast agents (UCAs), is well recognized in Italy; however, a guideline document had not previously been developed by SIUMB. 
In the light of this lack, and on the strength of 2\u00a0decades\u2019 experience using CEUS, SIUMB set up a guidelines committee.In the first meeting, held in Rome in September 2016, the authors carried out an analysis and selection of the already published guidelines concerning the contributions of unenhanced and enhanced ultrasound to the diagnosis of neoplastic lesions of the gallbladder and extrahepatic biliary tract.After the analysis of international and national guidelines, the second step was to evaluate the most important papers on the role of conventional and contrast-enhanced ultrasound in the management of patients with neoplastic lesions of the gallbladder and biliary tree.To do that, we carried out a bibliographic search by entering the following terms in PubMed: \u201cbiliary tree and cancer and contrast enhanced ultrasound \u201cand \u201cgallbladder and neoplasm or cancer and contrast enhanced ultrasound \u201c.The research was limited to the period between 2016 and 2019, and led to the identification of 261 full papers for the item biliary tree cancer and 217 full papers for the item gallbladder neoplasm.By activating filters for clinical trials, review and meta-analyses, we reduced the search result items to 76 full papers for biliary tree cancer and 45 full papers for gallbladder neoplasm.We proceeded to filter these documents, only including: studies conducted on humans; studies in which the use of CEUS has been evaluated in terms of the identification and characterization of neoplastic lesions of the gallbladder and biliary tree, and the reporting data in terms of sensitivity/specificity or positive and negative predictive value (PPV-NPV); studies in which Sonovue was the only UCA employed ; studies in which a qualitative evaluation of contrast medium has been performed ; studies in which there were at least 30 patients ; studies published in English; and studies in which the gold standard was the histological result, the computed tomography (CT) and/or magnetic resonance imaging (MRI) diagnosis, or the clinical and radiological follow-up.Finally, 12 full papers were chosen for biliary tree cancer and 9 full papers for gallbladder cancer guidelines dated 2017 on the management and follow-up of gallbladder polyps as well as two meta-analysis articles).In this document, the SIUMB\u2019s guidelines committee decided to focus mainly on the US diagnostic aspects of gallbladder and biliary tree lesions, with no recommendations regarding the evaluation of tumor response after loco-regional treatment and systemic therapy.In drafting the final document, we decided to report the conclusions of the existing literature as recommendations, and to include the experts\u2019 opinions on all the gallbladder and biliary tree neoplastic lesions presented.The evidence for and strength of the recommendations is generally assessed according to the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) system .The strength of recommendations depends on the quality of the evidence. Each recommendation is graded as strong or weak; high-quality evidence corresponded to a strong recommendation, while a lack of or uncertain evidence resulted in a weaker recommendation.However, in the field of neoplastic lesions of the gallbladder and biliary tree current level of evidence present in the major part of the published studies is scarce with most of the available studies being retrospective and even monocentric \u201313. 
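Since eligibility required studies to report accuracy as sensitivity/specificity or PPV/NPV, the short sketch below recalls how these four quantities are derived from a 2x2 contingency table against a reference standard. The counts used are arbitrary placeholders for illustration, not data from the included papers, and it is worth remembering that PPV and NPV, unlike sensitivity and specificity, depend on disease prevalence in the studied population.

```python
# Diagnostic accuracy measures from a 2x2 contingency table.
# The counts below are arbitrary placeholders for illustration only.

def diagnostic_accuracy(tp: int, fp: int, fn: int, tn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),  # true positive rate
        "specificity": tn / (tn + fp),  # true negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Hypothetical CEUS results compared with a histological reference standard.
metrics = diagnostic_accuracy(tp=45, fp=5, fn=3, tn=47)
for name, value in metrics.items():
    print(f"{name}: {value:.2f}")
```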
MoreThe SIUMB experts\u2019 committee voted on each of the statements. Each member of the committee had the ability to approve, disapprove or abstain from voting on a particular statement. A strong consensus was reached when there was agreement in\u2009>\u200995%, while broad consensus was achieved when\u2009>\u200980% of the experts agreed.Extrahepatic biliary tracts include the right and left hepatic ducts, their confluence, the common hepatic duct, the cystic duct and the common bile duct.The most frequent neoplastic pathology of the extrahepatic biliary tract is represented by cholangiocarcinoma, glandular neoplasia (adenocarcinoma) originating from the cells of the ductal epithelium or from the periductal glands. Cholangiocarcinomas of the extrahepatic biliary tract are clinically characterized by jaundice/cholestasis , 14.Recommendation: Ultrasound examination represents the first level examination of such patients allowing clinicians to differentiate obstructive from non-obstructive forms of jaundice/ cholestasis (strong consensus).Extrahepatic cholangiocarcinoma includes the perihilar form and the distal form (originating from the common bile duct).Perihilar cholangiocarcinoma causes dilation of the upstream intrahepatic biliary tract, with normal extrahepatic biliary tract, while the neoplastic lesion may appear of variable echogenicity, but often is visualized as isoechoic comparing to the surrounding hepatic parenchyma and therefore poorly delineated and sometimes even invisible; in such cases, the dilation of the intrahepatic biliary tract and the lack of connection of the bile ducts to the hilum allows us to hypothesize the perihilar form of cholangiocarcinoma (strong consensus);CEUS helps to improve the visibility of the lesion, as well as to note the dilation of the intrahepatic biliary ducts , 7, 8 can become visible as an echogenic endoluminal lesion that cannot be differentiated from stones or echogenic material (dense bile-clots), but more frequently, in relation to the periductal infiltrating type growth , is not detectable by ultrasound. In such cases, therefore, US allows us only to identify the dilation of the intrahepatic biliary tract of the common bile duct and of the gallbladder (strong consensus);The common bile duct sometimes has a filiform or abruptly interrupted appearance in the tract affected by the neoplasm and the diagnosing requires the use of additional methods (MRI\u2014Echoendoscopy\u2014Endoscopic retrograde cholangiopancreatography) (strong consensus);It is rarely possible to note an echogenic material, without posterior acoustic shadow, located within the common biliary tract, which can simulate the presence of biliary debris, stones in formation or clots. CEUS can show the nature of the obstruction by presenting an enhancement of the lesion in case of neoplasm , 7, 8.Distal cholangiocarcinoma recommendations:The extrahepatic biliary tract is rarely affected by secondary tumors of metastatic type, especially those of gastrointestinal origin (i.e. 
colon and stomach cancers) or melanoma and lymphoma.At US, the metastases appear as masses that interrupt the biliary tract with an upstream dilation of the biliary tract (strong consensus);At CEUS, the metastases can present diffuse or peripheral hyperenhancement in the arterial phase, followed by hypoenhancement in the portal and late phase, with an image very similar to what can be observed in the primitive forms ;In cases where these vascular signals are not detectable, CEUS can be used to differentiate solid lesions from the presence of biliary sludge. In particular, the absence of enhancement of the polypoid-like lesion is a sign of the presence of dense bile (sludge) (100% accuracy) , 16 . A diffuse variant and a focal variant were described and are characterized by diffuse and focal thickening of the wall, respectively. Sometimes in patients with adenomyomatosis of the gallbladder, small echogenic spots with \"comet tail artifact\" related to the presence of parietal cholesterolosis can be observed in the Rokitansky-Aschoff sinuses.Ultrasound represents the imaging method of choice in its identification and characterization, with an accuracy ranging from 91.5 to 94.8% (strong consensus);CEUS increases the sensitivity of US in identifying RASs and in documenting the continuity of the gallbladder walls. Moreover, CEUS targeted at identifying the thickening area of the gallbladder wall shows the same degree of vascularization as the adjacent wall, although an area of hyperenhancement can occur in 15% of cases (strong consensus);Avascular spaces representing RASs should be explored at the internal part of the thickened wall of the gallbladder;RASs appear avascular at all stages of the dynamic study, regardless of their content. The identification of avascular spaces in the context of the thickened gallbladder wall points to the presence of focal adenomyomatosis \u201322 ;The criteria used in the therapeutic clinical management of polypoid lesions of the gallbladder, identified after ultrasound screening, take into account the size of the polyp and the presence of some risk factors of malignancy , 26 with possible presence of a large vascular pole which is well visualized by color Doppler and especially by CEUS (strong consensus);Vascularization is characterized by regular vessels with a tree-like distribution. CEUS appearance is generally characterized by hyperenhancement in the arterial phase, followed by isoenhancement in the venous phase or, more rarely, by hypoenhancement. 
From the analysis of the literature data, there is currently no specific dynamic pattern that, following CEUS, would allow to distinguish adenoma from malignant tumor of the gallbladder ;The use of CEUS is strongly debated in this scenario and in the latest European guidelines of 2017 its use is envisaged only in the differentiation between chronic cholecystitis and neoplasia (strong consensus);This caution is linked to the fact that the CEUS imaging and in particular hyperenhancement in the arterial phase do not allow us to differentiate between a malignant and a benign lesion (strong consensus);According to the recent meta-analysis, the most accurate criteria in the identification and characterization of tumor pathology of the gallbladder by CEUS are represented by: (1) identification of the discontinuity of the gallbladder wall ; (2) infiltration of the adjacent liver parenchyma; (3) demonstration of tortuous and irregular vessels at the level of the tumor mass with thickening of the wall (strong consensus) , 28.Recommendations:During 2020\u20132021 other studies have been published in the field of differential diagnosis between adenomatous and cholesterol polyps \u201331, and Size: significantly greater mean diameter of adenomatous polyps vs cholesterol polyps (1.45\u20131.5\u00a0cm cut off);Gallbladder wall integrity: significantly more compromised wall integrity in adenomatous polyps;Vascular signs at color Doppler: greater vascular signals at color Doppler in adenomatous polyps;Mean polyp stalk diameter evaluated by CEUS: significantly larger in adenomatous polyps;Vascular pattern by CEUS (linear vs dotted): more frequent in adenomatous polyps.The literature must be evaluated with caution as it is manly from Eastern countries, however, the most important findings in the differential diagnosis between cholesterol and adenomatous polyps are:Size of the lesion (larger in neoplastic lesions);Gallbladder wall integrity: significantly more disrupted in malignant lesions;The irregularity and tortuosity of the vessels at CEUS ;Timing of wash out of the lesion (\u2264\u200928\u00a0s): more frequent in malignant lesions;Although the studies on the wash in/out curves of the gallbladder lesion appear promising, it is believed that there is currently insufficient evidence for their use in clinical practice that raises a need for further studies \u201332.Although the literature data must be evaluated with caution as the published literature was based mainly on Eastern countries experience, the most important findings in the differential diagnosis between benign/malignant lesions of the gallbladder, are:"} +{"text": "The outbreak of COVID-19 has long-term negative effects on mental health. 
This study shows the negative mental health effects of studying under pandemic limits involving distance learning and social isolation.The specialized studies carried out after the emergence of the Coronavirus revealed the impact of the measures implemented during the period of restrictions and after the outbreak of the pandemic, as well as the way in which these measures were felt by the general population.Qualitative analysis of students\u2019 answers regarding the stress felt after the outbreak of the pandemic.Social and individual anxiety remains a subject of investigation among female students, who are in the process of emotional maturation and professional training.Students remain a vulnerable population category, in the conditions in which society is in full post-pandemic adaptation process.None Declared"} +{"text": "Objectives: We evaluated the effectiveness of using appropriate chemical(s) to treat the dental unit waterline (DUWL), and we recommended appropriate strategies to manage the DUWL system to maintain\u00a0bacteria concentration below minimum recommended levels. Methods: Initial water samples were collected aseptically from the handpieces of the DUWL in dental clinics to assess the bacterial load prior to treatment of the dental unit. The dental staff were educated on the management and treatment of the DUWL. Appropriate chemicals were introduced to the DUWL system. Following the treatment, samples of water from the DUWLs were collected to assess the bacterial load. Results: The US CDC recommends a safe level of bacterial load of <500 CFU per mL of heterotrophic bacteria in the standard for drinking water by the US EPA. Initial results for the DUWL water showed unacceptably high levels of bacterial load between 1,930 and 35,000 CFU per mL prior to treatment. Subsequent sampling of DUWL water with treatment of appropriate chemicals showed vast reductions of the bacterial loads in all the dental units, with bacterial counts between <1 and 72 CFU per mL. Conclusions: It is important to ensure ongoing education and regular treatment with appropriate chemical and effective management and monitoring of all DUWLs from dental chairs to ensure that the water produced meets safe drinking standards."} +{"text": "The journal retracts the 2022 article cited above.Following publication, concerns were raised regarding the contributions of the authors of the article. Our investigation, conducted in accordance with Frontiers policies, confirmed a serious breach of our authorship policies and of publication ethics; the article is therefore retracted.This retraction was approved by the Chief Editors of Frontiers in Psychology and the Chief Executive Editor of Frontiers. The authors have not responded to correspondence regarding this retraction."} +{"text": "Postictal Psychosis of Epilepsy (PIPE), part of the group collectively known as Psychosis of Epilepsy, is characterized by an onset of confusion or psychotic symptoms within one week of apparent normal mental function. PIPE has been argued to be underdiagnosed in the clinical population, perhaps due to a failure to recognize the temporal relation between the seizure and the psychotic episode.To explore the concept and management of post ictal psychosis.We present a clinical case and a review of the literature concerning post ictal psychosis.We report the case of a 36 year old woman with focal refractory epilepsy after a likely episode of limbic encephalitis in 2015. 
Cognitive and psychiatric sequelae in the form of depressive symptoms, in treatment with neurology and psychiatry since 2021. One previous episode of psychotic symptoms during seizures. Worsening of seizure frequency since March of 2022, with apparent normalization (absence of seizures after dose reduction of eslicarbazepine and introduction of lamotrigine) for about four days before being hospitalized in the neurology unit due to behavioral abnormalities. During psychiatric exploration, the patient showed signs of partial clouding of consciousness with mannerisms, echopraxia and echolalia; verbigeration in the form of the neurologist\u2019s name and bizarre movements, such as looking behind, suggestive of sensory-perceptive disturbances. The symptomatology resolved during the following week after treatment with diazepam. Finally, a narrative review concerning the case was also performed, with particular emphasis on antipsychotic drugs with a low risk of lowering the seizure threshold (such as risperidone or aripiprazole) as the recommended treatment. Our findings point to the relevance of Postictal Psychosis of Epilepsy as a clinical entity. Further studies on pathogenic mechanisms and therapeutic management are required.None Declared"} +{"text": "The prolongation of life expectancy has considerably increased the prevalence of chronic diseases and therefore their concomitant presence; this usually predicts a more reserved prognosis for both conditions and can lead to greater complications and more complex, and therefore more expensive, treatments, with considerable delays in recovery.(6) The comorbidity of physical and mental illnesses is common, as both are often interrelated. One of the ways to ensure comprehensive and personalized handling of comorbidity, which should be detected especially at the first level of care for timely treatment, is the inclusion of mental health topics in the curricula of doctors in specialty training. To establish which programs of the medical and surgical specialties include topics related to mental health in their academic training, in order to strengthen the recovery process of patients. What medical specialties contain mental health topics in their training program? A descriptive study of the academic programs of medical specialization in 50 countries. The questions we asked ourselves were: How many medical specialties have included the following mental health topics in their regular graduate program: affective disorders, anxiety disorders, psychosomatic disorders, substance use disorders, violence and palliative care? What other mental health topics are included in the different medical specialties? The results obtained indicate the importance of mental health in different states of physical health, and especially the need for it to be taken into account by decision makers in health policies. Comorbidity in current medicine should be taken into consideration more objectively and thus favor the reduction of the suffering of the sick person. We know that an associated mental and physical condition can give atypical presentations that hinder diagnosis, affect clinical severity and response to treatment, and therefore may lead to increased utilization of health services. 
However, the fragmentation of medicine into increasingly limited specialties restricts the ability to see the patient holistically and therefore the decrease in the quality of medical care, increase in costs and delay in the recovery process.None Declared"} +{"text": "The journal retracts the 2022 article cited above.Following publication, concerns were raised regarding the contributions of the authors of the article. Our investigation, conducted in accordance with Frontiers policies, confirmed a serious breach of our authorship policies and of publication ethics; the article is therefore retracted.This retraction was approved by the Chief Editors of Frontiers in Genetics and the Chief Executive Editor of Frontiers. The authors have not responded to correspondence regarding this retraction."} +{"text": "Genetic variation is a well-known contributor to the onset and progression of cancer. The goal of this study is to provide a comprehensive examination of the nucleotide and chromosomal variation associated with the onset and progression of serous ovarian cancer. Using a variety of computational and statistical methods, we examine the exome sequence profiles of genetic variants present in the primary tumors of 432 ovarian cancer patient samples to compute: (1) the tumor mutational burden for all genes and (2) the chromosomal copy number alterations associated with the onset/progression of ovarian cancer. Tumor mutational burden is reduced in the late vs. early stages, with the highest levels being associated with loss-of-function mutations in DNA-repair genes. Nucleotide variation and copy number alterations associated with known cancer driver genes are selectively favored over ovarian cancer development. The results indicate that genetic variation is a significant contributor to the onset and progression of ovarian cancer. The measurement of the relative levels of genetic variation associated with individual ovarian cancer patient tumors may be a clinically valuable predictor of potential tumor aggressiveness and resistance to chemotherapy. Tumors found to be associated with high levels of genetic variation may help in the clinical identification of high-risk ovarian cancer patients who could benefit from more frequent monitoring. BRCA) )Detailed genomic-level analyses of cancer, such as those reported in this paper, have significant benefits and limitations. Among the benefits is the fact that the results often generate novel hypotheses about the processes underlying cancer onset and progression. The limitations are that these hypotheses often require further testing on a clinical level in order to be validated. It is our hope that the findings on OC reported in this paper will stimulate clinical studies leading to improved early diagnosis and treatment of this devastating disease."} +{"text": "The article describes a specific method of using innovative transverse systems of flat bar frames as structures forcing elastic shape transformations of nominally flat folded sheets into the forms of ruled shell roof coverings. An innovative method for parametric shaping these forms and arrangement of frames constituting structural systems of sheds with folded thin-walled roof coverings, taking account of the specificity of designing elastically transformed roof sheeting, was proposed. 
The proposed method for defining the loads of the considered frames supporting lower shelves of the folds of transformed roof sheeting, as loads distributed uniformly along the length of the upper chord of a roof frame girder, is also an innovative approach. The above unconventional premises result in the innovative topic of the research presented in terms of checking the impact of changing the shape of subsequent flat frames (intended for the construction of sheds roofed with the transformed sheeting) on the geometric and mechanical properties of the members of these frames. For the defined loads and the proposed parameterization of the frame forms, an innovative set of conditions was developed to optimize their performance, and then a theoretical analysis of the observed dependencies was carried out. This analysis was performed in an unconventional, novel way using section modules of the cross-sections of all members. The performed computer simulations confirmed the significance of changes in the inclination of girders and columns on the geometric and mechanical properties of the members. The obtained results are the basis and justification for simulations and tests in the scope of further modification of the form, loads, work, and methods of using various configurations of flat frames in constructions. Flat bar frames are very useful as structural systems supporting transversely thin-walled roof coverings ,2. DiverThe upper chords of roof girders of the frame systems are directrices supporting the lower flanges of all roof sheeting folds. Subsequent folds attached to the mutually skew directrices or poles change their shapes from folded flat into folded shell, so their bending, torsion or bending-torsion deformation is achieved Figure . The arrInclination of girders or columns belonging to flat frames causes quantitative and qualitative changes in their structural work under the impact of the external loads. An intentional mutual inclination of adjacent directrices or columns belonging to subsequent frames changes the geometrical and mechanical properties of the thin-walled sheeting. By changing the shape of the subsequent transverse flat frames in a structural system, we can obtain an unconventional, interesting shell form of a roof or fa\u00e7ade envelope ,6. HowevMetal frames are willingly used for building constructions due to their relatively low weight and freedom of shaping complex forms of roofs and facades. Many innovative methods for designing diversified unconventional building forms and their structural systems are presented in the monograph by Abdel et al. . EffectiGeometrical and mechanical properties of nominally flat thin-walled corrugated sheets allow them to undergo large torsional and transverse bending deformations. If we assume that freedom of fold\u2019s deformations is assured while fixing them on roof directrices, it is possible to adjust the bottom flanges of the shell folds to the various shapes and mutual positions of the directrices shaped Figure a,b.The orthotropic mechanical properties of the nominally flat thin-walled folded metal sheets resulting from their folded geometric shape significantly restrict the freedom in shaping curved free forms of fa\u00e7ades and roofs shallow parabolic-hyperbolic sheeting to limited by closed spatial quadrangles . 
A significant modification of Reichhart\u2019s method was made by Abramczyk. In general, the static-strength performance of the common spatial and planar structural systems is described by Zhang et al. The structural performance of tubular bar structures is also presented by Marshal. All permissible loads and their combinations must be taken into account in each design process of buildings and their structural systems. Various directions of loads, including vertical, horizontal, and normal to the wall or roof surfaces, affect the complex equilibrium conditions of joints and members (Kurobane et al. and Packer). The aim of the research is to analyze the impact of changes in the inclination of a number of the selected elements of flat bar structural systems supporting various transformed thin-walled folded sheeting on the overall stability and the change in the strength properties of the elements belonging to these systems. A basic configuration of the simulated flat frame systems consists of a horizontal lattice girder and two vertical columns. Several planar lattice systems derived from the basic configuration have been created as the result of the inclination of their columns to the vertical or their girders to the horizontal. Computer models of the above derivative and basic rod systems were shaped to simulate their structural performance using the incremental non-linear Finite Element Method. The performed simulations allow one to observe a few major trends in changes in their strength work and ability to maintain overall stability. Each analyzed flat bar transverse system consists of a lattice girder with a height of 2 m and two single-branch columns. The upper and lower chords of the girder are connected to each other with V-diagonals spaced every 4 m. Its girder is horizontal. The two vertical columns are 12 m high, and their spacing is equal to 16 m. The following four frame shape types were considered: the rectangular basic configuration, a configuration with the girder inclined to the horizontal, a trapezial configuration with inclined columns, and an inverted trapezial configuration. In the beginning, arbitrary cross-sections of all elements of an initial basic configuration Kb0 were adopted. A basic frame configuration Kb was obtained as the result of an optimizing process of Kb0 loaded with the above-mentioned four types of uniformly distributed load. Based on Kb, several derivative configurations were created as follows. The first type of derivative configurations are the Kg configurations, characterized by girders tilted to a horizontal plane and vertical columns. The second type are the Kce and Kci configurations, characterized by columns inclined to the vertical. The columns are fixed in the foundation. All rods of the same element of a respective frame configuration have identical cross-sections, except for the columns of each derivative combination Kg, where their poles vary in length. The research is related to the execution of several computer simulations modeling the mechanical performance of the basic and derivative frame configurations, accomplishing optimization of the cross-sections of their individual elements. The models were made in the Robot program. The incremental non-linear calculation method allows the discrete load values to be increased in subsequent calculation steps. The decrease in the stiffness of the entire frame and its individual bars caused by longitudinal eccentric compression forces acting in the bars and forces oriented laterally in relation to the vertical position of a flat frame was taken into account. The displacements of joints and bending of frame elements were also taken into account. The research carried out was divided into four main steps. 
In the first step, a basic framework configuration, Kb0, was optimized to achieve Kb. The arbitrary optimization condition is composed of three restrictions: (1) the maximum allowable effort of all individual bars equal to 235 MPa +/\u2212 3% for S235 steel, (2) the cross-section class not higher than 3, (3) the maximum low value of the critical load factor determined for each frame configuration, but not lower than 1.The results of the simulations accomplishing the initial concept adopted by the authors prior to the study and related to the optimization of only the Kbo basic rectangular frame configuration into Kb and then using the calculated cross-sections for the analogous elements of the subsequently simulated derivative configurations turned out to be impossible because of the very high stresses appearing in their columns and chords.Therefore, the optimization processes were also carried out for all derivative configurations. In addition, instead of analyzing the trends in changes in the stress levels appearing in all bars, the trends in changes in the section modules of the optimized cross-sections and the optimal critical load factor, resulting from the change in lattice girder or pole inclination to a horizontal plane or the vertical, were analyzed. The section modules were considered without taking the area and moment of inertia of the cross-sections into account due to the substantial share of bending moments appearing in the tilted columns of the derived configurations.i = 1 to 3). They were loaded and optimized in the same way as the basic Kb configuration.In the second step of the research, computer models of flat frames characterized by different inclinations of their girders to the horizontal were built. Three different discrete values of the h parameter belonging to the set {1.5, 3.0, 4.5 m} were adopted b. Three In the third step, several simulations of non-rectangular frame systems Kci and Kce derived from Kb were carried out. The specific property of these configurations consists in the inclination of their columns to the vertical at the same angle. Four different values of the d parameter were assumed for Kce c and fouIn the fourth last step of the research, several diagrams showing the effect of the inclination changes of each relevant frame element influencing the overall stability of the frame and the strength properties of its individual bars were developed.As stated at the beginning of this section, for each of the simulated basic and derivative frame configurations, four different types of loads were assumed. These loads were taken as characteristic of sheds covered with thin-walled, transformed folded sheeting. The nature of the load acting on each frame results from the way each fold was fixed to the roof directrices. The upper chords of the examined frame girders constitute the roof directrices.2 uniformly distributed over the length of the top chord of each frame , Kcer (r = 1\u20134), and Kcij (j = 1\u20134). The results obtained for Kgi are given in The results obtained for the inverted trapezial derivative configurations Kcij (j = 1\u20134) are presented in s1 and Ps2 columns and the bottom chord Pd, and the inclination ng of the Kgi\u2019s lattice girders. 
The ng slope is the ratio of the frame height h by the distance of 16 m between the columns of the rectangular basic frame configuration.The analysis of the strength work and stability of the examined frames results in a comparison of section modules calculated for the optimal cross-sections of the most strengthened elements, such as the columns and bottom chords of the simulated basic and derivative configurations Kb and Kgi. Two diagrams shown in s1, Tend Ps2, and Tend Pd from The dotted lines: Tend PThe diagrams shown in In the above diagrams, the zero value of ng was assigned to the corresponding value of the elastic section modulus of the optimized cross-section calculated for each element of the basic rectangular frame configuration. The change in the course of any line from the above-mentioned diagrams illustrates the change in the section modulus of the corresponding element of the successively simulated derivative configurations. Significant changes in the course of the above-mentioned lines indicate a significant impact of the girder inclination on the mechanical properties of the examined frame.In order to accurately illustrate the above-mentioned changes (and the above-mentioned impact), a diagram showing the impact of the girder inclination ng on the relative (percentage) increments \u0394Smod in the value of the Smod section modulus of the optimized frames in relation to basic configuration frame was built b. These cr, caused by the change in the lattice girder slope, start with the value 1.04 and increase very fast. This value obtained during the optimizing process of the cross-sections of the Kb\u2019s elements limits the effort of the compressed columns of the rectangular configuration to \u03c3c = 207 MPa. This value makes it impossible to exploit the total bearing capacity of the columns. Therefore, the overall stability of the frame plays an important role in the optimization of the Kb basic configuration. In the case of the examined derivative configurations Kgi, the decisive role in shaping the optimal cross-sections of all elements is played by their effort.An important tendency to qualitative changes in the mechanical properties of the analyzed frame configurations is shown in cr on the geometric and mechanical properties of the members of these frames. The performed computer simulations confirmed the significance of the changes in shapes of the flat frame configurations from rectangular to trapezial resulting from the inclination of the columns to the vertical or the girders to the horizontal, making a significant increase in the effort of their elements working under the adopted types of loads.For the defined loads and the adopted parameterization of the frame forms, an innovative set of conditions was developed to optimize their performance, and then a theoretical analysis of the observed dependencies was carried out. This analysis was performed in an unconventional, novel way using the section modules of the cross-sections of all members because the optimization of the cross-sections of the members of each analyzed base and derivative frame configuration had to be performed separately. 
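As a purely arithmetical illustration of the quantities defined above, the short Python sketch below computes the slope ng for the three girder heights considered and the relative increment \u0394Smod of an optimized section modulus with respect to the basic configuration; the section-modulus values in the example are assumed for illustration only and are not taken from the study's tables.

```python
def girder_slope(h, span=16.0):
    """ng: ratio of the girder height h to the 16 m spacing of the columns of Kb."""
    return h / span

def relative_increment(s_mod_derived, s_mod_basic):
    """Delta Smod: percentage increase of the optimized section modulus of a
    derivative configuration relative to the basic rectangular configuration Kb."""
    return 100.0 * (s_mod_derived - s_mod_basic) / s_mod_basic

# The three girder heights h adopted for the Kg configurations.
for h in (1.5, 3.0, 4.5):
    print(f"h = {h} m -> ng = {girder_slope(h):.3f}")

# Illustrative (assumed) section moduli: a tenfold increase corresponds to +1000 %.
print(f"Delta Smod = {relative_increment(880.0, 80.0):.0f} %")
```

For h = 4.5 m this gives ng of about 0.28, and a tenfold increase in a section modulus corresponds to an increment of 1000%, which is the order of magnitude reported below for the shorter columns of the most inclined girders.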
The reason for such actions was the rapidly increasing effort of the elements with the increase in the inclination of the girder or columns when the arbitrary cross-sections of the initial derivative configurations were assumed to be identical to the optimized cross-sections of the base configuration calculated previously.In the case of lattice girder inclination, a significant increase in the values of the section modules calculated for the optimized cross-sections of the shorter columns occurs. It can reach even 1000% for the most inclined girders. The significant increase in the section modules of the optimized bar cross-sections is also observed in the case of the bottom chord, where it reaches 400%. In the remaining optimized elements, the increase in the section modules of their cross-sections is insignificant.In the case of the most tilted columns of the derivative frame configurations, the significant increase in the section modules of the optimized cross-sections is up to 200% for the trapezial configurations and 300% for the inverted trapezial configurations. In the case of other elements of these frames, the increase in effort is small.The selected mechanical changes of the examined frame configurations caused by the changes in the inclination of their girders to the horizontal or their columns to the vertical are quantitative and qualitative. This results from the fact that the calculated critical loads and the ability to maintain the overall stability of the frames are the limitations determining the optimal size of the column\u2019s cross-sections of the rectangular configuration. However, in the case of the considered trapezial and inverted trapezial frames, the decisive condition limiting the optimal size of the cross-sections of all their elements is the maximum allowable effort resulting from the yielding of the steel used. In the case of the derivative frame configurations, the values of critical load factor increase fast up to five and even seven for the frames with the most tilted elements."} +{"text": "The ability of the cockroach to locate an odor source in still air suggests that the temporal dynamic of odor concentration in the slowly expanding stationary plume alone is used to infer odor source distance and location. This contradicts with the well-established view that insects use the wind direction as the principle directional cue. This contribution highlights the evidence for, and likely functional relevance of, the capacity of the cockroach\u2019s olfactory receptor neurons to detect and process\u2014from one moment to the next\u2014not only a succession of odor concentrations but also the rates at which concentration changes. This presents a challenge for the olfactory system because it must detect and encode the temporal concentration dynamic in a manner that simultaneously allows invariant odor recognition. The challenge is met by a parallel representation of odor identity and concentration changes in a dual pathway that starts from olfactory receptor neurons located in two morphologically distinct types of olfactory sensilla. Parallel processing uses two types of gain control that simultaneously allocate different weight to the instantaneous odor concentration and its rate of change. Robust gain control provides a stable sensitivity for the instantaneous concentration by filtering the information on fluctuations in the rate of change. 
Variable gain control, in turn, enhances sensitivity for the concentration rate according to variations in the duration of the fluctuation period. This efficiently represents the fluctuation of concentration changes in the environmental context in which such changes occur. Panulirus argus, and the clawed lobster, Homarus americanus, revealed chemoreceptors that respond to a range of gradually increasing odor pulse concentrations basiconic and trichoid sensilla located on the cockroach\u2019s antenna which contain ORNs responsive to the odor of lemon oil . Schalle\u22121 has a slight (strong) effect on the gain of response for the variable plotted on the y axis (the instantaneous concentration). Similarly, a regression plane with a small (large) y slope indicates that the variable plotted on the y axis (the instantaneous concentration) has a slight (strong) effect on the gain of response for the variable plotted on the x axis (the rate of change).The phase of the oscillating impulse frequency was determined for different period durations by fitting each set of frequency data points with a sine wave curve . Then the z axis . The rese z axis . A regreThe ORNs of swA and swB basiconic sensilla simultaneously increased their activity with increasing odor concentration, though with different rates to different maxima , and theDuring oscillating concentration changes, the impulse frequency of the ORNs of both sensilla types oscillates regularly; the ratio of frequency oscillations to concentration oscillations was always 1:1. The frequency maxima of the ON ORNs of the basiconic sensilla were not in phase with the concentration maxima; they were intermediary, between the concentration maxima and the rate-of-change maxima suggests that the ON and OFF ORNs use different types of olfactory receptors are widely considered to play a crucial role in the transport of hydrophobic odor molecules through the hydrophilic fluid inside the sensilla from the wall pores to ORs. OBPs form a specific complex with a given odor that interacts with ORs, leading to the initiation of the olfactory transduction cascade. A type of basiconic sensillum (ab8) on the olecules . Insteadolecules . The ON olecules .Bombyx mori emphasize the onset and offset of rapid concentration changes by functioning as flux detectors and not concentration measures (The pheromone-ORNs of the trichoid sensilla on the antenna of the male moth measures . Flux demeasures by adsormeasures . In concmeasures .Drosophila and was exemplary in showing that the responses of ORNs to rapid, pulse-like concentration changes are invariant to variations in the pulse flow rate (Drosophila to track along wind-borne odor plumes to their source. In a recent study we have shown that changing the level of the flow rate has no effect on the responses of the ON and OFF ORN responses to oscillating changes in odor concentration (Flux detectors adsorb the stimulus molecules depending on both the stimulus concentration within the external medium and the relative velocity of the flux detector and the airspeed . The grelow rate . Thus, pntration . FurtherIn conclusion, our findings suggest that key aspects of the odor stimulus are extracted and processed separately in two parallel systems of ORNs located in morphologically different types of sensilla on the cockroach\u2019s antenna. 
The questions now are what mechanisms cause the two types of gain control and how does the brain determine what information is suitable at any given moment to guide the cockroach to the location of the odor source."} +{"text": "The registration of serial section electron microscope images is a critical step in reconstructing biological tissue volumes, and it aims to eliminate complex nonlinear deformations from sectioning and replicate the correct neurite structure. However, due to the inherent properties of biological structures and the challenges posed by section preparation of biological tissues, achieving an accurate registration of serial sections remains a significant challenge. Conventional nonlinear registration techniques, which are effective in eliminating nonlinear deformation, can also eliminate the natural morphological variation of neurites across sections. Additionally, accumulation of registration errors alters the neurite structure.This article proposes a novel method for serial section registration that utilizes an unsupervised optical flow network to measure feature similarity rather than pixel similarity to eliminate nonlinear deformation and achieve pairwise registration between sections. The optical flow network is then employed to estimate and compensate for cumulative registration error, thereby allowing for the reconstruction of the structure of biological tissues. Based on the novel serial section registration method, a serial split technique is proposed for long-serial sections. Experimental results demonstrate that the state-of-the-art method proposed here effectively improves the spatial continuity of serial sections, leading to more accurate registration and improved reconstruction of the structure of biological tissues.https://github.com/TongXin-CASIA/EFSR.The source code and data are available at Serial section electron microscopy (ssEM) is a mainstream image-acquisition method for connectomics research. In ssEM, a biological tissue is cut into 30\u201350\u2009nm thick sections and then imaged with a high-resolution electron microscope (EM) . This teOne major drawback of ssEM is the physical cutting of the tissue block into sections, which results in the loss of continuity between sections. In addition, the section cutting process also introduces nonlinear deformations of neurites used in this article is only applicable to short series and may not be suitable for long series. Therefore, to enable the use of structural regression in long series, this article proposes a strategy of dividing the long series into multiple short series and performing registration on each of them separately. With this strategy, the proposed registration framework can be used for the registration of long series.This study proposes a serial section registration framework that aimThe serial registration process involves multiple pairwise registrations, so the choice of pairwise registration model can significantly impact the results. Common parametric models may not be sufficient to accurately model the complex nonlinear deformations introduced during section preparation. To address this challenge, this article uses an optical flow approach to register pairs of sections. For each pair of sections, this registration process involves estimating the optical flow field between the sections using an optical flow network and warping them into the reference section. 
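As a rough sketch of this pairwise step, the Python fragment below warps a moving section onto its reference using a dense displacement field; the estimate_flow callable is a hypothetical placeholder standing in for the trained unsupervised optical flow network (an ARFlow-style model in this study) and is an assumption of the illustration, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_section(moving, flow):
    """Warp a moving section onto the reference grid with a dense displacement field.

    moving: (H, W) float array, the section to be registered.
    flow:   (H, W, 2) float array of per-pixel (row, col) displacements mapping
            reference coordinates to sampling positions in the moving section.
    """
    h, w = moving.shape
    rows, cols = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    sample_coords = np.stack([rows + flow[..., 0], cols + flow[..., 1]])
    # Bilinear sampling of the moving section at the displaced positions.
    return map_coordinates(moving, sample_coords, order=1, mode="nearest")

def register_pair(reference, moving, estimate_flow):
    """Pairwise registration: estimate a flow field, then warp the moving section.

    estimate_flow stands in for the trained optical flow network and must
    return an (H, W, 2) displacement field.
    """
    flow = estimate_flow(reference, moving)
    return warp_section(moving, flow), flow
```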
This approach improves on the state-of-the-art unsupervised optical flow method ARFlow. The CREMI dataset used in this study is acquired from the brain of an adult Drosophila using serial section transmission electron microscopy. The dataset consists of 125 successive EM images. For the CREMI dataset, this study uses the NCC and the Dice coefficient of neurite labels to evaluate the accuracy of the registration results. The Dice coefficient is a measure of set similarity. In registration, the Dice coefficient of the label indicates the degree of overlap of the same neuron after registration; the closer the Dice coefficient is to 1, the more accurately the structure of the same neuron is recovered after registration, that is, the higher the accuracy of the registration result. Furthermore, the Dice coefficient can measure the registration accuracy without being affected by the image quality. To prevent the impact of labels representing intercellular spaces, this study uses the Dice coefficient of the 50 neurites with the largest area on the first section to measure the accuracy of the registration results. This method was also used by ssEMNet to evaluate registration accuracy. For the FIB-mito dataset, this study only uses the NCC of sections to assess the registration results because, unlike the CREMI dataset, it lacks dense labels of structures. This section evaluates the efficacy of the proposed pairwise registration and structural regression methods, as well as the entire framework including the long-serial splitting strategy. The optical flow method used in this study is an improved version of ARFlow. This study uses the NCC and the Dice coefficient to evaluate the registration results. The NCC measures the pixel similarity between images. While NCC (Reference) indicates the similarity between the warped result and the reference section, it cannot directly reflect the accuracy of the registration. In comparison, NCC (GT) represents the similarity between the result and the ground truth. The Dice coefficient reflects the degree of overlap between the registered neurites and the ground truth neurites. To demonstrate the effectiveness of the proposed method, it is compared with baselines such as Elastic. The proposed approach can suppress the influence of such disturbing factors during the training process to some extent. However, it should be noted that the error accumulation assumption of the structural regression method is based on short serial sections, making it suitable only for short serial section registration. For long-serial section registration, a feasible strategy is to divide the long series into multiple short series for registration. This allows for the reconstruction of the spatial continuity of the entire block. While the short serial splitting strategy proposed in this article is simple, it may be worth considering other feasible schemes. Further study is needed to understand how to effectively connect the relationships between the short serials. 
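A minimal sketch of one possible way to implement such a serial split is given below; the one-section overlap used to chain adjacent blocks is an illustrative assumption, since the text does not specify how consecutive short series are connected.

```python
def split_series(n_sections, block_size, overlap=1):
    """Split section indices 0..n_sections-1 into consecutive short blocks.

    Adjacent blocks share `overlap` sections so that the registered blocks can
    later be chained into one volume; the overlap value is an illustrative
    assumption, not a prescription taken from the article.
    """
    blocks, start = [], 0
    while start < n_sections:
        end = min(start + block_size, n_sections)
        blocks.append(list(range(start, end)))
        if end == n_sections:
            break
        start = end - overlap
    return blocks

# Example: the 125 CREMI sections split into blocks of 32 with a 1-section overlap.
print([(b[0], b[-1]) for b in split_series(125, 32)])
```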
In addition, this method cannot solve the registration accuracy problem caused by section fold and crack, which is also a problem that we need to further solve in the future.The experimental results show that the enhancements proposed in this study effectively improve the spatial continuity of serial sections, resulting in improved registration and accurate recovery of the spatial structure of biological tissues. This method represents a significant advancement in the field of serial section registration, as it addresses the challenges faced by current approaches and has the potential to greatly improve the accuracy of volume reconstruction in the connectomics research."} +{"text": "Application of the stent for treatment of the internal carotid artery (ICA) aneurysms has been extensively increased in recent decades. In the present work, stent-induced deformations of the parent vessel of ICA aneurysms are fully investigated. This study tries to visualize blood stream and calculated hemodynamic factors inside the four ICA aneurysms after deformations of parent vessel. For the simulation of the non-Newtonian blood stream, computational fluid dynamic is applied with one-way Fluid\u2013Solid interaction (FSI) approach. Four ICA aneurysms with different ostium sizes and neck vessel angle are selected for this investigation. Wall shear stress on wall of aneurysm is analyzed in two angles of deformation due to application of the stent. Blood flow investigation shows that the deformation of the aneurysm limited blood entrance to the sac region and this decreases the blood velocity and consequently oscillatory shear index (OSI) on the sac wall. It is also observed that the stent-induced deformation is more effective on those cases with extraordinary OSI values on aneurysm wall. Late re-hemorrhage and a lower rate of thorough obliteration are two main challenges for usage of the endovascular coiling. These disadvantageous of endovascular coiling have been explained and resolved by new achievements3. The usage of stent is recognized as an important method for the reduction of the aneurysm rupture risk in patients with high sac section area. The main application of the stent usage is to avoid the main blood stream entering into aneurysm sac and consequently, the rupture risk of the aneurysm is reduced5.Usage of stent and coiling technique are the primary techniques for the treatment of the cerebral aneurysm since surgical clipping is high-risk approach for treatment specially patients with high ages. However, it is reported that recanalization may occur after endovascular embolization7. The decisions of medical team are done based on some limited factors and their experience of is more prominent in final decision. As mentioned before, use of stent along with endovascular coiling could-significantly decrease the risk of rupture9. Stent not only preserve the coil fibre inside the aneurysm but also reformed parent vessel to reduce blood flow rate inside the sac section10. This function of the stent has motivated surgeons to apply this for most cases. However, details of deformation on the risk of aneurysm rupture were not investigated in full details12.Although several investigations have been done to compare these endovascular techniques, the selection of efficient method for treatment of various cases is still challenging13. The evaluation of the stent performance could be done via analysis of the main hemodynamic aspects of blood in the sac section area15. 
Jeong et al.14 investigated an arbitrary saccular geometry of aneurysm, while a real model of an aneurysm has different geometrical features. Ullery et al.15 quantified the geometry and respiration-induced deformation of abdominal branch vessels and stents after fenestrated (F-) and snorkel (Sn-) endovascular aneurysm repair. However, hemodynamic analysis is required for the investigated deformations. Sabernaeemi et al.16 and Voss et al.17 investigated hemodynamics in real 3-D models, but limited computational results are presented about the effect of the angle of deformation on the hemodynamics of the blood stream. The concept of aneurysm deformation is highly important for the treatment of the patients. In this study, computational fluid dynamics is employed for the visualization of the blood flow in four different ICA aneurysms. Blood pulsatile flow is considered for the simulation of the blood stream in the real geometry of the selected ICA aneurysms. Two deformation stages are chosen as the post-interventional models for aneurysms after usage of a stent. Wall shear stress and the OSI index are compared and analyzed18. It is confirmed that all methods were carried out in accordance with relevant guidelines and regulations. Besides, all experimental protocols were approved by the Ca' Granda Niguarda Hospital, and it is confirmed that informed consent was obtained from all subjects and/or their legal guardian(s). This study selected the geometry of the aneurysms from the Aneurisk website18. This research study has focused on the different stages of deformation through hemodynamic analysis of the blood stream. Computational fluid dynamics is used for the investigation of the blood hemodynamics of four distinctive cases, as illustrated in Fig.\u00a028. Simulation of the transient blood stream inside the chosen ICA aneurysms is done via the SIMPLE algorithm in ANSYS-FLUENT software. The interaction of the blood and the vessel is modeled via one-way FSI, in which the blood force acts on the aneurysm wall as an exterior force. One-way FSI implies the effect of the fluid onto the solid, and the solid deforms. In fact, the pressure of the blood near the wall is considered as an exterior force on the solid part, and it results in deformation. Owing to the pulsatile feature of the blood flow inside the vessels, the mass flow rate at the inlet and the pressure value at the outlet are applied according to the pattern displayed in Fig.\u00a029. We used a correlation for the calculation of the viscosity (Casson model) in relation to the hematocrit value. Computational and theoretical techniques are extensively used in various biology science and biomedical systems31. In this model, the effect of hematocrit (H) is also applied for the estimation of the viscosity35. The resolution of the applied grid near the wall inside the aneurysms is higher than in other sections due to its importance for the achieved results40. The close-up view of the applied grid is also displayed to demonstrate the resolution of the grids44. A grid study is also performed to ensure grid independency. The presented results of the OSI value are calculated at the end of the 3rd blood cycle, while the pressure, WSS and average blood velocity are reported at the peak systolic stage of the 3rd cycle, where the blood stream is maximum46. 
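Because the hematocrit-dependent Casson correlation itself is not reproduced in the text above, the following Python sketch only illustrates a generic Casson apparent-viscosity form and the commonly used definition of the OSI; both expressions are stated as assumptions for illustration rather than as the authors' exact formulas.

```python
import numpy as np

def casson_viscosity(shear_rate, tau_y, mu_p):
    """Generic Casson apparent viscosity: mu = (sqrt(tau_y / gamma) + sqrt(mu_p))**2.

    tau_y (yield stress) and mu_p (plastic viscosity) are commonly expressed as
    functions of the hematocrit H; the specific correlation used in the study is
    not reproduced here, so both are left as free parameters (assumption).
    """
    return (np.sqrt(tau_y / shear_rate) + np.sqrt(mu_p)) ** 2

def oscillatory_shear_index(wss_vectors, dt):
    """Commonly used OSI definition over one cardiac cycle (assumed, not quoted):
    OSI = 0.5 * (1 - |sum_i WSS_i * dt| / sum_i |WSS_i| * dt).

    wss_vectors: (n_steps, 3) array of the wall shear stress vector at one wall point.
    """
    magnitude_of_mean = np.linalg.norm(np.sum(wss_vectors, axis=0)) * dt
    mean_of_magnitudes = np.sum(np.linalg.norm(wss_vectors, axis=1)) * dt
    return 0.5 * (1.0 - magnitude_of_mean / mean_of_magnitudes)
```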
Since these four hemodynamic factors play critical role on the hemodynamic of blood stream and they could use to examine the risk of aneurysm rupture, they are chosen in our investigations48.The ICA aneurysms are sorted by the ostium section area and all selected cases are related to female patients in which blood hematocrit value is 0.4 as presented in Table The deformed aneurysms are displayed in Fig.\u00a0In present work, the hemodynamic of the blood stream inside the original aneurysm and two deformed ones is investigated by comparison of the WSS, pressure and average blood velocity. The results of the mean WSS, OSI, mean wall pressure and mean velocity inside the aneurysms are presented in Table Figure\u00a0Effects of 1st and 2nd deformation on mean OSI (early diastolic) and sac velocity (peak systolic) are presented in Figs.\u00a0Figure\u00a0Figure\u00a0Figure\u00a0The aim of this study is to analyze the impacts of stent-induced deformation on the hemodynamic of the blood stream inside the sac section of four different ICA aneurysm. Comprehensive hemodynamic investigations are done to reveal the main changes in the blood stream after two stages of deformations. Selected sacs are chosen based on different ostium sizes and neck vessel angles. Computational fluid dynamic is applied for the hemodynamic study of the blood within the ICA aneurysms. OSI, WSS and pressure variations on the sac surface are displayed and investigated. The main advantageous of stent-deformation is formation of clot which prevent the entrance of blood flow. In addition, blood flow streams are compared for selected models of ICA aneurysms in two steps of deformations. Achieved results indicates that stent-induced deformation of the ICA cases significantly decreases OSI on aneurysms with extremely high OSI values. Besides, the velocity of the blood is reduced substantially when aneurysm deformation happens."} +{"text": "The term and concept of physical literacy is increasing in popularity in both policy and practice and is increasingly applied in the fields such as sports, public health and education.Physical literacy is the ability to move with competence and confidence in a variety of physical activities. Although there is widespread consensus that physical literacy has the potential to act as the basis for a lifelong active life, there is little known about how it is assessed, operationlized, and applied.The aim of this symposium is to describe what is currently happening in the European context. Therefore, in this symposium we will bring together researchers and practitioners to discuss the asseessment, operationlization and application of the physical literacy concept across four European countries.Presentation 1) Process of developing a physical literacy consensus statement for England (UK)Presentation 2) Towards national monitoring of physical literacy in DenmarkPresentation 3) Application of the concept of physical literacy in the context of water: aquatic physical literacy (Belgium)Presentation 4) Developing an individual-based advice for sports participation and physical activity in children using the concept of Physical Literacy (the Netherlands).The discussant of the symposium who will lead the discussion after the individual presentations will ensure exchange of knowledge, expertise and experiences among audience and presenters."} +{"text": "Acute kidney injury was suggested. 
Computerized tomography of the sagittal section of the body showed multiple osteosclerotic lesions in the vertebrae .Informed consent was obtained from the patient for the publication of this article."} +{"text": "Editorial on the Research TopicDevelopmental anomalies in the lung and their impact on later lifeLung development is a complex process that begins prenatally and continues until young adulthood. The normal course of lung development can be affected by various parameters such as prematurity, congenital disorders and environmental factors which may have a long lasting effect on lung structure and function. The aim of this research topic was to describe new approaches in the assessment of lung development after birth, considering morphological and functional aspects. It also aimed to highlight recent advances in the monitoring of lung development in health and disease during childhood, and describe consequences of developmental defects on lung function in later life.Dassios et al. measured the slopes of volumetric capnography and the ventilation to perfusion ratio in 25 extremely preterm infants to test the hypothesis that gas exchange in ventilated preterm infants occurs both at the level of the alveoli and via mixing of fresh dead space gas in the airways. They concluded that abnormal gas exchange was associated with lung disease at the alveolar level and found no evidence of gas exchange impairment originating at the level of the airways. The authors confirmed that targeted tidal volumes at values below the physiological dead space cannot be associated with efficient gas exchange .Premature infants are often ventilated in the early postnatal period and an effort is made to minimize iatrogenic lung injury by using appropriate lung volumes, which are adequate for gas exchange but do not excessively large that could cause overdistention and lung damage. Di Filippo et al. aimed to assess the effect of mechanical ventilation on lung function in a group of very preterm-born children studied at 11 years of age. The authors performed a comprehensive respiratory evaluation in 55 children which included spirometry, measurement of lung volumes, lung diffusion capacity and measurement of the fractional exhaled nitric oxide. They reported that there was no difference in any of these outcomes between ventilated and unventilated children. Their cohort included children with a mean gestational age of 30.6 weeks and a mean duration of invasive ventilation of one day, suggesting that such a brief course of invasive ventilation in this population of relatively mature preterms was not associated with a long-lasting impairment in lung function .While the immediate impact of mechanical ventilation on lung function is undoubtedly injurious, the long term effect of invasive support is also an area of growing research interest in the recent years. O\u2019Dea et al. reviewed the literature on the long-term cardiopulmonary outcomes following preterm birth during the current surfactant era, and aimed to outline the current knowledge of cardiopulmonary exercise testing in the assessment of children born preterm. They described that preterm-born children have increased respiratory symptoms and disrupted lung development with significant structural and functional lung sequelae. The authors highlighted that expiratory flow limitation and an altered ventilatory response consisting of rapid, shallow breathing were observed during exercise. 
The association, however, between exercise capacity and the traditional resting respiratory assessments was not clear. Some constraints such as the heterogeneity of study participants, treatments and exercise protocols, precluded our understanding of exercise capacity limitations in children born preterm. The authors suggested that the role of exercise interventions in mitigating the risk of chronic lung disease should be a focus of future randomised controlled trials .Lung function testing, however, might fail to capture the fine granularity of the spectrum of lung disease following premature birth, which might become apparent only under conditions of increased cardio-respiratory demand such as during moderate to intense aerobic exercise. Miseviciene et al. described a novel case of a rare combination of unilateral pulmonary artery agenesis and Kommerell's diverticulum which puts affected individuals at a high risk for pulmonary hypertension and rupture of the diverticulum. The authors described how a one-year old girl presented with prolonged cough and wheezing and a hypoplastic left lung in a chest radiograph. She later underwent echocardiography and chest computed tomography which confirmed an absence of the left pulmonary artery and right arch of the aorta and an anomaly of the subclavian arteries. She was referred for ongoing monitoring for possible development of pulmonary hypertension and compression from the vascular structures to the airways which might require surgical intervention .Other than prematurity, congenital defects can also impact on lung development with long lasting effects. Castillo et al. performed a mini-review which aimed to highlight the lack of standardized diagnostic and treatment guidelines for the disease in Latin America and compared North American and European diagnosis and management recommendations. They highlighted that certain diagnostic tools and treatment options are not universally accessible in Latin America and identified fifteen articles that provided recommendations on respiratory management, a minority of whom originated from Latin America. The authors commented on the relative absence of documentation, research, and recommendations regarding the prevalence of the disease in Latin America, likely due to unfavorable economic conditions. They suggested that in developing countries, the PICADAR score, which is based on clinical characteristics, can serve as an alternative method to identify patients who require further testing and have a higher probability of a diagnosis of the disease .Primary ciliary dyskinesia might affect lung development with severe lifelong implications but the disease is closely monitored epidemiologically predominantly in North America and Europe. The incidence of the disease can thus be underestimated particularly in developing countries, due to a lack of awareness and diagnostic facilities. In conclusion, prematurity, congenital disease and socioeconomic disparities can affect the provision of care in developmental respiratory disease. These conditions could be focused for enhanced surveillance and tailored care to optimize long term respiratory outcomes in childhood and beyond."} +{"text": "Prescription Drugs Marketed in the United States Should Be Approved by the FDA. Two events in the drug regulatory approval process in recent years warrant the attention of managed care pharmacists and should prompt reassessment of assumptions regarding drugs marketed in the United States. 
The first event, involving levothyroxine, received considerable public attention from 1999 through 2002. The second event involves an appellate court decision in May 1999 regarding combination product esterified estrogen and methyltestosterone . The circumstances of these events are noteworthy, and the social and market effects of each are sizable."} +{"text": "Editorial on the Research TopicAdvanced research on abdominal and thoracic aortic aneurysms: new insights into molecular mechanismsAbdominal and Thoracic aortic aneurysmal diseases are devastating with a high risk of rupture that frequently leads to death. Regardless of significant improvements in the comprehension of aortic aneurysmal disease development, a number of questions need to be addressed to clarify the conflicts in the research findings. This special research topic in Frontiers in Cardiovascular Medicine includes two original research articles, one review article, and one mini-review article aimed to highlight the molecular mechanisms of aortic aneurysms.Ladd et al. revealed that the administration of spironolactone attenuated the release of extracellular adenosine triphosphate (ATP) from endothelial cells to mitigate the activation of macrophages, and smooth muscle cell-mediated remodeling. Several epidemiological observations have highlighted the possible role of diabetes mellitus in protection against AAA incidence and prevalence (3). In a review article, Picatoste et al. reviewed and summarized the available literature on the relationship between diabetes mellitus and AAA incidence and discussed the potential molecular pathways involved (4). Bontekoe and Liu offered a substantial review and assessment of the current literature utilizing Singel Cell RNA sequencing for the AAA inspection and discussed the upcoming usefulness of this technology (5).Abdominal aortic aneurysms (AAAs) are permanent dilations of the abdominal aorta and the most common aortic aneurysms in humans . In thisDeng et al. established a novel model of proximal TAAs in mice by peri-adventitial elastase application in the proximal thoracic aorta via a midline incision in the anterior neck of the mice (6). The currently available elastase animal models of TAA are distal descending TAAs, limiting the knowledge of understanding proximal TAA pathologies. This new minimally invasive proximal TAA model avoids thoracotomy and tracheal intubation by elastase application in the peri-adventitia of the proximal thoracic aorta via a midline incision in the mouse on the anterior neck.Thoracic aortic aneurysms (TAA) are the second most common and life-threatening aortic disease. In an original article, We appreciate all the authors of this Special Issue for their contributions. We hope the four articles published in this Special Issue will assist researchers in improving our understanding of the underlying molecular mechanisms of aortic aneurysms and developing therapeutic targets for aortic aneurysms."} +{"text": "Serratus anterior muscle free flap is widely used in numerous indicated reconstructions. Only a few studies have dealt with the use of this flap in tongue reconstruction. We present a case series of 7 patients with carcinoma of the tongue who underwent hemiglossectomy followed by immediate reconstruction with serratus anterior muscle free flap between January 2017 and December 2019 at the University Hospital Brno. The aim of this study was to evaluate safety and efficiency of the reconstruction as well as the donor site morbidity. 
There was not a single case of flap failure observed and the donor site healed completely in all cases. The functional outcome depended on the severity of the primary oncological disease and health status of the patient. The serratus anterior muscle free flap represents an alternative option for reconstruction of the tongue. Growing incidence of tongue tumours places higher demands on its reconstruction . In ordeThe serratus anterior muscle free flap (SAMFF) provides a thin and pliable muscle tissue, which is easy to harvest and has long vascular pedicle of good calibre. There are only some cases of tongue reconstruction with SAMFF described in the literature despite its advantages.In this case series, we evaluated tongue reconstruction using SAMFF in 7 patients who underwent hemiglossectomy at the University Hospital Brno. The aim of the study was to evaluate safety and efficiency of the reconstruction, functional result of the restored tongue, eventual complications of wound healing, or flap failure and donor site morbidity.We present a case series of 7 patients who underwent hemiglossectomy for various carcinomas of the tongue between January 2017 and December 2019 at the University Hospital Brno. Inclusion criteria were surgical resectability of the tumour and the need of free flap reconstruction. The surgical resection was followed by an immediate reconstruction of the tongue with muscle flap SAMFF in all the patients (no myocutaneous form was performed). The radical tumour resection with or without laryngeal preservation, alongside with deep cervical lymphadenectomy for an advanced stage of invasive tumour, was performed by otolaryngologists, and the immediate reconstruction using SAMFF was performed by plastic surgeons as a one stage procedure. The design of the flap was adapted to the missing volume and shape, and an oval shape was used in all cases. The flap was spread and modelled at the site of the missing tissue to achieve maximum symmetry with the unaffected side of the tongue.There were 5 female and 2 male patients in the series with the average age of 63.8\u2009years (48\u201382\u2009years). Following parameters were assessed: age, sex, tumour characteristics, histopathological classification, complications in healing, and flap failure . FunctioThere was not a single case of flap loss observed in the postoperative period. There were only minor complications in wound healing in two cases of the reconstructed tongue which were treated conservatively. The healing of the flaps in the oral cavity was not accompanied by any complications despite the exposure of saliva, and a very good aesthetic result was achieved after the resolution of the swelling and the maturation of the tissues. The functional outcome of the reconstructed tongue dependedFree flap transfer represents a method of choice in tongue reconstruction after hemiglossectomy. Fasciocutaneous and perforator free flaps such as RFFF and ALTFF are the method of choice. Muscle free flaps, such as rectus abdominis muscle free flap , 4, vastSAMFF in general is a highly reliable flap with excellent anatomical characteristics for free flap reconstruction of small and moderate defects. It is easy to harvest and well-suited for head and neck reconstruction.The main disadvantages of this technique include the impossibility to use the two-team approach and donor site morbidity with the loss of the muscle tissue. 
In our study, we did not observe any problems with the upper extremity function, wound healing, or resulting scar in the postoperative follow-up. The scar was well accepted by the patients. Second, it is not possible to assess directly the flap perfusion of MSAFF by capillary refill, as it can be performed in case of fasciocutaneous or perforator flaps. On the other hand, capillary refill assessment of fasciocutaneous or perforator flaps used on reconstruction of deeply localized defects of tongue base is compromised by perioperative and postoperative swelling. Standard examination of muscle flap is performed by the Doppler probe, which identifies vascular pedicle pulsation. The microsutures are often performed on recipient neck vessels. Therefore, high interference from surrounding vessels prevents detection of vascular pedicle of muscle flap by dopplerometry.The SAMFF represents an alternative option for tongue reconstruction after hemiglossectomy. The main advantage of this flap is muscle-to-muscle connection, natural look of the reconstructed tongue, and easy harvesting of the flap with low donor site morbidity."} +{"text": "The journal retracts the 2022 article cited above.Following publication, concerns were raised regarding the contributions of the authors of the article. Our investigation, conducted in accordance with Frontiers policies, confirmed a serious breach of our authorship policies and of publication ethics; the article is therefore retracted.This retraction was approved by the Chief Editors of Frontiers in Chemistry and the Chief Executive Editor of Frontiers. The authors have not responded to correspondence regarding this retraction."} +{"text": "The journal retracts the 2021 article cited above.Following publication, concerns were raised regarding the contributions of the authors of the article. Our investigation, conducted in accordance with Frontiers policies, confirmed a serious breach of our authorship policies and of publication ethics; the article is therefore retracted.This retraction was approved by the Chief Editors of Frontiers in Oncology and the Chief Executive Editor of Frontiers. The authors do not agree to this retraction."} +{"text": "The dentigerous cyst is a developmental odontogenic asymptomatic cyst, that is associated with the crown of an unerupted or impacted tooth. Early diagnosis is important to avoid any future complications and choose the best treatment option. The purpose of this case report is to describe the management of a dentigerous cyst related to lower second molar in a young female patient using orthodontic traction as a conservative treatment approach. This procedure helps to spare the patient an unnecessary surgical excision procedure and the associated excessive bone removal for a safety margin, stimulates bone healing and promotes the eruption of the cyst-associated tooth. The presence of an impacted, unerupted, embedded tooth, a tooth under development, odontome or a supernumerary tooth, carries the risk of development of one of the most common types of odontogenic cysts; the Dentigerous cyst or follicular cyst . It was The dentigerous cyst surrounds the crown of the unerupted tooth and is attached to its cervical region. It develops from accumulation of fluid between the remnants of the reduced enamel epithelium and the subjacent tooth crown shortly after complete formation of the crown . 
The natRadiographically, the Dentigerous cyst is usually characterized by a unilocular radiolucent lesion around the unerupted tooth, with a distinct dense periphery of condensed bone . An enlaThe classical treatment decision of the Dentigerous cyst is total enucleation with a safety margin of bone removal, to avoid recurrence together with extraction of the associated impacted tooth . HoweverThis article presents a clinical case with a Dentigerous cyst surrounding the crown of an unerupted lower right second molar that has been treated by orthodontic traction of the unerupted tooth.A fourteen-year-old female presented to our orthodontic clinic with a chief complaint of unpleasant appearance of her teeth. The pre-treatment records including clinical intra-oral and extra-oral examination, photos, panoramic radiograph, lateral cephalometric radiograph, upper and lower impressions were taken. Intraoral examination showed the patient had a full set of erupted permanent teeth up to the first molars in all quadrants. A single cusp tip of the lower left second molar was erupting, while the lower right second molar was unerupted. The patient showed an Angle class I malocclusion with moderate and mild crowding in the upper and lower arches respectively, and anterior deep bite Fig.\u00a0. ExtraorThe case was referred to the maxillofacial surgeon for consultation. Based on the clinical, radiological and CBCT findings, a diagnosis of dentigerous cyst was concluded.The treatment option recommend by the surgeon was extraction of the associated impacted tooth and the third molar, cyst enucleation and bone removal with a proper safety margin in order to minimize the possibility of recurrence and allow the regeneration of healthy bone.Considering the age of the patient, size of the cyst, the position of the cyst close to the oral mucosa, the dental deep overbite, the possibility of resolution of the cyst with the eruption of the second molar, the approach of enucleation and tooth extraction was considered to be aggressive for the current case. Hence, a more conservative approach for the sake of conservation of the natural permanent tooth was agreed upon with the surgeon. Orthodontic traction which will be accompanied by spontaneous decompression and marsupialization of the cyst will be done. Accordingly, a CBCT was captured Fig.\u00a0, which cThe treatment strategy depends on surgical exposure of the impacted second molar, bonding orthodontic attachment and orthodontic traction until the cyst lining attached at the cervical margin gets exposed in the oral cavity and blends with the oral mucosa, with no future risk of fluid accumulation and re-formation of the cyst. The critical stage in this procedure is the ability to maintain the decompression and cyst opening exposed in the oral cavity without healing until eruption of a part of the tooth.Orthodontic treatment was initiated on the lower arch, comprising bonding of fixed orthodontic appliance, Roth prescription, 0.018 inch slot Roth brackets, with bonding of a double tube on the lower right first molar to help orthodontic traction of the lower second molar. Levelling and alignment was initiated until an archwire of 0.017\u2009\u00d7\u20090.025 inch stainless steel archwire was reached. Surgical exposure of the second molar and bonding of an orthodontic tube to the exposed part of the buccal surface was done. 
An initial auxillary archwire 0.012 inch nickel titanium archwire attached between the auxillary tube of the first and second molars is used for occlusal traction of the second molar. In the monthly recall of the patient, the gauge of the wire was increased until the tooth is totally erupted on the oral cavity and an archwire of 0.017\u2009\u00d7\u20090.025 inch stainless steel archwire could be inserted as a main archwire. At this stage, intraoral photographs and a panoramic radiograph were taken Fig.\u00a0, which sThe threat the dentigerous cyst presents is reverted back to its asymptomatic, incipient, and expanding nature. The early diagnosis is of paramount importance prior to the expression of the aggressive form of the lesion. This aggressive form has the potential to expand the bone of the mandible, with the subsequent displacement of teeth, malocclusion, pain, inferior alveolar nerve paresthesia and root resorption .The intraoral observation of any altered eruption patterns, besides the proper examination of the pre-orthodontic panoramic radiograph to detect the presence of such cyst can be of significant help for early intervention. However, the differential diagnosis should be considered for identification of the Dentigerous cyst. When the follicular space exceeds 3\u20134\u00a0mm, Dentigerous cyst is suspected .The most convenient treatment option for the dentigerous cyst should be chosen to suit the clinical situation. Surgical enucleation is a common modality for treatment of the Dentigerous cyst . It encoThe other treatment option is decompression; which is elected in the currently presented case and in other previously reported cases . It encoIn the current case, the young age of the patient with the strong healing and eruptive power of the teeth, besides the favorable position of the cyst and the vertical impaction of the second molar privileged decompression and orthodontic traction of the embedded second molar. Besides, the preservation of the second molar could have a favorable functional and psychological impact on the patient . This prThe bonding of the impacted second molar on the day of incising the cyst wall and decompression and the early occlusal traction of the second molar gave chance for maintaining the intact and the fast eruption of the impacted tooth. After few months of released pressure on the bone, the panoramic radiograph showed complete disappearance of the cyst, increased radio-opacity of the surrounding bone with the full eruption of the second molar. These same findings were confirmed in the panoramic radiograph taken 3 years after finishing the orthodontic treatment, with no evidence of recurrence Fig.\u00a0.The chosen treatment option of decompression of the developing dentigerous cyst simultaneously with guided eruption of the associated second molar showed acceptable results with such a case.The post-treatment panoramic radiograph showed complete radiographic healing of the cyst and uneventful eruption of the impacted second molar, which credits the treatment modality elected to save the involved tooth."} +{"text": "When a radar detects marine targets, the radar echo is influenced by the shape, size and dielectric properties of the targets, as well as the sea surface under different sea conditions and the coupling scattering between them. This paper presents a composite backscattering model of the sea surface and conductive and dielectric ships under different sea conditions. 
The ship scattering is calculated using the equivalent edge electromagnetic current (EEC) theory. The scattering of the sea surface with wedge-like breaking waves is calculated using the capillary wave phase perturbation method combined with the multi-path scattering method. The coupling scattering between ship and sea surface is obtained using the modified four-path model. The results reveal that the backscattering RCS of the dielectric target is significantly reduced compared with the conducting target. Furthermore, the composite backscattering of the sea surface and ship increases significantly in both HH and VV polarizations when considering the effect of breaking waves under high sea conditions at low grazing angles in the upwind direction, especially for HH polarization. This research offers valuable insights into optimizing radar detection of marine targets in varying sea conditions. Composite scattering modeling for sea surfaces and targets is of great significance for monitoring the sea environment and identifying and intercepting targets. There have been many studies on sea surface scattering. Previous studies were dedicated to analyzing the scattering from metal targets or dielectric targets, but without consideration of the diffracted field for a dielectric wedge. On the other hand, sea-only scattering cannot fully describe the coupled scattering mechanism between the sea surface and breaking waves under high sea conditions. There have been few studies on the composite scattering of a sea surface with breaking waves and the dielectric target above it under high sea conditions. In this paper, the deterministic distribution of breaking waves on the sea surface under high sea conditions is established, and the scattering of the wedge-like breaking waves and the multiple scattering between the breaking wave and the surrounding sea surface are calculated. The numerical matching technique is introduced to quickly calculate the scattering field of electrically large dielectric targets with wedge structures. Combined with the modified four-path method, the composite scattering characteristics of conducting and dielectric ships under different sea conditions are calculated. The influence of breaking waves and the dielectric property of the target on the composite scattering of a sea surface and ship under high sea conditions is analyzed. The results provide valuable insights for improving radar efficiency in detecting sea surface targets. The total field of the conducting target can be obtained by adding the physical optics field and the diffraction field, namely GTD-EEC = PO-EEC + PTD-EEC. The PO-EEC term is calculated according to the integration direction selected by Cui and Wu, and the PTD-EEC term accounts for the edge diffraction contribution. The total scattering field of the impedance wedge likewise includes the physical optics field and the edge diffraction field.
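The decomposition just stated (total field = physical optics term + edge diffraction term) is evaluated per observation direction and then converted to a radar cross section. The short Python sketch below only illustrates that final bookkeeping step, assuming far-field amplitudes already normalized to a unit-amplitude incident wave with the range factor removed; the complex values in the example are placeholders, not computed EEC quantities.

```python
import numpy as np

def total_field(E_po, E_ptd):
    """Coherent sum of the physical-optics and edge-diffraction (PTD) contributions."""
    return np.asarray(E_po) + np.asarray(E_ptd)

def rcs_dbsm(E_scattered):
    """RCS in dBsm, assuming far-field amplitudes normalized to a unit incident wave
    with the range dependence factored out (sigma = 4*pi*|E|^2)."""
    sigma = 4.0 * np.pi * np.abs(E_scattered) ** 2
    return 10.0 * np.log10(sigma)

# placeholder PO and PTD amplitudes for three observation angles
E_po  = np.array([0.80 + 0.10j, 0.50 - 0.20j, 0.05 + 0.02j])
E_ptd = np.array([0.05 - 0.02j, 0.10 + 0.05j, 0.04 - 0.01j])
print(rcs_dbsm(total_field(E_po, E_ptd)))
```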
When the equivalent edge electromagnetic current is used to solve the edge diffraction field, the incremental length diffraction coefficient and the PO diffraction coefficient matrix are required; the PO diffraction coefficient of the lower wedge can be obtained from that of the upper wedge by a suitable transformation. When calculating the diffraction coefficient of the impedance wedge, we use the numerical matching technique to expand the spectrum function of the impedance wedge with oblique incidence and arbitrary wedge angle according to Equation (3), and the resulting integral term is then evaluated. In this paper, the deterministic distribution of breaking waves on the sea surface is obtained by using the whitecap coverage, along with a comparison to SASS-II measured data. To verify the correctness of the model, we presented the backscatter coefficient of sea surfaces generated using the Monte Carlo method (at a wind speed of 13 m/s, averaging 50 sea surface samples), along with a comparison to measured data. The main conclusions are as follows: (1) The backscattering RCS of the ship decreases due to the existence of the coated dielectric layer, indicating that radar stealth can be achieved by using coated absorbing materials to reduce the signal intensity of the target. (2) The sea surface scattering dominates for near-vertical incidence, while for small grazing angle incidences, ship scattering is dominant, and the tower scattering peak is visible. (3) Under high sea conditions, it is crucial to consider the impact of the breaking waves on the scattered echo. The specular scattering decreases, but the incoherent scattering increases with the increase in wind speed due to the roughness of the sea surface. The backscattering RCS of the sea surface is enhanced due to breaking wave scattering at small grazing angles. As wind speed increases, the scattering enhancement of breaking waves at small grazing angles causes the tower scattering peak to submerge in the sea background. (4) The azimuthal variation of the ship scattering shows several peaks at the angles perpendicular to the four sides of the ship. When the scattering of the sea surface is taken into consideration, the scattering peaks at the bow and stern disappear. Additionally, for high sea conditions, the scattering peaks of the target are completely submerged. To sum up, it is necessary to consider the impact of the breaking waves and the dielectric material of the target on the scattered echo. We believe that the model can provide a theoretical basis for radar to detect marine targets with better efficiency. In this paper, a composite scattering model from the sea surface with breaking waves and a conducting and dielectric ship above it is presented. The EEC theory is used to calculate the ship scattering, while the scattering of the sea surface with wedge-like breaking waves is calculated using the capillary wave phase perturbation method and the multi-path scattering model. The coupling scattering between them is obtained using the modified four-path model. Using the proposed model, the RCS results of the conducting and dielectric ships on sea surfaces at low, medium and high wind speeds under different polarizations are simulated numerically.
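As a companion to the Monte Carlo sea-surface generation mentioned above, the sketch below draws one random rough-surface realization by spectral filtering of Gaussian noise with a Pierson-Moskowitz-type wavenumber spectrum. The spectrum form, grid parameters and the 13 m/s wind speed echo the text, but the height normalization is schematic and this is not the authors' code; quantitative work would calibrate the rms height against the spectrum and use the paper's own spectrum and breaking-wave model.

```python
import numpy as np

def pm_wavenumber_spectrum(k, u10, g=9.81, alpha=8.1e-3, beta=0.74):
    """Pierson-Moskowitz-type omnidirectional wavenumber spectrum (illustrative form)."""
    k = np.where(k == 0.0, np.inf, np.abs(k))            # avoid the k = 0 bin
    return (alpha / (2.0 * k**3)) * np.exp(-beta * g**2 / (k**2 * u10**4))

def random_sea_surface(length=200.0, n=1024, u10=13.0, seed=None):
    """One Monte Carlo realization of a 1-D sea surface by spectral filtering
    of white Gaussian noise. Height scaling is schematic (see lead-in)."""
    rng = np.random.default_rng(seed)
    dx = length / n
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)             # wavenumber grid
    shaping = np.sqrt(pm_wavenumber_spectrum(k, u10))     # spectral filter
    noise = rng.normal(size=n)
    z = np.fft.ifft(np.fft.fft(noise) * shaping).real     # filtered surface heights
    z -= z.mean()
    return np.arange(n) * dx, z

x, z = random_sea_surface(seed=0)
print(f"one realization: rms height ~ {z.std():.3f} (schematic units)")
```

Averaging the backscatter of many such independent realizations (50 in the text) is what turns single-surface results into a statistically meaningful backscatter coefficient.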
The numerical results show that radar stealth can be achieved in a certain range by using coated absorbing materials to reduce the signal intensity of the target. When the wind speed increases, the composite backscattering from the sea surface and target for both polarizations increases with the consideration of the effect of breaking waves, especially for upwind under high sea conditions and small grazing angle incidence, and the scattering enhancement of HH polarization is more obvious. The tower scattering peak of the conductor ship is visible at low and high wind speeds. However, it is difficult to separate the target from the sea background under high sea conditions due to the scattering weakness of the dielectric target itself, the enhancement of the scattering effect of the sea surface and the spike scattering of the breaking waves."} +{"text": "This is a peer-review report submitted for the paper \u201cIn-hospital Mortality and the Predictive Ability of the Modified Early Warning Score in Ghana: Single-Center, Retrospective Study.\u201dThis study is aboutOne of the main concerns about this study is that the sample size is relatively small (N=112) for a national referral hospital in Ghana. Authors should provide more evidence on whether the sample and size were representative of the target population. Relatedly, since the authors state that they recruited practically all medical inpatients hospitalized for a period of more than 2 years (January 2017 to March 2019), it would be good to provide the total recorded number of in-hospital patients for that period.In making the case for the validity of LMEWS, the authors have relied heavily on the afferent arm of clinical deterioration in critically ill patients, while not accounting for the efferent arm of medical response. The afferent arm identifies patients at risk of clinical deterioration and activates the efferent arm if necessary. The efferent arm examines the patients and intervenes in the treatment. The functioning of the efferent arm in the study settings ought to have been discussed in drawing up the conclusion and recommendation of the LMEWS."} +{"text": "These outbreaks serve as a reminder of the difficulties faced in less resourced nations for improvement in regulation and enforcement of quality control procedures in the pharmaceutical industry.With the rise in the number of drug manufacturers, monitoring of drug supply in developing countries is extremely important. These cases represent just the tip of the iceberg because not all children are referred to Tashkent referral centers.An epidemic of severe oligoanuria and altered sensorium has been reported in approximately 120 children from Uzbekistan and adjoining areas of Tajikistan since September 2022. 
This brief report describes the biochemical and clinical data of 50 patients from two clinical centers from Tashkent, Uzbekistan and Gambia linked to the use of these syrups containing toxic levels of DEG or EG.15 Similar outbreaks have occurred in the past and can recur in the future and have been typically associated with the contamination of ingestible pharmaceutical products.14 During the outbreak in Nigeria in 1990, it was found that wholesale distributors substituted DEG for propylene glycol in paracetamol syrups when selling to small pharmaceutical manufacturers to reduce expenses.7 These outbreaks serve as a reminder of the difficulties faced in less resourced nations for improvement in regulation, enforcement, or strict implementation of quality control procedures in the pharmaceutical industry.9 Compromises within the pharmaceutical supply chain lead to a proliferation of substandard and falsified (SF) medications. This emphasizes the necessity to evaluate various perspectives through investigations into the structural, political, economic, and ethical elements that shape the quality and utilization of medical products. There is a need for international expansion of formal training on SF medical products to address systemic inadequacies and emphasize the integration of public health education within pharmaceutical training and strengthen the capacity for pharmacists and physicians together to play a pivotal role in safeguarding public health.810 The intricate interconnection of social and ethical determinants in the issue of SF medicinal products underscores that mere technical interventions are insufficient; instead, it necessitates a comprehensive systemic policy approach enriched and directed by interdisciplinary research across various fields.9Globally, there have been several cases of fatal pediatric AKI linked to the use of cough syrups or acetaminophen and cough syrup combination syrups containing excessive amounts of diethylene glycol (DEG) and EG. Last year, similar cases have been reported in Indonesia (AKI cases in toddlers"} +{"text": "Bioactive Soft Materials and Application\u201d is to present the current state of research and development in the field of bioactive soft materials with a perspective focus on material design and their biomedical application.Soft biomaterials represent hydrogels but also a variety of thicker surface coatings based on grafting of hydrophilic macromolecules or their physical assembly by methods like layer-by-layer technique. Soft biomaterials have several advantages that allow uptake and controlled release of bioactive protein-based molecules like growth factors and other kind of drugs. Moreover, soft biomaterials of hydrophilic character allow control of protein adsorption and cell adhesion and interaction which not only can avoid blood coagulation but also tailor immune response or engrafting of the construct to guide tissue regeneration. Another advantage of soft biomaterials is the ability of materials scientists to provide them with stimuli responsive properties that they can react to changes in temperature, pH value, ionic strength and other physical or chemical stimuli to liquefy, control release of drugs or cell interaction depending on the environment. Hence, the aim of the Research Topic \u201cLi et al. 
shows the synthesis and formulation of temperature- and pH-responsive chitosan hydrogels that are loaded with doxorubicin and liposome-encapsulated curcumin which demonstrates their potential to be applied for the treatment of solid tumors. Another paper from Zhang et al. described the use of thermosensitive Poloxamer 407 hydrogels for the loading of concentrated growth factors (CGFs) which showed a sustained release of growth factors and demonstrated to be good candidates for the repair of segmental bone defects. In addition, a thermo-responsive and injectable hyaluronic acid/nHAp (HA/nHAp) composite hydrogel was developed by Liu et al. with the incorporation of notoginsenoside R1 (NGR1) used because of its anti-inflammatory properties to reduce TNF alpha production and thus promote bone regeneration. Moreover, Wang et al. reported the fabrication of soft nanoparticles using konjac glucomannan that are loaded with curcumin which can accomplish colonic localization release and target local inflammatory macrophages, showing their potential to be used as oral delivery vehicles for the treatment of inflammatory bowel disease.The Research Topic includes several studies on stimuli responsive hydrogels being used as drug delivery platform for biomedical applications. An interesting work from Lin et al. developed a unique freeze-drying system for the fabrication of Haversian system using chitosan/type I collagen composite materials to mimic native bone tissue to enhance bone regeneration. This universal preparation platform based on the directional freezing technology is also expected to be applied for the fabrication of scaffolds with cavities roughly arranged at right angles to each other. In addition, Luo et al. reviewed the latest progress in studying the roles of culture-condition stimulated exosomes or their loaded hydrogels and the differences in terms of origination and function between exosomes and the other kinds of EVs (microvesicles and apoptotic bodies) and mesenchymal stem cell lysates which provided a fundamental understanding on the specific application of different EVs and the future development.An interesting work is about the design of platform for batch fabrication of native tissue mimicking scaffolds. Overall, this Research Topic covers recent advances in designing of bioactive soft materials and their applications for drug delivery and tissue repair. The editors hope that the current Research Topic will contribute to the research and development in the field of soft materials for drug delivery and tissue repair, inspiring future exploration to broad the biomedical applications of bioactive soft materials."} +{"text": "The journal retracts the 2021 article cited above.Following publication, concerns were raised regarding the contributions of the authors of the article. Our investigation, conducted in accordance with Frontiers policies, confirmed a serious breach of our authorship policies and of publication ethics; the article is therefore retracted.This retraction was approved by the Chief Editors of Frontiers in Genetics and the Chief Executive Editor of Frontiers. The authors do not agree to this retraction."} +{"text": "Disruption of cerebral thermal homeostasis accompanies various CNS diseases. 
Presumably, (neuro)inflammation and the changes of temperature heterogeneity of the cerebral cortex may be interrelated links in the pathogenesis of schizophrenia.to study the association between the brain thermal balance indicators, inflammatory markers and clinical features of the disease in patients with schizophrenia during therapy.37 patients aged 16 to 46 years with schizophrenia were examined. Clinical examination included psychometric assessment using PANSS, HDRS, and YMRS scales. Cortical temperature was determined by microwave radiometry. Temperature heterogeneity was assessed by calculating the Pearson correlation coefficient between temperature indicators in 9 symmetrical areas of the cerebral cortex. The activity of the proteolytic system of inflammation (ratio of leukocyte elastase (LE) and \u03b11-proteinase inhibitor (\u03b11-PI) activity) and the level of autoantibodies to S100B and MBP antigens were determined in patients\u2019 blood.Low temperature heterogeneity is related to an increase in the activity of the proteolytic system of inflammation and a good response to therapy in most patients. High temperature heterogeneity is associated with insufficient activity of the proteolytic system of inflammation and the development of autoimmune reactions, which is accompanied by a more severe course of the pathological process and, in most cases, treatment resistance.The association between the features of the thermal balance of the brain and inflammatory markers confirms the hypothesis of their role in the pathogenesis of schizophrenia. Temperature heterogeneity of the brain can serve as a criterion for predicting of therapeutic response in patients with schizophrenia.None Declared"} +{"text": "Circularly polarized luminescence (CPL) is an important part in the research of modern luminescent materials and photoelectric devices. Usually, chiral molecules or chiral structures are the key factors to induce CPL spontaneous emission. In this study, a scale-effect model based on scalar theory was proposed to better understand the CPL signal of luminescent materials. Besides chiral structures being able to induce CPL, achiral ordered structures can also have a significant influence on CPL signals. These achiral structures are mainly reflected in the particle scale in micro-order or macro-order, i.e. the CPL signal measured under most conditions depends on the scale of the ordered medium, and does not reflect the inherent chirality of the excited state of the luminescent molecule. This kind of influence is difficult to be eliminated by simple and universal strategies in macro-measurement. At the same time, it is found that the measurement entropy of CPL detection may be the key factor to determine the isotropy and anisotropy of the CPL signal. This discovery would bring new opportunities to the research of chiral luminescent materials. This strategy can also greatly reduce the development difficulty of CPL materials and show high application potential in biomedical, photoelectric information and other fields. A scale-effect model based on scalar theory can help understand the CPL signal, and measurement entropy of CPL detection may determine the isotropy and anisotropy of the CPL signal. R). When the anisotropy scale (L) of material particles is greater than or equal to this critical scale, the anisotropic environment has a great influence on the CPL signal. Only when R is affected by the dielectric function of the anisotropic environment of the luminescent materials. 
The dielectric distribution of luminescent materials depends on the molecular structure and the ordering in the assembly. In addition, we propose a method of artificially introducing anisotropy, such as by introducing a magnetic field and changing the dielectric function distribution\u2014a highly universal strategy for developing CPL spontaneous radiation materials with controllable CPL emission wavelength and signal intensity.The asymmetry of matter has gradually attracted the attention of a wide range of researchers in materials science and chemistry . RecentlThe understanding of CPL by different investigators varies considerably ,21. A meFirst, the influence of the anisotropic environment on the polarization properties of electromagnetic waves is not negligible. Therefore, a quantifiable model and an explanation based on previous research and a fiTo quantitatively determine the influence of the particle scale in uniform anisotropic media on CPL spontaneous radiation, Equation (1) was obtained in nonmagnetic media and transparent media. Please refer to the L is the transmission distance in the medium and R is the critical scale of the anisotropic environment.where When Combined with the above considerations, some qualitative conclusions can be obtained about the influence of the anisotropic environment on the CPL signal of luminescent materials in the visible band and near-infrared band:When the anisotropic environment has a definite influence on the phase of polarized light arising from Regarding the influence of the polarity and CPL are the special elliptically polarized light (EPL) with a special phase difference between electric vector ocument} of CPL aThe influence of the scale effect of the medium on the spontaneous emission of the material needs to consider not only the influence of the anisotropic distribution of the medium on the polarization state of the spontaneous emission, but also the polarization state of the incident excitation light see .121 and P3221) when using the same crystal plane as the incidence direction film; CPL signals corresponding to the crystal structure could be measured from the film , we developed a highly universal strategy for developing CPL spontaneous radiation materials with controllable CPL emission wavelengths and controllable signals, i.e. artificially introducing anisotropy to influence CPL signals. In this strategy, molecules or aggregate-state luminescent materials , CdSe quantum dots, aggregation-induced emission (AIE) tetraphenyl ethylene (TPE) dyes and perovskite-based luminescent films with different luminescent mechanisms were elected here to prove this strategy. Figure In previous studies, the origin of CPL emission and the chirality of materials were usually discussed as related phenomena. Besides the chiral-induced CPL emission phenomena, we found that the scale effect based on the anisotropy of the medium is found to be another key factor to induce a CPL signal. Additionally, it is easy to conclude that a pair of chiral structures can induce CPL emission using an analysis based on the scale-effect model emanating from scalar theory. Therefore, these findings can also explain the reason why a pair of chiral structures can induce CPL signals with opposite signs and equivalent intensity from another angle. In addition, we found that the measured entropy of materials is the key factor that determines the isotropy or anisotropy of measured CPL signals, rather than the influence of LPL. 
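Two standard optics relations make the scale argument above concrete: the CPL signal is usually reported as the luminescence dissymmetry factor g_lum = 2(I_L - I_R)/(I_L + I_R), and light crossing a birefringent (anisotropic) region of thickness L accumulates a phase retardation 2*pi*dn*L/lambda, so a quarter-wave retardance is reached already at L = lambda/(4*dn). The short sketch below only evaluates these textbook formulas; it is not the paper's Equation (1), and the birefringence value in the example is an arbitrary assumption.

```python
import numpy as np

def g_lum(I_left, I_right):
    """Luminescence dissymmetry factor g_lum = 2(I_L - I_R)/(I_L + I_R)."""
    return 2.0 * (I_left - I_right) / (I_left + I_right)

def retardance(delta_n, path_length, wavelength):
    """Phase retardation accumulated over path_length in a birefringent medium."""
    return 2.0 * np.pi * delta_n * path_length / wavelength

def quarter_wave_length(delta_n, wavelength):
    """Path length at which the retardance reaches pi/2 (quarter wave): linearly
    polarized light at 45 degrees to the axes exits circularly polarized."""
    return wavelength / (4.0 * delta_n)

# example: visible emission at 550 nm in a medium with an assumed birefringence of 0.01
wl, dn = 550e-9, 0.01
print(f"quarter-wave scale: {quarter_wave_length(dn, wl) * 1e6:.1f} um")
print(f"g_lum for I_L = 1.02, I_R = 0.98: {g_lum(1.02, 0.98):+.3f}")
```

The micrometre-scale result of this toy calculation is one way to see why particle size relative to a critical length, rather than molecular chirality alone, can dominate the measured CPL signal.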
It is shown here that the influence of the anisotropic environment on the CPL signal is extremely important in the development of CPL luminescent materials. When luminescent materials orderly aggregate, an anisotropic medium environment will be formed. The anisotropic medium environment will change the refractive indices We selected luminescent materials with different luminescence mechanisms, including but not limited to organic dyes that emit fluorescence and phosphorescence light as well as inorganic luminescent materials, for verification of our theory. In addition, we proposed a method of artificially introducing anisotropy, such as introducing a magnetic field and changing the dielectric function distribution, and developed a highly universal strategy for developing CPL spontaneous radiation materials with controllable CPL emission wavelength and signal.This discovery will further expand the application range of CPL luminescent materials. For example, the CPL signal of different wavelengths of dyes in an anisotropic environment can be used to observe the dielectric distribution and anisotropic scale of a microscopic-ordered structure. Furthermore, different cells with different anisotropic scales can be stained and the CPL signals of spontaneous radiation can be observed to quickly determine the size of different cells.Detailed materials and methods are available in the nwad072_Supplemental_FilesClick here for additional data file."} +{"text": "The attainment of accurate motion control for robotic fish inside intricate underwater environments continues to be a substantial obstacle within the realm of underwater robotics. This paper presents a proposed algorithm for trajectory tracking and obstacle avoidance planning in robotic fish, utilizing nonlinear model predictive control (NMPC). This methodology facilitates the implementation of optimization-based control in real-time, utilizing the present state and environmental data to effectively regulate the movements of the robotic fish with a high degree of agility. To begin with, a dynamic model of the robotic fish, incorporating accelerations, is formulated inside the framework of the world coordinate system. The last step involves providing a detailed explanation of the NMPC algorithm and developing obstacle avoidance and objective functions for the fish in water. This will enable the design of an NMPC controller that incorporates control restrictions. In order to assess the efficacy of the proposed approach, a comparative analysis is conducted between the NMPC algorithm and the pure pursuit (PP) algorithm in terms of trajectory tracking. This comparison serves to affirm the accuracy of the NMPC algorithm in effectively tracking trajectories. Moreover, a comparative analysis between the NMPC algorithm and the dynamic window approach (DWA) method in the context of obstacle avoidance planning highlights the superior resilience of the NMPC algorithm in this domain. The proposed strategy, which utilizes NMPC, demonstrates a viable alternative for achieving precise trajectory tracking and efficient obstacle avoidance planning in the context of robotic fish motion control within intricate surroundings. This method exhibits considerable potential for practical implementation and future application. The utilization of ocean and river resources by humans has led to the growing prominence of underwater robots in various applications. 
Underwater robots are utilized in several sectors, including civil domains such as marine environmental monitoring, maritime search and rescue, seabed resource exploration, and aquatic biology research. Additionally, they find applications in military sectors for reconnaissance activities and the tracking of enemy submarines ,2. ReseaRobotic fish, classified as a subset of underwater robots, possess attributes that align with those of a time-variant, strongly coupled, and multi-input multi-output nonlinear system. Trajectory tracking control and obstacle avoidance planning control have garnered substantial attention from scholars both locally and internationally due to their prospective uses ,4. In thUp to this point, the predominant techniques employed for trajectory tracking of underwater robots consist of traditional proportional integral derivative (PID) control ,6, slidiIn recent years, there has been significant progress in the development of autonomous obstacle avoidance technology in the field of underwater robotics. This has led to an increase in research efforts by both local and international research teams, focusing on the study of autonomous obstacle avoidance planning for underwater robots. In order to guarantee the autonomous ability of underwater robots to avoid obstacles in scenarios including multiple stationary objects, Li introduced a control approach for obstacle avoidance in a spherical underwater robot. This strategy relies on the utilization of an ultrasonic sensor array. This approach considers the kinematic and dynamic models of the robot as well as the properties of ultrasonic sensors . In orde(1)The utilization of nonlinear model predictive control (NMPC) in the realm of motion control for nonlinear robotic fish systems. This approach utilizes real-time optimization solutions to boost the adaptability of robotic fish in different surroundings, allowing for accurate trajectory tracking and obstacle navigation. In contrast to conventional trajectory planning techniques that predominantly focus on position and velocity alterations, this methodology incorporates enhanced control over acceleration.(2)The implementation of trajectory planning and optimization algorithms based on acceleration, which enables the achievement of smoother motion for robotic fish and the reduction of excessive inertial forces during movement. Consequently, this approach improves the precision of trajectory monitoring as well as the level of comfort experienced.The conventional control techniques employed in the field of robotic fish frequently depend on PID control or empirical methodologies. However, these methods demonstrate some shortcomings in terms of their performance and ability to withstand challenges in intricate underwater settings. The present work introduces several novel contributions, which are outlined below:The remainder of this paper is organized as follows. In order to accomplish trajectory tracking of the robotic fish, a kinematic model is created to characterize its motion, incorporating variables such as velocity, location, and rotational angles. To achieve a more sophisticated formulation of the kinematic model for the biomimetic robotic fish, it is necessary to build a coordinate system that effectively characterizes the fish\u2019s motion. 
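Before the formal development that follows, a minimal NumPy sketch of the kind of planar body-to-world transformation and forward-Euler state update such a coordinate system leads to. The state layout (x, y, yaw) and the input names (surge velocity u, sway velocity v, yaw rate r) are illustrative choices, not the paper's notation.

```python
import numpy as np

def body_to_world(psi):
    """Planar rotation from the fish body frame to the world frame (yaw angle psi)."""
    c, s = np.cos(psi), np.sin(psi)
    return np.array([[c, -s],
                     [s,  c]])

def kinematic_step(state, u, v, r, dt):
    """One forward-Euler step of the planar kinematics.
    state = (x, y, psi) of the centre of mass in world coordinates;
    u, v  = surge and sway velocities in the body frame; r = yaw rate."""
    x, y, psi = state
    vx, vy = body_to_world(psi) @ np.array([u, v])
    return np.array([x + vx * dt, y + vy * dt, psi + r * dt])

state = np.array([0.0, 0.0, np.pi / 4])
for _ in range(10):                      # 10 steps of 0.1 s with constant inputs
    state = kinematic_step(state, u=0.5, v=0.0, r=0.1, dt=0.1)
print(state)
```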
As depicted in the accompanying figure, the world coordinate system is established with the y-axis pointing upward. The coordinates of the fish body's center of mass in the world coordinate system are defined accordingly. In a similar manner, the matrix that represents the transformation of the angular velocity of the fish body's center of mass from the carrier (body) coordinate system to the world coordinate system can be written down; here, R^{-1}(x) = R^{T}(x), since the rotation matrix is orthogonal. The iterative formula for the velocity of the fish's center of mass then follows. Here, only the planar motion of the robotic fish is taken into consideration, and the corresponding terms are substituted accordingly. The kinematic model of the robotic fish pertains to the temporal progression of its motion states, which often includes details such as position, velocity, and orientation. Nevertheless, in order to incorporate the acceleration of the fish, it becomes imperative to transition from the kinematic model to a dynamic model. The dynamic model takes into account not just variations in velocity and position, but also integrates the impact of acceleration on these variables. The dynamic model of the robotic fish exhibits a strong coupling between acceleration and steering angle: the rate of change in acceleration has a direct influence on variations in the steering angle, while modifications in the steering angle, in turn, affect both the direction and magnitude of acceleration. The mathematical relationship between acceleration and steering angle can be expressed accordingly. This study enhances the kinematic model by incorporating acceleration, thereby transforming it into a dynamic model. This modification allows for a more accurate representation of the motion behavior exhibited by the robotic fish. The inclusion of acceleration in the analysis facilitates a more comprehensive understanding of the dynamic properties shown by the fish. This holds special significance in the context of designing control algorithms and optimizing the motion performance of the fish. For a nonlinear system, consider a general discrete-time model subject to state and input constraints. At any arbitrary time, the predictive model of the NMPC system can forecast all state variables of the system over the prediction horizon. NMPC aims to solve, at each time step, a constrained finite-horizon optimization problem; the framework of the NMPC controller is shown in the accompanying flowchart. Finally, the first control input of the optimized control sequence at the current time step is applied to the system, and the optimization is repeated at the next step. The obstacle avoidance function entails modifying the penalty function value according to the distance between the obstacles and the target point: a greater function value is assigned to a closer distance. An increase in the weight coefficient tends to render the planning outcomes more cautious. In scenarios where the robotic fish lacks information regarding obstacles, the weight coefficient has no impact on the planning results. When encountering substantial measurement or estimation errors in the state of the robotic fish, it is possible to utilize higher weight coefficients; nevertheless, this also results in a rise in tracking deviation, as demonstrated in the subsequent analysis. The objective function at the trajectory planning level is to minimize deviations from the global reference path and achieve obstacle avoidance.
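A compact sketch of a stage cost of the kind just described: a quadratic penalty on the deviation from the reference, a quadratic penalty on control effort, and a soft obstacle term that grows once the predicted position enters a safety envelope around an obstacle. The weights, the (x, y, psi) state layout and the quadratic penalty form are illustrative assumptions rather than the paper's exact objective.

```python
import numpy as np

def obstacle_penalty(pos, obstacles, safe_dist, weight):
    """Soft penalty that grows as the predicted position nears any obstacle centre."""
    cost = 0.0
    for ox, oy, radius in obstacles:
        d = np.hypot(pos[0] - ox, pos[1] - oy)
        margin = d - (radius + safe_dist)
        if margin < 0.0:                       # inside the safety envelope
            cost += weight * margin**2
    return cost

def stage_cost(state, ref, control, obstacles,
               Q=np.diag([10.0, 10.0, 1.0]), R=np.diag([0.1, 0.1]),
               safe_dist=0.5, w_obs=100.0):
    """Tracking error + control effort + obstacle penalty for one predicted step."""
    e = state - ref                            # state and reference: (x, y, psi)
    tracking = e @ Q @ e
    effort = control @ R @ control             # control: (a_long, a_lat) accelerations
    return tracking + effort + obstacle_penalty(state[:2], obstacles, safe_dist, w_obs)

print(stage_cost(np.array([1.0, 1.0, 0.2]), np.array([1.2, 0.9, 0.0]),
                 np.array([0.1, 0.0]), obstacles=[(2.0, 1.0, 0.3)]))
```

Summing such stage costs over the prediction horizon, and raising the obstacle weight to make the planner more conservative, mirrors the weighting trade-off discussed above.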
Obstacle avoidance is achieved by implementing such penalty functions. The control of trajectory tracking and obstacle avoidance for the robotic fish involves manipulating the fish's lateral and longitudinal accelerations to regulate its forward and backward velocity, lateral velocity, and yaw rate. This is done so that the center of mass of the fish follows a specified reference trajectory, with the discrepancy between the predicted trajectory and the reference trajectory continuously reduced until it reaches zero. NMPC employs a nonlinear model to forecast forthcoming states from the present state and a sequence of control inputs inside the control horizon. The process is inherently iterative: since the control sequence is unknown, an explicit iteration equation is needed to approximate the solution of the differential equation. In practical engineering applications, frequently employed numerical methods include the Euler method and the fourth-order Runge-Kutta algorithm. In this part, the forward Euler approach is utilized to discretize Equation (10) and transform it into a predictive model. Taking into account the maximum hydrostatic pressure that the robotic fish can endure while swimming underwater, the corresponding dynamic constraints are incorporated into the optimization. The paper presents trajectory tracking and obstacle avoidance simulations as a means to validate the efficacy of the NMPC algorithm in the motion control of the bionic robotic fish. This study aims to evaluate and compare the tracking performance of the NMPC tracking algorithm and the pure pursuit (PP) algorithm on S-shaped and cloverleaf trajectories. Furthermore, this study conducts a comparative analysis between the obstacle avoidance method of NMPC and the dynamic window approach (DWA) algorithm, focusing on their respective effectiveness in avoiding obstacles along circular routes. This comparative analysis serves to establish the precision of the NMPC algorithm in trajectory tracking and to evaluate the efficacy of its obstacle avoidance planning. In a 10 m by 10 m plane, a reference trajectory is set with specified starting and ending coordinates. Due to the computational complexity of the algorithm, a point mass model without size information is employed for the fish model, and the initial state of the fish is specified accordingly. Although both of the aforementioned algorithms are capable of achieving trajectory tracking, their respective tracking effects exhibit notable differences. The PP tracking method has a tendency to diverge from the reference trajectory, particularly when encountering steep curves. In contrast, the NMPC tracking approach does not demonstrate this behavior. In order to offer a more comprehensive analysis of the tracking effects exhibited by the two algorithms, the disparities between the trajectory tracked by each algorithm and the reference trajectory were compared; the findings are displayed in the corresponding figure. The superiority of the robotic fish's tracking accuracy under the control of the NMPC algorithm, as compared to the PP algorithm, is clear, leading to reduced errors. The PP tracking algorithm exhibits notable inaccuracies at sample points 69, 211, and 358, wherein the largest tracking error amounts to 0.2046 m.
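The error figures quoted here and below can be reproduced from sampled trajectories with a few lines of NumPy: per-sample Euclidean deviation from the reference, its maximum and mean, and the sample indices that exceed a tolerance. The 0.1 m tolerance used in the example is inferred from the 204.6% figure quoted for the 0.2046 m error and should be treated as an assumption; the trajectories below are synthetic.

```python
import numpy as np

def tracking_errors(traj, ref):
    """Per-sample Euclidean deviation between a tracked trajectory and its reference.
    traj, ref: arrays of shape (N, 2) holding (x, y) at matched sample points."""
    return np.linalg.norm(np.asarray(traj) - np.asarray(ref), axis=1)

def summarize(err, tol=0.1):
    """Max/mean error and the sample indices exceeding a tolerance (tol in metres)."""
    err = np.asarray(err)
    return {"max": float(err.max()),
            "mean": float(err.mean()),
            "violations": np.flatnonzero(err > tol).tolist()}

# toy example with a made-up reference and a noisy tracked path
t = np.linspace(0.0, 2.0 * np.pi, 400)
ref = np.column_stack([t, np.sin(t)])
traj = ref + np.random.default_rng(0).normal(scale=0.03, size=ref.shape)
print(summarize(tracking_errors(traj, ref)))
```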
The observed divergence above the acceptable threshold for tracking error is by a magnitude of 204.6%. This deviation may be attributable to abrupt changes in the trajectory occurring at these specific sample locations. In contrast, the trajectory error observed in the NMPC tracking method consistently remains below 0.1 m, while the turning errors converge within the allowed range of error. Upon comparing the error values of the two algorithms, it becomes apparent that the NMPC method demonstrates superior performance in relation to overshoot and settling time when compared to the PP algorithm. The findings of this study provide empirical evidence supporting the superior accuracy and stability of the NMPC algorithm in trajectory tracking when compared to the PP algorithm.The occurrence of trajectory tracking errors may be attributed to the orientation angle of the robotic fish, denoting the angle between the lateral body axis of the fish and the direction of the Earth\u2019s -axis. In the range of sample points 90 to 230, it is seen that the magnitudes of the orientation angles obtained under the PP control approach tend to be greater compared to those obtained using the NMPC method. The occurrence of tracking errors may be observed in Based on In order to enhance the credibility of our control method\u2019s ability to track trajectories in a more intricate setting, subsequent to the successful tracking of the S-shaped curve trajectory, the robotic fish proceeded to navigate the more demanding cloverleaf-shaped trajectory. In contrast to the S-shaped curve, the cloverleaf curve is characterized by pronounced turns and fast fluctuations in speed, necessitating a considerable level of agility and accurate control capabilities from the robotic fish system. The initial state of the fish is denoted as The provided diagram illustrates that both the NMPC and PP tracking methods are capable of achieving trajectory tracking for the cloverleaf curve. However, distinct disparities between the two approaches become apparent when examining the locally magnified image. The tracking approach employed by the PP system exhibits a tendency to execute premature turns in instances where the steering angle exceeds a certain threshold, resulting in suboptimal adherence to the intended curved trajectory. In order to provide a more comprehensive illustration of the tracking impacts of the two techniques, an analysis is conducted to determine the disparities between the tracking trajectory and the reference trajectory. The obtained findings are visually presented in The experimental results demonstrate that by employing the NMPC method as depicted in Based on the analysis of The obstacle avoidance control for the robotic fish, based on the principles of trajectory tracking, is implemented using NMPC. The process entails the identification of recently introduced obstacles and the iterative calculation of both the objective function and the obstacle avoidance function. The aforementioned iterative procedure ultimately accomplishes the objectives of tracking control and obstacle avoidance control along a predetermined trajectory. The PP algorithm does not possess inherent obstacle avoidance functionality. Its primary objective is to guide the robot along a specified course, disregarding the existence of impediments in the surrounding environment. The presence of barriers within the environment can potentially result in collisions when employing the pure PP algorithm. 
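For reference, the PP baseline used in these comparisons reduces to a simple geometric rule: pick a goal point roughly one lookahead distance ahead on the reference path, express it in the body frame, and command a curvature of 2*y_body/L_d^2. The sketch below is a generic textbook version with an illustrative lookahead and a yaw-rate command v*kappa; it is not the implementation evaluated in the paper, and, as noted above, it contains no obstacle handling.

```python
import numpy as np

def pure_pursuit_yaw_rate(state, path, lookahead, speed):
    """Minimal geometric pure-pursuit law.
    state: (x, y, psi); path: (N, 2) waypoints; returns a yaw-rate command."""
    x, y, psi = state
    # for a sketch, simply take the first waypoint at least one lookahead away
    d = np.linalg.norm(path - np.array([x, y]), axis=1)
    idx = np.argmax(d >= lookahead) if np.any(d >= lookahead) else len(path) - 1
    gx, gy = path[idx]
    # goal point expressed in the body frame (lateral offset only is needed)
    dx, dy = gx - x, gy - y
    y_body = -np.sin(psi) * dx + np.cos(psi) * dy
    curvature = 2.0 * y_body / lookahead**2     # classic pure-pursuit curvature
    return speed * curvature                    # yaw rate = v * kappa

path = np.column_stack([np.linspace(0, 10, 200), np.sin(np.linspace(0, 10, 200))])
print(pure_pursuit_yaw_rate((0.0, 0.5, 0.0), path, lookahead=1.0, speed=0.5))
```

The premature turning reported for PP on sharp curves follows directly from this construction: the goal point cuts the inside of a tight bend whenever the lookahead distance is large relative to the curve radius.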
The DWA method has been specifically developed to address the challenges of path planning and obstacle avoidance in the context of mobile robots. The objective of this research is to facilitate the robot\u2019s ability to promptly react, maneuver past barriers, and adhere to the predetermined path in order to achieve the desired outcome within intricate surroundings. Therefore, in the process of validating obstacle avoidance control algorithms, a comparative analysis is conducted between the NMPC obstacle avoidance algorithm and the DWA obstacle avoidance algorithm. The simulation results obtained through the utilization of NMPC and DWA obstacle avoidance algorithms on a predetermined track are depicted in In this context, the initial state of the robotic fish is defined as Based on Upon analysis of In order to evaluate the efficacy of the DWA and NMPC obstacle avoidance techniques in safely maneuvering around obstacles, the distance between the centroid of the robotic fish and the centroids of the obstacles is employed as a metric. The findings are illustrated in By examining Based on the analysis of The exact control of robotic fish motion has long been a challenging task in the realm of underwater robotics, particularly inside complicated underwater settings. In order to tackle this particular difficulty, the present study utilizes a trajectory tracking and obstacle avoidance planning algorithm for autonomous underwater vehicles inspired by NMPC (Nonlinear Model Predictive Control). The method presented in this study aims to enhance control performance by incorporating real-time system status and external environmental information. This approach facilitates the achievement of agile and responsive control for the robotic fish. The initial step involves establishing the dynamic model of the robotic fish within the global coordinate system. Subsequently, the NMPC algorithm is delineated and accompanied by the formulation of obstacle avoidance and objective functions, hence facilitating the development of an NMPC controller that incorporates control restrictions.In order to assess the efficacy of this methodology, the NMPC algorithm and the PP algorithm were employed to achieve trajectory tracking on S-shaped and cloverleaf curves, respectively. The comparison results are shown in Both the NMPC tracking method and the PP tracking method have the capability to accurately follow the reference trajectory. In the case of straight or smooth pathways, both of these approaches provide comparable trajectories and demonstrate efficient control. Nevertheless, disparities in trajectory tracking between the Pure Pursuit (PP) and Nonlinear Model Predictive Control (NMPC) methods become apparent, particularly while navigating along S-shaped routes. In this scenario, the PP tracking approach exhibits a lower level of precision compared to NMPC. In contrast to the PP tracking approach, NMPC demonstrates superior performance in precisely tracking the reference trajectory on straight roads, as well as ensuring convergence to the reference trajectory on curved paths. The differences between the PP and NMPC tracking systems are apparent within the complex four-leaf clover curve environment. Premature turning may be observed in the trajectory of the PP tracking method at the extremities of the clover leaves, indicating a deviation from the reference path. 
On the other hand, it can be observed that the NMPC control approach exhibits a high level of precision in achieving convergence towards the desired trajectory at the leaf tips. This finding serves as empirical evidence supporting the efficacy of NMPC in accurately tracking trajectories. Subsequently, a comparative examination was conducted to assess the obstacle avoidance planning capabilities of the NMPC algorithm and the DWA algorithm, and the comparison results are shown in the corresponding figures. The DWA obstacle avoidance approach satisfies the necessary conditions for ensuring safety. However, it does encounter several challenges, including an excessively large obstacle avoidance radius and a lack of rapid convergence of the avoidance trajectory to the intended path. On the other hand, the obstacle avoidance method using NMPC guarantees that the trajectory for avoiding obstacles stays within a suitable range during the collision avoidance procedure and after the avoidance maneuver is executed. This approach exhibits notable precision and stability. In addition, the obstacle avoidance method employed by NMPC ensures that there is a minimum distance maintained between the center of mass of the fish and the center of mass of the obstacle, which is within a range of 1.5\u20132 times the radius of the obstacle. This demonstrates the method\u2019s commendable performance in terms of both effectively avoiding collisions and maintaining stability. This study examines the effectiveness of the NMPC algorithm in the context of obstacle avoidance planning, thereby assessing its robustness. The utilization of NMPC by the robotic fish in order to achieve trajectory tracking and plan for obstacle avoidance exhibits certain constraints, which suggests potential avenues for future enhancements. One of the main challenges associated with NMPC approaches is their high computational cost, as they necessitate the recalculation of the control solution at each time step. Therefore, in intricate underwater environments, particularly those that necessitate immediate reactions, the substantial computing complexity could potentially undermine real-time efficacy. The effectiveness of NMPC approaches is contingent upon the precision of the system dynamics models. Nevertheless, the dynamics of underwater environments can be impacted by various factors, including water flow and turbulence. The task of guaranteeing the precision of the model presents a formidable challenge, thus impacting the efficacy of NMPC. One limitation of NMPC methods is their inadequate adaptability to complex underwater environments. In situations characterized by extreme conditions or high complexity, these methods may struggle to effectively respond to various scenarios, particularly when faced with unfamiliar obstacles or rapidly changing conditions.
Consequently, this can result in a decline in the performance of the NMPC algorithm.Future research endeavors may prioritize the refinement of NMPC algorithms, with a particular emphasis on augmenting computational efficiency, mitigating reliance on model correctness, and optimizing control parameters to effectively accommodate intricate underwater settings.Multi-Model Fusion: The integration of diverse sensor data and distinct control models is employed to raise the flexibility of the system to environmental variations, thus augmenting the system\u2019s robustness.The proposed approach involves the integration of deep learning techniques with adaptive control methods, allowing the robotic fish to autonomously modify control strategies by leveraging learnt environmental patterns. This integration aims to enhance the adaptability of the robotic fish in complicated situations.This study intends to achieve trajectory tracking and obstacle avoidance effects by enhancing the NMPC algorithm and implementing it on the nonlinear system of the robotic fish. Initially, the establishment of the global coordinate system and the body-fixed coordinate system, along with the transformation relationship between the body centroid of the fish and these coordinate systems, was undertaken. A kinematic model incorporating acceleration was formulated based on the provided information. This work presents a comprehensive examination of the NMPC approach, focusing on the formulation of obstacle avoidance functional functions and NMPC objective functions. In continuation of the previous discussion, a NMPC controller was developed, integrating control constraints and system constraints in order to guarantee the precision and stability of motion trajectories. The examination of the simulation outcomes revealed a close correspondence between the tracked trajectory of the centroid of the fish and the reference trajectory. When faced with barriers, the fish demonstrated the ability to successfully navigate around them, subsequently returning to the designated trajectory and ultimately reaching the intended destination. A comparative analysis between the PP tracking algorithm and the NMPC tracking algorithm revealed that the latter exhibited superior precision. In contrast to the DWA obstacle avoidance algorithm, the NMPC algorithm demonstrated superior levels of safety and stability. Nevertheless, it should be noted that the NMPC algorithm exhibited a comparatively higher computing burden and lengthier execution duration. Additional investigation is required to enhance and refine the NMPC algorithm in order to achieve optimal performance in the execution of simulation experiments. The primary objective of this study was to conduct a simulation with a solitary fish. 
Subsequent research endeavors may encompass exploring the concept of multi-fish formation control in order to attain trajectory tracking capabilities and effectively avoid obstacles."} +{"text": "The degree of success and effectiveness of the child\u2019s socialization largely depends on the timely formation of social emotions, the ability to understand the emotional states of the participants in the interaction and manage their emotions.Studying the features of understanding the emotional states of peers and adults by children of preschool age with disabilities.The study involved 227 children aged 5-7 attending educational institutions: 95 children without developmental disorders; 73 children with severe speech disorders; 9 children with motor disorders; 25 children with visual impairment ; 15 children with hearing impairment ; 10 children with autism spectrum disorder. The \u201cEmotional Faces\u201d method (Semago) and the method of studying the child\u2019s understanding of tasks in situations of interaction (Veraksa) were used.Tasks for the categorization of emotional states cause difficulties in children with speech disorders, since they require a certain mastery of vocabulary for the designation of emotional states. As a result of limited communication in children, there is a lack of understanding of the meaning, causes and motives of the actions of other people, as well as the consequences of their actions, their impact on others.Preschool children with motor disabilities are inferior to peers without developmental disabilities in accurate verbalization of emotional states, manifested in a primitive description of emotions.Visually impaired preschool children do not have sufficiently clear ideas about socially acceptable actions in communication situations, about ways of expressing relationships with peers and adults.Children with hearing impairment better understand the emotional states of their peers than the states of adults, but they do not know how to show their attitude towards their peers. Difficulties in verbalizing emotions are observed.Children with autism spectrum disorder experience significant difficulties in recognizing various situations of interaction, isolating tasks and requirements set by adults in these situations; children practically did not try to depict an emotion, having difficulty in differentiating it.The research confirmed the assumption that children with disabilities have significant difficulties in differentiating similar emotions, they do not accurately determine the emotional state of their peers and people around them. This paper has been supported by the Kazan Federal University Strategic Academic Leadership Program.None Declared"} +{"text": "We report the feasibility of using gelatin hydrogel networks as the host for the in situ, environmentally friendly formation of well-dispersed zinc oxide nanoparticles (ZnONPs) and the evaluation of the antibacterial activity of the as-prepared composite hydrogels. The resulting composite hydrogels displayed remarkable biocompatibility and antibacterial activity as compared to those in previous studies, primarily attributed to the uniform distribution of the ZnONPs with sizes smaller than 15 nm within the hydrogel network. In addition, the composite hydrogels exhibited better thermal stability and mechanical properties as well as lower swelling ratios compared to the unloaded counterpart, which could be attributed to the non-covalent interactions between the in situ formed ZnONPs and polypeptide chains. 
The presence of ZnONPs contributed to the disruption of bacterial cell membranes, the alteration of DNA molecules, and the subsequent release of reactive oxygen species within the bacterial cells. This chain of events culminated in bacterial cell lysis and DNA fragmentation. This research underscores the potential benefits of incorporating antibacterial agents into hydrogels and highlights the significance of preparing antimicrobial agents within gel networks. These findings suggest that ZnONPs composite hydrogels hold great potential for future applications in the field of biomedical engineering."} +{"text": "Armed conflicts, collective situations of adversity, and gross social injustices cause widespread mental suffering in affected populations. In these crises and conflicts, traditional community support mechanisms are weakened or destroyed. The loss of trust in others and the lack of hope for change undermine social cohesion at the deepest levels of communities. Therefore, it is important not to overlook the psychosocial impacts of social injustice and violence on individuals and society, as doing so undermines other efforts to build peaceful societies. Nevertheless, the use of mental health and psychosocial support (MHPSS) approaches to support social cohesion is still very uncommon. The objective of the proposed intervention in the Ituri province of the Democratic Republic of Congo was to complement the economic recovery activities of the most vulnerable populations with a psychological support approach. This was to ensure more sustainable results in the appropriation of problem management strategies through the strengthening of individual well-being and group support mechanisms. The psychosocial intervention is organized around community psycho-education to sensitize the populations to mental health issues and to promote awareness of their possible suffering in order to provide access to a psychological care system. The protocol included five weekly group sessions designed to strengthen participants\u2019 individual and collective psychological resources. Several indicators were measured to assess the impact on social cohesion. In eight months of intervention between July 2021 and February 2022, 1024 people were able to participate in the psychological support program. 90% of them showed improvement in psychological well-being, daily functioning and resilience. In addition to these very optimistic results on individual aspects, 65% of the participants increased their level of prosocial behaviour. The psychosocial intervention proposed in an area of permanent conflict and adversity was mainly aimed at improving the well-being of people showing signs of distress to make them better able to complete their economic activity projects. The results showed that taking into account the psychosocial dimension not only reduced distress and allowed people to better project themselves into the future, but also promoted prosocial behavior. All these elements contribute strongly to social cohesion. None Declared"} +{"text": "The main objective of this Special Issue is to showcase outstanding papers presenting advanced materials for clothing and textile engineering. Advanced materials have improved the performance of conventional and protective clothing and accelerated the development of e-clothing and smart clothing. The main directions of progress have become visible in recent years and are reflected in the research on and development of new materials and the improvement of their properties.
Special treatment methods for textile materials are being developed to improve their use properties as well as textile care. This is of great importance for textile and clothing engineering.This Special Issue brings together several outstanding articles on a wide range of topics, including the impact of textile production and waste on the environment, the properties of yarns and textile materials, and the wearing comfort of textiles and clothing.Environmental pollution is a major problem and a very important research topic. Microplastics have become one of the major environmental hazards. Thus, it is imperative to investigate the main potential sources of microplastic pollution on the environment. \u0160aravanja et al. gave an overview of washable polyester materials. They stated that the main goals are to produce polyester with optimal structural parameters and to switch to recycled production as much as possible, including the use of the most appropriate agents that protect the structure of polymer while preventing the release of microplastics during washing. Another goal is to optimize the washing and drying processes of synthetic materials .The importance of cleaning is evident in the use of textile materials for protection against electromagnetic radiation, which is suitable for composite structures of garments and for technical and interior applications. The authors evaluated the stability of the materials\u2019 shielding properties against EM radiation after the application of apolar and polar solvents, in synergy with a cyclic process involving the parameters of wet and dry cleaning .The consumption of eco-friendly fibers with a decrease in synthetic fibers, which can reduce pollution generated by the textile industry, was studied in .Nanotechnology, more specifically electrospun nanofibers, has been identified as a potential solution for developing efficient filtration systems. In one study, the electrospinning technique for producing polyamide nanofibers was optimized by varying several parameters, such as polymer concentration, flow rate, and needle diameter. The optimized polyamide nanofibers were combined with polypropylene and polyester microfibers to construct a multilayer and multiscale system with increased filtration efficiency .The comfort of wearing textiles and clothing is the focus of a few papers. \u010cubri\u0107 et al. examined the mechanical properties of polyurethane materials used to construct inflated thermal-expanding insulation chambers that serve as adaptive thermal layers in intelligent clothing, as well as their efficiency in providing thermal protection .An increasing number of companies are developing advanced high-tech sportswear, often with high compression, for professional athletes. Analysis of the effects of sportswear compression on the physiological comfort of athletes is a very important topic .\u00ae as core filaments, were investigated in [The moisture vapor permeability and thermal wear comfort of bamboo and Tencel as environmentally friendly fibers, together with fiber core and polypropylene, nylon, and Coolmaxgated in .The flammability of protective clothing systems for firefighters is increasingly being investigated. Hursa \u0160ajatovi\u0107 et al. presented studies on a clothing system for protection against heat and flames using a fire manikin and systematically analyzed the damage caused after testing .\u00ae flame-retardant material were investigated in one study. 
This research focused on the effects of washing conditions on the effluent composition and durability of the flame-retardant material\u2019s properties and the appearance of Proban\u00ae cotton fabrics [The effects of washing parameters on the thermal properties and appearance of Proban fabrics .Knezi\u0107 et al. analyzed the effect of the number of electrically conductive yarns woven into a fabric on the values of electrical resistance. They observed how the direction of action of the elongation force affects the change in electrical resistance of the electrically conductive fabric, taking into account the position of the interwoven electrically conductive yarns .Ielina et al. studied the geometric parameters of a knitted loop. In this paper, a mathematical description of the coordinates of the characteristic points of the loop and an algorithm for calculating the coordinates of the control vertices of the second-order spline, which determine the configuration of the yarn axes in the loop, are presented .Knitted fabrics are subjected to dynamic loading due to body movements during their use. The influence of elastane content on the elastic properties of knitted fabrics under dynamic loading was investigated in .Stockings must be made of high-quality material and provide comfort to wearers. Stockings that were knitted in a plain single jersey pattern and made with the highest percentage of ring, rotor, and air-spun modal or micromodal yarns of the same linear density in full plating with various textured polyamide 6.6 yarns were investigated in .A very interesting area of research is the new approach to the creation process in fashion design that results from the use of thermal camouflage in the design of clothing. In one study, the main variation factors of thermal images were determined by analyzing their color behavior in a daytime and nighttime outdoor environment in the presence and absence of a dressed human body through the use of a thermal imaging camera .Drape is one of the most important characteristics associated with the quality and attractiveness of a garment. Memon et al. investigated how bending stiffness and thickness affect the drapeability of garment leathers. This study also has practical implications as it can help practitioners better understand these elements and select appropriate materials for garment companies and customers .Petrak et al. presented an analysis of the parameters of a polygonal computer model that affect fabric drape simulations. The fabric drape simulations were performed using the 2D/3D CAD system for computer clothing design on a disk model, which corresponded with the real tests using a drape tester; the aim was to perform a correlation analysis between the values of the drape parameters of the simulated fabrics and the realistically measured values for fabrics .Virtual prototyping is a technique in the apparel development process that involves the use of computer-aided design to develop apparel and their virtual prototypes. Virtual simulations of trousers and real-trouser prototypes were compared to investigate their fit and comfort on scanned and kinematic 3D body models, as well as on a real body. By changing the body posture, the real and virtual body circumferences were changed, which affected the fit and comfort of the virtual and real trousers .Bogovi\u0107 et al. described a study on the use of 3D printed knee protectors intended for wheelchair users. 
The construction of clothing and the 3D modeling of the elements integrated into the garment were interdependent, and the design solutions were found to provide adequate and reusable garments, especially for sensitive target groups such as people with disabilities .The properties of flame-retardant fabrics and the possibility of their finishing in the processes of dyeing and printing were studied. The possibility of reactive printability on protective flame-resistant fabrics varying in the composition of weft threads and weave was studied, and the washfastness of the printed samples was analyzed .Kumpikait\u0117 et al. investigated the distribution of crimp in new jacquard fabric structures combining single- and double-ply weaves and fabric width to provide a method for predicting crimp .The pilling resistance of fashion fabrics is a fundamentally important and common problem when wearing clothes. The aim of one study published in this Special Issue was to evaluate the pilling behavior of linen/silk fabrics with different mechanical and chemical finishes and to determine the influences of the raw materials and the specifics of dyeing and digital printing with different dyes .In another study, the first descriptive bibliometric analysis to study the most influential journals, institutions, and countries in the field of artificial intelligence in the textile industry was conducted. The analysis covered all major areas of artificial intelligence, including data mining and machine learning .Materials has recognized this trend and made possible the publication of this Special Issue. The published articles highlight new trends and address many important issues in the development of advanced materials for textile and clothing engineering. The articles published in this Special Issue show that the field of textile and clothing engineering is experiencing extraordinary development dynamics. We are pleased that the Editorial Office of"} +{"text": "The Publisher retracts the cited article.Following publication, concerns were raised regarding the contributions of the authors of the article. Our investigation, conducted in accordance with Frontiers policies, confirmed a serious breach of our authorship policies and of publication ethics; the article is therefore retracted.This retraction was approved by the Chief Editors of Frontiers in Psychiatry and the Chief Executive Editor of Frontiers. The authors have not responded to correspondence regarding this retraction."} +{"text": "The journal retracts the 2022 article cited above.Following publication, concerns were raised regarding the contributions of the authors of the article. Our investigation, conducted in accordance with Frontiers policies, confirmed a serious breach of our authorship policies and of publication ethics; the article is therefore retracted.This retraction was approved by the Chief Editors of Frontiers in Chemistry and the Chief Executive Editor of Frontiers. The authors do not agree to this retraction."} +{"text": "The journal retracts the 2022 article cited above.Following publication, concerns were raised regarding the contributions of the authors of the article. Our investigation, conducted in accordance with Frontiers policies, confirmed a serious breach of our authorship policies and of publication ethics; the article is therefore retracted.This retraction was approved by the Chief Editors of Frontiers in Psychology and the Chief Executive Editor of Frontiers. 
The authors have not responded to correspondence regarding this retraction."} +{"text": "The diagnosis of penile fracture (PF) is mainly clinical, made from a thorough history and physical exam alone. In the present paper the authors studied the further evidence concerning the diagnostic accuracies of magnetic resonance imaging (MRI) and ultrasound (US) in the diagnostic assessment of patients with suspected PF and concluded that the results of this study suggest that MRI is more suitable to confirm PF and identify the site of the associated tunica albuginea tear, while US is a good tool for ruling out PF."} +{"text": "Glycaemic control is one of the main goals for managing type 2 diabetes. In sub-Saharan Africa and the Democratic Republic of the Congo, studies have reported alarmingly poor control rates. Patients with poor glycaemic control are exposed to complications leading to high cost of care and deteriorated quality of life. In recent studies by our group, we have demonstrated that the prevalence of poor glycaemic control is high and driven by proximal and distal factors in Kinshasa, Democratic Republic of the Congo. Financial constraints impacted many aspects of care at multiple levels, from the Government to persons living with diabetes. Financial constraints prevented good preparation, organization and access to diabetes care. Difficulties in implementing lifestyle changes, lack of health literacy and limited healthcare support also contributed to poor glycaemic control. Through a Delphi study, a group of experts reached a consensus on five potential strategies for improving glycaemic control in the Democratic Republic of Congo, as follows: changing the healthcare system for better diabetes care extended to other noncommunicable diseases, ensuring consistent financing of healthcare, augmenting the awareness of diabetes among the general population and persons living with diabetes, easing the adoption of lifestyle modifications and reducing the burden of undiagnosed diabetes. This paper reflects on the urgent need for an improved management framework for diabetes care in the Democratic Republic of the Congo. Specifically, the Government needs to increase the investment in the prevention and treatment of noncommunicable diseases, including diabetes. Description of factors driving poor glycaemic control for accurate drafting of interventions in Kinshasa, Democratic Republic of Congo. Identification of strategies of interventions for improving glycaemic control in Kinshasa, Democratic Republic of Congo. Description of implications of the research findings for sub-Saharan Africa. The burden of diabetes mellitus is on the increase, and this has made it a significant public health problem globally. Despite its increasing importance, diabetes is not receiving the deserved attention in many countries in sub-Saharan Africa, including the DRC. Diabetes care experiences several challenges, including lack of organization in the health system, insufficient financing for diabetes care and limited information on diabetes complications and genetics. Due to inadequate diabetes care, studies have shown that less than one third of persons with diabetes reached glycaemic targets in sub-Saharan Africa and in the cosmopolitan city of Kinshasa, DRC. Due to the lack of a practical framework for improving glycaemic control among persons living with diabetes in the DRC, we conducted four sub-studies to explore ways of improving glycaemic control in the DRC.
The first sub-study was a systematic review and meta-analysis on the prevalence and factors driving glycaemic control in sub-Saharan Africa (In the DRC, the five main determinants for poor glycaemic control among persons with type 2 diabetes were: financial constraints, lack of affordable healthcare coverage, difficulties in implementing lifestyle changes, lack of health literacy and limited healthcare and social support.Diabetes mellitus is a lifelong and costly disease requiring many life adjustments including financial ones to enable reliable and consistent funding of care . In 2019In the DRC, only a small proportion of the population is covered by health insurance . Most peIn our qualitative exploration of the perspectives of patients (A study in our setting showed that the knowledge of persons with type 2 diabetes was poor (Our study participants reported that they received little support in the management of their type 2 diabetes (Reflecting on the findings of the previous studies of the project, the experts in the Delphi study recommended five strategies for improving glycaemic control among patients with type 2 diabetes: changing the healthcare system for better diabetes care extended to noncommunicable diseases, ensuring consistent financing of the healthcare, augmenting the awareness of diabetes among the general population and the patients, easing the adoption of lifestyle modifications and reducing the burden of undiagnosed diabetes (Achieving better glycaemic control in our environment requires changes in the way the healthcare system operates . CurrentAn effort should be made by the Government to ensure consistent funding of healthcare (It is crucial to ensure that the general public is more aware of diabetes (Lifestyle modifications are the cornerstone of type 2 diabetes management in Africa (Opportunistic screening for diabetes that targets individuals at high risk of the disease such as those with the past history of CVD, family history, obesity, concurrent hypertension and/or other cardiovascular risk factors could therefore identify new cases at early stages of the diseases when complications have not set in. Monitoring the weight gain, assessing the body mass index and measuring the waist circumference for each patient consulting in a primary healthcare facility will contribute to early screening of diabetes. Providing reliable means of laboratory diagnosis as a point of care glycosylated haemoglobin will be important to consider. With prompt lifestyle changes and treatment, early detection could ease the management at diagnosis and delay the occurrence of complications (Most countries in sub-Saharan Africa face similar challenges when it comes to caring for patients with type 2 diabetes. In a broader perspective, improving care of noncommunicable diseases including diabetes requires more involvement from the Government and the society in terms of better financing, health system preparation and prevention. A task force at the regional level could help in monitoring the progress in the diabetes care made by the countries, sharing the information on evidence-based strategies and mobilizing the resources available worldwide ."} +{"text": "The current study describes a case of an aberrant cleido-occipital muscle. In particular, this muscle was arising from the middle part of the clavicle, inserted into the medial part of the upper trapezius muscle, and crossed over the supraclavicular nerves with possible compression of them, especially during shoulder abduction. 
Knowledge of the muscular variability of the posterior cervical triangle is crucial for supraclavicular nerve entrapment syndrome diagnosis and treatment. The appearance of aberrant muscular fascicles may lead to misinterpretation of neck imaging, as well as difficulties during surgical procedures undertaken in the region. The variant musculature of the posterior cervical triangle has rarely been described in the literature as compared to the shoulder region. During a routine neck dissection of an 82-year-old male formalin-fixed cadaver for educational purposes at the Department of Anatomy and Surgical Anatomy of the School of Medicine of Aristotle University of Thessaloniki, an aberrant muscle was found in the supraclavicular region. A\u00a0muscular band was noticed following the removal of the skin and subcutaneous fat of the right posterior cervical triangle, deep to the sternocleidomastoid muscle. It originated from the middle part of the clavicle, approximately 1.6 cm lateral to the sternocleidomastoid origin, and was inserted into the medial part of the upper trapezius muscle portion, blending with its muscle fibers. This accessory muscle crossed over the supraclavicular nerves\u2019 common trunk, and significant nerve compression was presumed to occur during shoulder abduction. The common trunk courses through the posterior cervical triangle along with the rest of the superficial cervical plexus branches; as it approaches the clavicle, it divides into a medial, an intermediate, and a lateral descending branch. In the described case, the aberrant muscle fibers coursed over the supraclavicular nerve common trunk and significant compression was presumed to occur during upper limb abduction, leading to a potential supraclavicular entrapment syndrome. Potential compression of the supraclavicular nerves due to an accessory cleido-occipital muscle of the trapezius has been previously reported in the literature. Awareness of the variant musculature of the posterior cervical triangle is essential for supraclavicular nerve entrapment syndrome diagnosis and treatment. The presence of accessory muscle fibers may lead to misdiagnosis during neck ultrasound and imaging, as well as to difficulties during surgical interventions in this anatomical area."} +{"text": "Now the Editorial Office of Chinese Journal of Lung Cancer has decided to retract it. The editorial office will strengthen checking procedures for the academic publishing specification to avoid the recurrence of such events in the future. Editorial Office of Chinese Journal of Lung Cancer, June 2023"} +{"text": "Acoustic dyadic sensors (ADSs) are a new type of acoustic sensor with higher directivity than microphones and acoustic vector sensors, which has great application potential in the fields of sound source localization and noise cancellation. However, the high directivity of an ADS is seriously affected by the mismatches between its sensitive units. In this article, (1) a theoretical model of mixed mismatches was established based on the finite-difference approximation model of uniaxial acoustic particle velocity gradient and its ability to reflect the actual mismatches was proven by the comparison of theoretical and experimental directivity beam patterns of an actual ADS based on MEMS thermal particle velocity sensors.
(2) Additionally, a quantitative analysis method based on directivity beam pattern was proposed to easily estimate the specific magnitude of the mismatches, which was proven to be useful for the design of ADSs to estimate the magnitudes of different mismatches of an actual ADS. (3) Moreover, a correction algorithm based on the theoretical model of mixed mismatches and quantitative analysis method was successfully demonstrated to correct several groups of simulated and measured beam patterns with mixed mismatches. Sound signal is a very important information resource, and accurate measurement of sound signal is of great value in daily life and industrial production fields. Measurement of directionality of sound field is to use the directivity of acoustic sensors or detection systems to obtain the sound signal in a certain direction, while the sound in other directions is regarded as noise and reduced to the minimum, thus improving the signal-to-noise ratio of the acoustic detection system . AcoustiAccording to the Taylor series expansion theory of sound pressure , completDepending on the order of the measured physical quantity, there are several types of acoustic sensors. A scalar acoustic pressure sensor is omnidirectional (directivity of order zero) and usually used as arrays to achieve directivity . An acouThe directivity index of an ADS with only one uniaxial acoustic particle velocity gradient channel at its acoustic center. At present, the second-order physical quantities of sound field are usually measured using the finite-difference approximation method ,25,26,27According to the type of sensitive units, ADSs can be divided into two types: sound pressure type and particle velocity type. Sound pressure type ADSs, composed of APSs, measure the second-order sound pressure gradient after twice finite-difference approximation of sound pressure . ParticlFinite-difference approximation method is an approximate measurement method, which has measurement errors. The measurement error is greatly influenced by the consistency of the differential units. Therefore, the high directivity of an ADS is extremely dependent on the consistency of its sensitive units.When there is a mismatch between sensitive units, the measurement error of the second-order physical quantity at the acoustic center of the ADS increases , the dirBefore reducing the mismatch, it is necessary to understand the mismatch through analysis. According to the reported literatures ,4,23,25,Silvia examinedAubauer et al. reportedYang et al. verifiedSun et al. studied The above reported literatures have laid a foundation and provided ideas for analyzing the mismatch problem of sensitive units of ADSs. Different types of mismatches were put forward and analyzed which is helpful to improve the understanding of each type of mismatch. In addition, for reducing the mismatches to ensure the high directivity of ADSs, the multi-channel filtering approach proposed by Silvia can helpA theoretical model of mixed mismatches is established to discuss and analyze the situation when all types of mismatches exist at the same time, which makes up for the lack of discussion and analysis of mixed mismatches in the reported literatures.A quantitative analysis method based on directivity beam pattern is proposed for analyzing the mismatches between the sensitive units of ADSs. 
It can help the designer of an ADS conveniently judge what type of mismatch exists in the designed sensor from the directivity beam pattern obtained from the experimental measurement, and easily estimate the specific magnitude of this type of mismatch, so as to better find the shortcomings in the original design scheme and improve it.A correction algorithm for the mismatches between sensitive units of ADSs is proposed, according to the theoretical model of mixed mismatches and the quantitative analysis method based on directivity beam pattern. It successfully corrects the directivity beam pattern with mixed mismatches obtained from simulation and experiment, which verifies the correctness and practicability of the theoretical model and the quantitative analysis method, and also provides a way for ADSs with mismatches to continue to play their high directivity advantages.However, the actual situation is complicated, and various types of mismatches may exist at the same time. Moreover, the above two methods to reduce the influence of mismatches do not make full use of the directivity beam pattern of the sensor, which can most intuitively show the sensor\u2019s performance of pointing out the direction of sound propagation. Following are the important aspects of this study: In this section, the theoretical model of mixed mismatches is established based on the finite-difference approximation model of uniaxial acoustic particle velocity gradient. Then, the change in the directivity beam pattern of an ADS with only one type of mismatch is deduced based on the theoretical model of mixed mismatches, and the influence of various mismatches on directivity beam pattern of an ADS is summarized. Thereafter, the change in directivity beam pattern in the presence of mixed mismatches will be analyzed.x axis is As is shown in x axis components of the acoustic particle velocity of the two points with a spacing of For harmonic plane waves, the According to the finite-difference approximation method, when the spacing Equation :(3)\u2202vx\u2202xThe substitution of (1) and (2) into (3) gives,Equation (4) is the finite difference model of uniaxial acoustic particle velocity gradient. The acoustic particle velocity at the origin point mated as ,(5)vx\u2248pSimplifying Equations (4) and (5), and considering Equations (6) and (7) show that, at certain frequency (x axis components of particle velocity at the two points may deviate from Equations (1) and (2) due to the inconsistency of the two APVSs, even under the condition when x axis components of particle velocity at the two points in Consider an ADS using two actual acoustic particle velocity sensors (APVS) as its sensitive units to measure the particle velocity of the two points in For the ideal case, where Equation (10) is the theoretical model of mixed mismatches for the particle velocity gradient measured by a particle velocity type ADS with the particle velocity sensitive units are mismatched with each other. Equation (10) describes the estimated value of the acoustic particle velocity gradient at the acoustic center of the ADS when the amplitude, phase and axis of the two sensitive units are all mismatched. 
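Although the expressions of Equations (1)-(10) are not reproduced legibly here, the construction they describe, namely two cosine-type particle velocity units separated by a small spacing, with the second unit scaled, phase-shifted and axially rotated relative to the first, and the two outputs differenced to estimate the gradient, can be sketched numerically as follows. The spacing, frequency and mismatch values, and the exact form of the model, are illustrative assumptions rather than the paper's own formulas.

```python
import numpy as np

def ads_beam_pattern(theta, d=0.0036, freq=1000.0, c=343.0,
                     eps_a=0.0, eps_phi=0.0, eps_delta=0.0):
    """Normalized finite-difference beam pattern of a two-unit particle
    velocity ADS with amplitude (eps_a), phase (eps_phi, rad) and axial
    (eps_delta, rad) mismatches applied to the second unit."""
    k = 2 * np.pi * freq / c
    v1 = np.cos(theta) * np.exp(-1j * k * d / 2 * np.cos(theta))
    v2 = ((1 + eps_a) * np.cos(theta - eps_delta)
          * np.exp(1j * (k * d / 2 * np.cos(theta) + eps_phi)))
    pattern = np.abs(v2 - v1) / d
    return pattern / pattern.max()

theta = np.linspace(0.0, 2 * np.pi, 721)
ideal = ads_beam_pattern(theta)                      # close to cos^2(theta)
mixed = ads_beam_pattern(theta, eps_a=0.05,
                         eps_phi=np.deg2rad(2.0), eps_delta=np.deg2rad(3.0))
```

With all three mismatches set to zero the result is essentially the ideal cosine squared pattern; switching the individual mismatches on reproduces the qualitative effects discussed in the following section: a widened lobe for amplitude mismatch, front-back asymmetry for phase mismatch, and filled-in nulls with a deflected axis for axial mismatch.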
From Equation (10), the directivity beam pattern of the sensor can be obtained when there is only a single mismatch, so as to better show the influence of various types of mismatches on the directivity of the ADS.When only amplitude mismatch exists, When From When only phase mismatch exists, When As is shown in When only axial mismatch exists, When As illustrated in Summarizing the effects of the above mismatches on the directivity beam pattern of an ADS, we can get When there are two types of mismatches between two sensitive units at the same time, their normalized directivity beam patterns of the particle velocity gradient derived from Equation (10) are shown in From Comparing Equations (11) and (13), we can find that both amplitude mismatch and axial mismatch will have an impact on the amplitude of the measured particle velocity gradient, so there will be mutual coupling between amplitude mismatch and axial mismatch when they exist at the same time. This can be also seen from When all three types of mismatches between the two particle velocity sensitive units exist, the normalized directivity function of the particle velocity gradient derived from Equation (10) can be written as:The directivity beam patterns of the particle velocity gradient at the acoustic center of the ADS with combinations of different mismatch magnitudes are shown in It can be seen from In our previous work , we propTwo MEMS TAPVS chips can be used as particle velocity sensing units to form a particle velocity type ADS, as shown in The ADS\u2019s 2D temperature distribution in As shown in The particle velocity gradient can be obtained by the difference between the outputs of two TAPVSs and its variation with the angle of sound incident direction From As mentioned in It can be understood from Therefore, we define the difference between the ratio of the ideal cosine squared beam pattern and the ratio of the beam pattern with amplitude mismatch As seen in Equation (15), for a certain When When When When From It can be also seen from We use the Difference of Axial Sensitivity (DAS) to quantitatively describe the asymmetry caused by the phase mismatch As seen in Equation (16), for a certain When When When When When From Therefore, we define the ratio of the normalized directivity function when the incident angle As seen in Equation (17), for a certain When When When When When From From the above analysis, we can see that, for a certain By using the three quantitative parameters in Here, we are going to use two actual ADSs as examples to illustrate the usage of this quantitative analysis method for mismatches between the sensitive units of an ADS based on directivity beam pattern. 
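Before turning to the two example sensors, the three diagnostics can be illustrated with stand-in formulas. Because the exact expressions of Equations (15)-(17) are not legible here, the sketch below uses assumed definitions that only capture the stated intent: BWI compares the measured response with the ideal cosine squared lobe at a reference off-axis angle, DAS measures the asymmetry between the two axial maxima, and IAR measures how far the ideal null at 90 degrees has been filled in.

```python
import numpy as np

def beam_diagnostics(theta_deg, response, ref_angle=45.0):
    """Stand-in estimates of BWI, DAS and IAR from a sampled beam pattern.
    theta_deg is expected to cover 0-360 degrees."""
    theta_deg = np.asarray(theta_deg, dtype=float)
    r = np.asarray(response, dtype=float)
    r = r / r.max()                                   # normalize measurement

    def at(angle):
        return r[np.argmin(np.abs(theta_deg - angle))]

    bwi = at(ref_angle) - np.cos(np.deg2rad(ref_angle)) ** 2   # lobe widening
    das = abs(at(0.0) - at(180.0))                             # axial asymmetry
    iar = at(90.0) / max(at(0.0), at(180.0))                   # filled-in null
    return {"BWI": bwi, "DAS": das, "IAR": iar}
```

On a measured pattern, elevated BWI, DAS or IAR values would point, respectively, toward amplitude, phase or axial mismatch, which mirrors the qualitative reading described in the text.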
One ADS is shown in As seen in The deviation of the results shown in Calculate the ratio of the measured output data when the incident angle Calculate the difference between this ratio and the ratio of the ideal cosine squared beam pattern, which is similar to Equation (15).First, for amplitude mismatch, the actual Beam Width Increase (BWI) can be calculated by the following steps, which is also illustrated using Equation (18):For the measured beam pattern shown in Next, for phase mismatch, the actual Difference of Axial Sensitivity (DAS) can be calculated as follows:For the measured beam pattern shown in At last, for axial phase mismatch, the actual Ideal Axial Ratio (IAR) can be calculated as follows:For the measured beam pattern shown in The magnitude of each type of mismatch of the ADS from can be eSubstituting the estimated As seen in Therefore, the designer of an ADS can use this quantitative analysis method to conveniently judge what type of mismatch exists in the designed sensor from the measured directivity beam pattern, and easily estimate the specific magnitude of this type of mismatch, and then use this estimation to help improve the original design scheme.In the actual development process of this kind of new acoustic sensor, the ADS, it is inevitable that the differential units will be mismatched; if we can continue to use them and ensure their high directivity to some extent, it will be of great significance. In this section, a correction algorithm for the mismatches between the sensitive units of ADSs will be proposed, and both simulation results and experimental results will be used to show the correction ability of the algorithm.According to the quantitative analysis method in Import and preprocess the experimental data. The preprocess is mainly to normalize the voltage data output from the experiment.Determine the quantitative parameters in Estimate the magnitude of the mismatches, Obtain the theoretical directivity beam pattern with such mismatches.Output the beam pattern after reducing the difference from the ideal shape.Following this idea, the correction algorithm can be divided into five steps:Step 1 is easy to understand. The implementation of Step 2\u20134 is shown in A program based on Mathematica has been developed to verify the correction ability of the algorithm according to the above algorithm steps.Several groups of different spacing As seen in Another ADS was developed, which is also a particle velocity type ADS based on MEMS thermal APVSs. The spacing The comparison of the directivity beam patterns of the three actual ADSs mentioned in this article before and after correction is shown in As seen in For a certain mismatch, the three quantitative parameters in The correction of simulation results and experimental results show the ability of the designed correction algorithm for mismatches. It verifies the correctness and practicability of the theoretical model and quantitative analysis method mentioned in In this article, the impact of different types of mismatches of the two sensitive units of an acoustic dyadic sensor (ADS), a new type of acoustic sensor with high directivity, on the directivity beam patterns were analyzed and discussed in detail from the perspectives of theory, simulation and experiment. A theoretical model of mixed mismatches was established based on the finite-difference approximation model of uniaxial acoustic particle velocity gradient. 
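One way to realize Steps 2-4 of the correction flow just described is to fit the three mismatch magnitudes to the normalized measurement by least squares over a forward model of the mismatched sensor and then to subtract the modeled distortion from the measurement. The forward model, the use of SciPy's least_squares routine, and the subtraction step below are assumptions made for illustration; the verification program reported in the article was written in Mathematica.

```python
import numpy as np
from scipy.optimize import least_squares

def model(theta, eps_a, eps_phi, eps_delta, kd=0.066):
    """Forward model of a mismatched finite-difference beam pattern."""
    v1 = np.cos(theta) * np.exp(-1j * kd / 2 * np.cos(theta))
    v2 = ((1 + eps_a) * np.cos(theta - eps_delta)
          * np.exp(1j * (kd / 2 * np.cos(theta) + eps_phi)))
    p = np.abs(v2 - v1)
    return p / p.max()

def correct_pattern(theta_deg, measured):
    theta = np.deg2rad(np.asarray(theta_deg, dtype=float))
    meas = np.asarray(measured, dtype=float)
    meas = meas / meas.max()                                   # Step 1: normalize
    fit = least_squares(lambda x: model(theta, *x) - meas,     # Steps 2-3: estimate
                        x0=np.zeros(3))
    modeled = model(theta, *fit.x)                             # Step 4: theoretical pattern
    ideal = np.cos(theta) ** 2
    corrected = np.clip(meas - (modeled - ideal), 0.0, None)   # Step 5: remove distortion
    return fit.x, corrected
```

The fitted vector gives the estimated amplitude, phase and axial mismatches, and the corrected pattern can then be compared against the ideal cosine squared shape, as is done for the simulated and measured beam patterns in the results.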
The change in directivity beam pattern of an ADS with only one type of mismatch was deduced based on the theoretical model of mixed mismatches. Additionally, the effects of various mismatches on the directivity beam pattern of an ADS was summarized: (1) the main effect of amplitude mismatch is to increase the beam width. (2) the main effect of phase mismatch is to create asymmetry. (3) the main effect of axial mismatch is to make the concave points bulge and sensitive axis direction deflected. After that, the change in directivity beam pattern in the presence of mixed mismatches was analyzed by providing the normalized directivity function and the directivity beam pattern. An actual ADS was demonstrated by using MEMS thermal acoustic particle velocity sensors as sensitive units. Additionally, the possible mismatches of the actual ADS can be qualitatively obtained by comparing the measured directivity beam patterns with the theoretical model of mixed mismatches. A quantitative analysis method based on directivity beam pattern was proposed to easily estimate the specific magnitude of the mismatches. Three quantitative parameters, Beam Width Increase (BWI), Difference of Axial Sensitivity (DAS) and Ideal Axial Ratio (IAR) were used or defined with equations based on the theoretical model of mixed mismatches proposed in this article. An actual ADS with spacing of 3.6 mm based on MEMS thermal APVS chips was developed to be used as an example to illustrate the usage of this quantitative analysis method. By using the three quantitative parameters, BWI, DAS and IAR, the magnitude of different types of mismatches, Lastly, a correction algorithm based on the theoretical model of mixed mismatches and quantitative analysis method proposed in this article, for the mismatches between sensitive units of ADSs, was successfully demonstrated to correct several groups of simulated and measured beam patterns with mixed mismatches. However, there are still differences between the correction beam patterns and the ideal cosine squared shape, especially for the correction of the measured results. Therefore, the correction algorithm proposed in this article is not optimal. From the experimental point of view, it may be because the position of the sensor is not completely fixed during the measurement process, and the test sound field is not a perfect plane wave. It might be improved by improving the fixture and testing in an anechoic chamber. From the point view of the algorithm, there are two possible ways to improve the algorithm in future research: (1) optimize or redefine the three quantitative parameters , so that they can represent the deviation of directivity beam patterns more accurately; and (2) find a more accurate method to define the difference between the theoretical beam pattern with mixed mismatches and the ideal cosine squared beam pattern."} +{"text": "Background: The recent worldwide outbreak of Mpox virus infections has raised concern about the potential for nosocomial acquisition during handling of contaminated bedding or clothing. We conducted simulations to test the hypothesis that decontamination of bedding prior to handling could reduce the risk for contamination of personnel. Methods: We conducted a crossover trial to test the effectiveness of spraying contaminated bedding with a hydrogen peroxide disinfectant in reducing contamination of personnel during handling of the contaminated bedding. Bedding was contaminated on top and bottom surfaces with aerosolized bacteriophage MS2. 
Personnel (N = 10) wearing a cover gown and gloves removed the bedding from a patient bed and placed it into a hamper both with and without prior hydrogen peroxide spray decontamination. After handling the bedding, samples were collected to assess viral contamination of gloves, cover gown, neck or chest, and hands or wrists. Results: Contamination of the gloves and cover gown of personnel occurred frequently during handling of bedding and 20% of participants had contamination of their hands or wrists and neck after the simulation . Decontamination of the bedding reduced contamination of the gloves and eliminated contamination of the cover gown, hands or wrists, or neck. Conclusion: Decontamination of bedding prior to handling could be an effective strategy to reduce the risk for nosocomial acquisition of Mpox by healthcare personnel.Disclosures: None"} +{"text": "Sir,The first successful IVF birth in 1978 laid the foundation for the selection of human embryos prior to their transfer into a uterus. The optimization of this selection process is viewed as crucial to counter the low success rates of IVF, which is considerably affected by the failure of the majority of the transferred embryos to implant. What originally started with non-invasive morphological quality assessment and morphokinetic analysis of embryos, progressed to aneuploidy screening and testing for single gene disorders on biopsied embryonic cells and has already reached the level of screening for polygenic conditions and traits (in vitro from induced pluripotent stem cells generated from parental skin cells. Even functional stem cell-derived oocytes from male mice have already produced fertile offspring (in vitro derived oocytes and sperm to human reproductive medicine within the next few years (First, the approaches mentioned above can only reveal their true benefits if patients have a high number of embryos to choose from. In the future, the low number of natural oocytes matured upon hormone stimulation might be superseded by limitless quantities of gametes developed Second, as also highlighted by Without doubt, the convergence of these technologies will herald a new era of embryo selection in reproductive medicine with putative consequences for our idea of human origins. For the benefit of both patients and clinicians, its foreseeable clinical translation must be accompanied by an intensive ethical deliberation ("} +{"text": "Aerobiology covers both the ecological study of bioaerosols and the subsequent interaction in the natural and built environment but also the ultimate disposition in biological systems [It is simultaneously professionally humbling and an absolute pleasure to be associated with the launch of a new open access journal, with added emphasis in a scientific field as rich and diverse as aerobiology. The field of aerobiology encompasses the living microbiologically diverse world in our collective atmosphere, spanning to both the ecological phenomena and the anthropomorphic effects of a shared biology in the air. The conception of the Journal is intended to be unique amongst peer publications\u2014the scope of systems , with emThere is a strong argument to be made for an aerobiology Journal whose scope spans both ecological and human health. The microbiological diversity in what is transported in our collective atmosphere drives many processes in the ecology of the planet and how our ever-changing environment impacts human health. 
Identification and characterization of the changes associated with the airborne transport of bioaerosols provide touchstone data on the effects of climate change on the planetary environment and on agriculture. The COVID-19 pandemic provides an idealized exemplar of how rapid dissemination of scientific information in specific areas of inquiry was needed for critical decision-making and, ultimately, for adding to and expanding academic knowledge of the subject matter. In the case of the worldwide emergence of SARS-CoV-2, the recognition that the virus was indeed airborne was built upon data, methods and critical thinking that decidedly fell within the field of aerobiology. It was only after the understanding that the emergent virus remains replication competent in aerosol suspensions that longer-distance transmission was accepted as possible. Aerobiology (ISSN 2813-5075), the journal, will provide an essential venue for the healthy exchange of prevailing theory and associated discourse in this much understudied subject matter area. Aerobiology fills a void in current publications, as the Journal's scope spans both the life and health sciences, whereas other aerobiology-centric journals mainly cover the life sciences. Those of us engaged in the curation of aerobiology data should care about the growth and vitality of this publication and ensure its success by contributing our work without delay. An exceptional group comprises the growing editorial board of Aerobiology, whose deep expertise spans numerous applied subjects in this exciting field of study. There is palpable excitement at the prospect of submission and review of contributions from a diverse group of engaged contributors throughout the scientific world community. I would like to thank everyone for their continued interest in the field of aerobiology, and I am looking forward to working towards making Aerobiology one of the marquee open access publications in the sciences."} +{"text": "It is well established that the quality of mental health care and human rights are mutually reinforcing. To date, Georgian psychiatry remains highly institutionalized, oriented towards medical treatment, and suffers from a lack of recognition of the importance of the human rights concept. The purpose of the evaluation was to gather information on the current state of human rights and service quality in the inpatient mental health facilities throughout Georgia; to pilot the WHO Quality rights toolkit as a major instrument to monitor mental health institutions within the country; and to develop recommendations for improvement of service care in psychiatric institutions and initiate changes based on the assessment results. All inpatient mental health facilities operating in the country were selected for the evaluation. The assessment team conducted visits in facilities in March \u2013 May, 2019. All visits were planned in advance. All five themes of the WHO Quality rights tool were covered. Interviews, observation and documentation reviews were used during the assessment process. Infrastructure malfunction is linked to the lack of an encouraging environment, with a scarcity of daily and social activities. A comprehensive, patient-oriented individual recovery plan has not been initiated anywhere in the country. Treatment is focused mainly on medication aimed at reducing or removing psychotic symptoms and discharging patients in a timely manner, or on \u201ccalming them down\u201c.
Taking into consideration the scarcity of community-based service alternatives, the patients frequently have no choice about where to get the relevant service. In general, the patients are satisfied with how they are being treated. The challenge is the incidents of violence among the patients and ensuring relevant safety measures. Educational and employment programs for persons with mental disorders are not developed in the country. Based on the assessment findings, recommendations for improvement of service care at the mental health policy and institutional levels were elaborated. Despite some improvements in developing community services, the assessment revealed gaps in mental health care and a lack of understanding of the concept of human rights. The instrument was sensitive in identifying poor treatment and violations of rights but less sensitive in determining differences between existing services. It is discussed that an in-depth assessment using the specific themes of the tool can help develop specific recommendations. None Declared"} +{"text": "An understanding of tooth morphology provides better insight and is an important foundation for providing successful endodontic and restorative treatment. It also provides a deeper evaluation and baseline for future prosthetic rehabilitation. The study aimed to measure and characterize mandibular molars using a normal divider and measurement scale on mandibular casts. Over 300 mandibular first molars were measured in dimensions including the buccolingual width, mesiodistal width, mesiobuccal height, mid-buccal height, and distobuccal height; the same measurements were repeated digitally with intra-oral scanners. The mean and standard deviation of the measurements and the correlation to age and gender were calculated using SPSS software in order to fabricate implant-specific strip crowns. The different ethnicities and diverse genetic mixture in the population of India provide wide scope for evaluating the influence of genetics on the morphological heights and widths of teeth. Implantology is a field of precision; the amount of skill and evaluation required of the practitioner is a basis for successful prosthetic rehabilitation. The success of an implant is determined by the precision of the placement, evaluation of the space available for prosthetic rehabilitation, the amount of calibration required of the practitioner and awareness of the dimensions of the teeth being replaced. For the fabrication of the casts, perforated stock trays were selected carefully for proper coverage of all the teeth, and the borders of the tray were evaluated for ideal sulcus depth. The impressions of the mandibular arches were made using putty and light-body addition silicone following the manufacturer's instructions; impressions were then disinfected using a 2% glutaraldehyde solution for 5 minutes and were poured using type 4 dental stone. The cast was removed from the impression after 30 minutes. The base of the cast was formed and a serial number was assigned for identification. The patients were subjected to intra-oral scanning, and the scans were transferred to the 3shape software, in which the measurements were performed. The buccolingual and mesiobuccal width was measured and cross-verified with the manual measurements of the same casts. The digital scan measurement and evaluation were done and correlated with the manual measurements.
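To illustrate the kind of analysis described above, namely mean and standard deviation for each crown dimension, agreement between the manual and digital measurements, and grouping of the molars into three sizes, a short Python sketch is given below. The column names, the CSV file, and the use of a paired t-test and a tertile split are hypothetical stand-ins for the SPSS workflow reported in the study.

```python
import pandas as pd
from scipy import stats

dims = ["buccolingual", "mesiodistal",
        "mesiobuccal_h", "midbuccal_h", "distobuccal_h"]

df = pd.read_csv("molar_measurements.csv")                  # hypothetical data file
print(df.groupby("method")[dims].agg(["mean", "std"]))      # manual vs digital summary

manual = df[df["method"] == "manual"].set_index("case_id").sort_index()
digital = df[df["method"] == "digital"].set_index("case_id").sort_index()
for d in dims:
    t, p = stats.ttest_rel(manual[d], digital[d])           # paired manual/digital test
    print(f"{d}: t = {t:.2f}, p = {p:.3f}")

# Tertile split of the manual buccolingual width into the three crown sizes
manual["size_class"] = pd.qcut(manual["buccolingual"], 3,
                               labels=["small", "medium", "large"])
print(manual["size_class"].value_counts())
```

A non-significant paired test for every dimension would correspond to the finding that the manual and digital methods do not differ, after which the manual values can serve as the basis for the small, medium and large crown groups.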
Similar to the manual method, mesiobuccal, mid-buccal, and distobuccal heights and buccolingual and mesiodistalwidth were measured and correlated against manual measurements, and the results were obtained. ,18The measurements were taken by a single operator. Each value was measured three times and a mean of the values was noted. All measurements were made using vernier calipers and calculated using a divider and scale for accuracy of the results and the samemeasurements were done using digital scans of the same cast and using 3shape software for individual measurements.,20Using these measurements, mean values were calculated. Three different sizes of mandibular molars were fabricated for different sizes of teeth for the south Indian population. Following that, the designing of strip crowns was done and STL(stereo lithography) files were made. The fabricated STL files were then 3-D printed using flexible resin and then used for the fabrication of temporary crowns. ,22SPSS software was used to calculate the mean age, buccolingual, mesiodistal width, and mesio-buccal, mid-buccal, and disto-buccal heights. The distribution of data was analAfter evaluation of data using the SPSS software of both digital and manual methods, the software revealed no significant difference between the measurements done using manual and digital methods. Henceforth, the study was done using the manual methodas the base for further calculations.The mean of all dimensions calculated with standard deviation was correlated with age group and evaluated with the highest frequency distribution. Attempts have been made to reduce possible sources of error and bias. The limitation of the study is the small sample size and restriction to the mandibular first molar tooth particularly. The amount of human error has been reduced by making sure thecollection of data done by one operator. The margin of error of the instrument might be inherently present but it doesn't seem to affect the data at a larger scale. The results of the present study are generalizable to the South Indian population and SouthAsian populations. The sample collection for this study has been done from the general population presenting to the OPD of Saveetha Dental College, Chennai, Tamil Nadu. A study repeated in the same population with a different sample should yield similarresults. Subjects with different ethnicities may yield different results.et al. showed the analysis of maxillary anterior teeth crown width-height ratios: a photographic, three dimensional, and standardized plaster model. The study analyzed width and height ratios of maxillary anteriorteeth at different crown levels using photography and models were constructed for width and height analysis. The study was limited by its design and included only teeth in the esthetic zone. Also, the ratios described in the study provided an idea about theratio of height and width. It did not take into consideration the variability of height and width and different places of the tooth. Also, fabrication of ratios meant the tooth was considered similar to a cuboid with uniform dimensions which is not the casemorphologically. This study henceforth includes dimensions of the teeth considering five different locations including both height and width of the crown. 
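As a rough illustration of the agreement check described above, the following sketch compares paired manual and digital measurements and then groups crowns into three sizes. It is not the authors' SPSS workflow: the data are simulated, the column names are invented, and a paired t-test plus tertile split is only one plausible way to reproduce the kind of analysis reported.

```python
# Hedged sketch (not the authors' SPSS analysis): paired manual vs. digital crown
# measurements on made-up data, followed by a simple three-way size grouping.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(7)
n = 300  # the study reports roughly 300 mandibular first molars

# Hypothetical paired measurements (mm) for one dimension, e.g. buccolingual width
manual = rng.normal(loc=10.5, scale=0.6, size=n)
digital = manual + rng.normal(loc=0.0, scale=0.05, size=n)  # small scanner deviation

df = pd.DataFrame({"manual_bl": manual, "digital_bl": digital})

# Paired t-test: is there a systematic difference between the two methods?
t_stat, p_val = stats.ttest_rel(df["manual_bl"], df["digital_bl"])
print(f"mean manual = {df['manual_bl'].mean():.2f} +/- {df['manual_bl'].std():.2f} mm")
print(f"paired t-test: t = {t_stat:.2f}, p = {p_val:.3f}")

# Tertile-based grouping into small / medium / large crowns for strip-crown sizing
df["size_class"] = pd.qcut(df["manual_bl"], q=3, labels=["small", "medium", "large"])
print(df.groupby("size_class", observed=True)["manual_bl"].agg(["count", "mean", "std"]))
```

A non-significant paired test on such data would support pooling the manual and digital values, which is consistent with the study's decision to base further calculations on the manual measurements.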
Also, the current study serves as a base for characterization of the tooth into three different sizeswhich helped in the fabrication of strip crowns for the development of temporary crowns serving as an ideal to lay down the base for the values received from the study. [In 2022, Naseer Ahmed e study. et al. in his article titled correlation between clinical parameters of crown and gingival morphology of anterior teeth and periodontal biotypes, included quantitative analysis of clinical parameters of crown andgingival morphology of maxillary anterior teeth. They also analyzed the correlation of these parameters with periodontal biotypes to provide objective standards for periodontal biotype diagnosis. The study has focused on the determination of crown heightand width and correlating it with the gingival biotype of the patient. It provides an insight into how the crown and gingival biotype is interlinked and hence the adaptation of the concept to evaluate the soft tissue formation around an implant when thedimensional accuracy of the temporary crowns is higher and can be evaluated which was the core of the current study. The insights of how the crown dimensions revealed the correlation between cervical gingival margins of mandibular anterior teeth on bothsides and how they are symmetrical and thin biotype accounts for small proportions. Gingival angle of 95.95 and papilla width of 10.01mm are the optimal cutoff values for characterization of individuals as thick biotypes. This data provides valuable insightfor practitioners to classify the gingival biotype of the patient and the expected results post-final prosthesis insertion in the oral cavity of the patient. But the study came with similar limitations that it focused only on anterior teeth and restrictedsample size. [In 2020, Xiao Jie Yin le size. et al. in his article titled comparison of maxillary anterior teeth crown ratio between genders in the Laotian population. It aimed to determine the distribution of facial types and to compare the crownwidth/length ratio of six maxillary anterior teeth between males and females in the Laotian population. The study characterized facial types and crown width and length ratio were taken. The study also focused only on anterior teeth and the ratios did notprovide specific variations in height and width through the mesiodistal length of the tooth. [In June 2015, Phonepaseuth e tooth. The studies mentioned above had restrictions to anterior teeth and the aesthetics concerns were limited to anterior tooth region. This study provides an insight to the esthetic needs of posterior teeth and a method to reduce the time involved intemporization of the teeth following implant. This study also takes into consideration the different techniques of impression making and that the difference in manual and digital methods was not significant when providing the measurements of mandibularfirst molar. It also helps in characterizing mandibular molars into small, medium and large for fabrication of temporary shells.The raw data used to support the findings of this study are included in the article and will be made available on request.This study revealed the data collected using manual and digital method as not having any significant difference and helps characterization of mandibular molars into three different sizes. 
Also age and gender plays a significant role in the size oftooth and morphology of the tooth."} +{"text": "Buckled graphene has potential applications in energy harvest, storage, conversion, and hydrogen storage. The investigation and quantification analysis of the random porosity in buckled graphene not only contributes to the performance reliability evaluation, but it also provides important references for artificial functionalization. This paper proposes a stochastic finite element model to quantify the randomly distributed porosities in pristine graphene. The Monte Carlo stochastic sampling process is combined with finite element computation to simulate the mechanical property of buckled graphene. Different boundary conditions are considered, and the corresponding results are compared. The impacts of random porosities on the buckling patterns are recorded and analyzed. Based on the large sampling space provided by the stochastic finite element model, the discrepancies caused by the number of random porosities are discussed. The possibility of strengthening effects in critical buckling stress is tracked in the large sampling space. The distinguishable interval ranges of probability density distribution for the relative variation of the critical buckling stress prove the promising potential of artificial control by the atomic vacancy amounts. In addition, the approximated Gaussian density distribution of critical buckling stress demonstrates the stochastic sampling efficiency by the Monte Carlo method and the artificial controllability of porous graphene. The results of this work provide new ideas for understanding the random porosities in buckled graphene and provide a basis for artificial functionalization through porosity controlling. Buckling is one of the most common phenomena in low-dimensional nanomaterials. Generally, buckling induces delamination , instabiThe causes of bucking in graphene vary . BendingFor the porosities or the local atomic vacancy defects in graphene, the conventional and analytical deterministic models have a shortcoming in the random distribution quantification and computational cost . As mentCompared with molecular dynamics, the finite element model of graphene has competitive merits in computational costs of buckling analysis, especially for the tremendous stochastic sampling procedures . MoreoveIn this paper, a stochastic finite element model is proposed for the efficient quantification and propagation of the random porosities in buckled graphene. The main contents include the graphene geometrical characterization, method description, and result discussion. The buckling patterns of porous graphene are provided and compared under different boundary conditions. Based on the huge sampling space provided by the stochastic finite element model, the discrepancies caused by the numbers of random porosities are discussed. The work provides an important reference for the exploration of buckled graphene in energy harvest, hydrogen storage, and artificial functionalization.The carbon atoms in graphene are combined with the solid covalent bonds to form the periodic honeycomb lattice, which results in extraordinary material and mechanical properties. In order to describe the special periodic microstructure, the beam element is used to simplify the carbon covalent bonds in the finite element model. The corresponding material and geometrical parameters are settled according to the specific values reported in the literature ,31.P), aN) is a deterministic value. 
The number of the vacancy defect atoms P and aN.In addition, the percentage of random porosities is settled as in nditions , the in-In addition, the boundary condition (B3) in the third case is more complicated than that in B1 and B2. As presented in Furthermore, the implementation program and numerical computation are performed by the integration of Matlab (Version 2020a) and ANSYS (Version 14.5) commercial software. The corresponding parameter definition, parameter value assignment, and Monte Carlo stochastic sampling are completed under the Matlab programming environment. ANSYS parameter design language (APDL) is used to create the finite element model for porous graphene and perform mechanical computation for elastic buckling. The flowchart of the stochastic finite element model by the combination of Monte Carlo stochastic sampling with the finite element computation is shown in In The critical buckling stress is the key factor in controlling the elastic recycling process efficiently . The eigIn order to compare the impacts of boundary conditions, the buckling patterns of both the porous graphene and the pristine graphene under different boundary conditions are presented in Under the third and fourth boundary conditions, the effects of random porosities in the buckling patterns are more apparent than those under the above-mentioned two boundary conditions. In The buckling patterns provide the deformation shape of porous graphene under compressive loads. For the hydrogen storage and energy harvest by the porous graphene, besides the buckling patterns, the critical buckling stress is the key factor to operate and control the process. The critical buckling stress is the compressive load at which the structure suddenly buckles or loses stability. Therefore, the critical buckling stress calculation for porous graphene needs further exploration. On the other hand, the randomly distributed porosities inevitably exist in the graphene sheets and nanoribbons with the current research and manufacturing technologies. The discrepancies caused by the numbers of random porosities are also essential to be analyzed and discussed.In order to further analyze the effects of random porosities in buckled graphene, the randomly distributed porosities in graphene are propagated by the repeated Monte Carlo stochastic sampling procedure. Different percentages of random porosities and boundary conditions are performed in the stochastic finite element model. The percentages of random porosities propagated in graphene are 0.2, 0.5, 1, 3 and 5%, respectively, and they are represented by P1, P2, P3, P4, and P5.In general, the atomic vacancies lead to the reduction of critical buckling stress compared with that for the pristine graphene under all four boundary conditions, since the mean of the relative value is less than one as in x and y axes are dimensionless units. The x-axis is relevant to the percent of atomic vacancy defects; the y axis corresponds to the relative value between the results of porous graphene and pristine graphene. In Furthermore, the fluctuation levels of the critical buckling stress caused by the location randomness of atomic vacancy defects are also computed by comparison with the pristine graphene. In According to the computational results of the stochastic finite element model, the specific situations caused by the location distribution of atomic vacancies are also tracked. 
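The sampling scheme described above (a fixed vacancy percentage, random vacancy locations, one buckling solve per sample) can be summarized in a few lines. The sketch below is schematic only: the paper couples Matlab with ANSYS APDL, so the eigenvalue buckling solve is replaced here by an explicitly labeled stand-in, and all function names and numbers are assumptions for illustration.

```python
# Schematic Monte Carlo loop for randomly distributed porosities; the real critical
# buckling stress comes from an ANSYS APDL eigenvalue analysis, not from this stub.
import numpy as np

rng = np.random.default_rng(0)

def solve_critical_buckling_stress(removed_atoms, n_atoms):
    """Stand-in for the beam-element graphene buckling analysis (ANSYS in the paper).
    Returns a toy value so the sampling loop can be exercised end to end."""
    return 1.0 - 0.5 * len(removed_atoms) / n_atoms + rng.normal(0.0, 0.005)

def monte_carlo_buckling(n_atoms, porosity_fraction, n_samples):
    """Deterministic vacancy count, random vacancy locations; collects critical
    buckling stresses relative to the pristine sheet."""
    n_vac = int(round(porosity_fraction * n_atoms))
    sigma_pristine = solve_critical_buckling_stress(np.empty(0, dtype=int), n_atoms)
    rel = []
    for _ in range(n_samples):
        removed = rng.choice(n_atoms, size=n_vac, replace=False)  # random defect sites
        rel.append(solve_critical_buckling_stress(removed, n_atoms) / sigma_pristine)
    return np.asarray(rel)

samples = monte_carlo_buckling(n_atoms=4000, porosity_fraction=0.01, n_samples=500)
print(samples.mean(), samples.std(ddof=1))
```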
As presented in In order to more precisely analyze the random porosities in the buckled graphene, the probability density distribution of the bucking critical stress under different boundary conditions is computed and presented in The differences in the probability density distribution of the relative variation for critical buckling stress are also presented in In addition, the shapes of the probability density distribution in The buckling patterns and critical stress of porous graphene are sensitive to boundary conditions.The randomly distributed atomic vacancy defects in porous graphene destroy the regularity and symmetry of buckling patterns.The possibility of strengthening effects in critical buckling stress is tracked under the first, third, and fourth boundary conditions.The distinguishable interval ranges of probability density distribution for the relative variation of the critical buckling stress prove the promising potential of artificial control by the atomic vacancy amounts.It is a potentially feasible method to improve the hydrogen storage and release performance by the adaptation and change in the boundary condition of the porous graphene.The approximated Gaussian density distribution of critical buckling stress demonstrates the stochastic sampling efficiency by the Monte Carlo method and the artificial controllability of porous graphene.The results of this work provide new ideas for understanding the random porosities in buckled graphene and provide a basis for energy harvest, hydrogen storage, and artificial functionalization.This paper proposes an efficient numerical model for the random porosity quantification and propagation in pristine graphene. Based on the huge sampling space of the stochastic finite element model, the following points can be concluded:"} +{"text": "Following the publication of this article , concernThe University of California, Davis confirmed that author DPW admitted to manipulation of the data underlying the results presented in Figs 6 and 7.PLOS Genetics Editors issue this Expression of Concern to notify readers of the above issues.The authors are working with PLOS to try and address these issues. Meanwhile, the"} +{"text": "This study provides a detailed, in-depth analysis of the anatomy, topography, and branching patterns of the meningeal arteries in dromedary camels, a subject that has not previously been thoroughly studied in animals, providing insight into the intricate biological adaptations that allow them to survive in harsh environments. By precisely examining 20 heads obtained from freshly slaughtered dromedaries, we revealed the origins and topologies of the rostral, middle, and caudal meningeal arteries using advanced casting techniques for precise rendering. Our findings indicate that the rostral meningeal artery derives from the external ethmoidal artery and primarily supplies the rostrodorsal region of the frontal lobe. The middle meningeal artery provides blood to approximately two-thirds of the brain meninges. The caudal meningeal artery is derived from the occipital artery and supplies the meninges covering the cerebellum, caudal part of the falx cerebri, and tentorium cerebelli. Significantly, our study revealed the presence of accessory branches originating from the rostral epidural rete mirabile, a finding not previously described in the existing literature. These branches supply the meninges of the frontal and lateral regions of the frontal lobes. 
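The statistics discussed above (mean reduction, fluctuation level, probability of a strengthening effect, and the approximately Gaussian density of the relative critical stress) can be extracted from the sampled values with a short post-processing step. This is a sketch under the assumption that `samples` holds relative critical buckling stresses, for instance from a loop like the one sketched earlier; the synthetic numbers at the end are placeholders, not the paper's results.

```python
# Post-processing sketch: summarize sampled relative critical buckling stresses
# (porous / pristine) and fit a Gaussian approximation to their density.
import numpy as np
from scipy import stats

def summarize_relative_stress(samples):
    samples = np.asarray(samples)
    mean, std = samples.mean(), samples.std(ddof=1)
    strengthened = np.mean(samples > 1.0)        # share of samples where vacancies raise the critical stress
    mu, sigma = stats.norm.fit(samples)          # Gaussian approximation of the density
    ks = stats.kstest(samples, "norm", args=(mu, sigma))  # crude normality check
    return {"mean": mean, "std": std, "p_strengthened": strengthened,
            "gauss_mu": mu, "gauss_sigma": sigma, "ks_pvalue": ks.pvalue}

# Example with synthetic numbers standing in for one porosity level / boundary condition:
fake = np.random.default_rng(1).normal(loc=0.96, scale=0.02, size=500)
print(summarize_relative_stress(fake))
```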
This novel study advances our understanding of the meningeal arteries in dromedaries and has significant implications for advancements in veterinary neuroscience. Recently, there has been an increasing demand for an improved understanding of the anatomy and vascularization of these arteries in various animal species. Camels are exceptional animals with a remarkable adaptability to harsh environmental conditions, exhibiting adaptations that extend specifically to the blood supply to the brain8. Several studies have focused on the neuroanatomy of this animal. However, the meningeal arteries of camels have received limited attention and a thorough understanding of their distribution and branching patterns is lacking. Investigating the structure and function of meningeal arteries in camels can help us better understand the biological mechanisms that enable them to survive in such extreme conditions.The meningeal arteries play a crucial role in supplying all three layers of the meninges and partially contribute to the blood supply to the central nervous system3.The meninges are the three membrane layers that cover and protect the central nervous system from trauma by acting as shock absorbers. They stabilize the brain and prevent it from moving around the skull while supporting blood vessels, nerves, and cerebrospinal fluid9. In addition to supplying blood to adjacent skull structures, meningeal arteries have few anastomoses with the cerebral arteries.Generally, there are three main meningeal arteries: the rostral, middle, and caudal. The rostral meningeal artery supplies the rostral region of the brain, whereas the middle meningeal artery primarily supplies the lateral and basal regions. The caudal meningeal artery provides blood to the caudal brain regions. These arteries originate from different sources and have different entry locations into the cerebral cavity depending on the species and individual anatomical variations. The branches of the meningeal arteries rise between the dura mater and periosteum of the cranial bone, supplying both structures10. The meningeal arteries are particularly sensitive to cranial lesions. The caudal meningeal artery is particularly susceptible to cranial injuries because of its unique anatomical position and proximity to the temporal bone region11. Several clinical complications have been associated with meningeal artery involvement, including epidural hematomas12, aneurysms13, and fistulas14. Understanding the topography of the meningeal artery is essential before performing surgical procedures at the base of the skull. Studying the anatomy of the meningeal arteries in camels can contribute valuable information to the field of comparative anatomy, which is crucial for understanding the evolutionary processes of the central nervous system and its vasculature across different animal species, and holds significant importance for potentially enhancing research related to various neurological conditions in humans15.The middle meningeal artery is the largest of these three arteries in humans, and knowledge of its anatomical position and branches is significant in radiology and surgical procedures18. 
To the best of our knowledge, this is the first study to provide a detailed description of all three meningeal arteries in dromedaries.While no studies have focused on describing the meningeal arteries in animals, some studies have mentioned the role of the middle meningeal artery in forming the rostral mirabile rete structure but did not discuss the branching pattern of this arteryThis study aimed to conduct a comprehensive analysis of the anatomy, topography, branching patterns of the meningeal arteries, and anastomoses with other cerebral arteries in dromedaries. By employing advanced casting techniques, we attempted to provide a more accurate and detailed three-dimensional description and elucidate the finer arterial branches of the meningeal arteries that might be overlooked using traditional dissection methods.Our results revealed that the meningeal arteries with diverse origins penetrate the camel skull through distinct foramina, move between the dura mater and bone, and give rise to several branches that supply blood to different regions of the meninges. These arteries are primarily classified into three types: the rostral, middle, and caudal meningeal arteries Figs.\u00a0, and 3.FThe rostral meningeal artery is derived from the external ethmoidal artery as it passes through the multiple ethmoid foramina Fig.\u00a0. In turnWe observed variations in the origins of the middle meningeal artery. Our examination revealed that the middle meningeal artery emerged from the rostral epidural rete mirabile (RERM) in most of the studied samples Fig.\u00a0. The midThree additional samples revealed the existence of an artery that emerged from the medial side of the maxillary artery, ascended dorsomedially, crossed the lateral region of the tympanic bullae, and entered the cranium through the foramen ovale. In these three samples, the middle meningeal artery entered the cranium, split into several branches, and contributed to the caudal root of the RERM before releasing several branches that supplied the meninges.The rostral branch had a wider region of supply than the caudal branch in all studied samples Fig.\u00a0. The rosThe caudal branch of the middle meningeal artery traveled caudally along the squamous portion of the temporal bone. It extends to the lower margin of the parietal bone, where its terminal branches continue to move backward toward the transverse fissure between the cerebrum and the cerebellum Fig.\u00a0. These bThe caudal meningeal artery originates from the occipital arteries and enters the cranial cavity via the mastoid foramen Figs.\u00a0 and 7. TAfter entering the mastoid foramen, the caudal meningeal artery sends a thick branch Fig.\u00a0 that supIn addition to the rostral, middle, and caudal meningeal branches, our observations revealed the presence of supplemental accessory branches arising directly from the RERM Figs.\u00a0 and 5. T19, who reported that the rostral meningeal artery is a branch of the anterior cerebral artery that supplies the trigonum olfactorium and anterior part of the dura mater. Despite the lack of comprehensive investigations on the origin and course of the rostral meningeal arteries in animals, some studies have suggested that the rostral meningeal arteries may be involved in the development of the rete in goats20 and arterial circles in dogs3. In goats, three or four rostral meningeal arteries arise from the maxillary artery anterior to the ophthalmic artery and enter the cranium through the foramen orbito-rotundum20. 
According to Parkash and Jain20, a network of three to four rostral meningeal arteries, two to three middle meningeal arteries, the caudal meningeal artery, and the cerebrospinal artery join the RERM in goats. In dogs, the rostral meningeal artery travels dorsally through the dura at the caudal margin of the cribriform plate, moves through the inner layer of the frontal bone, and anastomoses rostrally with the middle meningeal artery to form an arterial circle on the lateral wall of the cribriform plate. In dogs, many smaller branches leave the arterial circle and reunite to create an ethmoidal rete3.To the best of our knowledge, this is the first study to comprehensively outline the complex topologies and origins of all meningeal arteries in dromedaries. Owing to the lack of thorough investigations in this field, the precise definition and identification of these structures were challenging. While some studies have touched upon the role of the middle meningeal arteries in the formation of the RERM, extensive exploration of this and other meningeal arteries has not been the subject of any prior research or descriptive studies in animals. The meningeal arteries comprise arterial branches that extend between the dura mater and the bone, supplying both structures. We found that the rostral meningeal artery is derived from the external ethmoidal artery as it passes through multiple ethmoid foramina. This finding is in contrast to those described by Kanan21. Our findings in camels confirm that the supply of the middle meningeal artery is widespread. Zguigal2 reported that the middle meningeal artery shares a trunk with the rostral tympanic artery, arising from the inferior alveolar artery.Among the three meningeal arteries, the middle meningeal artery has the most extensive supply region, providing blood to approximately two-thirds of the meninges in the brain22 found that the maxillary artery of camels releases the alveolar, buccal, and middle meningeal arteries. In comparison, our study found that the middle meningeal artery arose from the RERM and, in one case, the rostral branch of the middle meningeal artery was derived from the rostral branches of the maxillary artery.In contrast, Badawi et al.24. In canines, the middle meningeal artery separates from the dorsal surface of the maxillary artery when entering the alar canal3. According to O'Brien et al.25, the middle meningeal artery in cattle and giraffes originates from the rostral auricular artery before entering the temporal meatus through the retro-articular foramen and supplies the caudolateral aspect of the meninges. According to studies by Zguigal2 and Kieltyka-Kurc et al.5, the middle meningeal artery enters the cranium through the oval foramen parallel to the mandibular nerve and then gives off numerous branches to supply the meninges in camels.In cats and lions, the middle meningeal artery originates from the posteromedial wall of the maxillary artery, passes through the oval foramen to enter the cranial cavity, and runs posteromedial along the posterior margin of the rete7. The contribution of the middle meningeal artery to the RERM has been emphasized in animals such as giraffes, cattle, goats, dog, and cats25. Several studies have addressed the significance of the RERM, a network of anastomosing arteries found exclusively in artiodactyls and carnivores30. The RERM pools blood from different arteries and acts as a reservoir for the afferent blood. RERM is believed to reduce cerebral blood pressure from afferent arteries32. 
The RERM also maintains a lower brain blood temperature by dissipating heat from warm arterial blood34. Our findings suggest that meningeal arteries emerging from the RERM may have a significant thermoregulatory function in camels. The rostral ethmoidal rete mirabile is an important anatomical feature situated within the venous sinus inside the camel's cranial cavity. This sinus plays an essential role in receiving cooler arterial blood from veins in the regions surrounding the eyes, forehead, and nasal cavities. This arriving blood is notably cooler than the blood within the RERM itself. This unique setup results in the cooling of the arterial blood contained in the rete mirabile before it is directed to supply the brain. Our research lends strong support to the idea that this mechanism contributes to maintaining the brain at a lower temperature than the rest of the body36. This difference in temperature is enhanced by the cooling effect of the middle meningeal artery which originates from the RERM. This cooling process in the region supplied by the middle meningeal artery plays a crucial role in the overall thermal regulation of the camel's brain. This suggests that these arteries are part of the cerebral cooling system that is crucial for preserving brain function in extreme environments, and act as vital components in maintaining the brain's thermal balance. Evans and De Lahunta3 found that the middle meningeal artery in dogs exits the ramus anastomoticus and runs along the calvaria's vascular groove, splitting into rostral and caudal branches. Our study found that camels exhibited similar branching patterns, with the rostral branch of the middle meningeal artery supplying the meninges covering the parietal and temporal lobes, and the caudal branch of the artery supplying the meninges covering the occipital lobe across all examined specimens. Additionally, our study revealed that camels have accessory meningeal branches that emerge directly from the rete and primarily supply the meninges covering the ventral and lateral aspects of the frontal lobe.The middle meningeal arteries contribute to the caudal root of the RERM in camels2, the caudal meningeal artery is the terminal branch of the caudal auricular artery, originating from the external carotid artery. Tayeb37 reported that the caudal meningeal artery is the terminal branch of the auriculomeningeal artery and the only collateral branch of the external carotid artery. According to Benkoukous38, the external carotid artery gives rise to the common trunk of the caudal auricular and caudal meningeal arteries in camels. In our study, we observed that, except for one sample, the caudal meningeal artery consistently originated from the occipital artery and entered the cranial cavity through the mastoid foramen. In this exception, the caudal meningeal artery originated from the condylar artery, which is usually a branch of the occipital artery in the camel. This is similar to the observation made by O\u2019Brien et al.25 in giraffes, where the large caudal meningeal artery arises from the occipital artery and enters the skull via the mastoid foramen, and a small caudal meningeal artery departs rostrally from the condylar artery. The caudal meningeal arteries in dogs branch off the occipital branch along the nuchal crest of the occipital bone, branching into the dura of the dorsocaudal cranial cavity through the mastoid foramen and supplying some branches to the tentorium cerebelli just dorsal to the petrosal part of the temporal bone3. 
In horses, the occipital artery supplies blood to the nuchal region, caudal meninges, middle and inner ears39.The caudal meningeal arteries are branches of the occipital artery that enter the cranial cavity via the mastoid foramen. According to ZguigalIn conclusion, this study is the first to comprehensively describe the origins and courses of all meningeal arteries in dromedaries, filling a significant gap in the existing animal literature. Notably, the referenced information on the meningeal arteries was not the focus of the cited studies. This indicates a lack of dedicated research on this specific topic and highlights the novelty of our research in filling this substantial gap. This study identified often overlooked arterial sources in the meninges, such as the additional accessory branches emerging directly from the RERM and supplying most of the meninges of the frontal lobes. We were able to account for variations owing to the substantial sample size. The origin and course of camel meningeal arteries suggest their essential role in efficient brain cooling mechanisms, explaining their ability to survive in extremely harsh environments. Understanding the detailed anatomy of meningeal artery branching patterns in dromedaries is essential for both veterinary practitioners and researchers. This information can help in the diagnosis and treatment of various neurological conditions in dromedaries, ensuring the optimal health and well-being of these animals. Further research in this field can contribute to advancements in veterinary medicine and enhance our understanding of the intricate circulatory system in dromedaries, emphasizing the uniqueness of their anatomical structure.5.This study was conducted in accordance with the guidelines of the Animal Research Ethics Committee of United Arab Emirates University. In this study, we injected various casting materials into the right and left common carotid arteries of 20 freshly slaughtered male Omani dromedaries (2\u20136\u00a0years old) obtained from Al Ain City Municipality Camel Slaughterhouses, as described in our previous studiesWe injected liquid polyurethane resin (Polytek EasyFlo 60) into the common carotid arteries of 10 dromedary heads, and red latex neoprene into the remaining 10 heads. Owing to the flexibility of latex, specimens injected with latex were dissected to remove the camel brains, including their meninges, from the cranial cavity while maintaining intact arteries. We used the resin-injected specimens to create three-dimensional models of the meningeal arteries within the cranial cavity after removing the brain using a high-pressure washer. This method was used to remove brain tissue without damaging the resin-cast fillings. High-resolution photographs of all the dissected specimens were captured using a Sony camera with a resolution of 42 megapixels."} +{"text": "The journal retracts the 2022 article cited above.Following publication, concerns were raised regarding the contributions of the authors of the article. Our investigation, conducted in accordance with Frontiers policies, confirmed a serious breach of our authorship policies and of publication ethics; the article is therefore retracted.This retraction was approved by the Chief Editors of Frontiers in Immunology and the Chief Executive Editor of Frontiers. The authors do not agree to this retraction."} +{"text": "Delirium tremens is one of the most serious complications associated with alcohol withdrawal. 
It affects 5 to 20% of users and is related neither to the duration of consumption nor to the quantities taken. Early diagnosis facilitates prompt treatment without putting the vital prognosis at risk. Our objective is to identify the different indicators mentioned in the existing literature and to compare these to the clinical and paraclinical data of our patients. Through clinical vignettes, we present the cases of two patients hospitalized in our department of addictology for alcohol withdrawal treatment who presented an episode of delirium tremens. Several clinical and paraclinical parameters have been linked to statistically significant differences in the published reports related to this subject. Thrombocytopenia remains the common element between the different publications and was present in both of our patients. Clinically, the presence of a previous episode of delirium or seizures during withdrawal, as well as tachycardia (>100 bpm) and a low number of quit attempts, were significantly related to the occurrence of delirium tremens. The majority of the predictors identified were paraclinical and included hyponatremia, hypokalemia, elevated ALT and homocysteine levels, low pyridoxine levels, and the presence of structural brain damage. The literature on predictors of delirium tremens remains sparse; more studies are needed to confirm the data already mentioned. None Declared"} +{"text": "Coercion in psychiatric wards may improve the safety of patients and their surroundings; on the other hand, its use affects compliance and satisfaction with treatment. In Poland, coercive measures are strictly regulated by the Mental Health Act (1994). Most published studies refer to coercion only during hospitalisation. Assessment of the extent of coercive measures in the psychiatric emergency room and evaluation of the relationships between the use of direct coercion and selected demographic-clinical factors. This study was conducted at the Bielanski Hospital in Warsaw on all the patients admitted to the psychiatric ward over one year. The extent of coercion in the psychiatric emergency room and demographic and clinical data were collected. Patients were assessed with the Brief Psychiatric Rating Scale (BPRS) prior to admission. Patients\u2019 sociodemographic and clinical factors were tested in a multivariate logistic regression model. In total, 318 patients were included in the study. Coercion of some form in the psychiatric emergency room was used in 29% of cases: admission without consent in 22% of cases and direct coercion in 7%. Use of direct coercion in the psychiatric emergency room was associated with BPRS scoring: positively with severity of disorientation symptoms and negatively with severity of depression symptoms. Past suicide attempts were found to reduce the risk of being subjected to coercive measures. We found no demographic data associated in any way with coercion use. Coercion in the psychiatric emergency room was related to patients\u2019 mental state and their past medical history. There is no evidence of misuse of coercive measures towards any demographic group. None Declared"} +{"text": "The Covid-19 pandemic has brought many changes to the everyday life of adolescents.
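For the coercion study above, the multivariate logistic regression it mentions can be illustrated as follows. This is purely a sketch: the variable names, the simulated data, and the choice of predictors are hypothetical and are not taken from the study's dataset.

```python
# Illustrative multivariate logistic regression of direct coercion on clinical and
# demographic predictors; all names and data below are invented placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 318  # number of admissions reported in the abstract

df = pd.DataFrame({
    "direct_coercion": rng.integers(0, 2, n),       # outcome: direct coercion used (0/1)
    "bprs_disorientation": rng.integers(1, 8, n),   # BPRS item severity (1-7)
    "bprs_depression": rng.integers(1, 8, n),
    "past_suicide_attempt": rng.integers(0, 2, n),
    "age": rng.integers(18, 80, n),
    "male": rng.integers(0, 2, n),
})

model = smf.logit(
    "direct_coercion ~ bprs_disorientation + bprs_depression + past_suicide_attempt + age + male",
    data=df,
).fit(disp=False)

print(model.summary())
print("Odds ratios:")
print(np.exp(model.params))
```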
The purpose of this study is to evaluate the impact of restrictions due to Covid-19 pandemic on the mental health of adolescents: changes in everyday life that can potentially have an impact on their mental health, the prevalence and worsening of social anxiety symptoms and consequent depression symptomsTo evaluate the impact of restrictions due to Covid-19 pandemic on the mental health of adolescents: changes in everyday life that can potentially have an impact on their mental health, the prevalence and worsening of social anxiety symptoms and consequent depression symptomsAnonymised questionnaires were used to collect the data. They were given to the patients staying in the department of psychiatry of Children\u2019s Clinical University Hospital in Riga, Jugla as well as to some adolescents visiting child psychiatrist in outpatient settings from January till the end of April, 2022. The following personal data were collected - age, gender, family status - as well as information about different factors affecting the mental health of adolescents during the pandemic: how often they spent time with friends, whether or not they have lost any friends, distance learning, seeking help from mental health professionals, quality of sleep and a chance to receive emotional support. Patients also filled Liebowitz social anxiety scale and PHQ-9: modified for adolescents\u2019 depression scale.Restrictions due to pandemic mostly affect the participants negatively, promoting the worsening of social anxiety symptoms in 42% of the respondents with positive results of the Liebowitz scale. Statistically significant connection between social anxiety and depression symptoms was found. During the pandemic most of the patients were more often seeking professional help. Patients with worsening social anxiety symptoms were found to have statistically significant connection to losing friends during the pandemic. Most of the recipients with already diagnosed social anxiety were given this diagnosis during the pandemic (67% of the cases).The restrictions due to Covid \u2013 19 pandemics negatively affect adolescents including those with social anxiety, promoting the worsening of symptoms as well as prevalence of depression symptoms in these individuals. The results suggest that coping strategies must be implemented in order to decrease the consequences of the pandemic on adolescents.None Declared"} +{"text": "Mammaliicoccus sciuri DNA instead of Trueperella pyogenes. The conclusions reported in the article are no longer supported by the analyses; therefore, the article has been retracted.Following publication, the authors contacted the Editorial Office to request the retraction of the cited article, stating that further analysis of phage vB-ApyS-JF1 sequencing data discovered contamination with This retraction was approved by the Chief Editors of Frontiers of Microbiology and the Chief Executive Editor of Frontiers. The authors agree to this retraction."} +{"text": "The debut was in September 2021 with a hospitalization in the Brief-acute hospitalization unit due to florid psychotic clinic-He consumed several drugs in his twenties and was diagnosed of HIV at the age of 29. He abandoned the use of drugs after the diagnoses and keep good adherence to the antiretroviral treatment (Abacavir + Lamivudine +Efavirenz).At the age of 46 (January 2020), he was successfully transplanted a kidney. 
Afterwards, he started taking immunosuppressant medication to avoid transplant rejection. A few months after the transplant and the start of the immunosuppressant medication, the patient became more irascible, with moments of remarkable disinhibition and progressive neglect of his work obligations. In January 2021, he got divorced after months of difficulties with his wife of 28 years, due to the aforementioned problems as well as episodes of bizarre and disorganized behavior and suspicion towards his wife, with probable delusional jealousy. He therefore lost his job, house and marriage, and started taking drugs again after 17 years of abstinence. He was hospitalized in the Brief Acute Inpatient Unit in September 2021 with a distrustful and hypervigilant attitude. He was suffering from delusional ideation of harm and persecution with high distress and emotional repercussion. He also presented disorganized conduct and probable auditory hallucinations. He tested positive for amphetamines and cocaine. After 3 days without consuming, there was no remission of the clinical picture. Discussing the association between the initiation of immunosuppressant medication and the onset of psychotic symptoms. The first psychotic episode (FEP) was a likely consequence of the initiation and maintenance of Tacrolimus (due to a kidney transplant), with the concomitant abuse of amphetamines and cocaine as a trigger factor. The psychotic symptoms progressively remitted within one week after the administration of 3 mg/day of risperidone. The antiretroviral treatment was changed due to poor adherence during the disorganization period. Tacrolimus was not withdrawn because of the good response to the neuroleptic and the risk of transplant rejection. The patient started with prodromal symptoms of psychosis at the time he began the immunosuppressant medication. Progressively, the psychotic symptoms worsened, resulting in a biographical rupture: divorce, loss of work and home, and a drug relapse. There is evidence of an association between psychotic episodes in people with no psychiatric history and immunosuppressant medication for kidney transplantation. This case highlights the need for an exhaustive medical anamnesis in the diagnosis of psychiatric pathologies. None Declared"} +{"text": "This work explores the role of knowledge claims and uncertainty in the public dispute over the causes and solutions to nonpoint-driven overfertilization of the Mar Menor lagoon (Spain). Drawing on relational uncertainty theory, we combine the analysis of narratives and of uncertainty. Our results show two increasingly polarized narratives that deviate in the causes for nutrient enrichment and the type of solutions seen as effective, all of which relate to contested visions on agricultural sustainability. Several interconnected uncertainties are mobilized to dispute the centrality of agriculture as a driver for eutrophication and to confront strategies that may hamper productivity. Yet, both narratives rest on a logic of dissent that strongly relies on divergent knowledge to provide legitimacy, ultimately reinforcing contestation. Transforming the ongoing polarization dynamics may require different inter- and transdisciplinary approaches that focus on sharing rather than assigning responsibility and that unpack rather than disregard existing uncertainties. The online version contains supplementary material available at 10.1007/s13280-023-01846-z.

Recent insights from social and interdisciplinary science discuss eutrophication as a \u2018wicked\u2019 hydro-social issue . Intense local struggles reacting to episodes of anoxia and death of aquatic species have recently boosted the visibility of this environmental conflict and questioned the legitimacy of formal scientific advice to policy making. As we write this article, this Mediterranean lagoon became the first European ecosystem with formal legal rights thanks to a citizen-led initiative approved by Spanish authorities.2 coastal lagoon located in the Spanish Region of Murcia, host of emblematic and endangered aquatic species (see case description in the Appendix). Over five decades, the Mar Menor has been influenced by a variety of pressures from important socioeconomic changes in the area, namely, touristic promotion, rapid urbanization of the coastline, fabrication of sand beaches, expansion of ports and canals with the Mediterranean for navigation and a major transformation of the inland agricultural activity. Triggered by the construction of the Tajo-Segura water transfer, the area shifted from a structure based on family agriculture of mostly rainfed crops and cattle to a much larger area based on intensive vegetable production for exportation to European countries . The distinction between these two forms of uncertainty is however situation specific. In the context of nonpoint pollution driven eutrophication, the variegated sources of nutrients may fall under incomplete knowledge if there is technical, legal and cultural possibility of tracking down leakages. However, it may well be that the number of sources is too vast to monitor, or that it is so complex that it is impossible to be measured, thereby becoming unpredictable that 1) actively produce knowledge about the eutrophication of the Mar Menor and (2) are echoed by local and social media, thereby contributing to public debates. The data gathering and analysis process was organized in several iterative steps , identifying both news and actors related to knowledge generation. In parallel, we selected the three most read regional newspapers with different ideological orientations and revised over 200 articles containing knowledge claims about the Mar Menor eutrophication, all published during 2021. Finally, we reviewed the minutes from meetings of the Mar Menor Scientific CommitteeSA1 in the Appendix): scientific papers (3), PhD dissertations (1), scientific reports (12), reports from environmental (4) and agricultural (1) organizations, reports from public and private consultancies (3), transcripts from experts talks (5) and policy documents (3). Dates of publication range from 2013 to October 2021 when the \u2018Framework on Priority Actions on the Mar Menor\u2019 from the Spanish Ministry for Ecological Transition was released. The final list of publicly relevant knowledge holders (institutions and organizations) is shown in Table SA2 in the Appendix.Selection of literature. The previous process gave us a preliminary overview of the main knowledge and uncertainty claims as well as their related controversies in public media. It also provided an initial list of knowledge holders together with key literature that was collected from public websites and scientific journals. From this preliminary sample, we selected those studies that explicitly addressed a comprehensive analysis of the eutrophication problem, considering its causes and pointing at potential solutions. 
As the analysis advanced and we gained a deeper understanding of the main controversies, we expanded the list of knowledge holders and the sample of literature in order to include standpoints from the agricultural sector. After the third eutrophic crisis started in August 2021, new public controversies emerged together with explanatory reports and policies. We included three new documents to cover the episode. The selected literature sample includes 32 documents . The final outputs of this process are four double-entry matrices, one per controversial theme, with claims classified by type of uncertainty and by analytical domain (see an example in Table A4 in the Appendix). The cascades template was used to display each set of interconnected uncertainties across the different domains of the social-technical-environmental system.Analysis of uncertainty. Following the above described cascades methodology . The succession and superpositions of plans and figures of protection had not yet manifested in visible changes for the ecosystem. One discussed reason for this stagnation is that the question of who is responsible for action is part of the controversy. If solutions come in hand of managing the aquifer by extracting and denitrifying groundwater, then the Segura river basin district and the Spanish Government are the responsible public bodies. If they are tied to the management of nutrients through fertilization practices, then it is the Regional Government of Murcia in charge. The idea of a drastic change in agricultural productivity is at odds with the regional alignment in defense of agricultural sustainability. If political stall is underpinned by contestation over the causes and solutions to the eutrophication problem, then the existence of uncertainties over those causes and solutions can only reinforce controversies and delay action Below is the link to the electronic supplementary material."} +{"text": "We present the case of a 78-year-old man with multiple somatic pathologies and associated depressive symptoms, under treatment with Citalopram 10mg, who was admitted due to cholangitis secondary to biliary prosthetic obstruction.Empirical antibiotic treatment with Meropenem and Linezolid was started, along with an increase in the dose of Citalopram to 20mg due to mood worsening. The patient begins with symptoms consisting of complex and polymorphic visual hallucinosis, without any affective or behavioral repercussions. He does not present another semiology of the psychotic sphere.To highlight the importance of knowing the different interactions and adverse effects of drugs for good clinical management.We collected the complete medical history of our patient and we carried out a review of the interactions and adverse effects described with the antibiotic drug Linezolid.As the onset of hallucinations was temporarily correlated with the use of medications, drug-induced hallucinations were suspected, resolving completely after 2 days after withdrawal of Linezolid treatment.Linezolid is a nonselective inhibitor of MAO A and B, preventing the destruction of monoamine neurotransmitters like serotonin, dopamine, or norepinephrine. 
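The "double-entry matrices" used in the coding step above can be thought of as cross-tabulations of claims by uncertainty type and analytical domain, one per controversial theme. The toy snippet below only illustrates that data structure; the categories and claims are invented placeholders and do not reproduce the paper's actual coding.

```python
# Toy illustration of a double-entry matrix: tag each knowledge claim with an
# uncertainty type and an analytical domain, then tally per theme.
import pandas as pd

claims = pd.DataFrame([
    {"theme": "nutrient sources", "uncertainty": "incomplete knowledge", "domain": "environmental"},
    {"theme": "nutrient sources", "uncertainty": "unpredictability",     "domain": "technical"},
    {"theme": "nutrient sources", "uncertainty": "ambiguity",            "domain": "social"},
    {"theme": "solutions",        "uncertainty": "incomplete knowledge", "domain": "technical"},
    {"theme": "solutions",        "uncertainty": "ambiguity",            "domain": "social"},
])

for theme, group in claims.groupby("theme"):
    print(f"\n{theme}")
    print(pd.crosstab(group["uncertainty"], group["domain"]))
```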
It has dopaminergic properties that may enhance the central nervous system effects of anticholinergics and co-prescription with serotonergic drugs increases the risk of serotonin syndrome.This case highlights the importance of taking into account drug interactions and adverse effects to reduce the risk of drug induced symptoms and optimize their management.The increase in resistance to antibiotic treatment allows us to anticipate that the use of Linezolid will increase in the coming years, and it is important to know its mechanism of action given the interactions with psychotropic drugs that we use in our usual clinical practiceNone Declared"} +{"text": "Cloninger divides personality into temperament and character, proposing that temperament is innate and character is shaped by environment. With the development of noninvasive methods for measuring central nervous system activity, there have been many attempts to test personality theories using neuroscientific research methods. Thus, the use of neuroscience to examine existing theories of personality will enable a review of these theories and may lead to the formulation of new theories of personality.The purpose of this study was to investigate the biological factors underlying temperament and personality development in healthy adults by analyzing neural networks in the brain using resting-state functional magnetic resonance imaging.The study was conducted after obtaining prior approval from the Ethics Committee of Kanazawa Medical University. Eighty-one healthy subjects who consented to the study after explaining the purpose and methods were imaged with a 3T MRI scanner in the resting state, and statistical image analysis was performed using the CONN toolbox. Personality and temperament were assessed using the temperament personality test based on Cloninger\u2019s 7-dimensional model of personality.Five types of neural networks were extracted by independent component analysis, including Salience, Default mode, and Language. Regression analysis revealed a significant relationship between the functional connectivity of the networks and temperament/personality traits.We were able to observe the functional connectivity of representative neural networks from the data of healthy subjects, suggesting that individual differences in the degree of functional connectivity of neural networks may be related to the individual characteristics of temperament and personality of the subjects.None Declared"} +{"text": "The value of pharmacological antidepressants have been contested since they were first introduced in the 1960s, but the points of contention have varied over time. This session will examine and critically discuss some of the concerns that are commonly voiced today, with particular emphasis on the evaluation of efficacy. The session will cover topics such as the utility of dichotomized outcome measures and whether the use of these measures risk inflating apparent efficacy, whether antidepressant effect sizes are too small to be clinically meaningful, and whether there is individual variability in the response to pharmacological antidepressants.F. 
Hieronymus Speakers bureau of: I have received speaker\u2019s fees from Janssen and H Lundbeck."} +{"text": "This study uses a two-step approach to construct a multi-period double-difference model and introduces a quasi-natural experiment of the Broadband China pilot policy to investigate whether household financial market participation at the urban level is affected by the digital economy, which is significant for promoting Chinese households' shift from savings to investment and alleviating the long-standing problem of insufficient household financial market participation in China. In terms of direct impact, the digital economy increases the household financial market participation rate of urban residents by 3.26%, and increases the financial market participation rate of highly financially literate households by 2.14%; in terms of indirect impact, the development of the digital economy increases the total number of household smart Internet devices by 8.27%, and similarly increases the attention to household financial information by a significant 4.22%, which further positively influences the household financial market participation rate. This paper also evaluates the individual and regional differences of the digital economy on household financial market participation, and the estimated causal effect of the digital economy on household financial market participation is purer, which expands the scope of research on the digital economy and household financial market participation, and provides a certain reference basis and policy inspiration for the government to promote the construction of the digital economy. The advancement of the digital economy provides effective support for household participation in financial markets by broadening access to financial literacy and enhancing the convenience of financial investment for residents6. Aiming to analyse the implications of the digital economy for household participation in financial markets, scholars have quantified the digital economy in terms of both the level of Internet development and the level of digital financial development7. However, no uniform conclusions have been reached. Even though several scholars have discussed the effect of digital economy policies10, existing studies have ignored how these policies affect household financial market participation rates at the city level. As a result, the mechanism and heterogeneity analysis of the impact of the digital economy on household financial market participation rates needs to be further investigated. At the micro level, this paper is important for reducing household financial vulnerability, increasing household income diversity, and thus improving quality of life and welfare. At the macro level, guiding the transformation of residents' savings to the investment side, helping in expanding domestic demand, promoting the sound development of China's financial market, and achieving the strategic goal of a commonwealth is important for the government.The macroeconomic recovery in the post-epidemic era has caused a gradual shift in Chinese residents' asset allocation from risk-free broad asset classes to risky financial assets. 
Financial asset allocation is now entering an accelerated period and residents' effective participation in financial markets is crucial to the long-term sound appreciation of the family wealth11, and that broadband infrastructure is one of the key components of the digital economy12, and that broadband service policies can effectively penetrate in free economies throughout the world13, the aforementioned literature serves as the basis for the investigation in this paper.The latest edition of the China Household Finance Survey (CHFS) published in 2019 shows that Chinese households are not active in the financial market, with real estate assets accounting for 66.3% and financial assets accounting for only 12.8%. In terms of broad asset allocation, real estate accounts for only 15% of U.S. households and 46% in Europe, compared to Chinese households' over-investment in real estate. In terms of financial asset allocation, the proportion of financial assets of American households is as high as 51%, and that of European households is 28%, while the proportion of financial asset allocation of Chinese households is at a relatively low level in the international arena. It is worth noting that in recent years, China's booming digital economy has become a possible means to address the mystery of \"limited participation\" in China's household financial market. Given that digital infrastructure has a positive impact on enhancing the financial literacy of the population14. This study explores the impact of the digital economy on the financial market participation rate of Chinese households by using the \"Broadband China\" pilot policy as a quasi-natural experiment as well as a two-step method to construct a multi-period double difference model based on Chinese household financial survey data. The following are the possible marginal contributions of our study: first, this paper introduces a quasi-natural experiment on the basis of the perspective of digital economy development to determine the causal effect of the digital economy on the household financial market participation rate. Second, this paper uses a two-step approach to construct a multi-period double-difference model that controls for individual-level characteristics without losing individual-level information and integrates information on household financial market participation at the prefecture level. The estimated household financial market participation rate at the city level is more statistically representative. 
Finally, this paper likewise assesses the regional differences and individual differences in household financial market participation rates that are affected by the digital economy while verifying the influence mechanism from two paths: information transmission and financial information concern, which broadens the scope of the study and deepens the paper\u2019s policy connotation.On the one hand, the \"Broadband China\" policy aims to optimize and expand the construction of Internet infrastructure for broadband applications, and on the other hand, the \"Broadband China\" policy also includes the enhancement of residents' ability to use the Internet, the cultivation of digital consumption habits, and the promotion of the digital transformation of industries, therefore, the \"Broadband China\" policy is a good proxy for the development of the digital economy, and is mostly used in the literature to assess the impact of the digital economyThe following parts of this paper are organised as follows: the second part consists of the policy background; the introduction of data, variables, and models is covered in the third part; the fourth part discusses the main empirical results, robustness test, heterogeneity test, and mechanism discussion; the fifth part involves the research conclusion.The strategy implementation plan of \"Broadband China\" was released by the State Council in August 2013, and the Ministry of Industry and Information Technology and the National Development and Reform Commission released the list of 117 \"Broadband China\" demonstration cities (clusters) in three batches from 2014 to 2016. The objective is to optimize and upgrade broadband networks and reinforce the application of broadband networks.Since the implementation of the \"Broadband China\" strategy, China has made significant progress in digital infrastructure construction. According to the Ministry of Industry and Information Technology's 2021 Communications Industry Statistical Bulletin, by the end of 2021, the number of China's Internet broadband access ports reached 1.018 billion, up from fewer than 15 million at the start of the twenty-first century.16; at the firm level, the implementation of the \"broadband China\" policy aids in improving firm productivity and concentration18 and increasing the number of mergers and acquisitions19; at the level of green sustainable development, the implementation of \"broadband China\" policy further improves the effectiveness of green innovation in China through its impact on green technology innovation21; at the financial market level, the implementation of the \"broadband China\" policy has successfully mitigated the risk of stock price collapse22 and played a positive role in the spread of digital inclusive finance23. However, no scholars have evaluated the impact of the implementation of the \"broadband China\" policy at the level of household financial market participation.Several scholars have investigated various dimensions based on the quasi-natural experiment of the implementation of the Broadband China policy. At the consumption level, the implementation of the \"broadband China\" policy has a considerable positive impact on rural household consumption and electricity consumptionUnder the assumption of information asymmetry, investors lacking information have negative returns relative to the total market return, whereas investors with information can buy low and sell high to gain additional returns, which comes from information profit. 
An information-asymmetric market can make it difficult for households to engage in financial markets, which might cause some households to avoid using financial services. Through the development of network infrastructure and the optimisation of digital infrastructure, the \"broadband China\" policy's implementation reduces the obstacles to financial market participation caused by information asymmetries, giving the implementation of the \"Broadband China\" policy in the pilot cities an opportunity to explore the impact of the digital economy on household financial market participation.This paper examines the implementation of the Broadband China policy as a quasi-natural experiment by using data from the China Household Finance Survey (CHFS) for 2013, 2015, 2017, and 2019 to ascertain whether the digital economy can enhance the likelihood of household financial market participation in China. The \"Broadband China\" policy data were compiled from the list of selected \"Broadband China\" demonstration cities from 2014 to 2016 published by the Ministry of Industry and Information Technology of the People's Republic of China. The China Household Finance Survey data use modern sampling methods, a computer-assisted survey system (CAPI), along with other survey techniques and management tools to control sampling and non-sampling errors, and the data are representative and of high quality, which is the most extensively employed micro household finance data. The data of other prefecture-level cities used in this paper primarily comes from the China City Statistical Yearbook.24. In light of this, a two-step approach is used for the estimation in this paper25. In particular, this study first integrates data on household financial market participation to the prefecture-level controlling for individual characteristics and then performs double-difference estimation using prefecture-level data, in contrast to direct estimation with individual-level data, estimation with prefecture-level data allowing for intuitive interpretation of the estimated coefficients without losing individual-level information. Moreover, due to the fact that individual-level characteristics are controlled for in the first estimation step, the city-level household financial market participation estimated from this is more statistically representative. The model is estimated in the first step as:In this paper, the multi-period double difference method is used for estimation. If respondents' participation in financial markets is the explained variable, direct individual-level estimation encounters two major issues: first, Logit and Probit models, which are typically utilised for regressions on dichotomous variables, do not apply in double difference estimation, and marginal effects calculated based on regression coefficients are even less meaningful; Second, even switching to a linear probability model for estimation does not yield an intuitive explanation of the influence of a digital economy on household financial market participationIn the above equation, subscript mentclass2pt{minimThe city-level household financial market participation calculated in the first step is used in the second estimation step together with other data to create an unbalanced panel data and is regressed using a multi-period double difference model with the following estimated model:entclass1pt{minimaThe findings of descriptive statistics are depicted in Table Table As shown in Fig.\u00a0p-values are above 0.1. 
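For completeness, the two estimating equations referred to earlier in this passage do not survive in this text (only fragments of the original equation markup remain). A plausible sketch of a two-step specification of the kind described — first averaging out individual characteristics within each city-wave cell, then running a multi-period difference-in-differences on the resulting city panel — is given below; the notation, variable names, and choice of controls are our assumptions rather than the authors' own equations.

```latex
% Step 1: individual-level regression with city-by-wave intercepts.
% y_{ict} = 1 if household i in city c at wave t participates in the
% financial market; X_{ict} are household-level controls.
y_{ict} = \alpha_{ct} + X_{ict}'\beta + \varepsilon_{ict}

% The estimated intercepts \hat{\alpha}_{ct} serve as the covariate-adjusted
% city-level participation rates carried into the second step.

% Step 2: multi-period difference-in-differences on the city-year panel.
% Policy_{ct} = 1 once city c has been designated a "Broadband China" pilot;
% Z_{ct} are city-level controls; \mu_c and \lambda_t are city and year
% fixed effects; \theta is the effect of interest.
\hat{\alpha}_{ct} = \theta\,\mathrm{Policy}_{ct} + Z_{ct}'\gamma + \mu_c + \lambda_t + u_{ct}
```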
The results indicate that the core findings of this paper are robust.This paper constructs a randomised experiment at both the pilot city and implementation time levels by randomly selecting the pilot cities of Broadband China and generating the implementation time of the policy at random. The randomisation process is repeated 500 times and the regression is performed 500 times. Figure\u00a0The digital economy as suggested by the baseline results, significantly increases the financial market participation rate of Chinese households. However, so as to exclude confounding factors from the findings, further robustness tests are conducted in this paper. In this paper, we first consider that the explanatory variables increase simultaneously with the time trend; therefore, column (1) of Table 26.Although the previous results provide some evidence for the exogeneity of Broadband China's policies, this paper employs an instrumental variables approach to further validate the conclusions. We use historical postal and telecommunication data in 1984 for each city in China as the basis for constructing instrumental variables. On the one hand, the Internet is a continuation of the development of traditional communication technology, and the basic level of local historical communication facilities may affect whether a local area can be shortlisted as a pilot of the Broadband China policy; on the other hand, the impact of traditional communication tools, such as post offices and fixed-line telephones, on economic development diminishes with the decline in the frequency of use, which satisfies the exclusivity. It should be noted that the raw data of the instrumental variables selected are in cross-sectional form and cannot be directly used for the econometric analysis of panel data. In this paper, we introduce a time-varying variable to construct the panel instrumental variableSpecifically, the number of Internet users in the country in the previous year is constructed as an interaction term with the number of post offices and telephones per 10,000 people in each city in 1984 as an instrumental variable for the city's digital economy index in that year.Column (8) of Table From the objective investment environment, the level of development of the digital economy and the information environment in which China's urban and rural areas are located vary, and from the heterogeneity of households, the level of financial literacy also leads to different awareness of household investment, which affects household financial market participation rate. For examining the heterogeneous impact of the digital economy on the financial market participation rate of Chinese households, this paper takes two heterogeneous factors into account, urban\u2013rural regional differences, and differences in household financial literacy, in further research so as to investigate whether there is heterogeneity in the impact of the digital economy on the financial market participation rate of Chinese households. This paper constructs financial literacy indicators from three dimensions: financial information, financial knowledge and financial capability, and calculates financial literacy scores using the iterative principal factor method. 
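The construction of the financial-literacy index and the median split used in the next paragraph can be illustrated with a short sketch. A generic one-factor model from scikit-learn stands in here for the iterative principal factor method named above, and all column names (`fin_info`, `fin_knowledge`, `fin_ability`, `fin_lit_score`) are placeholders rather than the study's variables.

```python
# Hypothetical sketch: build a one-factor financial-literacy score from three
# indicator columns and split households at the median score.
import pandas as pd
from sklearn.decomposition import FactorAnalysis

def add_literacy_groups(hh: pd.DataFrame) -> pd.DataFrame:
    items = hh[["fin_info", "fin_knowledge", "fin_ability"]]
    fa = FactorAnalysis(n_components=1, random_state=0)
    hh = hh.copy()
    hh["fin_lit_score"] = fa.fit_transform(items)[:, 0]
    median = hh["fin_lit_score"].median()
    # Households above the median are treated as "high literacy"; how ties at
    # the median are handled is not specified in the text.
    hh["high_literacy"] = (hh["fin_lit_score"] > median).astype(int)
    return hh
```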
In this section, the paper uses the median score of financial literacy as a dividing line to delineate the high and low levels of financial literacy of households, Households with financial literacy scores greater than the median score are high financial literacy households, and households with financial literacy scores less than the median score are low financial literacy households.The results in columns (1) and (2) of Table The development of digital infrastructure and digital media represented by the Internet is the core element of the growth of the digital economy, which acts on household financial market investment decisions by altering residents' information search channels. The paths for promoting household financial market participation are as follows: first, the information transmission path where the digital social network information transmission function is strengthened, the amount and efficiency of information content that households obtain through the social network are greatly enhanced, affecting household financial market participation behaviour; second, the financial information concern path, where the massive amount of Internet information makes households intentionally or unintentionally exposed to increased financial and economic knowledge as well as wealth management awareness. As a result, this paper uses the same two-step method to determine the proportion of households participating in the financial market in different cities after controlling for individual characteristics, based on the data of whether the respondents have smart Internet devices and financial information concerns and then regresses the interaction term of the \"Broadband China\" policy pilot. Of which, the respondents have intelligent Internet access devices as 1, the respondents do not have intelligent Internet access devices as 0; the respondents on the financial information concern level is based on the questionnaire data to the respondents on the financial information concern level of no concern at all to very concerned about the division of 1\u20135.The regression results inTable The conclusion that the digital economy greatly enhances household financial market participation remains valid after several robustness tests. The digital economy can further promote the household financial market participation rate by increasing the proportion of households holding smart Internet devices as well as the attention paid to household financial information, while the digital economy has a greater effect on the financial market participation rate of urban households and the financial market participation rate of households with high financial literacy. Hence, the government should pay considerable heed to the investment in network infrastructure construction in rural areas in order to reduce the distance between households and financial services in backward areas. This will enable the residents in backward areas to better enjoy the dividends brought by the development of the digital economy. 
Simultaneously, financial institutions and financial management departments should strengthen the popularisation of financial knowledge and enhance household financial literacy with the high permeability of digital media, so as to raise investors' awareness of risk prevention. This paper highlights that the development of the digital economy may be a critical tool to solve the mystery of \"limited participation\" in the financial market of Chinese households, which is critical to effectively improve the financial welfare of Chinese households and raise the income of residents through multiple channels while enhancing people's sense of well-being. Supplementary Information."} +{"text": "The journal retracts the 2022 article cited above. Following publication, concerns were raised regarding the contributions of the authors of the article. Our investigation, conducted in accordance with Frontiers policies, confirmed a serious breach of our authorship policies and of publication ethics; the article is therefore retracted. This retraction was approved by the Chief Editors of Frontiers in Psychology and the Chief Executive Editor of Frontiers. The authors responded to the concerns raised, but did not provide a satisfactory explanation. The authors do not agree to this retraction."} +{"text": "The journal retracts the 2021 article cited above. Following publication, concerns were raised regarding the contributions of the authors of the article. Our investigation, conducted in accordance with Frontiers policies, confirmed a serious breach of our authorship policies and of publication ethics; the article is therefore retracted. This retraction was approved by the Chief Editors of Frontiers in Immunology and the Chief Executive Editor of Frontiers. The authors do not agree to this retraction."} +{"text": "The initiation, growth, and rupture of cerebral aneurysms are directly associated with hemodynamic factors. This report aims to disclose the effects of endovascular techniques (coiling and stenting) on quantitative intra-aneurysmal hemodynamics and the rupture of cerebral aneurysms. In this paper, computational fluid dynamics (CFD) simulations are performed to investigate and compare blood hemodynamics inside the aneurysm under the effects of deformation (due to the stent) and coiling of the aneurysm. The blood stream inside the aneurysm sac, as well as the pressure and OSI distributions on the aneurysm wall, are compared across nine cases, and the results of the two most distinctive cases are reported. The obtained results specify that the mean WSS is reduced by up to 20% via coiling of the aneurysm, while deformation of the aneurysm (applying a stent) could reduce the mean WSS by up to 71%. In addition, comparison of the blood hemodynamics shows that the flow bifurcation occurs in the dome of the aneurysm when no endovascular treatment technique is applied. It is found that the bifurcation occurs at the ostium section when the ICA aneurysm is deformed by the application of a stent. The impact of coiling is limited, since blood flow entry is not restricted by this technique and WSS is not reduced substantially. However, the use of a stent changes the angle of the aneurysm with respect to the orientation of the parent vessel, which reduces blood velocity at the entrance of the ostium; consequently, WSS is decreased when deformation of the aneurysm fully occurs. These qualitative procedures provide a preliminary basis for a more profound quantitative examination intended to assess an aneurysm's risk of future rupture. 
Although risk of rupture is moderately low in unruptured cerebral aneurysms, protective interventions are frequently considered owing to the poor diagnosis of intracranial haemorrhage8. Existing treatments of intracranial aneurysms convey a small but momentous risk that could surpass the natural risk of aneurysm rupture, and this augments the importance of performance of different endovascular techniques for reduction of the rupture risk of cerebral aneurysms for clinicians14. Pre-interventional analysis of the aneurysm is mainly related to size of aneurysm in which rupture risk of larger aneurysm is higher than lower ones19. Although this logic seems reasonable, some reports indicates rupture of small aneurysm is not ignorable. Hence, the importance of hemodynamic analysis and related factors for the evaluation of the aneurysm rupture is raised23. Previous reports and articles confirmed that the mechanisms of aneurysm growth and rupture are related to hemodynamic factors29. Besides, hemodynamic factors also present valuable measure for the estimation of aneurysm rupture. A few geometric measures, for example shape descriptors and aspect ratio, are considered in former studies as substitutes for hemodynamic information34. Although these factors could be suitable for refining risk evaluation, the underlying mechanisms are difficult to connect rarely disclosed via these values38.The precise evaluation of the endovascular techniques is crucial for the selection of efficient treatment for saccular aneurysms44. This study tries to investigate the effects of these two techniques on hemodynamic factors. CFD technique is used for modeling of the pulsatile blood flow and achieve hemodynamic factors (i.e. OSI and WSS) for estimation of the aneurysm rupture.These methods have their own cons and pros and comparison of these techniques are seldom reported for ICA aneurysms45.It is confirming that all methods were carried out in accordance with relevant guidelines and regulations. Besides, all experimental protocols were approved by of the Ca' Granda Niguarda Hospital and it is confirmed that informed consent was obtained from all subjects and/or their legal guardian(s). All study are approved by Ca' Granda Niguarda Hospital ethics committeeNine different patients with intracranial aneurysms are investigated in present study. Due to similarity between the hemodynamic of some of these aneurysms, results of two most distinctive models are presented in this paper. For the selection of the aneurysm, the chosen aneurysms have high angles with parent vessel orientation. Besides, the sac Neck Vessel angle is almost identical in these two chosen models. Selected cases of 34 and 06 are related to women and man, respectively. Hence, HCT value of the female and male cases are 0.4 and 0.45, respectively. Details of chosen aneurysms are presented in Table 45 which offer full shape of the ICA aneurysm with case number of CO34 and C006. The surface of chosen aneurysm is produced by ICEM software. Then, sac section is split for the implementation of the porous as coiling technique. Deformation is also applied according to angle of the parent vessel angle with normal vector of ostium surface. The main concept for the deformation is to preserve the straight angle of the main parent vessel to reduce the blood entrance in to sac section area. Details of ICA ostium and neck vessel angles are presented in Table The geometry (.stl) of selected aneurysm is obtained from Aneurisk Website48. 
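For reference ahead of the results that follow, the two wall-based indicators compared in this report, time-averaged wall shear stress (TAWSS) and the oscillatory shear index (OSI), are standard post-processing quantities. A minimal sketch of how they can be computed from a sampled wall-shear-stress vector time series at a single wall point is given below; the array layout and uniform time step are our assumptions, and the CFD solution itself is outside the scope of the sketch.

```python
# Post-processing sketch for one wall point over a cardiac cycle of period T.
# `wss` is a hypothetical (n_steps, 3) array of wall-shear-stress vectors
# sampled at a uniform time step dt (units of Pa).
import numpy as np

def time_averaged_wss(wss: np.ndarray, dt: float, period: float) -> float:
    """TAWSS: time average of the WSS vector magnitude over one cycle."""
    return np.trapz(np.linalg.norm(wss, axis=1), dx=dt) / period

def oscillatory_shear_index(wss: np.ndarray, dt: float) -> float:
    """OSI = 0.5 * (1 - |integral of WSS vector| / integral of |WSS|).

    Values near 0 indicate a steady shear direction over the cycle; values
    approaching 0.5 indicate a strongly oscillating shear direction.
    """
    vector_integral = np.trapz(wss, dx=dt, axis=0)
    magnitude_integral = np.trapz(np.linalg.norm(wss, axis=1), dx=dt)
    return 0.5 * (1.0 - np.linalg.norm(vector_integral) / magnitude_integral)
```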
One-way FSI model is used for the modeling the blood interactions with aneurysm wall. Casson model is developed for the calculation of the viscosity term in the main governing equations. For modeling of coiling, sac section is assumed to fill with porous material with specific details associated with size of aneurysm55. Applied porosity for the chosen aneurysms are presented in Table In this work, Navier\u2013Stokes equations are solved for computational modeling of the blood stream. It is assumed that the blood stream is non-Newtonian, incompressible and transient31. There are four distinctive stage in the figure for the comparison of the results. To ensure about the archived data, results of 3rd blood cycle is presented in this work and reported OSI value is for last time step (step\u2009=\u20093000). Maximum blood velocity occurs in the peak systolic and this condition is evaluated as wore case scenario. This study tried to investigate the rheology aspects of blood stream60.Figure\u00a0To investigate the impacts of coiling, Table Comparison of the WSS in the sac of the aneurysm at peak systolic stage are demonstrated in Fig.\u00a0Figure\u00a0As explained earlier, one of side effects of the stent is to change the angle of blood inflow with normal vector of ostium area. Figure\u00a0The impacts of the deformation are clearly observed on the distribution of the WSS as shown in Fig.\u00a0In this work, computational fluid dynamic is used for the modeling of the blood stream inside cerebral ICA aneurysms. The impacts of endovascular coiling on blood hemodynamic are investigated by applying porosity inside the sac. Also, the deformation of the aneurysm by the effects of stent is also studied in two different stages. WSS, pressure and OSI value of selected aneurysm are compared in different stages of blood cycles. Obtained results indicates that the mean WSS is reduced up to 20% via coiling of the aneurysm while the deformation of the aneurysm (applying stent) could reduce the mean WSS up to 71%. In fact, the impacts of limiting blood stream by deformation is more than coiling on protection and control of aneurysm growth and probable rupture."} +{"text": "Risks of lower third molar surgery like the inferior alveolar nerve injury may result in permanent consequences. Risk assessment is important prior to the surgery and forms part of the informed consent process. Traditionally, plain radiographs like orthopantomogram have been used routinely for this purpose. Cone beam computed tomography (CBCT) has offered more information from the 3D images in the lower third molar surgery assessment. The proximity of the tooth root to the inferior alveolar canal, which harbours the inferior alveolar nerve, can be clearly identified on CBCT. It also allows the assessment of potential root resorption of the adjacent second molar as well as the bone loss at its distal aspect as a consequence of the third molar. This review summarized the application of CBCT in the risk assessment of lower third molar surgery and discussed how it could aid in the decision-making of high-risk cases to improve safety and treatment outcomes. Third molar surgery is known to be the most common oral surgical procedure . Lower tOne major indication of lower third molar surgery is to preserve the adjacent second molar, which could be affected by the third molar impaction because of persistent food trapping and recurrent infection . 
There iOrthopantomogram (OPG) has been the most useful imaging for the assessment of IAN injury risk of lower third molar surgery ,16. OPG Dental CBCT machines were developed and commercialised in the late 1990s. Since then, CBCT has been popularized, and it is used extensively in the field of dentistry. Its utilization was largely advanced by the development of dental implantology . With CBIn third molar surgery, CBCT offers a thorough understanding of the 3D relation between the third molar and the IAC . The bonCBCT undoubtedly offers a better visual of the relationship between the tooth root and the IAC as an imaging modality. Yet, does the difference lead to clinical relevance? The query was raised by several researchers who argued the absolute need for routine CBCT as a pre-operative assessment imaging tool. Matzen et al. prospectively evaluated 186 lower third molar surgery and found that only 12% of the cases had changed the treatment plan as an influence by the CBCT . Manor eCompared to OPG, CBCT carries a larger radiation dose. For this reason, the European Academy of Dento-Maxillo-Facial Radiology stated that CBCT should not be used in a routine manner for the assessment of lower third molar surgery and should be prescribed when plain radiography could not provide sufficient diagnostic information . Cost an2 could clearly exhibit the IAC at the mesial root of the third molar [2 for OPG could be fulfilled with well-designed clinical studies [One of the major drawbacks of CBCT is its radiation exposure and the related potential health hazards . Althougrd molar . However studies ,64. The Apart from pathologies like caries or pericoronitis, the impaction of third molars may result in external root resorption (ERR) of the adjacent second molars. It was found that 20\u201347.7% of the impacted lower third molars are associated with ERR of adjacent second molar ,66,67,68The clinical decision to prophylactically remove the impacted lower third molar to prevent ERR relies on the position of the third molar to the adjacent second molar. Traditionally, OPG is used to assess if there is a risk of ERR of the second molar when the third molar is impacted. However, it was found that the accuracy of OPG in diagnosing ERR was low because of the overlapping 2D images. The use of CBCT to assess ERR of the second molar as a consequence of impacted lower third molar is found to be more accurate. Oenning et al. found that for the same sample of patients with impacted third molars, OPG only found about 23% of the ERR of adjacent second molars that were diagnosed by CBCT . With anWith the presence of an ERR of the adjacent second molar, patients should be notified of the risk of the pulpal pathology of the second molar after the removal of the impacted third molar. Fortunately, the majority of adjacent second molars with ERR by the lower third molar appeared to be unaffected after the lower third molar removal. A classic paper by Nitzan et al. found that in most cases, the damaged periodontium of the second molar as a consequence of ERR was completely re-established one year after the third molar surgery . HoweverIt is known that third molar impaction affects the periodontal health of the adjacent second molar and leads to periodontal attachment loss of the tooth . It is cLingual plate fracture during lower third molar surgery is uncommon but potentially may lead to other severe complications like lingual nerve injury, bleeding, or displacement of fractured root or tooth into the sublingual space ,83. 
The Despite the potential benefits of CBCT in assessing the risks of lower third molar surgery, the imaging modality is not without its limitations. Apart from the higher radiation dosage, it is known that CBCT is more expensive in the equipment and maintenance costs, which may not be affordable for some patients. Routine screening of third molars using CBCT is costly and may be considered unnecessary. It is of note that the interpretation of the images of CBCT needs the proficiency and experience of the clinicians, which forms the most critical part of the risk assessment process. Wrong interpretation may put the patients at risk of potentially irreversible damages like IAN injury.Radiographic imaging has been the most useful tool for risk assessment of lower third molar surgery. CBCT is useful in understanding the 3D relationship between the third molar and the relevant adjacent structures, in particular, the IAC and the adjacent second molar. CBCT has been proven to be sensitive to identifying the true proximity of the tooth root and the IAC, yet it might not reduce the risk of IAN injury if the third molar is to be removed in a conventional manner. Since coronectomy of the lower third molar is proven to be safe in the long term, it offers a treatment alternative if a third molar carries a high IAN injury risk. CBCT may be useful in the decision-making of whether coronectomy or total removal of the lower third molar shall be performed by considering the proximity of the tooth and the IAC. There are also efforts to reduce the radiation dosage of CBCT to acquire a diagnostically acceptable image to improve the safety of the imaging modality. CBCT is also used to assess the risk of second molar root resorption or bone loss as a consequence of the third molar impaction. CBCT may also identify risky cases of lingual plate fracture when the lingual plate is very thin, or the tooth root perforates the lingual plate. The development and research of CBCT have increased its popularity as part of the risk assessment and informed consent procedure for lower third molar surgery. Further research on low-dose CBCT to acquire similar diagnostic-value images is ongoing to improve patients\u2019 safety."} +{"text": "The forensic investigation of asphyxia deaths still poses a challenge due to the need to demonstrate vital exposure to hypoxic insult according to high levels of evidence. The pulmonary effects of hypoxia are complex and the understanding of the mechanisms underlying the acute pneumotoxicity induced by hypoxia is still incomplete. Redox imbalance has been suggested as the protagonist of the main acute changes in pulmonary function in the hypoxic context. The development of knowledge in biochemistry and molecular biology has allowed research in forensic pathology to identify some markers useful in immunohistochemical diagnostics of asphyxia deaths. Several studies have highlighted the diagnostic potential of markers belonging to the HIF-1\u03b1 and NF-kB pathways. The central role of some highly specific microRNAs has recently been recognized in the complex molecular mechanisms involved in the hypoxia response; thus, several research activities are currently aimed at identifying miRNAs involved in the regulation of oxygen homeostasis (hypoxamiR). The aim of the manuscript is to identify, the miRNAs involved in the early stages of the cellular response to hypoxia, in order to characterize the possible implications in the forensic field of the determination of expression profiles. 
At present, more than 60 miRNAs involved in the hypoxia response with different expression profiles (upregulation and downregulation) have been identified. Despite the multiple and different effects on reprogramming following the hypoxic insult, the evaluation of the diagnostic implications of hypoxamiRs in the forensic field presupposes a specific treatment of the influences on HIF-1\u03b1 regulation, cell cycle progression, DNA repair, and apoptosis. The forensic investigation of asphyxia deaths still poses a challenge due to the need to prove vital exposure to hypoxic insult according to high levels of evidence. Historically, the study of pulmonary histomorphological changes resulting from asphyxia has been considered of great importance for diagnostic purposes. However, the correlation of histopathological findings with the diagnosis of asphyxia death requires an accurate assessment of any concomitant conditions such as alterations in the relationship between ventilation and perfusion attributable to age, comorbidities, and exposure to smoke. Therefore, over the years, constant research has been aimed at studying the molecular response to the decrease in oxygen tension.Hypoxia represents a fundamental physiological stimulus capable of triggering adaptive and maladaptive responses in a wide variety of pathophysiologically relevant situations. The effects of hypoxia at the pulmonary level are complex and the understanding of the mechanisms underlying the acute pneumotoxicity induced by the hypoxic insult is still incomplete. Redox imbalance has been suggested as the protagonist of the main acute changes in lung function in the hypoxic context, in particular as regards hypoxic vasoconstriction, endothelial dysfunction, edema, and lung inflammation. Growing evidence suggests that hypoxia is capable of causing severe mitochondrial oxidative stress in the lungs by increasing the production of ROS in complexes I and III of the electron transport chain The development of knowledge in the field of biochemistry and molecular biology has enabled research in forensic pathology to identify some molecular markers of response to hypoxic insult useful in immunohistochemical diagnostics of asphyxia deaths. Precisely, several scientific contributions have highlighted the diagnostic potential of markers belonging to the molecular pathways of HIF-1\u03b1 and NF-\u03baB.The central role of some highly specific microRNAs (miRNAs) in the context of the complex molecular mechanisms involved in the response to hypoxia has recently been recognized. In parallel, a role of miRNAs as emerging diagnostic biomarkers of the cellular response to low oxygen tensions has been established, so much so that several research activities are currently aimed at identifying the miRNAs involved in the regulation of oxygen homeostasis (hypoxamiRs) Despite the evidence generated, currently, the molecular biology of the pulmonary response to hypoxic insult requires further investigation aimed at clarifying - among other things - the actual diagnostic usefulness in the forensic field. 
The majority of the studies carried out so far have been conducted on animal models and have been specifically directed to the evaluation of the expression profiles of hypoxamiRs in different pathological conditions such as pulmonary hypertension, myocardial infarction, hypoxic-ischemic encephalopathy, skin ischemic lesions, and neoplastic progression.The main objective of the present study is the identification of a miRNA panel, whose expression in the lungs is induced by an acute hypoxic insult, to evaluate the actual possibility of using hypoxamiRs in the diagnosis of asphyxia deaths.To this purpose, a literature search was carried out aimed at identifying the markers currently supported by scientific evidence. In the selection process, the molecular pathways currently studied in post-mortem diagnostics were also considered with the aim of conferring objectivity and facilitating efficacy evaluations.The term \"asphyxia\" in its etymological meaning indicates the absence of the pulse, the cessation of the heartbeat, and therefore, by extension, the absence of respiratory acts or peripheral pulses with consequent respiratory block The main issues of interest in the diagnosis of asphyxia deaths concern the diagnosis of death, the comparative study of cadaveric phenomena for the estimation of the post-mortem interval (PMI), and the chronological diagnosis of the lesions as well as the differential diagnosis between vital phenomena and post-mortem. The medico-legal investigation of the chronology and vitality of the lesions requires the integration of different techniques and produces results that can be difficult to interpret, especially in cases of advanced decomposition and poor representation of the signs of the vital reaction.At present, the integration of macroscopic evidence and microscopic data obtained from histological and immunohistochemical investigations allows in most cases the chronological and vitality diagnosis of the lesions.The macroscopic diagnosis of mechanical asphyxia is based on the generic signs of asphyxia death and the presence of traumatic injury related to the application of injurious means The histopathological study of asphyxia deaths, as well as the autopsy one, is aimed at evaluating signs of asphyxia , as well as traumatic lesions . For a long time, immunohistochemical techniques have been of great importance in the study of the vitality of lesions, since death is very rapid, affecting the tissue with alterations included in the early stages of the inflammatory response At present, the diagnosis of mechanical asphyxia still presents difficulties due to the poor specificity of the macroscopic signs and the limited expression of vitality related to the rapidity of death. Such a condition requires a reflection on the need to identify additional diagnostic means to be used in a complementary or, in some cases, substitute way. For this purpose, a joint evaluation of the inflammatory response, especially through the immunohistochemical markers just proposed, and of the hypoxic insult response through the study of the molecular pathway of HIF-1\u03b1 could be useful. Undoubtedly, in addition to the methods outlined, a fundamental contribution to post-mortem investigations can be made by molecular biology through the study of miRNA expression profiles.The synthesis of mature miRNAs represents a complex multiphasic process capable of undergoing the influence of low oxygen concentrations at practically all levels. 
While causing a reduction of about 50% in the levels of target proteins HypoxamiRs are involved in the stereotyped rapid adaptive response to oxygen deprivation, even transient; precisely, miRNAs act rapidly at the post-transcriptional level to organize the response to environmental stress. Furthermore, such molecules coordinate a dynamic biological response based on gene regulation and - in particular - on transcription. Such a complex regulatory system is supported by significant molecular crosstalk aimed at effective communication between hypoxic signaling mechanisms and miRNAs (biogenesis and function).Despite currently acquired knowledge, the precise mechanisms by which miRNAs carry out their multiple regulatory effects remain largely undefined. Further insights will derive from future research aimed at defining the complexity of hypoxic adaptive processes. In particular, the study activity will have to continue in the current direction toward the better characterization of the role of HIF and redox reactions in the regulation of miRNA biogenesis.At present, more than 60 miRNAs involved in the hypoxia response with different expression profiles (upregulation and downregulation) have been identified. Despite the multiple and different effects on reprogramming following the hypoxic insult, the evaluation of the diagnostic implications of hypoxamiRs in the forensic field presupposes a specific treatment of the influences on HIF-1\u03b1 regulation, cell cycle progression, DNA repair, and apoptosis samples allows the study of wide retrospective cases Given the defined diagnostic implications, it is essential to carry out further studies aimed at improving the knowledge in the field of miRNA biology and of the modalities of expression regulation in response to hypoxia"} +{"text": "The renin-angiotensin aldosterone system (RAAS) is a hormonal cascade that functions in the homeostatic control of arterial pressure, tissue perfusion, and extracellular volume. Dysregulation of the RAAS plays an important role in the pathogenesis of cardiovascular and renal disorders.To review the role of the RAAS in the development of hypertensive cardiovascular disease and related conditions and provide an overview of the classes of pharmacologic agents that inhibit this system.The RAAS is initiated by the regulated secretion of renin, the rate-limiting enzyme that catalyzes the hydrolysis of angiotensin (Ang) I from the N-terminus of angiotensinogen. Ang I is in turn hydrolyzed by angiotensin-converting enzyme (ACE ) to form Ang II, a potent vasoconstrictor and the primary active product of the RAAS. Recent evidence has suggested that other metabolites of Ang I and II may have biological activity, particularly in tissues. Development of agents that block the RAAS began as a therapeutic strategy to treat hypertension. Preclinical and clinical studies have indicated important additional cardiovascular and renal therapeutic benefits of ACE Is and ARBs. However, blockade of the RAAS with these agents is incomplete.Therapeutic approaches that target more complete inhibition of the RAAS may offer additional clinical benefits for patients with cardiovascular and renal disorders. 
These approaches may include dual blockade using ACE Is and ARBs in combination, or new therapeutic modalities such as direct renin inhibition with aliskiren, recently approved for the treatment of hypertension."} +{"text": "The journal retracts the 2022 article cited above.Following publication, concerns were raised regarding the contributions of the authors of the article. Our investigation, conducted in accordance with Frontiers policies, confirmed a serious breach of our authorship policies and of publication ethics; the article is therefore retracted.This retraction was approved by the Chief Editors of Frontiers in Chemistry and the Chief Executive Editor of Frontiers. The authors have not responded to correspondence regarding this retraction."} +{"text": "Since the outbreak of the COVID-19 pandemic, Fangcang shelter hospitals have been built and operated in several cities, and have played a huge role in epidemic prevention and control. How to use medical resources effectively in order to maximize epidemic prevention and control is a big challenge that the government should address. In this paper, a two-stage infectious disease model was developed to analyze the role of Fangcang shelter hospitals in epidemic prevention and control, and examine the impact of medical resources allocation on epidemic prevention and control. Our model suggested that the Fangcang shelter hospital could effectively control the rapid spread of the epidemic, and for a very large city with a population of about 10 million and a relative shortage of medical resources, the model predicted that the final number of confirmed cases could be only 3.4% of the total population in the best case scenario. The paper further discusses the optimal solutions regarding medical resource allocation when medical resources are either limited or abundant. The results show that the optimal allocation ratio of resources between designated hospitals and Fangcang shelter hospitals varies with the amount of additional resources. When resources are relatively sufficient, the upper limit of the proportion of makeshift hospitals is about 91%, while the lower limit decreases with the increase in resources. Meanwhile, there is a negative correlation between the intensity of medical work and the proportion of distribution. Our work deepens our understanding of the role of Fangcang shelter hospitals in the pandemic and provides a reference for feasible strategies by which to contain the pandemic. Plagues such as the Spanish flu, Ebola and COVID-19 have had a huge impact on human economic production and daily life . FangcanInfectious disease dynamics models are frequently used to quantify the effect of control strategies on transmission control . They tyThe propagation of infectious diseases is highly stochastic, according to past experience. Scholars have developed a range of infectious illness models to capture this stochastic process, such as adapting the infectious disease dynamics model to the random network ,14 or inThere are three areas of relevant research regarding Fangcang shelter hospitals. The first area focuses on introducing the basic situation of the cabin hospital ,4,20. ThTo quantitatively answer this question, an infectious disease model with two stages based on the SEIR model was created in order to simulate the various scenarios of the epidemic prevention process . 
WhetherIn this model, both mildly and severely affected patients seek treatment at the designated hospital in the first stage as the Fangcang shelter hospital is not yet operational. The severe patients have priority access to the designated hospital, and both mild and severe patients are constrained by the limited amount of accommodation. Equation (1) shows the number of mild and severe patients who enter the designated hospital each day in the initial phase and are admitted in various compartments. The second stage involves the opening of the Fangcang shelter hospital, where mild patients who are already being treated at the designated hospital are transferred; meanwhile, only severe patients are treated at the designated hospital, each with their own upper limitations. The number of mild patients entering various pods in the second stage is shown in Equation (2). D. The patient division and SEIR model were combined to create the final state transfer process, which is shown in The estimate of the maximum capacity of the designated hospital and the Fangcang shelter hospital with new medical resources is shown in Equation (3). Three categories of model parameters are employed in this study. Disease parameters, which are taken from traditional infectious disease transmission models and reflect the fundamental features of the disease, make up the first group ,41,42. TWe presented a one-stage simple model that significantly simplifies the two-stage model to aid in the analytical analysis. The transfer procedure of the model is shown in The requirement that must be met denotes the extent to which the number of patients increases with the number of susceptible patients at a given level of prevalence; this can be obtained by setting the derivative of the number of confirmed patients. When this requirement is met, the epidemic hits an inflection point and the number of current patients stops growing. This requirement can be expressed as We conduct simulations to determine the effectiveness and size of the role of Fangcang shelter hospitals in epidemic prevention and control with different allocations and efficiencies. These simulations will be used to determine the number of uninfected individuals who remain after the Fangcang shelter hospitals are put into use.(i): The Fangcang shelter hospitals\u2019 efficacy. The effectiveness of the medical resources invested in the Fangcang shelter hospitals in order to prevent the epidemic\u2019s spread is examined in this section. In addition, the connection between the quantity of emergency resources built and the epidemic\u2019s spread when those resources are used to build Fangcang shelter hospitals are also examined. Specifically, we set (ii): Impact of the use and distribution of medical resources. This section investigates whether changing medical resource allocation and use have an effect on stopping the epidemic\u2019s spread. Specifically, we set (iii): Optimal conversion rate and allocation ratio combinations. When the additional medical resources are constant, this section investigates the effects of various combinations of allocation ratios and conversion rates on containing the epidemic\u2019s spread. We specifically set We first tested the model\u2019s ability to demonstrate the usefulness of allocating medical resources to Fangcang shelter hospitals to stop the epidemic by varying the number of infections under various Fangcang shelter constructions. 
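Because Eqs. (1)–(3) are referenced but not reproduced in this text, the bed-allocation and admission logic just described can only be illustrated schematically before turning to the results. The sketch below follows the verbal description: severe patients have priority for designated-hospital beds, mild patients move to Fangcang shelter beds once those open, and additional resources are split between the two hospital types by an allocation ratio and a bed conversion rate. All function and variable names are ours, not the authors' implementation.

```python
# Hedged sketch of the capacity and admission rules described above.
from dataclasses import dataclass

@dataclass
class Capacity:
    designated_beds: int   # designated-hospital beds (severe, plus mild in stage 1)
    fangcang_beds: int     # Fangcang shelter beds (stage 2, mild patients only)

def capacity_from_resources(base_designated, extra_resources,
                            allocation_ratio, conversion_rate):
    """Split additional resources between the two hospital types.

    allocation_ratio: share of extra resources given to Fangcang shelter hospitals.
    conversion_rate: Fangcang beds obtained per unit of resources, relative to
    one designated-hospital bed (shelter beds are assumed cheaper to provide).
    """
    to_fangcang = extra_resources * allocation_ratio
    to_designated = extra_resources - to_fangcang
    return Capacity(
        designated_beds=int(base_designated + to_designated),
        fangcang_beds=int(to_fangcang * conversion_rate),
    )

def admit(new_severe, new_mild, occupied_designated, occupied_fangcang,
          cap: Capacity, fangcang_open: bool):
    """Daily admissions: severe cases take designated beds first; mild cases
    go to Fangcang beds once open, otherwise to any remaining designated beds."""
    free_designated = max(cap.designated_beds - occupied_designated, 0)
    admitted_severe = min(new_severe, free_designated)
    free_designated -= admitted_severe
    if fangcang_open:
        free_fangcang = max(cap.fangcang_beds - occupied_fangcang, 0)
        admitted_mild = min(new_mild, free_fangcang)
    else:
        admitted_mild = min(new_mild, free_designated)
    return admitted_severe, admitted_mild
```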
The development of infections under various medical resource addition rates and using real-time patient data in Wuhan from 29 January is shown in research that proWe then immediately conducted an examination of how various conversion rates and allocation ratios contributed to the outbreak\u2019s control. The role of increasing the conversion rate under various additional medical resources is shown in It can be seen that with fixed resources, increasing the conversion rate can raise the maximum carrying capacity of Fangcang shelter hospitals, which is consistent with our prediction. Although there is theoretically no upper limit to the conversion rate, in practice, the work intensity and energy of health care workers are limited, and the conversion rate cannot be increased indefinitely. Increasing the conversion rate can have the effect of alleviating the lack of resources when resources are relatively scarce. If resources are available in sufficient quantities, excessively raising the conversion rate will not improve the preventative outcomes due to the fact that the Fangcang shelter hospital has enough space to accommodate all of the patients with a mild illness; indeed, adding more beds would only result in excess and increase the workload of the medical personnel.The effect of increasing the allocation ratio of new medical resources is shown in As the allocation ratio rises, The allocation ratio with the highest proportion of susceptible populations under scenarios with different quantities of total new medical resources is identified via the comparison of the proportion of susceptible populations. s depicts the epidemic\u2019s spread under the scenario of fixed medical support with various allocation ratios and conversion rates. Overall, when the conversion rate and allocation ratio are both higher, the proportion of susceptible population is larger and epidemic prevention is more effective. With the same level of epidemic prevention, there is a specific inverse proportional relationship between the distribution ratio and conversion rate. If the conversion rate and distribution ratio reach a certain level, the proportion of the susceptible population no longer rises, and the epidemic prevention effect in this region is the same.The combined impact of the conversion rate and allocation ratio on the efficiency of epidemic prevention was then simulated as shown in As can be seen, the upper bound of the allocation ratio does not change as the conversion rate increases. The lower bound of the allocation ratio under the optimal parameters decreases and exhibits a certain inverse relationship with the increase in the conversion rate, which is consistent with our prediction. The best epidemic prevention can only be achieved at a given resource quantity when the minimum combination of Fangcang shelter hospital and a designated number of hospital beds is met. To meet this demand, a minimum conversion rate needs to be attained in order to ensure that a sufficient number of Fangcang shelter hospital beds can be provided. As the conversion rate increases, the selected interval of the allocation ratio gradually increases, but the maximum cannot be exceeded.We further confirmed the change in the number of deaths, with the lowest number of deaths occurring in the same parameter domain as when the number of infections was lowest, demonstrating that excessive medical resource consumption does not result in better epidemic prevention. 
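The search over allocation ratios and conversion rates underlying these results amounts to a simple grid sweep. A minimal sketch is shown below, assuming a hypothetical `run_epidemic(allocation_ratio, conversion_rate)` wrapper around the two-stage model that returns the final susceptible fraction; the grid bounds and step sizes are placeholders, not the values used in the study.

```python
# Hypothetical grid sweep over the two policy levers; returns the grid of
# final susceptible fractions and the best (ratio, rate) pair found.
import numpy as np

def sweep(run_epidemic, ratios=None, rates=None):
    ratios = np.linspace(0.0, 1.0, 21) if ratios is None else ratios
    rates = np.linspace(1.0, 10.0, 19) if rates is None else rates
    grid = np.zeros((len(ratios), len(rates)))
    for i, ratio in enumerate(ratios):
        for j, rate in enumerate(rates):
            grid[i, j] = run_epidemic(allocation_ratio=ratio, conversion_rate=rate)
    best = np.unravel_index(np.argmax(grid), grid.shape)
    return grid, (ratios[best[0]], rates[best[1]])
```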
The differences between the scenarios of absolutely insufficient resources (A two-stage infectious disease model was constructed in this paper. The model takes into account the limited medical resources based on the conventional SEIR model, adds the maximum capacity of the designated hospitals and the Fangcang shelter hospitals, as well as various admission strategies that denote the two stages. In the first stage, all patients were admitted to the designated hospital, and in the second stage, patients were triaged; thus, mild patients entered the Fangcang shelter hospital first, and severe patients entered the designated hospital, and the transition between the two stages was marked by the operation of the Fangcang shelter hospital. Furthermore, we defined an allocation ratio in order to represent the method used to allocate healthcare resources, and also defined a medical resource conversion rate in order to demonstrate the advantages of Fangcang shelter hospital regarding their admission of minorly affected patients. Using these two parameters, we investigated the role of health care resource allocation strategies in stopping the spread of the epidemic.In order to examine the effects of medical resource allocation on epidemic prevention and control, we analyzed the function of Fangcang shelter hospitals and their impact on epidemic prevention and control, as well as the influence of medical resource allocation. Our analysis demonstrates, firstly, that Fangcang shelter hospitals can successfully slow the spread of epidemics, which is consistent with previous research. Second, when medical resources are scarce, increasing the conversion rate and the proportion of input to Fangcang shelter hospitals can help stop the spread of the epidemic rapidly; however, when medical resources are abundant, excessively increasing either the conversion rate or the proportion of input to Fangcang shelter hospitals is detrimental to the overall prevention and control of epidemics. This may be explained by the fact that when medical resources are insufficient, putting resources into Fangcang shelter hospitals can minimize the potential transmission among the public by admitting as many minorly affected patients as possible; meanwhile, when there is a certain amount of spare medical resources, over-investing in Fangcang shelter hospitals can produce a wasteful excess of resources and designated hospitals require investment in order to save seriously affected patients.This paper develops a two-stage model of infectious disease to examine the role of Fangcang shelter hospitals in outbreak prevention and control, and examines the impact of healthcare resource allocation on outbreak prevention and control. Our findings demonstrate that Fangcang shelter hospitals can effectively control the rapid spread of epidemics, which is consistent with previous research. Furthermore, we further discuss what the optimal medical resource allocation solutions are when medical resources are either limited or abundant. The results show that the optimal allocation ratio of resources between designated hospitals and Fangcang shelter hospitals varies with the amount of additional resources.The results of our study\u2019s analysis of allocation ratios and conversion rates may inform further developments in healthcare resourcing policy. Firstly, the early diagnosis and elimination of an epidemic are the cornerstones of prevention strategies. 
More medical resources are helpful during epidemic prevention and control if the disease has already spread to a given area, but excessive resource investment will lead to resource redundancy and ineffectiveness. Secondly, the government should allocate as many medical resources as possible to large-scale isolation measures, such as Fangcang shelter hospitals, if medical resources are insufficient. Meanwhile, it also needs to guarantee a certain number of beds for serious illnesses in designated hospitals. To prevent medical staff from becoming overwhelmed, the use of medical resources should be organized intelligently. However, overwhelmed medical staff will not result in a greater contribution to the prevention of epidemics. Finally, in order to achieve the best epidemic prevention and control objectives, the government\u2019s medical resource allocation policy might consider both minorly and seriously affected patients, isolation admission and medical treatment, and flexible disposal within a reasonable interval, combined with the actual situation.Our work is not without limitations. Firstly, we did not consider the social and economic costs in our model. Indeed, from an economic perspective, future research could also examine the long-term effects of short-term investments in Fangcang shelter hospitals and designated hospitals. This is because allocating medical resources to either the designated hospitals or Fangcang shelter hospitals requires the consideration of whether to renovate the existing site or build a new one, as well as the long-term future operations. Secondly, the parameters of our model are static, which may not be the case in the real world. Although we set key parameters using some realistic scenarios, these parameters remain constant. Future research could focus on the dynamics of infectious disease models with multiple thresholds. Further research using realistic and feasible optimization scenarios and employing operational research methods could contribute from an optimization perspective.Finally, our paper offers guidelines regarding the allocation of resources in such Fangcang shelter hospitals. The effects of the rational governmental allocation of health care resources vary during epidemics. Combined with reality, healthcare resources are flexibly allocated and deployed in order to adjust and therefore maximize the benefits of these measures. Our work exploring the impact of Fangcang shelter hospitals on epidemic prevention and control has improved our understanding of the impact of healthcare resource allocation on infectious disease prevention and control, and will inform future disease prevention and control responses."} +{"text": "In the Calculating epidemiologically adjusted Ct values subsection of the Methods, there are a number of formatting errors in the sixth and ninth equations. Please view the complete, correct sixth equation here:In the Investigating variables associated with within-host viral burden subsection of the Results, there is an error on the third line of the second paragraph referencing"} +{"text": "The presence of an upper subscapular nerve branching from the posterior division of the superior trunk, and it being accompanied by an accessory subscapular artery, is of both clinical and surgical significance. During routine dissection of the root of the neck in a 75-year-old male cadaver, an unusual branch from the third part of the right subclavian artery was observed lateral to the dorsal scapular artery. 
Continued dissection revealed that this artery traveled between the anterior divisions of the superior and middle trunks of the brachial plexus before traveling alongside a nerve from the posterior division of the superior trunk of the brachial plexus. This artery and nerve descended on the anterior aspect of the subscapularis muscle before piercing into its muscle belly. We believe this to be a previously unreported unique variation of the upper subscapular nerve that is accompanied by an accessory subscapular artery on its course to the subscapularis muscle. Knowledge of anatomical variations like this may lead to decreased complications in nerve blocks and\u00a0surgical procedures related to the shoulder. The brachial plexus is a network of nerves that provides both somatic motor and sensory innervation to the upper extremity, including the rotator cuff. As the brachial plexus travels through the posterior triangle of the neck and then into the axilla, arm, forearm, and hand, it contains various regions that are named according to how the plexus is formed . The mosTypically, the upper subscapular nerve emerges from the posterior cord along with the lower subscapular nerve to innervate the subscapularis muscle, one of the four muscles making up the rotator cuff, which collectively assist in producing a wide range of shoulder movement while maintaining the stability of the glenohumeral joint ,4.The vascular supply to the rotator cuff muscles typically arises from multiple arteries, including the subscapular artery, posterior circumflex humeral artery, dorsal scapular artery, and suprascapular artery ,5. Both The variation described in this case is a novel finding of unique neurovascular branching of the posterior division of the superior trunk and the third part of the subclavian artery, both of which terminate in the subscapularis muscle. The relations of the brachial plexus to surrounding muscles or blood vessels are very complex, and this complexity must be comprehensively understood by physicians engaged in surgical intervention in this region . AwareneA 75-year-old male donor was received through the Saint Louis University (SLU) Gift of Body Program of the Center for Anatomical Science and Education (CASE) with signed informed consent from the donor. The donor\u2019s self-reported medical history included Alzheimer\u2019s disease and Parkinson\u2019s disease. Every effort was made to follow all local and international ethical guidelines and laws that pertain to the use of human cadaveric donors in anatomical research . The CASUpon dissection of the root of the neck, two arteries were observed branching from the third part of the right subclavian artery before coursing between components of the brachial plexus. The dorsal scapular artery meandered between the superior and middle trunks of the brachial plexus on its path to supply the levator scapulae and rhomboid muscles. The other branch, identified as an accessory subscapular artery, emerged from the subclavian artery 0.8 cm lateral to the dorsal scapular artery, though medial to the first rib, before traveling between the anterior divisions of the superior and middle trunks Figures , 2.This accessory subscapular artery was present in addition to the subscapular artery, posterior circumflex humeral artery, suprascapular artery, and dorsal scapular artery, all of which demonstrated typical branching patterns to supply the rotator cuff. 
Further dissection of the accessory subscapular artery revealed that it traveled with the upper subscapular nerve after emerging from the posterior division of the superior trunk of the brachial plexus Figure . This arAnatomical variations of the brachial plexus in human infant and adult cadavers are well documented. Kerr lists 29 different forms of the brachial plexus among 175 cadaver specimens dissected between 1895 and 1910 ,12. TherIn another case report by Deshmukh et al. 2016), the upper subscapular nerve was reported to originate in a triplet fashion resulting in accessory upper subscapular nerves. Of these accessory nerves, one originated directly from the posterior cord, the other from the lower subscapular nerve, and the remaining third\u00a0midway between the other two . Emamhad, the uppThe presence of multiple branches from the third part of the subclavian artery is a unique finding by itself. The dorsal scapular artery has been found to branch from the third part of the subclavian artery 67-75% of the time with the thyrocervical trunk being the second most common origin . As the The literature is ripe with reports of arterial branching pattern variations supplying the rotator cuff muscles. For example, the subscapular artery typically arises from the third part of the axillary artery; however, it has also been reported to arise from the second part of the axillary artery or be completely absent with thoracodorsal and circumflex scapular arteries arising separately from the axillary artery . The supExplanations for these variations in branching from the subclavian artery and the subsequent relationship of those branches with the brachial plexus may be the result of embryologic developmental differences . An initKnowledge of this atypical relationship between the upper subscapular nerve and the accessory subscapular artery is important for anatomists, radiologists, surgeons, and anesthesiologists due to its clinical and surgical implications. Neurovascular variations must be kept in mind whenever surgical access is needed to repair vascular or neurological lesions in the axilla, scapular region, or posterior triangle. The location of this anomalous artery could also create difficulty in placing a supraclavicular block because it increases the risk of damage to the blood vessel. Atypical relationships between nerves and arteries could increase the chances of impingement of vessels leading to ischemia and loss of function .During routine dissection of the root of the neck, a unique variation involving an accessory subscapular artery from the third part of the subclavian was observed. After reviewing the literature, we found this to be a novel variation as there have been no reported cases of this anomaly. As this unique vessel passed between the anterior divisions of the superior and middle trunks of the brachial plexus, it was accompanied by an unusual branch from the posterior division of the superior trunk as it traveled to the subscapularis muscle. To the best of our knowledge, there have been no documented cases of the coexistence of these neurovascular variations. 
This cadaveric study improves the knowledge of variations in the anatomy of the brachial plexus and branches of the subclavian artery, which is significant to anatomists, radiologists, anesthesiologists, and surgeons."} +{"text": "Experiences and practices addressing ethical conflicts and malpractice from the personal perspective of health care workers in these settingsIdentification and engagement with violenceMeasures for the reduction of restraints and coercion (violation of the autonomy of patients)The code of ethics of the EPA intends to guide the ethical practice of psychiatry by offering a comprehensive approach to the ethical challenges in the field. It highlights universal ethical principles and considers their application to the specific practice of psychiatry. Ethical questions in psychiatric practice are manifold: tensions between respect for autonomy versus care and protection from harm, problems with coercive therapy and capacity for judgement etc. To receive more information, the Committee on Ethical Issues conducted a survey on \u201cEthics in psychiatric practice\u201d to collect information from inpatient treatment settings of individual wards in psychiatric hospitals Europe-wide on following topics:In this talk, the preliminary results of this survey will be presented and discussed.None Declared"} +{"text": "For children, receiving adequate nutrition in their first 1000 days of life is vital to ensuring their appropriate growth and preventing the future development of diseases .Breastfeeding and complementary feeding play important roles in determining future health. Breast milk is rich in components that stimulate a baby\u2019s immune system positively from the day it is born, so breast milk is beneficial and should be recommended at least in the first six months of life . The AmeChildren is to highlight recent data in the context of child nutrition. This Special Issue publishes a study by Natalie R. JaBaay et al. in a Michigan cohort (USA) of children aged 1\u20133 whose diet meets basic nutritional recommendations. Breastfeeding rates, fruit and vegetable intake, and the avoidance of added sugars in infancy were all beneficial eating behaviors for the children in the study population, but behaviors related to the restriction of nutrient-poor foods and added sugars in early childhood were not observed. The authors summarize that the area of infant nutrition requires additional public attention and education [The goal of this Special Issue of ducation .Improper diet and a low level of physical activity are the main determinants of the development of the obesity epidemic among children. Nutrition in early childhood, including breastfeeding and complementary feeding, also has a significant impact on the development of obesity . A rapidProper nutrition is needed to halt these negative trends. Establishing healthy dietary patterns in infancy through preschool age may prevent the development of negative health effects in the future and promote a higher quality of life.I believe this Special Issue contributes to the enhancement of further studies on child nutrition."} +{"text": "The journal retracts the 2022 article cited above.Following publication, concerns were raised regarding the contributions of the authors of the article. 
Our investigation, conducted in accordance with Frontiers policies, confirmed a serious breach of our authorship policies and of publication ethics; the article is therefore retracted.This retraction was approved by the Chief Editors of Frontiers in Psychology and the Chief Executive Editor of Frontiers. The authors have not responded to correspondence regarding this retraction."} +{"text": "The degree of success and effectiveness of the child\u2019s socialization largely depends on the timely formation of social emotions, the ability to understand the emotional states of the participants in the interaction and manage their emotions.studying the features of understanding the emotional states of peers and adults by children of preschool age with special educational needs.The study involved 227 children aged 5-7 attending educational institutions: 95 children without developmental disorders; 73 children with severe speech disorders; 9 children with motor disorders; 25 children with visual impairment ; 15 children with hearing impairment ; 10 children with autism spectrum disorder. The \u201cEmotional Faces\u201d method (Semago) and the method of studying the child\u2019s understanding of tasks in situations of interaction (Veraksa) were used.Tasks for the categorization of emotional states cause difficulties in children with speech disorders, since they require a certain mastery of vocabulary for the designation of emotional states. As a result of limited communication in children, there is a lack of understanding of the meaning, causes and motives of the actions of other people, as well as the consequences of their actions, their impact on others.Preschool children with motor disabilities are inferior to peers without developmental disabilities in accurate verbalization of emotional states, manifested in a primitive description of emotions.Visually impaired preschool children do not have sufficiently clear ideas about socially acceptable actions in communication situations, about ways of expressing relationships with peers and adults.Children with hearing impairment better understand the emotional states of their peers than the states of adults, but they do not know how to show their attitude towards their peers. Difficulties in verbalizing emotions are observed.Children with autism spectrum disorder experience significant difficulties in recognizing various situations of interaction, isolating tasks and requirements set by adults in these situations; children practically did not try to depict an emotion, having difficulty in differentiating it.The research confirmed the assumption that children with disabilities have significant difficulties in differentiating similar emotions, they do not accurately determine the emotional state of their peers and people around them. This paper has been supported by the Kazan Federal University Strategic Academic Leadership Program.None Declared"} +{"text": "The journal retracts the 2022 article cited above.Following publication, concerns were raised regarding the contributions of the authors of the article. Our investigation, conducted in accordance with Frontiers policies, confirmed a serious breach of our authorship policies and of publication ethics; the article is therefore retracted.This retraction was approved by the Chief Editors of Frontiers in Chemistry and the Chief Executive Editor of Frontiers. 
The authors have not responded to correspondence regarding this retraction."} +{"text": "The cognitive health of older adults has become an increasingly important topic as the world's population continues to age. This symposium discusses the latest research and best practices for promoting and maintaining cognitive health in older adults, with an emphasis on the role of physical activity. Four expert presentations will delve into various aspects of this complex topic.The first presentation will provide a bibliometric analysis of the scientific literature about the link between lifestyle and cognition.The second presentation will present cross-sectional findings on the relationship between 24-hour movement behaviours and cognition in adults aged 55 years and above.The third presentation will present the effects and process evaluation of a cognitively enriched walking program for older adults, focusing on the challenges of research in real-life settings.The final presentation will highlight the impact of a combined physical and cognitive activity program on executive function and walking speed in adults aged 70-85 years.The bibliometric analysis and cross-sectional findings will provide a broad overview of the current state of research in this area, while the intervention studies will offer practical insights into the design and implementation of physical activity programs that target cognitive health in older adults. This symposium is a valuable opportunity for researchers, practitioners, and policy makers to learn about the latest developments in this field and discuss the future direction of research and policy."} +{"text": "Steel\u2013concrete continuous composite beams are widely used in buildings and bridges and have many economic benefits. Slip has always existed in composite beams and will reduce the stiffness of composite beams. The effect of creep under a long-term load will also be harmful. Many scholars ignore the combined effects of slip and creep. In order to more accurately study the mechanical properties of steel\u2013concrete continuous composite beams under long-term loads, this paper will consider the combined actions of slip and creep. By combining the elastic theory and the age-adjusted effective modulus method, the differential equation of the composite beam is derived via the energy variational method. The analytical solutions of axial force, deflection and slip under a uniform load are obtained by substituting the relevant boundary conditions. The creep equation is used to simulate the behavior of concrete with time in ANSYS. The analytical solution is verified by establishing a finite element model of continuous composite beams considering slip and creep. The results suggest the following: the analytical solution is consistent with the finite element simulation results, which verifies the correctness of the analytical solution. Considering the slip and creep effects will increase the deflection of the composite beam and the bending moment of the steel beam, reduce the bending moment of the concrete slab and have a significant impact on the structural performance of the continuous composite beam. The research results considering the coupling effect of slip and creep on continuous composite beams can provide a theoretical basis for related problems. 
The steel\u2013concrete composite structure can not only give full play to the performances of the materials themselves but also to the overall superiority of the materials after combination , so it is widely used in the construction industry and in bridge engineering ,2. In stMost of the studies on slip and creep effects were carried out in simply supported composite beams, and there are few studies on continuous composite beams. Dezi et al. proposed a viscoelastic analysis method and a parametric analysis of continuous composite beams with flexible shear connectors, considering the effects of creep in composite beams ,8. UsingBan et al. studied the time-dependent behaviors of simply supported composite beams with blind plate bolts under sustained loads. The results showed that the use of blind plate bolts in composite beams is beneficial to the time-dependent response . Nguyen Slip and creep have been considered in most composite beam research, but their combined effects can lead to inaccurate solutions under long-term loading. Compared with simply supported composite beams, the analytical problem of a continuous composite beam under the combined actions of slip and creep is difficult to solve. At present, there is no analytical solution in the literature for the internal force and deformation of a continuous composite beams under the coupling effect of slip and creep. This analytical solution is obtained in this paper. In order to accurately and conveniently calculate the coupling effect of slip and creep on steel\u2013concrete continuous composite beams, this paper studies the mechanical properties of continuous composite beams under long-term uniform loads and establishes a governing differential equation for an axial force function for continuous composite beams via the energy variational principle. By introducing the relevant boundary conditions, the analytical solutions for the axial force, deflection and slip of a continuous composite beam under a uniform load are solved. Finally, the correctness and applicability of the obtained formulae are verified via ANSYS examples. At the same time, the effects of three kinds of shear stiffness on the slip and deflection of continuous composite beams are calculated. It is hoped that this research can provide a theoretical basis for the practical engineering calculation of continuous composite beams under the influence of a long-term load.In this section, based on the elastic theory and the age-adjusted effective modulus method, the basic assumptions of continuous composite beams are established, and the related theories of the slip and creep of continuous composite beams are described. 
Using the energy variational principle, the strain energy equation of the composite beam considering the effects of slip and creep is established, and the control differential equation and boundary conditions are obtained via the variational method and the partial integral method.Without considering the influence of transverse shear deformation in the composite beam, the curvatures of the steel beam and the concrete flange plate are completely consistent, and the vertical lifting phenomenon at the interface of the steel\u2013concrete composite beam is ignored.The cross-sections of the concrete slab and the steel beam meet the plane section assumption, respectively, and the stud connector conforms to the elastic sandwich setting.The stress\u2013strain relationship between the steel beam and the concrete is linear throughout the stressing phase, and the concrete is not cracked or spalled.The shear connectors are evenly distributed along the length direction of the composite beam, and the shear force of each shear connector is linearly related to the slip.In the coupled analysis of the slip and creep effects of continuous steel\u2013concrete composite beams, the calculation is based on the following basic assumptions ,24:WithoV are the axial force, bending moment and shear force of the composite beam; The section and element force diagram of the composite beam is shown in From the diagram of the forces on the unit and the equilibrium relationship, it is known that:From the bending moment equilibrium condition, In Equation (2), The shear connector of the composite beam is similar to the assumption that the spring conforms to the elastic interlayer. The displacement diagram of the interlayer slip is shown in ectively .According to the relationship between the axial strain and the displacement of the composite beam, On the basis of considering the influence of the interface slip of the composite beam, the age-adjusted effective modulus method (AEMM) is used to consider the creep effect of the composite beam concrete. This method uses the integral mean value theorem to transform the integral equation into an algebraic equation. At the same time, the influence of the aging properties of the concrete is considered, which simplifies the calculation and meets the needs of calculation accuracy. It is suitable for finite element analysis. The algebraic constitutive relation is ,27:(6)\u03b5ct; Through the effective modulus of age adjustment, considering the influence of the aging coefficient on the creep coefficient, we can rewrite Equation (6) as:The range of the aging coefficient H. Torst recommenH. Torst combinedH. Torst considerThe standard loading age of concrete is 7~14 days, and most concrete creep is completed within 1~2 years. The creep coefficient curves of concrete are different under different initial loading times. When the loading time of concrete is the same, the earlier the initial loading age, the greater the creep coefficient of the concrete will be . The calAccording to the principle of minimum potential energy, the composite beam maintains an equilibrium state when subjected to external forces. 
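The algebraic AEMM constitutive relation referenced above as Equation (6) and the associated age-adjusted effective modulus did not survive extraction. For reference, a standard Trost-Bazant form of these relations is sketched below; the notation is the conventional one and is assumed here rather than copied from the paper (shrinkage strain is omitted for brevity).

```latex
% Standard age-adjusted effective modulus method (AEMM) relations
% (conventional Trost--Bazant form; symbols assumed, not verbatim from the paper).
\begin{align}
\varepsilon_{c}(t) &= \frac{\sigma_{c}(t_{0})}{E_{c}(t_{0})}\,
   \bigl[\,1+\varphi(t,t_{0})\,\bigr]
 + \frac{\sigma_{c}(t)-\sigma_{c}(t_{0})}{E_{c}(t_{0})}\,
   \bigl[\,1+\chi(t,t_{0})\,\varphi(t,t_{0})\,\bigr],\\[2pt]
\bar{E}_{c}(t,t_{0}) &= \frac{E_{c}(t_{0})}{1+\chi(t,t_{0})\,\varphi(t,t_{0})},
\end{align}
```

where \(\varepsilon_{c}(t)\) is the concrete strain, \(\sigma_{c}(t_{0})\) the stress at first loading, \(E_{c}(t_{0})\) the modulus at the loading age \(t_{0}\), \(\varphi(t,t_{0})\) the creep coefficient and \(\chi(t,t_{0})\) the aging coefficient; the age-adjusted modulus \(\bar{E}_{c}\) replaces \(E_{c}\) in the long-term, creep-inclusive analysis of the composite section.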
Therefore, this paper will establish the strain energy equations of the flange and steel beam of the steel\u2013concrete composite beam.The strain energy of the concrete slab:The strain energy of steel beam:The elastic interlayer slip\u2019s strain energy:The external load potential energy:K is the single shear connector connection stiffness and e is the shear connector spacing.The total potential energy of the combined beam is According to the principle of minimum potential energy, the first-order variation of the total potential energy is divided into zero, that is, The governing differential equations and boundary conditions can be obtained by Equation (15):According to the internal force balance relationship of the composite beam, In Equation (18), According to the relationship between the bending moment and curvature:Substituting Equation (19) into Equation (18), we can obtain:Simplifying Equation (20) by After simplifying the first derivative of Equation (16), we obtain:After substituting Equation (20) into Equation (22) and calculating the first derivative, we obtain:Substituting the first term of Equation (16) into Equation (23), the slip differential equation is obtained:Simplifying Equation (20) by In the case of determining the structural form and load, arrange the integral using the boundary conditions and axial force control differential equations to obtain the expression of the deflection function:In this section, according to the differential equations of the axial force, slip and deflection solved above, an analytical calculation of continuous composite beams under the combined actions of slip and creep is carried out. By substituting different boundary conditions, the analytical solution for a continuous composite beam under a uniform load is obtained.A two-span continuous beam subjected to uniform load q is selected as the research object, as shown in The bending moment function of the continuous composite beam is shown in Equation (27):The bending moment function (27) of the composite beam is substituted into the differential Equation (21) of the axial force function. Then, with the boundary conditions The first derivative of the bending moment function (27) of the composite beam is substituted into the differential equation of the slip function (21). Then, through the boundary conditions According to the deflection function expression in Equation (32), the bending moment function in Equation (33) under the uniform load of the composite beam and the analytical solution of the axial force, shown in Equations (28) and (29), are introduced, and then through the boundary conditions There are two creep analysis methods in ANSYS: explicit and implicit time integration methods. At the same time, the equation of the creep strain rate, t in Equation (34), we obtain:After the first derivative of the variable Substituting the relationship Creep should have a linear relationship with the strain rate of concrete. Take t and the creep coefficient. The time at the ith moment is According to the expression, we can see the relationship between For the calculation of the creep coefficient 2, the Poisson\u2019s ratio value is 2 and the Poisson\u2019s ratio value is The section size of the steel\u2013concrete continuous composite beam selected in this paper is shown in A sketch of the continuous composite beam model under a uniform load is shown in The ANSYS finite element composite beam model was built for structural analysis. 
The upper and lower flanges of both the concrete slab and the steel beam were simulated using the solid unit Solid45, and the web of the steel beam was simulated using the Plane42 unit; the studs between the concrete slab and the steel beam were simulated using the Combin39 spring element. Combin39 can define the load\u2013slip curve with an axial or torsional function and can provide good simulations of studs in composite beams. By controlling the meshing and nodes of the steel beam and the concrete slab, the steel beam and the concrete node (two overlapping nodes) were selected at the position at which the studs were arranged, and the spring element Combine39 element was established. The spring element node, the concrete node and the steel beam node were connected by the degree of freedom coupling. The constraint arrangement was a fixed hinge support at the left end of the continuous beam and movable hinge supports at the mid-span and right end. The finite element model of the continuous composite beam is shown in The results of the comparison between the theoretical and FEM calculated axial forces at the mid-span position of the left span of the continuous composite beam are shown in As can be seen from As can be seen in The comparative results of deflection and slip at the mid-span position of the left span of the continuous composite beam are shown in As can be seen from As can be seen in The shear stiffness of the shear connector was taken as the research variable, and the remaining parameters were the same as in the previous example. Three kinds of shear stiffness, namely, The maximum axial forces of the composite beam at the initial time and the final time of creep under the three shear stiffnesses are compared, as shown in According to the data in In the case of different shear stiffnesses, the maximum deflection and slip values of the composite beam on the 14th and 728th days are compared, as shown in According to According to A comparison between the concrete bending moment of the composite beam and the bending moment of the steel beam at the initial moment and at the final moment of creep is shown in According to In this paper, the energy variational method is used to establish the control differential equation, considering the combined actions of slip and creep. The theoretical analytical solutions for the slip, deflection and axial forces of continuous composite beams under uniform loads are derived. The correctness of the analytical solution is verified by establishing a finite element composite beam model.For a composite beam under a long-term load, creep will increase the deflection of the composite beam and the steel beam bending moment, reducing the concrete bending moment. The deflection of the composite beam increases by 14% under the influence of creep, the bending moment of the steel beam increases by 12% and the bending moment of the concrete decreases by 49%. Creep has little effect on the slip and axial force of a composite beam, increasing by about 2%.The analytical solution obtained in this paper is suitable for the calculation and analysis of any equal-span continuous composite beam under a uniform load and can calculate any shear stiffness, which can quickly and intuitively reflect changes in the deflection, slip and axial force under a long-term load. The increase in the shear stiffnesses of shear connectors will increase the axial forces of composite beams and reduce slip, deflection and bending moments. 
The increase in shear stiffness will reduce the influence of creep on the axial forces and slip of composite beams and increase the influence of creep on deflection.In this paper, the slip, deflection and axial force of steel\u2013concrete continuous composite beams under the coupling of slip and creep effects are analyzed and studied. The conclusions are as follows:In this paper, mechanical properties under the coupling actions of slip and creep are studied. Future work will carry out experimental research on continuous composite beams. The changes in the internal forces in the negative moment zones of composite beams under coupling and the stress caused by concrete cracking affected by creep must be further studied."} +{"text": "Semiconductor chips on a substrate have a wide range of applications in electronic devices. However, environmental temperature changes may cause mechanical buckling of the chips, resulting in an urgent demand to develop analytical models to study this issue with high efficiency and accuracy such that safety designs can be sought. In this paper, the thermal buckling of chips on a substrate is considered as that of plates on a Winkler elastic foundation and is studied by the symplectic superposition method (SSM) within the symplectic space-based Hamiltonian system. The solution procedure starts by converting the original problem into two subproblems, which are solved by using the separation of variables and the symplectic eigenvector expansion. Through the equivalence between the original problem and the superposition of subproblems, the final analytical thermal buckling solutions are obtained. The SSM does not require any assumptions of solution forms, which is a distinctive advantage compared with traditional analytical methods. Comprehensive numerical results by the SSM for both buckling temperatures and mode shapes are presented and are well validated through comparison with those using the finite element method. With the solutions obtained, the effects of the moduli of elastic foundations and geometric parameters on critical buckling temperatures and buckling mode shapes are investigated. Semiconductor chips have garnered significant scholarly attention due to their indispensable engineering applications in stretchable electronics , flexiblA variety of numerical and approximate methods can be employed to address the above issue. Raju and Rao investigAlthough numerical methods can effectively solve a variety of engineering problems within an acceptable margin of error, analytical methods are still vital since they not only provide benchmark solutions but also are helpful for rapid parameter analysis. Bouazza et al. conducteIt is noted that most existing analytical methods are limited to yield only L\u00e9vy-type (including Navier-type) solutions for plates with simply supported BCs, leading to an urgent demand for the exploration of new analytical solutions for non-L\u00e9vy-type plates. The symplectic superposition method (SSM), incorporating the symplectic approach pioneered by Yao et al. and the In this paper, with the SSM, the thermal buckling of a semiconductor chip on a substrate, which is treated as a thin plate on a Winkler foundation, is studied. 
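Under this idealization (a Kirchhoff thin plate on a Winkler foundation with a uniform temperature rise and restrained in-plane expansion), the classical governing equation for the buckling deflection takes the textbook form below. The symbols are assumed, not copied from the paper, whose own equations were lost in extraction; in the Hamiltonian framework used by the SSM, this fourth-order equation is subsequently recast as a first-order symplectic system of the form \(\partial Z/\partial y = HZ\) in a mixed state vector \(Z\) before the separation of variables.

```latex
% Classical thermal buckling equation of a thin plate resting on a Winkler foundation
% (textbook form; notation assumed, not verbatim from the paper).
\begin{equation}
D\,\nabla^{4}w + N_{T}\,\nabla^{2}w + k\,w = 0,
\qquad
D=\frac{E h^{3}}{12\,(1-\nu^{2})},
\qquad
N_{T}=\frac{E\,\alpha\,\Delta T\,h}{1-\nu},
\end{equation}
```

where \(w\) is the transverse deflection, \(D\) the flexural rigidity, \(k\) the Winkler foundation modulus, \(h\) the plate thickness, \(E\), \(\nu\) and \(\alpha\) the Young's modulus, Poisson's ratio and coefficient of thermal expansion, and \(\Delta T\) the uniform temperature rise; the critical buckling temperature is the smallest \(\Delta T\) admitting a nontrivial \(w\) compatible with the clamped boundary conditions.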
The rest of the paper is organized as follows: The governing equation for the thermal buckling of the thin plate in the Hamiltonian system is introduced in As shown in The governing equations for the thermal buckling of the thin plate areBy introducing the thermal buckling problem of the plate into the Hamiltonian system, the governing equation is obtained as (4)\u2202Z\u2202y=In this section, the thermal buckling problem of a fully clamped rectangular thin plate is explained in detail. Firstly, the original problem is divided into two fundamental subproblems, each of which is based on a plate with simply supported BCs. Secondly, each subproblem is deduced without any assumptions of the solution forms by the techniques of separating variables and symplectic eigenvector expansion. Finally, the analytical solution of the original problem is obtained through the superposition of subproblems\u2019 solutions.The original problem a is diviFor the first subproblem, the separation of variables is implemented in the symplectic space such thatSubstituting Equation (5) into Equation (4) we obtain the eigenvalue problemThe eigenvalue problem is solved according to the BCs in the The eigenvalues areAccording to the symplectic eigenvector expansion , the staThe final solution of the governing equations expressed in terms of the undetermined coefficients For the second subproblem, the coordinate transformation is used to replace To make the superposition of the two subproblems equivalent to the original problem, the superposition of the subproblems\u2019 solutions must fulfill the actual BCs. For the original problem, the BCs areFor Therefore, we obtain an infinite system of linear equations regarding the coefficients To ensure accurate and reliable benchmarks for subsequent structural designs, the analytical thermal buckling results of semiconductor chips are presented using the SSM and verified with the FEM. All the present numerical and graphical results based on the SSM were achieved through programming in the Mathematica software Version 12.0, and the FEM-based numerical solutions were obtained via the ABAQUS software where thFirstly, the convergence study for the buckling temperatures of the plates with different aspect ratios, Additionally, the first five buckling temperatures of CCCC plates with aspect ratios With the analytical solutions, the effects of the moduli of the Winkler foundation and geometric parameters on the critical buckling temperature This study gives an analytical model to solve the problem of thermal buckling of semiconductor chips on a substrate within the framework of the Hamiltonian system. The SSM is used to transform the original problem into two subproblems, and the separation of variables and symplectic eigenvector expansion are utilized to solve each subproblem. The buckling temperatures and corresponding mode shapes are determined by the requirement of the equivalence between the original problem and the superposition of the two subproblems. After the fast convergence of the method is verified, comprehensive numerical examples are provided, which can serve as benchmarks. With the analytical solutions, the effects of the moduli of the Winkler foundations and geometric parameters are quantitatively studied. Specifically, it is observed that with the increase in the foundation parameter, both the critical buckling temperature and the half wave number of thermal buckling mode shape increase. 
Additionally, the critical buckling temperatures of plates continuously decline with an increase in aspect ratio. From the perspective of ensuring safety, a larger foundation modulus is favored and the shape of the semiconductor chips is recommended to be square to protect semiconductor chips from thermal buckling. While recognizing the inherent merits of the current approach in terms of its versatility and accuracy, it is important to acknowledge that, like any other solution method, it has certain limitations. The SSM is primarily tailored for addressing linear partial differential equation problems, posing limitations when encountering nonlinear partial differential equations. To tackle complex issues involving plastic behavior or substantial deformations, it becomes necessary to complement the method with perturbation techniques to effectively linearize the nonlinear equations and subsequently address them as a combination of linear equations. Although this paper focuses on thermal buckling problems of plates on elastic foundations, it is also necessary to mention that the SSM holds the potential for broader applications in the analysis of bending, vibration, and buckling problems associated with similar structures."} +{"text": "The influence of maternal autoimmunity mediators on child development and brain function has been the subject of several studies. Clinically, most have focused on the association between maternal autoimmunity and the diagnosis of autism in children. On the other hand, data are rarer concerning the rest of the mental disorders and mainly, they are obtained from small cohorts.The aim of this study is to discuss the association between the presence of autoimmune pathology in the mother and the development of mental disorders in the childwe conducted our study through a descriptive study of six clinical cases.80 % the patients treated were male57% had a characterized depressive disorder34% had ADHD9 % had ASDMaternal autoimmune diseases were associated with increased mental disorders in children. These results suggest a possible shared genetic vulnerability between the two conditions or a potential role of maternal immune activation in the expression of neurodevelopmental disorders in children.None Declared"} +{"text": "Gastric metastasis of lung cancer is rare, and the cases of disappearance of gastric metastasis and liver metastasis caused by oxitinib treatment have not been reported.A 47-year-old male patient with no history of diabetes, hypertension or smoking presented with chest discomfort after eating. At the time of consultation, the diagnosis of adenocarcinoma of the right lower lobe of the lung with liver and gastric metastasis was considered by pathological examination of biopsy of the fundus of the stomach near the cardia, pathological examination of CT-guided lung aspiration and pathological examination of liver occupancy aspiration, combined with immunohistochemical results. He was found to have exon 19 deletion in next generation sequencing. 
We performed osimertinib on him (EGFR\u2013TKI) systemic therapy, followed by local radiation therapy to the right lower lung primary lesion.After systemic treatment with osimertinib and local radiotherapy of the primary site, the metastases disappeared and the primary site showed post-radiotherapy changes, and the evaluated efficacy was complete remission.This is the first report to our knowledge of a patient who presented with gastric and hepatic metastases from lung cancer and achieved complete remission with osimertinib and local radiotherapy, with good quality of life, which also provides a basis for future clinical work and is of great significance. The diagnosis and treatment of advanced non-small cell lung cancer (NSCLC) have changed dramatically since the discovery of several oncogenic driver mutations and the advancement of research into corresponding targeted therapies , bone (25%), adrenal glands (22%), kidney (10\u201315%) and pericardium (20%) was admitted to the hospital with apparent cause of chest discomfort after eating and feeling of obstruction to eating. Computed tomography (CT) scans of the chest, abdomen and pelvis were performed after admission, showing a lesion in the lower lobe of the right lung with a high likelihood of lung cancer, an abnormally enhancing foci in the parietal lobe of the liver with metastases not excluded, and a hypodense foci in the lower segment of the right anterior lobe of the liver Fig.\u00a0a\u2013f. In o). Based on a review of the literature, it is considered that although the mechanism of gastric metastasis from lung cancer has not been clearly elucidated, the possibility of hematogenous dissemination through the rich vasculature of the stomach is high considering the anatomical structure of the stomach, but this conjecture has also not been clearly confirmed when patients with EGFR-mutated advanced NSCLC were treated with an EGFR tyrosine kinase inhibitor (TKI) monotherapy compared to standard chemotherapy (Pezzuto et al. Our review of the literature found that systemic treatment followed by local treatment was beneficial in delaying drug resistance in patients (Al-Halabi et al. This is the first report to our knowledge of a patient who presented with gastric and hepatic metastases from lung cancer and achieved complete remission with osimertinib and local radiotherapy, with good quality of life, which also provides a basis for future clinical work and is of great significance. In conclusion, for patients with stage IV lung cancer with oligometastases, the addition of local therapy after systemic therapy may help delay drug resistance and facilitate survival."} +{"text": "In order to efficiently investigate the effect of the mesoscale heterogeneity of a concrete core and the randomness of circular coarse aggregate distribution on the stress wave propagation procedure and the response of PZT sensors in traditional coupling mesoscale finite element models (CMFEMs), firstly, a mesoscale homogenization approach is introduced to establish coupling homogenization finite element models (CHFEMs) with circular coarse aggregates. CHFEMs of rectangular concrete-filled steel tube (RCFST) members include a surface-mounted piezoelectric lead zirconate titanate (PZT) actuator, PZT sensors at different measurement distances, a concrete core with mesoscale homogeneity. 
Secondly, the computation efficiency and accuracy of the proposed CHFEMs and the size effect of representative area elements (RAEs) on the stress wave field simulation results are investigated. The stress wave field simulation results indicate that the size of an RAE limitedly affects the stress wave fields. Thirdly, the responses of PZT sensors at different measurement distances of the CHFEMs under both sinusoidal and modulated signals are studied and compared with those of the corresponding CMFEMs. Finally, the effect of the mesoscale heterogeneity of a concrete core and the randomness of circular coarse aggregate distribution on the responses of PZT sensors in the time domain of the CHFEMs with and without debond defects is further investigated. The results show that the mesoscale heterogeneity of a concrete core and randomness of circular coarse aggregate distribution only have a certain influence on the response of PZT sensors that are close to the PZT actuator. Instead, the interface debond defects dominantly affect the response of each PZT sensor regardless of the measurement distance. This finding supports the feasibility of stress wave-based debond detection for RCFSTs where the concrete core is a heterogeneous material. As typical structural components carrying increasing vertical or axial load, large-scale concrete-filled steel tube (CFST) members have been extensively applied as columns in high-rise buildings or arches and piers in long-span bridges. The non-uniformly distributed temperature in the curing procedure of the concrete core of CFST members after pouring concrete and the unavoidable long-term shrinking and creeping of concrete core may cause interface debond defects between the concrete core and steel tube, weakening the desired confinement effect of the steel tube on the concrete core and negatively affecting the ductility and load-carrying capacity of the CFST members. Therefore, the development of reliable interface debond defect detection methodologies for the CFST members is of great emergency. Most of the traditional non-destructive testing (NDT) techniques, which have been successfully applied in defect detection for reinforced concrete (RC) or steel structures, including infrared thermal imaging ,2, electRecently, defect detection for concrete materials, composites, RC and steel structures based on stress wave measurement using piezoelectric lead zirconate titanate (PZT) actuators and sensors have attracted much attention ,11,12,13One of the most potential defects in CFST members is the interface debond defect between the concrete core and steel tube. In order to accurately detect interface debond defects in CFST members, Xu et al. ,22 develHowever, one of the main concerning shortages in the above-mentioned numerical analysis on stress wave travel in CFST members is that the concrete core was considered as a homogeneous material with uniform material parameters. Considering the fact that concrete is a typical heterogeneous material, composed of mortar, coarse and fine aggregates and an interfacial transition zone (ITZ) between aggregates and the mortar, etc., it has been a common concern if the mesoscale heterogeneity and randomness of aggregate distribution in the concrete core dominantly affect stress wave travel within CFSTs and PZT sensor responses at different measurement distances when compared with interface debond defects in CFSTs . 
With thThe mesoscale homogenization method is computationally efficient for the behavior simulation of mesoscale concrete materials and structures, where the mesoscale concrete is modeled with a number of representative area elements (RAEs) two-dimensionally (in 2D) or representative volume elements (RVEs) three-dimensionally (in 3D). The concrete material of each RVE or RAE is homogenized with uniform mechanical properties ,33,34. WIn the study by Wang et al. , the effIn this paper, multi-physics coupling mesoscale finite element models (CMFEMs) composed of a surface-mounted PZT actuator, a surface-mounted or embedded PZT sensor, and a 2D mesoscale RCFST specimen with randomly distributed circular aggregates are firstly established. After that, the corresponding coupling homogenization finite element models (CHFEMs) with different RAE dimensions are established to study the effect of RAE dimensions on both stress wave fields and the response of PZT sensors at different measurement distances. For simplicity, the stress wave fields in mesoscale finite element models (MFEMs) and the corresponding homogenization finite element models (HFEMs) with and without debond defects are determined considering different RAE dimensions. The effects of RAE dimensions on the simulation results of stress wave fields are analyzed comprehensively. Therefore, the PZT sensor responses with different measurement distances in CMFEMs and the corresponding CHFEMs with and without debond defects under various excitation signals are also determined considering different RAE dimensions. The results show that the mesoscale heterogeneity of a concrete core and the randomness of circular aggregate distribution only have a certain influence on the response of PZT sensors that are close to the surface-mounted PZT actuator. Compared with the mesoscale heterogeneity of a concrete core in the form of the random distribution of circular aggregates, the interface debond defect dominantly affects the stress wave fields and the response of PZT sensors at various measurement distances.In recent years, the effect of the morphology of coarse aggregates on the properties of concrete materials and structures has been an active topic . In thisThe piezoelectric equation for the employed PZT patches can be found in the previous studies ,37,38,39In addition, the equation of motion for the coupled RCFST\u2013PZT model is described in the form of vectors and generalized matrices ,35:(2)M0In this study, the plane dimensions of the concrete core of the 2D RCFST member are 400 mm by 400 mm and the thickness of the concrete core is 10 mm. The concrete core is surrounded by a plane steel tube with a thickness of 5 mm. The employed PZT patches acting as either sensors or actuators are of a length of 10 mm and a thickness of 0.3 mm. The polarization direction of each PZT patch is its thickness direction. In order to investigate the influence of different measurement distances between the PZT actuator and the PZT sensors, a number of mesoscale RCFST\u2013PZT coupling models with different PZT sensor locations are established with the RAM approach ,35.The size and quantity of circular aggregates in a specific RCFST member are specified according to an ideal gradient curve shown as follows .(3)P with the Walraven method.i-th and j-th circular aggregates and In a Cartesian coordinate system, the coordinates of the center of a circle are randomly generated within the RCFST member using the Monte Carlo method. 
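A minimal sketch of this take-and-place generation is given below; it assumes a hypothetical 400 mm by 400 mm domain, an illustrative descending-radius list in place of the Walraven-based gradation, and a simple minimum centre-to-centre distance check standing in for Equation (5), whose non-overlap criterion is elaborated in the next paragraph.

```python
import math
import random

def place_circular_aggregates(width=400.0, height=400.0,
                              radii=(20.0,) * 6 + (12.0,) * 20 + (6.0,) * 60,
                              gap_factor=1.05, max_tries=20000, seed=1):
    """Take-and-place Monte Carlo generation of non-overlapping circular aggregates.

    Larger aggregates are placed first; a new circle is accepted only if the
    distance between centres exceeds gap_factor * (r_i + r_j) for every circle
    already placed (a simple stand-in for the paper's Equation (5)).
    """
    rng = random.Random(seed)
    placed = []  # list of (x, y, r)
    for r in sorted(radii, reverse=True):          # big aggregates first
        for _ in range(max_tries):
            x = rng.uniform(r, width - r)          # keep the circle inside the member
            y = rng.uniform(r, height - r)
            ok = all(math.hypot(x - xj, y - yj) > gap_factor * (r + rj)
                     for xj, yj, rj in placed)
            if ok:
                placed.append((x, y, r))
                break
        else:
            raise RuntimeError(f"could not place aggregate of radius {r}")
    return placed

if __name__ == "__main__":
    aggregates = place_circular_aggregates()
    area = sum(math.pi * r * r for _, _, r in aggregates)
    print(f"{len(aggregates)} aggregates placed, area fraction {area / (400.0 * 400.0):.2%}")
```

Placing the larger aggregates first, as the text recommends, keeps the acceptance rate high as the free area shrinks.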
In order to avoid circular aggregate overlap and contact, the distance between any two circular aggregates must be greater than a certain value. The overlap of two circular aggregates is judged based on the distance between their centers according to Equation (5). The aggregate with a large radius should be put in first because the effective delivery area after the successful delivery of an aggregate is gradually reduced.Here, three CMFEMs with different randomly distributed circular coarse aggregates mimicking the mesoscale heterogeneity and randomness of a concrete core, i.e., CMFEM 1, 2 and 3, are established as shown in As a further investigation of the previous study by Wang et al. , CHFEMs In order to study the computational accuracy of the CHFEMs, CHFEMs with different RAE dimensions are established for each CMFEM. The material properties including density, Poisson\u2019s ratio and elastic modulus of each RAE with different dimensions can be determined according to the method proposed by Wang et al. , where eFor the debond defect detection of RCFST members using stress wave fields and PZT sensor response measurements, high-frequency signals are commonly used as inputs on a PZT actuator mounted on the surface of the steel tube of an RCFST. The high-frequency excitation signal leads to a short stress wave wavelength, where fine meshing and short integration time steps are required. Here, the typical 2D RAE dimensions of 20 mm by 20 mm, 25 mm by 25 mm and 40 mm by 40 mm for each CMFEM model are considered to distinguish the size effect of RAEs on the stress wave fields and PZT sensors measurement simulation results. Since the advantage of the computational efficiency of CHFEM has been demonstrated in the recent research by Wang et al. , a compaFor the mesoscale simulation of the stress wave field within different mesoscale RCFSTs, it is convenient to exclude the direct and inverse piezoelectric effects of PZT actuators and sensors and the coupling effect between them and the RCFST member. In this study, the stress wave propagation procedure within different mesoscale finite element models (MFEMs) of RCFSTs and the corresponding homogenization finite element models (HFEMs) employing different RAE sizes under the excitation of a pulse force are investigated for comparison.The applied pulse force signal for stress wave fields and modulated sinusoidal excitation signal for the PZT sensor measurement simulation is the same as that in . The modThe measurement results of different PZT sensors under both sinusoidal and modulated signals are simulated using both the CMFEMs and the corresponding CHFEMs, where the circular coarse aggregates are randomly distributed and the concrete core is heterogeneous at the mesoscale.In the study by Wang et al. , the infWithout a loss of generality, the stress wave fields of the healthy MFEM 1 without interface debond defects and the corresponding HFEMs with different RAE dimensions at different time instants are investigated and compared at first. 
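Returning briefly to the RAE property assignment mentioned above: the paper follows the homogenization procedure of Wang et al., which is not reproduced here. Purely as an illustration of how per-RAE density and stiffness could be estimated from the local aggregate area fraction, the sketch below uses Monte Carlo point sampling and a generic Voigt-Reuss-Hill average; all phase properties are hypothetical placeholders.

```python
import random

def rae_effective_properties(aggregates, width=400.0, height=400.0, rae_size=40.0,
                             E_mortar=25e3, E_agg=60e3,          # MPa, hypothetical values
                             rho_mortar=2.2e-9, rho_agg=2.7e-9,  # tonne/mm^3, hypothetical values
                             samples=2000, seed=2):
    """Estimate homogenized properties of each square RAE from the local aggregate fraction.

    The area fraction is estimated by Monte Carlo point sampling; the modulus is a
    Voigt-Reuss-Hill average (a generic micromechanics bound, not the homogenization
    procedure of Wang et al. used in the paper).
    """
    rng = random.Random(seed)
    nx, ny = int(width // rae_size), int(height // rae_size)
    props = {}
    for i in range(nx):
        for j in range(ny):
            x0, y0 = i * rae_size, j * rae_size
            hits = 0
            for _ in range(samples):
                x, y = x0 + rng.random() * rae_size, y0 + rng.random() * rae_size
                if any((x - xc) ** 2 + (y - yc) ** 2 <= r * r for xc, yc, r in aggregates):
                    hits += 1
            f = hits / samples                                  # aggregate area fraction
            E_voigt = f * E_agg + (1 - f) * E_mortar            # upper bound
            E_reuss = 1.0 / (f / E_agg + (1 - f) / E_mortar)    # lower bound
            props[(i, j)] = {"fraction": f,
                             "E": 0.5 * (E_voigt + E_reuss),    # Voigt-Reuss-Hill estimate
                             "rho": f * rho_agg + (1 - f) * rho_mortar}
    return props

if __name__ == "__main__":
    demo_aggregates = [(100.0, 120.0, 20.0), (260.0, 300.0, 12.0), (330.0, 80.0, 6.0)]
    print(rae_effective_properties(demo_aggregates)[(2, 3)])
```

Chained with the placement sketch above, `rae_effective_properties(place_circular_aggregates())` yields one homogenized property set per 40 mm RAE.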
The three side dimensions of 20 mm, 25 mm and 40 mm for square RAEs are considered.In In addition to a comparison of the element numbers, DOFs and the computation time for stress wave propagation simulation between the healthy MFEM 1 and its corresponding HFEMs with different RAE dimensions are shown in In order to further investigate the effect of different RAE dimensions on the stress wave field simulation results and to compare it with that of the interface debond defect, the stress wave fields of the MFEM 1 with a debond defect and the corresponding HFEMs with different RAE dimensions at different time instants are simulated. As shown in The stress wave fields of the MFEM 1 with an interface debond defect and its corresponding HFEMs with different RAE dimensions are shown and compared in When comparing the stress wave fields of the healthy RCFSTs at each identical time instant shown in In practice, for the debond detection of RCFST members, it is difficult to get the stress wave fields at different time instants. PZT sensors are usually used to measure the stress wave at certain locations and the measurements are used to detect the debond defect. Therefore, a further numerical study on the response of PZT sensors in CMFEMs and the CHFEMs at different measurement distances is carried out to distinguish the influences of the mesoscale heterogeneity of concrete on PZT sensor measurement and those of debond defects in the following chapter.The effects of both themesoscale heterogeneity of concrete cores in the form of randomly distributed circular coarse aggregates and debond defects on the output of PZT sensors at different measurement distances from the PZT actuator are investigated. Multi-physics field simulation on the measurement of PZT sensors of three CMFEMs and the corresponding CHFEMs with different RAE dimensions under both sinusoidal excitation and modulated excitation is performed. The frequency of the sinusoidal input signal is 20 kHz and the frequency of the modulated excitation signal described in Equation (3) is 20 kHz. Both input signals are of an amplitude of 10 V. The distances between the PZT sensor and actuator are assigned to be 80 mm, 160 mm, 240 mm, 320 mm and 410 mm. The distance of 410 mm means that the PZT sensor is installed on the opposite side surface of the RCFST, representing the surface-mounted PZT actuator and sensor installation strategy. Considering the similarity of the stress wave fields of the HFEMs with different RAE dimensions shown in In order to investigate the effect of interface debond defect on PZT sensor measurements, a numerical simulation on the PZT response of the three CMFEMs with identical debond defects, CMFEM 1D, CMFEM 2D and CMFEM 3D, is also carried out. 
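The excitation signals and the wavelet-packet energy metric used below for the modulated-signal responses can be sketched as follows. A five-cycle Hanning-windowed tone burst is a common realization of a 20 kHz modulated excitation but is an assumption here (the paper's Equation (3) is not reproduced), and the choice of PyWavelets with a 'db4' wavelet and three decomposition levels is likewise illustrative rather than the paper's setting.

```python
import numpy as np
import pywt  # PyWavelets; wavelet choice and decomposition depth are illustrative

def tone_burst(freq=20e3, amplitude=10.0, cycles=5, fs=1e6):
    """Hanning-windowed sinusoidal tone burst (an assumed form of the modulated excitation)."""
    t = np.arange(0.0, cycles / freq, 1.0 / fs)
    window = 0.5 * (1.0 - np.cos(2.0 * np.pi * freq * t / cycles))
    return t, amplitude * window * np.sin(2.0 * np.pi * freq * t)

def wavelet_packet_energy(signal, wavelet="db4", level=3):
    """Total signal energy summed over the terminal wavelet-packet nodes."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, mode="symmetric", maxlevel=level)
    nodes = wp.get_level(level, order="freq")
    band_energies = np.array([np.sum(np.square(node.data)) for node in nodes])
    return band_energies.sum(), band_energies

if __name__ == "__main__":
    _, excitation = tone_burst()
    total, bands = wavelet_packet_energy(excitation)
    print(f"total wavelet-packet energy: {total:.3e} over {bands.size} frequency bands")
```

Comparing the band-wise or total energies of the sensor outputs from healthy and debonded models is what underlies the energy-based comparison reported later for the modulated excitation.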
The PZT sensor responses of the CMFEMs with and without a debond defect are compared at identical measurement distances. In this section, the time-domain responses of PZT sensors at different measurement distances from the PZT actuator of the three CMFEMs, with and without an interface debond defect, under the modulated sinusoidal excitation signal are investigated. The PZT sensor measurements at an identical measurement distance are similar across the different mesoscale structures of the CMFEMs. To further compare the influence of the debond on the measurement of PZT sensors at different measurement distances, the responses of the PZT sensors of the CHFEMs with an interface debond defect using different RAE dimensions are shown and compared. A further investigation of the influence of the RAE dimensions of the CHFEMs on the PZT sensor responses under the modulated sinusoidal excitation signal is performed. From all the above-mentioned simulation results of both the CMFEMs and the CHFEMs, it is concluded that debond defects are the dominant factor affecting the PZT sensor's response at identical measurement distances. A closer, quantitative look at the influence of the heterogeneity of concrete cores and the randomness of coarse aggregate distribution in RCFSTs on the responses of PZT sensors at different measurement distances is taken in the following section. Further analysis is carried out of the maximum amplitude of the time-domain response of the PZT sensors under a sinusoidal input signal and of the wavelet packet energy of the response of the PZT sensors under a modulated input signal. Here, the effect of different measurement distances on the response of PZT sensors under sinusoidal signals is further elucidated. The average amplitude and variance of the PZT sensor responses at identical measurement distances in the three CMFEMs and the three CHFEMs are compared. The influence of the mesoscale heterogeneity of a concrete core and the randomness of coarse aggregate distribution, and of the debond defect, on the responses of the PZT sensors of the three CMFEMs and the three CHFEMs corresponding to CMFEM 1 under the modulated sinusoidal signal is then further distinguished. A wavelet packet analysis of each sensor response is carried out to determine the wavelet packet energy of the output signals collected by each PZT sensor. Multi-physics and multi-scale CMFEMs composed of a PZT sensor, a PZT actuator and a 2D RCFST cross-section are constructed, and the corresponding CHFEMs with different RAE dimensions are established. The feasibility of using CHFEMs for stress wave field and PZT sensor response simulation and their computational efficiency are illustrated. Then, the difference between the influences of the mesoscale heterogeneity of a concrete core, in the form of the randomness of coarse aggregate distribution in an RCFST, and of the debond defect on the responses of PZT sensors at different measurement distances is elucidated. Based on the analysis of the numerical simulation results, the following conclusions can be made. (1) The feasibility and the computational efficiency of the proposed mesoscale homogenization approach for RCFST stress wave field and PZT sensor response simulation are numerically confirmed. The stress wave fields of the CMFEMs and their corresponding CHFEMs with different RAE dimensions are similar, and the influence of an interface debond defect on the stress wave field is dominant.
The concrete core with mesoscale heterogeneity in the form of random coarse aggregate distribution in RCFSTs has no obvious influence on the stress wave fields of RCFST specimens. (2) By comparing the PZT sensor measurement simulation results of the healthy CMFEMs and those of the corresponding CHFEMs, it is found that the heterogeneity in the form of random coarse aggregate distribution in the concrete core only affects the responses of PZT sensors close to the PZT actuator. The influence of mesoscale heterogeneity on PZT sensor measurements becomes weaker as the measurement distance increases. RAE dimensions have no obvious influence on the PZT responses of the CHFEMs. This finding also holds for CMFEMs and CHFEMs of RCFST members with interface debond defects. (3) From the simulation results of the CMFEMs and the CHFEMs, it is clear that the interface debond defects of RCFSTs have a more obvious effect on the response of each PZT sensor than the heterogeneous and random distribution of the concrete core at the mesoscale in RCFSTs. (4) The findings from this study imply that the interface debond defect detection method using stress wave measurements from a PZT sensor is reasonable in practice even though the concrete core in RCFST specimens is a heterogeneous and random material. The mesoscale heterogeneity and randomness of concrete cores affect PZT sensor responses locally, and the interface debond defect is the dominant factor affecting the PZT sensor measurement regardless of the measurement distance."} +{"text": "Social media is increasingly used by dermatologists to educate the public. TikTok and Instagram were selected for their size and the dearth of published literature, compared to the known presence of misinformation on platforms like Twitter and Facebook. Nondermatologists made up the majority of telangiectasia-related content on TikTok. Many patients use social media platforms for dermatologic information. This study provides a sample of content creators producing telangiectasia-related content. This study reinforces the importance of a social media presence of board-certified dermatologists to comment on and combat inaccuracies by creating educational content and reacting to erroneous information. Further research is necessary to evaluate the scope of misinformation and its deleterious effects."} +{"text": "The pandemic caused by the COVID-19 virus had a significant impact on the mental health of the population, not only by increasing levels of stress and anxiety, but also by affecting the most vulnerable, aggravating the symptoms of mental illness in people already suffering from mental health conditions [1], including people suffering from schizophrenia. The pandemic made the increased need of this particular patient population for various psychotherapeutic and sociotherapeutic interventions even more evident. Art therapy is a form of psychotherapy that integrates the expressive characteristics of art with the explorative character of psychotherapy, using the visual language of art as the main medium of communication and expression.
Art therapy has been used from its beginnings with people suffering from one of the psychotic disorders [2] and it is enlisted today in NICE guidelines as one of the psychological therapies of schizophrenia [3].To understand and to activate the potential of artistic expression in patients suffering from psychotic disorders during the pandemic of virus COVID-19.During the period of lockdown in pandemic of virus COVID-19, a young male patient suffering from schizophrenia was admitted to the Acute ward of the University psychiatric hospital Sveti Ivan in Zagreb. As the patient was keen on visually expressing himself, five individual psychodynamically oriented art therapy sessions were carried out on a weekly basis with professionally trained art therapist during the period of patient\u2019s hospitalization. The patient was offered various art materials allowing him to visually express himself in a free manner and the artistic artefact created during the process served as a catalyst for later therapeutic work.splitting, hidden aggressive potentials and, in the end, the nature of father-son relationship connecting the image of coronavirus causing fear and discomfort with the image of the oppressive father.During the therapeutic process, single image was being gradually made and developed session by session. As new layers of color and form were added to the painting, each session revealed new layers of meaning and symbolism to both patient and therapist. First sessions pertained to the anxiety caused by the experience of pandemic, but as the process moved forward, deeper subject matters were brought to the surface, such as the nature of the therapeutic relationship, patient\u2019s Circumstances caused by the pandemic of virus COVID-19 aggravated the patient\u2019s symptoms and his internal conflicts. The art therapeutic process, with its possibility of projections and its multilayered interpretations, enabled the patient to express the true conflict and disturbing content hiding underneath the anxiety related to the pandemic of coronavirus which the patient was primarily complaining about.None Declared"} +{"text": "Case Reports in Neuropharmacology 2022 represents the diverse range of research and clinical applications in this vast field. From the use of new medications for post-surgical pain management to the recognition of rare drug-induced side effects, the present Research Topic highlights the breadth and complexity of neuropharmacology research and practice. These case reports emphasize the importance of personalized approaches to medical treatment, careful monitoring of potential drug interactions and side effects, and continued research to improve patient outcomes in this field.The Research Topic of The four articles of this Research Topic share a common neuropharmacology background as well as an emphasis on patient outcomes. One of the highlights of this Research Topic is the variety of specific Research Topic covered, while at the same time each article highlights the importance of personalized approaches to medical treatment and the need for careful monitoring of potential side effects and drug interactions.Liu et al. report on the use of dinalbuphine sebacate (DS), a mixed kappa opioid agonist/mu opioid antagonist, as part of a multimodal analgesia (MMA) protocol for morbidly obese patients undergoing laparoscopic sleeve gastrectomy. These patients are at an increased risk of opioid-related side effects, such as post-operative nausea and vomiting and respiratory depression. 
Obstructive sleep apnea, which is prevalent among patients with morbid obesity, further put these patients at risk for respiratory depression in a non-obese patient following thoracoscopic wedge resection of pulmonary nodules. This report adds valuable information to the literature on PUS, specifically regarding its occurrence in non-obese patients and the potential role of propofol anesthesia in its development. PUS has been previously associated with urinary uric acid (UA) disorders, reported in morbidly obese patients undergoing gastric bypass surgery and/or propofol anesthesia in individuals with preexisting UA metabolic disorders isoforms. The article discusses the impact of different medications on CAMK2 activity and associated calcium signaling and suggests personalized treatment regimens based on CAMK2 catalytic activity. The hypothetical treatment framework proposed by the authors is an important step towards clarifying the questions that will guide further research. This is an essential consideration, as dysregulation of calcium signaling can have profound consequences for neuronal development, function, and survival (survival . The repDuan et al. discuss a rare case of severe skin rash and lymphadenopathy associated with the use of lamotrigine and valproic acid in a patient with bipolar disorder type I. The case highlights the potential for severe skin reactions and lymphadenopathy associated with the use of these medications and emphasizes the need for caution during titration and early withdrawal of both when signs of hypersensitivity appear. While the dermatologic toxicity of lamotrigine is known, this report is a reminder of the importance of careful consideration of the unknown variables that can complicate the course of apparently well charted adverse effects risk trajectories.Together, these articles underscore the importance of personalized approaches to medical treatment, particularly in patient populations with specific vulnerabilities or susceptibilities to certain side effects. They also highlight the need for careful monitoring of potential drug interactions and side effects to ensure the best possible patient outcomes. By understanding the neuropharmacology underlying different medical conditions and treatments, we can continue to develop effective and personalized approaches to medical care."} +{"text": "To the Editor,We read with great interest the article by Atalay et al. in which they illustrate the frequent presence of hummingbird signs in their patient group with idiopathic normal pressure hydrocephalus (iNPH) . The resThe authors found the presence of the hummingbird sign in 92.3% of their iNPH subjects. Remarkably, a perfect agreement for the hummingbird signs was met between three observers , increasTaken together, we think that the high prevalence of the hummingbird sign in the study of Atalay et al. may be associated with possible underlying comorbid PSP pathology in some of those patients which could not be excluded via the patient selection method. To clarify this issue, future prospective studies including a large group of patients with a definitive iNPH diagnosis are warranted. The results of these studies may provide crucial contributions regarding the diagnostic process of iNPH. 
Besides, these study results may enlighten the pathophysiological role of the brainstem in iNPH symptomatology, and also its role in the clinical manifestations of neurodegenerative diseases such as PSP."} +{"text": "The journal retracts the 2021 article cited above.Following publication, concerns were raised regarding the contributions of the authors of the article. Our investigation, conducted in accordance with Frontiers policies, confirmed a serious breach of our authorship policies and of publication ethics; the article is therefore retracted.This retraction was approved by the Chief Editors of Frontiers in Chemistry and the Chief Executive Editor of Frontiers. The authors do not agree to this retraction."} +{"text": "Interest in calcium phosphate cements as materials for the restoration and treatment of bone tissue defects is still high. Despite commercialization and use in the clinic, the calcium phosphate cements have great potential for development. Existing approaches to the production of calcium phosphate cements as drugs are analyzed. A description of the pathogenesis of the main diseases of bone tissue and effective common treatment strategies are presented in the review. An analysis of the modern understanding of the complex action of the cement matrix and the additives and drugs distributed in it in relation to the successful treatment of bone defects is given. The mechanisms of biological action of functional substances determine the effectiveness of use in certain clinical cases. An important direction of using calcium phosphate cements as a carrier of functional substances is the volumetric incorporation of anti-inflammatory, antitumor, antiresorptive and osteogenic functional substances. The main functionalization requirement for carrier materials is prolonged elution. Various release factors related to the matrix, functional substances and elution conditions are considered in the work. It is shown that cements are a complex system. Changing one of the many initial parameters in a wide range changes the final characteristics of the matrix and, accordingly, the kinetics. The main approaches to the effective functionalization of calcium phosphate cements are considered in the review. Bone tissue is a part of the human musculoskeletal system, which participates in the transfer of force from one part of the body to another under controlled tension, and protects and fixes internal organs. In addition to performing a mechanical function, bone tissue performs a biological function, as it participates in metabolism ,2,3,4. BBone tissue\u2019s ability to regenerate effectively, maintain mineralization and repair itself after damage depends on its ability to dynamically remodel. However, the regenerative process is limited by the ability to self-repair: osteogenic insufficiency occurs if the critical size of the defect is exceeded, and the defect is filled with fibrous connective tissue.There are many different clinical circumstances under which a significant part of bone or a whole bone is lost. Bone defects can be caused by various reasons. 
They can be associated with various pathogenic conditions and clinical outcomes, including injuries (fractures), infections (osteomyelitis), tumors, osteoporosis, and many other bone diseases .According to statistics, 20 million orthopedic surgeries in the world per year are performed, 70% of which require the use of bone implant material for filling and repairing bone defects .Various osteoplastic materials can be used to fill in bone defects caused by various diseases, or for the purpose of their prevention. They are able to deliver functional substances locally, fill in defects and serve as a material for bone tissue reconstruction.2+ ions during resorption can affect the differentiation of osteogenic cells and the level of inflammatory cytokines.Calcium phosphate cements are similar in composition to the mineral component of bone tissue. They have a high specific surface area and are used in medicine as osteoplastic materials. Blocks and granules made of pre-hardened cement are a promising type of skeleton for the restoration of bone defects. They have an increased rate of resorption compared to matrices obtained by high-temperature processing. Functional substances can be volumetrically incorporated into them at the stage of mixing the components. The kinetics of the release of functional substances may vary. The release of CaThe functionalization of calcium phosphate cements is of great clinical interest for the treatment or prevention of various diseases of bone tissue. Prolonged elution is the main requirement for carrier materials of pharmaceutical substances. Another important requirement is the absence of a mutual negative influence of the functional substance and the matrix on each other\u2019s properties. The release of functional substances from calcium phosphate cements depends on many parameters of the matrix, specific interactions between the functional substance and the matrix, as well as environmental factors.A significant advantage of calcium phosphate cements is the wide range of changes in the properties of matrices that can affect the release of functional substances and the process of bone tissue restoration.2+ ions and phosphatases, and form amorphous calcium phosphate on the surface, followed by the formation of hydroxyapatite crystals from it. Hydroxyapatite is the most important inorganic phase in bone (molecular formula Ca10(PO4)6(OH)2; it contains impurity ions such as CO32\u2212, Cl\u2212, F\u2212, Na+, Mg2+, K+, Zn2+, Fe2+, Cu2+, Sr2+ and Pb2+ ..244].The release curves of functional substances from calcium phosphate matrices most often show bimodal release. In this case, a typical initial release occurs within the first 24 h, followed by a sustained slow release . This coMulti-stage release profiles potentially offer much greater advantages over monotonic drug elution kinetics. The rapid initial release is able to effectively stop the pathological process, while its longer second release phase will gradually support the healing process .The rate of release of functional substances depends on the morphology of the cement stone particles, as it leads to a different degree of adsorption of the substance on the inner surface. Desorption of functional substances from the surface of calcium phosphates with needle morphology of crystals is higher compared to desorption with lamellar morphology .The surface charge of molecules of functional substances increases adsorption with the surface of calcium phosphate cement due to electrostatic attraction. 
Positively charged functional substances bind to the surface of calcium phosphate, since there are many negatively charged phosphate and hydroxide ions on the surface. Negatively charged functional substances bind to the surface of calcium phosphate with a large amount of calcium ions , such asHowever, the presence of cephalexin has been reported to inhibit the growth of hydroxyapatite crystals. This is due to the ability of carboxylic acid molecules to be adsorbed on the surfaces of the initial components and nascent crystals, and to inhibit the growth of the target phase .The values of the surface zeta potential are negative for calcium phosphates, but they differ depending on the arrangement of atoms on two types of crystal planes along the (a) axis and along the (c) axis . Plane , high-performance liquid chromatography (HPLC) and fluorescent polarization immunoassay (FPIA). The electrochemical impedance spectroscopy (EIS) of Pasqual Group positions is a method for determining the amount of a drug without aliquot selection, as required by traditional methods .A brief review of the scientific literature over the past 5 years on calcium phosphate cements as carriers of functional substances for the treatment of bone tissue is presented in Bone tissue repair is often limited by large defects and associated complications . Effective treatment strategies are aimed at the resection of the affected area of bone tissue, followed by treatment. Due to the limited availability of the affected bone for systemic therapy and for the prevention of postoperative complications, a local therapy method is used with the delivery of functional substances during surgery, or using minimally invasive procedures. Local prolonged drug delivery systems in effective therapeutic doses to the site of bone disease can contribute to the effective treatment of various bone diseases with the least side effects.2+ ion is an important homing signal; it brings together various cell types necessary to initiate bone remodeling. Ca2+ ions affect protein growth factors, and additionally stimulate the expression of osteogenic marker genes.Calcium phosphate cements have a chemical composition similar to the inorganic component of bone tissue. They are biocompatible, resorbable, injectable and self-hardening. They can take the form of a bone defect and fit tightly to the bone bed. The CaAt present, the most common approach to using calcium phosphate cements as scaffolds is the bulk incorporation of anti-inflammatory, antitumor, antiresorptive, and osteogenic functional substances.The functionalization of calcium phosphate cements for a specific task requires an understanding of cement systems. Cements are complex systems. Changing one of the many parameters of the system changes the final characteristics of the matrix in a wide range, including the kinetics of elution of functional substances.In this paper, the diseases of bone tissue are described, in the treatment of which calcium phosphate cements can be used as carriers of functional substances. Knowledge of the pathogenesis of diseases and the methods of treatment used\u2014in particular, drug therapy\u2014can help develop the concepts of directed functionalization for various clinical cases. 
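As a purely illustrative aside, the bimodal elution behaviour described above (an initial burst within roughly the first 24 h followed by sustained slow release) is often summarized with a simple two-phase, biexponential model. The sketch below fits such a model to synthetic data; the burst fraction, rate constants and time points are hypothetical and are not taken from any study cited here.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexponential_release(t, burst_fraction, k_burst, k_slow):
    """Cumulative fraction released: fast 'burst' phase plus sustained slow phase."""
    return (burst_fraction * (1.0 - np.exp(-k_burst * t))
            + (1.0 - burst_fraction) * (1.0 - np.exp(-k_slow * t)))

# Hypothetical cumulative-release data (time in hours, fraction of loaded drug)
t_obs = np.array([1, 4, 8, 24, 48, 96, 168, 336], dtype=float)
f_obs = np.array([0.12, 0.25, 0.33, 0.45, 0.52, 0.61, 0.70, 0.82])

params, _ = curve_fit(biexponential_release, t_obs, f_obs,
                      p0=[0.4, 0.1, 0.005], bounds=([0, 0, 0], [1, 5, 1]))
burst_fraction, k_burst, k_slow = params
print(f"burst fraction ~{burst_fraction:.2f}, "
      f"burst rate ~{k_burst:.3f} 1/h, slow rate ~{k_slow:.4f} 1/h")
```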
We tried to isolate and describe in detail the factors that affect the release of functional substances from calcium phosphate cements, as related to the parameters of the cement system and the nature and affinity of functional substances for restoring bone tissue with the matrix.Over the past two years, interesting reviews have been published by various scientific groups, which indicate an undying interest in calcium phosphate cements. A more detailed description of the properties of injectable calcium phosphate cements and the possibilities of their regulation can be found in the Vezenkova review, where cements are considered from the point of view of obtaining an osteoinductive product . The mecThe functionalization of calcium phosphate cement is one of the most important directions for solving the problems of the treatment and restoration of bone tissue.The use of calcium phosphate cements as carriers of anticancer drugs and bisphosphonates is addressed in the reviews of Zogakis et al. and NtepBased on an understanding of the importance of calcium phosphate cements in the process of bone tissue restoration, and the possibility of functionalization by incorporating various drugs and growth factors directly into the volume of the cement matrix, or via encapsulation in polymer carriers, we assume that future research will aim at solving the problems of interaction between the carrier and functional substances, stabilizing (controlling) the release of functional substances from the matrix, and creating ideal frameworks aimed at faster restoration of the integrity of bone tissue."} +{"text": "Pneumonia is the most frequent lower respiratory tract disease and a major cause of morbidity and mortality globally . SeveralThe most common classification of pneumonia is based on the location where pneumonia occur. Community-acquired pneumonia occur at the community and hospital-acquired pneumonia occur at the hospital. Pneumonia among residents of nursing homes or long-term care facilities is categorized as nursing-home-acquired pneumonia. This concept is based on the idea that the causative bacterium of pneumonia is determined to some extent by the location where pneumonia occur and serves as a guideline for the use of antibiotics ,3,4. GuiAspiration pneumonia is the most dominant form of pneumonia in older adults . Many stSerial studies concerning the relationship between forced vital capacity (FVC) on spirometry and death due to pneumonia in older people have been published by the same laboratory in Japan. In the first study, the authors analyzed the data of a large cohort and found that low FVC was associated with a high all-cause mortality rate of community-dwelling older people . In the Taken together, the above four reports indicate a strong relationship between impairment of skeletal muscle and pneumonia or pneumonia-related death of older adults. Mendes and colleagues showed that healthcare-associated pneumonia is the main causes of hospital admission and death among patients with pneumonia, the majority of which were empirically treated, in Portugal . The higThe remaining four papers in this issue highlight the importance of comprehensive functional assessment and interdisciplinary team care, which is beyond the use of antibiotics, in older patients with aspiration pneumonia. Yoshimatsu et al. reported no evidence of the benefit of anaerobic coverage in the antibiotic treatment of aspiration pneumonia . 
They alSince this special issue reveals the importance of skeletal muscle and an interdisciplinary team approach, it will help clinicians make decisions and treatment choices. We appreciate the valuable contributions of all authors. We are also grateful to the reviewers for their professional and constructive comments and to the JCM team for their continuous support with this special issue."} +{"text": "The goal of hemodynamic resuscitation is to optimize the microcirculation of organs to meet their oxygen and metabolic needs. Clinicians are currently blind to what is happening in the microcirculation of organs, which prevents them from achieving an additional degree of individualization of the hemodynamic resuscitation at tissue level. Indeed, clinicians never know whether optimization of the microcirculation and tissue oxygenation is actually achieved after macrovascular hemodynamic optimization. The challenge for the future is to have noninvasive, easy-to-use equipment that allows reliable assessment and immediate quantitative analysis of the microcirculation at the bedside. There are different methods for assessing the microcirculation at the bedside; all have strengths and challenges. The use of automated analysis and the future possibility of introducing artificial intelligence into analysis software could eliminate observer bias and provide guidance on microvascular-targeted treatment options. In addition, to gain caregiver confidence and support for the need to monitor the microcirculation, it is necessary to demonstrate that incorporating microcirculation analysis into the reasoning guiding hemodynamic resuscitation prevents organ dysfunction and improves the outcome of critically ill patients. The core of hemodynamic resuscitation has traditionally focused on blood pressure and cardiac output; however, these measurements imperfectly reflect tissue perfusion. The recent emphasis on clinical signs of tissue perfusion such as capillary refill time and skin mottling is an important step toward a perfusion driven resuscitation. However, these types of skin perfusion assessment techniques are severely limited as these indices assess a relatively large volume of tissue and alterations in other microcirculatory beds may remain hidden using these techniques.In patients in shock of various origins, an important number of studies have consistently demonstrated that persistent microcirculatory alterations are associated with organ dysfunction and mortality. More than 600 papers have highlighted the clinical relevance bedside monitoring of the microcirculation. This level of interest led to the publication in 2018 of guidelines for the assessment of sublingual microcirculation by the European Society of Intensive Care Society Task Force [The future challenge is to transform microcirculation monitoring from an important research tool into an essential bedside monitoring technique used by clinicians to individualize hemodynamic resuscitation based on microvascular parameters. The purpose of this paper is to provide an update on the current state of microcirculatory monitoring in critically ill patients, and to present an approach for guiding therapy. We present a viewpoint on its potential role in the future of hemodynamic monitoring and on how it could influence the hemodynamic management of critically ill patients.The two main determinants of the primary function of the microcirculation for oxygen transport are convection and diffusion to the cells). 
Parameters related to the convective and diffusive capacity of the microcirculation are used to quantify the functional state of the microcirculation. Most hemodynamic strategies used in ICU focus on promoting blood flow and arterial oxygen transport (convection). However, achieving adequate diffusing capacity is also essential for optimal oxygen transport to the tissues, a variable that can only be measured by direct observation of the microcirculation. For example, the diffusive capacity of the microcirculation may be compromised during fluid therapy if increased RBC flow cannot compensate for dilution of RBC mass and if tissue edema induces increased diffusion distances between RBC and tissue cells, making it more difficult for oxygen to reach the latter. Understanding these two main components of oxygen transport to cells is essential to best guide hemodynamic strategies.The analysis of the microcirculation allows clinicians to appreciate the behavior of the different constituents of the blood and their interactions with the endothelium and the glycocalyx. For example, its observation by hand-held vital microscopes (HVM) not only allows detailed quantification of the behavior of red blood cells directly responsible for oxygen transport to tissues, but also allows observation and quantification of the behavior of leukocytes . VisualiThe goal of hemodynamic resuscitation is to meet the oxygen and metabolic needs of the various organs, which can only occur through optimization of the microcirculation Fig.\u00a0. We hopeThis is essential because clinical studies in different states of shock both in adult and pediatric patients have consistently shown that the persistence of microcirculatory alterations with lost of coherence between macrocirculation and microcirculation is predictive of organ failure and unfavorable outcomes in a more sensitive and specific manner than systemic hemodynamic and biological parameters \u201316. PrevAn other illustrative example of the potential interest in microcirculation assessment is the evaluation of the response to fluids. While a lot of emphasis has historically focused on the SV response to fluid infusion, the microcirculation represents a perhaps more element of the response in terms of tissue perfusion \u201322. A stIt is therefore necessary to integrate the analysis of the microcirculation in the reasoning guiding hemodynamic resuscitation to prevent organ dysfunction and improve the outcome of critically ill patients. Hemodynamic individualization based solely on macrocirculatory parameters is an incomplete view of hemodynamic optimization and the microcirculation must also be taken into account.There are a number of different methods to assess the microcirculation at the patient's bedside; all have strengths and future challenges Table . It is i2 delivery and local metabolism. Different adaptations of optics can allow measurements of hemoglobin levels and O2 saturation in microvascular vessels. In addition, it is also feasible to assess local redox state of the mitochondria through the analysis of ultraviolet absorbance [The recent introduction and validation of automated microcirculatory analysis software allowing point-of-care application of sublingual microcirculation-guided therapy is a significant step toward the introduction of routine use of HVM technology at the bedside , 24. It sorbance . 
In futuA further expansion of microcirculatory monitoring will occur when microcirculatory information is obtained from the microcirculation of organs themselves, as opposed to using the sublingual area as a proxy. Indeed, prior studies have assessed skin, conjunctiva, nail fold, rectal, stoma and vaginal microcirculations in various clinical conditions, although sublingual microcirculation is by far the most studied and clinically relevant microcirculatory bed to date. Even though several experimental studies have shown a coherence between sublingual and other organ surfaces, such as the intestines and kidney microcirculation , 27, it An interesting technique for monitoring organ microcirculation at the patient\u2019s bed is contrast-enhanced ultrasound (CEUS) which uses gas microbubbles surrounded by a stabilizing envelope (phospholipid or protein envelope) Table . Differe2, CRT and peripheral perfusion index (PPI) [Several techniques for the evaluation of peripheral perfusion are proposed Table . Alteratex (PPI) . These p2 extraction and microvascular reactivity. A slower recovery of StO2 during the reperfusion phase is an independent predictor of mortality in patients with sepsis [Near-infrared spectroscopy (NIRS) Table has beenh sepsis .Magnetic resonance imaging (MRI) Table allows fThe monitoring of brain microcirculation is a challenge due to its inaccessibility. However, the retina is considered a window to the brain and retiIn order to establish microcirculation analysis as a standard of care, it is necessary to demonstrate that the integration of microcirculation analysis has an impact on the prevention and treatment of organ dysfunction Fig.\u00a0. It is aVery few studies have used microcirculation-targeted resuscitation. The ANDROMEDA-SHOCK trial , 50 suggWe can speculate on how the future diagnostic platform of the critically ill patient could be realized as technology develops, and more and more insight is gained into the pathogenesis and cellular origin of disease. Ultimately such a holistic diagnostic platform aimed at understanding the mechanism of disease and guiding therapy would have to encompass the total hierarchy of the cardiovascular system from the macrocirculation to the microcirculation including both cellular and subcellular components Fig.\u00a0. The varParallel to such hardware developments will be the development of an innovative mathematical resource that continuously develops physiological models of the virtual patient and has the ability to identify changes in the phenotype of organ and cellular systems. It is expected that AI will play a central role in the translation of the evaluation of the clinical condition of the virtual patient and to predict response to clinical interventions. This is achievable, for example, for the case of the microcirculation, by integrating AI methodologies with algorithmic analysis of microcirculatory images able to differentially diagnose specific alterations in the phenotype of the microcirculation known to be corrected by specific therapeutic interventions . AI willThe classical therapeutical interventions have variable effects. Fluids may improve microcirculation in the early stages of shock, but this improvement may not occur in the later stages . The optVasoactive agents also have variable effects on microcirculation, improving microcirculation in some patients but failing in others. 
It should always be kept in mind that the effect of vasopressors is dependent on blood volume, the functionality of vasopressor receptors and the intensity of microvascular alterations.The development of new therapeutics is warranted to restore microcirculation when it is compromised.l-arginine [4) (cofactor of nitric oxide synthases) [Manipulation of nitric oxide (NO) pathways was one of the first routes explored given its crucial role in controlling microvascular perfusion . In expearginine and tetrnthases) . But stunthases) , 60.Alternatively, manipulating the arachidonic pathway is an attractive future direction and trials evaluating the impact of vasodilatory prostaglandins are underway. Legrand et al. , 55 are As during inflammatory and infectious states, cellular interactions within the microcirculation evolve toward a proadhesive and procoagulant phenotype, attempts to minimize cell aggregation should be tested. Multiple interventions were tested in experimental conditions, but few reached the clinical arena. Among these, ascorbate and several anticoagulants were particularly promising. Prior preclinical studies have repeatedly demonstrated that ascorbate improves microvascular perfusion and decrease white blood cells and platelets adhesion in experimental models of sepsis \u201359. In sBecause of its antioxidant properties, albumin is also an interesting therapeutic option to limit glycocalyx alterations and preserve endothelial function in intensive care patients . HoweverFuture hemodynamic strategies in ICU patients should integrate macrocirculatory and microcirculatory optimization in an attempt to give clinicians the most complete picture of their patient's physiology and thus provide a clear path to treatment Fig.\u00a0. In the Hemodynamic management requires individualization of macrovascular and microvascular parameters. Clinicians are currently blind to what is happening in the microcirculation of organs, which prevents them from individualizing resuscitation by targeting the microcirculation. Limiting hemodynamic resuscitation to an optimization of the systemic hemodynamics without knowledge of the microcirculation exposes to persisting alterations in tissue perfusion or excessive therapeutic interventions. The challenge for the future is to have noninvasive, easy-to-use equipment that allows for reliable assessment and immediate quantitative analysis of the microcirculation at the patient's bedside. The use of automatic analysis and the future possibility of introducing artificial intelligence into the analysis software could make it possible to eliminate observer bias and provide orientation of therapeutic options coupled with an analysis of the microvascular responses to the applied interventions. In addition, to gain caregiver confidence and support for the need to monitor the microcirculation, it is necessary to demonstrate that incorporating microcirculation analysis into the reasoning guiding hemodynamic resuscitation prevents organ dysfunction and improves the outcome of resuscitation patients."} +{"text": "Numerous clinical and epidemiological studies show that the rate of comorbidity of anxiety disorders is high in bipolar patients compared to the general population. This is associated with a poorer prognosis, poorer functioning and higher suicidal risk. 
Anxiety comorbidity should therefore be carefully investigated.Our main objectives are to explore the therapeutic complexity of anxiety disorders in patients with bipolar disorder To investigate the existence of psycho-pathological links and vulnerabilities between bipolar disorder and anxiety disorders.through a clinical vignette and a review of the existing literature on the comorbidity of anxiety disorders and bipolar disorders, and the resulting therapeutic issuesAnxiety comorbidity is quite common in the bipolar population. In the American National Comorbidity Survey (NCS), lifetime comorbidity is close to 90%. Two recent French clinical studies show the existence of at least one anxiety disorder in approximately 25% of bipolar subjects (24% and 27.2%), which will have an impact on the course of the bipolar disorder, with a particular increase in the risk of suicide, hence the importance of adequate treatment. This treatment faces two obstacles: the risk of manic episodes under antidepressants and the risk of dependence on benzodiazepines. Emphasis is also placed on non-drug approaches, including cognitive-behavioural and psycho-educational therapies.Anxiety comorbidity is not without consequence on the evolution of bipolar disorder. Its particularly high prevalence means that it cannot be neglected or ignored in current practice.None Declared"} +{"text": "This review article discusses the epigenetic regulation of quiescent stem cells. Quiescent stem cells are a rare population of stem cells that remain in a state of cell cycle arrest until activated to proliferate and differentiate. The molecular signature of quiescent stem cells is characterized by unique epigenetic modifications, including histone modifications and deoxyribonucleic acid (DNA) methylation. These modifications play critical roles in regulating stem cell behavior, including maintenance of quiescence, proliferation, and differentiation. The article specifically focuses on the role of histone modifications and DNA methylation in quiescent stem cells, and how these modifications can be dynamically regulated by environmental cues. The future perspectives of quiescent stem cell research are also discussed, including their potential for tissue repair and regeneration, their role in aging and age-related diseases, and their implications for cancer research. Overall, this review provides a comprehensive overview of the epigenetic regulation of quiescent stem cells and highlights the potential of this research for the development of new therapies in regenerative medicine, aging research, and cancer biology. Stem cells are cells that have the ability to self-renew and differentiate into different cell types. They are critical for the maintenance and repair of various tissues in the body. One unique characteristic of stem cells is their ability to enter a state of quiescence, or a dormant state, in which they stop actively dividing and remain in a state of inactivity until activated to differentiate.Quiescence is an important state for stem cells, which allows them to maintain their stem cell properties and respond to changing physiological demands. Further research into the regulation of quiescence in stem cells will be essential for the development of new therapies for a range of diseases and conditions .Quiescent stem cells are a unique and important population of cells that play a critical role in tissue maintenance and regeneration. 
These cells have been found to have a distinct molecular signature that sets them apart from other stem cell populations.Overall, the molecular signature of quiescent stem cells is a complex and highly regulated network of gene expression, epigenetic modifications, and metabolic processes that work together to regulate stem cell behavior. Understanding the molecular mechanisms that govern quiescence in stem cells is critical for the development of new therapies for a range of diseases and conditions .Epigenetic modifications play a critical role in regulating the behavior of quiescent stem cells. These modifications refer to changes in gene expression that occur without changes to the underlying DNA sequence, and they can be influenced by a variety of environmental factors.Epigenetic modifications play a critical role in regulating the behavior of quiescent stem cells. Understanding the mechanisms that govern these modifications will be essential for the development of new therapies that target quiescent stem cells for tissue repair and regeneration .Histone modifications are critical epigenetic regulators of gene expression in quiescent stem cells. In quiescent cells, histone modifications help to maintain stem cell properties and regulate the transition from quiescence to proliferation.Histone modifications are important regulators of gene expression in quiescent stem cells. Understanding the specific mechanisms by which these modifications regulate stem cell behavior will be critical for the development of new therapies for tissue repair and regeneration.Chromatin remodeling is an important process in the regulation of gene expression in quiescent stem cells. Chromatin remodeling involves the physical reorganization of chromatin structure, which can influence the accessibility of DNA to transcription factors and other regulatory proteins.Chromatin remodeling is an important mechanism for regulating gene expression in quiescent stem cells. Understanding the specific mechanisms by which chromatin remodeling complexes regulate stem cell behavior will be critical for the development of new therapies for tissue repair and regeneration.DNA methylation is a critical epigenetic modification that regulates gene expression in quiescent stem cells. DNA methylation involves the addition of a methyl group to the cytosine residues of DNA, usually in the context of a CpG dinucleotide. This modification can influence gene expression by altering the accessibility of DNA to transcription factors and other regulatory proteins.Overall, DNA methylation is an important regulator of gene expression in quiescent stem cells. Understanding the specific mechanisms by which DNA methylation regulates stem cell behavior will be critical for the development of new therapies for tissue repair and regeneration.The study of quiescent stem cells is a rapidly developing field with exciting future perspectives. One area of research that holds great promise is the development of new therapies for tissue repair and regeneration. Quiescent stem cells have the potential to differentiate into a variety of cell types, making them ideal candidates for cell-based therapies to treat a wide range of diseases and injuries. One challenge in the field of quiescent stem cells is the development of methods to activate quiescent stem cells and promote their proliferation and differentiation. 
A better understanding of the epigenetic modifications and signaling pathways that regulate quiescence and activation will be critical for developing such methods. Another area of research with future perspectives is the role of quiescent stem cells in aging and age-related diseases. Aging is associated with a decline in stem cell function and a decrease in the number of quiescent stem cells. Understanding the mechanisms that regulate stem cell quiescence and how these mechanisms are disrupted with age may lead to new therapies to improve stem cell function in aging populations. The study of quiescent stem cells also has implications for cancer research. Quiescent cancer stem cells have been implicated in tumor recurrence and resistance to chemotherapy. Understanding the mechanisms that regulate quiescence in cancer stem cells may lead to the development of new therapies to target these cells and improve cancer treatment outcomes.The study of quiescent stem cells has broad implications for regenerative medicine, aging research, and cancer biology. Continued research in this field will undoubtedly lead to new insights and developments that have the potential to transform medicine and improve human health ."} +{"text": "In patients with schizophrenia, numerous studies have shown a relationship between negative symptoms and cognitive deficits and a similar impact of these domains on different clinical features such as onset, course and prognostic relevance. However, this relationship is still today subject of scientific debate.The aim of the present study is to conduct a systematic review of the literature on data concerning the relationships between neurocognition and social cognition deficits and the two different domains of negative symptoms \u2013 avolition-apathy and expressive deficit.A systematic review of the literature was carried out following PRISMA guidelines and examining articles in English published in the last fifteen years (2007 - March 2022) using three different databases . The included studies involved subjects with one of the following diagnoses: high risk of psychosis, first episode of psychosis, or chronic schizophrenia. Other inclusion criteria of the reviewed studies included: evaluation of at least one neurocognitive or social cognition domain and at least one negative symptom using standardized scales; analysis of the relationship between at least one neurocognitive or social cognition domain and a negative symptom.Databases search produced 8497 results. After title and abstract screening, 395 articles were selected, of which 103 met inclusion criteria. The analysis of retrieved data is still ongoing. Preliminary evidence highlighted: a correlation between social cognition and negative symptoms, in particular with the \u201cexpressive deficit\u201d domain; a positive correlation between the severity of negative symptoms and that of neurocognitive deficits (in particular with the \u201cprocessing speed\u201d domain); an association of verbal working memory deficits with alogia and anhedonia.The study of the relationship between negative symptoms, neurocognitive deficits and social cognition could contribute to the understanding of the aetiology of psychotic disorders and therefore to the identification of therapies for the improvement of overall functioning and quality of life. 
The studies analysed so far show some interesting associations between cognition and negative symptoms, but the presence of often inconsistent results, partially attributable to the different conceptualizations of the various domains of negative symptoms adopted, hinders the generalization of the results.None Declared"} +{"text": "Social desirability bias is often speculated to influence survey responses but seldom studied in healthcare. The objective was to explore whether social desirability scores (SDS) or the presence of interview observers is associated with inaccurate recall and overestimation of antenatal care (ANC) services.Longitudinal validation study comparing recalled receipt of ANC services and nutrition components of ANC against direct observations of care. An adapted short form Marlowe-Crowne questionnaire was used to generate an SDS, and the presence of interview observers was treated as a separate exposure. We assessed accuracy and overestimation of recalled receipt of ANC services against observed receipt using log-binomial regression, adjusting for age, education, first-pregnancy and socioeconomic status.Rural Southern Nepal with recruitment from five government health posts.401 pregnant women.Social desirability scores did not significantly predict accuracy or overestimation of most types of ANC care except counselling on nausea. Higher SDS was associated with more accurate recall ) and less overestimation ). The presence of mothers-in-law or husbands during interviews was associated with greater overestimation of the number of ANC visits received by more than three visits ) and ), respectively. Those interviewed with friends present tended to overestimate the receipt of counselling on nausea, avoiding alcohol and not smoking.The presence of observers can lead to overestimation of the receipt of ANC care and support the conduct of interviews in private settings despite challenges of doing so in village contexts. Findings that the SDS did not predict the accuracy of most types of ANC care might reflect a reality that such questions may not be sensitive from a social-norms perspective. Additional local adaptation of SDS is recommended. The effect of social desirability bias and the presence of interview observers on response accuracy was evaluated against the gold standard of observed receipt of antenatal care services.The tool used to assess social desirability was adapted for comprehension for the local population in rural Nepal but has not been validated for that context.Interviewers noted the presence of others at any point during the interview, and not specifically during specific portions of the interview, likely attenuating estimates of the relationship between the presence of others and validity of reported services receipt.The WHO recommends a number of nutrition interventions in pregnancy, including iron-folic acid (IFA) supplements, counselling on healthy eating and physical activity, and in certain contexts calcium, micronutrient supplements and balanced energy and protein supplements.2Many survey-based coverage indicators come from self-reported data, and a growing number of studies have explored the validity of responses to questions about the receipt of nutrition interventions delivered through antenatal care (ANC).5Social desirability bias (SDB) is a common type of response bias that reflects the tendency of respondents to reply in a manner that may be viewed positively by social peers or that is consistent with social norms and expectations. 
SDB is generally thought to be greater for questions that are perceived as sensitive or subject to judgement such as those related to sexual practices or family planning.8Survey-based scales to measure social desirability were developed in the field of social psychology in the 1960s beginning with the 33-question Marlowe-Crowne scale.14SDB has been frequently described as an area of potential concern in nutrition-related studies collecting data about diet, physical activity and self-reported weight, but few studies have directly quantified the strength of association between social desirability scores and these domains.Adapt and test a modified short-form version of the Marlow-Crowne SDS in rural Nepal.Explore whether social desirability or the presence of other adults during interviews helps to explain previous findings related to overestimation of the receipt of services.Explore whether social desirability or the presence of other adults during interviews is associated with inaccurate recall of ANC and associated services.Tarai) of Nepal located in Province 2, the province with the lowest rates of ANC coverage in Nepal. According to the 2016 Demographic and Health Survey, only 36% of the women in Province 2 received all four recommended ANC visits, far below the national average of 59%.20This study took place in Sarlahi District, an area in the southern plains to women who complete all four visits.This study was a part of a larger effort to validate coverage indicators for various nutrition services through ANC, described in detail elsewhere.10.1136/bmjopen-2022-071511.supp1Supplementary dataQuestions from a short-form Marlowe-Crowne social desirability index were traCorrelations between different items of the social desirability index were explored after reverse coding negatively keyed questions, and most were found to be positively associated with one another . Tests oEnrolment criteria for the parent study included currently pregnant, married women aged 15 years or older, who (a) resided in the study area, (b) visited one of the five health posts for their first ANC visit and (c) had the intention to return to these study sites for subsequent ANC visits. All interview questions were translated and back-translated by Nepali/English speakers familiar with the study context. These questions were pretested, and minor changes were made to help facilitate comprehension in the local context. Both Maithili-speaking and Nepali-speaking interviewers conducted the interviews following training exercises designed to standardise the approach to translation. Interviewers were trained to replicate usual survey conditions. This included asking respondents to answer questions themselves even if a family member had tried to answer it for her and repeating or rephrasing questions in a simpler way if respondents were confused or responded with \u2018I don\u2019t know\u2019.The study started in December 2018 and reached full enrolment by November 2019. Direct observations of ANC visits were nearly completed by March 2020 when a shutdown for the COVID-19 pandemic interrupted all non-emergency health services in Nepal. Data collection for the postpartum 6-month visit was started in September 2019, but COVID-19-related shutdowns caused delays. The mean number of days that passed from delivery to the postpartum data collection visit was 252 with an SD of 75 days. 
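Returning briefly to the social desirability score itself: as noted above, negatively keyed items are reverse-coded before the item responses are combined into a total score. The sketch below illustrates only that scoring step; the item keys and the yes/no coding are hypothetical, since the adapted questionnaire is not reproduced in this text.

```python
# Minimal sketch of scoring a short-form social desirability scale.
# Item keys and responses are hypothetical examples.

POSITIVE_ITEMS = ["item1", "item3", "item5"]   # "yes" is the socially desirable answer
NEGATIVE_ITEMS = ["item2", "item4", "item6"]   # negatively keyed: "no" is desirable

def social_desirability_score(responses):
    """Sum of socially desirable answers after reverse-coding negatively keyed items."""
    score = 0
    for item in POSITIVE_ITEMS:
        score += 1 if responses[item] == "yes" else 0
    for item in NEGATIVE_ITEMS:                 # reverse coding
        score += 1 if responses[item] == "no" else 0
    return score

example = {"item1": "yes", "item2": "no", "item3": "no",
           "item4": "yes", "item5": "yes", "item6": "no"}
print(social_desirability_score(example))      # -> 4
```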
Precautions taken to prevent the spread of COVID-19 included masking by interviewers and respondents, handwashing with soap provided to interviewees, holding interviews outdoors when weather permitted and maintaining a distance of more than 6 feet during data collection.We examined whether SDS or the presence of other adults during interviews was associated with the accuracy or overestimation of recalled receipt of specific ANC services using log-binomial bivariate and multivariable regressions to directly estimate risk ratios or log-Poisson models in the case of model non-convergence.This research conformed with the principles embodied in the Declaration of Helsinki. The institutional review boards at Johns Hopkins Bloomberg School of Public Health and the Nepal Health Research Council both approved this study including permission to collect and resume data collection during the COVID-19 epidemic. Signed consent was obtained from all participants. Patients and the public were not involved in the design, conduct, reporting or dissemination plans of this research. Dissemination and discussion of study design and results are made to a national committee and a local district committee.The research questions were not informed through direct consultation of patients\u2019 priorities, experience or preferences. However, many members of our local research team are from the local population where the research is conducted and were involved in pretesting, questionnaire development and recruitment. Findings will be disseminated to national and local stakeholders through meetings and local presentations.Of the 441\u2009women enrolled in the study, 396 were included in our analysis after accounting for exclusion criteria described in About half of women overestimated the number of IFA tablets they received by at least 30 tablets and over a third overestimated the number of ANC visits by at least one visit . About aThe presence of any other adult was significantly associated with overestimation of receipt of counselling on managing nausea as well as overestimation of the number of ANC visits. MILs were present during about a quarter of all interviews. Their presence was associated with significant overestimation of the receipt of deworming tablets ) and more than doubled the risk of overestimating the number of ANC visits by three or more visits ) . InteresWomen interviewed in the presence of MILs were less accurate in estimating the number of ANC visits but more accurate in recalling the receipt of counselling related to avoiding alcohol consumption in pregnancy. Women tended to be less accurate when estimating the number of ANC visits or receipt of IFA in front of their husbands .We also examined how the receipt of help from observers to answer questions during the interview may have affected the accuracy of recall of services . AssistaPrevious studies exploring the validity of questions asking respondents to self-reported receipt of ANC services have raised SDB as a possible type of error that might affect coverage estimates.The effects of SDB are complex,25 26However, it is unclear why higher scores on the SDS were positively associated with more accurate recall of counselling on management of nausea. 
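A rough sketch of the risk-ratio models described in the statistical-analysis paragraph above is given here: a log-binomial GLM estimated with statsmodels, with a log-link Poisson model (robust errors) as the fallback when the binomial fit fails. The outcome and covariate names are hypothetical placeholders matching the adjustment set named in the abstract, and this illustrates the general approach rather than the authors' code.

import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf

def risk_ratios(df, formula="accurate ~ sds + age + education + first_pregnancy + ses"):
    """Estimate risk ratios directly with a log-binomial GLM; fall back to a
    log-Poisson model with robust standard errors if the binomial fit fails."""
    try:
        fit = smf.glm(formula, data=df,
                      family=sm.families.Binomial(link=sm.families.links.Log())).fit()
    except Exception:  # non-convergence and perfect-separation style failures end up here
        fit = smf.glm(formula, data=df,
                      family=sm.families.Poisson(link=sm.families.links.Log())
                      ).fit(cov_type="HC0")
    rr = np.exp(fit.params)      # exponentiated coefficients are adjusted risk ratios
    ci = np.exp(fit.conf_int())  # 95% confidence limits on the risk-ratio scale
    return rr, ci

In practice additional convergence diagnostics would be inspected before falling back, and a separate model of this form would be fitted for each ANC service and for each outcome definition (accuracy versus overestimation).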
One possible explanation is that the counselling focused only on women that complained of nausea and not all women.In contrast to questions about the simple receipt of services during ANC visits, questions about the number of ANC visits or IFA consumption could be more subject to SDB because they relate to a woman\u2019s initiative in seeking services or taking supplements and because there is likely positive sentiment around the benefits of services for women and their babies. This could explain the finding that the presence of MILs and/or husbands during the interview was associated with a significantly greater overestimation of the number of ANC visits. Cognitive testingIFA coverage is an important indicator for global monitoring of nutrition interventions; it is included in the Global Nutrition Monitoring Framework and an indicator for the Countdown to 2030.4Our study is the first that we are aware of to test a SDS in the context of rural Nepal, though SDSs have been used in urban IndiaA strength of our study is that we used direct observation to assess the receipt of services in clinics. However, the tool used to assess social desirability has not been validated in a rural context and was only slightly adapted for comprehension by respondents in our area. One adaptation we made to improve comprehension by the study population was to reformat the SDS tool from statements into questions. Ultimately, however, it is not clear how well the scale was able to capture socially desirable response tendencies in this context. Our study used a recall period of approximately 6 months, which may have facilitated more accurate recall of receipt than typically is assessed in surveys such as the DHS, which currently administers ANC modules to women up to 2\u2009years postpartum. The implications of different recall periods for SDB are unclear. Additionally, the enumerators noted the presence of others at any point in the interview and not specifically during the questions about receipt of services; without more detailed information about observer presence when the questions were asked, it is difficult to attribute findings specifically to SDB.These findings reinforce the guidance often given during survey training to conduct surveys without others present, though doing so in practice is often challenging in rural village settings where norms around privacy differ. As we found that the SDS was not well correlated with overestimation or lack of accurate recall, there was no need to correct for SDB. Given that the generalisability of our study to other contexts including other parts of Nepal is uncertain, we recommend further studies be conducted in other contexts to develop and validate SDSs and use them to better understand the effects of SDB on estimates of services use."} +{"text": "The study of hominin genomes has provided insights into human evolution and reproductive functions. The hominin genome is the complete set of genetic material that is present in the cells of hominins, a group of primates that includes modern humans and their extinct relatives. It consists of approximately three billion base pairs of DNAs that are organized into 23 pairs of chromosomes . The geOne of the most significant findings from the study of hominin genomes is the evidence of interbreeding between early modern humans and Neanderthals. Genetic analysis has shown that Neanderthal DNA makes up approximately 1-4 % of the DNA in non-African populations . 
This iThe study of hominin genomes has also shed light on the evolution of reproductive functions in humans. For example, genetic studies of the genomes of Neanderthals and modern humans have identified several genes that are involved in reproductive biology, including those that influence fertility, sperm motility, and the development of testes and ovaries . Some oSome of these genes have undergone rapid evolution in humans, suggesting that they may have played a role in the adaptation of our species to different environments and lifestyles . AnotheMoreover, the identification of genetic variants associated with fertility and reproduction in ancient hominin genomes has important implications for modern human health. For example, genetic studies of the genomes of Neanderthals and modern humans have identified variants associated with reduced fertility, which may have contributed to the extinction of some early human populations. A study published in 2016, examined the DNA of Neanderthals and found that they had a higher number of harmful mutations in genes related to male fertility compared to modern humans . AdditiIn conclusion, the study of hominin genomes has provided insights into human evolution and reproductive functions. The discovery of interbreeding between early modern humans and Neanderthals, as well as the existence of new hominin species like the Denisovans, has challenged our understanding of human origins and the process of speciation. The genetic analysis of FOXP2 and Y chromosomes has revealed how changes in our genome have influenced the evolution of language and migration patterns in early humans. Discovery of more fossils and ancient DNA would be critical in refining the understanding of human evolution and deciphering the existence of infertility like physiological ailments.The authors declare no conflicts of interest.Not applicable."} +{"text": "Mycobacterium avium subsp. paratuberculosis (MAP), is a chronic progressive granulomatous enteritis mainly affecting domestic and wild ruminants worldwide. Although paratuberculosis could be prevail in Ethiopia, there is a scarcity of epidemiological data on paratuberculosis in the country. Thus, this study was conducted to estimate the prevalence of paratuberculosis based on gross and microscopic lesions in cattle slaughtered at ELFORA Abattoir, central Ethiopia. Small intestines and associated lymph nodes of 400 apparently healthy cattle which were slaughtered at ELFORA export abattoir were examined for gross and microscopic lesions of paratuberculosis. The microscopic lesions were classified into four grades (I-IV) based on the type and number of cells infiltrated into the lesion. The prevalence of paratuberculosis was estimated on the basis of gross as well as microscopic lesion of paratuberculosis.Paratuberculosis, caused by The prevalence of paratuberculosis was 11.25% on the basis of gross lesion. However, relatively lower prevalence was recorded based on microscopic lesion. The gross lesions were characterized by intestinal thickening, mucosal corrugations and enlargement of associated mesenteric lymph nodes. On the other hand, the microscopic lesions were characterized by granuloma of different grades ranging from grade I to grade III lesions.The present study indicated the occurrence of paratuberculosis in cattle of Ethiopia based on the detection of gross and microscopic lesions consistent with the lesion of paratuberculosis. 
The result of this study could be used as baseline information for future studies on the epidemiology and economic significance of paratuberculosis. Mycobacterium avium subsp. paratuberculosis (MAP) which is a facultative intercellular acid-fast bacillus (AFB) belonging to the genus Mycobacterium. MAP is an extremely slow growing mycobactin-dependent organism that replicates within the macrophages of both the gastrointestinal tract and associated lymphoid tissues [Paratuberculosis (PTB), commonly known as Johne\u2019s disease (JD), is a chronic progressive granulomatous enteritis of ruminants. It also affects a wide variety of domestic and wild life species worldwide , 2. It i tissues . Contami tissues . Cattle tissues . Intraut tissues .Paratuberculosis induces a significant economic and health problem worldwide, especially in the cattle industry. Economic losses occur due to decreased in milk production 15\u201316%), low carcass yield %, low ca, decreasParatuberculosis occurs in most part of the world and its incidence is rising from time to time . It has In contrast to these industrialized countries, the exact sanitary situation of most African countries with regards to paratuberculosis is unknown but the occurrence of the diseases is suspected in most countries and the disease has been confirmed in a few countries of Africa. There are very few studies carried out to date, which leaves a big information gap on a very important disease of livestock. Only case reports and limited prevalence studies covering small areas in a few countries are available . For exaPrior to this study, Temesgen and Gemehu in 1995 reported a case of paratuberculosis from Ethiopia based on the history of diarrhea cases lasted for two years and clinical examination althoughThe study was conducted from September 2013 to July 2014 at Bishoftu ELFORA export abattoir in central Ethiopia. ELFORA export abattoir is located at Debre Zeit Town 47\u00a0km southeast of Addis Ababa. Currently the abattoir is one of the modern export abattoirs in Ethiopia and is exporting meat of small ruminants to Middle East countries and African countries , But the cattle slaughtered in the ELFORA export abattoir are used for local consumption. During the study on average 400 and 500 sheep and goats respectively, were slaughtered at this abattoir per day. On the other hand, on average 70\u201385 cattle were slaughtered per week based on local market needs.Bos indicus) and only males with variable body condition scores and age groups.Animals used for the study were cattle slaughtered at ELFORA export abattoir. The study cattle were purchased from different zones of the country particularly Borana, Arsi, Bale, Gondar, Jimma and southern Ethiopia. All study animals were local breed and on the associated intestinal lymph nodes as previously described by Hailat . All chaHistopathological examination was performed according to a protocol described by Bancroft . Tissue Data from gross examination and laboratory results were entered into Microsoft Excel 2010 spread sheets and the prevalence of bovine paratuberculosis was calculated in percentage. Animal prevalence was defined as the number of cattle found positive for paratuberculosis lesion per 100 animals examined.The prevalence of paratuberculosis on the basis gross pathology was 11.25% . Thus, of the 400 cattle examined for the gross lesion of paratuberculosis, 45 cattle had intestines and/or lymph nodes with gross lesions compatible with the lesion of paratuberculosis. 
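Because animal prevalence is defined above as positive animals per 100 animals examined, the reported figures reduce to simple arithmetic on these counts; a quick check (the microscopic-lesion count appears later in the Results):

# Animal prevalence = positive animals per 100 animals examined
animals_examined = 400
gross_lesion_positive = 45
print(100 * gross_lesion_positive / animals_examined)        # 11.25 (%), as reported

microscopic_lesion_positive = 8
print(100 * microscopic_lesion_positive / animals_examined)  # 2.0 (%), as reported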
The gross lesions were characterized by thickening and corrugations of the intestinal mucosa, or enlargement of associated intestinal lymph nodes. Grossly, the intestinal wall was thickened in its different parts particularly at the ileoceacal junction. When thickened parts of the intestine were opened they showed corrugations and elevations which did not disappear on stretching . Thus, the 45 cattle which were positive for gross lesion of paratuberculosis were further examined for microscopic lesion using Hematoxylin-Eoesin stained sectioned tissues of which eight animals had microscopic lesions consistent with the microscopic lesion of paratuberculosis. The microscopic lesions were characterized by an increase in the thickness and congestion of the intestinal mucosa due to inflammatory cells infiltrations. Out of the eight cattle with microscopic lesions, two cattle showed severe thickness and corrugation of the mucosa which were formed longitudinal and transverse ridges that could not be reduced by stretching . The prevalence of paratuberculosis was estimated on the basis of gross as well as microscopic lesion of paratuberculosis.The prevalence of paratuberculosis recorded in the present study on the basis gross lesion was 11.25%, which was similar with the prevalence of the study conducted earlier in Canada while itMacroscopically a variety of gross lesions which were normally associated with paratuberculosis were observed in the ileum. Thickening and corrugations of the ileum were obvious in the last portion of the ileum particularly at the ileoceacal junction. In severely affected cases, there were diffuse thickenings along with transverse and longitudinal corrugations of the intestinal wall making irregular folds \u201342. CongThe prevalence of paratuberculosis recorded in the present study was 2% on the basis of microscopic lesion. This microscopic lesion-based prevalence was comparable with the prevalence reported by the study conducted earlier in Pakistan , while iThe grade III lesion found in 2 cattle had diffuse infiltration of epithelioid cells, macrophages and multinucleated giant cells in the last portion of the small intestinal ileum and their associated lymph nodes. The grade III lesions observed in the present study corresponds most closely to those of the multibacillary form of paratuberculosis which is in agreement with the earlier by other authors , 44\u201348. Grade II lesions had the same pathological pattern of grade III, except that the severity of the lesions was moderate with few epithelioid cells, but there were more lymphocyte infiltrations. While the grade I lesions had less cellular infiltrations which was consisted more of lymphocytes and some scattered macrophages. This observation is consistent with the observation made earlier by other researcher . PreviouThe study is not without limitation as it was based on gross and microscopic lesions of paratuberculosis confirmation of the diagnosis using identification of the etiologic agent (MAP) or it is nucleic acid using molecular typing was necessary although this could not be done because of resources.This study indicated the occurrence of paratuberculosis in cattle of Ethiopia on the basis of gross and microscopic lesions. This study was the first in pathologically detecting lesions consistent with the lesion of paratuberculosis although only one study reported the seroprevalence of paratuberculosis earlier. 
Thus, the result of the present study highlights the need for additional studies to establish its epidemiology and economic significance of partatuberculosis in the country."} +{"text": "Following the publication of this article , concernThe University of California, Davis confirmed that author DPW admitted to manipulation of the data underlying the results presented in Fig 2.PLOS Genetics Editors issue this Expression of Concern to notify readers of the above issues.The authors are working with PLOS to try and address these issues. Meanwhile, the"} +{"text": "The use of a stent to coil an aneurysm can alter the position of the main blood vessel and affect blood flow within the sac. This study thoroughly examines the impact of stent-induced changes on the risk of MCA aneurysm rupture. The research aims to assess the effects of coiling and vessel deformation on blood flow dynamics by comparing the OSI, WSS, and blood structure of two distinct MCA aneurysms to identify high-risk areas for hemorrhage. Computational fluid dynamics is used to model blood flow. The results indicate that aneurysm deformation does not always decrease the risk of rupture, and coiling is more effective in occluding blood flow than aneurysm deformation. If there is a residual neck or unoccupied neck remnant, growth, recanalization, and rupture may occur, making treatment essential5.In the United States, the rupture of intracranial aneurysms is a significant challenge, as 30,000 cases of ruptured aneurysms are reported in US patients each year. The rupture of an aneurysm can lead to death or debilitation, and endovascular coils are the primary conventional treatment technique used to obstruct aneurysms and reduce the risk of rupture. Previous reports suggest that the quality of coiling plays a crucial role in reducing the risk of aneurysm rupture, particularly in the aneurysm ostium region, which experiences high-velocity impinging blood flow due to poor filling7. This technique substantially decreases the blood flow into the sac and makes a framework for thrombus establishment9. The application of the stent is helpful for the efficient coiling of the cerebral aneurysm11. The usage of the polymer foam as coils results in optimistic long-standing healing based on the in vivo studies. On the other side, the filling of the aneurysm could effectively have been done by the usage of a stent since this permits the coiling of the aneurysm with higher density14. In addition, the usage of the stent allows higher coiling near the neck area which is susceptible to rupture by the reason of the high flow rate.The usage of shape polymer foam is suggested as one of the reliable methods for occlusion of the aneurysm18. The recognition of the blood hemodynamic under impacts of the stent-induced deformation offers valuable information for surgeons for estimation of long-term treatment of the aneurysm22. The post-interventional impacts of stents have been investigated in limited research and results considerably vary based on the types and shapes of aneurysms27.The side effect of stent usage is the deformation of the parent vessel which also has a great impact on the performance of the occlusion of blood stream into the main sac areaIn this study, the hemodynamic analyses of the MCA aneurysms have been performed to reveal the impacts of the stent on the rupture risk of the saccular aneurysm after the deformation of the parent vessel. 
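The abstract above compares OSI and WSS between the two aneurysm geometries but does not restate how the oscillatory shear index is obtained. A minimal post-processing sketch using the standard definition, OSI = 0.5 * (1 - |integral of the WSS vector over the cycle| / integral of |WSS| over the cycle), is given below; numpy is assumed, the sampling is hypothetical, and this is generic post-processing rather than the solver setup used in the study.

import numpy as np

def oscillatory_shear_index(wss_series, dt):
    """OSI at one wall point from WSS vectors sampled over a cardiac cycle.

    wss_series: array of shape (n_steps, 3), the WSS vector at each time step
    dt: time step between samples
    """
    magnitude_of_integral = np.linalg.norm(np.sum(wss_series, axis=0) * dt)
    integral_of_magnitude = np.sum(np.linalg.norm(wss_series, axis=1)) * dt
    return 0.5 * (1.0 - magnitude_of_integral / integral_of_magnitude)

# Sanity checks: purely oscillating WSS approaches 0.5, unidirectional WSS gives 0
t = np.linspace(0.0, 1.0, 200, endpoint=False)
oscillating = np.stack([np.sin(2 * np.pi * t), np.zeros_like(t), np.zeros_like(t)], axis=1)
steady = np.tile(np.array([1.0, 0.0, 0.0]), (200, 1))
print(oscillatory_shear_index(oscillating, t[1] - t[0]))  # ~0.5
print(oscillatory_shear_index(steady, t[1] - t[0]))       # 0.0

High OSI marks wall regions where the shear direction reverses over the cycle, which is why the index is read together with WSS when judging rupture risk.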
To study the hemodynamic impacts of the aneurysms, computational fluid dynamics is applied for the analysis of the bloodstream inside the MCA aneurysm at two stages of deformations. In addition, the influence of coiling porosity is also investigated after the deformation of the parent vessel.28.It is confirming that all methods were carried out in accordance with relevant guidelines and regulations. Besides, all experimental protocols were approved by of the Ca\u2019 Granda Niguarda Hospital and it is confirmed that informed consent was obtained from all subjects and/or their legal guardian(s). All study are approved by Ca\u2019 Granda Niguarda Hospital ethics committee32. The transient form of Navier\u2013stokes is used for the modeling of blood flow through the vessel and sac section35. The one-way FSI model is used to apply the impacts of the vessel interaction under the impacts of blood pressure39. This technique is extensively used for the simulation of deformable domain where fluid has impacts on the solid wall42. Computational technique has been used in different applications of engineering44 and medical devices46. In this approach, the pressure force on the vessel would result in the defamation of the sac and vessel. The bloodstream is assumed transient with cardiac cycle and non-Newtonian28. Casson model is applied for the calculation of the blood viscosity29.The simulation of blood hemodynamics inside the MCA aneurysm with/without deformation by stent is done via computational fluid dynamics28. This figure also demonstrates the applied boundary condition for the chosen aneurysms. Inflow blood is applied by mass flow rate pattern under the impacts of deformation and coiling is displayed in Fig.\u00a0To understand the role of deformation on the hemodynamic of the MCA aneurysm, Fig.\u00a0OSI index is critical for the evaluation of the aneurysm rupture. Figure\u00a0The results of mean OSI on sac wall for different coiling porosities under impacts of deformations are presented in Fig.\u00a0In Fig.\u00a0The impacts of stent-induced deformation on the hemodynamics of two distinctive MCA aneurysms are fully investigated in the present research. This study examined and explored the role of endovascular coiling when the parent vessel is deformed by the implementation of stents on the parent vessel near the sac. The modeling of the bloodstream in vessels and aneurysms is done via the computational fluid dynamic technique. The blood flow feature and WSS of aneurysm are compared to disclose the influence of coiling after post-interventional deformation. Presented results show that the effects of MCA aneurysm deformation are not always favourable for occlusion of the blood entrance. A comparison of coiling and deformations indicate that the coiling of an aneurysm would effectively reduce the risk of MCA aneurysm rupture."} +{"text": "The field of immunology is rapidly progressing, with new monogenic disorders being discovered every year. The heterogeneity of clinical manifestations and the genetic background of immunodeficiencies brought about the new definition of inborn errors of immunity (IEI), which was adopted by the International Union of Immunological Societies (IUIS) in 2019. This term reflects a considerable change in the viewpoint of immunologists, with a deeper recognition of the non-infectious manifestations of IEI and their atypical presentations. 
In this current intriguing context, this Special Issue offers an overview of some of the most updated concepts in immunology, ranging from the discussion of some peculiar aspects of well-known entities to the presentation of recently discovered diseases. The Special Issue includes six original research papers and seven review papers submitted from different countries.As the first contribution to this Special Issue, we provided a review on the autoimmune manifestations of IEI, with a specific focus on the various molecular mechanisms involved in autoimmunity and potential targeted therapeutic strategies . This woFollowing this, other interesting elements can be derived from the review paper by Pieniawska-\u015amiech et al., in which some of the most relevant non-infectious presentations of IEI are discussed . SpecifiThe atypical presentation of IEI is also the main focus of the paper by Morawska et al., which explores the spectrum of atopic manifestations in patients with selective IgA deficiency (sIgAD). This review gives specific attention to the epidemiological and clinical features of the atopic diseases associated with sIgAD and discusses the most relevant diagnostic aspects . SimilarThe other papers in this Special Issue have a major focus on the molecular mechanisms responsible for IEI. Concerning this, the review paper by Romano et al. discusses the role of epigenetic alterations associated with IEI . This paThe review paper by Mertowska et al. deeply dThe paper by Votto et al. deals wiAnother paper analyzing a rare and poorly recognized entity is the cohort study by Alberio et al. , in whicFinally, the wide spectrum of antibody deficiencies is the main focus of four research papers and a review paper. The study by Wi\u0119sik-Szewczyk et al. analyzesFinally, the paper by Sgrulletti et al. focuses To conclude, the present Special Issue represents an overview of the current immunological scenario, and deals with different innovative concepts and clinical and research approaches. Indeed, the expanding availability of immunological and genetic testing offers the opportunity to identify new disease entities and elucidate the function of new genes involved in the development and regulation of the immune response. In this continuously evolving field, both researchers and clinicians need to be constantly updated on the most relevant innovations, and with this Special Issue we hope to have contributed to this extremely relevant topic."} +{"text": "The transition to intelligent transportation systems (ITSs) is necessary to improve traffic flow in urban areas and reduce traffic congestion. Traffic modeling simplifies the understanding of the traffic paradigm and helps researchers to estimate traffic behavior and identify appropriate solutions for traffic control. One of the most used traffic models is the car-following model, which aims to control the movement of a vehicle based on the behavior of the vehicle ahead while ensuring collision avoidance. Differences between the simulated and observed model are present because the modeling process is affected by uncertainties. Furthermore, the measurement of traffic parameters also introduces uncertainties through measurement errors. To ensure that a simulation model fully replicates the observed model, it is necessary to have a calibration process that applies the appropriate compensation values to the simulation model parameters to reduce the differences compared to the observed model parameters. 
Fuzzy inference techniques proved their ability to solve uncertainties in continuous-time models. This article aims to provide a comparative analysis of the application of Mamdani and Takagi\u2013Sugeno fuzzy inference systems (FISs) in the calibration of a continuous-time car-following model by proposing a methodology that allows for parallel data processing and the determination of the simulated model output resulting from the application of both fuzzy techniques. Evaluation of their impact on the follower vehicle considers the running distance and the dynamic safety distance based on the observed behavior of the leader vehicle. In this way, the identification of the appropriate compensation values to be applied to the input of the simulated model has a great impact on the development of autonomous driving solutions, where the real-time processing of sensor data has a crucial impact on establishing the car-following strategy while ensuring collision avoidance. This research performs a simulation experiment in Simulink and considers traffic data collected by inductive loops as parameters of the observed model. To emphasize the role of Mamdani and Takagi\u2013Sugeno FISs, a noise injection is applied to the model parameters with the help of a band-limited white-noise Simulink block to simulate sensor measurement errors and errors introduced by the simulation process. A discussion based on performance evaluation follows the simulation experiment, and even though both techniques can be successfully applied in the calibration of the car-following models, the Takagi\u2013Sugeno FIS provides more accurate compensation values, which leads to a closer behavior to the observed model. The importance of traffic simulation models has increased in recent years with the growing interest in driver assistance systems and autonomous driving systems. The development and calibration of such complex systems can greatly benefit from the use of simulations to reduce time and costs. Among other traffic simulation models, the continuous-time car-following model is particularly useful in the design and fine-tuning of various advanced driver assistance systems (ADASs) such as adaptive cruise control and coopThis paper aims to evaluate the performance of two well-known fuzzy inference techniques, Mamdani and TakaThe justification behind conducting this research also results from a literature search and analysis on the main scientific databases, such as ISI Web of Science (WoS), Scopus, and Google Scholar. The search was carried out between 2 June and 5 September. 
Performs a literature search on the main scientific databases such as ISI WoS, Scopus, and Google Scholar and evaluates the results obtained based on their relevance to the comparative analysis of the application of Mamdani and Takagi\u2013Sugeno FISs in the calibration of car-following models;Performs a literature review in two directions, one considering the approaches based on the application of the two fuzzy inference techniques in the calibration of car-following models, and another focusing on similar comparative investigations of these techniques in other fields of research;Proposes an original methodology to perform the comparison of Mamdani and Takagi\u2013Sugeno FISs in the case of the calibration of car-following models by adapting their characteristics according to the needs of the modeling of this concept;Implements a three-parallel-simulation model in Simulink to reproduce the behavior of the observed car-following model and to calculate the compensation values to be applied to the input of the simulation model so that it replicates the behavior of the observed model; for this last purpose, two calibration models are implemented corresponding to the two fuzzy inference techniques;Validates the proposed methods for the calibration of the car-following model using real-world traffic data and provides a comparative evaluation in terms of the performance of the Mamdani and Takagi\u2013Sugeno FISs in the context of the calibration of the car-following models;Identifies the limitations of this study and provides recommendations for future research.This research addresses a scientific literature gap related to a comparative analysis of the behavior of a car-following calibration system in the case of using Mamdani and Takagi\u2013Sugeno FISs. To achieve this goal, this article has the following scientific contributions:This discussion of related work follows two directions. The first analyzes the applicability of fuzzy techniques in the implementation of the calibration process of car-following models, as identified during the literature analysis of the papers found according to the approach described in Bennajeh and Ben Said proposedHowever, the use of fuzzy-based solutions showed promising results in addressing the uncertainties in autonomous driving systems capable of ensuring collision avoidance. P\u00e9rez et al. developeSimilar comparative studies were conducted in various fields, such as optical and wireAn increased number of articles that conducted comparisons between the fuzzy techniques analyzed in this study was observed in the field of power engineering. Phunpeng and Kerdphol concludeHowever, the studies performed are not limited to the comparison of these two fuzzy techniques: they also investigated possible directions of optimization that can benefit in both cases. Bagis and Konar applied The existing literature also offers some comparisons between Mamdani and Takagi\u2013Sugeno FISs related to intelligent transportation systems (ITSs), except for the topic of car-following calibration. Saleh et al. performeThis research applies the methodology illustrated in A simulated car-following model has as main components a subsystem designed for parameters handling and a subsystem responsible for the control strategy of the vehicle behind based on the observed behavior of the vehicle moving ahead , while ensuring collision avoidance . In thist for both cases of inference engines. 
The validation system compares the dynamic characteristics of the observed system at time t with the simulated values obtained at time t after applying the compensation values according to the output of the calibration system for both cases of inference engines. As shown in The calibration system uses In the following subsections, this article presents in detail the characteristics of each of the systems involved in this research. Furthermore, an experimental case study based on real traffic data is chosen to validate this research. All the following information allows other researchers to replicate this study and facilitates future developments.The representation of traffic phenomena at microscopic level ensures a better granularity in identifying the root cause of traffic congestion and simplifies the identification of appropriate measures to improve the traffic control systems. According to Yin et al. , this roS [The state-space representation of the car-following model in continuous time and without time delay in Equation uses thestance S ,34.(1)xS applies the average length of the vehicle for passenger cars ZThe surface diagram describing the three-dimensional relationship between the inputs y to cover the needs for the visualization of the simulated behavior of the car-following model in the presence of Mamdani and Takagi\u2013Sugeno FISs, respectively. This allows for an easier comparison to the observed model, which is considered as the ideal model . Consequently, The simulation experiment uses Simulink to implement the car-following model . Compareuently, x and FV and complies with the definition of membership functions previously presented in this methodology. This implementation uses the notation s so that it can comply with the fuzzy theory that uses the notation with easily interpreted significance [Transport Delay immediately after the output of the Fuzzy Logic Controller blocks aims to cover the processing and response time of these units and, consequently, to ensure that the proposed calibration model fits to real-time processing.The detailed view of the calibration subsystem . The devificance and is aThe validation of the calibration models uses real traffic data from the local center for traffic monitoring and control in Timi\u0219oara city, Romania. The input data consist of velocity profiles for the Performance analysis considers the calibration time of Mamdani and Takagi\u2013Sugeno FISs as the main method of evaluation; this metric is also observable based on a visual analysis of the obtained results.i-th recordings of observed and simulated values from a total of N recorded data.To obtain a better overview in terms of performance, this research also uses the same methodology as in other similar studies ,44,45,46y is available in The calculation of these metrics for the running distance The simulation results show thaThe Takagi\u2013Sugeno FIS behaves better than the Mamdani FIS in identifying the time-varying appropriate compensation values that, applied to the simulated running distance of the FV ,24,47. AHowever, there are differences in terms of computational efficiency; the existence of perturbations and higher scale-down simulated values of the FV running distance before the system achieves the calibrated state are explained by the existing literature. 
The Mamdani FIS identifies fuzzy output values through the aggregation of multiple fuzzy rule outputs ,24,26,27y to validate the application of Mamdani and Takagi\u2013Sugeno FISs in the calibration of a continuous-time car-following model. Similar to the evaluation of these metrics in the case of running distance S in the calculation of y has no effect on influencing the performance of the two fuzzy techniques because it applies to all the time-varying processed parameters and is a constant value that is provided by the car-following strategy and cannot be measured.Thus, even if both fuzzy techniques can be successfully applied for the calibration of car-following models, the Takagi\u2013Sugeno FIS is characterized by more accurate results, the computational efficiency being ensured by the use of constant crisp values in the definition of the membership functions and the application of the weighted average technique in the determination of the FIS output.This article addresses a gap in the scientific literature consisting of the absence of a comparative analysis between the application of Mamdani and Takagi\u2013Sugeno FISs for the calibration of car-following models. The proposed methodology for performing this comparison consists of defining the membership functions of both FISs considering the standard safety distance that should be applied in the FV control strategy to avoid collisions during travel. Although the input membership functions and the fuzzy rules are the same, the differences arise from the definition of the output membership functions .A simulation experiment performed in Simulink shows that both FISs succeeded in calibrating the car-following model. However, the Takagi\u2013Sugeno FIS obtains better results in terms of performance according to the evaluation of performance metrics represented by MAE and RMSE, widely used for evaluation in related work. The impact of the computation errors that lead to scaled-down values of the running distance of the FV is also observable in both cases, but the differences are smaller in the case of the Takagi\u2013Sugeno FIS.Furthermore, the Takagi\u2013Sugeno FIS is suitable for use in the calibration process of car-following models based on the stability process of the identification of the time-varying compensation values, whereas the approach using the Mamdani FIS is affected by perturbations. The source of these perturbations is strongly connected to the definition of output membership functions: the higher surface of values covered in the case of the Mamdani FIS can introduce erroneous compensation values, while the Takagi\u2013Sugeno FIS offers a more stable behavior because of the definition of output membership functions as crisp constant values.However, this research also has some limitations that can be addressed in future research. This results from the intervals chosen for the membership functions and also from the type of function chosen. The particularity of the continuous-time car-following model used in this research can influence the behavior of the calibration method and can impact the performance of the FIS. 
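To make the contrast summarized above concrete (Mamdani clipping and aggregating output membership functions before defuzzification, zero-order Takagi-Sugeno taking a weighted average of crisp constant outputs), the following self-contained sketch evaluates a single illustrative rule pair. The membership functions, rule base and constants are hypothetical and far simpler than the article's Simulink calibration model; the sketch only illustrates the structural difference to which the discussion attributes the accuracy and efficiency gap.

import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Hypothetical normalised input: error between simulated and observed FV running distance
def mu_small(e):
    return tri(e, -1.0, 0.0, 1.0)

def mu_large(e):
    return tri(e, 0.0, 1.0, 2.0)

def mamdani(e, y=np.linspace(0.0, 2.0, 401)):
    """Min implication, max aggregation, centroid defuzzification over the output universe."""
    w_small, w_large = mu_small(e), mu_large(e)
    out_small = np.minimum(w_small, tri(y, -1.0, 0.0, 1.0))  # rule: small error -> small compensation
    out_large = np.minimum(w_large, tri(y, 0.0, 1.0, 2.0))   # rule: large error -> large compensation
    aggregated = np.maximum(out_small, out_large)
    if aggregated.sum() == 0.0:
        return 0.0
    return float(np.sum(y * aggregated) / np.sum(aggregated))

def takagi_sugeno(e, z_small=0.0, z_large=1.0):
    """Zero-order Sugeno: weighted average of crisp constant rule outputs."""
    w_small, w_large = mu_small(e), mu_large(e)
    if w_small + w_large == 0.0:
        return 0.0
    return float((w_small * z_small + w_large * z_large) / (w_small + w_large))

for e in (0.2, 0.5, 0.8):
    print(e, round(mamdani(e), 3), round(takagi_sugeno(e), 3))

The Sugeno branch needs no output discretisation or centroid integration, which is consistent with the smoother and computationally cheaper behaviour reported above for the Takagi-Sugeno calibrator.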
The weaknesses of this research are also related to the use of simulated fault injection to represent the errors from the measurement of sensor data and those introduced by the simulation process, the configuration of the Simulink Band-Limited White Noise block also having influence on the calibration process.Future research can perform comparative evaluations of these two fuzzy techniques in the presence of additional control mechanisms such as genetic algorithms or neural networks. Also, it is worth analyzing how Mamdani and Takagi\u2013Sugeno FISs behave for the calibration of traffic models at upper levels of modeling, such as mesoscopic and macroscopic.Future research can also address the limitations of this research described at the end of The use of fault injection mechanisms can be replaced by approximated values of the possible measurement errors calculated according to the worst-case scenario by subtracting the values corresponding to the tolerance associated with the sensors used for traffic data acquisition. Currently, the data used in this research are provided by the local traffic management center, and the obtained data do not have details related to the data sheets of the inductive loops placed in the city for traffic monitoring.Another way to collect data for the simulation is to use specific simulators and human subjects. In addition to recording the traffic profiles from the simulator, the emotional behavior of the human subject can also be recorded. The use of tools such as eye-tracking, galvanic skin response analysis, or heart rate analysis facilitates the classification of driver behavior that can be introduced as a third input variable of the FIS. This is essential for adopting the appropriate car-following strategy in the case of autonomous vehicles due to the possible interaction with human-driven vehicles on the road."} +{"text": "Despite multiple attempts to understand its molecular mechanisms, cancer continues to pose a significant challenge to both the general population and healthcare workers. Multiple studies have highlighted the important role of microorganisms in almost all aspects of cancer . The fatYi-Hui and George show the important aspects for the prediction of colorectal cancer (CRC) using 16S rRNA or shotgun metagenomics data through a specific algorithms approach. The approach presented in the article demonstrates the key CRC-related taxa including predominant levels of Bacteroides fragilis (B. fragilis). The importance of identifying specific microbiota constituents such as of B. fragilis may be helpful to reveal the microbial signatures through molecular techniques which may provide valuable information for the disease pathology. In addition, the outcome of the study will open avenues for the management of CRC.One of the most common and deadliest types of cancer is colorectal cancer (CRC). Narabayashi et al. explore the involvement of the adaptor molecule RBM4 in the recruitment of DNMT. The study also shows the role of gut microbiota in the TLR4 gene methylation in the colonic epithelial cells (CEC) line through increasing RBM14 expression. RBM14 is an adaptor molecule controlling the recruitment of DNMT that bind to specific target genes. Furthermore, compared to cells from germ-free mice, the results of this study report the overexpression of RBM14 in conventional mice colonic epithelial cells. Collectively, the results show that the gut microbiota mediates TLR4 methylation in colonic epithelial cells through RBM14 upregulation. 
These results indicate that the intestinal microbiota helps in intestinal homeostasis through epigenetic regulation, corroborating recently published work (DNA methyltransferase (DNMT) is an important enzyme which controls the process of DNA methylation. hed work . Sarfraz et al. discusses the very important aspect of metabolic regulation by nutrition and microbiota. Metabolic regulation by microbiota and nutrition is an important aspect for physiological regulation in multiple diseases including cancer. This article discusses the disparities among microbiota composition and provides the relationship between diet and microbiota with a focus on using this association to harness microbiota modulation through diet for physiological benefits. The article demonstrates various host and exogenous factors which alter the composition of gut microbiota. The article also emphasizes the role of lifestyle and diet on microbiota composition. Moreover, the work shows the important information of the gut microbiome on dietary interventions and the possible effect of nutrition including feeding pattern and circadian patterns which may affect the host metabolism and resultant intestinal microbial population. Host metabolism plays an important role in the progression of cancer and therefore the information presented in the article provides a valuable addition to the field by shedding light on the metabolic regulation by microbiota through diet. The consumed polysaccharides from a diet play a crucial role in the modulation of intestinal microbiota and their functions. The microbial metabolites such as short-chain fatty acids (SCFAs) have the potential to alter the physiology of the host, and act as prominent factors to promote certain immunological disorders and metabolic disorders including cancer and obesity. Hence, personalized nutrition may be used for the management and cure of various immune diseases, metabolic disorders, neurological disorders, and cancer.The article by Approximate 90% of most of oral cancers belong to types of oral squamous cell carcinoma (OSCC).Saproo et al. present a brief research report on the saliva samples of OSCC patients and normal controls. The report presents findings about the differential gene expression analysis of salivary RNAs from OSCC patients. The findings also illustrate potential salivary indicators of OSCC and their relationship to different aspects of carcinogenesis. Notably, the study also explores microbial dysbiosis among OSCC patients and their relationship with tumor-promoting pathways. Though few other studies present microbiota dysbiosis among OSCC patients, the study involving the detection of the integrated landscape for OSCC specific salivary RNA from an Indian population provides a valuable addition to the field. The study found that Prevotella is significantly enriched in OSCC and indicated that differently abundant microbial taxa are also involved in pathways associated with carcinogenesis. The role of the Prevotella genus in inflammation has been presented in several diseases (diseases . The difOverall, the articles gathered in the current research provide a valuable source for understanding the different perspectives about microbiota-mediated cancer and their implications in the development, diagnosis, and management of various cancer types. The high throughput host-microbiota interaction data is generated by current methods and gathered in several databases. 
The articles included in the current Research Topic create a thought-provoking space and update the knowledge status for microbiota involvement in cancer, which in turn may be helpful for their utilization in the diagnosis, therapy, and management of different types of cancer."} +{"text": "This report presents the first case of painful anterior shoulder snapping due to a thickened, fibrotic bursa snapping between the subscapularis and the short head of the bicep during external and internal rotation of the humerus. A 46-year-old presented with a 10-month history of on-and-off anterolateral right shoulder pain and snapping. Direct treatment to the anterior suspected lesions partially and temporarily relieved the pain but did not reduce the snapping. Further musculoskeletal examination and dynamic ultrasound scanning showed dysfunction in the scapulothoracic movement and defects of the muscles that interact with the infraspinatus aponeurotic fascia. An ultrasound-guided diagnostic injection to the suspected lesions in the infraspinatus fascia and its muscles attachments improved the scapulothoracic movement, and the snapping and pain were eliminated immediately after the injection, which further shows that the defects in the infraspinatus fascia may be the root cause of the painful anterolateral snapping. The importance of the infraspinatus fascia and its related muscle in maintaining the harmony of the scapulothoracic movement and flexibility of the shoulder is considerable."} +{"text": "PLOS ONE Editors retract this article [After this article was published, similarities were noted between this article and submissions by other research groups which call into question the validity and provenance of the reported results, and the adherence of this article to the PLOS Authorship policy. Further editorial assessment identified concerns regarding the integrity of the peer review process. In light of these issues, the article .HR did not agree with the retraction."} +{"text": "Road dust cotains tire wear particles (TWPs) and a large amount of mineral particles (MPs). Given that tire tread in vehicles is mainly comprised of natural rubber (NR), isoprene and dipentene could be the main pyrogenic products stemmed from the thermolysis of NR. This offers a great chance to quantify the exact mass of TWP in road dust. As such, this study focused on the influence of MPs on the trends in thermolytic behaviors of NR using the resistive furnace (furnance) and Curie point pyrolyzers. This study confirmed that a reliable correlation in line with the formation of isoprene and dipentene could not be realized using the furnace type of a pyrolyzer. This means that employing the furnace type of a pyrolyzer in quantitification of TWPs could not be a viable and approproiate option due to the diverted thermolytic trends of NR due to differences in the heat transfer and adsoprtion of the pyrogenic products triggered by MPs. In the Curie point type of a pyrolyzer, the production rates of isoprene and dipentene were linearly responded to the mass of NR. The ferromagnetic substance in MPs could lead to the thermolytic trend change of NR. Thus, adopting the Curie point type of a pyrolyzer could be a viable option for quantification of TWPs in road dust when the effects of ferromagnetic substance are well neutralized."} +{"text": "The journal retracts the 2022 article cited above.Following publication, concerns were raised regarding the contributions of the authors of the article. 
Our investigation, conducted in accordance with Frontiers policies, confirmed a serious breach of our authorship policies and of publication ethics; the article is therefore retracted.This retraction was approved by the Chief Editors of Frontiers in Bioengineering & Biotechnology and the Chief Executive Editor of Frontiers. The authors have not responded to correspondence regarding this retraction."} +{"text": "Goal: Conventionally, a surgeon's skill is assessed through visual observation by experts and by tracking patient outcomes. These techniques are very subjective and demands enormous time and effort. Hence, the aim of this study is to construct a framework for automated objective assessment of micro-neurosurgical skill. Methods: A mask region-based convolution neural network (RCNN) is trained to identify and localize instances of surgical instruments from the recorded neurosurgery videos. Then the tool motion and tool handling metrics are computed by tracking the detected instrument locations through time. Microscope adjustment patterns are also investigated via the proposed time based metrics.Results: This study highlights the metrics that could potentially emphasize the variance in expertise between a veteran and a novice. These variations include an expert exhibiting a lower velocity, lower acceleration, lower jerks, reduced path length, higher normalized angular displacement, increased bi-manual handling, shorter idle time and smaller inter tool-tip distances while handling tools accompanied with frequent microscope adjustments and reduced maximum and median intervals between adjustments when compared to a novice. Conclusions: The developed vision based framework has proven to be a reliable method to assess the degree of surgical skill objectively and offer prompt and precise feedback to the neurosurgeons. I.Central Nervous System is incredibly complicated with various interconnections that involve all other major organs and glands Checklist based surgical skill assessments like Structured Assessment of Technical Skills (OSATS), Operative Performance Rating System (OPRS), Multiple Objective Measures of Skill (MOMS) are prone to evaluator bias and offer limited feedback to the trainee residents besides demanding huge time and effort from the experts 1)To automate the assessment of micro-neurosurgical skills in the recorded neurosurgery videos through the introduction of metrics to apprehend the surgeons' tool and microscope handling characteristics.2)To perform statistical analysis to measure the reliability of metrics in grading surgeons' skill. The related works in automated surgical skill assessment is furnished in the Section The objective of this study isII.The surgical competence is a blended outcome of knowledge, technical skills, decision making and team-handling skills of the surgeon. The competencies are commonly assessed either based on the observational approach through rating checklists or by patient outcome measures In recent years, the interest has shifted towards automated analysis of instrument and hand movement from the recorded videos as an effective alternative to access the psychomotor skills. III.A.The dataset comprises of video recordings of a neurosurgeon performing variety of neurosurgeries like removal of gliomas, colloidal cyst and craniopharyngioma over real patients ranging from the year 2011 to 2017 in the Department of Neurosurgery, National Institute of Mental Health and Sciences (NIMHANS), India. 
All surgeries were carried out with the aid of Leica OH5 or Leica OH6 neurosurgery microscopes. And the video recordings of the surgery were acquired at the rate of 25 frames per second with the frame resolution of 640 \u00d7 480. The authors have taken approval from the NIMHANS ethics committee on B.An automated framework to segment microsurgical instruments and to characterize operating patterns using the instruments is presented in the Section IV.The operating maneuvers of a neurosurgeon are accredited as a combination of intuitive (subconscious) to analytical (conscious) actions A.Instance segmentation of the surgical tools is the key to skill analysis. The Mask-RCNN is employed in the proposed model for instance segmentation of surgical tools from the neurosurgical videos. The Mask RCNN model is built on top of the Faster RCNN with the inclusion of small fully convolutional Neural network (FCN) applied to each region-of-interest (RoI) and predict the object mask for each instance in parallel to the existing branch for bounding box recognition Micro-surgical tools have similar form factor and shape resulting in higher chances of misclassification. The false positive outputs are discarded by thresholding the confidence scores. Incorrect detections and mislabelled instances which are inherent consequences with the usage of frame based segmentation models over videos were handled efficiently with a robust and efficient post-processing (REPP) technique conceived by Sabater et al. B.Each predicted mask instances of surgical instruments are binarized by thresholding. The binary image of the surgical instrument is then subjected to medial axis skeletonization to extract the skeleton pixels through which no minimal path from any inner point to the shape boundary exists For angle estimation, a minimum area rotated rectangle is determined for each instances of surgical instruments as shown in the Fig. C.The Suturing segments from 10 different neurosurgeries performed by a neurosurgeon from the year 2011 to 2017 are analysed to decipher the parameters that could elicit distinctive and notable improvements in the surgical skill over time. Five video segments from the year 2011 to 2013 and five segments from the year 2017 are analysed to objectively compare the improvement in dexterity with the transformation of the surgeon into a veteran. All instruments in the surgery were hand-held and no robotic assistance was utilized to reduce resting tremor. From each video recording, the frames are extracted at the rate of 12 per second. Then the extracted frame sequences are subjected to Mask RCNN model followed by post-processing to segment the desired surgical tools. The 2-Dimensional coordinates of the segmented tool's tip and the orientation of the tools are then computed as described in the Section V.The advent of microscope into the operating room by Nylen in 1921 revolutionized the surgical practices In the operating microscopes, when the focusing aid is active, the focusing lasers are triggered. Hence, any microscope adjustments is indirectly inferred from the presence of the two red colored laser spots on the image. Detecting and tracking of the laser dots in the recorded videos helps to analyze the adjustment patterns.A.Yolov5 which belongs the family of single-stage deep learning framework for object detection was employed to detect the laser spots. 
Yolov5 employs CSPDarknet53 as the backbone for feature extraction, which feeds into a path aggregation network (PANet) for feature fusion, followed by the YOLO head to generate predictions. B. Neurosurgeons are required to be very adept at handling the microscope, as almost all surgeries of the brain and spine, including complex and delicate procedures, are carried out with its aid. With the aid of operating microscopes, there is minimal chance of disturbance to the neighbouring regions of the abnormality even in intricate neuro-procedures, thereby resulting in improved patient outcomes. In micro-neurosurgery, the perceived image space under the magnification offered by the microscope is different from the actual one. The major challenge for the neurosurgeon is to be trained to assimilate tactile feedback from the instruments and visual information from the microscope and to automatically compensate for the perceptual mismatch from experience. VI. Efficient handling of tools and the operating microscope are salient traits of an experienced neurosurgeon. This study aims to establish useful metrics that assess the improvements in the micro-neurosurgical skills of a neurosurgeon over the years of practice. The Mann-Whitney U test is employed to study the differences in the metrics between the two ranges of years, as it does not rely on distributional assumptions. Multiple tools such as the needle holder, straight microscissors, dural tooth forceps and suction have a role during suturing. It is therefore very pertinent to examine the optimal multi-tool handling, tool switching, task planning and sequencing ability of a surgeon, and hence the suturing segment was chosen for analysis in this study. The tool handling metrics are evaluated and compared between the suturing segments of ten neurosurgery videos of a neurosurgeon recorded between 2011 and 2013, and in 2017. The initial step for computing tool-based metrics involves detecting and localizing the different micro-neurosurgical tools using Mask-RCNN. Then the tip of the surgical tool is tracked to compute the tool-handling metrics. The parameters are sensitive enough to discriminate various levels of expertise of a neurosurgeon, as summarized in the table. The first fifty-minute duration of eight recorded neurosurgeries performed by a neurosurgeon over the years 2011–2012 and 2016–2017 is analysed to study the operating microscope adjustment patterns, as summarized in the table. VII. This study detailed a framework for automated objective assessment of micro-neurosurgical skills, and this is the first reported study on the assessment of real-life neurosurgery rather than a bench-top task. The proposed video-based methodology is designed to assist residents to measure and compare tool handling and microscope handling characteristics with explainable, standardized metrics. To the best of our knowledge, this is the first time operating microscope adjustment features have been reported in micro-surgical skill analysis. The proposed technique has the potential to offer real-time feedback and shows promise as a reliable and valid method to track performance over time and to accomplish meaningful comparisons. This pilot study has proven that it is feasible to completely automate the laborious process of surgical skill assessment in residents and is definitely a valuable contribution in the direction of automated surgical skill assessment. 
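As a concrete recap of the analysis pipeline evaluated above, the sketch below shows how kinematic tool-handling metrics can be derived from a tracked tip trajectory and compared between two groups of cases with a Mann-Whitney U test. It is a minimal illustration under stated assumptions (a fixed frame rate, tip positions in pixels, numpy and scipy available), not the authors' implementation.

import numpy as np
from scipy.stats import mannwhitneyu

def motion_metrics(tip_xy: np.ndarray, fps: float = 12.0) -> dict:
    """Basic kinematic metrics from an (N, 2) array of tracked tool-tip positions.

    Velocity, acceleration and jerk are finite-difference estimates; path length
    is the summed inter-frame displacement. Units are pixels and seconds.
    """
    dt = 1.0 / fps
    vel = np.diff(tip_xy, axis=0) / dt
    acc = np.diff(vel, axis=0) / dt
    jerk = np.diff(acc, axis=0) / dt
    speed = np.linalg.norm(vel, axis=1)
    return {
        "mean_velocity": speed.mean(),
        "mean_acceleration": np.linalg.norm(acc, axis=1).mean(),
        "mean_jerk": np.linalg.norm(jerk, axis=1).mean(),
        "path_length": speed.sum() * dt,
    }

def compare_groups(metrics_a: list, metrics_b: list, key: str):
    """Two-sided Mann-Whitney U test on one metric across two groups of videos."""
    a = [m[key] for m in metrics_a]
    b = [m[key] for m in metrics_b]
    stat, p = mannwhitneyu(a, b, alternative="two-sided")
    return stat, p

A per-video dictionary of such metrics for the earlier and later year groups is all that is needed to reproduce a non-parametric comparison of this kind.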
Our ongoing research and the future direction in this field is to establish a structured grading system for surgeons to assess micro-neurosurgical proficiency on a uniform scale."} +{"text": "Herpes zoster oticus results from the reactivation of the varicella zoster virus, a DNA virus of the Herpesviridae family with strictly human-to-human transmission, affecting the geniculate ganglion of the facial nerve. The manifestations of shingles and post-herpetic signs are associated with psychiatric manifestations such as anxiety, insomnia and depressive disorder. Shingles and depressive disorder share common features, such as decreased cellular immunity and a high prevalence in the elderly. Is there a correlation between the intensity of depression and the comorbidity of herpes zoster and depression? Is there an explanation for this association? Can adequate therapy of the infection prevent the occurrence of the depressive disorder? Does the existence of this comorbidity affect the response to antidepressants? Case report and literature review. Case report. We will try to answer these questions in this work, illustrated by the case of a patient affected by this comorbidity and based on what has been published in the literature. None Declared"} +{"text": "In recent years, the effects of the changed statutory framework conditions for the use of physical restraints and of the COVID-19 pandemic on treatment in emergency psychiatry have been discussed in German psychiatry. Against this background, changes in the severity of disease and in the use of coercive measures in our emergency psychiatry unit are to be analysed. An internal retrospective study in the emergency psychiatry unit was performed. - The socio-demographic patient data (exception: gender) and the distribution of the main diagnosis groups remained stable. There was a reduction in the treatment volume by 4% in the pandemic period compared to 2019. - Both in 2019 and 2021, significant increases in the number of patient characteristics of intensive treatment according to the OPS code 9-61 were measured. - During the pandemic period 2021, a significant rise in the percentage of involuntarily committed treatment cases was observed. - Following the changed framework conditions, there were decreases in the total duration of physical restraint and in the number of restraint events per restrained treatment case; the ratio of five-point to seven-point restraint events reduced significantly and continuously. The amendments to the statutory framework for the use of physical restraints made personnel more aware of the issue and consequently led to changes in restraint practice at our emergency psychiatric unit. These effects were partially cancelled out by the increases in the severity of diseases during the pandemic. None Declared"} +{"text": "The journal retracts the 2021 article cited above. Following publication, concerns were raised regarding the contributions of the authors of the article. Our investigation, conducted in accordance with Frontiers policies, confirmed a serious breach of our authorship policies and of publication ethics; the article is therefore retracted. This retraction was approved by the Chief Editors of Frontiers in Immunology and the Chief Executive Editor of Frontiers. 
The authors do not agree to this retraction."} +{"text": "The journal and Chief Editors retract the 29 November 2021 article cited above.Following publication, concerns were raised regarding abnormal similarities with the contents of other articles published by unrelated research groups. A subsequent investigation, which was conducted in accordance with Frontiers\u2019 policies, raised strong concerns over the authorship of the articles, resulting in a loss of confidence in the findings presented in the article.The authors have not responded to this retraction.This retraction was approved by the Chief Editors of Frontiers in Endocrinology and the Chief Executive Editor of Frontiers."} +{"text": "An experimental study has been carried out to assess the effectiveness of infrared thermography in wrinkle detection in composite GFRP (Glass Fiber Reinforced Plastic) structures by infrared active thermography. Wrinkles in composite GFRP plates with different weave patterns (twill and satin) have been manufactured with the use of the vacuum bagging method. The different localization of defects in laminates has been taken into account. Transmission and reflection measurement techniques of active thermography have been verified and compared. The section of a turbine blade with a vertical axis of rotation containing post-manufacturing wrinkles has been prepared to verify active thermography measurement techniques in the real structure. In the turbine blade section, the influence of a gelcoat surface on the effectiveness of thermography damage detection has also been taken into account. Straightforward thermal parameters applied in structural health monitoring systems allow an effective damage detection method to be built. The transmission IRT setup allows not only for damage detection and localization in composite structures but also for accurate damage identification. The reflection IRT setup is convenient for damage detection systems coupled with nondestructive testing software. In considered cases, the type of fabric weave has negligible influence on the quality of damage detection results. The engineering application of composite structures as an alternative to typical metallic materials is often limited because of the great number of possible failure forms. The design process of the composite parts is inseparably connected with the selection and optimization of the manufacturing method. The failure form in composite materials can occur not only as a consequence of the boundary and loading conditions but also during the manufacturing process. The structures exposed to environmental factors are especially prone to durability reduction and different failure forms evolution during normal service life ,2. Thus,Several nondestructive damage detection methods are applied to this failure form in different types of composite structures . The curThe wrinkle detection in CFRP structures by the total focusing method (TFM) has been investigated by Ma et al. . A compaThe development and trends in nondestructive testing and evaluation for wind turbine composite blades have been presented by Yang et al. and SongIn this paper, the wrinkle characterization in the GFRP structures by infrared thermography has been taken into account. First of all, the two types of glass woven weave\u2014twill and satin have been applied in the vacuum-assisted production of plates with wrinkles to verify the influence of the material type on the results of thermographic inspections. 
The different subsurface wrinkle localizations have been prepared to account for the influence of the defect position on the accuracy of the infrared inspections. The section of the turbine blade with a vertical axis of rotation containing post-manufacturing wrinkles has been prepared to verify the proposed active thermography measurement technique in the real structure. One part of this model has been manufactured with the use of a gel coat finishing surface containing the surface defect to validate the accuracy of the active thermographic measurement techniques in the case of wrinkle detection with disturbing factors. The transmission and reflection thermographic measurement techniques have been verified in the analyzed models and the real wind turbine blade. Vacuum bagging is a practical method for both large-scale and small-scale applications, such as wind turbine blades, boats, car components, or some hobby projects. This technique has been used to fabricate GFRP samples with wrinkles based on satin and twill fiberglass fabric configurations and epoxy resin. The applied setup of the flat vacuum bagging technique is shown in the figure. The manufactured plates have been scanned with the use of a REVscan 3D laser scanner and measured with the use of GOM Inspect software 2022 SP1. The Creaform REVscan is a self-positioning, handheld scanner for inspection, quality control, and reverse engineering measurements. The scanner allows digitizing 18,000 points per second with an accuracy of up to 50 µm. The size of the wrinkles measured with respect to the composite surface is demonstrated in the figure. In the next step, vacuum-assisted production was applied to manufacture a section of the real structure of a turbine blade with a vertical axis of rotation containing post-manufacturing wrinkles. The wrinkles appeared during the fabrication of the blade model in the closed mold. The analyzed wind blade model is presented in the figure. Thermography is the process of detection, registration, processing, and visualization of the infrared radiation emitted by objects, using non-contact measurements by imaging devices. Active thermography employs an external heat source generating internal heat flow and an increase in temperature to induce relevant thermal contrast between areas of interest. The reflection and transmission thermographic methods have been tested to check the possibility, accuracy, and effectiveness of wrinkle detection. The experimental setup schema for the case of two halogen lamps is demonstrated in the figure. The transient thermal analysis has been applied with the use of one 1500 W halogen lamp. The time of the analysis was equal to 300 s, and the time of heating was 30 s. This means that the thermal response of the analyzed structures was monitored both in the heating and cooling process. The measurements were conducted with a frame rate of 9 Hz. The specimens with wrinkles have been analyzed with the use of one halogen lamp in the transmission and reflection IRT measurement configurations. It should be emphasized that during the analyses, the manufactured wrinkles were always oriented on the opposite side with respect to the IR camera. This means that the convexity was invisible from the observer's point of view. The IR camera monitored only the flat side of the specimens (see the microimages in the corresponding figure), and the results are reported as the thermal contrast (Tc) distribution versus time. The thermal contrast was computed as the difference between the temperature of the reference area and the temperature of the defective area. 
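The thermal contrast computation described here is straightforward to express in code. The sketch below assumes the thermogram sequence is available as a 3-D array (frames x height x width) of temperatures and that the reference and defect regions are given as user-selected slices; these names and the array layout are assumptions made for illustration, not part of the authors' software.

import numpy as np

def thermal_contrast(frames: np.ndarray, ref_region, defect_region) -> np.ndarray:
    """Thermal contrast Tc(t) = T_ref(t) - T_defect(t) for a thermogram sequence.

    frames        : array of shape (n_frames, height, width) with temperatures in deg C
    ref_region    : (row_slice, col_slice) over a sound (non-defective) area
    defect_region : (row_slice, col_slice) over the suspected defective area
    Returns a 1-D array with one contrast value per frame.
    """
    t_ref = frames[:, ref_region[0], ref_region[1]].mean(axis=(1, 2))
    t_def = frames[:, defect_region[0], defect_region[1]].mean(axis=(1, 2))
    return t_ref - t_def

# Example with a synthetic 300-frame sequence, matching a 9 Hz acquisition over ~33 s.
seq = 20.0 + np.random.rand(300, 240, 320)
tc = thermal_contrast(seq, (slice(10, 30), slice(10, 30)), (slice(100, 120), slice(100, 120)))
print(tc.shape)  # (300,)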
In active thermography, the defective areas are detected as areas having a lower temperature. In the transmission setup, the highest thermal contrast of more than 4 \u00b0C has been observed for the wrinkle in the fourth layer (point P2). The wrinkle in the seventh layer presented a little lower value of Tc. The wrinkle on the top layer also revealed the visible difference between the defective and the intact area. The thermographic results of the reflection IRT setup presented in c. The wrinkles have been detected; however, only the slight difference between the position of the wrinkles has been revealed. Here, the highest thermal contrast is lower than in the transmission setup. In the reflection mode, the indication of wrinkle position on the temperature profile is not so clear as in the transmission mode. The temperature profile revealed the wrinkle in the seventh layer (above point P1), and in the fourth layer (above point P2), but the wrinkle on the top layer (above point P3) was not so obvious\u2014The main goal of this work was concerned with the effectiveness of thermography in the case of wrinkle detection in GFRP multilayered composite structures. Firstly, the influence of material type and the depth of the defects in the composite plates on the thermal response has been studied. The two types of glass woven weave, namely twill and satin, having subsurface wrinkles with different localization, have been prepared. In the 8-layer composite plates, the manually induced wrinkles were placed in the fourth and seventh layers. In the case of the twill sample, another wrinkle was made on the top layer caused by folding the vacuum bag when the air was pumped out\u2014c value is observed for the deeper wrinkle in the fourth layer where the maximum value is lower than 3 \u00b0C\u2014For satin plate thermograms, point P1 indicates the seventh layer wrinkle, whereas point P2 shows the fourth layer wrinkle\u2014The comparison between temperature distribution vs. time of the transmission and reflection thermographic setup revealed diverse heating and cooling behavior of plates. The possibility of heating the plates was lower in the case of the reflection mode. In this case, the achieved maximum temperature during the 30 s of heating was about 10 \u00b0C lower for the twill plate and about 6 \u00b0C lower for the satin plate. Additionally, the heating charts obtained from the applied IRT setups are different. In the transmission mode, after heating, the natural cooling of samples is slow, with smooth characteristics of temperature decay, whereas, in the reflection mode, the temperature after heating decreases quickly in the first part of the cooling, and next, the speed of cooling is much lower.For the assumed 8-layered plates made of GFRP, the manufactured twill sample possesses lower thickness and higher fiber volume fraction compared with the satin sample.The section of a turbine blade with a vertical axis of rotation containing post-manufacturing wrinkles has been investigated with the use of both transmission and reflection IRT setup configurations. The measurements were conducted with an IR camera frame rate of 30 Hz setting the heating time at 20 s and the whole time of analysis at 50 s. The temperature profile measured at the gel coat-covered part of the section was lower than for the structure without a finishing surface. However, characteristics of the profile concerning detected wrinkles have been very similar. 
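A temperature profile of the kind discussed above can also be extracted and screened for wrinkle signatures with a few lines of code. The sketch below samples one row of pixels from a single thermal frame and flags local temperature dips as candidate defect locations; the use of scipy.signal.find_peaks on the negated profile and the prominence threshold are illustrative assumptions, not the procedure used in the paper.

import numpy as np
from scipy.signal import find_peaks

def profile_dips(frame: np.ndarray, row: int, prominence_degC: float = 0.5):
    """Return the temperature profile along one image row and the column indices
    of local temperature dips, which may indicate subsurface wrinkles.

    frame : 2-D array of temperatures in deg C for a single thermogram.
    """
    profile = frame[row, :].astype(float)
    # A dip in temperature is a peak of the negated profile.
    dips, _ = find_peaks(-profile, prominence=prominence_degC)
    return profile, dips

# Example on a synthetic frame with an artificial cool band around column 160.
frame = np.full((240, 320), 35.0)
frame[:, 150:170] -= 2.0
profile, dips = profile_dips(frame, row=120)
print(dips)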
The heating source localized at the top of the analyzed structure caused non-uniform heating of the section. However, the internal stiffening rods disturbed the wrinkle localization, and the defect analysis was cumbersome. Therefore, the correct analysis should consider the information on the internal structure of the section regardless of the heating source localization. The results demonstrate the capability of infrared thermography for wrinkle detection in different structures made of GFRP composites. Different measurement techniques have been analyzed to verify the effectiveness and accuracy of the damage detection process. Two different glass fiber weaves of the fabric have been analyzed in this research. The influence of the weave type on the damage detection results has been negligible. The qualitative assessment of the wrinkle detection based on the obtained thermographic results was similar for both materials. From the measurement point of view, it should be noted that the greater thickness of the specimen made of satin fabric results in a lower temperature achieved during the heating process. However, the thermal contrast, measured as a difference between temperatures in reference areas and defective points, is similar for both glass fabrics. From the structural health monitoring and inspection point of view, this means that it is not necessary to adapt the methodology of the thermographic inspection to the type of fabric, because the effectiveness of the damage detection process is similar. The methodology of the thermographic inspection determines the effectiveness and quality of the damage detection results. The transmission IRT setup allows us to accurately determine the state of the analyzed structures. The straightforward thermal contrast parameter can also be applied in automated damage detection monitoring systems based on active thermography measurement, because the difference between healthy and defective structures is substantial and easy to interpret. The thermal contrast measured in the time domain requires reference characteristics and may be applied to the quality control of mass-produced items. The detailed analysis of the intensity of thermal contrast gives accurate information about the position, orientation, and dimension of internal defects. The thermal profile analysis provides satisfactory results without a reference structure, because each deviation from the temperature level may result from structural defects. Moreover, the detailed and local analysis of the defective areas can also be applied not only in qualitative damage detection and localization systems but also for quantitative assessment of the damage size, orientation, and localization between laminate layers. The results of the reflection IRT setup depend on the localization and the size of the wrinkle with respect to the heat source. The thermographic results of the composite plates with wrinkles demonstrate the problems with damage assessment in this measurement technique. It should be recalled that the subsurface wrinkles were oriented on the side opposite to the IR camera and the heating source. The measured parameters, namely the temperature and thermal contrast in the time domain and the temperature profile, have not allowed for an unambiguous interpretation in the damage detection context. 
In this case, the inspection of the composite part should be supported by the visual analysis of the temperature distribution during the heating and cooling process and the results presented by the specialized software for thermal image analysis. On the other hand, the analysis of the manufactured section of the wind turbine blade proved that the reflection measurement technique allows for obtaining satisfying results, especially in situations where the transmission configuration is not possible to apply or introduces many complications associated with the geometry of the analyzed structures. The thermographic results of the wing section partially covered by the gelcoat demonstrated lower temperatures achieved in the structures with the additional surface.In general, the additional layers applied on composites reduce the thermal flow during the active thermography inspection, which may decrease the effectiveness and accuracy of a damage detection system. However, in the analyzed case, the gel coat has not disturbed the quality of the damage detection process. The verification of the covered composite structures requires special attention and testing of the inspection parameters because, from a safety and reliability point of view, the crucial issue is to avoid missing substantial defects during thermographic research.The difference between reference point temperature and temperature on the detected wrinkles was similar in both parts of the section. The temperature profile also indicates a similar temperature disturbance caused by the wrinkle. Moreover, the gelcoat surface defects do not influence the damage detection of internal wrinkles. The damage in the gel coat, which may be identified by visual inspection, does not interfere with the wrinkles in the thermographic inspection, which is especially important from the practical application point of view. The thermographic inspection and quality control of the composite structures revealed internal defects, even if the visual inspection only identified surface damages. It is worth pointing out that the uniform heating process in active thermography is essential in the damage detection process, and it is difficult to achieve in complicated shapes of structures. The transmission IRT setup in the case of the real structure analysis demonstrates the typical problems in the real construction analysis. The internal stiffening elements of the structure may disturb the results and influence misinterpretation in the damage detection and localization context. Regardless of the localization of the heating source (inside or outside the structure), the correct interpretation of the thermographic results requires knowledge about the internal structure of the analyzed part.The presented results deal with wrinkle detection in GFRP structures by active thermography inspection in different measurement setup configurations. To summarize obtained results, the following conclusions can be formulated.The type of fabric weave has negligible influence on the results of the thermographic inspection. Despite the different thicknesses of the analyzed specimens resulting in different temperatures in the heating process, the thermal contrast indicating the defective areas is similar;Straightforward thermal parameters can be applied in structural health monitoring systems. The transmission IRT setup allows for damage detection and localization in composite structures. 
Detailed analysis of the heating source and IR camera localization and the observed temperature relation between reference and defective areas may also be applied to assess the damage size, orientation, and localization in the laminate layers;The reflection IRT setup is convenient for damage detection systems supported by the nondestructive testing software allowing for visual verification of the thermographic results;In our case, the gelcoat finishing surface has no substantial influence on the effectiveness of the damage detection process. However, additional coatings reduce the thermal flow during the active thermography inspection; thus, the testing of thermographic parameters is necessary to confirm the accuracy of conducted tests.Further research will focus on the application of active thermography in transmission and reflection configurations in the real blades of wind turbines with the horizontal axis of rotation .The quality control of horizontal axis wind turbine blades allows for the validation of presented results and the assessment of the active thermography of real constructions."} +{"text": "Implant-supported removable prostheses (ISrP) improve the quality of life, especially in patients who underwent mandibular reconstruction, but few studies have focused on the effect of ISrP in the fibular mandible on the function of the temporomandibular joint. The purpose of this pilot case series was to determine the usefulness of four-dimensional computed tomography (4DCT) images for the evaluation of differences in condylar movements with and without ISrP. Three patients who underwent ISrP following segmental mandibulectomy and free-flap reconstruction were evaluated. The participants were instructed to masticate a cookie during the 4DCT scan. The distance between the most anterior and posterior positions of the condyles on the sagittal view of the 4DCT images during the chewing of the cookies was measured and compared with and without ISrP. 4DCT revealed changes in the distances of condylar protrusion with and without wearing ISrP, but there were no obvious differences among the three patients. The 4DCT motion analysis was useful for the evaluation of the effect of wearing ISrP on condylar movements during mastication in patients with mandibular reconstruction and may become a useful objective evaluation method for the functional evaluation of ISrP. It is believed that dental rehabilitation with osseointegrated implants in patients who underwent jaw bone resection and reconstruction improves oral function, oral diet achievement, and oral health-related quality of life , 2. ImplRecently, it was reported that four-dimensional computed tomography (4DCT) can visualize the changes in mandibular movement in reconstructed mandibles , 6. The Three patients who underwent segmental mandibulectomy and simultaneous reconstruction with a free fibular osteocutaneous flap at Kobe University Hospital were enrolled in this study. Additionally, 4DCT examinations were performed as routine postoperative follow-up imaging to monitor tumor recurrence or osteoradionecrosis. The 4DCT examination was performed with an Aquilion ONE at the Kobe University Hospital . All images were acquired axially with a 320-detector row CT scanner to allow for multiple phases of unenhanced 3D volume acquisition with 16 cm coverage. The patient's forehead was fixed with\u00a0tape to prevent bodily movement. They were instructed to chew a cookie during the scan. CT scans were performed with and without ISrP. 
The gantry was angled to limit radiation to the eyes, and the inferior aspect of the field of view was tailored to minimize radiation to the thyroid. The exposure doses in all study participants were within the notification values recommended by the American Association of Physicists in Medicine (AAPM) Working Group on Standardization of CT Nomenclature and Protocols for 4DCT. For image post-processing, a volume rendering (VR) image was generated using commercial software. The maximum distance between the most anterior and posterior positions of the posterior portion of the condyle during mastication was measured using sagittal multiplanar reconstruction (MPR) images. The number of strokes during the scanning was counted. The percentage of the larger protrusion was also calculated. All measurements were performed by one oral and maxillofacial surgeon (author J.Y.) in a blinded manner. The patients provided written informed consent to participate in this study after receiving a full explanation of the purpose and structure of the study, which had already been approved by the Medical Ethics Committee of Kobe University (No. B200052). This article was previously posted to the medRxiv preprint server on April 27, 2020. Participants' characteristics are presented in the table and figure. The number of strokes during scanning and the percentage of larger protrusive distances above the mean values are listed in the table. No remarkable changes in condylar protrusion with or without ISrP were found among the three patients. Patient one had a history of segmental mandibulectomy, neck dissection, and fibula flap reconstruction for gingival cancer and did not undergo radiation therapy. Patient three had a history of concomitant chemoradiotherapy for oropharyngeal cancer and subsequent segmental mandibulectomy and fibula flap reconstruction for mandibular osteoradionecrosis. Although the surgical invasion of patient three was the lowest among the three patients, it was affected by irradiation. We found a similar condylar protrusion of the reconstructed side with and without ISrP in patients one and three, whereas the ranges of condylar protrusion of the non-reconstructed side were smaller in patients with ISrP than in those without ISrP in patients two and three. The number of missing teeth and the extent of the segments were similar in patients one and three, but larger in patient two than in patients one and three. Condylar movements during mastication are probably affected by the area and the number of missing teeth, irradiation, and the extent of defects of the mandible and the surrounding soft tissue, such as the masticatory muscle. ISrP in the reconstructed mandibles contributes to the improvement of patients' quality of life, whereas there is no strong evidence to support the effect of ISrP because of the lack of objective evaluation methods of function. The influence of the loss of posterior teeth on the condyle has been a controversial issue. It was previously reported that prosthetic rehabilitation significantly changed the condylar position in women with good general health. There are some important studies that have evaluated the effects of ISrP, such as the analysis by Roumanas et al. Finally, there were some limitations in the current pilot report. 
First, this report included only three patients whose primary diseases, treatment history, and extent of mandibular defects were heterogeneous, because the number of patients who underwent mandibulectomy and osseous reconstruction and subsequently completed ISrP was very limited. Although 4DCT could visualize changes in the distances of condylar protrusion with and without wearing ISrP, there were no obvious differences among the three patients. Further investigations are necessary to clarify the usefulness of the 4DCT motion analysis for the evaluation of condylar movements during mastication in patients wearing ISrP."} +{"text": "A significant increase has been observed globally in multi-centre trainee-led trauma & orthopaedic (T&O) research collaborative projects, with more emphasis being placed on tackling important research questions since the start of the COVID-19 pandemic. The objective of our analysis was to determine the number of trainee-led research collaborative projects in T&O in the United Kingdom that were started during the COVID-19 pandemic. A retrospective analysis was conducted to determine how many trainee-led national collaborative projects in T&O were conducted since the start of the COVID-19 pandemic lockdown (March 2020 to June 2021), and the number of projects identified was compared to the previous year (2019). Any regional collaborative projects, projects that were started before the onset of COVID and projects of other surgical specialities were not included in the study. There were no projects identified in 2019, while in the COVID pandemic lockdown we identified 10 trainee-led collaborative trauma & orthopaedic projects, with six of them being published with a level of evidence of three to four. COVID was unprecedented and has placed considerable challenges across healthcare. Our study highlights an increase in multi-centre trainee-led collaborative projects within the UK and underlines the feasibility of such projects, especially with the advent of social media and REDCap®, which facilitate recruitment to new studies and data collection. During the previous year, there has been an upsurge of trainee-led orthopaedic research collaborative projects globally. The concept of trainee-led collaborative projects is not new in the UK, and one such project was a two-year study in the 1980s looking at measles, which was conducted by Royal College of General Practitioners trainees. The recent pandemic has been a driving force for an increase in the number of collaborative projects in the United Kingdom. There has also been an increase in the enthusiasm of trainees to get involved in research and to collectively answer important clinical questions. The National Research Collaborative (NRC) was established, which acts as an umbrella organisation that facilitates multiple collaborative groups and networks and aims to promote participation among various specialities, including general surgery and trauma and orthopaedics. 
To promote this, the NRC has guides that aim to help set up projects that are collaborative focusing on the key operational and organizational principles.The value and importance of the recent rise of successful Orthopaedic trainee Led research projects in the past year are always discussed with arguments both for and against.In this study, we aim to evaluate the number of trauma & orthopaedic trainee-led research collaborative projects that took part since the start of the COVID-19 pandemic in the UK, exploring the value and feasibility of such collaboratives in driving forwards clinical academia.A systematic search was done online using key phrases \u2018trainee research\u2019, \u2018trainee-led collaborative\u2019 and \u2018Orthopaedic trainee-led research collaborative\u2019. Websites such as Association of Surgeons in training (ASIT), Twitter, British Orthopaedic Training Association (BOTA) and British Orthopaedic Association (BOA) were also checked for any trainee-led collaborative projects that were posted.The timeline for data collection had been from March 2020 to January 2021, essentially the Covid Lockdown period in the United Kingdom.All of the projects identified in this study were done in the United Kingdom.Any regional collaborative projects, projects that were started before the COVID-19 and projects that involved other surgical specialities were excluded from the study.The number of projects identified was also compared to that published in 2019.All conference abstracts and proceedings were also excluded from the analysis.All of the publications were identified from website listings of collaboratives and searches done on PubMed using the project names.All of the publications which were PubMed-indexed were included in the study.Using the Health Research Authority decision tool, this study was not deemed to need ethical approval.Ten trainee-led collaborative trauma & orthopaedic projects were included in the analysis with six being published in peer reviewed journals. The level of evidence ranged between three and four. There were five audits and five cohort studies. The patients that were included in the studies ranged from 927 to 140,231 with Supraman Collaborative study with the lowest number of recruited patients.Comparing to 2019, we were not able to identify any trainee-led collaborative projects. Collectively 2249 centres participated in collaboratives projects with the maximum number of centres participating in one study was 1674 and lowest number was 38 centres. Almost all of the collaboratives recruited centres by the use of social media, twitter, or by networking in conferences and the use of the British Orthopaedic Trainee Association.The data collection method used most was by using REDCAP (Research Electronic Data Capture), which is a web application which manages databases and online surveys in a secure encrypted manner, and confidential excel sheets were also used .st 2020) and collected using a secure online database (Redcap). Conclusion of the study was as there was a higher mortality in patients after seven week delay with ongoing symptoms compared to those patients who were asymptomatic and whose symptoms had resolved. This research resulted in four PubMed cited publications in various high impact journals.11COVID Surg Collaborative Surg Week was a cohort study which was international, multicentre, and prospective in nature and included patients undergoing any type of surgery. 
This was initiated by the NIHR Global health research unit on Global surgery, Birmingham UK and the aim of the study was to get more data as to update clinical practice during the Covid Pandemic regarding the importance of identification of symptomatic vs. asymptomatic SARS-COV-2 preoperatively along with the determination of the optimal timing of surgery following an infection. One Hundred forty thousand two hundred and thirty one patients were included in the study with 1674 hospitals in 116 countries. Data was collected up to four blocks of seven consecutive days and 1st February to 14th March 2021 (prospective). Collection of data was done on a Excel spreadsheet and analysis of the data was done against the BOA standard (Full weight bearing for activities of daily living). Hospital-by-hospital variation was calculated for fracture location and description, type of operation, seniority of operating surgeon and duration of restricted weight bearing. Nineteen thousand one hundred fifty three patients were included in the study with 81 hospitals taking place with 430 collaborators. The Conclusion of the study was that there was a large difference noted in the overall percentage of Neck of femur fractures vs non neck of femur fractures fragility fracture patients that were allowed to fully weight bear after surgery. Outside of the femur, the most common fracture locations for non-Neck of femur fragility fractures are foot and ankle fractures and proximal tibia fractures. This audit was published with a proposal for further research to enable surgeons to feel confident to make non-Neck of femur fragility fractures to become Fully weight bear after surgery.Chlorhexidine Gluconate versus Povidone-Iodine Skin Antisepsis Prior to Upper limb Surgery (CIPHUR) was a prospective National service evaluation. The aim was to compare local practise of antiseptic use and compare them to the standards outlined by NICE which recommended using alcoholic Chlorhexidine gluconate (CHG) for preoperative skin preparation to reduce the risk of Surgical site infections. The inclusion criteria were any adult or children identified prior to any form of surgery distal to the shoulder joint while the exclusion criteria were any active infection at the time of upper limb surgery. The audit included 2,454 patients and collaborators from 92 centres participated and data was recruited via Excel sheets. The Systematic review and network meta- analysis of antiseptic in clean surgery and the national survey of clinical practice and clinician opinion was done and is pending publication.(Patient related outcome measures) practice within elective orthopaedics. 38 enrolled trusts across nine regions participated in the national audit. The conclusion of the audit was that standardization of PROMS practice is required across all orthopaedic procedures as there is limited consensus and wide variation in their usage. However, the integration of PROMS within Best Practise Tariff has encouraged PROMs uptake and consistency. The audit has not been published yet.Evaluating the Measures in patient reported outcomes, values and experiences (emprove) was a national collaborative audit of elective orthopaedic clinical practise conducted by the south west orthopaedic division (SWORD). The audit was against national society standards with the aim to assess concordance with standards and define the configuration of current PROMS st of January 2019 of December 2019. 
Exclusion criteria were open fractures, undisplaced fractures, if there was a separate acute but distinct fracture present on the ipsilateral upper limb or if primary surgical intervention occurred over three weeks from the time of injury. Data collection was done via Redcap. Nine hundred twenty seven patients were identified as undergoing surgical intervention for a displaced supracondylar elbow fracture across 42 hospitals. The conclusion of the study was that the majority of displaced supracondylar fractures of the elbow are operated on during daytime hours, with most being performed the day after injury. Varying surgical techniques are utilised across the United Kingdom. Overall, two wires in a cross configuration were most commonly utilised with the wires left percutaneous. However, Paediatric Orthopaedic specialists appear to prefer lateral only fixation. A very low rate of deep infection and revision surgery for displacement was noted in both crossed and lateral only fixation patients. A manuscript has been prepared and sent to a journal for consideration of publication.Supracondylar fracture management was a retrospective trainee led national evaluation which was designed and led by members of the South West Orthopaedic Research division (SWORD). The primary objective was to identify the surgical practice and post-operative care being provided for displaced supracondylar elbow fractures across the United Kingdom. Secondary objectives included the identification of patient characteristics, common mechanisms of injury, fracture types undergoing surgical intervention, and significant post-operative complications associated with the management of these injuries. Recruitment was done across the United Kingdom through promotion of Twitter. Inclusion criteria were patients that were less than 16 years of age with a displaced supracondylar fracture of the elbow of any time confirmed on X-ray that required acute surgical intervention between the 1It was a cross sectional study which was survey based conducted in 87 centres in the UK from November 2020 to April 2021. The aim of the study was to investigate the availability of radiological imaging (MRI) directly from the Emergency Department and minor injury units and whether pathways were made for suspected scaphoid fractures. Recruitment was done via previous projects, social media (twitter) and clinician contacts. All centres that regularly treated acute wrist trauma were included in the study. The study showed that only a fraction of hospitals across the UK offers MRI directly from minor injury units and emergency departments when suspecting a scaphoid fracture. The results of the study have been published in the Bone and joint journal on the Nov. 29, 2021.13st January to June 30th 2019. The main objective was to determine the outcomes and management of complex ankle fractures in the United Kingdom and compare them to the BOAST guidelines for the management of complex ankle fractures. All adult patients with complex ankle fractures (Ao43/44) which were open or closed were included in the study. These complex fractures included patient with diabetes with or without neuropathy, rheumatoid arthritis, alcoholism, polytrauma and cognitive impairment. Fifty-six centres participated in the study and data from 1360 patients were collected. The study concluded that 9% of patients were managed with a Hind foot nail and were also noted to have the most complications. 
Only 21% of patients were allowed to weight bear fully after the procedure despite BOAST guidance. The Results have not been published.Hindfoot ankle reconstruction Nail trail (HARNT) was a national collaborative study in affiliation with British Orthopaedic Trainee Association and data was collected retrospectively between 1Evaluation of practice patellofenoral instability collaborative was a BASK trainee collaborative retrospective national audit that was performed over a five years period. The Audit was to evaluate which procedures and in which combination are being used to surgically manage patellofemoral instability in the United Kingdom and these were compared to the British Orthopaedic Association surgical management of recurrent patellar instability.Data was collected by the help of coding departments and theatre records and were analysed and on excel sheets. Fifty sites across the United Kingdom participated with 3,639. The study showed that the surgical management of Patellofemoral instability varies across the country but as new national guidelines are implemented a re-audit of practice should be done. This has not been published.Pansurg Predict was an International Observational Cohort study which was retrospective and prospective in nature. This study was sponsored primarily by Imperial College London with the primary aim to measure the risk associated with patients presenting to hospital with a surgical pathology during the pandemic. Secondary aim was to create a dynamic risk prediction model. Collaborators were recruited via social media, Twitter and Data collection was done by REDCAP.th March 2020, 5th April 2020, 20th May 2020 and 26th June 2020.Data was collected retrospectively and prospectively and the data was collected in such a way that it coincided with the publications by the Royal Colleges of Surgeons Guidance for surgeons working during COVID-19 pandemic on 20th March 2020 to30th August 2020 were included in the study. There were 55 Participating centres from 18 countries.All patients who presented to hospitals with an acute orthopaedic pathology during 9The study showed that the capacity of the operating room declined by 63.6% along with a decline of surgical staff by 27.2%. The results were published in the Annals of Surgery on 2021.14This was a national, multicentre observational cohort study with an aim to assess the safety of upper extremity surgery and to assess the 30 days mortality of patients. The secondary objectives were any complications related and unrelated to SARS COV-2 and any hospital safety processes that were in place. Data collection was between 01/04/2020 to 14/04/2020. The collaborators were recruited by social media and data was collected on a standardised encrypted excel spreadsheet. About 74 centres participated in the study with 1093 patients being recruited into the study. The study showed that Complications related to Sars-cov-2 were 0.18% for upper limb surgery with zero deaths when patients were operated on the same day as admission.The study also suggested that surgery should not be delayed whilst waiting for the results of the Sars-Cov-2 test for any upper limb day case surgery. The study was published in the BMJ quality and safety Journal on April 2021.15There is little literature available highlighting the importance of Trainee led collaborative research projects and no scientific analysis such as our study has been carried out in the past. 
Our analysis provides the current direction and activity within the UK trainee collaborative movement however even though larger studies with multiple centres and high recruitment of patients are being undertaken we have noted that only six studies have contributed to the literature by publishing in peer review journals.Our Analysis is suggestive of an increased ambition of the various groups in the UK with more prospective/retrospective national studies. Multiple regional orthopaedic trainee organizations such as British Orthopaedic Training Association (BOTA), south west orthopaedic division (SWORD) have taken an initiative to ensure more multi-collaborative projects are started. These Trainee bodies are pivotal in promoting such collaborations in the United Kingdom and encourages the expansion of such collaboration in other specialties. Our study also highlights an expanding footprint in literature by these UK trainee collaborative research projects which are of respectable quality directly impacting clinical practise.16The main contributory factors of the increase in UK trainee led collaborative projects are a highly ambitious trainee body within a postgraduate surgical training system boosted by the COVID Pandemic. We feel that the surge of such collaborative projects especially during the pandemic are due to the fact that the junior doctors understood the importance of being involved in high yield research and contributing to and improving the current evidence-based practise of common pathologies. We feel Covid was a trigger for junior doctors to get involved in more research and maybe this was due to the fact that there was less operating due to the COVID pandemic. An increased enthusiasm has also been noticed after the Royal College of Surgeons Clinical Trials Initiative have established a wide network of trial centrals across the United Kingdom.17Our analysis also notes the importance of social media for recruitment and programs such as Twitter. These have been instrumental for the success of such multicentre collaborative projects as due to the pandemic there was less chances of networking in conferences, courses, workshops, and other events.REDCAP is growing in popularity in the multicentre collaborative projects as it is a centralized online database. They provide an accessible, secure and affordable approach to database and statistician access and this ensures the long-term success of the trainee collaborative movement.However, the most core aspect of the future success of more Multicentre collaborative projects of trainee led collaborative projects is ensuring the enthusiasm of the Trainees is maintained through open participation and ensuring fair recognition of the involvement of trainees. In all of projects included in the study the method of recognition has been the use of a collaborator status in peer reviewed journals.The introduction of Collaborative Trainee led Research projects is essential across the world for trainees as they can develop and sharpen their research methodologies and especially in countries, like Pakistan, where trainees are not satisfied with their development of research skills.20The limitations of this study stem from the small sample size and the lack of representation from other countries. 
However, we feel that after this research is published and the advantages seen, there will be an increase of impact caused by Trainee led collaborative projects in Europe and worldwide.Orthopaedic Surgical trainees in the United Kingdom have been instrumental in the development of an innovative and valuable model for healthcare research despite COVID 19 having placed significant challenges. This is proven by the fact that there has been a significant increase in such collaborative Orthopaedic Trainee led projects in the United Kingdom in the Covid Lockdown.TK: Designed, Collected, analysed data, write up of paper.RK: Analysis and review of paperUA: Supervisor, final check and responsible and accountable for the accuracy or integrity of the work."} +{"text": "The journal retracts the 2021 article cited above.Following publication, concerns were raised regarding the contributions of the authors of the article. Our investigation, conducted in accordance with Frontiers policies, confirmed a serious breach of our authorship policies and of publication ethics; the article is therefore retracted.This retraction was approved by the Chief Editors of Frontiers in Public Health and the Chief Executive Editor of Frontiers. The authors have not responded to correspondence regarding this retraction."} +{"text": "Over the last decades, the non-linearity of natural objects has been shown to be suitable to the application of fractal analysis. By extending the principles of fractal geometry to the study of biology and to the human body, fractal physiology is receiving considerable research attention. The human body, including its most complex structure, i.e., the brain, is characterized by recursive systems and complex networks. Today it is well proven that the structural organization of the brain, the architecture of neural and vascular networks, and the dynamics of functional activity, amongst others, have fractal properties . AdvanceChanges of objectively observed fractal characteristics are markers of age-related alterations, including the ones seen in normal and pathological aging and mental disorders as well. A change in the structural complexity, quantified by means of the fractal dimension, amongst other parameters, and of the dynamic patterns of activity of the brain and retina occur in diseases of various etiologies, including neurodegenerative disorders, brain tumors, cerebrovascular diseases, visual system\u2019s disorders, and cognitive impairment during stress, chronic anxiety or depression. Moreover, the patterns of physiological and psychological responses to fractal stimulation therapy open the way for the formation of new therapeutic strategies.This Issue is dedicated to computational fractal-based methodologies in medicine and applications of the achievements of fractal physiology in the diagnosis of the brain and retina disorders. An objective assessment of the fractal complexity of the structural and functional organization of the brain can provide new tools useful for differential diagnosis, novel prognostic tests to assess the brain and retina disorders and criteria to monitor the effectiveness of the therapeutic interventions.Rowland et al. raises the issue whether pathological states of neurons might affect this fractal branching optimization. 
Using confocal microscopy to obtain images of CA1 pyramidal neurons and constructing 3-dimensional models of the dendritic arbors, the authors found that control rats exposed to an enriched habitat and training in spatial memory showed higher dendritic branching complexity and connectivity compared to other conditions. Rats with or without anterior thalamic nuclei lesions both optimized the connectivity with respect to the material cost of competing connections. The authors used an improved technique for characterization of the fractal dimension of the dendritic pattern. The fractal dimension values used to optimize these connections did not differ between the groups of rats with or without lesions, perhaps due to the small morphological differences induced by these injuries. However, the results show that successful application of this method to characterize the dendritic pattern can become a promising diagnostic tool to help detect the neuronal pathological states that affect fractal optimization. Fractal dimension of the neuronal dendrites' arborization was recently shown to be related to functional constraints - the need and ability of dendrites to compete to make connections to other neurons. Sánchez and Martín-Landrove employed a method based on dynamic quantum clustering on contrast-enhanced MRI of brain tumors to describe different morphological parameters, including fractal dimension and lacunarity, growth dynamics exponents, regularity measures, and parameters derived from multifractal analysis. The tumor surface regularity and the one-dimensional and bi-dimensional fluctuations of the tumor interface were analyzed by Detrended Fluctuation Analysis. Tumor interface fractal dimension, local roughness exponent and surface regularity were shown to be parameters that can discriminate between gliomas and meningiomas/schwannomas. 
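As a concrete illustration of how a fractal dimension of the kind used for dendritic arbors and tumor interfaces can be estimated, the sketch below implements a basic 2-D box-counting estimate on a binary image. It is a generic textbook method shown here for illustration only, not the specific pipelines used in the cited studies; the grid of box sizes is an arbitrary choice.

import numpy as np

def box_counting_dimension(binary_img: np.ndarray, sizes=(2, 4, 8, 16, 32, 64)) -> float:
    """Estimate the box-counting (Minkowski) dimension of a 2-D binary pattern.

    The pattern is covered with square boxes of decreasing size, the number of
    occupied boxes N(s) is counted, and the dimension is the slope of
    log N(s) versus log (1/s).
    """
    img = binary_img > 0
    counts = []
    for s in sizes:
        # Trim so the image tiles exactly into s x s boxes, then count occupied boxes.
        h, w = (img.shape[0] // s) * s, (img.shape[1] // s) * s
        blocks = img[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Example: a filled square should give a dimension close to 2.
demo = np.zeros((256, 256), dtype=np.uint8)
demo[64:192, 64:192] = 1
print(round(box_counting_dimension(demo), 2))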
The authors showed the applicability of MRCSA to EEG recordings by analyzing fractal connectivity at rest and under the condition of word generation. More and more attention is now being paid to the study of fractal functional connectivity in the brain. Scale-dependent interactions make such connectivity difficult to analyze. The assessment of the extent of the narrowband oscillatory components in combination with the broadband fractal component present in empirical data is of great importance, given that the fractal and oscillatory signal components may reflect different underlying processes. The versatile applications of fractal analysis to the neurosciences, drawing on the accumulated empirical data and existing theoretical models, make it possible to identify promising future applications in the clinical setting and open new promising areas of research."} +{"text": "The guttural pouch of the horse is a diverticulum of the auditory tube and has a complex anatomical structure. Disease of the guttural pouch can lead to neurological signs and hemorrhage due to its close relation to major vessels and some cranial nerves. Endoscopy allows direct visualization of the pouch and is considered to be the gold standard for its evaluation. Nevertheless, diagnostic imaging can bring useful additional information, and this review article describes the value of each technique and the main imaging findings to be expected in the diagnosis of guttural pouch disease. The most common diseases of the guttural pouch are empyema, tympany, mycosis and temporohyoid osteoarthropathy. The challenge in the diagnosis of guttural pouch diseases lies in the complex anatomy of the guttural pouch and adjacent associated structures. Diagnostic imaging is a good complement to endoscopy for the diagnosis of some guttural pouch diseases, especially to make a full assessment of the lesions involving the pouch and surrounding structures. This review article describes the value of each diagnostic imaging technique in the diagnosis of guttural pouch disease and the corresponding imaging findings. Radiography is generally used as the first line to complement endoscopic findings, and can give useful additional information although it is limited by superimposition. Ultrasonographic examination of the guttural pouch is of limited value due to the presence of gas in the guttural pouch, but can occasionally be used to detect fluid within the pouch or to help evaluate the soft tissues located lateral and ventral to the guttural pouch. Cross-sectional imaging, especially CT, is increasingly available and appears to be the best technique to fully assess the surrounding soft tissues and to precisely identify lesions of the temporohyoid apparatus, temporal bone and skull base that are associated with guttural pouch disease. The guttural pouch is a diverticulum of the auditory tube that connects the pharynx to the middle ear bilaterally. This diverticulum has a complex anatomical structure, consisting of a cavity divided by the stylohyoid bone into a small lateral recess and a larger medial recess. Radiography is generally the first-line imaging modality used to complement endoscopy for the diagnosis of guttural pouch diseases. Ultrasonography of the region of the guttural pouches is of limited value due to the presence of air in the guttural pouch and the vertical ramus of the mandible, and the consequent inability to image through them.
Cross-sectional imaging, especially CT, is becoming increasingly available for imaging horse heads, particularly with the development of standing CT, which eliminates the need for general anesthesia and reduces costs. Empyema is the most common disease of the guttural pouch and can be caused by upper airway infection or drainage of abscesses of the retropharyngeal lymph nodes into the ipsilateral pouch [3,8,10]. Guttural pouch tympany is a condition of young horses that results from trapped air within the guttural pouch due to an excessive amount of tissue at the pharyngeal orifice that allows air to enter the guttural pouch during deglutition but prevents it from exiting. Bleeding within the guttural pouches is most frequently due to guttural pouch mycosis or injury of the rectus capitis and longus capitis muscles that attach to the skull base. Basilar skull trauma with injury and avulsion of the rectus capitis and longus capitis muscles occurs when the horse falls over backward [15,16]. Temporohyoid osteoarthropathy is a disorder characterized by osseous proliferation of the temporohyoid joint that can lead to neurological disorders, especially facial nerve deficit [18,19]. MRI may also be useful in select cases to diagnose temporohyoid osteoarthropathy [9,17]. The most common cause of a mass effect involving the guttural pouch is adenomegaly of the retropharyngeal lymph nodes secondary to guttural pouch empyema. Even if endoscopy of the guttural pouch remains the gold standard for identifying most guttural pouch diseases, diagnostic imaging, especially radiographic and CT examinations, and occasionally ultrasound, can be very useful to complement clinical examination and endoscopic evaluation for the diagnosis of some diseases of the guttural pouch in order to precisely determine the prognosis and adapt the treatment."} +{"text": "In normal anatomy, the anterior tibial artery is typically the first branch of the popliteal artery before it becomes the tibioperoneal trunk. The normal course of the anterior tibial artery includes piercing through the interosseous membrane and continuing through the anterior compartment. It then continues onto the dorsum of the foot as the dorsalis pedis artery at the level of the malleoli.\u00a0We describe a unique case of an anomalous origin of the dorsalis pedis artery from the peroneal artery. It is important for vascular surgeons to be aware of this variant while interpreting arteriograms of the lower extremity. It can be easily misinterpreted as an occluded distal anterior tibial artery with reconstitution of the dorsalis pedis artery from the collaterals. The leg, foot, and ankle receive arterial blood from the popliteal artery and its associated branches. A complete understanding of normal anatomy is crucial to the treatment of vascular disease. In the posterior region of the proximal tibiofibular joint, the popliteal artery gives off the anterior tibial artery and continues as the tibioperoneal trunk. Distally, the anterior tibial artery passes deep to the extensor retinaculum and continues onto the dorsum of the foot as the dorsalis pedis artery (DPA) at the level of the malleoli. This location also serves as an anatomical landmark for pulse palpation.\u00a0In addition to a robust understanding of normal lower extremity vascular anatomy, an awareness of anatomic variants such as those described in this report is necessary for application in vascular, orthopedic, and radiologic medicine.
In this report, we describe an anomalous origin of the dorsalis pedis artery from the peroneal artery which holds important clinical implications including interpretation of arteriograms, estimation of disease burden in peripheral arterial disease, and revascularization planning.Cadaveric dissection of a 94-year-old male who died of end-stage cerebral infarction revealed a unilateral anomalous origin of the right dorsalis pedis artery.\u00a0Examination of the lower extremity revealed no scars suggestive of prior operations. The right popliteal artery entered the posterior compartment of the leg from the popliteal fossa and gave rise to the right anterior tibial artery and the right tibioperoneal trunk. The path of the right anterior tibial artery was traced through the proximal oval aperture in the interosseous membrane to reach the anterior compartment of the leg Figure .\u00a0The right tibioperoneal trunk provided the right posterior tibial artery medially and the right peroneal artery laterally. The right anterior tibial artery entered the anterior compartment of the leg and was identified intermediate to tibialis anterior muscle and extensor hallucis longus muscle running longitudinally with the right anterior tibial veins and the right deep peroneal nerve. The right anterior tibial artery became hypoplastic and terminated as muscular and fascial branches. The right peroneal artery descended in the posterior compartment of the leg deep to the transverse intermuscular septum. It was then visualized passing through the distal oval aperture in the interosseous membrane to enter the anterior compartment of the leg as the perforating branch of the peroneal artery. The right perforating branch of the peroneal artery bifurcated into a smaller medial branch and a larger lateral branch. The medial branch was seen anastomosing with the anterior tibial artery. This is where the anterior tibial artery becomes hypoplastic and terminates. The lateral branch passed deep to the tendons of fibularis tertius and extensor digitorum longus muscles to continue as the right DPA Figure .\u00a0The right DPA continued distally alongside the right dorsalis pedis veins and the right deep peroneal nerve on the dorsum of the right foot.Anatomical variations of the DPAVariations have been identified in the origin, course, and branching pattern of the DPA. Anomalous origin of DPA from the perforating branch of the peroneal artery has been described before in both cadaveric and arteriogram studies. According to Hemamalini and Manjunatha, an anomalous origin of DPA can occur bilaterally or unilaterally . Three iVazquez et al. studied the blood supply of the foot and ankle using a sample of 150 human embalmed cadavers. It was noted that 287 cases (95.7%) displayed a normal continuation of the anterior tibial artery to DPA distal to the talocrural joint. In these cases, the DPA was identified lateral to the tendon of the extensor hallucis longus and medial to the tendons of the extensor digitorum longus. Six cases (2%) demonstrated a lateral deviation of the anterior tibial artery as far as the lateral malleolus. They reported an enlarged perforating branch of the peroneal artery continuing as DPA in four out of 300 limbs, for an incidence rate of 1.3%. 
Lastly, three cases (1%) demonstrated the origination of an additional lateral branch from the anterior tibial artery that replaced the perforating branch of the peroneal artery. This anomaly was additionally studied by Keen via cadaveric dissection of 140 subjects, or 280 limbs; an anomalous origin was discovered unilaterally 12 times and bilaterally once, demonstrating a 5% incidence. Clinical implications: Awareness of anatomical variations in the origin, course, and branching patterns of the DPA is important for vascular surgeons, orthopedic surgeons, and radiologists. Documentation of the dorsalis pedis pulse via palpation or Doppler is useful in evaluating anterior compartment syndrome and peripheral arterial disease (PAD) [10]. Embryonic origin: Variations in the branching pattern of foot and ankle vasculature can be attributed to disruption of normal embryonic development. Even though anomalous variations in lower extremity vascular anatomy have been described in the literature, they have been largely ignored within medical education. Vascular practitioners are often faced with recognition and treatment of anomalous vascular anatomy without adequate background education or understanding of the possible variants. This report describes a key variation\u00a0of lower extremity vasculature by highlighting the peroneal origin of the DPA. Recognition of this variant has numerous clinical applications including radiologic interpretation of arteriogram studies, palpation and bedside Doppler of pulses, and planning of revascularization efforts."} +{"text": "Psychiatry has changed a lot during the last decades, and a lot of effort has been made to ensure that the treatment of schizophrenia spectrum disorders is in line with modern medical science. Therefore, in everyday practice the psychiatrist is in a constant process of deciding when to start pharmacological treatment of early psychosis and at what dosage. The lecture will include evidence-based treatment options, aspects of clinical practice and the power of shared decision-making in psychiatry. None Declared"} +{"text": "The nerve supply of the distal part of the hindlimb is very important for the motor and sensory function of the hindlimb. The dromedary camel is historically and currently a very important species for transportation, riding, and racing in many countries. Therefore, understanding the structural components of its limbs is highly important for clinical and surgical purposes. The nerve supply of the distal part of the hindlimb has been discussed in a few domestic species; however, little is known about the nerve supply in the dromedary camel. This study aimed to show the anatomical structure of the nerve supply of the distal part of the hindlimb. Dromedary hindlimbs were collected from a slaughterhouse. Then, they were fixed using 10% formalin. Subsequently, dissection was performed to show the group of nerves that supply the hindlimb\u2019s distal portion. The results show the branches of the superficial fibular nerve and tibial nerve. It is very important to understand the nerves that supply the distal part of the hindlimb for anesthesia of the skin, tendons and joints. This study aimed to describe the anatomy of the nerve supply of the hindlimb\u2019s distal portion in a dromedary camel\u2019s foot. In our study, we used ten adult slaughtered dromedary camels of different sexes and ages (4\u20136 years). The hindlimbs were preserved using 10% formalin for about one week.
The distal part of the hindlimb of the camels was dissected with extreme precision to show the group of nerves responsible for the nervous supply to the distal part of the hindlimb in dromedary camels. This study shows the numerous branches of the superficial fibular nerve along its extension to the dorsal surface of the metatarsus and the abaxial aspect of the third digit. The results show that the tibial nerve possesses many branches along its extension to the skin of the plantar surface of the metatarsus. Additionally, it supplies the axial and abaxial plantar surfaces of the fourth digit and the interdigital surfaces, as well as giving branches to the plantar-abaxial and plantar-axial aspects of the third digit. The present study shows the anatomical nerve supply of the hindlimb\u2019s distal portion, which is essential for anesthesia and surgery in this region. The origin and distribution of the sciatic and femoral nerves have been studied in bovines [2] and sheep. Knowledge of the position and distribution of nerves in the distal parts of limbs is of great importance, especially for treating tendonitis, osteoarthritis and sesamoiditis in dogs. For this work, we used twenty distal hindlimbs of ten freshly slaughtered adult dromedary camels of different sexes and ages (4\u20136 years). The specimens were obtained from a typical Buraydah slaughterhouse, Qassim Region, KSA. The hindlimb samples were fixed in 10% formalin solution for about one week. Then, samples were washed with distilled water, and the skin was removed before dissection. In order to remove the tissue around the nerves of the distal hindlimb, the nerves were gently massaged with gauze pieces bathed in 1% glacial acetic acid [21]. The distal part of the hindlimb in camels receives its nerve supply from the terminal branches of the ischiatic nerve, namely the common fibular and tibial nerves. The common dorsal digital nerves (Nn. digitales dorsales communes III and IV) give rise to proper digital branches that continue along the medial surface of the fourth (IV) digit, the lateral surface of the third (III) digit, and the medial surface of the third (III) digit, supplying the axial and abaxial aspects of the III and IV digits. A branch of the lateral plantar nerve at the fetlock joint continues on the lateral aspect of the IV digit as the abaxial plantar proper digital nerve of the IV digit and supplies the plantar-abaxial aspect of the fourth digit\u2019s skin. Earlier work recorded that the superficial fibular nerve divides into the common dorsal nerves of the III and II digits. The common dorsal digital IV nerve continues as the abaxial dorsal proper digital nerve of the IV digit. These results are consistent with the previously reported locations in camels [25]. The common dorsal digital III nerve in our work has three branches, including the axial dorsal proper digital nerve of the IV digit, the axial dorsal proper digital nerve of the III digit, and the abaxial dorsal proper digital nerve of the III digit. These results disagree with what was reported in camels. The dorsal aspect of the pes was dorsally innervated by the abaxial dorsal proper digital nerve of the IV digit from the common dorsal digital IV nerve and by the axial and abaxial dorsal proper digital nerves of the III digit from the common dorsal digital III nerve of the superficial fibular nerve. The relevant findings disagree with previous reports [23,24] in this respect. In domestic animals, the sensitive innervation of the foot structures has been investigated.
It is mIn the dorsal section of the hindlimb, the superficial fibular nerve continues to be the dorsal common digital nerve. The dorsal common digital nerve splits into the dorsal proper digital nerves, which feed the axial and abaxial surfaces of the corresponding digits ,27,28,29The tibial nerve\u2019s division into medial and lateral plantar nerves confirms the pattern described in goats , domestiThe current study revealed that the tibial nerve divided at the proximal third of the metatarsus into two branches, namely the lateral and medial plantar branches. This result is inconsistent in camels, as in other studies, the division of the tibia nerve was observed at the tarsus , while iThe medial plantar branch continues as a common plantar digital nerve of the III digit. Our findings disagree with those of other studies on camels, which reported that the medial plantar nerve continues as the common plantar digital nerve of the II digit .Our results confirm that the medial plantar branch continues as the abaxial plantar proper digital nerve of digit IV in domestic animals , in boviThe medial plantar branch divides into the axial plantar proper digital nerve of the III digit and the abaxial plantar proper digital nerve of the III digit. Similar findings were reported in previous studies in domestic animals ,13. In cSmuts et al. (1986) recorded that the medial plantar nerve divides into common plantar digital nerves of the II and III digits . In contIn this study, the lateral plantar nerve divides into the following two branches: the medial branch and the lateral branch. The lateral branch continues as the abaxial plantar proper digital nerve IV of the digit and supplies the plantar-abaxial aspect of the IV digit. These findings are in agreement with the results of other studies in camels ,23,24, iOur results show the medial plantar digital nerve bifurcates from the tibial nerve at the tarsus level. They both continue together distally to the middle of the metatarsal bones. Then, they divide into the plantar common digital nerves II\u2013IV. After that, they continue medially, becoming the plantar proper digital II abaxial nerve. Our results are similar to the findings reported in dogs ,31 and iThe saphenous nerve distribution on the cranial and medial surface of the camel leg was similar to that described in bovines , camels Finally, these results show that the nerve distribution of the distal parts of the hindlimbs are of great importance, because they will help in the determination of the regional anesthesia of the different nerves in the distal hindlimb, especially in the treatment of tendonitis, osteoarthritis and sesamoiditis ,36. On tThe findings show the anatomical structure of the nerve supply of the distal hindlimb in dromedary camels. We showed the nerve blocks and their distribution within this region, including the skin, tendons and joints. The findings will assist in successful anesthesia and surgery in this region."} +{"text": "Scramjet engines are considered a highly promising technology for improving high-speed flight. In this study, we investigate the effects of using multi-extruded nozzles on fuel mixing and distribution inside the combustion chamber at supersonic flow. Additionally, we explore the impact of an inner air jet on fuel mixing in annular nozzles. To model fuel penetration in the combustor, we employ a computational technique. Our study compares the roles of three different extruded injectors on fuel diffusion and distribution at supersonic cross-flow. 
Our findings reveal that the use of an inner air jet increases fuel mixing in the annular jet, while the use of extruded nozzles improves fuel distribution by enhancing the vortices between injectors. These results demonstrate the potential benefits of incorporating multi-extruded nozzles and inner air jets in the design of scramjet engines. One of the key challenges in designing these engines is achieving efficient fuel mixing and distribution at supersonic flow within the combustion chamber. In this context, the present study investigates the use of multi extruded nozzles to enhance fuel mixing and distribution. The study also explores the impact of an inner air jet on fuel mixing in annular nozzles. The computational modeling technique is employed to analyze the fuel penetration and diffusion in the combustor5. The study compares the performance of three different extruded injectors in terms of fuel diffusion and distribution at supersonic cross flow. The findings reveal that the use of extruded nozzles and inner air jet can significantly improve fuel mixing and distribution by enhancing the vortices between injectors. The results of this study have important implications for the design and development of more efficient and effective scramjet engines8.The development of scramjet engines has received considerable attention in recent years due to their potential to improve high-speed flight11. The critical challenge in designing these engines is to achieve efficient fuel mixing and distribution, as the combustion process is highly sensitive to these factors14. One way to improve fuel mixing and distribution is by using multi-nozzle injectors. A study15 investigated the use of multi-jet injectors in a Mach 6 scramjet combustor and found that the use of multi-jet injectors improved the combustion efficiency and reduced the flame length.Scramjet engines have been a topic of extensive research due to their potential to achieve high-speed flight and space access17 investigated the use of transverse injection in a Mach 4 scramjet engine and found that it significantly improved fuel mixing and distribution.Another approach to improve fuel mixing and distribution is by using transverse injection, where fuel is injected perpendicular to the air flow. Several studies21 investigated the use of swirling flow in a Mach 2.5 scramjet engine and found that it improved fuel mixing and distribution by promoting turbulence and increasing residence time. The impact of fuel injection pressure on fuel mixing and distribution has also been studied. Many papers26 investigated the effect of fuel injection pressure on the combustion efficiency of a Mach 3 scramjet engine and found that higher injection pressures improved fuel mixing and distribution, resulting in higher combustion efficiency.The use of swirling flow to enhance fuel mixing and distribution has also been explored. Many papers29 investigated the effect of nozzle geometry on the combustion efficiency of a Mach 5 scramjet engine and found that the use of lobed injectors improved fuel mixing and distribution compared to circular injectors.The impact of nozzle shape and geometry on fuel mixing and distribution has also been explored. Many papers33. 
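Comparisons such as those in the studies above are usually made quantitative with a mixing-efficiency index evaluated on cross-flow planes of the CFD solution. A commonly used form from the supersonic-mixing literature is given below purely for orientation; the cited works do not state their exact definitions, so this should be read as an assumption rather than their method:

\eta_{mix} \;=\; \frac{\int_A Y_{react}\,\rho u\, dA}{\int_A Y_f\,\rho u\, dA},
\qquad
Y_{react} \;=\;
\begin{cases}
Y_f, & Y_f \le Y_{st} \\[4pt]
Y_{st}\,\dfrac{1-Y_f}{1-Y_{st}}, & Y_f > Y_{st}
\end{cases}

where Y_f is the local fuel mass fraction, Y_{st} its stoichiometric value, \rho the density and u the streamwise velocity; \eta_{mix} approaching 1 indicates that all injected fuel on that plane could react with the available air.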
These studies provide valuable insights for the design and development of more efficient and effective scramjet engines [36]. In summary, there have been various approaches to improving fuel mixing and distribution in scramjet engines, including the use of multi-nozzle injectors, transverse injection, swirling flow, higher injection pressures, and optimized nozzle geometry. This study has tried to investigate the influence of different arrangements of extruded lobe-injectors on the fuel penetration and shock interactions inside the combustion chamber. To do this, shock waves are compared on the jet planes to reveal the influence of the extruded configurations on the fuel mixing mechanism at supersonic flow. Various lobe nozzles are investigated, as demonstrated in the corresponding figure. Shock wave formation occurs inherently in our model and, consequently, the energy equation must be considered in the modeling [42]. Meanwhile, turbulence effects are also important in our model, and the SST turbulence model is used for the calculation of the viscosity [43]. The secondary gas, hydrogen, is modeled as the fuel jet, and the mass transport equation is used with Fick's law for estimation of the diffusion of hydrogen gas [45]. Reactions are not considered in this study. The air flow is treated as an ideal gas in the present model [51]. The simulation of the high-speed air stream with compressibility effects is mainly done by solving the RANS equations over the continuum domain [52]. The use of this theoretical method is conventional in engineering applications [56]. The produced grid for the selected computational domain, the flow features downstream of the three suggested nozzle configurations, and the comparison of the circulation strength downstream of the proposed configurations are presented in the corresponding figures. This study focuses on investigating the effects of multi-extruded injectors on the fuel mixing mechanism in scramjet engines. Specifically, we examine the roles of three different types of nozzles located inside the engine's combustor, including annular and coaxial nozzles. To model the release of the fuel jet from the extruded nozzles at supersonic cross flow, we develop a computational fluid dynamics approach. Through this approach, we compare the mixing efficiency and circulation power of each model. Our findings indicate that the use of coaxial jet nozzles decreases the circulation power, while achieving maximum fuel mixing efficiency. Additionally, we observe that the utilization of an inner air jet in the coaxial configuration further enhances fuel mixing in the nozzle gap."} +{"text": "One of the effective treatment options for intracranial aneurysms is stent-assisted coiling. However, previous works have demonstrated that stent usage would result in the deformation of the local vasculature. The effect of a simple stent on blood hemodynamics is still uncertain. In this work, the hemodynamic features of the blood stream in four different ICA aneurysms with and without intervention are investigated. To estimate the relative impacts of vessel deformation, four distinctive ICA aneurysms are simulated by the one-way FSI technique. The hemodynamic factors of aneurysm blood velocity, wall pressure and WSS are compared at the peak systolic stage to disclose the impact of deformation by the stent in the two conditions. Stent usage would decrease almost all of the mentioned parameters, except for OSI. Stenting reduces the neck inflow rate, while the effect of the intervention was not consistent among the aneurysms.
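For orientation, the wall-shear-based indices referred to in this abstract and in the analysis that follows are normally defined over one cardiac cycle of period T as follows; these are the standard definitions from the hemodynamics literature, quoted for reference rather than as the authors' exact implementation:

TAWSS = \frac{1}{T}\int_0^T \lvert \vec{\tau}_w \rvert \, dt,
\qquad
OSI = \frac{1}{2}\left(1 - \frac{\left\lvert \int_0^T \vec{\tau}_w \, dt \right\rvert}{\int_0^T \lvert \vec{\tau}_w \rvert \, dt}\right),
\qquad
RRT = \frac{1}{(1 - 2\,OSI)\cdot TAWSS},

where \vec{\tau}_w is the instantaneous wall shear stress vector; OSI ranges from 0 (unidirectional shear) to 0.5 (fully oscillatory shear).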
The deformation of an aneurysm has a strong influence on the hemodynamics of an aneurysm. This outcome is ignored by most of the preceding investigations, which focused on the pre-interventional state when studying the relationship between hemodynamics and stents. The present results show that the application of a stent without coiling would improve most hemodynamic factors, especially when the deformation of the aneurysm is high enough. The effects of hemodynamic forces on the wall of the intracranial artery may result in a pathological stretching of the vessel wall, and such a deformed vessel is known as an intracranial aneurysm (IA). The development of an IA is related to many pathophysiological factors, not only hemodynamic ones; for example, endothelial function plays a role, and certain anatomic locations (particular areas in the Circle of Willis) are more vulnerable (Liu). The primary hemodynamic factors of the oscillatory shear index (OSI), wall shear stress (WSS) and relative residence time (RRT) are introduced for the comparison of the different ICA aneurysms. Imaging techniques such as computed tomography angiography (CTA) enable researchers to access the main geometrical aspects of the aneurysm and to use them for further investigations, while computational modeling is used to simulate blood flow inside the aneurysm and to calculate the shear stress on the aneurysm wall. Comparisons of blood velocity and streamlines are also presented to reveal the influence of aneurysm deformation on the blood and its impact on the wall of the vessel. After evaluating more than 30 ICA aneurysms, the geometries (.stl files) of four distinctive aneurysms were chosen from the Aneurisk website; this study reports the WSS and average pressure on the vessel and the mean velocity inside the sac for this stage. Meanwhile, the OSI value is calculated at the end of the third cardiac cycle (3000 steps). As explained in the previous sections, the main aim of the present work is to investigate the role of the stent on the hemodynamics of the blood stream inside the aneurysm. In the current study, the two stages of intervention are assumed based on the neck vessel angle reported in the corresponding table. The impacts of aneurysm deformation on the main hemodynamic factors of mean WSS, OSI, pressure and velocity are demonstrated in the corresponding tables and figures, and the comparison of the OSI contours for the original and deformed aneurysms is displayed in the corresponding figure. In this study, the impacts of the stent on the flow structure of the four ICA aneurysms are comprehensively investigated. The primary attention of this research is to visualize blood flow and compare the hemodynamic characteristics of ICA aneurysms before and after aneurysm deformation. Computational fluid dynamics is applied to simulate the blood stream inside the aneurysm and to calculate hemodynamic factors, i.e. WSS, pressure and OSI, on the aneurysm wall. The stent effect on the two stages of deformation is disclosed and explained in this work. The attained results indicate that deformation of the aneurysm considerably decreases the WSS on the aneurysm wall due to limited blood entrance. However, the value of OSI does not change in deformed aneurysms. The pressure contour on the aneurysm wall also indicates that the value of this factor does not alter considerably, while its location varies with deformation."} +{"text": "To the editor, Nocturnal enuresis is a disorder characterized by intermittent urinary incontinence that occurs during periods of sleep, with at least one episode per month for at least 3 months.
This pathophysiological picture is captured by sympathetic receptors of the autonomic nervous system, which cause greater cardiomyocyte activity, with secretion of B-type natriuretic peptide (BNP) by the ventricles; BNP stimulates natriuresis and diuresis in order to compensate for the vasoconstrictor systems that are activated in these situations. This increase in BNP, due to the pathophysiological condition of OSA, causes inhibition of the secretion of antidiuretic hormone (ADH), which regulates diuresis through the reabsorption of water in the collecting ducts. The study by Ribeiro et al. showed an improvement in nocturnal enuresis. From this, given the paradoxical effects of ADH and BNP, some considerations about the mechanism by which upper airway obstruction affects nocturnal enuresis in children arise: Is the small clinical effect of the increase in BNP enough to strongly increase the proportion of dry nights? Can the delay of 90 to 120 days in data collection affect the hormonal indicators, particularly given the heterogeneity of the sample? Is there another factor involved? Therefore, we can conclude that airway clearance modifications had little clinical impact on the hormonal actions of ADH and BNP as measured. The Authors"} +{"text": "Acute psychotic disorders are increasingly being diagnosed in people addicted to PAS. A part of these patients develops chronic psychotic disorders for reasons that are still insufficiently known. The aim of the study was to determine the preventive potential of antipsychotics in the development of chronic psychotic disorders, as well as possible side effects of their use. A prospective-retrospective qualitative study was conducted in the period September 2017-September 2022. Data from medical records and electronic databases were used in the study. A structured questionnaire for conducting the research, a clinical psychiatric interview, the MMPI-202, and tests to determine illegal PAS in body fluids were used. According to the results of the study, adequate treatment of the underlying disease, fewer or a complete absence of relapses, and social and psychotherapeutic support had the greatest effects. In the group of opiate addicts, an adequate dose of substitution therapy often played a crucial role. Under the experimental conditions, the hypothesis about the preventive effect of antipsychotics on the development of psychotic disorders in people addicted to PAS was not confirmed. On the contrary, a whole series of new questions has been opened. None Declared"} +{"text": "Cancers highlights interdisciplinary research that applies physical science principles and approaches to the study of cancer biology. Dysregulated cellular processes drive malignant transformation, tumor progression, and metastasis, and affect responses to therapies. While much is known about the biochemical and genetic drivers of these processes, less is understood about the influence of biophysical properties on oncogenesis. The research articles presented in this Special Issue address a broad range of both basic and translational cancer biology problems. A central theme at the crossroads of tumor biology and biophysics is the role of the extracellular matrix (ECM) and mechanical forces in modulating cell behavior. We first present an article by Druzhkova et al. that examines this theme. The following three articles take a biophysical approach to developing new cancer treatment strategies, beginning with the work of Mathews et al. Dysregulation can occur at various levels, from single molecules to cell populations.
This Special Issue concludes with two reviews that investigate the biophysical properties of proteins in the promotion of oncogenic signaling. A review from the Renz laboratory elaborates on the role of protein distribution and dynamics in differentiating normal cells from cancer cells, with a focus on comparing beta-catenin and CapG in gynecological cancers. The collection of articles in this Special Issue of Cancers contains exciting examples of how biophysical approaches can provide new insights into oncogenic processes. One of the key contributions of biophysical approaches is the ability to investigate intact and living cells at multiple spatiotemporal scales. Thus, the continued incorporation of biophysical perspectives and techniques into the field of cancer biology will undoubtedly advance our mechanistic understanding of oncogenesis and facilitate the development of novel therapeutic targets and combination therapies."} +{"text": "Imaging Mueller polarimetry is capable of tracing the in-plane orientation of brain fiber tracts by detecting the optical anisotropy of the white matter of healthy brain. Brain tumor cells grow chaotically and destroy this anisotropy. Hence, the drop in scalar retardance values and the randomization of the azimuth of the optical axis could serve as an optical marker for brain tumor zone delineation. The presence of underlying crossing fibers can also affect the values of scalar retardance and the azimuth of the optical axis. We studied and analyzed the impact of fiber crossing on the polarimetric images of thin histological sections of brain corpus callosum. We used the transmission Mueller microscope for imaging of two-layered stacks of thin sections of corpus callosum tissue to mimic overlapping brain fiber tracts with different fiber orientations. The decomposition of the measured Mueller matrices was performed with the differential and Lu\u2013Chipman algorithms and completed by the statistical analysis of the maps of scalar retardance, azimuth of the optical axis, and depolarization. Our results indicate the sensitivity of Mueller polarimetry to the different spatial arrangements of brain fiber tracts, as seen in the maps of scalar retardance and azimuth of the optical axis of the two-layered stacks of corpus callosum sections, while the depolarization varies only slightly. The crossing brain fiber tracts measured in transmission induce a drop in the values of scalar retardance and a randomization of the azimuth of the optical axis. In case residual brain tissue with malignancy is left without treatment, tumor recurrence and the survival of the patient are at stake. In this study, we explore the impact of brain fiber crossing on the polarimetric maps of scalar retardance, azimuth of the optical axis, and depolarization. We measured the stacks of differently oriented thin histological sections of human brain corpus callosum with the transmission Mueller microscope, because the corpus callosum serves as a connection between the two brain hemispheres and has a well-defined orientation of fiber bundles. We used a formalin-fixed human brain obtained from the autopsy of an anonymous donor. A waiver for ethical approval was obtained from the Ethics Committee of the Canton of Bern (BASEC-Nr: Req-2021-01173).
We excised a The schematics of the spatial arrangement of corpus callosum thin sections during the measurements with transmission Mueller microscope see Sec.\u00a0 is preseparallel , then crparallel and finaparallel .The presence of glass slides did not affect our measurements, because the measured Mueller matrix of glass without tissue was close to that of air , with the errors in normalized Mueller matrix elements 2.2A custom-built Mueller microscopeWhite-light 2.8\u00a0W light-emitting diode (LED) was chosen as a light source followed by a band-pass color filter 20\u00a0nm) to select the wavelength of a probing light beam. A ,Both the polarization state generator (PSG) and the polarization state analyzer (PSA) are comprised of the identical optical elements, but arranged in a reverse order. In short, they include a linear polarizer and two ferroelectric liquid crystal (FLC) retarders (Meadowlark FPR-200-1550) with a quarter-wave retarder placed between them. By varying the voltage applied to the FLC retarders, the orientation of their fast optical axis is switched between 0\u00a0deg and 45\u00a0deg. This approach for polarization modulation assures the generation of all optimal polarization states required to obtain the complete Mueller matrix of a specimen under study. The light scattered from a sample is collected by another objective , one can use the following mathematical representation: 3.2A decomposition algorithm, first proposed by Lu and Chipman4The corpus callosum in a human brain acts as a bridge connecting its two hemispheres. It consists of distinct fiber bundles that are oriented in a specific well-defined manner. In order to verify the presence and preferential orientation of the nerve fibers in studied samples, It is worth to mention that the TPEF microscopy measurements were taken at a different site of the corpus callosum tissue compared to the Mueller microscopy measurements see . However4.1We measured the Mueller matrix images of a single stripe of corpus callosum of ,Then we measured the stack of two superimposed stripes of corpus callosum (\u00a0deg see and plot4.2It was demonstrated that the myelinated nerve fiber bundles display negative form birefringence.The corresponding box-whisker plots of the azimuth of the optical axis are shown in ssue see . The aziThe box-whisker plots of the azimuth distribution for the stack of two parallel (The spread of data is much larger for the distributions of the azimuth calculated for the corpus callosum stripes overlapped at 45\u00a0deg and 90\u00a0deg. The randomization of the azimuth values and the drop of total scalar retardance values serve as the indicators of the loss of optical anisotropy of the last two-layered stacks of corpus callosum.4.3The maps of the depolarization calculated using either differential or Lu\u2013Chipman decomposition applied pixel-wise are shown in The lowest depolarization values account for a single layer of corpus callosum see , while t stripes .The box-whisker plots of the distributions of total depolarization are shown in 5In our study, the thin histological sections of human brain corpus callosum were arranged in different spatial configurations to measure their polarimetric response with the custom-built transmission Mueller microscope that was designed, aligned, and calibrated to measure the complete Mueller matrices of the samples under study. 
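Before turning to the role of the two decomposition algorithms, it may help to recall how the scalar maps discussed above are usually obtained. In the Lu\u2013Chipman product decomposition, quoted here in its standard textbook form, which may differ in detail from the exact implementation used in this work, the measured Mueller matrix M is factored and the scalar quantities follow as:

M = M_\Delta \, M_R \, M_D,
\qquad
D = \frac{1}{m_{00}}\sqrt{m_{01}^2 + m_{02}^2 + m_{03}^2},
\qquad
R = \arccos\!\left(\frac{\operatorname{tr} M_R}{2} - 1\right),
\qquad
\Delta = 1 - \frac{\lvert \operatorname{tr} M_\Delta - 1\rvert}{3},

where M_D, M_R and M_\Delta are the diattenuator, retarder and depolarizer factors, D is the diattenuation, R the total retardance and \Delta the depolarization power; the azimuth of the fast optical axis follows from the linear components r_1, r_2 of the retardance vector as \theta = \tfrac{1}{2}\arctan(r_2/r_1).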
To extract the polarization and depolarization properties of the brain corpus callosum specimens, two different types of decomposition algorithms were used, namely the differential and Lu\u2013Chipman decompositions. The former assumes the continuous variation of the polarization and depolarization properties of a medium, while the latter makes use of their sequential appearance. This makes the differential decomposition particularly suited for the post-processing of Mueller matrices of biological tissues, which usually represent complex structures with spatially intermixed polarimetric properties. However, the differential decomposition works well in transmission configuration but may fail to process the Mueller matrix data recorded in reflection, as for highly scattering media it may lead to the calculation of the logarithm of a negative number. The Lu\u2013Chipman decomposition works well on the Mueller matrix data recorded in both transmission and reflection configurations. The latter configuration is the only one relevant for the clinical applications of imaging Mueller polarimetry operating in the visible wavelength range. Hence, the equivalence of the results obtained with the Lu\u2013Chipman and differential decompositions in transmission configuration supports the use of the former algorithm for the interpretation of the Mueller matrix data of biological tissues recorded in reflection. It was found that the net scalar retardance of the stack of two thin stripes of corpus callosum decreases with the increase of the fiber crossing angle between the two overlapped sections of brain corpus callosum tissue. This effect is explained by the partial compensation of the phase shift acquired by polarized light passing through the first stripe by the phase shift of the opposite sign acquired through the second stripe with a different preferential orientation of the fibers. Our prior studies demonstrated that the maps of the azimuth of the optical axis allow us to monitor a preferential 2D orientation of the fiber bundles at the site of the measurement in reflection configuration. Similar values of scalar retardance and azimuth of the optical axis were obtained for all spatial arrangements of corpus callosum sections with both decomposition algorithms, while higher values for the total depolarization parameter were obtained with the Lu\u2013Chipman decomposition. Hence, the choice of decomposition algorithm seems to be of non-negligible importance for the final results, and the decomposition algorithm should be selected properly, in accordance with the experimental configuration, the samples\u2019 type and the initial assumptions. The results of our studies demonstrate that a significant drop of retardance values and a randomization of the azimuth, in combination with almost constant depolarization values, support the in vivo applications of imaging Mueller polarimetry in neurosurgery. The estimation of the light penetration depth within the white matter of the brain in reflection configuration is the subject of our ongoing studies. Achieving this goal will be of significant importance for such applications."} +{"text": "Frontiers in Medicine introduced the idea of bundling several papers dealing with similar or related problems into so-called Research Topics. It is expected that this editorial concept might convey an organized, inter-linked overview of related research results to the readers.
This concept provides additionally the possibility to make functional connections between research projects which are anchored in different scientific disciplines. Such combined presentation helps to broaden the scientific horizon of experts working in different fields, and supports the interpretation of their results in wider scientific and social context. This special collection has published eight manuscripts of researchers from different countries and continents reporting examples of the latest knowledge in regulatory science. The goal of our selection was to cover a broad variety of regulatory challenges emerging in connection with the development of various nucleic acid-based drugs, oncological agents, the evidence-based evaluation of herbal medicinal products, the impact of pharmacogenetic programs on healthcare and finally linking marketing approval to the speed of pricing and reimbursement decisions.Chiu et al. wrote an extensive overview of the tasks and activities of the U.S. Food and Drug Administration (FDA) Division of Applied Regulatory Science (DARS). This division consists of interdisciplinary teams developing primarily modern biological methods for improving in vitro assessment of drugs effects. The publication describes several examples of new types of assays supporting regulatory decision making.Chisholm and Critchley from Australia argued that the rapid development of artificial Intelligence (AI) and machine learning techniques will dramatically influence the future work of regulatory experts. According to their review, it is mandatory to prepare the personnel to use efficiently and critically these possibilities for leading to successful international cooperation of regulatory agencies in adapting their work to the changing scientific environment, geopolitical shifts, pandemics, shortage of raw materials and interruptions of supply chains.McDermott et al. called attention to future importance of large scale pre-emptive panel genetic testing of many individuals for common pharmacogenetic variants underlying diseases. This genetic information could be stored in medical records and used if needed to select individualized targeted therapy for such diseases. Lack of knowledge, and the cost of the intervention were found to be the main barriers for implementing this program. Early leadership engagement, positive institutional culture, engaging stakeholders, and the selection of clinical champions were considered as facilitators to implement such pharmacogenetic service.Guerriaud and Kohli. They flagged some of the currently recognized disparities of categorizing similar products into different categories because of their different origins. In addition, the regulatory status of RNA drugs is differently defined by the EMA and FDA which obviously makes the international registration strategy difficult. The authors suggested some proposals for improving future classification based on updated definitions and recommended steps towards an international harmonization.The intriguing problems of the regulatory classification of the rapidly enlarging group of RNA drugs with quite different biological mechanisms of action was discussed by Zhou et al. described a recently initiated new program that used modern comparative clinical trial methodology to provide solid scientific background for characterization of efficacy and safety. Most of the phase II and III trials are prospective, double blind, randomized, parallel group trials. 
The results of these trials are expected to improve both the regulatory management and the evidence-based use of these products in clinical practice. According to the authors, the number of modern clinical trials is still small compared to the great wealth of empirical knowledge. Ilan describes an administration approach called \u201cdigital medical cannabis\u201d, which is based on a second-generation AI system able to modify the dose to optimize individual patient benefit. The optimal use of herbal medicines depends on their standardization. Following the legalization of the cannabis market, many products with different amounts of active ingredients flooded the market. In addition, the intensity of the pharmacological effects and the development of tolerance differ greatly between individuals. For effective patient care it is therefore very important to adjust cannabis administration according to the needs of the patients. Zhang et al. reported a systematic review comparing the time to oncologic drug approval following multi-regional clinical trials (MRCTs) with that following single-country studies. MRCTs involving the US, Europe and Japan lead to the shortest time for the approval of new oncological agents. The inclusion of additional regions prolonged the time to approval. Since single-country bridging trials need the least time, the authors recommend that additional single-country bridging studies be performed to shorten drug approval time if MRCTs do not apply. Gallo et al. analyzed the time needed for pricing and reimbursement decisions between 2018 and 2020 in Italy. They argue that the more than doubled time needed for decision making in the case of new drugs is almost entirely due to the much longer health technology assessment procedure and the related price negotiations. The time needed for pricing and reimbursement of drugs is influenced by the observed effect differences between the new and the available therapies, the clinical importance of the new agents, as well as by the scientific quality of the clinical studies. We hope that combining these articles dealing with various aspects of drug development, spanning from the bench to the patients and from the clinical studies through the regulatory decision and health technology assessment to broad healthcare application, will support the work of many colleagues active at various points of this complex process. Joint publication of these papers demonstrates that the decisions made at the different steps by various experts must consider the complexity of the entire process, including also its social and ethical impacts. The draft and final version were prepared by SK-F. All authors contributed to the article and approved the submitted version."} +{"text": "In line with the marked increase in the elderly population in Japan, an increasing number of patients are administered anticoagulant agents, either for secondary prevention of ischemic stroke or for primary prevention in patients with nonvalvular atrial fibrillation. Patients with hemorrhagic stroke or hemorrhagic conversion of ischemic stroke during anticoagulant therapy were reported to have poorer clinical outcomes owing to the difficulty in acute neurosurgical management, but the risk assessment for hemorrhagic events associated with anticoagulants in the central nervous system is not easy (2).
The authors promptly performed the partial removal of hematoma with simultaneous administration of fresh frozen plasma, leading to an emergent reduction of intracranial pressure and then safely attempted radical decompressive craniectomy after the effect of apixaban had diminished. The authors\u2019 two-stage management strategy was further justified by the continuous monitoring of intracranial pressure between first- and second-stage surgeries, and the favorable staged decompression was demonstrated by serial computed tomographic scans (2). While considering the recent advances in the neuroendoscopic modalities, the application of neuroendoscopic partial removal of the hematoma could have been a management option in the first-stage surgery in this case. It is necessary to evaluate the efficacy and safety of neuroendoscopic removal of intracranial hematoma in patients with anticoagulant administration in future studies.In this difficult clinical condition, the authors successfully managed a patient with massive hemorrhagic infarction during direct oral anticoagulant medication of apixaban by two-stage surgery using intracranial pressure monitoring (3). Based on the cumulative evidence, the most recent issue of the Japanese Guidelines for the Management of Stroke in 2023 recommends the administration of Andexanet alfa for the patients with hemorrhagic stroke using factor Xa inhibitors (Recommendation grade B) (4). Taken together, radical decompressive craniectomy with hematoma removal after the neutralization of factor Xa activity could be a management option if Andexanet alfa is available. Further investigation of a larger number of patients from multiple institutes is warranted to address this important issue.Recently, Andexanet alfa, a factor Xa inhibitor that neutralizes the anticoagulant effect of factor Xa inhibitors, including apixaban, became available in Japan. Although Andexanet alfa was not available at the time in this reported case, this neutralizing agent is expected to significantly contribute to the management of the patients with hemorrhagic stroke, including hemorrhagic infarction managed by factor Xa inhibitors, such as apixaban and edoxaban NoneMiki Fujimura is one of the Editors of JMA Journal and on the journal\u2019s Editorial Staff. He was not involved in the editorial evaluation or decision to accept this article for publication at all."} +{"text": "The objective of the present study is to evaluate the anatomy of the inferior hypogastric plexus, correlating it with urological pathologies, imaging exams and surgeries of the female pelvis, especially for treatment of endometriosis. We carried out a review about the anatomy of the inferior hypogastric plexus in the female pelvis. We analyzed papers published in the past 20 years in the databases of Pubmed, Embase and Scielo, and we included only papers in English and excluded case reports, editorials, and opinions of specialists. We also studied two human fixed female corpses and microsurgical dissection material with a stereoscopic magnifying glass with 2.5x magnification. Classical anatomical studies provide few details of the morphology of the inferior hypogastric plexus (IHP) or the location and nature of the associated nerves. The fusion of pelvic splanchnic nerves, sacral splanchnic nerves, and superior hypogastric plexus together with visceral afferent fibers form the IHP. 
The surgeon\u2019s precise knowledge of the anatomical relationship between the hypogastric nerve and the uterosacral ligament is essential to reduce the risk of complications and postoperative morbidity of patients surgically treated for deep infiltrative endometriosis involving the uterosacral ligament. Accurate knowledge of the innervation of the female pelvis is of fundamental importance for prevention of possible injuries and voiding dysfunctions as well as the evacuation mechanism in the postoperative period. Imaging exams such as nuclear magnetic resonance are interesting tools for more accurate visualization of the distribution of the hypogastric plexus in the female pelvis. The hypogastric plexus is responsible for the autonomic innervation of the pelvic viscera. Injury to these nerves during surgical interventions can be associated with voiding dysfunctions and the evacuation process. Knowledge of the anatomy of the hypogastric plexus is very important in female pelvic surgeries, especially operations for the treatment of endometriosis. Endometriosis is a pelvic dysfunction in women that requires a delicate and thorough surgical approach. The surgeon must have skill and knowledge of this region in order to avoid injury to the viscera, vessels and nerves of the pelvis. In recent times, laparoscopic and robotic surgery have greatly improved the visualization of the anatomical structures of the pelvis during these procedures -3.Classical anatomical studies provide few details about the morphology of the inferior hypogastric plexus (IHP) or the location and nature of the associated nerves. The aim of the present work is to evaluate the surgical anatomy of the hypogastric plexus through a narrative review of the literature, highlighting its importance during diagnosis and its approach during surgical procedures for the treatment of endometriosis.In this study we carried out a review of the anatomy of the inferior hypogastric plexus in the female pelvis. We analyzed papers published in the past 20 years in the databases of Pubmed, Embase and Scielo, found by using the key expressions \u201cHypogastric plexus\u201d; \u201cInferior hypogastric plexus\u201d; \u201cMRI\u201d; \u201cEndometriosis\u201d; \u201cRobotic surgery\u201d; and \u201cLaparoscopic surgery\u201d. We found several papers in these databases and we included only papers in English and excluded case reports, editorials and opinions of specialists .We also studied two human fixed female corpses and microsurgical dissection material with the aid of a stereoscopic magnifying glass with 2.5x magnification. A detailed dissection of the female pelvis was performed, identifying the superior hypogastric plexus at the level of the sacral promontory and its distribution in the female pelvis.The autonomic innervation of the pelvis originates from the continuation of the aortic plexus in the downward direction. Fibers of the inferior mesenteric plexus, situated below the inferior mesenteric artery, receive sympathetic fibers from the paravertebral trunk. Anterior to the fifth lumbar vertebra and in the region of the sacral promontory, these fibers unite with branches of the lower lumbar splanchnic nerves and form the so-called superior hypogastric plexus (SHP) or presacral nerve , 5. 
The The SHP divides anteriorly to the sacrum into two narrow and elongated networks with variable diameter, just below the sacral promontory, giving rise to the presacral nerves, better known as hypogastric nerves, which in general gather in a trunk and are called the hypogastric nerves (right and left) . The hypThe hypogastric nerves have an important relationship with the internal iliac vessels, being located medially and inferiorly to them, surrounded by retroperitoneal fat, also maintaining a relationship with the sigmoid colon on the left side and the rectum before the inferior hypogastric plexus is formed. Each nerve or hypogastric nerve passes inferiorly over the lateral part of the rectum (or the rectum and vagina in women). In the inferior and anterior region of the sacrum, each hypogastric nerve receives the pelvic splanchnic nerves from the sacral roots from S2 to S4, giving rise to the inferior hypogastric plexus (IHP) Figure-.The IHP is formed by the union of the hypogastric nerves with the pelvic splanchnic nerves (nerves of Eckhardt) in the region posterior and medial to the internal iliac artery (hypogastric artery) . The disThe IHP branches out maintaining important relationships with the pelvic viscera in women. The ureter is an essential positional reference for the IHP: not in terms of its superior angle, the distance to which to the ureter is variable, but in terms of its top, in other words its (anterior) inferior angle: in all cases this top is at the ureter\u2019s point of contact where it perforates the posterior layer of the broad ligament. In the region of the intersection with the uterine artery, branches of the IHP originate and go to the bladder and vagina . Two groIn the dissected parts, we observed that the superior hypogastric plexus was divided into right and left hypogastric nerves in the sacral promontory region and the pelvic splanchnic nerves joined these nerves, forming the IHP. In turn, the IHP originated fibers that innervate the viscera of the anterior and posterior compartments of the pelvis. There are few imaging-related studies enabling visualization of the pelvic region .The radiologist\u2019s role in the management of endometriosis is becoming increasingly important as more centers move towards the use of female pelvic MRI exams to diagnose, delineate, or follow-up endometriosis lesions . The EurIt is important to diagnose endometriosis and thoroughly assess its extent, especially when surgical treatment is being considered. Magnetic resonance imaging (MRI) is a careful examination and interpretation technique that allows more accurate and complete diagnosis and staging than ultrasonography, especially in cases of deep pelvic endometriosis. In addition, MRI can identify implants in hard-to-reach places in endoscopic or laparoscopic explorations .MRI has been used routinely in patients with suspected deep endometriosis, where it and can identify lesions in different sites in a single evaluation, allowing assessment of the extent of the disease. MRI is also an effective technique for the preoperative diagnosis and staging of deep infiltrative endometriosis (IEM). However, the usefulness of MRI, because of sequences susceptible to chronic blood degradation products such as T2*-weighted images, remains uncertain . 
In an ilevator ani muscle, round ligament and bladder ; lateral surface of the rectum; pelvic ureter; and particularly the region of the crossing with the uterine artery, pararectal space, paracervix, hypogastric artery, piriformis muscle, bladder .During the performance of pelvic endometriosis surgeries, whether laparoscopic, conventional or robotic, knowledge of the relationships between the hypogastric plexus and the pelvic viscera is of great importance. Endometriosis is a disease defined by the presence of endometrial tissue outside the uterine cavity. It is a progressive disease, without a clearly established etiopathogenesis, influenced by genetic and environmental factors . The disThe identification and prompt treatment of endometriosis are essential and are facilitated by precise clinical diagnosis. Endometriosis is classically defined as a chronic gynecological disease characterized by the presence of tissue similar to the endometrium outside the uterus. It is believed to arise due to retrograde menstruation. However, this description is outmoded and does not reflect the true scope and manifestations of the disease. The clinical presentations are varied, the presence of pelvic lesions is heterogeneous and the manifestations of the disease outside the female reproductive tract remain poorly understood. Endometriosis is now considered to be a systemic disease instead of a disease that predominantly affects the pelvis .Of the pathogenic theories proposed , none explains all the different types of endometrioses. According to the most convincing model, the hypothesis of retrograde menstruation, endometrial fragments that reach the pelvis via the retrograde transtubal flow become lodged in the peritoneum and abdominal organs and proliferate and cause chronic inflammation with the formation of adherences . The lesIn robotic surgery, pelvic autonomic nerves end up being easier to identify with the magnification provided by an endoscopic camera . These sZakhari et al. carried The superior hypogastric plexus has been described along with the hypogastric nerve, the most superficial and easily identifiable component of the inferior hypogastric plexus. It was identified and used as a reference point to preserve the autonomous bundles in the pelvis. The following steps, illustrated with laparoscopic images, describe a surgical technique designed to identify and preserve the hypogastric nerve and deeper inferior hypogastric plexus without the need for more extensive pelvic dissection to the level of the sacral nerve roots: transperRobot-assisted nerve-plane-preserving eradication of deep endometriosis is as technically feasible as the conventional laparoscopic approach. The step-by-step technique should help surgeons perform each part of the surgery in a logical sequence, making the procedure easier and safer to complete. However, the latent benefits of robot-assisted nerve-sparing surgery in the treatment of deep endometriosis remain unclear .A meta-analysis confirmed that robotic surgery is safe and feasible in patients afflicted with endometriosis. The articles examined suggested that robotic surgery is a valid option and can be considered an alternative to conventional laparoscopic surgery, especially in advanced cases .The precise knowledge of the innervation of the female pelvis is of fundamental importance for prevention of injuries, voiding dysfunctions and problems in the evacuation mechanism in the postoperative period. 
Imaging exams such as nuclear magnetic resonance are interesting tools for more accurate visualization of the distribution of the hypogastric plexus in the female pelvis."} +{"text": "The contrast between the proposed model and the results showed the direct influence of IT security on the government\u2019s attitude toward COVID-19 and of DT on implementing actions to achieve SDGs. The findings of this work are of great value both for the actors involved in the design and implementation of public policies and for those responsible for local governance in their objective to improve citizens\u2019 experience of the services provided, including in exceptional situations such as the one experienced as a result of COVID-19. This paper analyzes how Digital Transformation (DT) processes have influenced the attitude of local governments (LGs) toward the COVID-19 pandemic and their effect on achieving the United Nations\u2019 Sustainable Development Goals (SDGs). The data were collected from LGs in Spain ( The public sector, especially local government (LG), is currently immersed in a period of constant transformation and uncertainty to which it must respond with radical changes to meet the needs of its citizens . As has already happened in business, citizens demand that the public sector undergoes a similar transformation . Numerous research studies examine the effect of collaboration on innovation success. This is the case of the review of the mediating role of social performance between cooperation and innovation performance conducted by . The adoption of social performance techniques in industrial companies is expected to increase significantly over the next 10\u2009years. A consequence of this will be that managers can promote sustainable innovation by collaborating with consumers and improving the social performance of their companies . DT in the public sector is not necessarily voluntary. It is often imposed, as LGs are forced to adopt digital innovation to meet the requirements of reforms launched at the national or supranational level . The SDGs represent the commitment of world leaders to act on a more sustainable path toward inclusive and equitable growth. Electronic government actions based on DT should favor the generation of a new paradigm in providing services through web-based functionalities and ICTs . This paper addresses and analyzes the DT processes implemented by LGs in Spain from a double perspective: their influence on the achievement of the SDGs and the attitude of LGs and their employees during the COVID-19 pandemic. To achieve this objective, we have examined how LGs and their employees have adopted and integrated new electronic and digitalization processes into their daily tasks to effectively and efficiently interact with citizens. The proposed model was tested with the participation of 124 Spanish LGs. The results showed the direct influence of IT security on attitude toward COVID-19 and that of DT on implementing actions to achieve SDGs. 
This study found that DT processes in LGs were relevant in attainingSDGs and instrumental in resolving the difficulties raised in the relationshipbetween LGs and citizens due to COVID-19.The contrast of the influence exerted by manageable elements of the local governmentsthemselves, such as the IT skills of their workers and the actions of IT securityand DT in the context of potentially facilitating conditions for aligning efforts inmoments of great uncertainty such as those experienced during COVID-19, constitutesone of the main contributions of this work.DT processes and the implementation of e-services in the public sector is anuanced reality that is difficult to define . AccordiWithin any organization, transformation, and change in the way it relates to itsclients or users may be conditioned by the dimension of variables such asflexibility and the extent of bureaucracy it entails , and theIn the case of a DT process in an LG, size can be fundamental to achieving theproposed objectives when providing electronic resources for citizens . In mostThe link between the organization\u2019s dimension and innovation is common inresearch on innovative processes and management . Small aCitizens\u2019 expectations regarding the public services provided by LGs and theirfinancial management have given rise to many codes of conduct and ethicaldeclarations to optimize financial and budgetary resources .Codes oTrust seals have a similar objective but depend on an external entity. Thepossession of these seals increases user confidence and is linked to the use ofgood practices by the organization . Such seThe relevance of the budgetary dimension, along with codes of conduct and trustseals and conditions LGs when it comes to accessing resources, acquiringknowledge, and implementing innovative actions that improve the servicesprovided and the working conditions of their employees. Based on the above, wecan establish the following research hypotheses:H1 (+): The facilitating conditions based on the budgetarydimension and codes of conduct and trust seals exert a direct andpositive influence on implementing digital transformation actions inlocal governments.H2 (+): The facilitating conditions based on the budgetarydimension, codes of conduct, and trust seals exert a direct andpositive influence on the acquisition of resources and knowledge inthe field of Information and Communication Technologies in localgovernments.The budgetary dimension of LG can limit access to and use of ICTs and publicemployees in performing their regular tasks . This meWithin LGs, as in any organization, the security of information systems is acrucial element that is constantly evolving. This requires that the staff gainthe knowledge and technological skills to manage the safety of the informationsystems used in developing their tasks . 
Such knThe fundamental value of the IT skills acquired by LG personnel for theimprovement of information systems security, together with the achievement ofthe objectives set by the public administration, led us to establish thefollowing two research hypotheses:H3 (+): The acquisition and availability of knowledge and skillsin using technology to perform the usual tasks of local governmentworkers exert a direct and positive influence on the digitaltransformation actions implemented.H4 (+): The acquisition and availability of knowledge and skillsin the use of technology for the performance of the usual tasks oflocal government workers exert a direct and positive influence onthe security of the information systems used.The need for LGs to plan financially and strategically plan well in advance makesit difficult for municipalities and their staff to adapt to technological andenvironmental changes . This shEffectiveness and efficiency in the provision of public services by LGs throughDT are related to the favorable alignment of the actions of publicadministrations with the balanced development in social, economic, andenvironmental sustainability advocated by the SDGs .Thanks to the radical decrease in the cost of collecting, storing, and processinginformation , DT offeHow DT enables LGs to take actions to achieve sustainable development (in itsthree dimensions) led us to establish the following research hypothesis:H5 (+): Digital transformation actions by local governmentsenable the provision of public services effectively and efficientlyaligned with Sustainable Development Goals.One of the consequences of the DT actions carried out by the LGs has been theincrease in the security measures adopted to protect the information systems andresources used in the provision of services . The incThe importance attached by LGs to the security of their information systems,along with the increasing implementation of protocols to increase certainty andtrust ,has broInformation systems security has been vital during the COVID-19 pandemic due to tBased on the verification of the importance of the security measures adopted byLGs for the information systems used by their employees to deliver services tocitizens, the following research hypothesis was established:H6 (+): Improvement in the security of the information systemsof local governments exerts a direct and positive influence on theattitude of workers when carrying out their tasks during theCOVID-19 pandemic.The contrast of the conceptual model proposed in N\u2009=\u20098,131; The methodology for collecting the data and information was a questionnairedistributed to the total number of LGs in Spain in 2020 may be due to one of the following reasons:Municipalities\u2019 usual problem lies primarily in the need for more resources andtraining of public employees. However, the low response rate may be owing to theneed for more collaboration and coordination between different departments,municipalities, and other government agencies. Undoubtedly, this study addressesa sensitive issue because municipalities\u2019 lack of adaptation to new technologiescan affect their ability to provide adequate services, which is a fact that isnot easy to recognize.The exogenous latent variable of the proposed model and the questionnairewas:Facilitating Conditions. 
These were grouped into threeitems: the municipality government budget, measured on a scale of 4budget intervals, as shown in -\u2003The last item used was the degree of use of information systems audits thatenable trust seals. These audits depend on a second entity that auditscompliance with the standards that give rise to the seal.The endogenous variables proposed in the model were:IT Skills: This latent variable was measured based onthree items referenced in the competencies of the Digital CompetencyFramework 2.0 . This type of analysis is well-suited for exploratory analysis.The partial least squares regression technique is considered convenient formodeling structural equations based on variance and is recommended for socialsciences, specifically in the study of organizations .The performance of the PLS-SEM analysis followed recommendations for reflectiveor B-mode constructs . First, Secondly, the construct reliability and validity were analyzed . TheCroR2 , 9 , 10 , and 11 (Sustainable Cities andCommunities).Another relevant conclusion of this work is the relevance of technological securitymeasures and processes in information systems perceived by LG employees. This willencourage a better employee attitude in situations arising from COVID-19.Regarding the implications for the theory of the results of this work, it isessential to emphasize the relevance of certain (facilitating) conditions withinLGs. These affect the DT and IT security processes, particularly as determinantsin current challenges such as the implementation of SDGs and the situationevolving because of COVID-19.The results of this work allow us to establish that the facilitating conditions,based on the budgetary dimension, and the protocols and trust seals (due to thedirect and positive influence that they have on the actions that the LGs take)favor DT and on the potential for their workers to acquire knowledge andtechnical skills. These enabling conditions indirectly influence, through DT andIT skills, the achievement of the SDGs by LGs and the attitude of workers towardCOVID-19.The practical implications derived from this work for the actors and agentsinvolved in the design and implementation of public actions and policies showthe importance not only of the development of end-user policies but also thatthese must address previous issues that will favor the achievement of theobjectives pursued.The main benefit of this study is that it provides a detailed understanding ofhow digital transformation processes have influenced the attitude of LGs towardthe COVID-19 pandemic and the achievement of the SDGs.Like all scientific research, aspects of this work need to be improved, such asthe sample size of LG respondents concerning the total population under study.This urges caution when establishing possible extrapolations of the results tothe totality of LGs. Another potential limitation is the national nature (Spain)of the LGs analyzed. Spanish LGs have some particular characteristics, such astheir unitary political system in which the central government has great powerover the policies and services of LGs. In contrast, LGs have greater autonomy inother European countries, such as Germany or the United Kingdom.Another particularity is the usual more precarious financial situation than inother European countries due to the need for more financial resources anddependence on central government funds. 
And finally, Spanish LGs typically have a centralized management model, with a heavy weight of bureaucracy and a lack of citizen participation, whereas in other European countries, such as the Netherlands or Denmark, there are more decentralized and participatory management models. Future research could include municipalities in other European countries. Regarding potential future research lines, local governments\u2019 budgetary dimensions could be analyzed comparatively across different budget sizes while holding similar DT and IT security actions constant."} +{"text": "Membrane dialysis is one of the membrane contactors applied to wastewater treatment. The dialysis rate of a traditional dialyzer module is restricted because the solutes transport through the membrane only by diffusion, in which the mass-transfer driving force across the membrane is the concentration gradient between the retentate and dialysate phases. A two-dimensional mathematical model of the concentric tubular dialysis-and-ultrafiltration module was developed theoretically in this study. The simulated results show that the dialysis rate was significantly improved through implementing the ultrafiltration effect by introducing a trans-membrane pressure during the membrane dialysis process. The velocity profiles of the retentate and dialysate phases in the dialysis-and-ultrafiltration system were derived and expressed in terms of the stream function, which was solved numerically by the Crank\u2013Nicolson method. A maximum dialysis rate improvement of up to twice that of the pure dialysis system ( Membrane extraction ,2, The dialysate phase concentration in a dialysis process does not remain unchanged and the solvent inevitably passes through the membrane, across which a pressure gradient exists between the two phases, as in the other membrane contactors. Early studies focused on a simplified analysis by ignoring the concentration variance of the dialysate phase and assuming no solvent passed through the membrane ,18. The phenomena of suction or injection through the membrane due to the ultrafiltration operation lead to a complex flow pattern in the channels of a membrane dialyzer, in which the velocity distribution can be solved using the stream function coupled with the perturbation method ,34. 
A twa (retentate phase) and annular subchannel b with inner radius A hydrophilic membrane with thickness a and annular subchannel b under the assumed conditions as follows:The velocity distributions were derived with the use of the continuity equations and momentum balance equations in both inner subchannel The dimensionless forms of the velocity distributions, Similarly, the dimensionless forms of velocity distributions in the annulus subchannel, a, b and m refer to the retentate, dialysate and membrane phases, respectively; The mass transfer equations in the retentate, dialysate and membrane phases were derived in the concentric tubular dialyzer with the ultrafiltration system, as shown in The solute in the membrane phase is simultaneously transported by convection and diffusion due to the dialysis-and-ultrafiltration operation, as shown in The general solution of Equation (22) isOne obtains Equations (24) and (25) for Thus, the solute concentration distribution By substituting Equation (27) into the boundary conditions of Equations (17) and (19), the derivatives of Hence, the two-dimensional solute concentration distributions of a concentric tubular dialysis-and-ultrafiltration operation can be solved by the governing equations, Equations (11) and (12), with the use of the boundary conditions of Equations (14), (15), (20), (21), (28) and (29).A concentric tubular dialysis system without ultrafiltration operations, that is, a (retentate phase) and annular subchannel b , respectively, except for incorporating The velocity distributions are the same equations, that is, Equations (7) and (8) and Equations (9) and (10), for the inner subchannel The solute in membrane phase is transported by diffusion only, as shown in Integrating Equation (41) twice to obtain the general solution asThus, the solute concentration distribution By substituting Equation (44) into the boundary conditions of Equations (36) and (38), the derivatives of Hence, the two-dimensional solute concentration distributions of a concentric tubular dialysis-and-ultrafiltration operation can be solved by the governing equations, Equations (30) and (31), with the use of the boundary conditions of Equations (33), (34), (39), (40), (45) and (46).The mass balances of Equations (11) and (12) and Equations (30) and (31) were made for membrane dialysis along the flowing direction with and without ultrafiltration operations, respectively. Thus, the solute concentrations in both the inner and annular streams were solved numerically using the Crank\u2013Nicolson method. 
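As a schematic illustration of the marching scheme used here, the following minimal Python sketch applies the Crank-Nicolson method to a generic convection-diffusion balance of the form u dC/dz = D d2C/dx2, i.e. axial convection with transverse diffusion. It is not the coupled two-phase tubular dialyzer model of this study; the grid sizes, boundary values and physical parameters below are invented for illustration only.

import numpy as np

# Minimal Crank-Nicolson marching scheme for u * dC/dz = D * d2C/dx2,
# the generic structure of the dialyzer mass balances (axial convection,
# transverse diffusion).  All numbers are illustrative, not the paper's values.
def crank_nicolson_march(u=1.0, D=1e-3, L=1.0, H=1.0, nx=51, nz=200,
                         c_inlet=1.0, c_wall=0.0):
    dx = H / (nx - 1)
    dz = L / nz
    r = D * dz / (u * dx**2)          # dimensionless marching parameter

    C = np.full(nx, c_inlet)          # inlet (z = 0) concentration profile
    C[-1] = c_wall                    # membrane-side boundary value

    # Build the implicit (A) and explicit (B) tridiagonal operators once.
    A = np.zeros((nx, nx))
    B = np.zeros((nx, nx))
    for i in range(1, nx - 1):
        A[i, i - 1] = -r / 2; A[i, i] = 1 + r; A[i, i + 1] = -r / 2
        B[i, i - 1] =  r / 2; B[i, i] = 1 - r; B[i, i + 1] =  r / 2
    # Boundary rows: zero-flux (symmetry) at x = 0, fixed value at the wall.
    A[0, 0], A[0, 1] = 1.0, -1.0
    A[-1, -1] = 1.0

    profiles = [C.copy()]
    for _ in range(nz):
        rhs = B @ C
        rhs[0] = 0.0                  # enforces C[0] = C[1] (symmetry)
        rhs[-1] = c_wall              # Dirichlet value at the membrane wall
        C = np.linalg.solve(A, rhs)
        profiles.append(C.copy())
    return np.array(profiles)

if __name__ == "__main__":
    prof = crank_nicolson_march()
    # Simple average of the outlet profile, a crude stand-in for the bulk
    # (mixing-cup) concentration used to define the dialysis rate.
    print("outlet mean concentration:", prof[-1].mean())

A full implementation would replace the Cartesian second derivative with the cylindrical operator, use the actual velocity distributions of the retentate and dialysate phases, and couple the two streams through the concentration and flux conditions at the membrane wall.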
The availability of computing software facilitates the numerical solution for this problem, in which the node numbers in the The dialysis rate of the dialysis system with and without the ultrafiltration operation is defined asFurthermore, the dialysis rate improvement The velocity distributions of The two-dimensional theoretical model of the concentric tubular dialyzer with and without ultrafiltration operations was solved numerically using the Crank\u2013Nicolson method, for which the convergence tolerance of the numerical solutions is shown in The influences of the ultrafiltration rate A smaller volumetric flow rate of retentate phase results in a lower concentration distribution due to the increase in the residence time for transporting the solute from the retentate phase to the dialysate phase, as shown in The influence of ultrafiltration rates and the retentate phase flow rates on the average concentrations of both the retentate and dialysate phases are presented in Similarly, the influences of various ultrafiltration rates and volumetric flow rates of the dialysate phase on both the retentate and dialysate phases are presented in The average outlet concentrations of the retentate phase The effect of the channel thickness ratio The solute is transported through the membrane by two mechanisms in a dialysis module with ultrafiltration operation: (a) diffusion (caused by the concentration difference across the membrane) and (b) convection (caused by the ultrafiltration operation). The dialysis rate The results show that the dialysis rate For the same reason as refers to the increase in the convective mass-transfer coefficient, the dialysis rate The experimental work is cost prohibitive at the present time and there is only the available experimental data from the flat-plate dialyzer for confirmation. The present study could be analogized from those in our previous work on the flat-plate module ; this isdialyzer for compThe dialysis efficiency The dialysis efficiency The influence of the membrane sieving coefficient on the dialysis efficiency The dialysis rate improvements The dialysis rate improvements A concentric tubular dialyzer with ultrafiltration operation to augment the dialysis rate was investigated theoretically. Two-dimensional mathematical equations were developed and formulated to predict the dialysis efficiency and dialysis rate improvement in the dialysis-and-ultrafiltration system as compared with the module without the ultrafiltration operation, which was solved numerically using the Crank\u2013Nicolson method. There were two innovation points of this module design: (1) a two-dimensional mathematical model of the concentric tubular dialyzer was developed theoretically; (2) the simulated results show that the dialysis rate improvement was significantly improved with implementation of the ultrafiltration effect. 
A more direct method provides a straightforward strategy to the solution for determining the effect of the channel thickness ratio, which allows the specification setting by the designer to be met with economic consideration.Average concentration distributions of the dialysis system in the retentate phase decrease along the axial direction and with decreasing ultrafiltration rate whereas the average concentration of the dialysate phase increases with increasing ultrafiltration rate and the retentate phase flow rate.Both average concentration distributions in the retentate and dialysate phases decrease with the volumetric flow rate of the dialysate phase.The average concentration distribution of the retentate phase decreases with increases in both the membrane sieving coefficient and channel thickness ratio.The average outlet concentration of retentate phase increases with increasing retentate phase flow rate because the residence time is increased as well as with increasing ultrafiltration rate, channel thickness ratio and volumetric flow rate of the dialysate phase.The results show that the dialysis rate The dialysis efficiency The dialysis rate improvements Regarding the pure dialysis system without ultrafiltration operation, its dialysis efficiency could be readily enhanced by employing the module with ultrafiltration operation, especially for operations with lower volumetric flow rate of the retentate phase. A maximum dialysis rate improvement of up to twice was found in the module with the ultrafiltration rate The results demonstrate the technical feasibility of dialysis rate improvement in the tubular membrane dialyzer with ultrafiltration operation. It was also found that the concentrations of the components in the retentate could be removed to reduce solutes in wastewater treatment processes, which depend on the separation technique and the operational parameters in membrane-based separation processes. In this paper, both the dialysis rate and ultrafiltration operation were examined from an economic perspective by implementing various ultrafiltration rates in the tubular dialyzer. Therefore, the alternative membrane sieving coefficient, membrane material and ultrafiltration rate require further investigation regarding the economic considerations of the tubular dialyzer. It is believed that the availability of such a simplified mathematical formulation as developed here is the value in the present work in designing a concentric tubular dialyzer and will be an important contribution to the design and investigation of multi-stream or multi-phase problems with coupling mutual boundary conditions. One may follow the present theory and develop a mathematical formulation to deal with multi-stream or multi-phase heat- or mass-transfer devices for each particular application with various geometries."} +{"text": "The aim of the study is identification of correlations between the serum concentrations of iron and the risk of breast and/or ovarian cancer among female BRCA1 mutations carriers.The subjects selected for the trial were Polish women, positive for at least one of three founder mutations in BRCA1 gene dominating in Poland . Persons with detected tumor were considered as cases and the others were considered as controls. 
One case and two controls were paired regarding many criteria to achieve the maximum of similarity between them.The proportion of cases and control in the first quartile was taken as a reference to calculate the odds ratio, confidence interval and p-value of the multivariate conditional logistic regression.The iron was quantitatively measured by ICP-MS (Inductively Coupled Plasma Mass Spectrometry), .This study shows that concentration levels of iron in blood serum are a strong factors associated with an additionally increased risk of breast and ovarian cancer among BRCA1 mutation carriers.For iron concentration, all quartiles above the first one had a decreased risk of breast or ovarian cancer. The results are shown in Table Similarly, high ratios of iron to selenium were significantly associated with disease protection which is shown in Table"} +{"text": "The CASCADE study was a pragmatic cluster RCT, with integral process and economic evaluations, of a complex psycho-educational intervention for young people with diabetes. The study was funded by the NIHR-HTA and carried out by a multi-institutional team .The extensive integrated multi-method process evaluation was planned prospectively and ran for the four years of the trial. The aims of the process evaluation were to 1) assess the feasibility and describe the provision of the CASCADE intervention within a standard clinic setting for a diverse range of young people; and 2) build on and help explain trial outcome findings and provide information on how the intervention might be modified.The process evaluation used a range of both qualitative and quantitative methods including observations of education sessions; interviews with a sub sample of young people, parents and staff; questionnaires relating to views of the intervention; attendance data and case note review.Currently there is limited published guidance on how to conduct a high quality integrated process evaluation that improves the usefulness of trial findings. In this presentation, members of the CASCADE process evaluation research team will provide a detailed example of carrying out such an evaluation. The presentation will include; the justifications for the methodological approach used, descriptions of context and sampling, data collection and analysis methods. Attention will be drawn to the specific methodological processes and practical facilitators employed to maximize the integration of trial and process components."} +{"text": "The evidence for Neanderthal lithic technology is reviewed and summarized for four caves on The Rock of Gibraltar: Vanguard, Beefsteak, Ibex and Gorham\u2019s. Some of the observed patterns in technology are statistically tested including raw material selection, platform preparation, and the use of formal and expedient technological schemas. The main parameters of technological variation are examined through detailed analysis of the Gibraltar cores and comparison with samples from the classic Mousterian sites of Le Moustier and Tabun C. The Gibraltar Mousterian, including the youngest assemblage from Layer IV of Gorham\u2019s Cave, spans the typical Middle Palaeolithic range of variation from radial Levallois to unidirectional and multi-platform flaking schemas, with characteristic emphasis on the former. A diachronic pattern of change in the Gorham\u2019s Cave sequence is documented, with the younger assemblages utilising more localized raw material and less formal flaking procedures. 
We attribute this change to a reduction in residential mobility as the climate deteriorated during Marine Isotope Stage 3 and the Neanderthal population contracted into a refugium. When chipping stone to create sharp edged tools, there are a wide range of strategies that a knapper may employ. The factors influencing the choice of knapping strategy include downstream effects from the selection of particular types of stone and clast morphologies, as well as the cultural repertoire, foraging methods and mobility of the hominin group. Understanding knapping strategies can therefore inform us about several aspects of hominin behaviour. In this study we look at knapping strategies among a particularly iconic set of hominins: the Neanderthals of Gibraltar, who are both one of the most comprehensively studied and latest surviving of all Neanderthal populations.Homo sapiens over the last 100 thousand years. Gibraltar is home to some of the world\u2019s most significant Neanderthal sites. The region is historically significant as one of the first discoveries of Neanderthal skeletal remains was made in Forbes Quarry in 1848 The Rock of Gibraltar is a limestone klippe peninsula at the southern tip of Iberia and reprThe association between Mousterian technology and the Neanderthals is well documented across Europe and Gibraltar itself has played a role in establishing the link The Gibraltar caves have been subject to excavations by a number of teams. Previous studies of the lithic assemblages have documented artefact typologies, reduction sequences, spatial patterns and putative functions Various materials suitable for lithic manufacture are available on Gibraltar, the lowest quality of which is the limestone of The Rock itself. A quartzite outcrop occurs on the western side of The Rock, with primary sources of quartzitic sandstone available within 10km of Gibraltar In this article we examine artefacts from four different caves: Vanguard, Beefsteak, Ibex and Gorham\u2019s . Each caVanguard Cave is one of a series of caves on Governor\u2019s Beach, which is on the south-east side of Gibraltar. Optically Stimulated Luminescence (OSL) dating indicates Middle Palaeolithic occupation mainly took place during MIS 5, after which time the cave became filled with sand In the Middle Area of Vanguard cave three occupation horizons have been identified in situ knapping The intermediate occupation horizon contains two hammers of quartzite and one of sandstone, which, along with 46% of artefacts being smaller than 15 mm and the refitting of some chert flakes, suggests some In the upper occupation horizon the presence of two quartzite cobble hammerstones, one of which refits from two halves, suggests some knapping took place here The Northern Alcove in Vanguard Cave, which is approximately the same level as the three occupation horizons also contained artefacts of quartzite, chert (including red chert), and limestone, and is associated with a hearth \u00e9clat debordant on dark red chert, and were likely introduced as finished artefacts. It is suggested that some of these chert pieces were used as shucks for opening the associated shellfish 2 area, and they include refits and 997 artefacts <15 mm in maximum dimension. 
All these factors indicate that they represent a discrete knapping episode, with the low frequency of thermal modification showing that this took place after the associated fire had died down A hearth located in the upper part of Vanguard Cave, dated to 108.5 kya Beefsteak Cave is located near to Europa Point at the southern tip of Gibraltar. Uranium series dating of layer D, which overlies Middle Palaeolithic artefacts in layers C and B, produced a date of 98.8\u00b115.5 kya Ibex cave is located high on the eastern side of Gibraltar about halfway along the length of The Rock. Tooth enamel from a layer underlying Mousterian artefacts was dated using Electron Spin Resonance to 37 kya (early uptake (EU)) or 49 kya (linear uptake (LU)) Adjacent to Vanguard Cave on Governor\u2019s Beach is the larger Gorham\u2019s Cave. Micromorphology indicates that he cave was occupied intermittently by both hominins and hyenas Based on stratigraphy, the Gorham\u2019s Cave Mousterian sequence may be divided into six main phases. The lowermost phase contains few artefacts and is undated so will not be discussed further. The next phase comprises the upper Sands and Stony Lenses member (SSLm) which is divided into six subunits and may be correlated with Waechter\u2019s layers L, M, O and P Overlying the Sands and Stony Lenses member is the Lower Bioturbated Sands member (LBSm), which has numerous coarse and fine facies. Five radiocarbon dates place the age of this member at c. 47.5 kya in situ knapping, with 39% of flakes having cortex The next member going up the sequence is the Bedded Sands (BeSm), which date to around 46 kya The most recent Mousterian member in the middle area of Gorham\u2019s Cave is the Upper Bioturbated Sands member (UBSm). The three lower subunits of this member have Mousterian artefacts with radiocarbon dates for these subunits ranging from 45\u201334 kya Towards the back of Gorham\u2019s Cave new excavations have uncovered Mousterian artefacts in a young deposit known as Layer IV Here we statistically assess the lithic patterns described above, including the variation in reduction techniques and raw material exploitation across the Mousterian of the Gibraltar Caves. We examine the diachronic variation in raw material selection, flake platform preparation and core reduction technology through the Gorham\u2019s sequence.A total of 54 cores and assayed clasts from the four Gibraltar caves described above were examined to quantify patterns in stone reduction technology in the Gibraltar Mousterian. The Middle Palaeolithic cores of Gibraltar are typologically characteristic of Neanderthal technology elsewhere e.g. The largest flakes from Gibraltar are made on the non-local honey coloured chert. To assess the differences in the use of this chert in comparison to the local varieties available on The Rock we examined artefact type frequencies from Gorham\u2019s Cave . A chi-sIn Gorham\u2019s Cave a pattern was observed whereby the use of more local and coarser grained materials appears to increase through time. Limestone is available in the cave itself, while quartzite occurs both as beach cobbles and as a now submerged primary outcrop in front of the cave. Chert, whilst sometimes procured as small beach pebbles, is generally more spatially restricted on The Rock, with some chert even procured from inland. 
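The contingency-table comparisons applied to the Gorham's Cave assemblages (artefact-type frequencies by raw material, and coarse- versus fine-grained material between sequential levels) can be sketched with standard routines. The counts below are invented for illustration and are not the excavation data.

from scipy.stats import fisher_exact, chi2_contingency

# Hypothetical artefact counts: rows are two sequential members of the
# Gorham's Cave sequence, columns are coarse-grained (limestone + quartzite)
# versus fine-grained (chert) raw materials.
table = [[34, 56],   # older member: coarse, fine
         [61, 22]]   # younger member: coarse, fine

odds_ratio, p_fisher = fisher_exact(table, alternative="two-sided")
chi2, p_chi2, dof, expected = chi2_contingency(table)

print(f"Fisher's exact: OR = {odds_ratio:.2f}, p = {p_fisher:.4f}")
print(f"Chi-square: chi2 = {chi2:.2f}, dof = {dof}, p = {p_chi2:.4f}")

In practice each pair of sequential members would be compared in turn, as in the series of Fisher's Exact tests described below.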
We use a series of Fisher\u2019s Exact tests to compare the proportions of coarser grained limestone and quartzite, with finer grained chert , betweenPlatform preparation is a parameter of investment in flake production, with higher proportions of platform preparation reflecting more formal production of flakes. Using Fisher\u2019s Exact tests we assess the pattern of decreasing platform preparation through time in Gorham\u2019s Cave, by comparing the proportion of platform preparation in sequential levels . There iet al.It has been suggested that the occupation of the Layer IV of Gorham\u2019s Cave represents the early Upper Palaeolithic To obtain a statistical overview of the technological variation in the Gibraltar Mousterian we measured a suite of variables on the lithic cores. To put the Gibraltar cores in context we also measured cores from two classic Middle Palaeolithic sites: Le Moustier in France, the type site of the Mousterian, and Tabun Layer C, one of the best known Levantine Middle Palaeolithic assemblages. The variables measured were as follows: the percent of cortex remaining on the core; the number of flake scars; the proportion of blade scars; the length to width ratio of the core ; the width to thickness ratio of the core; the ratio of the proximal width to the distal width of the core; the lateral and distal curvature of the upper surface; the relative intersection height of the main flaking surface and the underlying surface; the mean platform angle; the number of platforms; the proportion of the perimeter of the upper core face that was faceted; the proportion of the core face covered by the length of the largest scar; and the scar pattern angle of the upper and lower faces Four components were extracted with Eigenvalues over 1, hence these factors explain a greater proportion of the variance in the input variables than any individual input variable. The first two components accounted for 28.9% and 18.5% of the variance respectively, so almost half the variance in the input variables is explained by these two components. The component matrix shows thA scatter plot of the first two principal components shows how the cores from each assemblage are distributed . Most ofThe Mousterian technology of Gibraltar documents the use of the caves by Neanderthal populations during MIS 3, 4 and 5. The homogeneity in technology in Beefsteak and Ibex Caves and the presence of refits in the small assemblage from the latter, suggests that the occupation of these caves may be ascribed to single episodes. On the other hand, Vanguard Cave contains a longer sequence of stratified occupations with a greater variety of lithic assemblages; yet, low artefact densities and the presence of refits illustrates that individual occupations were relatively short-term. This accords with the evidence that the hearths at Vanguard were either used once, or used, abandoned and later reused In general we may describe three technological strategies employed by the Neanderthals of Gibraltar. The most formal involves Levallois reduction of large clasts of honey coloured chert from inland Iberia. Large Levallois flakes and some Levallois cores in this honey chert were then selected and carried over a distance of at least 17km to Gibraltar. The intermediate strategy comprises the exploitation of chert and quartzite from outcrops on and around The Rock by Levallois and discoidal reduction techniques, often with platform preparation. 
The third strategy involves the expedient single and multiplatform reduction of quartzite cobbles and chert pebbles from the beaches in front of the caves, or even using the limestone of the caves themselves. All three strategies are evident in the earliest dated occupation phases on Gibraltar from Vanguard Cave. The ephemeral Beefsteak and Ibex Cave occupations are characterised by the intermediate strategy. In Gorham\u2019s Cave there appears to be a diachronic trend with the earlier levels focussed on the more formal strategies; then a shift towards expedient strategies in the later levels, with no non-local honey coloured chert unknown in the final phase of Mousterian occupation.The formal cores are significantly smaller and have higher flake scar densities than the informal cores, indicating they were more heavily worked. The three strategies appear to reflect different mobility patterns as the most formal technology is practised on the non-local material and the most expedient technology is used on the most immediately available material. Several researchers have correlated expedient forager technology with low mobility and formal forager technology with high mobility e.g. A GIS analysis of the Southern Iberian Mousterian showed that sites are concentrated both near the coast and along major rivers The optimal area for Late Pleistocene hominin occupation in southern Iberia, with the highest rainfall and temperature, and the greatest stability and diversity, would have been Gibraltar and its immediate environs Bio-climatic modelling indicates that the favoured habitats of the southern Iberian Neanderthals became fragmented during MIS3 separating coastal and upland populations Homo sapiens c. 37kya, there was also a reduction in Neanderthal range size with far fewer exotic materials being exploited than in the earlier Middle Palaeolithic Parallels may be found with MIS3 Neanderthals populations elsewhere. In the southern Caucasus the environment was stable and diverse, like Gibraltar, and also did not suffer the MIS3 deterioration to the same extent as surrounding regions sensu Henrich, Gibraltar has been hypothesized to be one of the last refuges of the Neanderthals with a date of 28 kya for the youngest Mousterian occupation in Layer IV of Gorham\u2019s Cave The Mousterian record from the Gibraltar caves provides a rich sequence of Neanderthal occupation in an optimal habitat. The high biodiversity and stability of the Gibraltar climate may have allowed this region to act as a refugium for the last surviving Neanderthals"} +{"text": "P-values obtained remain valid. In any case, this mistake does not affect the results and conclusions of the paper.The authors of the above research article have informed the journal that an error occurred during assembly of the graphs shown in Figure"} +{"text": "In the present century both basic research in orchid flower evo-devo and the interest for generating novel horticultural varieties have driven the characterization of many members of the MADS-box family encoding key regulators of flower development. This perspective summarizes the picture emerging from these studies and discusses the advantages and limitations of the comparative strategy employed so far. I address the growing role of natural and horticultural mutants in these studies and the emergence of several model species in orchid evo-devo and genomics. 
In this context, I make a plea for an increasingly integrative approach.The diverse morphology of orchid flowers and their complex, often deceptive strategies to become pollinated have fascinated researchers for a long time. However, it was not until the 20th century that the ontogeny of orchid flowers, the genetic basis of their morphology and the complex phylogeny of Orchidaceae were investigated. In parallel, the improvement of techniques for The unique diversification of flower morphology in Orchidaceae has taken place in the framework of a relatively conserved structure. Generally orchid flowers consist of three outer tepals similar to each other, two distinct inner lateral tepals and a highly differentiated inner median tepal or labellum Figure . Female Arabidopsis thaliana and some non-model species like Tulipa gesneriana and Lilium regale.Because of the key role of the gynostemium and labellum in orchid reproduction their origin has been a recurring question in botany and evolutionary biology since the 19th century. The finding that flower organ identity is specified by the genetic and physical interaction of MADS domain transcription factors are expressed mostly in the gynostemium and in some instances also in the perianth are reproducibly expressed in the gynostemium and ovary isolated so far are expressed in all flower organs , DcOAG1 (AG-like) or OMADS6 (SEP3-like) to actinomorphy in the outer perianth was associated to the transformation of outer tepals into organs resembling inner-lateral tepals and labellum, thus suggesting this gene is involved in the specification of the inner perianth and CeMADS2 (DcOAG1-like) showed that while both genes are expressed in the gynostemium and buds of wild-type flowers, CeMADS1 is not expressed in developing buds of the multitepal mutant. This study thus suggests that CeMADS1 is a class C gene and both CeMADS1 and CeMADS2 are not functionally redundant in the specification of gynostemium identity.More recently, the study of floral terata from glyp mutant of Phalaenopsis hyb. \u201cCD1\u201d the inner lateral tepals bear ectopic pollinia and their epidermal cells are morphologically intermediate between those of wild-type tepals and those of the gynostemium and PeMADS7 was detected exclusively in the column of the wild-type flowers, while only PeMADS1 was detected in the gynostemium-like inner lateral tepals of the glyp mutant. While this study suggests PeMADS1 have been generated for these species advance ten orchid species as candidate models from each of the five Orchidaceae subfamilies as well as the sister group Hypoxidaceae: Apostasia shenzhenica and Neuwiedia malipoensis (Apostasioideae); Vanilla shenzhenica and Galeola faberi (Vanilloideae); Paphiopedilum armeniacum and Cypripedium singchii (Cypripedioideae); Habenaria delavayi and Hemipilia forrestii (Orchidoideae); Cymbidium sinense as well as the previously mentioned Phalaenopsis equestris (Epidendroideae) and Sinocurculigo taishanica (Hypoxidaceae) . Recently, the chloroplast genome of E. pusilla and a transcriptome have been sequenced based on Arabidopsis thaliana and Antirrhinum majus as well as from research on other monocot species like Tulipa gesneriana and Lilium regale. 
On the other hand, this comparative approach and the technical limitations to genetically manipulate orchids have set important challenges to functionally approach the genetic basis of orchid flower development.At its beginnings orchid flower evo-devo greatly profited from knowledge on well- established model species like The systematic morphological and molecular characterization of flower terata offers a way around these limitations and has enabled the formulation of several testable models based on the large amount of information on class B MADS-box genes, the most studied developmental genes in this family.Arabidopsis thaliana (Smyth et al., The growing amount of transcriptomic information in a diverse group of orchid species calls for a second wave of integration and comparative analysis at an unprecedented scale. While the apparent number of \u201cendless forms most beautiful,\u201d the sinking prices of RNA-seq and the competitive nature of scientific endeavor might tempt us to sequence \u201cyet another orchid transcriptome\u201d the most significant advances on this subject will come from systematically integrating all available information and testing our findings experimentally by means of unifying developmental and evolutionary hypotheses and models. This process requires not only sharing and comparing information but also the agreement on common concepts for the developmental processes we are investigating. For example, because most studies describe orchid flowers buds based on their size it is not possible to make an objective comparison of transcriptomes or other patterns of gene expression within and between species. An alternative would be that for every species with a transcriptome a description of discrete stages of its development is generated and considered in the design of future expression studies as it is routinely done for Because of the prevalent occurrence of gene duplication in orchids the value of gene phylogenies and profiles of gene expression strongly depends on considering as many known duplicates as technically possible. By doing so it is possible to objectively compare studies and minimize the artifacts coming from simultaneously measuring the expression of highly similar paralogs.Because orchid evo-devo is a relatively young area there are still many major challenges to overcome. In the near future the vitality of its research program depends on the consolidation of one or several model species amenable to genetic manipulation or with a rapid life cycle that enables the fruitful integration of genetic analysis and transcriptomic resources. In the long run, the scientific relevance and reach of orchid evo-devo will rely on its contribution to understanding orchid ecology and evolution in questions like the interaction between environmental variables, pollinators and the activity of developmental transcription factors.The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Plasmodium falciparum infections within the urban extent of Khartoum state in Sudan is investigated using data from cross-sectional surveys undertaken from 1999 to 2008 to inform the Khartoum Malaria Free Initiative (KMFI).Identifying the location and size of residual foci of infections is critical where malaria elimination is the primary goal. Here the spatial heterogeneity of P. falciparum parasites. Residential blocks were mapped. 
Data were analysed for spatial clustering using the Bernoulli model and the significance of clusters was tested using the Kulldorff scan statistic. From 1999\u20132008 the KMFI undertook cross-sectional surveys of 256 clusters across 203 random samples of residential blocks in the urban Khartoum state in September of each year. Within sampled blocks, at least five persons, including at least one child under the age of five years, were selected from each household. Blood smears were collected from the sampled individuals to examine the presence of P. falciparum parasites. A total of 128,510 malaria slide examinations were undertaken during the study period. In 1999, overall prevalence was 2.5%, rising to 3.2% in 2000 and consistently staying below 1% in subsequent years. From 2006, over 90% of all surveyed clusters reported no infections. Spatial clustering of infections was present in each year but not statistically significant in the years 2001, 2002, 2004 and 2008. Spatial clusters of high infection were often located at the junction of the Blue and White Niles. Persisting foci of malaria infection in Khartoum are likely to distort wide area assessments and disproportionately affect future transmission within the city limits. Improved investment in surveillance that combines both passive and active case detection linked to a geographic information system, together with a more detailed analysis of the location and stability of foci, should be undertaken to facilitate and track malaria elimination in the state of Khartoum. Spatial heterogeneity in risk of malaria infection is regarded as a significant driver of the basic reproduction rate of transmission in any endemic area but becomes increasingly important as the overall intensity of transmission declines. Here, the spatial heterogeneity of Plasmodium falciparum infections across 256 cross-sectional surveys undertaken from 1999 to 2009 within the urban extent of Khartoum state in Sudan is examined to inform the Khartoum Malaria Free Initiative established in 2002. Techniques that detect the presence of statistically significant small-area clusters are often used to assess local heterogeneity of disease. Anopheles arabiensis is the main vector of malaria. Khartoum state is one of the 26 states in Sudan with a total population of more than 5 million people in an area of approximately 28,000 km2
Second, this model allowed for locations that are always of high malaria prevalence relative to other locations/years to contribute to the spatial clustering, a particularly important advantage given the generally low prevalence of the survey locations throughout the study period. The Bernoulli model requires the case and control data, represented respectively by P. falciparum positive and negative samples, and the spatial location for each case and control for each survey year P. falciparum infection and the P-value of the Kulldorff scan statistic.The Kulldorff spatial scan statistic Ethical approval for this study was obtained from the National Research Ethics Committee of the Federal Ministry of Health. Formal permission was obtained from the Khartoum malaria control program of the State Ministry of Health. Written consent was sought from all participants and parents/guardians of young children before blood samples were collected.From 1999\u20132008 a total of 128,510 malaria slide examinations from cross-sectional surveys in 256 sampled clusters in 203 residential blocks were undertaken by the Khartoum national malaria programme . The numP. falciparum prevalence in each year from 1999\u20132008 The analysis here demonstrates the utility of spatial cluster analysis techniques to help identify possible residual foci of infections in an area of very low malaria transmission. The observed heterogeneity of risk therefore implies that clusters of high prevalence contribute disproportionately to wide area mean estimates of infection prevalence and is likely to be the main reservoirs of continued transmission P. vivax cases was observed across the study period [unpublished data] there is need for a much better understanding of the burden of this parasite. This is particularly important given the challenges to elimination posed by the hypnozoite, liver stage of the parasite life cycle, which stays dormant for long periods and a single infection can lead to a series of relapses Furthermore, although only one case of Surveys of residential blocks and enhanced clinical surveillance should be improved to define progress and possible success in malaria elimination. The declining levels of infection prevalence will render community cross-sectional parasite surveys inefficient in monitoring transmission over time and expanding them to the required sample sizes will become very expensive This study demonstrates the potential of cluster analysis techniques in identifying the location and radius of spatial clusters of malaria infection which are likely to be residual foci of infections. The results provide evidence that the KMFI has reached a level of measurable success however the remaining foci are likely to distort wide area assessments and disproportionately affect future transmission within the city limits. Data of higher spatial and temporal resolution with detailed series of individual, household and cluster level predictors of infections are required to for a comprehensive assessment of the location and stability of residual foci of infections in Khartoum state. 
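For readers unfamiliar with the method, the following is a minimal, self-contained sketch in Python of the Bernoulli-model spatial scan statistic described above. The arrays of cluster coordinates, positive slides and people examined are hypothetical placeholders, the scan is restricted to circular windows grown around each survey location, and Monte Carlo significance testing (as implemented in SaTScan-style analyses) is only indicated in a comment; this illustrates the likelihood-ratio calculation rather than reproducing the analysis performed by the authors.

```python
import numpy as np

def bernoulli_llr(c, n, C, N):
    """Log-likelihood ratio of a candidate window with c cases among n people,
    given C cases among N people in the whole study area (Bernoulli model)."""
    def xlogy(x, y):
        return 0.0 if x == 0 else x * np.log(y)
    p_in, p_out = c / n, (C - c) / (N - n)
    if p_in <= p_out:                     # scan only for high-prevalence clusters
        return 0.0
    l_alt = (xlogy(c, p_in) + xlogy(n - c, 1.0 - p_in) +
             xlogy(C - c, p_out) + xlogy((N - n) - (C - c), 1.0 - p_out))
    p0 = C / N
    l_null = xlogy(C, p0) + xlogy(N - C, 1.0 - p0)
    return l_alt - l_null

def most_likely_cluster(xy, cases, totals, max_fraction=0.5):
    """Brute-force circular scan over an (m, 2) array of coordinates: grow a
    window over the nearest neighbours of every location and keep the window
    with the largest log-likelihood ratio."""
    C, N = int(cases.sum()), int(totals.sum())
    best_llr, best_window = 0.0, None
    for i in range(len(xy)):
        order = np.argsort(np.hypot(*(xy - xy[i]).T))   # locations by distance from i
        c = n = 0
        for j in order:
            c += int(cases[j]); n += int(totals[j])
            if n > max_fraction * N:
                break
            llr = bernoulli_llr(c, n, C, N)
            if llr > best_llr:
                best_llr, best_window = llr, (i, j, c, n)
    # The P-value is then obtained by Monte Carlo: repeatedly redistribute the C
    # cases at random among the N individuals and recompute the maximum LLR.
    return best_llr, best_window
```

The window with the largest log-likelihood ratio is the most likely cluster; its observed ratio would then be compared with the maxima obtained from many random relabellings of cases and controls to obtain the P-value of the scan statistic.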
Bespoke versions of the cluster analysis techniques could be developed for applications to new forms of clinical case detection data with sufficient investment in health information systems."} +{"text": "The influence of isotropic and anisotropic properties of membrane constituents (nanodomains) on the formation of tubular membrane structures in a two-component vesicle is numerically investigated by minimization of the free energy functional based on the deviatoric-elasticity model of the membrane. It is shown that the lateral redistribution and segregation of membrane components may induce a substantial change in membrane curvature, resulting in the growth of highly curved tubular structures. The shape of lipid bilayers, cellular or artificial, strongly depends on the composition and lateral distribution of membrane components. Except for the sake of simplicity, there is no a priori reason to consider membrane constituents/nanodomains to be isotropic [6, 7] instead of anisotropic; the latter actually represents a more general approach. Not only proteins and/or protein-lipid complexes but also lipid molecules should in general be considered anisotropic. Tubular membrane protrusions have been attributed to elongated inner stiff supporters (e.g. microtubules), to external pulling forces (such as optical tweezers), and to anisotropic properties of membrane components. For example, the membrane-attached crescent-shaped BAR domain proteins have a clearly anisotropic shape and therefore their energy depends on their local orientation, or statistically averaged local orientation, with respect to the local curvature of the membrane. Coupling between the cell/liposome shape and the non-homogeneous lateral distribution of membrane components has also been considered previously. The aim of this work is to study the influence of anisotropy of membrane nanodomains on the shape transformations and lateral segregation of membrane components in two-component axially symmetric vesicles of fixed topology. Special attention is devoted to the stability and growth of tubular membrane structures with thin tubular protrusions having small spherical vesicles at their free tips. The model vesicles are built up of two components (A and B) and their shapes are obtained numerically by the direct minimization of the free energy functional of the membrane under the constraint of constant vesicle surface area. In this work, we have investigated under what conditions the formation of thin tubular structures is favorable, with special examples of such systems observed in experiments. The chosen vesicle radius in the calculations was of the order of 250 nm. The calculations were performed for different values of the bending rigidity of each component. When both components are isotropic, the vesicle is composed of small spherical beads connected by narrow passages, such as the first vesicle shown in the figures. In the membrane systems encountered in nature a small number of membrane components (the minority) is usually strongly anisotropic, while a much larger number of membrane components (the majority) is considerably less anisotropic, or isotropic. The anisotropic BAR domain proteins attached to the bilayer membrane are one such example. The cylindrical protrusions are formed when at least one component is anisotropic. Such behavior is demonstrated by the calculations. It is interesting to note that a very small amount of the anisotropic component is enough to induce the formation of the cylindrical protrusion. Moreover, the length of the protrusion depends on the concentration.
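As a schematic illustration of the deviatoric-elasticity picture introduced above, the short Python sketch below evaluates the orientation-minimised bending energy of a single membrane component with prescribed intrinsic principal curvatures on a flat patch, a sphere and a cylinder. The functional form shown (a mean-curvature term plus a curvature-deviator term, minimised over the in-plane orientation of the component) and all numerical constants are simplifying assumptions for illustration only, not the exact energy functional or parameter values minimised by the authors.

```python
import numpy as np

def component_energy(C1, C2, C1m, C2m, K=1.0, Kstar=1.0):
    """Bending energy (arbitrary units) of one membrane component with intrinsic
    principal curvatures (C1m, C2m) on a membrane patch with actual principal
    curvatures (C1, C2), after minimising over the in-plane orientation of the
    component. H is the mean curvature and D the curvature deviator."""
    H, D = 0.5 * (C1 + C2), 0.5 * abs(C1 - C2)
    Hm, Dm = 0.5 * (C1m + C2m), 0.5 * abs(C1m - C2m)
    return 0.5 * K * (H - Hm) ** 2 + Kstar * (D - Dm) ** 2

c0 = 1.0                                     # intrinsic curvature scale (hypothetical)
geometries = {"flat patch":          (0.0, 0.0),
              "sphere (R = 2/c0)":   (0.5 * c0, 0.5 * c0),
              "cylinder (r = 1/c0)": (c0, 0.0)}

for name, (C1, C2) in geometries.items():
    e_iso = component_energy(C1, C2, 0.5 * c0, 0.5 * c0)   # isotropic component
    e_ani = component_energy(C1, C2, c0, 0.0)              # anisotropic component
    print(f"{name:20s} isotropic: {e_iso:.3f}   anisotropic: {e_ani:.3f}")
```

With these assumptions the isotropic component is lowest in energy on the sphere while the anisotropic component is lowest on the cylinder, which is the qualitative mechanism behind the segregation of anisotropic components into the thin tubular protrusions discussed here.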
This is due to the separation of the components in the membrane, where the anisotropic components are located mainly in the tubular part which has a very small surface area compared to the rest of the vesicle. It is interesting to note, that total component segregation was observed for The calculations presented in The complete mixing was observed for small concentration of the anisotropic component, The anisotropy of one of the components is not however a sufficient condition for the formation of the tubular structures. We have also observed that cylindrical protrusions may be induced by changing the properties of the isotropic component. It is demonstrated in In the systems in which the cylindrical tubules are created when the proteins are adsorbed at the membrane surface, the radius of the tubule is determined by the intrinsic curvature of the protein stable tubular protrusions. When the components are isotropic such cylindrical structures may be created only when some external force is applied. For example when membrane is pushed by growing microtubules or pulled by molecular motors. The width of the tubes depends on the intrinsic curvatures of anisotropic components. When the membrane is composed of isotropic components the stable protrusions which are created without any external force are built of a series of connected beads.We have shown that accumulation of anisotropic components may lead to the formation of thin tubular protrusions. The anisotropy of components is a necessary condition for creation of the In the model the membrane is composed of two components A and B which can be either isotropic or anisotropic and are characterized by the intrinsic principal curvatures For simplicity we assume linear dependence of the bending rigidity The contour of the vesicle is parametrized by the angle, tion see [24], [3The ansatz for the local relative concentration of the component A has the form :see also In numerical calculation we have to find both the function"} +{"text": "Chondrolysis of the ankle is a very rare condition. We report a case of chondrolysis of the ankle following ankle arthroscopy and microfracture of the osteochondral lesion of the talar dome. The patient's symptoms were relieved after articulated distraction arthroplasty. Chondrolysis is a clinical condition characterized by rapid destruction of articular cartilage on both sides of the joint leading to loss of joint space and joint stiffness. The cause has not truly been identified . ChondroA 32-year-old gentleman had inversion injury to his left ankle on 2007 resulting in persistent medial ankle pain. He was treated with physiotherapy without improvement. Radiographs and magnetic resonance imaging (MRI) of his left ankle showed the presence of osteochondral lesion (OCL) of the medial talar dome . Ankle aChondrolysis is characterized by progressive loss of joint space and increased stiffness resulting from destruction of the articular cartilage on both sides of the joint . It is aThe pain is usually out of proportion to the clinical findings, leading to a misdiagnosis of complex regional pain syndrome. This condition should be suspected in patients complaining of persistent, unrelenting pain, stiffness, and severe diffuse articular cartilage loss, within twelve months after an operation or potential cartilage insult to the ankle joint. 
The clinical symptoms should generally exceed a comparable amount of joint destruction in an otherwise chronic condition, for which the patient has likely had more time to adapt to the damage of the joint cartilage. Proposed etiologic factors can be classified into four categories: (1) thermal (e.g., laser), (2) chemical, (3) mechanical, or (4) other factors. Evidence on the optimal treatment of chondrolysis of the ankle is inconclusive because of its rarity. Although total joint arthroplasty remains the gold standard for the treatment of extensive articular cartilage damage, nonarthroplasty options are reasonable for those with chondrolysis of the ankle as the patient is frequently of young age with high physical demands."} +{"text": "Nodular thyroid disease affects 500 to 600 million people worldwide. Tumours of the thyroid account for about 1% of all human cancers. Thyroidectomy is the most common endocrine operation. Surgical treatment for benign thyroid nodules is recommended for progressive increase in nodule size, substernal extension, compressive symptoms in the neck region, the development of thyrotoxicosis, and in cases where the patient prefers surgical treatment. In Poland thyroidectomy is the fourth most frequently performed surgical procedure, with about 25,000 operations yearly. Reducing surgical injury while retaining the current safety and radical nature of the procedure forces the surgeon to work in a relatively small operating field. Electric devices enabling full and lasting haemostasis during thyroidectomy are supplanting traditional surgical methods with no impact on the incidence of perioperative complications, while at the same time allowing the duration of the procedure to be shortened. The haemostatic effect is associated with the generation of heat, which apart from the intended result may bring about thermal tissue injury. During thyroidectomy it is therefore important to determine the thermal spread around the active tip of electric devices in the operating field, and the safe temperature range needed to protect important structures of the neck. The mean safe distance of the active tip of an electric device from important anatomic structures is at least 5 mm and depends on the device type, duration of use and power settings. All the modern techniques of vessel sealing are associated with the generation of heat and its spherical spread, which causes thermal injury to the surrounding tissues. Their mode of operation, based among other things on structural changes in collagen and elastin, leads to lasting joining of sealed vessel walls and tissue structures. These systems enable safe sealing of vessels of up to 7 mm in diameter. In conclusion, in the thyroidectomy techniques analysed by the author, it is recommended to replace electric devices with ligatures, clips or human fibrinogen in places close to the laryngeal nerves, parathyroid glands and the trachea. The decision on changing the method of maintaining haemostasis in the vicinity of crucial structures belongs to the surgeon."} +{"text": "To control this disease, funding and research should be prioritized on the basis of determined needs.
Although Rift Valley fever is a disease that, through its wider societal effects, disproportionately affects vulnerable communities with poor resilience to economic and environmental challenge, Rift Valley fever virus has since its discovery in 1931 been neglected by major global donors and disease control programs. We describe recent outbreaks affecting humans and animals and discuss the serious socioeconomic effects on the communities affected and the slow pace of development of new vaccines. We also discuss the mixed global response, which has largely been fueled by the classification of the virus as a potential bioterrorism agent and its potential to migrate beyond its traditional eastern African boundaries. We argue for a refocus of strategy with increased global collaboration and a greater sense of urgency and investment that focuses on an equity-based approach in which funding and research are prioritized by need, inspired by principles of equity and social justice. Since Rift Valley fever virus (RVFV) was first identified in 1931, after an investigation of an epizootic among sheep on a farm in the Great Rift Valley of Kenya, the understanding of this zoonotic disease has grown considerably (Although the disease disproportionately affects vulnerable communities with low resilience to economic and environmental challenges, RVF has remained largely neglected by major global donors and disease control programs. With high numbers of competent vector species present in disease-free regions, the intensification of international trade in live animals, and the uncertain effects of climate change, RVF is now considered a major challenge in global zoonotic disease control (The potential of RVFV to migrate was established after large outbreaks of RVF occurred among animals and humans in Egypt in 1977, in other geographic zones of Africa, and then outside the African continent in Saudi Arabia and Yemen in 2000 (The Table further demonstrates the spread of the disease; 7 of 9 major outbreaks in the past 15 years resulted in human cases outside the Rift Valley region in East Africa. The Table also highlights the difficulty of developing adequate surveillance systems and therefore the difficulty of accurately estimating morbidity and mortality rates for human populations in resource-poor settings. In the 5 outbreaks for which estimated numbers of human cases have been published, \u2248339,000 infections are believed to have occurred. In the 4 outbreaks for which estimated and reported cases are documented, numbers of estimated cases are 78\u00d7 higher than numbers of reported cases (There is a paucity of studies that have examined the socioeconomic effects of past outbreaks of RVFV, which reflects a lack of research focus on the broader social effects of the disease. One study that did examine the socioeconomic effects of the 2006/2007 RVFV outbreak in Kenya highlighted the concern that the outbreak had tended to disproportionately affect impoverished pastoralist communities, with those in the North Eastern Province of Kenya being hardest hit (The ban of livestock imports to the Middle East from East Africa, instituted after the 1997/1998 RVFV outbreak in Kenya and Somalia, particularly affected the export trade out of Somalia. The ban was variably enforced by several Middle Eastern countries but most notably by Saudi Arabia, which imports large numbers of ruminants for the annual Hajj pilgrimage. 
In 1997, the year before the onset of the ban, 2.8 million live animals were exported from the Somaliland port of Berbera, making it the single biggest exporting port for ruminants in the world that year. With the livestock trade accounting for 65% of gross domestic product in Somaliland, the export ban had a devastating effect on a region already suffering in the grip of a protracted civil war (The slow pace of development of new vaccines and diagBefore modern safety standards were instituted in laboratories, RVFV was regularly transmitted between laboratory staff; 47 cases were documented worldwide (Fortunately, with the advent of recombinant genetic technology and the development of reverse transcription PCR techniques obviating the need to handle and store live virus, new vaccines and diagnostic tests in development can now be produced in laboratories of lower BSL (Interest in RVFV and investment in its control were only substantially increased among the global health research and policy community after greater awareness of its potential to migrate beyond its traditional East African boundaries was noted. However, the recognition that much of the industrialized world has animals and arthropod vectors capable of transmitting the virus seems to have focused and accelerated efforts to develop improved tools for outbreak forecasting, monitoring, diagnosis, and prevention.In more recent years, the classification of the virus as a potential bioterrorism/agroterrorism agent has also helped spur investment and activity, particularly in the area of vaccine development and diagnostics (Growing restrictions stemming from biosecurity concerns now affect research activity across a range of infectious diseases and have most recently been highlighted by concerns over the publication of research into the production of genetically engineered variants of the influenza A subtype H5N1 virus (Increased sales costs of vaccines have a variety of negative consequences; in particular, this increase could put at risk well-established mechanisms of international cooperation in global infectious disease surveillance. This risk was dramatically highlighted in 2006 and 2007 when Indonesia refused to share samples of influenza subtype H5N1 isolates with the World Health Organization. The event caused a risk to global health and occurred in direct protest to the inequitable sharing of virus samples and vaccine development technology (Despite some of these challenges, some positive developments have occurred in global collaborative efforts for controlling zoonotic diseases, including RVFV. These include initiatives like the One Health (In recent years, the perceived risk of RVFV becoming established in Europe and North America, and the theoretical risk of it being used as a bioterrorism agent, has brought a welcomed increase in investment to combat the disease yet has skewed priority areas of focus for that investment. The ideal that should be adopted is a more equity-based approach in which funding and research are prioritized on a needs-identified basis for the aid of those most disadvantaged in the global community. This approach would concentrate efforts on those interventions that most positively affect these vulnerable communities and, in addition, prevent or minimize the spread of the disease to previously non\u2013disease-endemic high-income countries.Such an approach would ensure research and policy emphasis on the socioeconomic effects of RVFV outbreaks. 
Interventions could then address international trade policies and their ramifications on livestock trade and the development of appropriate support systems within exporting countries to mitigate and minimize the risk of bans being instituted. In addition, encouraging farmers to focus their livestock-rearing efforts on breeds more resistant to infection with RVFV and a greater study of the genetic factors that make these breeds resistant should also be promoted as part of this global effort. Developing better surveillance systems is key.Fears of RVFV being used as a bioterrorism agent should not sideline the real security effects of the disease in driving impoverished communities to find other, more dangerous means of income. Did the bans on livestock from Somalia, for instance, and the resulting lost economic opportunities afforded by a well-developed functioning ruminant export market, contribute to the drive of persons and communities to seek alternative sources of income, including taking part as combatants in the civil war in or in the piracy trade that has developed in the region? Are the stringent measures being imposed on laboratories that store or work with the virus serving to concentrate technical expertise and industrial know-how in the hands of scientists in a very few industrialized countries, thus contributing to limited scientific inquiry and collaboration, which further escalates costs? Although these questions are yet to be answered conclusively, exploring the case for lowering current BSL requirements of laboratories and production facilities could be 1 method of mitigating these costs.A greater sense of urgency and investment is required for controlling, better managing, and preventing future large-scale outbreaks of RVFV. Future long-term success lies in building on global collaborative initiatives, the closer integration of multilateral agencies, and a wider participation from livestock-importing countries and emerging economies that are investing in RVFV-endemic countries. A worldwide strategy, both in tune with and inspired by principles of equity and social justice, could ultimately deliver the best outcomes in combating this neglected tropical disease.Rift Valley fever vaccine development."} +{"text": "Our understanding of the mechanisms governing the response to DNA damage in higher eucaryotes crucially depends on our ability to dissect the temporal and spatial organization of the cellular machinery responsible for maintaining genomic integrity. To achieve this goal, we need experimental tools to inflict DNA lesions with high spatial precision at pre-defined locations, and to visualize the ensuing reactions with adequate temporal resolution. Near-infrared femtosecond laser pulses focused through high-aperture objective lenses of advanced scanning microscopes offer the advantage of inducing DNA damage in a 3D-confined volume of subnuclear dimensions. This high spatial resolution results from the highly non-linear nature of the excitation process. Here we review recent progress based on the increasing availability of widely tunable and user-friendly technology of ultrafast lasers in the near infrared. We present a critical evaluation of this approach for DNA microdamage as compared to the currently prevalent use of UV or VIS laser irradiation, the latter in combination with photosensitizers. Current and future applications in the field of DNA repair and DNA-damage dependent chromatin dynamics are outlined. 
Finally, we discuss the requirement for proper simulation and quantitative modeling. We focus in particular on approaches to measure the effect of DNA damage on the mobility of nuclear proteins and consider the pros and cons of frequently used analysis models for FRAP and photoactivation and their applicability to non-linear photoperturbation experiments. The DNA damage response plays a crucial role in oncogenesis or the intercalating dye Hoechst, exposure of cells to UVA illumination leads to single and double strand breaks required for these transitions are delivered via ultrashort pulses (ps to fs) thus limiting the average laser power to levels compatible with cell viability. For non-linear excitation, the sum of the energy of the incoming photons has to match the definite energy gap between two electronic states. For DNA bases, the maximum of linear absorption lies at 260 nm and excitation at this wavelength leads to the formation of UV-photoproducts, as mentioned above. Hence, the same type of lesion can be generated by irradiating cell nuclei with femtosecond pulses at a wavelength of 780 nm, corresponding to the simultaneous absorption of three near-infrared photons excitation is the tool of choice. First described in theory by Maria G\u00f6ppert-Mayer in 1931 and demonstrated experimentally by Kaiser and Garrett in 1961, this process relies on the simultaneous absorption of multiple photons at very high photon densities, as they are present within the focus of the objective lens causes a temperature rise of 0.2 K due to linear absorption , and, to a lesser extent, photoactivation, have yielded important insights into the dynamics and binding properties of histones in intact, native chromatin Kimura, . MobilitFluorescence photoactivation has also been combined with laser microirradiation to study local changes of chromatin structure due to DNA strand breaks . These simplifications have enabled analytical solutions of the reaction-diffusion model, exactly describing the redistribution of the fluorescence signal to equilibrium as a function of time. The basic assumptions that are employed for this type of approach are the following:The system's properties remain close to unperturbed, i.e., the number and the distribution of binding sites do not change during the observation period. In addition, binding sites are assumed to be homogenously distributed within the nucleus. The latter does not hold true for chromatin proteins because chromatin density varies between subnuclear regions .The photobleached/photoactivated volume has a uniform extension along the z-dimension. According to this assumption intensity changes introduced by the photobleaching/-activating laser do not vary along the optical axis, and the system can be described by a two-dimensional model.The finite dimension and the geometry of the cell nucleus can be neglected. This simplification applies only when the photobleached/photoactivated spot is small as compared to the nuclear volume.Numerous studies have undertaken quantitative analysis of FRAP and photoactivation data of nuclear proteins under undisturbed conditions . In general, a reaction-diffusion model is assumed to be the best mathematical description of the dynamic behavior of these proteins. According to our current understanding, nuclear proteins diffuse stochastically in all three dimensions within the nucleoplasm until they collide with binding partners with whom they undergo transient interactions. 
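To make the reaction-diffusion picture sketched above concrete, the following Python fragment integrates a minimal one-dimensional two-state model of FRAP numerically: a freely diffusing pool exchanges with an immobile bound pool, a stripe is bleached, and the summed fluorescence in the stripe is followed over time. All parameter values, the 1D geometry and the explicit finite-difference scheme are illustrative assumptions and do not correspond to any particular protein or to the specific models evaluated in the studies cited here.

```python
import numpy as np

# Two-state reaction-diffusion model for FRAP/photoactivation: a free pool f
# diffuses and exchanges with an immobile bound pool c,
#   df/dt = D d2f/dx2 - kon*f + koff*c,      dc/dt = kon*f - koff*c
# Hypothetical parameters; the bleached region is a 2-um stripe in a 1D "nucleus".

D, kon, koff = 1.0, 0.5, 0.2            # um^2/s, 1/s, 1/s (illustrative only)
L, nx = 10.0, 100                       # nuclear cross-section (um) and grid points
dx = L / nx
dt = 0.2 * dx**2 / D                    # below the explicit-Euler stability limit
x = np.linspace(0.0, L, nx)

# pre-bleach steady state: f_eq + c_eq = 1 and kon*f_eq = koff*c_eq
f = np.full(nx, koff / (kon + koff))
c = np.full(nx, kon / (kon + koff))
bleach = (x > 4.0) & (x < 6.0)
f[bleach] = 0.0
c[bleach] = 0.0

def laplacian(u):
    # second spatial derivative with zero-flux boundaries at the nuclear envelope
    out = np.empty_like(u)
    out[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    out[0] = (u[1] - u[0]) / dx**2
    out[-1] = (u[-2] - u[-1]) / dx**2
    return out

recovery = []
for _ in range(int(60.0 / dt)):         # 60 s of post-bleach redistribution
    exchange = kon * f - koff * c
    f = f + dt * (D * laplacian(f) - exchange)
    c = c + dt * exchange
    recovery.append(float((f[bleach] + c[bleach]).mean()))

print("fluorescence in the bleached stripe after 60 s:", round(recovery[-1], 3))
```

In the reaction-dominant limit the recovery in the bleached region approaches 1 - C_eq*exp(-koff*t) with C_eq = kon/(kon + koff), which is the kind of closed-form solution referred to above; the numerical route becomes necessary once the simplifying assumptions listed above are dropped.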
This model assigns to each protein the following characteristic parameters: the binding and release constants Otherwise, the solution of the reaction-diffusion model has to be found via numerical modeling where changes in fluorescence are approximated consecutively for each time point for a given set of parameters. For both the analytical and the numerical procedure, the obtained solutions are optimized by testing combinations of different parameters iteratively until the simulated behavior fits the experimental data.The power of kinetic modeling consists in its ability to make quantitative predictions about the reaction of the biological system under study. For FRAP models, this ability has been questioned, because different approaches have yielded very different results for the same or similar proteins. Therefore, a cross-validation strategy that compares different models as well as different experimental methods to generate the primary data is highly recommended . Exposure to short laser pulses to induce DNA damage initiates a signal chain that may develop on different time scales for different types of lesions/binding sites. This amplification process is inherent to the biological response to DNA damage and leads to a fast and massive propagation of the damage signal, as best exemplified by the spreading of the phosphorylation of histone H2AX from the initial strand break to chromatin regions of the size of a few megabases within a significantly larger zone containing the damage . On the other hand, the small spot size minimizes the influence of nuclear geometry which may therefore be neglected without inducing significant errors (point 3).An overview of recently proposed approaches for the analysis of FRAP and photoactivation experiments specifying their most important features is given in Table Numerical solutions of the reaction-diffusion model provide a more accurate approach to describe protein dynamics. In their study, Beaudouin et al. present a method that includes both the real geometry of the nucleus and an inhomogeneous distribution of binding sites and is independent of the shape of the bleached/photoactivated region. The differential equations are solved numerically using a finite difference method. The approach was validated for the photoactivation of five different chromatin proteins (Beaudouin et al., None of the currently available models addresses the issue of the temporal non-equilibrium of the system that is characteristic for mobility measurements performed subsequently to DNA microdamage, either via FRAP or photoactivation. Addressing this issue is a promising avenue for future work and a prerequisite for the proper quantitative description of the response of nuclear proteins to DNA damage.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "There are 16 morphologically defined classes of rat retinal ganglion cells (RGCs). Most commonly, they are classified on the basis of several criteria including: soma size, dendritic field diameter, the dendritic branching pattern and the depth of stratification in the inner plexiform layer. Recently, it has also been shown that the intrinsic physiological properties of each rat RGC type vary enormously. 
Using multicompartment models of RGC types we investigated whether the location of the axon initial segment (AIS), the site of greatest sodium channel density and lowest voltage threshold, can be predicted by measurements of spike waveform made at the soma.The action potential waveform in many neurons consists of several components, which can be determined by examining the first and second derivatives of the membrane potential. In this study, we focus on this technique as an objective method to analyze the action potential waveform for different morphological RGC types. In addition, we analyze the features of the phase plot, which shows the rate of change of the membrane potential against the membrane potential itself. Phase plot analysis allows the measurement of subtle differences in the action potential waveform such as the initial segment-soma/dendritic break (ISSD), which corresponds to the early rising phase of the action potential. When the recording is made at the soma, the presence of the ISSD in the phase plot indicates that a low threshold region (i.e. the AIS) is further away from the soma.Rat RGCs were characterized electrophysiologically using standard whole cell patch clamp recording techniques. Data were acquired at 20 kHz using custom software developed in LabView . Spontaneous spikes and spikes evoked by just-threshold current were used for analysis. For each of the recordings, the amplitude and time of the trough between the peaks in the second-order derivatives were analyzed. After three dimensional confocal reconstruction of each recorded cell (Zeiss PASCAL) it was classified morphologically into one of the 16 predefined types. Multicompartment models of real retinal ganglion cells were constructed from 3D rendering confocal reconstructions and their physiology was simulated using the Hodgkin-Huxley formalism in the NEURON environment. Sodium channel density in the AIS and its distance from the soma were systematically varied and the effects on the phase plot analyzed.Simulations showed that the further the AIS was from the soma, the more pronounced the ISSD break, resulting in a larger break with a deeper trough between the two peaks in the phase plot. This result allows us to predict the location of the AIS based on recordings of the impulse waveform. In addition, we found that the density of sodium channels in the AIS affects spike propagation into the soma. We observed that decreasing sodium conductance in the AIS, required two spikes to occur in the AIS in order to evoke a somatic spike. This was also observed experimentally, in particular in C4 cells. Further analysis of individual RGC spike waveforms demonstrated that certain RGC types could be reliably identified using their spike waveforms."} +{"text": "For the case of the infinite-sites model, I derive analytical formulas for the expected number of polymorphic sites in sample of DNA sequences, and apply the developed simulation and analytical methods to explore the fit of the model to HIV genetic diversity based on serial samples of HIV DNA sequences from 9 HIV-infected individuals. The results particularly show that the estimates of the ratio of recombination rate over mutation rate can vary over time between very high and low values, which can be considered as a consequence of the impact of selection forces.This paper presents a novel population genetic model and a computationally and statistically tractable framework for analyzing within-host HIV diversity based on serial samples of HIV DNA sequences. 
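Returning to the spike-waveform analysis of the retinal ganglion cell study above, the following Python sketch shows one way to quantify the initial segment-somatodendritic (IS-SD) break from a somatic recording: the first and second derivatives of the membrane potential are computed, the two peaks of the second derivative during the upstroke are located, and the depth of the trough between them is reported together with the (V, dV/dt) pairs that form the phase plot. The sampling rate, window length and peak-finding strategy are assumptions chosen for illustration; this is not the analysis code used in the study.

```python
import numpy as np
from scipy.signal import find_peaks

def is_sd_break(v, fs=20000.0, window_ms=2.0):
    """Quantify the initial segment-somatodendritic (IS-SD) break of one spike
    from a somatic voltage trace v (mV) sampled at fs (Hz). Returns the trough
    between the two peaks of the second derivative on the upstroke, plus the
    (V, dV/dt) pairs that make up the phase plot."""
    dt = 1.0 / fs
    dv = np.gradient(v, dt)                  # dV/dt: y-axis of the phase plot
    d2v = np.gradient(dv, dt)                # d2V/dt2: separates IS and SD components
    peak_idx = int(np.argmax(v))             # restrict the analysis to the upstroke
    start = max(0, peak_idx - int(window_ms * 1e-3 * fs))
    seg = d2v[start:peak_idx + 1]
    peaks, _ = find_peaks(seg)
    if len(peaks) < 2:
        return None                          # no resolvable IS-SD break
    p1, p2 = peaks[np.argsort(seg[peaks])[-2:]]       # the two largest peaks
    lo, hi = int(min(p1, p2)), int(max(p1, p2))
    trough = lo + int(np.argmin(seg[lo:hi + 1]))
    return {"trough_time_ms": (start + trough) * dt * 1e3,
            "trough_depth": float(min(seg[lo], seg[hi]) - seg[trough]),
            "phase_plot": np.column_stack((v, dv))}
```

A deeper trough between the two peaks corresponds to a more pronounced break in the phase plot and, in the simulations described above, to a spike initiation site located further from the soma.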
This model considers within-host HIV evolution during the chronic phase of infection and assumes that the HIV population is homogeneous at the beginning, corresponding to the time of seroconversion, and evolves according to the Wright-Fisher reproduction model with recombination and variable mutation rate across nucleotide sites. In addition, the population size and generation time vary over time as piecewise constant functions of time. Under this model I approximate the genealogical and mutational processes for serial samples of DNA sequences by a continuous coalescent-recombination process and an inhomogeneous Poisson process, respectively. Based on these derivations, an efficient algorithm is described for generating polymorphisms in serial samples of DNA sequences under the model including various substitution models. Extensions of the algorithm are also described for other demographic scenarios that can be more suitable for analyzing the dynamics of genetic diversity of other pathogens Recombination has an important role in shaping the dynamics of within-host HIV genetic diversity, particularly making the virus capable of escaping the pressures of antiviral drugs and immune system Early studies Under these models the expected numbers of average pairwise differences in serial samples stay the same From the same data sets, I also observe linear relationships between the dynamics of the numbers of polymorphic sites and the numbers of average of pairwise differences in each individual's case . To makeTo illustrate non-linear relationship between the expected dynamics of the two statistics for the observed sample sizes under the Wright-Fisher model with constant population size, I compute the expected numbers of polymorphic sites under the finite-sites Jukes-Cantor model as well as under the infinite-sites model by using the formulas of Tajima While the standard coalescent as well as the ancestral recombination graph might not be directly applicable for analyzing the dynamics of HIV genetic diversity within a host, the concepts and features of these models had and have great impact on extending coalescent theory for other evolutionary settings. Both models were derived as continuous limits of discreet genealogical and mutational processes under the Wright-Fisher models with constant population size and with and without recombination by applying the time scaling concept that is measuring time proportional to a very large population size. This concept was also applied for other forward in time Wright-Fisher models with variable population size, selection, or migration but without recombination see e.g. to derivLater studies To overcome the limitations of the previous models and methods mentioned above, I first describe a forward in time population genetic model to represent HIV evolution in HIV-infected individuals in the chronic phase of infection. The population in the model is considered to be homogeneous at the beginning, representing the time of HIV seroconversion, and to evolve according to the Wright-Fisher reproduction model with recombination by allowing the population size and generation time to vary over time. 
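The forward-in-time model just described can be illustrated with a deliberately stripped-down Python sketch: a haploid Wright-Fisher population with a piecewise-constant size, a homogeneous founder population and a simple Jukes-Cantor-like mutation step. Recombination, variable mutation rates across sites, variable generation times and serial sampling, all of which are central to the actual framework, are omitted here, and every numerical value is a hypothetical placeholder.

```python
import numpy as np

rng = np.random.default_rng(42)

L = 200                                        # nucleotide sites
mu = 1e-4                                      # per-site, per-generation mutation probability
epochs = [(200, 500), (50, 100), (200, 800)]   # (generations, N): piecewise-constant sizes

# homogeneous founder population, as assumed at the time of seroconversion
pop = np.zeros((epochs[0][1], L), dtype=np.int8)      # 0..3 encode the four bases

for generations, N in epochs:
    for _ in range(generations):
        parents = rng.integers(0, pop.shape[0], size=N)    # Wright-Fisher resampling
        pop = pop[parents]
        # Jukes-Cantor-like step: a mutated site jumps to one of the three other bases
        hits = rng.random(pop.shape) < mu
        pop[hits] = ((pop[hits] + rng.integers(1, 4, size=int(hits.sum()))) % 4).astype(np.int8)

sample = pop[rng.choice(pop.shape[0], size=20, replace=False)]
polymorphic = int(((sample != sample[0]).any(axis=0)).sum())
print("polymorphic sites in a sample of 20 sequences:", polymorphic)
```

Replacing this naive forward simulation with the backward-in-time coalescent construction is what makes the framework computationally tractable, because only the lineages ancestral to the sampled sequences need to be followed.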
To make this model computationally and statistically tractable for analyzing serial samples of DNA sequences, I apply the time scaling approach at multiple time intervals to describe a continuous coalescent-recombination process for tracing the lineages of the samples back in time and superimposing mutational events on the lineages according to an inhomogeneous Poisson process. Based on these processes I describe computationally efficient algorithm for generating polymorphisms in serial samples of DNA sequences drawn randomly under this population genetic model. Further extensions of the algorithm are also described for population genetic models that can be more suitable for analyzing the dynamics of genetic diversity of other pathogen populations in vivo and in vitro.Within this framework I consider two substitution models: a finite-sites model with variable mutation rate across nucleotide sites and the infinite-sites model. For the infinite-sites case, I derive analytical formulas for the expected number of polymorphic sites in samples of DNA sequences. For this quantity, Tajima To model within-host HIV evolution, I take into account the following observations: (1) HIV population within HIV-infected individuals usually collapses at seroconversion after several weeks of infection and recovers quickly as a homogeneous population In this model DNA sequences are represented as a combination of I design the finite-sites model in a such way that it represents the heterogeneity of substitution rate across nucleotide sites and infer some of the parameters in this model based on serial samples of HIV-1 DNA sequences from envelop gene region The population model for Variation in a sample of DNA sequences under the above population model can be described by combining the genealogical history of the sequences with mutations on the lineages of the sequences. The genealogical history traces the ancestral lineages of the sequences back in time before time For each interval For this Markov chain the transition from the state The following algorithm uses the tracing procedure recursively to describe a bottom up process for generating genealogical history of serial samples and a top down process for adding mutation events on the genealogy. The algorithm can be used to generate variation in serial samples under the population model described above.Algorithm 1Set the values of Apply the above procedure for tracing the lineages of the Update the values of As the value of Add mutation events independently on different branches of the genealogical history for time interval Increase the value of For the case of a non-recombining locus also holds for the case of I apply the models and methods described in the previous section for exploring within-host HIV evolution based on serial samples of HIV DNA sequences from 9 HIV-1-infected individuals studied by First, I fit the population genetic model to data sets under the finite-sites model, in which the values of I implemented Algorithm 1 for this mutation model into a computer program (in C programming language) for generating the polymorphisms in serial samples and applying the Monte Carlo approach to estimate the expected values of summary statistics for the observed sample sizes. To fit the model to the data sets for each individual's case, I consider two summary statistics: the numbers of polymorphic sites and divergences in the serial samples. 
Divergence in a sample of sequences is defined as the average of the numbers of differences between the founder sequence and the sequences in the sample. The founder sequence is the sequence of the homogeneous population at the beginning; and for the observed samples, the founder sequence is inferred from the alignment of the sequences in the first sample taken and (3) are used for computing the expected values of the four statistics. In this case the estimated expected values of the four statistics do not show strong qualitative discrepancy with their observed values except in the case of individual Pt9 see and 4. TThe contrast between the mimicking powers of the two mutation models in the content of the four statistics is more obvious when I fit the population genetic model to the data sets under the infinite-sites model by matching the observed values of the numbers of polymorphic sites and average numbers of pairwise differences in the serial samples to their expected values, and I use the other two statistics as controls. The overall-fit scores in this case are also in I use the estimated models for the case of the finite-sites model (described in the previous section) to explore the signature of recombination on the dynamics of HIV genetic diversity. For this purpose I choose two statistics based on the linkage disequilibrium measure Using the computer program based on Algorithm 1 and Monte Carlo approach, I estimated the expected values of In the case of patient Pt1, I explore further and consider three hypotheses for The purpose of this study was to develop a computationally and statistically tractable framework, including recombination, for analyzing the dynamics of HIV genetic diversity in HIV-infected individuals. To derive this framework, I first designed a population genetic model that carries some of the features of within-host HIV evolution. Particularly, the model includes recombination, variability in population size and generation time, and heterogeneity of mutation rate across nucleotide sites. In addition, I considered the population size and generation time to vary over time as piecewise constant functions of time; these choices were made in order to derive the framework including recombination and without overwhelming the model with parameters that would be difficult to estimate.In spite of these choices, the model and framework can be extended for other evolutionary settings. Particularly, the model can be extended to include various distributions for the breakpoints along HIV genome, as well as for mutation rates across nucleotide sites. As another extension of the framework, I described Algorithm 2 that is applicable for serial samples of non-recombining sequences in a more general demographic scenario by allowing the population size to be a piecewise continuous function of time. Such a demographic scenario may be more suitable for exploring evolutionary dynamics of other pathogens at genomic level in vitro and in vivo. Particularly in vitro experiments in which a bacterial population goes through recurrent bottlenecks by growing or declining exponentially over time. However, in this setting the number of the parameters can increase and can be challenging to estimate them. For example, if the population size Another extension of the model is to replace the assumption of homogeneous population at time Note that the developed framework can be applied to generate HIV transmission chains and HIV epidemics at the genomic level. 
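A schematic Python sketch of the backward-in-time construction, in the spirit of Algorithm 1 but restricted to a single non-recombining locus, a constant scaled population size and the infinite-sites mutation model, is given below. Lineages sampled at different times are added to the genealogy as the process moves into the past, the total branch length is accumulated, and the number of polymorphic sites is then drawn as a Poisson variable on that length. The time units, the parameterisation of theta and all numerical values are illustrative assumptions; the author's actual algorithm additionally handles recombination, piecewise-constant demography and finite-sites mutation.

```python
import numpy as np

rng = np.random.default_rng(7)

def serial_segregating_sites(samples, theta):
    """Number of infinite-sites polymorphic sites for serially sampled sequences
    at a single non-recombining locus with constant scaled population size.

    samples : list of (sampling_time, n_sequences); time is measured backwards
              from the most recent sample in coalescent units.
    theta   : scaled mutation rate; mutations arise at rate theta/2 per lineage
              per unit of coalescent time (a common convention, assumed here)."""
    events = sorted(samples)               # process sampling times from recent to old
    t, k = events[0]
    pending = list(events[1:])
    branch_length = 0.0
    while k > 1 or pending:
        rate = k * (k - 1) / 2.0
        wait = rng.exponential(1.0 / rate) if rate > 0 else np.inf
        if pending and t + wait > pending[0][0]:
            # an older sampling time is reached before the next coalescence
            branch_length += k * (pending[0][0] - t)
            t, k = pending[0][0], k + pending[0][1]
            pending.pop(0)
        else:
            branch_length += k * wait
            t += wait
            k -= 1
    return rng.poisson(0.5 * theta * branch_length)

# two samples of 10 sequences each, taken half a coalescent time unit apart
S = [serial_segregating_sites([(0.0, 10), (0.5, 10)], theta=5.0) for _ in range(2000)]
print("mean number of polymorphic sites over 2000 replicates:", np.mean(S))
```

For a single sampling time this reduces, under the parameterisation assumed here, to the standard result that the expected number of polymorphic sites is theta multiplied by the harmonic number of the sample size minus one, which is the kind of closed-form expression derived for the infinite-sites case in this framework.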
To be able to accomplish such a task, it is important to have a better understanding of the space of the values of the vectorsThe application of the framework to the serial samples of HIV DNA sequences from nine HIV infected individuals allowed to explore the fit of the model to the data sets and the impact of the recombination on the dynamics of within-host HIV genetic diversity. Particularly, these results show large variability for inferring the e points . These rThe wide spectrum of the inferred values of Since selection forces have impact on shaping within-host HIV genetic diversity ntervals and 7 caAppendix S1The proof of Lemma 1. This section shows the derivation of the formula (2) by using the developed representation for polymorphisms in samples of DNA sequences under the piecewise constant population size model and the memoryless property of coalescence waiting times in the standard coalescent.(PDF)Click here for additional data file."} +{"text": "Here we review the studies to date that have employed novel optogenetic tools to improve our understanding of the pain pathway at the peripheral, spinal and supraspinal levels.The process of pain perception begins in the periphery by activation of nociceptors. From here nociceptive signals are conveyed via the dorsal horn of the spinal cord to multiple brain regions, where pain is perceived. Despite great progress in pain research in recent years, many questions remain regarding nociceptive circuitry and behavior, in both acute nociception and chronic pain states. Techniques that allow for selective activation of neuronal subpopulations Chronic pain represents a significant clinical problem affecting up to 20% of the general population in a mouse model of visceral pain (Crock et al., Another important region involved in the modulation of pain is the PFC. As with the amygdala, this region is particularly associated with the affective component of the pain experience (Tracey and Mantyh, Recently we have applied an optogenetic approach to explore novel brain regions involved in opiate analgesia. Opioids are important clinically, however their use is limited through the development of tolerance and addiction (Ling et al., The significance of this study lies in the ability to modulate the pain sensing TRPA1 neurons, without the need for genetic manipulations to confer this sensitivity. This approach may be useful for research and treatment of pain in the future. In particular, such an approach would allow selective optical control of subpopulations of peripheral nociceptors without the need for genetic manipulations.Optogenetic approaches require the expression of an exogenous light sensitive molecule in the target system. Although this has proved a very useful tool in basic neurobiological research, by definition this approach is of limited use clinically. An alternative approach is the use of \u201coptopharmacology\u201d, that is administration of compounds that confer light sensitivity onto a specific cell type (Kramer et al., Another approach has been the recent development of a photoactive MOR, the main receptor mediating the analgesic actions of morphine (Barish et al., The power of optogenetics lies in the ability to achieve regional and cell type specific neuronal activation. These methods provide an unprecedented opportunity to probe the complexities of the pain pathway at the peripheral, spinal and supraspinal levels. 
Among the most exciting developments in the field to date is the ability to produce pain-like behaviors in transgenic mice by optical stimulation of nociceptors in the skin (Daou et al., Somewhat surprisingly, to date no studies have taken advantage of these tools within the dorsal horn however this may reflect technical challenges in inserting optogenetic fibers into the spinal cord. Considering the importance of this component of the pain pathway, however, it is likely that optogenetics will also provide valuable new insights into this complex circuitry. Interneuron based optogenetic experiments have been performed within the ventral horn of the spinal cord, to investigate the contribution of particular subsets to locomotor activity (Dougherty et al., As described, a small number of studies have explored brain areas in pain using optogenetics, however many other areas of the pain matrix remain to be studied in this way. Both brain studies to date have relied either on electrophysiological recordings (Ji and Neugebauer, The studies described here are only the beginning of what we expect to be fruitful and informative exploration of pain circuits using optogenetic tools. It is likely that subsequent studies will move beyond these initial proof of concept studies, and use optogenetic tools to tackle unanswered questions regarding pain circuitry.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "The persistent spread of Rhodesian human African trypanosomiasis (HAT) in Uganda in recent years has increased concerns of a potential overlap with the Gambian form of the disease. Recent research has aimed to increase the evidence base for targeting control measures by focusing on the environmental and climatic factors that control the spatial distribution of the disease.One recent study used simple logistic regression methods to explore the relationship between prevalence of Rhodesian HAT and several social, environmental and climatic variables in two of the most recently affected districts of Uganda, and suggested the disease had spread into the study area due to the movement of infected, untreated livestock. Here we extend this study to account for spatial autocorrelation, incorporate uncertainty in input data and model parameters and undertake predictive mapping for risk of high HAT prevalence in future.Using a spatial analysis in which a generalised linear geostatistical model is used in a Bayesian framework to account explicitly for spatial autocorrelation and incorporate uncertainty in input data and model parameters we are able to demonstrate a more rigorous analytical approach, potentially resulting in more accurate parameter and significance estimates and increased predictive accuracy, thereby allowing an assessment of the validity of the livestock movement hypothesis given more robust parameter estimation and appropriate assessment of covariate effects.Analysis strongly supports the theory that Rhodesian HAT was imported to the study area via the movement of untreated, infected livestock from endemic areas. 
The confounding effect of health care accessibility on the spatial distribution of Rhodesian HAT and the linkages between the disease's distribution and minimum land surface temperature have also been confirmed via the application of these methods.Predictive mapping indicates an increased risk of high HAT prevalence in the future in areas surrounding livestock markets, demonstrating the importance of livestock trading for continuing disease spread. Adherence to government policy to treat livestock at the point of sale is essential to prevent the spread of sleeping sickness in Uganda. Trypanosoma brucei rhodesiense and Trypanosoma brucei gambiense, cause the fatal disease human African trypanosomiasis (HAT); the clinical progression, as well as the preferred diagnostic and treatment methods differ between the two types. Currently, the two do not overlap, although recent spread of Rhodesian HAT in Uganda has raised concerns over a potential future overlap. A recent study using geo-referenced HAT case records suggested that the most recent spread of Rhodesian HAT may have been due to movements of infected, untreated livestock (the main reservoir of the parasite). Here, the initial analysis has been extended by explicitly accounting for spatial locations and their proximity to one another, providing improved accuracy. The results provide strengthened evidence of the significance of livestock movements for the continued spread of Rhodesian HAT within Uganda, despite the introduction of cattle treatment regulations which were implemented in an effort to curb the disease's spread. The application of predictive mapping indicates an increased risk of HAT in areas surrounding livestock markets, demonstrating the importance of livestock trading for continuing disease spread. This robust evidence can be used for the targeting of disease control efforts within Uganda to prevent further spread of Rhodesian HAT.The tsetse transmitted parasites, Trypanosoma brucei rhodesiense parasite, and the Gambian form of the disease, caused by Trypanosoma brucei gambiense are not believed to overlap, and Uganda is the only country thought to support transmission of both diseases within its borders Glossina spp), and are fatal if untreated, although the speed of progression to death varies between the two (within approximately six months for Rhodesian HAT compared with years for Gambian HAT). A reservoir of infection is present for T. b. rhodesiense (predominantly livestock in Uganda) in contrast with T. b. gambiense for which no known reservoir exists other than humans. As a result, the most effective control options for the two forms of the disease differ, as do diagnostic procedures and treatment regimes. Currently, treatment is implemented based on knowledge of the areas affected by each type of HAT; Gambian HAT occurs in the north west of Uganda and Rhodesian HAT in the south east. Medical staff in endemic areas will presume infection is caused by the subtype known to exist in that area and implement the appropriate treatment regimen. A definitive diagnostic differentiation between the two parasite subtypes is difficult and requires expensive, complex methods which are not currently available in affected areas. 
Consequently, spatial concurrence of the two forms of HAT would compromise diagnostic and treatment protocols, resulting in a higher proportion of treatment failures and placing increased pressure on an already stretched health system The geographical ranges of Rhodesian human African trypanosomiasis , caused by the T. b. rhodesiense endemic areas at a local livestock market Recent research has focused attention on the environmental and climatic variables involved in the spatial distribution and spread of Rhodesian HAT Glossina species present across the sub-Saharan fly belt; within the areas of Uganda affected by Rhodesian HAT, the predominant species of tsetse vector is Glossina fuscipes fuscipes, which is restricted to riverine vegetation habitats It is well documented that the focal distribution of human HAT is determined largely by the ecological and environmental requirements of the tsetse fly vectors The dependence of HAT transmission on the availability of competent vector populations leads to indirect associations between the spatial distribution of HAT and a variety of environmental and climatic factors. Within affected regions, areas with high HAT incidence tend to occur where there is a lot of contact between humans, tsetse flies and animal reservoirs (for Rhodesian HAT), for example, watering points et alBatchelor T. b. rhodesiense to the study area was assessed, given a more robust parameter estimation and appropriate assessment of covariate effects. The application of such methods to epidemiological research for the estimation of covariate effects and predictive mapping has been demonstrated in several recent studies including Diggle et alet alet alet alThe use of non-spatial methods for the analysis of data with a spatial structure can lead to biased regression parameters, underestimated standard errors, falsely narrow confidence intervals and, thus, an overestimation of the significance of covariates No patient names were recorded to maintain patient confidentiality and to adhere to the International Ethical Guidelines for Biomedical Research Involving Human Subjects. The use of these data was approved by the University of Edinburgh Research Ethics Committee.2 and a population of approximately 261,000 T. b. rhodesiense in the reservoir and, thus, altered the epidemiology of HAT in this area in the subsequent year; hence we have excluded from the analysis any cases diagnosed after 2006.The area of study included Kaberamaido and Dokolo districts (in the Eastern and Northern regions respectively) in Uganda, which have been affected by Rhodesian HAT since 2004. The study districts border the northern shore of Lake Kyoga with a combined area of approximately 2,740 kmet alRecords of all patients resident within Kaberamaido and Dokolo districts that received a positive diagnosis of HAT between January 2004 and December 2006 were obtained from Lwala hospital (Kaberamaido district) and Serere hospital . All villages within Kaberamaido and Dokolo districts were geo-referenced using a handheld global positioning system , with direction from local guides. The HAT records were linked to the geo-referenced village dataset using village of residence and visualised using ArcMap 9.1 . To maintain the anonymity of subjects and patient confidentiality and to adhere to the International Ethical Guidelines for Biomedical Research Involving Human Subjects, no patient names were recorded within the database or as part of the data collection process. 
Further details regarding the provenance of these data have previously been published by Batchelor Non-spatial logistic regression methods were used to identify a set of environmental, climatic and social variables that were significantly correlated with HAT prevalence et alThe locations of all livestock markets and health centres within the study area were recorded during fieldwork using a handheld GPS. Maps detailing areas of woodland within the study area were obtained from the National Biomass Survey, which was conducted by the Uganda Forest Department between 1995 and 2002 The spatial variation in HAT prevalence within Kaberamaido and Dokolo districts was modelled using model-based geostatistics S(x) as follows:The total number of HAT cases The stochastic spatial component is modelled as a zero mean Gaussian process with variance geoRglmThe model parameters were estimated using a Bayesian framework with a Markov chain Monte Carlo (MCMC) algorithm in the package Priors (in Bayesian inference a prior is a probability distribution expressing uncertainty about a parameter before taking into account data observations) were selected for each parameter to represent prior knowledge of their distributions. Non-informative, uniform priors were selected for the regression parameters, Due to convergence and mixing problems when including all of the covariates listed in th iteration thereafter stored to assess the significance of each explanatory variable. Convergence and mixing of the MCMC algorithms was judged based on traceplots and autocorrelation plots for each model parameter to ensure that convergence had been reached, the chains had mixed adequately and autocorrelation amongst the samples was minimal. The mean values from the posterior distribution and their 95% credible intervals (CI)s were calculated and exponentiated to provide odds ratios (OR)s and their respective uncertainty measures. Only those covariates that were significantly associated with HAT prevalence were selected for the multivariate spatial regression model.The univariate spatial models were run for 2,000,000 iterations, with the first 1,000,000 discarded and every 100An initial run of the multivariate model was carried out following the optimisation of th iteration thereafter stored, resulting in a total of 5,000 samples from the posterior distributions. The regression parameters and 95% CIs were obtained from the model and exponentiated as above.The fixed A 2 km spatial resolution prediction grid was created for the study area, containing covariate values at each prediction location (grid cell). Samples from the predictive distribution for each prediction location were generated using the MCMC algorithm given the explanatory variables at each grid cell. The posterior medians and lower and upper 95% CI limits from the predictive distributions were extracted to give predicted prevalence and uncertainty estimates at all locations. The predictions were then exported to ArcMap for illustrative purposes.et alA scatter plot of predicted prevalence versus observed prevalence was created to illustrate the relationship between the model predictions and observations, and the correlation between fitted and observed prevalence was calculated. In addition, the mean error, median error and absolute mean error were calculated based on the difference between observed and predicted prevalence at each location, to give an indication of the prediction bias (mean and median error) and accuracy (absolute mean error). 
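The generalised linear geostatistical model described above treats each village's HAT case count as binomial, with the log-odds of prevalence given by a linear combination of covariates plus a zero-mean, spatially correlated Gaussian process S(x), and was fitted by MCMC in the geoRglm package. The snippet below is only a minimal numpy illustration of that model structure, simulating from an assumed model rather than fitting one: the parameter values are invented, and an exponential spatial correlation function is used here simply as a common choice (the text does not state which correlation function the authors adopted).

```python
import numpy as np

rng = np.random.default_rng(0)

n_villages = 100
coords = rng.uniform(0, 50, size=(n_villages, 2))   # village locations (km)
z = rng.uniform(0, 20, size=n_villages)             # e.g. km to nearest livestock market
pop = rng.integers(100, 1000, size=n_villages)      # village populations

# Zero-mean Gaussian process S(x) with covariance sigma^2 * exp(-d / phi)
sigma2, phi = 0.5, 10.0
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
S = rng.multivariate_normal(np.zeros(n_villages), sigma2 * np.exp(-d / phi))

# Linear predictor on the logit scale: intercept + covariate effect + spatial term
beta0, beta1 = -6.0, -0.08                          # illustrative values only
eta = beta0 + beta1 * z + S
p = 1.0 / (1.0 + np.exp(-eta))                      # village-level HAT prevalence
cases = rng.binomial(pop, p)                        # simulated case counts

# In the Bayesian fit described above, posterior samples of beta1 would be
# exponentiated to give an odds ratio (per extra km from a market) and its 95%
# credible interval; here we simply show the odds ratio implied by the chosen value.
print("odds ratio implied by beta1:", round(float(np.exp(beta1)), 3))
```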
The Pearson residuals were calculated There were a total of 692 villages within the study area (Kaberamaido and Dokolo districts); all but two were geo-referenced . Of the remaining 690 villages, 18 that had recently separated into two were merged for the purpose of the analysis. Within the study period 354 cases of HAT were reported from these two districts, which equates to an overall period prevalence (2004\u20132006) of 0.14 per 100 population, although this value is very likely to be an underestimate due to complex issues surrounding care seeking behaviour for HAT and the under utilisation of health services From the univariate spatial regression model, five variables which were significantly correlated with HAT prevalence using deterministic, non-spatial logistic regression did not retain their statistical significance , and so was omitted from the final multivariate model. The remaining three covariates retained significance at the 95% level see . Both in2\u03c3 there was a slight positive skew. Traceplots and autocorrelation plots for model parameters were examined to assess the mixing and convergence of the MCMC algorithms and each appeared to have reached convergence during the burn-in period and to be mixing well. Autocorrelation amongst samples was minimal. The posterior distribution curves for the final model parameters analysis as published by Batchelor et alFollowing on from the non-spatial logistic regression methods discussed in Batchelor T. b. rhodesiense to Kaberamaido and Dokolo districts. Previous research has established that the introduction of Rhodesian HAT transmission within Soroti district (which neighbours the study area) was due to movements of untreated cattle from endemic areas through a local livestock market et alT. b. rhodesiense is likely to have been introduced to Dokolo and Kaberamaido via the continued movement of untreated livestock, despite the introduction of a law requiring the treatment of livestock from endemic areas, prior to sale Five covariates did not retain statistical significance during the univariate spatial regression and one did not retain significance during the multivariate spatial regression, indicating that the non-spatial model may have inflated the significance of covariates and produced inaccurate parameter estimates. The final spatial model included three covariate effects: distance to the closest livestock market, distance to the closest health centre and minimum LST. These results, using a more robust assessment of covariate effects, provide considerable strength to the hypothesis that the movement of infected, untreated livestock from endemic areas resulted in the introduction of Within the study area, it is problematic to separate the effects of differential utilisation of the HAT treatment centre, where those living closer are more likely to travel there for diagnosis and treatment than those living further away, from the purposeful siting of the treatment centre within the area most affected by HAT. Following the detection of a number of cases in Kaberamaido district in 2004, appropriate training and equipment were provided to one hospital within the area. The facility was selected based on a number of criteria, including the location within the affected area. Due to this difficulty, the distance to the closest health centre of any kind was used rather than distance to the HAT treatment centre. 
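Two of the quantities just mentioned are easy to make concrete. The overall period prevalence follows directly from the reported counts, and Pearson residuals for a binomial model have a simple closed form; in the sketch below, only the total case count and population are taken from the text, while the village-level values are invented for illustration.

```python
import numpy as np

# Reported figures: 354 cases (2004-2006) in a combined population of ~261,000
cases_total, population = 354, 261_000
print(round(100 * cases_total / population, 2))   # ~0.14 cases per 100 population

# Pearson residuals for a binomial model: (y - n*p_hat) / sqrt(n*p_hat*(1 - p_hat)).
# y, n and p_hat below are invented village-level values, not the study data.
y = np.array([3, 0, 7, 1])                        # observed cases per village
n = np.array([420, 610, 380, 900])                # village populations
p_hat = np.array([0.006, 0.001, 0.015, 0.002])    # fitted prevalence
pearson = (y - n * p_hat) / np.sqrt(n * p_hat * (1 - p_hat))
print(np.round(pearson, 2))
```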
The significance of this variable in the spatial regression model highlights the importance of accessibility to health services as has been shown previously Minimum LST was observed to be a risk factor for HAT, with higher prevalence in areas with higher minimum LST. Minimum LST is calculated using measurements of radiance modified by the atmosphere in several spectral wavebands and varies depending on climate and also landcover properties et alWhen the performance of the spatial regression model was compared with that of the non-spatial model with only sporadic transmission to humans The research described utilised a variety of data sources providing information relevant to the distribution of the tsetse fly vector and, thus, also the distribution of Rhodesian HAT. However, accurate tsetse distribution or density data were not available for the study area, although the explicit inclusion of information on the spatial distribution of tsetse may have resulted in improved predictive power and provided further information on the determinants influencing the spatial heterogeneity in HAT prevalence within the main focus of disease. Additional factors that may play an important role in the observed spatial heterogeneity of HAT within Uganda include demographic factors, migration and human movement and behaviour patterns, due to their influence on the frequency of interaction between humans, tsetse and livestock. Although human migration has the potential to introduce The current research has demonstrated the application of Bayesian geostatistical modelling to the spatial distribution of HAT within a small area of Uganda. The more robust results provide strengthened evidence of the role of livestock trade in the continued spread of Rhodesian HAT within Uganda and the utility of this methodology for the prediction of HAT prevalence based on external covariates has also been demonstrated. The dataset used in this situation covered a relatively small area (two districts) with as complete a dataset as possible . The predictive power of this model over larger areas is constrained due to the limited area from which the observed data came. To allow the full exploitation of these methods, future work will focus on a larger study area using a sample of villages. This will allow an investigation of HAT prevalence in relation to wider covariate ranges and will allow extrapolation over larger areas. The Bayesian implementation of model-based geostatistics as described here is computationally expensive and can be time consuming, but the application of such methods to epidemiological research is being assisted by a growing base of knowledge and expertise, along with the creation of more efficient algorithms et alThe research presented here illustrates the importance of spatial autocorrelation in epidemiological variables; the use of non-spatial logistic regression analysis resulted in a model with a large number of covariates, complicating the interpretation of their effects. The use of a generalised linear geostatistical modelling framework, which models the residual autocorrelation after accounting for covariate effects, gave more precise and less biased parameter and significance estimates, with only three covariates retaining significance in the final model. The Bayesian implementation of the method allowed the incorporation of uncertainty in each of the model parameters from the posterior distributions and from the definition of a random variable. 
By carrying out the spatial regression analysis, the quantified relationships between HAT prevalence and significant covariates can be more confidently described and interpreted. The predictive accuracy was also increased by using the spatial regression when compared with the non-spatial logistic regression analysis. These results strengthen the evidence in support of the hypothesis generated by the analysis discussed in Batchelor et al. Supporting information: Figure S1, posterior distributions for model parameters; Figure S2, traceplots of MCMC output for each parameter; Figure S3, autocorrelation plots of MCMC output for each parameter."} +{"text": "There was an error in the first sentence of the Funding Statement. The first sentence of the Funding Statement should read: \"The funders of Sividion Diagnostics, Cologne, Germany conducted the Endopredict assay on all 34 tissue samples in a blinded way upon request of the authors.\""} +{"text": "There is a pressing need for further clinical evidence to better inform both patients and clinicians when recommending the optimal type and timing of breast reconstruction. The QUEST trials are unique: they are the first surgical trials comparing different types (A) and timing (B) of Latissimus dorsi (LD) reconstruction with a primary outcome of quality of life (QoL). Surgical trials are challenging, therefore necessitating a feasibility study to assess the acceptability of randomisation from the perspectives of both patients and healthcare professionals. It was decided to develop a DVD to complement the patient information sheet, in order to help patients understand the concepts of the clinical trials, surgical techniques, randomisation and clinical equipoise. All the women filmed in the DVD had a prior breast reconstruction and were invited to participate in a Q&A session with the Chief Investigator (CI) of the QUEST trials. The CI and two breast care nurses were also interviewed about the surgical techniques, complications and side-effects, and the time taken to return to normal everyday activities after a breast reconstruction. All women will be given a short questionnaire to complete, assessing their level of understanding of randomisation and clinical equivalence, irrespective of their entry into the QUEST trial. Filming took place over 1 day in Bristol and editing over a further 2 months. Drawings were produced to pictorially explain clinical equivalence, randomisation and the different types of LD reconstruction. The DVDs and the patient information sheets will be given to all women who are eligible for and interested in the QUEST trials.
We will be evaluating the impact of this DVD on patient recruitment by both quantitative (questionnaires) and qualitative (interviews) methodology.It proved feasible to develop a patient targeted DVD based on the experience of patients and healthcare professionals to enable potential QUEST participants to make fully informed decisions."} +{"text": "Accurate temporal estimations are essential in order to face the surrounding variety of everyday situations The evidence of a close relationship, in childhood populations, between temporal accuracy and the performance in tasks involving WM, attention and impulsivity control; (ii) The evidence of age related functional differences comparing the activity of the prefrontal cortex during the execution of timing as well as WM, attentive and impulsivity control tasks.The implications behind this hypothesis are intriguing because they may help to clarify, through the study of cognitive development models, the relationship between the development of the EF and the progression of the level of sophistication of time keeping skills. Moreover, the study of the time keeping functions in childhood populations could represent a potential element of evaluation to qualitatively determine and/or monitor the EF development during the critical phases of brain growth. Finally, one advantage in charting the developmental trajectory of time processing and EF at certain critical moments of development is that this can help to differentiate between experience-dependent versus inborn aspects of time and EF.cognitively controlled\u201d and \u201cautomatic\u201d timing processes mechanisms involved in the formation of three dimensions of EF discussed in this article. However, cognitively controlled timing skills cannot be reduced to these three EF, considering that the representation of time is built also with the active involvement of other processes (e.g., those implied in the representation of space and quantity, see Walsh, Future works devoted to exploring the developmental hypothesis discussed in this paper may wish to combine behavioral measures and brain methods in a longitudinal perspective, which may be recognized as important in addressing the link between cognitive and neural development. This approach would help to clarify whether and how these three domains of EF and cognitively controlled timing skills develop in parallel."} +{"text": "We have reported the risk of chest drain insertion inferior to the diaphragm when using current international guidelines . AnotherWe used the above guidelines to place markers (representing chest drains) in the thoracic wall of 16 cadavers bilaterally (32 sides), 1 cm anterior to the midaxillary line. Subsequent dissection identified the course and termination of the long thoracic nerve, the site of lateral cutaneous branches of intercostal nerves, and their relation to the markers.Grays' Anatomy (40th edition) it terminated before the inferior border of serratus anterior. Most commonly it was found to end by branching in the fourth (right) or fifth (left) intercostal space (range third to sixth). Lateral cutaneous branches of intercostal nerves were found in the fifth intercostal space in 25 of 32 cases. Contrary to the description in Last's Anatomy (12th edition) they always passed anterior to the midaxillary line (and marker).The long thoracic nerve was found in the fifth intercostal space in 16 of 32 cases, always in or posterior to the midaxillary line. 
Contrary to the description in Placement 1 cm anterior to the midaxillary line minimises risk to the long thoracic nerve and lateral cutaneous branches of intercostal nerves. We therefore conclude that not all areas of the British Thoracic Society safe triangle are indeed safe, and anteroposterior placement should follow the European Trauma Course and ATLS guidelines: just anterior to the midaxillary line ."} +{"text": "The safety of paediatric medications is paramount and contraindications provide clear pragmatic advice. Further advice may be accessed through Summaries of Product Characteristics (SPCs) and relevant national guidelines. The SPC can be considered the ultimate independent guideline and is regularly updated. In 2008, the authors undertook a systematic review of the SPC contraindications of medications licensed in the United Kingdom (UK) for the treatment of Attention Deficit Hyperactivity Disorder (ADHD). At that time, there were fewer contraindications reported in the SPC for atomoxetine than methylphenidate and the specific contraindications varied considerably amongst methylphenidate formulations. In 2009, the European Medicines Agency (EMA) mandated harmonisation of methylphenidate SPCs. Between September and November 2011, there were three changes to the atomoxetine SPC that resulted in revised prescribing information. In addition, Clinical Guidance has also been produced by the National Institute for Health and Clinical Excellence (NICE) (2008), the Scottish Intercollegiate Guidelines Network (SIGN) (2009) and the British National Formulary for Children (BNFC).An updated systematic review of the Contraindications sections of the SPCs of all medications currently licensed for treatment of ADHD in the UK was undertaken and independent statements regarding contraindications and relevant warnings and precautions were then compared with UK national guidance with the aim of assessing any disparity and potential areas of confusion for prescribers.As of November 2011, there were seven medications available in the UK for the treatment of ADHD. There are 15 contraindications for most formulations of methylphenidate, 14 for dexamfetamine and 5 for atomoxetine. Significant differences exist between the SPCs and national guidance part due to the ongoing reactive process of amending the former as new information becomes known. In addition, recommendations are made outside UK SPC licensed indications and a significant contraindication for methylphenidate is missing from both the NICE and SIGN guidelines. Particular disparity exists relating to monitoring for suicidal and psychiatric side effects. The BNFC has not yet been updated in line with the European Union (EU) Directive on methylphenidate; it does not include any contraindications for atomoxetine but describes contraindications for methylphenidate that are no longer in the SPC.Clinicians seeking prescribing advice from critical independent sources of data, such as SPCs and national guidelines, may be confused by the disparity that exists. There are major differences between guidelines and SPCs and neither should be referred to in isolation. The SPC represents the most relevant source of safety data to aid prescribing of medications for ADHD as they present the most current safety data in line with increased exposure. National guidelines may need more regular updates. 
Attention Deficit Hyperactivity Disorder is a commonly diagnosed disorder affecting around 3-9% of school aged children and young people in the UK; the worldwide pooled prevalence rate of ADHD in the same population is estimated at 5.29% ,2 The nuAt the time of writing, there were eight medications licensed in the UK to treat ADHD. These medications were granted a marketing authorisation (license) by the UK regulatory agency, the Medicines and Healthcare products Regulatory Agency (MHRA) (formerly the Medicines Control Agency) following review of required efficacy and safety data. The MHRA is a government agency established in 2003 and is responsible for ensuring that medicines and medical devices are effective and are acceptably safe. Safety information is monitored from a variety of sources and, if required, risk benefit assessments are conducted. The MHRA initiated a risk: benefit assessment for atomoxetine in 2006.A draft SPC is submitted within the application for a product license and is then finalised in conjunction with the Marketing Authorisation Holder (MAH) and the regulatory agency with the purpose of enabling safe and appropriate prescribing. The totality of the data reviewed by the regulatory agency and which subsequently informs the SPC far exceeds that which is published in the peer-reviewed scientific literature. In summary, the SPC is the agreed statement of known facts about a given pharmaceutical compound at a particular point in time and it is critical that it is reviewed and amended regularly as new information emerges.Guidelines on the preparation and maintenance of SPCs are laid out by the European Commission (EC) and contraindications are defined as 'situations where the medicinal product must not be given for safety reasons' . StatemeThe conduct of pharmacovigilance for medicines for paediatric use requires special attention and reporting by multiple stakeholders of potential adverse events is of increasing importance . One sucBecause monitoring can and does lead to important safety changes the SPC can be regarded as the current and updated 'ultimate guideline' for safe and effective use of that compound. Other worldwide regulatory agencies have similar views. The US Food and Drug Administration (FDA) which is responsible for protecting the public health by assuring the safety, efficacy and security of drugs in the United States considers the communication of risks and benefits through its product labelling as \"the cornerstone of risk management efforts for prescription drugs\" . The SPCAlthough the entire SPC is important, the sections that relate to Contraindications (Section 4.3) and Special Warnings and Precautions for Use (Section 4.4) remain the most critical with respect to safety. Healthcare professionals, however, are not always fully aware of the content of SPCs, how it has been derived or how to access them.In April 2008, the authors presented a systematic review of the Contraindications sections of the eight medicines authorised in the UK at that time for the treatment of ADHD in children ) and a methodological study into the measurement of suicidality in paediatric clinical trials ) .Suicidality is a key area of disparity between national guidance and SPCs. The SPC for atomoxetine specifically highlights the need for monitoring for the appearance or worsening of suicide-related behaviour and states that suicidal behaviours have occurred uncommonly in clinical trials. Suicide-related events are also listed as an uncommon Undesirable Effect on the SPC. 
The SPCs for methylphenidate list suicidal tendencies as a contraindication for usage, contain a warning relating to the emergence of suicidal behaviours and also list suicidal ideation as an uncommon Undesirable Effect. They also mandate monitoring for all psychiatric adverse events and this will include suicidal issues of all types. The dexamfetamine SPC states that any family history of suicide should be investigated prior to initiation of treatment as part of the screening for risk of bipolar disorder.In contrast to the SPC content, the NICE guidelines warn clinicians of \"...suicidal problems and self harming behaviour with atomoxetine\" and make no mention of the risks associated with other treatments. The SIGN Guidelines repeat this disparity by stating the need to closely monitor patients on atomoxetine in particular for agitation, irritability, suicidal thinking and self-harming behaviour.The atomoxetine suicidality data are derived from a retrospective analysis of atomoxetine usage in 14 paediatric trials which reported greater suicidal ideation with atomoxetine when compared with placebo (p = 0.016). In a cohort of 1,357 atomoxetine-treated subjects, there were five cases of suicidal ideation reported, no completed suicides and one suicidal attempt . The samSimilar inconsistencies arise with other sections of the SPCs. The NICE guidelines suggest using either atomoxetine or methylphenidate with co-morbid anxiety, but specifically recommend the need for observation for agitation or irritability only with atomoxetine despite a strong warning on the methylphenidate SPC relating to the need to clinically evaluate patients for agitation and irritability prior to the use of methylphenidate. The SPC also advises that patients should be regularly monitored for the emergence or worsening of these symptoms during treatment, at every adjustment of dose and then at least every 6 month or every visit. The atomoxetine SPC states that whilst treatment emergent agitation can be caused by atomoxetine at usual doses there are no specific monitoring recommendations around agitation and irritability. Methylphenidate is associated with the worsening of pre-existing anxiety, agitation or tension. Clinical evaluation for anxiety, agitation or tension should precede use of methylphenidate and patients should be regularly monitored for the emergence or worsening of these symptoms during treatment, at every adjustment of dose and then at least every 6 months or every visit. Recent changes to the atomoxetine SPC highlight that anxiety was not worsened in clinical trials but that patients being treated with atomoxetine should be monitored for the appearance or worsening of anxiety. The wording around the recommendations in the guidelines does not fully reflect the SPC content of either product.A similar disparity exists for usage of either agent when tics or Tourette's syndrome are present. Both NICE and SIGN recommend use of either methylphenidate or atomoxetine although the SPCs have differences. The atomoxetine SPC reports that there was no worsening of tics or Tourette's in a placebo-controlled trial, very rare post-marketing reports of tics have been received and that patients should be monitored for the appearance or worsening of tics. 
The methylphenidate SPC states that methylphenidate is associated with the onset of tics, that both tics and Tourette's may worsen with methylphenidate treatment and that clinical evaluation and examination should precede methylphenidate treatment.Other areas of debate include adverse events attributed to a drug. For example, in both National Guidelines, it is recommended that sexual dysfunction is monitored in atomoxetine patients (children and adults) but this requirement is not reflected in the SPC. The SPC contains no clinical trial data on sexual dysfunction specifically in children, only post-marketing surveillance data which includes children, adolescents and adults.\u00ae) since June 2011 [The disparity between these independent data sources may be complex for clinicians to interpret particularly when the NICE guidelines recommend use of methylphenidate in adults which has only been an approved indication for one form of methylphenidate , dysthymic disorder (13.5%) and Major Depressive Disorder (3%) when usiAn important consideration is the balance of regulatory statements as contained in the SPC and the expertise of those routinely prescribing medications. Clinicians have had significant experience of using methylphenidate for many years in many children and as such have considerable expertise in weighing up potential tolerability and safety risks with the benefits of medication to individual patients. Relevant clinical publications, clinical expertise in using the medications and achieving positive results in cases that are contraindicated according to the SPC are highly influential in determining future drug use. There is experience of contraindicated medications being prescribed to patients with positive outcomes despite the potential risk of rare adverse events. The advantage of the regulatory statements is that they are based on cumulative case reports and as such reveal the rarer adverse events that would not otherwise be picked up in routine practice. As a consequence, appropriate levels of monitoring and precaution can be recommended for different products. These types of issue are evidenced by the level of debate in the scientific literature regarding the potential for methylphenidate to cause or worsen tics ,10,20.Clearly data are emerging rapidly in ADHD with the advent of new treatments and epidemiological research but national guidelines and SPCs cannot be updated as frequently as data emerge. For the period from January 2009 to April 2011, the search term \"atomoxetine\" in Pub Med generates 245 citations with at least 94 citations likely to contain clinical data, meta-analyses or reviews. The future safety of paediatric treatments are paramount and are likely to be advanced by the FDA and EMA regulations and the ability of large databases to address outcome measures of a more pragmatic kind. Some safety aspects may even be improved by drug treatment. Recent data on atomoxetine usage in a cohort of 13-16 year old children improving unhealthy dietary behaviours and physical activity as well as reducing behaviours contributing to unintentional injuries .To be able to make prescribing decisions based on sound evidence, clinicians need to be aware of, and be familiar with, the different sources of information available to them. 
The SPC, as an independent document reflecting current knowledge, may be the best way of achieving this.Clinicians seeking guidance regarding the use of medications for ADHD in UK will find significant disparity between relevant national guidance and SPCs; these differences extend to licensed indications, contraindications, warnings and precautions and monitoring schedules. The contraindications sections within the SPC provide clear categorical statements for which the relevant medications should not be prescribed.In view of the approval of the content of the SPC by the regulatory agency and the ongoing changes that take place to reflect current safety findings, many of which have been discussed in this manuscript, clinicians may be advised to consider the SPC as the ultimate guideline. The very recent decision by NICE not to update the current ADHD Guideline until 2014 may further increase the relevance of the respective SPCs.ADDUCE: ADHD Drugs Use Chronic Effects Programme; ADHD: Attention Deficit Hyperactivity Disorder; BNFC: British National Formulary for Children; CHMP: Committee for Medicinal Products for Human use; DSM-IV: Diagnostic and Statistical Manual of Mental Disorders Volume 4; EC: European Commission; EMA: European Medicines Agency; eMC: electronic Medicines Compendium; EU: European Union; FDA: Food and Drugs Administration; HKD: Hyperkinetic Disorder; MAH: Marketing Authorisation Holder; MHRA: Medicines and Healthcare products Regulatory Agency; NICE: National Institute for Health and Clinical Excellence; PSUR: Periodic Safety Update Report; SIGN: Scottish Intercollegiate Guidelines Network; SPC: Summary of Product Characteristics; STOP: Suicidality Treatment Occurring in Paediatrics; UK: United Kingdom.Nicola Savill and Chris Bushe are employees and shareholders of Eli Lilly and Company Ltd. Eli Lilly is the marketing authorisation holder and manufacturer of atomoxetine in the UK and has financed this manuscript.The paper has been written by NS and CB. The tables were compiled by NS and checked by CB. Both authors read and approved the final version of the manuscript."} +{"text": "We present a 25-year-old male patient with a diagnosis of multiple enchondromatosis, who developed chondrosarcoma on the proximal humerus of the right upper limb. The patient had the pre-existing lesions of Ollier\u2019s disease discovered during his childhood. The patient underwent wide resection of the sarcoma with a prosthetic replacement of the proximal humerus. So far we have followed up the patient for 8 years with no evidence of local recurrence and/or metastasis. The therapeutic results have been satisfied with a good functional recovery of the treated limb, enabling the patient to return to the pre-disease daily living and occupational activities. The reconstructive procedures represent an effective surgical strategy for limb salvage in the treatment of large segmental defects after resection of humeral tumors, substantially solving the functional and esthetic problems due to such a wide resection, and significantly improving the quality of life for the patient. Ollier disease is a rare non-hereditary skeletal disorder characterized by the presence of multiple enchondromas (enchondromatosis), and these cartilaginous lesions can be limited to one limb, or localized to one half of the body , 2. The A 25-year-old male patient was admitted to our department, presenting progressive pain and numbness with a rapidly growing mass located in the upper part of the right arm for 2 months in 2005. 
At the time of the hospital admission, he had completely lost his occupational capacity of the arm (100% unable to work). Physical examination revealed a palpable mass on the upper part of his right arm with a size 13 \u00d7 12 cm. Radiographic examination demonstrated a large mass over 10 cm in diameter on the proximal humerus, with massive cortical erosion, extension of the tumor into soft tissues and indistinctness of the surface of the tumor . A diagnA limb-salvage strategy with treatment of the large segmental defects following resection of the tumor and a prosthetic replacement of the proximal humerus was designed for the patient. The informed consent was obtained from the patient prior to the operation. The tumor resection prosthesis was applied in the same session following the tumor resection . The tumDuring the 8-year follow-up, clinical and radiologic examinations were done at the periodic controls. There are no signs of local recurrence and/or remote metastasis so far. The therapeutic results have been satisfied with a good functional recovery of the treated limb, enabling the patient to return to the pre-disease daily living and occupational activities. The overall functional outcomes were assessed by the musculoskeletal tumor society (MSTS) scoring system. The patient had the overall score 24 in the last examinations of the follow-up, and the ranges of motion of the shoulder joint are shown in Proximal humerus tumors treated with amputation usually result in a complete loss of the hand/limb function. Limb-salvage procedures have been popularly put into clinical practice in spite of amputations in bone tumor surgery. However, the surgical procedures remain challenges for orthopedic surgeons toward simultaneous replacing the defects of bone/soft tissue and restoring the functions of the involved joints/limbs after tumor resection. Preservation of the functional capacities of the involved upper extremity along with a complete removal of the tumor is considered as the mostly important criteria in the surgical management of the bone tumor of the proximal humerus. Moreover, concerns also should be paid for preservation of length and cosmesis of the upper extremity toward supporting quality of life for the patients. The indications for the prosthetic replacement of the proximal humerus tumors as proposed by Ross et al have beeIt is worth noting that good function of a treated upper limb with the procedures is largely based on the stability of the shoulder joint, while the stability of the shoulder joint can only be ensured by reliable reconstruction with minimizing the amount of resection of muscle and soft tissue within the complete removal of the tumor during surgery . Various"} +{"text": "With the advent of whole genome sequencing made clinically available, the number of incidental findings is likely to rise. False positive incidental findings are of particular clinical concern, and they can usefully be classified into four categories. In order of increasing challenge, there is first, the substantial proportion of 'textbook cases' of mutations documented to cause human disease in a highly penetrant Mendelian fashion, which are incorrectly annotated in the databases. The second is the technical/measurement error rate in genome-scale sequencing. Third is the incorrect assignment of prior probabilities for much of our genetic and genomic knowledge. The fourth derives from testing multiple hypotheses across millions of variants. 
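The third and fourth categories are, at bottom, consequences of Bayes' rule applied at genome scale: with very low prior probabilities and millions of variants interrogated per genome, even a small per-variant error rate yields a large number of false positive findings. The toy calculation below makes the point; every number in it is invented for illustration and is not an estimate taken from this work.

```python
# Toy illustration of why low priors plus genome-scale testing inflate the incidentalome.
prior = 1e-4          # assumed prior probability that a flagged variant is truly pathogenic
error_rate = 1e-3     # assumed combined annotation/measurement error rate per variant
sensitivity = 0.99    # assumed chance that a truly pathogenic variant is flagged

# Positive predictive value of a single flagged variant, from Bayes' rule
ppv = (sensitivity * prior) / (sensitivity * prior + error_rate * (1 - prior))
print(f"PPV of a single flagged variant: {ppv:.3f}")          # ~0.09

# Expected number of false positive "incidental findings" per genome
variants_tested = 3_000_000
false_positives = variants_tested * (1 - prior) * error_rate
print(f"Expected false positives per genome: {false_positives:,.0f}")
```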
I will describe the nature of these components, provide rough estimates for the magnitude of the problem and point out existing approaches that will serve to control the growth of these aspects of the incidentalome."} +{"text": "This study explores potential data mining applications in the Casemix context, which is expected to yield effective and efficient health care services. The objective of work focuses on determining hidden relevant patterns which can\u2019t be processed by human capabilities all alone. California Drug and Alcohol treatment Assessment (CALDATA) of administrative type database can be relevant study for the medical diagnosis in usage of alcohol and drugs for patients admitted and discharged during the stay in hospital to discover knowledge for recovery process.We utilized the observational study on cases registered to California Department of Alcohol and Drug Programs (ADP) to promote the initiative for increasing availability of abusive drug usage data for better drug recovery services among the California. The cases were diagnosed with Minitab diagnostic tool to access the Casemix databases for retrieval of hidden information using data mining tools. The K means clustering having used with dendrogram to determine the possibility of existence of patient admitted and discharged on the accountability for usage of abusive substance between the years 1991-1993. The classification of data is done among the educated and uneducated class for categorized race with correlation age at the time of admission to hospital. The analysis has been performed on the patients admitted due to abusive substance usage and treatment provided during the stay in hospital and discharge status for final medical diagnosis provided to patient those have suffered for long stay during hospitalization.There has been a tremendous increase in the incidence rate of admission cases in age group 45-49 years. The probability of over 40% cases acquiring maximum number of abusive substance exists in patients who have obtained post graduate education. The decline approximately 2.3% of criminal activities after proper diagnosis to patients with high level of alcohol dependency among the cases observed.The total number of cases evaluated to study was 1,826 in 1991-1993; total number of features selected was 1,205 for each case diagnosed. The cases were diagnosed on the basis of admission and discharge among the prevalence of abusive substance usage. The subject was classified for different subjects such as education, age, duration of stay in the hospital, estimated reduction on criminal cases, decrement of hospital cases while the treatment provided during the stay.We calculated the overall usage of abusive substance among the categorized race at time of admission with reference to the age. The results shows white were among the categorized age group of 17 and under has the maximum usage of abusive substance whereas native Americans are the one those who have minimum consumption of abusive substance usage. In diagnosis of longest time of stay during the treatment in hospital from day of admission to day of discharge due to abusive substance usage we have calculated the overall maximum number of prevalent cases during the year 1991- 1993, we have found that longest stay was observed for male/female aged below 17 years year and were correlated to marital status that is unmarried. 
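The clustering workflow described above combines K-means with hierarchical clustering and a dendrogram; the authors used Minitab. A minimal sketch of the same workflow in Python, applied to a small invented table of admission records (age at admission, length of stay, coded education level), might look like the following.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster, dendrogram
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Invented admission records: [age at admission, length of stay (days), education code]
records = np.array([
    [16, 45, 1], [17, 50, 1], [22, 30, 2], [23, 28, 2],
    [37, 15, 3], [38, 12, 3], [48, 20, 4], [49, 22, 4],
])

x = StandardScaler().fit_transform(records)        # put features on a comparable scale

# K-means partition into four groups (k = 4 chosen purely for illustration)
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(x)
print("k-means labels:      ", kmeans.labels_)

# Agglomerative (Ward) clustering; the linkage matrix can be drawn as a dendrogram
link = linkage(x, method="ward")
print("hierarchical labels: ", fcluster(link, t=4, criterion="maxclust"))
# dendrogram(link)   # render with matplotlib to inspect the cluster structure
```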
Four clusters were observed in the dendrogram. The largest cluster represents unmarried male and female patients, who showed the highest abusive substance usage; the second cluster represents divorced or separated patients, with the next highest usage; the third cluster represents single patients aged 21-24 years, who accounted for 48% of cases when grouped by length of stay in hospital; and the fourth cluster represents married patients aged 35-39 years. The smallest number of cases was observed among widowed patients. The integrated approach of K-means and hierarchical clustering in Minitab is well suited to providing insight into health service databases. The probability of a patient acquiring an abusive substance depends on several factors, such as education, age, marital status and other patient-related characteristics. Discharge status correlates strongly with criminal activity, and both criminal cases and hospital cases decreased markedly after treatment was provided to admitted patients."} +{"text": "Volumetric measurement of polyacrylamide hydrogel (PAHG) is useful for surgical planning. It is not only a significant factor in the preoperative evaluation of breast augmentation, but may also directly affect the postoperative shape of the breast. The objective of the present study was to evaluate whether magnetic resonance imaging (MRI) is able to provide precise calculations of injected PAHG volumes. MRI scans of ten randomly selected patients were imported into Mimics software. The volumes of PAHG were obtained following the reconstruction of the injected PAHG. In order to assess the precision and observer independence of the technique, the volumes of PAHG were estimated by three plastic surgeons using this method. No significant differences were identified among the PAHG injection volumes calculated by the three observers (P=0.173). The intra-observer correlation coefficient was 0.964, which indicates the precision and feasibility of this method for calculating the volume of PAHG. The use of MRI in combination with Mimics software to calculate PAHG volumes is likely to be of significant clinical benefit in preoperative surgical planning. Polyacrylamide hydrogel (PAHG) has been widely used for injection augmentation mammaplasty in Russia, China and Iran for more than two decades. Since April 2006, when the China State Food and Drug Administration announced that PAHG was prohibited from production and clinical application in plastic surgery, significant social concern was raised concerning the use of PAHG injections as soft tissue fillers. It was estimated that ~200,000 patients have received PAHG breast augmentation in the last decade. It is vital to estimate the precise volume and depth of the PAHG injected for breast augmentation. It is not only a significant factor in the preoperative evaluation of breast augmentation, but may also directly affect the postoperative shape of the breast. Therefore, a reliable method for calculating the volume of the injected PAHG is required. Magnetic resonance imaging (MRI) scans are commonly used to analyze the position of the injected PAHG. Therefore, the development of a rapid and precise method of estimating the volume of PAHG on the basis of MRI scans would be of benefit. The purpose of the present study was to define a volume measurement method.
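Mimics carries out the segmentation and volume computation internally; arithmetically, however, this kind of estimate reduces to counting the voxels labelled as PAHG on the reconstructed slices and multiplying by the voxel volume taken from the image headers. The sketch below is generic and uses an invented mask and invented voxel spacing, not the study data.

```python
import numpy as np

# Hypothetical segmentation: a 3-D boolean mask of PAHG voxels derived from the
# axial T2-weighted, fat-suppressed series (slice, row, column).
mask = np.zeros((40, 256, 256), dtype=bool)
mask[10:30, 100:160, 90:150] = True                 # stand-in for the segmented gel

# Voxel spacing from the DICOM headers (mm): slice thickness and in-plane spacing.
slice_thickness_mm = 3.0
pixel_spacing_mm = (0.7, 0.7)

voxel_volume_mm3 = slice_thickness_mm * pixel_spacing_mm[0] * pixel_spacing_mm[1]
volume_ml = mask.sum() * voxel_volume_mm3 / 1000.0  # 1 ml = 1000 mm^3
print(f"estimated PAHG volume: {volume_ml:.1f} ml")
```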
In addition, the reliability and precision of the volume measurement method were monitored.Between 2005 and 2012, 407 patients underwent PAHG removal breast surgery in the Department of Aesthetic and Plastic Breast Surgery of the Plastic Surgery Hospital. Each patient underwent breast MRI pre-operatively. Ten patients that had never had breast surgery prior to the study were randomly selected and enrolled in the study. Clinical characteristics of the patients are shown in All patients were scanned prior to the operative procedures by MRI. A 1.5-T MRI scanner with dedicated breast coils was used for the imaging. The standard protocol included axial T2-weighted images with and without fat depression and sagittal T1- and T2-weighted images with fat depression. The DICOM images of the MRI scans were imported into Mimics software . The axial T2-weighted images with fat suppression were useThree plastic surgeons independently carried out the volumetric measurement. Each patient was measured ten times by three independent plastic surgeons. Calculated PAHG volume data are presented in PAHG was injected in different layers and distributed differently in different patients. In patient 8, PAHG was injected into the subglandular space and distributed regularly and 2A. Calculated PAHG volume data are presented in et al considered the indications were as follows: Absence of breast neoplasm and infection; removal of >90% of the injected gel; no residual hydrogel in pectoral muscles and/or the subpectoral space; enough healthy mammary tissue and pectoral muscle present to cover the breast prostheses; inframammary folds are intact or are able to be reconstructed simultaneously; and no systemic or psychological problems (The purpose of the present study was to determine a method for precisely measuring the volumes of injected PAHG. The results demonstrate that the volume of the injected material may be precisely calculated, which is significant for the preoperative evaluation of patients prior to the removal of PAHG. Information concerning the distribution of the injected PAHG and the extent of the tissue infiltration was obtained. It was therefore possible to estimate the difficulty and the risks involved in the removal of the PAHG, as well as ensuring the feasibility of immediate breast augmentation and the postoperative shape of the breast. On the basis of the estimated volume, appropriately sized implants may be selected prior to breast augmentation, following the removal of the PAHG. However, there are specific restrictive indications for immediate breast shape repair. Luo The measurement method may also be used to calculate the volume of the residual hydrogel. The effectiveness of various approaches for the removal of PAHG from the breast may be evaluated on the basis of the estimated volume of residual hydrogel.et al have reported a method using a 3-D MRI reconstruction technique to determine the volume and distribution of the PAHG (There have been few studies analyzing volume measurement of PAHG. However, several studies have discussed the volume estimation of silicone gel-filled breast implants from MRI images \u201312. Prevthe PAHG . HoweverIn the present study, it was not possible to compare the calculated volumes of injected PAHG with the actual volumes, since information concerning the preoperatively injected PAHG volumes was not available. In addition, it was not possible to compare these volumes with the volume of the removed materials. 
Firstly, since PAHG is a hydrophilic filler material, some of the material may be easily aspirated following saline dilution. During the surgical procedure, the cavity was repeatedly irrigated with a large quantity of normal saline intraoperatively. Therefore, the amount of the removed PAHG may not be fully consistent with the estimated volume of PAHG. Secondly, the PAHG, as well as the degenerated tissue, were removed intraoperatively leading to the volume of all excised tissue being inconsistent with the estimated volume of PAHG. Thirdly, some of the PAHG may have been injected into another area, including the subcutaneous or intercostal muscles. In consideration of the intraoperative safety and the postoperative shape of the breast, it may not be possible to remove the PAHG completely. This also caused the volume of the materials removed to differ from the calculated volume of PAHG.In conclusion, MRI imaging offers a precise method for the volumetric measurement of injected PAHG. This is significant for pre- and post-operative evaluation and the selection of implants for immediate reconstruction."} +{"text": "During this last decade, nonlinear analyses have been used to characterize the irregularity that exists in the neuronal data stream of the basal ganglia. In comparison to linear parameters for disparity , nonlinear analyses focus on complex patterns that are composed of groups of interspike intervals with matching lengths but not necessarily contiguous in the data stream. In light of recent animal and clinical studies, we present a review and commentary on the basal ganglia neuronal entropy in the context of movement disorders. Characterization of the neuronal data stream of basal ganglia neurons has been the foundation of most of the functional models for movement disorders. The divergences (or complementarities) between these models mostly result from the analytical strategy used to characterize and to model the data stream of the basal ganglia (BG) neurons. The analysis of the firing rate has forged the \u201crate hypothesis\u201d while the frequency analysis has forged the \u201coscillatory model.\u201d Briefly, the rate hypothesis and oscillatory model suggest that increasing activity and/or beta oscillations in the output nuclei of the BG ) reduces motor selection and leads to hypokinesia in Parkinsonism. Since this last decade, different groups have integrated new mathematical tools to characterize and/or model the activity of the BG neurons including nonlinear analyses which describe complex patterns in the neuronal data stream. After reviewing the recent findings from the nonlinear analysis of the BG neurons, we present a review on the avenues and hypotheses brought by these newly integrated mathematical tools and their possible impacts on the next generation of functional models of the basal ganglia. The basal ganglia are part of corticocortical loops \u20137 and ha The sequential and convergent arrangement of excitatory and inhibitory neurons in these nuclei has forged the concept that basal ganglia and, inherently, information processing rely on the summation of excitatory and inhibitory inputs and are therefore linear in nature. To compare neuronal activity in the basal ganglia to the model predictions, the measurements of the firing rate and other linear markers in the time domain have been examined in animal and clinical studies. Data from these studies have contributed to the \u201crate model\u201d for movement disorders. 
This model is founded upon the assumption that the direct pathway (Str-GPi/SNr) is up-regulated by the D1 dopaminergic receptor and indirect pathway (Str-GPe-GPi/SNr) is down-regulated by the D2 dopaminergic receptor. The dynamic balance between these two pathways contributes in motor selection and motor inhibition, respectively . The \u201craIn addition to time domain analyses which are based on the probability distribution of interspike intervals (ISIs), other studies have characterized the firing activity of basal ganglia neurons in the frequency domain. In a majority of studies, differences in oscillatory activities have been identified between normal and pathological conditions \u201342. In PThe linear analyses used to characterize the basal ganglia activities in time and frequency domains measure the resultant linear combinations of independent patterns in the data stream. These analyses characterize the interspike interval (ISI) series by the summation of probability distributions for different durations of ISIs or several frequencies (power spectrum). However, the irregularity in the neuronal firing activity is not linear \u201351 sinceThe clinical relevance of these nonlinear features in neuronal discharge is not yet clear. Specifically, it is unknown whether the nonlinear dynamics of basal ganglia neurons are affected by the conditions of parkinsonism or dystonia. In retrospective analyses of a database of PD and dystonia neurons with temporal organizations (as defined in ), SanghePharmacological studies in primate models for movement disorders are needed to further investigate this hypothesis as their basal ganglia neuronal activity exhibits similar patterns to those seen in patients . In addiSince anti-Parkinsonian treatments decrease entropy and hyperkinetic conditions are associated with lower entropy, it is time to include the basal ganglia neuronal entropy as a putative interfering factor in the current model for the selection and the inhibition of motor information in the basal ganglia circuitry. Through exploring the current data framework available, we present a primary hypothesis on the nature of the GPi neuronal entropy regarding abnormal movement production.From the data discussed above, we hypothesize that high entropy in the GPi neuronal data stream is associated to an increased motor inhibition while reduced entropy in the GPi neuronal data stream is envisaged as a feature for increasing motor selection see . This hyThe relation between the entropy theory and the functions of the basal ganglia can be substantiated by the intrinsic (and logarithmic) relation between entropy and the correlation dimension . Since tThe use of nonlinear domain analyses to describe the neuronal and network activities inside the basal ganglia may provide new qualitative and quantitative information regarding the nature of the sensory-motor processing as well as its distortion in pathological conditions. It is expected that the inclusion of key nonlinear features into silicone-based models of the basal ganglia could better reproduce complex and nonstationary signals recorded in normal and pathological conditions. To date, the \u201centropy hypothesis\u201d may be useful to initiate a debate on nonlinear dynamics in basal ganglia activity and their roles in the selection and inhibition of motor programs. 
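Operationally, the entropy discussed in this literature is computed over interspike-interval (ISI) sequences. The sketch below is an illustration only: it uses the Shannon entropy of a binned ISI histogram, which is just one of several estimators applied in the cited studies, and the spike trains are simulated. It shows the expected ordering, namely that an irregular discharge yields a higher entropy than a regular one at the same mean rate.

```python
import numpy as np

def shannon_entropy_bits(isis, bin_edges):
    """Shannon entropy (bits) of an interspike-interval histogram."""
    counts, _ = np.histogram(isis, bins=bin_edges)
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(1)

# Simulated ISI sequences (seconds), both with a mean rate of about 60 spikes/s:
# a regular, pacemaker-like train and an irregular, Poisson-like train.
regular = rng.normal(loc=1 / 60, scale=0.001, size=2000)
irregular = rng.exponential(scale=1 / 60, size=2000)

edges = np.linspace(0.0, 0.1, 41)   # common 2.5 ms bins so the two trains are comparable
print("regular train:  ", round(shannon_entropy_bits(regular, edges), 2), "bits")
print("irregular train:", round(shannon_entropy_bits(irregular, edges), 2), "bits")
```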
Most importantly, these nonlinear analyses may contribute to reduce the gap between the basal ganglia models and the theories on the processing of motor information."} +{"text": "Auricular acupuncture has been utilized in the treatment of diseases for thousands of years. Dr. Paul Nogier firstly originated the concept of an inverted fetus map on the external ear. In the present study, the relationship between the auricular acupuncture and the vagal regulation has been reviewed. It has been shown that auricular acupuncture plays a role in vagal activity of autonomic functions of cardiovascular, respiratory, and gastrointestinal systems. Mechanism studies suggested that afferent projections from especially the auricular branch of the vagus nerve (ABVN) to the nucleus of the solitary tract (NTS) form the anatomical basis for the vagal regulation of auricular acupuncture. Therefore, we proposed the \u201cauriculovagal afferent pathway\u201d (AVAP): both the autonomic and the central nervous system could be modified by auricular vagal stimulation via projections from the ABVN to the NTS. Auricular acupuncture is also proposed to prevent neurodegenerative diseases via vagal regulation. There is a controversy on the specificity and the efficacy of auricular acupoints for treating diseases. More clinical RCT trials on auricular acupuncture and experimental studies on the mechanism of auricular acupuncture should be further investigated. Acupuncture is a part of traditional Chinese medicine (TCM). It has been accepted in China and has been used as one of the alternative and complementary treatments in western countries. Auricular acupuncture has been also used in the treatment of diseases for thousands of years. In the classic TCM text of Huang Di Nei Jing, which was compiled in around 500 B.C, the correlation between the auricle and the body had been described; all six Yang meridians were directly connected to the auricle, whereas the six Yin meridians were indirectly connected to the ear by their corresponding yang meridian, respectively . In HippNogier presented his discovery in several congresses and published it in an international circulation journal, which eventually led to the widespread acceptance of his approach. With some exceptions, the Chinese charts were very similar to Nogier's originals . The autonomic nervous system (ANS), which plays a crucial role in the maintenance of homeostasis, is mainly composed of two anatomically and functionally distinct divisions: the sympathetic system and the parasympathetic system. In terms of the influence of the parasympathetic system, the physiological significance of the vagus nerve is clearly illustrated by its widespread distribution . It conti6.7) point [Cardiac vagal postganglionic fiber endings release acetylcholine, which are bound with cholinergic M receptors on the myocardial cell membrane or vascular smooth muscle. Activation of the vagus nerve typically leads to a reduction in heart rate and blood pressure. Cardiovascular vagal regulations by auricular acupuncture have been investigated in clinical trials and animal experiments , 10\u201318. i) point . Acupunci) point .15) and cardiovascular regulation. 
In healthy volunteers, a significant decrease in heart rate and a significant increase in heart rate variability after manual ear acupressure at auricular acupoint CO15 have been shown [15 [15 produced marked short-term and long-term depressor effect as well as evident immediate effects on cardiac functional activities in grade II and grade III hypertension and marked effects on angiotensin II in grade III hypertension [Several investigations had focused on the relationship between auricular acupoint \u201cHeart\u201d and a greater percentage change in normalized low frequency power of HRV, thus, it suggested that auricular acupuncture intervention led to more cardiac parasympathetic and less cardiac sympathetic activities, which contributed to the improvement of postmenopausal insomnia . 4 combined with other acupoints of Daimai (GB26), ST36, and Sanyinjiao (SP6) resulted in a net increase in vital capacity during the period of acupuncture analgesia which lasted for 3 to 4 hours after stimulation [In a controlled single-blind study, a significant decrease in the olfactory recognition threshold by auricular acupuncture at the auricular \u201cLung\u201d point was found in 23 healthy volunteers . Bilatermulation . In fourmulation . Increase in intragastric pressure has been induced by auricular acupuncture in rats . By compThe auricle is innervated by cranial nerves and spinal nerves. Innervations of at least four nerves supply the anterior auricle: the auriculotemporal nerve, the auricular branch of the vagus nerve (ABVN), the lesser occipital nerve, and the greater auricular nerve. The auriculotemporal nerve is a mandibular branch of the trigeminal nerve, which mainly supplies the anterosuperior and anteromedial areas of the external ear. The auricular branch of the vagus nerve, which is the only peripheral branch of the vagus nerve, mainly supplies the auricular concha and most of the area around the auditory meatus. The lesser occipital nerve mainly innervates the skin of the upper and back parts of the auricular. The greater auricular nerve (GAN) from the cervical plexus supplies both surfaces of the lower parts of the auricle. The innervation of the auricle is characterized by a great deal of overlap between multiple nerves see Fig. Both Chinese and Western researchers have recognized the relationship between the auricle and vagal regulation. Arnold's reflex was first described in 1832 by Friedrich Arnold, professor of anatomy at Heidelberg University in Germany. It is one of the somato-parasympathetic reflexes. Physical stimulation of the external acoustic meatus innervated by the ABVN elicits a cough much like the other cough reflexes induced by vagal tone. There were also clinic reports on vagal tone responses such as cardiac deceleration and even asystole and depressor response, induced by stimulations including cerumen cramming in auditory canal or auricular concha , 24. EngThe anatomical relationship between the ABVN and the nucleus of the solitary tract (NTS) has been investigated. After applying horseradish peroxidase (HRP) to the central cut end of the ABVN in the cat, some labeled neuronal terminals were seen in the interstitial, dorsal, dorsolateral, and commissural subnuclei of the NTS; some of these terminals may be connected monosynaptically with solitary nucleus neurons which send their axons to visceromotor centers in the brainstem . 
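Earlier in this passage, autonomic effects of auricular stimulation are quantified with heart-rate-variability (HRV) indices such as the normalized low-frequency power. As a point of reference, the sketch below computes that index from an RR-interval series under the conventional LF (0.04-0.15 Hz) and HF (0.15-0.40 Hz) band limits; the band limits, resampling rate, and Welch settings are generic assumptions and are not taken from the cited trials.

import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import welch

def normalized_lf(rr_ms, fs=4.0):
    # Normalized LF power = LF / (LF + HF) from an RR-interval series in milliseconds.
    t = np.cumsum(rr_ms) / 1000.0                      # beat times in seconds
    even_t = np.arange(t[0], t[-1], 1.0 / fs)          # resample evenly for spectral analysis
    rr_even = interp1d(t, rr_ms, kind='cubic')(even_t)
    f, pxx = welch(rr_even - rr_even.mean(), fs=fs, nperseg=256)
    lf_band = (f >= 0.04) & (f < 0.15)
    hf_band = (f >= 0.15) & (f < 0.40)
    lf = np.trapz(pxx[lf_band], f[lf_band])
    hf = np.trapz(pxx[hf_band], f[hf_band])
    return lf / (lf + hf)

A fall in this ratio after stimulation would be read, as in the passage above, as a shift toward cardiac parasympathetic predominance.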
15 activates the cardiac-related neurons in the NTS to evoke cardiovascular inhibition, whereas the inactivation of the NTS with local anesthetics decreased the cardiovascular inhibitory responses evoked by auricular acupuncture [The auricular concha is mainly innervated by the ABVN. The relationship between the acupuncture stimulation at auricular concha and the NTS has also been investigated. In an animal study, acupuncture stimulation at auricular concha induced the hypoglycemic effect by activating the firing activities of the neurons in NTS . It is apuncture . Recently, it is suggested to assess the function of the vagus nerve through transcutaneous electric stimulation of the ABVN innervating parts of the ear. The 8\u2009mA stimulation was performed at five different electrode positions at the subject's right ear. A clear, reproducible vagus sensory evoked potential (VSEP) was recorded after stimulation at the inner side of the tragus of the right ear, instead of the other stimulation positions at the lobulus auriculae, the scapha, thecus antihelices superior, and the top of the helix. It is considered that cutaneous stimuli of this region are transported via the auricular nerve to the jugular ganglion and from there with the vagus nerve into the medulla oblongata and to the NTS . AlthougThe NTS in the brainstem carries and receives visceral primary afferent signals from a variety of visceral regions and organs. Neurons that synapse in the NTS participate into the autonomic reflexes, with a result to regulate the autonomic function. Outputs that go from the NTS are transferred to a large number of other regions of the brain including the paraventricular nucleus of the hypothalamus and the central nucleus of the amygdala as well as to other nuclei in the brainstem . Perhaps, extensive connections between the NTS with visceral organs and other brain structures may elucTherefore, we proposed the \u201cauriculovagal afferent pathway\u201d (AVAP); both the autonomic and the central nervous system could be modified by auricular vagal stimulation via projections from the ABVN to the NTS see . The nuclei of the vagus nerve in the brainstem have been implicated as one of the earliest regions in the pathophysiological process of both Alzheimer's and Parkinson's diseases. Far-field potentials from brainstem after transcutaneous vagus nerve stimulation at the auricle have been utilized as a noninvasive method in the early diagnosis of neurodegenerative disorders \u201333. We sVagus nerve stimulation has been approved by FDA as an alternative treatment for neuropsychiatric diseases such as epilepsy and depression. In order to avoid the disadvantages of cervical vagus nerve stimulation, less invasive methods including transcutaneous vagus nerve stimulation \u201336 and e15, but not Stomach (CO4), produced depressor effect on vascular hypertension [Several studies investigated the specificity of auricular acupoints. Parts of the studies agree on the concept that specific areas of the ear are related to specific areas of the body. Acupuncture at COrtension , 42. Spertension , 44. There is still disagreement on the specificity of auricular acupoint. Similar patterns of cardiovascular and gastric responses could be evoked by stimulation at different areas of the auricle, which do not support the theory of a highly specific functional map in the ear . 
AuriculThere are inconsistent study results related to the treatment effects of auricular acupuncture, which may be related to trial designing, clinical observation measures, the set of sham acupuncture, and statistical analyses \u201348. In c"} +{"text": "Memory T and B lymphocytes and long lived plasma cells represent a repository of the antigenic experience of an individual. By analyzing the specificity and function of these cells we can gain insights into the human immune response and identify mechanisms of protection and immunopathology. We have developed methods to dissect the functional heterogeneity and antigenic repertoire of human T cells, B cells and plasma cells. These methods are used: i) to identify subsets of effector and memory T cells with distinct roles in immune surveillance and protection in different tissues against different classes of pathogens, and ii) to dissect the relative role of plasma cells and memory B cells in the antibody response to pathogens and to isolate broadly neutralizing antibodies. A better understanding of the class and specificity of the human immune response will be instrumental to guide the design of effective vaccines."} +{"text": "Multi-electrode recordings allow the recording of the activity of a neural population of tens to hundred cells over periods of hours. Two examples are given by the recording of the activity of ganglion cells in the retina -3,6 , anIn the present work we propose a new and efficient algorithm to infer fields and pairwise couplings of an Ising model from the data. Our procedure considerably improves over the algorithm presented in and is bij inferred from Dark and Flicker. We have found that most of the couplings are conserved under the two stimuli but some pairs of neurons with large interactions in Flicker have weak couplings in Dark. We have used the inferred couplings to draw retinal maps in the receptive fields plane of the cells. For Dark, the largest coupling map define a planar graph with short range connections. For flicker the strong non conserved couplings pointed out in the previous paragraph often are long-range interactions.Our procedure has been validated on synthetic data sets, and used to re-analyze multi-electrode recordings of neural cells of the activity of salamander ganglion cells previously published in ,6 and ofWe will discuss some important aspects of the Ising model such as: How do couplings change with the removal of cells from the recording? What temporal correlations are neglected in the Ising model (dependence on the bin size Dt)? How do couplings inferred with a dynamical model (Integrate and Fire) compare with Ising couplings ?"} +{"text": "Low-dimensional attractive manifolds with flows prescribing the evolution of state variables are commonly used to capture the lawful behavior of behavioral and cognitive variables. Neural network dynamics underlie many of the mechanistic explanations of function and demonstrate the existence of such low-dimensional attractive manifolds. In this study, we focus on exploring the network mechanisms due to asymmetric couplings giving rise to the emergence of arbitrary flows in low dimensional spaces. Here we use a spiking neural network model, specifically the theta neuron model and simple synaptic dynamics, to show how a qualitatively identical set of basic behaviors arises from different combinations of couplings with broken symmetry, in fluctuations of both firing rate and spike timing. 
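Returning to the multi-electrode study summarized earlier in this section, which infers fields and pairwise couplings b_ij of an Ising model from binned spike trains: the text does not spell out its improved inference algorithm, so the sketch below uses the standard naive mean-field inversion of the covariance matrix as an illustrative stand-in.

import numpy as np

def naive_mean_field_ising(spikes):
    # Infer Ising fields h_i and couplings J_ij from a binary spike matrix.
    # spikes: array of shape (n_bins, n_neurons) with entries in {0, 1}; spins are
    # mapped to {-1, +1}. Naive mean-field inversion, used here only as a stand-in
    # for the (unspecified) algorithm of the original study.
    s = 2 * spikes - 1                       # {0,1} -> {-1,+1}
    m = s.mean(axis=0)                       # magnetizations <s_i>
    c = np.cov(s, rowvar=False)              # connected correlations
    j = -np.linalg.inv(c)                    # couplings (off-diagonal terms)
    np.fill_diagonal(j, 0.0)
    h = np.arctanh(np.clip(m, -0.999, 0.999)) - j @ m   # local fields
    return h, j

# Example with surrogate data binned at Dt = 20 ms.
rng = np.random.default_rng(1)
spikes = (rng.random((5000, 40)) < 0.05).astype(float)
h, j = naive_mean_field_ising(spikes)

The dependence of the inferred couplings on the bin size Dt, and their comparison with couplings from a dynamical (integrate-and-fire) model, are exactly the questions raised at the end of that abstract.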
We further demonstrate how such network dynamics can be combined to create more complex processes. These results suggest that 1) asymmetric coupling is not always a variance to be averaged over, 2) different networks may produce the same dynamics by different dynamical routes and 3) complex dynamics may be formed by simpler dynamics through a combination of couplings. The mechanistic explanations employed in behavioral, cognitive and neural sciences often take the form of network models and their dynamics. Various signatures of nonlinear dynamical phenomena are ubiquitous in these disciplines, e.g. phase transitions, pattern formation and time-scale separation. Yet, in the literature, one does not find a systematic account of the relationship between the emergence of the dynamics of behavioral and cognitive processes and the dynamics of the underlying neural networks and their structural properties. We argue here that such an account of the structure-function relationship begins by understanding the different ways the structure of a neural network leads to its collective dynamics. In particular, we focus on the contributions of network connectivity as a means to control the emergence of arbitrary low-dimensional dynamics.The nonlinear nature of human perception and action dynamics is well documented, with the early example of the Necker cube. In general, hysteresis and autonomous switching in cases of perceptual ambiguity have been modeled in terms of a bistable system Recent examples from neuroscience are available on the temporally extended neural processes underlying such behaviors. For examples, Graziano and colleagues stimulate a local ), which themselves can be dynamically complex, nonlinear, multistable and display all the features of behavior known from cognitive sciences. Perdikis and colleagues proposed such SFMs as building blocks for cognitive architectures Two important forms of degeneracy are present in the network models in this work: First, for any given process a network may generate, the mapping of the generating dynamical mechanism onto a connectivity matrix may be achieved by many different structural configurations. This degeneracy is systematic and reflected mathematically by the adjoint mapping of To account for the structure of more complex behaviors, it becomes necessary to understand how basic processes or primitives may be constituents of complex behaviors. In nonlinear or non-modular systems, compositionality is a nontrivial problem. The presupposition of timescale hierarchy allows for the decomposition of complex dynamics into simpler dynamics on multiple temporal scales. Such decomposition may either occur in parallel, which will give rise to two mutually coupled subsystems forming a hierarchy; alternatively the decomposition may occur sequentially (such as in the coexistence of slow and fast manifolds in the system dynamics), which will give rise to serial behaviors on different timescales (fast-slow). We showed the effectiveness of the former in the last simulation where control signals on a slow timescale reshape the effective phase space of the network; this reshaping produces produces transitions in the qualitative dynamics produced by the network that can then be identified by examining the changes in principal components over time. 
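A minimal simulation of a pulse-coupled theta-neuron network of the kind described above is sketched next; the Euler step, the exponential synapse, the coupling matrix, and the drive values are illustrative assumptions rather than the parameters of the study.

import numpy as np

def simulate_theta_network(W, I, T=2.0, dt=1e-4):
    # Euler integration of theta neurons:
    #   d(theta)/dt = (1 - cos theta) + (1 + cos theta) * (I + W @ syn)
    # W : (N, N) coupling matrix (possibly asymmetric), I : (N,) baseline drives.
    # A spike is registered when theta crosses pi; syn is a simple exponential synapse.
    n = len(I)
    theta = np.random.default_rng(2).uniform(-np.pi, np.pi, n)
    syn = np.zeros(n)
    spikes = []
    for step in range(int(T / dt)):
        drive = I + W @ syn
        theta = theta + dt * ((1 - np.cos(theta)) + (1 + np.cos(theta)) * drive)
        fired = theta >= np.pi
        spikes.extend((step * dt, i) for i in np.flatnonzero(fired))
        theta[fired] -= 2 * np.pi
        syn = np.exp(-dt / 0.005) * syn + fired / 0.005
    return spikes

# Two neurons with asymmetric coupling and slightly supra-threshold drive.
W = np.array([[0.0, 0.4], [-0.2, 0.0]])
spk = simulate_theta_network(W, I=np.array([0.05, 0.05]))

Varying the off-diagonal entries of W (breaking the coupling symmetry) is the knob that, in the account above, moves the collective dynamics between qualitatively different low-dimensional regimes.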
When applied in combination with sequential dynamics, phase space reshaping of SFMs may rapidly yield complex articulated processes, suggestive of how such processses may be structured in behavior.Recent work Another formulation of how behavioral dynamics are embedded in neural networks has been extensively developed by Sch\u00f6ner and colleagues The more general framework of liquid state computing, recently outlined in This has been clearly identified in the work of At the macroscopic level of the brain, lines of research have developed that coincide theoretically with the results presented here. In particular, the theory of neurocognitive networks As an illustrative example of a functionally meaningful process in behavioral neuroscience, we examine the production of simple movements, where it has been suggested that multistability, limit cycle and monostability form a fundamental set of classes of behaviors We derive different possible implementations of the basic dynamics discussed above in terms of the firing rate and spike timing patterns, and connectivity. We begin with a spiking state variable Such phase models are obtained using standard techniques in nonlinear oscillator theory In Putting the neuron in a network context, we introduce an additional term in the coefficient of In order to analyze the rate dynamics of We will also consider a reduction of the network relevant to short timescales based on the assumption To derive the phase locking attractors in a network, we use the set of conditions that for any pair of neurons In general, the existence conditions yield simple stability results whose structure does not depend on the details of the phase response curve. In order to apply such results to In order to obtain a phase response curve, we start with the definition of the perturbed period Figure S13D network dynamics: Analogously with A fixed point, B limit cycle, C bistability, and D monostability.(TIFF)Click here for additional data file.Figure S2Excitator flows: The behavior of an Excitator system is shown here for bistable, limit cycle and monostable dynamics in the phase space (top) and the time series (bottom). Red lines in the phase space are the nullclines of the system, while black lines show how the phase flows with time on example trajectories.(TIFF)Click here for additional data file.Figure S3Theta neuron dynamics:A Firing rate as a function of input for the theta neuron described in the text. Two circles give the input and rate for the two sample simulations shown on the right. B, C Time series from simulations of (TIFF)Click here for additional data file.Figure S4Rate reduction approximation The assumption of the rate reduction in the text is that the mean firing rate captures the relevant information in a spike train. Here we show in A and B, respectively, cases of low and high firing rates. Upper panels show in blue, red and green curves the omega dynamics time series under a mean firing rate, equal interspike interval (ISI) spike train and Poissonian spike train. Bottom plots show the log sum squared error of the mean firing rate time series with respect to that of equal ISI and Poissonian spike trains in green and blue.(TIFF)Click here for additional data file.Figure S5Phase response curves:A The phase response curve is found by perturbing the post synaptic neuron with a single presynaptic spike which produces a jump in the phase of the post synaptic oscillator. 
The black traces show the pre and post synaptic neurons B and C corresponding to positive and negative coupling values, respectively. Both are bimodal, however for both the stronger knee reflects the sign of coupling. The gray lines show the same response curve assuming the phase is linear with time.(TIFF)Click here for additional data file."} +{"text": "After publication of this work , we haveThe University of Georgia has filed a United States provisional patent on this technology.YY and RJ conceived the study. RJ performed the experiments under the guidance of YY. An equal contribution by YY and RJ was made for literature review and drafting of the manuscript. Both authors read and approved the final manuscript."} +{"text": "The proposed method evaluates the changes in the collection of activated or suppressed signaling pathways involved in aging and longevity, termed signaling pathway cloud, constructed using the gene expression data and epigenetic profiles of young and old patients' tissues. The possible interventions are selected and rated according to their ability to regulate age-related changes and minimize differences in the signaling pathway cloud. While many algorithmic solutions to simulating the induction of the old into young metabolic profiles in silico are possible, this flexible and scalable approach may potentially be used to predict the efficacy of the many drugs that may extend human longevity before conducting pre-clinical work and expensive clinical trials.The major challenges of aging research include absence of the comprehensive set of aging biomarkers, the time it takes to evaluate the effects of various interventions on longevity in humans and the difficulty extrapolating the results from model organisms to humans. To address these challenges we propose the The increasing burden of the aging on the economies of the developed countries is turning the quest to increase healthy life spans from an altruistic cause into a pressing economic priority required to maintain the current standards of living and facilitate economic growth , provided clues that transcription profiles of cancer cells mapped onto the signaling pathways may be used to screen for and rate the targeted drugs that regulate pathways directly and indirectly related to aging and longevity. Instead of focusing on individual network elements, this approach involves creating the signaling pathway cloud, a collection of signaling pathways involved in aging and longevity each comprised of multiple network elements and evaluating the individual pathway activation strength. Despite significant advances in aging research, the knowledge of the aging processes is still poor, and combining all available factors involved in cellular aging, aging of the organisms, age-related diseases, stress-resistance, and stress-response along the many other factors into a comprehensive signaling pathway cloud may be more beneficial than focusing on the narrow collection of elements. The creation of the pathway cloud may allow for the annotated databases of molecules and other factors to be screened for effectiveness of individual compounds in replicating the \u201cyoung\u201d signaling activation profiles in silico.Our prior work with gene expression and epigenetics of various solid tumors and naked mole rat that senesce at a slower rate than members of the same order show less transcriptome changes with age from 10 to 200%. 
We converted the obtained gene lists from different models to the general list of human orthologs where it is possible. 226 genes of 315 from our set were subjected to over-representation pathway analysis in , general metabolism , RNA transport, cell cycle and meiosis, gap junction, peroxisome, cyrcadian rhythm, different synapse types , gastric acid secretion as well as age-related diseases pathways , Hepatitis B, HTLV-I infection and cancer pathways . We considered obtained such a way pathways as probably associated with the human longevity. Human genes known as key activators/repressors of these pathways may be used in provided further mathematical model.SPCD) is proportional to the following estimator function,AGEL]i and [RGEL]j are gene expression levels of an activator i and repressor j, respectively. To obtain an additive rather than multiplicative value, it is enough just turn from the absolute values of the expression levels to their logarithms, arriving at the pathway activation strength (PAS) value for each pathway -to-Young ratio, YORn, one just has to divide the expression levels for a gene n in the sample taken for the senescent person by the same average value for the normalized young group. The discrete value of ARR (activator/repressor role) equals to the following numbers:The methods that may be applied for the possible analysis of geroprotector efficiency by pathways regulation have been arisen from our research experience of cell signaling pathways. As far as we have seen before equals to zero when the OYR value lies within the tolerance limit, and to one when otherwise. During the current study, we have admitted that the OYR lies beyond the tolerance limit if it satisfies simultaneously the two criteria. First, it either higher than 3/2 or lower than 2/3, and, second, the expression level for a corresponding gene from an old patient of an individual patient differs by more than two standard deviations from the average expression level for the same gene from a set of analogous young tissue/organ samples.The Boolean flag of in vivo and in vitro may be expanded to identify and predict the efficacy of personalized aging-suppressive intervention regimens for individual patients based on the transcriptome information from various tissue biopsies and blood samples.We propose a new computational approach for identifying and rating the variety of factors including small molecules, peptides, stress factors and conditions with the known effects on the transcriptomes at different ages of one or more cell or tissue types or known targets Figure . The appPAS values for the general set of the pathways as close to zero as possible. Since many of the drugs approved for use in humans have known molecular targets and some have been screened for the impact on longevity in model organisms for each individual pathway Figure and consin silico screening and ranking of drugs and other factors that act on many signaling pathways implicated in aging processes by calculating their ability to minimize the difference between signaling pathway activation patterns in cells of young and old patients and confirming the results using in vivo and in vitro studies.Longevity studies of aging-suppressive drug efficiency in higher mammals take several years and decades and may cost millions of dollars. 
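The estimator functions referenced in the passage above did not survive extraction, so the following sketch assumes a pathway activation strength of the additive logarithmic form PAS_p = sum_n ARR_n,p * BTIF_n * lg(OYR_n), which is consistent with the described activator/repressor roles, Old-to-Young ratios, and Boolean tolerance flag; the exact estimator in the original work may differ.

import numpy as np

def pathway_activation_strength(old_expr, young_expr, arr):
    # PAS for one pathway, assuming PAS = sum_n ARR_n * BTIF_n * log10(OYR_n).
    # old_expr   : (n_genes,) expression levels in the old sample
    # young_expr : (n_young, n_genes) expression levels of the young reference group
    # arr        : (n_genes,) activator/repressor role, e.g. +1 or -1
    # The tolerance rule for BTIF follows the description above: OYR outside
    # [2/3, 3/2] AND more than two standard deviations from the young group mean.
    young_mean = young_expr.mean(axis=0)
    young_sd = young_expr.std(axis=0, ddof=1)
    oyr = old_expr / young_mean
    outside_ratio = (oyr > 1.5) | (oyr < 2.0 / 3.0)
    outside_sd = np.abs(old_expr - young_mean) > 2.0 * young_sd
    btif = (outside_ratio & outside_sd).astype(float)
    return float(np.sum(arr * btif * np.log10(oyr)))

Under this reading, a candidate geroprotector would be ranked by how far it drives the PAS values of the whole pathway cloud back toward the young reference, i.e. toward zero.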
An intelligent process for predicting the activity and ranking the geroprotective activity of various factors and strengthening the prediction in rapid and cost-effective studies on cell cultures and model organisms may help increase the longevity dividend of these studies. In this paper we propose a method for The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "A number of new data were obtained concerning the relationship of nitrogen oxide metabolism and C-reactive protein formation, clinical course of rheumatoid arthritis. For the first time a complex approach was suggested for the pathogenic justification of simvastatin use in the scheme of conventional treatment to increase the therapy efficiency, to achieve stable early remission in patients with rheumatoid arthritis. It was proved that an important mechanism of increasing the therapeutic efficiency of simvastatin was its action on the system of endothelial function in blood and joint fluid. It was suggested that one should include assessment of blood and joint fluid for nitrogen oxide, nitrate diaphorase and nitrate reductase in the algorithm of investigation and dynamic observation, choice of tactics and therapy efficiency assessment.Obtained new data are necessary for increasing the pharmacotherapy efficacy in patients with rheumatoid arthritis taking into account the metabolic activity of NO-synthetase mechanism in blood and synovial fluid. An algorithm was suggested for screening observation and differentiated management of patients with rheumatoid arthritis taking account of severity of nitrogen oxide metabolism disorders. A differentiated approach was worked out and justified of simvastatin prescription both to increase the efficacy of treatment taking into account the clinical activity of the disease and to correct metabolic disorders in patients with rheumatoid arthritis."} +{"text": "Freezing of gait (FOG) is a disabling symptom of advanced Parkinson's disease (PD) that leads to an increased risk of falls and nursing home placement. Interestingly, multiple lines of evidence suggest that the manifestation of FOG is related to specific deficits in cognition, such as set shifting and the ability to process conflict-related signals. These findings are consistent with the specific patterns of abnormal cortical processing seen during functional neuroimaging experiments of FOG, implicating increased neural activation within cortical structures underlying cognition, such as the Cognitive Control Network. In addition, these studies show that freezing episodes are associated with abnormalities in the BOLD response within key structures of the basal ganglia, such as the striatum and the subthalamic nucleus. In this article, we discuss the implications of these findings on current models of freezing behavior and propose an updated model of basal ganglia impairment during FOG episodes that integrates the neural substrates of freezing from the cortex and the basal ganglia to the cognitive dysfunctions inherent in the condition. 
Freezing of Gait (FOG) is a common disabling symptom of Parkinson's disease (PD) that typically manifests itself as a sudden inability to walk, despite the intention to move forward , the STN and the MLR and the bilateral ventral striatum , the STN is able to bypass the striatum and directly drive an increase in inhibitory GABAergic output from the output structures of the basal ganglia, such as the internal segment of the GPi. Increased activity in the GPi, which is a member of the direct pathway of the basal ganglia, leads to an increase in the rate of inhibitory output onto the brainstem and thalamic structures that control the output of effective motor behaviors Frank, see Fig. Given iGiven the specific patterns of connectivity within the basal ganglia circuitry, the likely sequelae of impaired striatal activity is that the output structures (the GPi internus and the substantia nigra pars reticularis) will enter into low-energy oscillatory states, coupling with structures such as the STN (Buzs\u00e1ki and Draguhn, These proposed roles of the STN are well supported by behavioral (Aron and Poldrack, The oscillating inhibitory state of the basal ganglia nuclei may also explain the poorly understood phenomenon of \u201ctrembling in place,\u201d which refers to lower limb oscillatory activity in the 5\u20137 Hz range commonly observed during episodes of FOG (Moore et al., One of the major implications of this model is that freezing is best conceptualized as a functional disorder that only manifests once certain circumstances have occurred. This raises an interesting question regarding the likely location of pathology in the brains of patients with PD and freezing behavior. Based on the model, any pathological process that impaired the capacity of the brain to deal with information processing, and thus, to manifest increased conflict signaling would lead to an increase in freezing behavior.Although there are many regions of the brain in which pathology would lead to increased global conflict, the most likely candidates are the ascending neurotransmitter projection systems of the brainstem, such as the ventral tegmental area, the locus coeruleus, and or the dorsal raphe nucleus. Each of these nuclei sends modulatory neurotransmitters to large portions of the brain, including both cortical and subcortical sites involved in walking and executive function. Indeed, these regions are often the target of Lewy body pathology (Rye and DeLong, Another possible candidate region is the PPN (Mazzone et al., It is also possible that the proposed increase in STN oscillatory activity is due to a dysfunctional process within the pSMA (see Figure Although the separate predictions of these different hypotheses may be difficult to dissociate with fMRI, measures with higher temporal resolution may help to clarify the precise role of each structure in the spatiotemporal evolution of a freezing episode. As such, future studies should now be constructed to test the different aspects of this model using an array of neuroscientific techniques. Firstly, activity within the STN and PPN could potentially be recorded directly during DBS surgery, allowing for the analysis of the time course of activation and deactivation patterns within the different nuclei with respect to freezing behavior. 
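The circuit account developed above, in which a cortical conflict signal reaching the STN via the hyperdirect pathway raises GPi inhibitory output and transiently suppresses thalamic and mesencephalic locomotor drive, can be caricatured with a toy firing-rate model. The equations, gains, and time constants below are illustrative assumptions only and are not the authors' model; the sketch merely shows that a brief conflict pulse produces a freezing-like dip in the downstream output.

import numpy as np

def simulate_conflict_episode(conflict, T=2.0, dt=1e-3):
    # Toy rate model: cortical conflict signal -> STN -> GPi -| thalamus/MLR.
    n = int(T / dt)
    stn, gpi, thal = 0.0, 0.0, 1.0
    tau = 0.05
    out = np.zeros((n, 3))
    for k in range(n):
        c = conflict(k * dt)                                     # conflict input in [0, 1]
        stn += dt / tau * (-stn + 2.0 * c)                       # hyperdirect excitation of STN
        gpi += dt / tau * (-gpi + 1.5 * stn + 0.5)               # STN excites GPi above its tonic baseline
        thal += dt / tau * (-thal + max(0.0, 1.2 - 1.0 * gpi))   # GPi inhibits thalamic/MLR output
        out[k] = (stn, gpi, thal)
    return out

# A brief conflict pulse between 0.5 s and 1.0 s produces a transient drop in the
# locomotor output channel, the freezing-like behavior described above.
trace = simulate_conflict_episode(lambda t: 1.0 if 0.5 < t < 1.0 else 0.1)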
Future neuroimaging experiments should explore the presence or absence of impairments in functional and effective connectivity associated with the predictions of the model, with a particular emphasis on the dynamic connectivity between cortical and subcortical structures. Finally, computational modeling experiments could be designed in order to probe the dynamic elements of the model, focusing on whether abnormal conflict processing through the STN can predict the specific behavioral patterns displayed on different neuropsychological and motor-based tasks by patients with freezing. Together, the results of these studies will help to inform the next generation of therapeutic advances for freezing behavior in PD, including the utilization of targeted closed-loop DBS (Rosin et al., The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Breast cancer is a major health problem among women in the world. The successful treatment of this disease is limited by the fact that essentially all breast cancers become resistant to chemotherapy. Therefore, there is a need to design new chemotherapeutic agents able not only to target breast cancer but also to display increased efficacy and overall decreased systemic toxicity. Platinum (II) complexes are widely used in cancer chemotherapy. The most important platinum-based drugs are cisplatin, carboplatin, the first platinum (II) derivatives entering the market, and more recently oxaliplatin, nedaplatin, iobaplatin, heptaplatin and picoplatin The computer-aided calculations are commonly applied to rational design of many pharmacological group of drugs. Platinum based cytostatics are one of the exceptions. The correlation between antiproliferative activity and molecular descriptors of Pt-drug analogues of clinically used complexes are present in this work. The main goal of this study was to show the relations between hydrophobicity and biological activity in the series of neutral platinum (II) and platinum(IV) complexes. The object of the study is several groups of analogues of oxaliplatin and picoplatin complexes, with N-donors ligands . The influence of the type and the positions of the substituents on the conformational energies and thermodynamic stabilities of a series of platinum (II) and platinum (IV) complexes has been studied by molecular mechanics. The calculations were carried out for the ligand conformations. The obtained energies and thermodynamic stabilities are in agreement with experimental data on the reactivity and antitumor activity of the compounds."} +{"text": "During this rather short time, he together with Emil von Behring discovered the causative pathogens of tetanus and diphtheria and contributed substantially to our basic understanding of the interaction of the immune system and invading pathogenic microorganisms In keeping with the tradition of Kitasato, a major theme of the symposium will be the translation of basic science principles into understanding human disease. The keynote lecture of the 2011 Kitasato symposium will be delivered by Antonio Lanzevecchia (Belinzona/Switzerland) who has contributed many novel insights into understanding of immune regulation and host defense. 
His lecture is entitled \"Dissecting the human immune response to pathogens\".After successful meetings in 2009 and 2010, an international faculty of largely immunologists and rheumatologists will gather in Potsdam on September 22This year's Kitasato Symposium will be a joint meeting with the Research Collaborative Consortium (Sonderforschungsbereich) 650 \"Cellular approaches to the suppression of unwanted immune reactions - from bench to bedside\". As in previous years, the Kitasato symposium will focus on mechanisms of autoimmunity and tolerance emphasizing the role of cytokines. A deeper understanding of these aspects and adapyive and innate immunity has paved the way to innovative therapies for autoimmune disease within the last decade, especially in rheumatoid arthritis and very recently in systemic lupus erythematosus (BAFF/BLyS blockade).In specific sessions, the role of tolerance in autoimmunity as well as transplantation, signaling pathways in cytokine stimulation, the analysis of new cytokine targets, and the translational of cytokine biology into human disease will be discussed. The Symposium will especially focus on novel developments within the last few years with the promise of yielding new targets for therapy. In addition, insights on disease biology developing from the clinical use of biologis will be highlighted.It is the promise of the meeting to provide new perspectives of basic, translational and clinical research in the field serving the ultimate goal of improving the treatment of patients. The collection of the individual contributions is summarized in the following abstract supplement."} +{"text": "With the gradual development of intelligence, human got curious to know his origin and evolutionary background. Historical statements and anthropological findings were his primary tool for solving the puzzles of his own origin, until came the golden era of molecular markers which took no time to prove it\u2019s excellence in unveiling answers to the questions regarding the migration pattern of human across different geographical regions. As a bonus these markers proved very much beneficial in solving criminal offenses and in understanding the etiology of many dreaded diseases and to design their prevention. In this review, we have aimed to throw light on some of the promising molecular markers which are very much in application now-a-days for not only understanding the evolutionary background and ancient migratory routes of humans but also in the field of forensics and human health. Since the origin, spread of mankind across the world has always been an emerging area of interest for modern biologists. Humans migrated \u201cout of Africa\u201d to other geographical locations around the world and eventually diversified into distinct human races populating distinct geographical regions. Human diversity did not only remain restricted to their socio-cultural and linguistic domains but also have penetrated deep inside their genetic root. The wealth of genetic/allelic diversity is not only an excellent resource for human diversity studies but also is highly informative for the study of human genetic predisposition of various diseases . Thus thSince human genome varies from individual to individual, no two individuals are alike genetically or phenotypically. With the development of various molecular techniques the application of genetics to the study of human evolution gave rise to the fields of molecular evolution and molecular anthropology. 
Various informative and polymorphic genetic markers were discovered and the gene frequency data emerging out from their analyses largely contributed to the successful study of evolution and diversity of human races worldwide. The use of a good number of uniparental and biparental markers for genetic diversity studies is a recent trend in which Y-haplogroup, mitochondrial DNA (mtDNA), human leukocyte antigen (HLA) and killer-cell immunoglobulin-like receptor (KIR) are the promising ones. The inheritance pattern emerging out from the analyses of these markers stirred a debate on the validity of two distinct models of human dispersals since their inception more than 25 years ago .Homo sapiens, distributed throughout the Old World and all regional populations were connected by gene flow as they are today. Some skeletal features developed and persisted for varying periods in the different geographical regions justifying the development of recognizable regional morphologies in the continents of Africa, Europe, and Asia. On the other side, the \u201crecent out of Africa\u201d model which according to The application of mtDNA to trace the evolutionary pattern and the migration events in human is based on the fact that certain haplotypes are observed in peoples of certain geographical regions of the World on the Y chromosome. There is now extensive knowledge regarding the geographic origins of Y-SNPs based on studies of global populations . BecauseY haplogroup diversity has been carried out by Earlier studies indicated that Y-chromosome polymorphisms were geographically restricted and that FST values for the NRY were higher than those for mtDNA . Hammer Additionally, Y-chromosome microsatellites find extensive application in forensic researches whereby databases of population haplotype frequencies are established for Europe, the United States and for Asia. Y microsatellite analysis provides assailant specific profile during diagnosis of the rape case when the rapist is mainly azoospermic . PaterniThe major histocompatibility complex (MHC)/HLA is unique in that it is the most polymorphic genetic system in the human genome and the only system to display functional polymorphism . Due to Human leukocyte antigen polymorphism study has been carried out in many of the ethnic populations of India including the primitive tribal group Toto . DebnathApart from being an invaluable tool for population genetic studies, MHC polymorphism has important role in organ transplantation and human disease associations. HLA associations have also helped in defining syndromes of disease categories having common/shared pathogenic mechanism like ankylosing spondylitis and related spondylo-arthropathies that are presumed to be associated with HLA-B27. HLA association studies in infectious and autoimmune diseases show the presence of susceptibility and protective alleles in populations of different ethinic origins . HLA assHuman leukocyte antigen associations with diseases vary in different populations. Disease predisposing genes and their molecular subtypes could help to determine and predict the incidence of the diseases in some populations. 
It is therefore important to have a population based database of HLA alleles and their frequencies of prevalence in healthy individuals so that disease predisposing influence of a particular phenotype could effectively be assessed in the populations.Being a functionally a polymorphic system, investigations into the distribution of MHC alleles in world populations are very important in this regard since the MHC genetic makeup of each of these populations would reflect interplay of both the basic genetic origin and effects of natural phenomenon such as founder effect and environmental selection. Differences in the prevalence of HLA alleles in different populations in varied environmental conditions could be utilized to assess the role of each of these alleles in conferring survival advantage to human populations.Killer-cell immunoglobulin-like receptors were first described by Immunogenetic studies based on KIR genes in different ethnic populations around the world show significant differences in the distribution of group A and B haplotypes. Whereas in the Japanese population group A genotypes were found at frequencies well above 50%, only a single individual out of 67 exhibited a group A genotype in a survey among Australian Aborigines ,b.The KIR frequencies of many of the ethnic populations were analyzed worldwide. In one such work, KIR gene profile was studied for the Rajbanshi population, an essential caste population of Sub-himalayan part of north-eastern India . It was Figure 3 (adopted from The frequencies of the inhibitory KIR genes in most of the world population groups are very high except those on the B haplotypes, i.e., KIR2DL2, KIR2DL5A, and KIR2DL5B. Detailed analysis revealed that indigenous populations such as aborigines and Amerindians have outlying frequencies of the KIR genes. Obviously there is a close inverted correspondence between the frequencies of KIR3DL1 and KIR3DS1 genes in an individual population. Based on KIR haplotype B genes In humans, KIRs recognize HLA class I proteins leading to the inhibition or activation of cytotoxic cell activity and cytokine production by T and NK cells thus focusing on the role of these receptors in immunological responses of NK cells . The intHuman have developed their interest in unveiling the mysteries of human migratory pattern and evolutionary trends since his origin. These above mentioned markers are serving the scientific world to trail back through time to understand the dispersal pattern of humans. To add to their importance, these markers are also responsible for understanding the underlying etiology of certain disease pathogenesis. Application of these markers especially Y-SNPs in forensics has been an interesting achievement in the past decade. Apart from these markers, a group of recently emerging markers which are gaining the attention of the researchers all over the world are the toll-like receptors TLRs; . In addiThe authors declare that the research was conducted in the absence of any commercialor financial relationships that could be construed as a potential conflict of interest."} +{"text": "Many ecosystem services provided by forests are important for the livelihoods of indigenous people. Sacred forests are used for traditional practices by the ethnic minorities in northern Thailand and they protect these forests that are important for their culture and daily life. Swidden fallow fields are a dominant feature of the agricultural farming landscapes in the region. 
In this study we evaluate and compare the importance of swidden fallow fields and sacred forests as providers of medicinal plants among the Karen and Lawa ethnic minorities in northern Thailand.We made plant inventories in swidden fallow fields of three different ages and in sacred forests around two villages using a replicated stratified design of vegetation plots. Subsequently we interviewed the villagers, using semi-structured questionnaires, to assess the medicinal use of the species encountered in the vegetation survey.We registered a total of 365 species in 244 genera and 82 families. Of these 72(19%) species in 60(24%) genera and 32(39%) families had medicinal uses. Although the sacred forest overall housed more species than the swidden fallow fields, about equal numbers of medicinal plants were derived from the forest and the fallows. This in turn means that a higher proportion (48% and 34%) of the species in the relatively species poor fallows were used for medicinal purposes than the proportion of medicinal plants from the sacred forest which accounted for 17\u201322%. Of the 32 medicinal plant families Euphorbiaceae and Lauraceae had most used species in the Karen and Lawa villages respectively.Sacred forest are important for providing medicinal plant species to the Karen and Lawa communities in northern Thailand, but the swidden fallows around the villages are equally important in terms of absolute numbers of medicinal plant species, and more important if counted as proportion of the total number of species in a habitat. This points to the importance of secondary vegetation as provider of medicinal plants around rural villages as seen elsewhere in the tropics. Ecosystem services and goods have received much attention in recent years. Typically services and goods include 1) supply of valuable commodities and materials such as agricultural-, forest-, mineral-, and pharmaceutical products, 2) support and regulation of environmental conditions through flood control, water purification, pollination, and a number of other similar processes and 3) provision of cultural and aesthetic benefits that may also be the basis for ecotourism [As in many tropical regions, shifting cultivation is a major land use system in northern Thailand and it is a major driver of deforestation in the upland areas ,3. AboutSacred forests are segments of the landscape that represents old traditions of preserving climax forest patches based on local culture and religious beliefs and they are found throughout the world. A sacred forest represents a functional link between cultural life and the forest management system of a region. Sacred forests have been studied in many parts of the world including Africa, , China , and espSimplistic views of ethnoecological relationships between ethnic groups and their surrounding ecosystems often view the untouched virgin species rich forests as the main provider of useful plants, whereas secondary vegetation is often seen as degraded and useless. A growing body of evidence however points to these secondary recovering ecosystems as important providers of useful plants. Examples of how secondary vegetation make important contributions to the provision of useful plants come from the Amazon and the Atlantic forests in South America -26 and fThe study area is in Mae Cheam watershed in northern Thailand approximately 75 km southwest of the city of Chiang Mai. 
This watershed is important for its biodiversity and its varied forest types and vegetation and in addition it is inhabited by several ethnic minority groups . Our stuWe established sampling plots around both villages in 2009 and 2010 in the sacred forest and swidden fallow fields of various ages . Three plots (20\u2009\u00d7\u200940 m) were laid out parallel to contour lines and these three plots were replicated in each habitat. In the 24 plots all plant species were collected and later identified at the Queen Sirikit Botanic Garden Herbarium (QSBG) with the help of taxonomic specialists J. F. Maxwell and M. Norsaengsri. Voucher specimens are deposited at the herbaria of the Department of Biology, Chiang Mai University and at Queen Sirikit Botanic Garden Herbarium (QSBG), Chiang Mai, Thailand. Based on species lists derived from the vegetation surveys of each habitat type, ethnobotanical data were gathered between August, 2011 and February, 2012 using semi-structured interviews. Our informants were villagers who were born and had always lived in the communities and their ages ranged from 15\u201384 years. Photographs of plants and freshly collected material from the swidden fallow fields and sacred forest were shown to the informants following established interview techniques ,36. The Jaccard\u2019s Index (JI) was used to determine the similarity of medicinal plants species , which ia is the number of species unique to area A and b is the number of species unique to area B, and c is the number of species found in both areas.Where Use Value was calculated to determine the most important medicinal plant species in each habitat ,Ui is the number of use-reports cited by each informant for a given species in each habitat and N is the total number of informants.Where Linear regression was done to account for correlated responses between the age of fallow fields and total number of medicinal plants in each sampling sites. Chi-square test was used to analyze differences between habitat and number of medicinal plants species in the two villages and to analyze if the sources of medicinal plants depend on the habitat. All analyses were done with the SPSS 16.0 software package for Windows.In total we registered 365 species, 245 in the Karen village and 240 in the Lawa village. The highest species richness was found in the sacred forests of both villages and the lowest number of species was found in the youngest (1\u20132 years old) fallow fields nor when the villages were tested separately . Linear regression test in both villages showed that the age of the fallow fields was a weak factor and had negatively significant effect on the total number of medicinal plants and also negative effect in each village but without significant differences . This explains that the age of fallow did not affect the total number of medicinal plants. So although the sacred forest is much older and richer in species than the fallow fields, they do not provide higher number of medicinal plant species also when the village were tested separately .Because the four habitat types provide roughly similar numbers of medicinal plants even if their overall species richness is significantly different, the proportion of the species that is used medicinally of a given habitat is greatly different. The young 1\u20132 years) fallow fields have few species but 48% and 34% of them are used medicinally by the Karen and the Lawa, respectively. 
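The formulas for the two indices described in the methods above were lost in extraction. Given the variable definitions that remain (a and b the numbers of species unique to areas A and B, c the number shared between them; U_i the use-reports of informant i for a given species, N the total number of informants), the standard forms of these indices, assumed here as a reconstruction, are:

\[ JI = \frac{c}{a + b + c}, \qquad UV = \frac{\sum_{i} U_{i}}{N} \]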
In the species rich sacred forests, in contrast, only 22% and 17% of the species are used medicinally (Figure\u00a0\u20132 years etc. It is interesting that the most recently abandoned field, i.e., the swidden fallows that are 1\u20132 years old, have the highest proportion of their species being used medicinally. This preference for using secondary vegetation as a source of medicinal plants has previously been demonstrated in the Atlantic Forest of Brazil [The swidden fallow fields of different ages of regeneration and the sacred forests provided about equal numbers of medicinal plant species to the two villages. This is surprising when seen in the light of the much higher overall species richness of the sacred forest compared to the surrounding swidden fallow fields. The more intense use of the secondary vegetation of the fallows may be because they are closer to where the villagers have their houses. Another possible explanation may be discouragement coming from the village council\u2019s desire to conserve the sacred forest. The fallow fields, in contrast, are part of the productive land surrounding the villages and the swidden fallows belong to individual villagers which eliminates any problem related to ownership, f Brazil and alsof Brazil , in dry f Brazil and in Vf Brazil . It appeSacred forest and their surrounding fallow fields of different age of regeneration provided approximately the same number of medicinal plant species to both villages. Because the fallow fields were less species rich, the proportion of their species with medicinal uses was consequently higher. Sacred forests are conserved as community forest and they make up a network of protected forest in northern Thailand . NeverthThe authors declare that they have no competing interests.The article was initiated by AJ, who recorded and analysis data and prepared the first write-up of the manuscript. HB has critically edited and shaped subsequent versions. AI, AJ, PW have read and approved the final version of the manuscript. All authors read and approved the final manuscript.AJ is a PhD student at the University of Chiang Mai, Thailand, under supervision of associate professor PW and co-supervision of AI and AJ, assistant professors at University of Chiang Mai and members of the Ethnobotany research group. HB is professor at Aarhus University, Bioscience, and functions as external supervisor to AJ and as host to her long term visit to Aarhus University."} +{"text": "The structure of the cerebral cortex results from the orderly migration of two major types of neurons, the glutamatergic projection neurons and the GABAergic interneurons. Most GABAergic neurons originate from the ganglionic eminences. They first migrate tangentially toward the pallium and then migrate radially through the developing brain cortex. The disruption of the VZ and SVZ of the palium and subpalium has been shown to occur at key developmental periods in the hydrocephalic mutant mouse hyh. The aim of the present investigation was to study whether such a disruption of the VZ and SVZ results in abnormalities of the GABAergic neurons populating the brain cortex.The brain of non-hydrocephalic and hydrocephalic hyh mice at postnatal day 7 (n=20) were processed for immunocytochemistry and immunofluorescence using antibodies against GABA and the marker of neuronal nuclei NeuN. Sections processed for double immunofluorescence were inspected with an epifluorescence microscope provided with the multidimensional acquisition software AxioVision Rel. 
Single and overlay images were used for quantitative analyses of the whole populations of cortical neurons (NeuE-reactive) and that of GABAergic neurons. Absolute and relative cell density and intracortical distribution were recorded for the GABAergic neurons.The mutant hyh mice were characterized by (i) a marked reduction in the width of the cerebral cortex; (ii) a reduction in the total number of GABAergic neurons; (iii) a reduction in the relative number of GABAergic neurons with respect to the total population of neurons (GABA/NeuE); (iv) an abnormal distribution of GABAergic neurons in the cortex layers, with a significant reduction in layers II and III and an increase in layer IV y V. The hyh mutation is associated with: (i) a decreased number of GABAergic neurons migrating from the ganglionic eminences; (ii) abnormal migration of GABAergic neurons through the developing brain cortex; (iii) in hyh mice disruption of the VZ and SVZ is associated to the onset of hydrocephalus and abnormal corticogenesis."} +{"text": "Research suggests that selenium may influence the behavior of the cancer risk in two ways. As an antioxidant, selenium helps to protect the body against free radicals. Selenium may also prevent or slow tumor growth, as some breakdown products of selenium can inhibit tumor growth by enhancing immune cell activity and inhibition of tumor blood vessel development.The aim of this study was to determine the level of selenium in blood serum as a potential marker of risk for cancers of the colon, stomach or pancreas.The research material was a total of 94 samples of blood serum from people with cancer, diagnosed and confirmed in one of the organs: colon (55 cases), pancreas (30 cases) or stomach (9 cases) and 94 samples of blood serum derived from healthy individuals which paired control group. The criteria adopted for pairing included: gender, year of birth (+/- 3 years), history of the occurrence of cancers in the family among first degree relatives and smoking status expressed in pack-years.Selenium concentration in blood plasma was determined using graphite furnace atomic absorption spectrometry (GFAAS). The measurement accuracy was +/- 5% \u00b5g Se/l.Association between Se concentration and frequency of cancers in quartiles are presented in table The obtained results suggest that low levels of selenium in the body may correlate with an increased risk of pancreatic cancer, colon or stomach, and thus constitute one of the markers of risk for cancers of such sites. Research requires the extension to a larger number of samples including tumor size, and performance analysis for selenoprotein genes.Prospective studies can elucidate:a) the use of selenium measurements as markers of risk of above cancers;b) possibility of lowering risk of the cancers of the colon, pancreas and stomach by supplementation of diet with selenium."} +{"text": "This work takes part in the European DebugIT project which goal is to build a technical and semantic information technology platform able to share heterogeneous clinical data sets from different hospitals for the monitoring and the control of infectious diseases and antimicrobial resistances in Europe. 
The aim of the study is to compare the incidence rates of antimicrobial resistance at the HEGP hospital obtained in real-time by the DebugIT platform to those established by the yearly-performed analysis processed by the microbiologists of the HEGP hospital.The INSERM database covers seven years of anonymized microbiology data and represents an image of the HEGP EHR data. To be able to semantically integrate the data with other European peers, we went through several steps of data normalisation and quality works. These tasks led to the setup of semantic data providers that were integrated at a European level. We built a common view of our domain knowledge upon which we aligned our semantic data providers. We compared the incidence rates of antimicrobial resistance produced by the DebugIT platform at the HEGP hospital to a gold standard produced yearly by the experts.Despite different data processing methods (e.g only microbiologists de-duplicate data in case of repetitive antibiograms on different isolates), the results were highly similar .This study shows the adequacy of the control capabilities of the DebugIT platform and the maturity of the semantic integration methods developed by the project consortium for the setup of a pan-European surveillance network.None declared."} +{"text": "Cataract, the opacity of the eye lens, is an age-onset pathology that affects nearly 50To monitor the heat and Here we focused on the effects of this structural transition on the The Dynamic light scattering (DLS) (24) provides information on the aggregation kinetics and on the clusters dimension and evolution as the aggregation proceeds. DLS measurements were performed during aggregation by using a commercial computer-interfaced scattering instruments ALV/SLS-5000 system from ALV, Langen, Germany, equipped with a To determine Where we assumed The complete distribution of decay rates can also be recovered by introducing from the relation The recovery of the A key to the understanding of proteins aggregation is the behavior of the energy of interaction between two approaching particles. It has been demonstrated that for a wide variety of proteins, this can be understood within the Derjaguin-Landau-Verwey-Overbeek (DLVO) model Clusters formed in the RLCA regime show an extremely high mass polydispersity, described by a power law, up to a cutoff mass To characterize the extent of the aggregation process, we performed dynamic light scattering experiments by measuring the time evolution of the intensity weighted average hydrodynamic radius of the clusters, After an initial, fast, increase of The first increase of Fits of Eq.9 to experimental data allow to recover process . The valAll the aggregations, carried out at different temperatures, show the same behavior. The initially formed basic aggregation units aggregate forming fractal clusters, accordingly to an RLCA process characterized by a temperature dependent rate constant (i.e. higher is the temperature faster is the aggregation rate). The size of basic aggregation units,instead, is independent on temperature with an average value of By decreasing temperature below fication and the aller . 
At timeTherefore above and below A closer look of Supramolecular structure of crystallins substantially varies both in lenses of different vertebrate species and in various parts of the same lens Here we monitor changes in the At all the temperatures investigated supramolecular aggregation of The radius of the HMW is The aggregation rate, instead, undergoes to an overall abrupt change when crossing Therefore, the change in tertiary structure occurring at the endothermic phase transition at Lens crystallin is particularly recessive to deleterious effects from elecromagnetic radiations that are known to be a potential risk factor for cataract and other eyes diseases. Indeed, its aqueous content favors radiation absorption and the very weak vascularization makes difficult to stand fast temperature increases In this context, the natural self-protective mechanism that we report preserves the lens from premature opacification throughout the lifespan of the organism"} +{"text": "Cognitive and information processing deficits are core features and important sources of disability in schizophrenia. Our understanding of the neural substrates of these deficits remains incomplete, in large part because the complexity of impairments in schizophrenia makes the identification of specific deficits very challenging. Vision science presents unique opportunities in this regard: many years of basic research have led to detailed characterization of relationships between structure and function in the early visual system and have produced sophisticated methods to quantify visual perception and characterize its neural substrates. We present a selective review of research that illustrates the opportunities for discovery provided by visual studies in schizophrenia. We highlight work that has been particularly effective in applying vision science methods to identify specific neural abnormalities underlying information processing deficits in schizophrenia. In addition, we describe studies that have utilized psychophysical experimental designs that mitigate generalized deficit confounds, thereby revealing specific visual impairments in schizophrenia. These studies contribute to accumulating evidence that early visual cortex is a useful experimental system for the study of local cortical circuit abnormalities in schizophrenia. The high degree of similarity across neocortical areas of neuronal subtypes and their patterns of connectivity suggests that insights obtained from the study of early visual cortex may be applicable to other brain regions. We conclude with a discussion of future studies that combine vision science and neuroimaging methods. These studies have the potential to address pressing questions in schizophrenia, including the dissociation of local circuit deficits vs. impairments in feedback modulation by cognitive processes such as spatial attention and working memory, and the relative contributions of glutamatergic and GABAergic deficits. Schizophrenia is one of the most perplexing and important mysteries in modern medicine. This condition is associated with significant impairments across diverse functional domains, usually conferring to the affected individual a lifetime of disability and the need for long-term treatment. 
The prevalence of schizophrenia is nearly 1% of the general population, and it constitutes one of the largest public health burdens of any condition .Research in schizophrenia has examined a wide variety of processes that have implicated abnormalities at various levels of the visual system, from the retina due to spatial and/or temporal proximity of a behaviorally irrelevant stimulus (mask) Figure . As exteWhile abnormalities in a variety of types of masking have been documented in schizophrenia, the best studied is backward masking. In backward masking, a mask is presented a fraction of a second after target onset. It is thought that two processes, interruption and integration, can contribute to visual masking and that these processes involve distinct neural mechanisms. Masking by interruption occurs after the target representation has already been formed and is based on interference with higher-level, feedback processes that underlie conscious perception of the target Figure , represeOSSS has been measured perceptually with contrast discrimination thresholds for center stimuli in different surround orientations. Relative to a no-surround condition, healthy control subjects showed larger increases in contrast discrimination thresholds than patients with schizophrenia, and this group difference was selective for the parallel orientation condition , a cortical area with neurons containing oculomotor signals. A model derived from detailed anatomical and physiological studies of early visual cortical area V1 was used to generate neuronal and connectivity parameters that were then applied to FEF, quantitatively accounting for responses of FEF neurons and FEF-dependent oculomotor behaviors in macaque and parvocellular (P) provides a highly controlled measure of feedback modulation of visual cortical activity by spatial attention in schizophrenia. The use of fMRI or other neurophysiological techniques uniquely allow measurement of responses to a stimulus when it is being ignored. Analogous measures of the impact of unattended stimuli are very difficult to obtain with behavioral methods, because requiring a subject to make a behavioral response to a stimulus necessarily requires the allocation of some attentional resources to that stimulus.Finally, combining spatial attention manipulations with OSSS is likely to provide insights regarding the causes of reduced OSSS in schizophrenia at the local cortical circuit level. For grating stimuli, the neural substrates of surround suppression are thought to include feedback projections from higher-order visual cortical areas to area V1 (Angelucci and Bressloff, While contributions from multiple disciplines and experimental approaches will likely be required to overcome the formidable challenges in elucidating the neural mechanisms of cognitive and information processing deficits in schizophrenia, the study of the visual system has a number of distinct advantages in this area. This review has highlighted some of the most productive lines of research within the rapidly growing body of literature on visual processing in schizophrenia, illustrating the diversity of visual processes that have been studied as well as the sophisticated methods available in the vision sciences. We also discussed several key factors that make the visual system such an appealing model system for the discovery of neural mechanisms. 
The convergence of the well-developed body of knowledge in structure-function relationships in the visual system, the conservation of the functional architecture across species, the preservation of basic local circuit architecture across neocortical regions, and the availability of quantitative methods to control for generalized deficits allow for inferences at a level of detail and specificity that is usually impossible in other neural systems. In the near future, the combination of vision science paradigms with modern neuroimaging methods may allow us to test some of the most compelling hypotheses on the neural origins of cognitive and information processing deficits in schizophrenia.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "The synthesis, design and simulation of chemical processes, in particular thermal separation processes is today carried out by solving the resulting balance equations of a mathematical model of the considered unit operation or the whole chemical plant using sophisticated commercial process simulation software.E-models and equations of state were developed. These models allow the prediction of the phase equilibrium behavior of multicomponent systems using binary experimental data only.The reliability and correctness of the simulation results is mainly influenced by the reliability and correctness of the thermophysical property parameters used for the pure compounds and their mixtures. For the description of the required phase equilibria GHowever, since the number of experimental binary data is limited, these methods often cannot be applied. Therefore in particular for process development reliable predictive models with a large range of applicability are most important.With the help of the worldwide largest factual data bank (Dortmund Data Bank (DDB)) for pure component properties, phase equilibria and excess properties, in the last 35 years powerful group contribution methods have been developed in my research group. By combination of cubic equations of state with the group contribution concept, now even the phase equilibrium behavior with supercritical compounds can be predicted. At the same time other important properties can directly be predicted. With the development of an adequate electrolyte model the equation of state approach was even extended to systems with strong electrolytes. Today ideal predictive thermodynamic tools are available, which can be used in combination with factual data banks for the development of chemical processes.In the lecture the status of the factual data bank and the different predictive models will be shown. Furthermore important applications of industrial interest of the predictive thermodynamic models and the Dortmund Data Bank will be presented."} +{"text": "In 2009, the Department of Health asked the MRC and MHRA to identify major obstacles to non-commercial clinical trials research in the UK and suggest remedial actions. 
Risk-proportionality in trial management and monitoring was identified as a key area and a sub-group formed to:\u2022 Develop a process to facilitate the agreement of key stakeholders on the level of risk associated with a clinical trial of an investigational medicinal product (CTIMP).\u2022 Identify how risk-adapted approaches for CTIMPs can be achieved within the current regulatory framework.\u2022 Develop guidance on risk assessment and the risk-proportionate management of clinical trials.The resulting guidance focuses on the risks inherent in a trial protocol which impact on participant safety and rights, and the reliability of the results. A two-part assessment is suggested: 1) a simple IMP risk categorisation based on marketing status and standard medical care, and 2) assessment of the trial design, population and procedures to identify specific areas of vulnerability.The first part, IMP risk category, has implications for simplifications of initiation and conduct of a CTIMP that may be possible within the current regulatory framework. Possible risk-adaptations include: the need for competent authority authorisation; content of the Clinical Trials Authorisation (CTA) application; IMP management; safety surveillance; trial documentation; and GCP Inspection. The risks associated with the IMP also determine trial procedures for monitoring participant safety.The second part of the risk assessment addresses other aspects of clinical trial design and methods: safety risks from clinical procedures; risks related to participant rights; and risks to the reliability of trial results. It is designed to help trialists identify potential vulnerabilities and to prepare tailored trial management and monitoring plans to minimise risks which may be reviewed and modified throughout the life of a trial.The IMP risk category and safety monitoring plan may be submitted to the MHRA with the CTA application to ensure that there is shared understanding on this key aspect of a trial. We hope that the entire risk assessment and associated plans will provide the basis for a common understanding of stakeholders of the risks for that trial, and facilitate a risk-proportionate approach to trial activities.The guidance is available on the MHRA and NETSCC websites."} +{"text": "The recent focus on the potential link between periodontal and cardiovascular disease (PD and CVD) is part of the larger renewed interest on the role of infection and inflammation in the etiology of atherosclerosis and its clinical manifestations. Periodontal Disease is an inflammatory process affecting the periodontium, the tissue that surrounds and supports the teeth. The process usually starts with an inflammatory process of the gum (gingivitis) but it may progress with an extensive involvement of the gum, as well as the periodontal ligament and the bone surrounding the teeth resulting in substantial bone loss. Periodontal disease is a common oral pathological condition in the adult age and represents the leading cause of tooth loss. PD prevalence increases with age and there are estimates that up to 49,000,000 Americans may suffer from some form of gum disease. The gingival plaque associated with PD is colonized by a number of gram-positive and gram-negative bacteria that have been shown to affect the initiation and development of PD and have been associated with the potential etiological role of PD in CVD and other chronic conditions. 
A potential etiological link between PD and CVD may have important public health implications as both the exposure (PD) and the outcomes (CVD) are highly prevalent in industrialized societies. In situations in which both the exposure and the outcome are highly prevalent even modest associations, like those observed in the studies reporting on the link between PD and CVD outcomes, may have relevance. There are not definite data on the effect of periodontal treatment on CVD clinical outcomes (either in primary or secondary prevention) however it should be pointed out that the limited (both in terms of numbers and study design) experimental evidence in humans suggests a possible beneficial effect of periodontal treatment of indices of functional and structural vascular health. The recent focus on the potential link between periodontal and cardiovascular disease (PD and CVD) is part of the larger renewed interest on the role of infection and inflammation in the etiology of atherosclerosis and its clinical manifestations. In this review, we will describe the potential mechanisms that have been identified and review the evidence from observational and intervention studies.Periodontal Disease is an inflammatory process affecting the periodontium, the tissue that surrounds and supports the teeth. The process usually starts with an inflammatory process of the gum (gingivitis) but it may progress with an extensive involvement of the gum, as well as the periodontal ligament and the bone surrounding the teeth resulting in substantial bone loss. Periodontal disease is a common oral pathological condition in the adult age and represents the leading cause of tooth loss. PD prevalence increases with age and there are estimates that up to 49,000,000 Americans may suffer from some form of gum disease.A hallmark of PD is the presence of bacteria in the gingival plaque that characterizes the periodontal pathological process. The oral cavity is colonized by hundreds of bacteria, most of which have no clear pathological effects; however the gingival plaque associated with PD is colonized by a number of gram-positive and gram-negative bacteria that have been shown to affect the initiation and development of PD and have been associated with the potential etiological role of PD in CVD and other chronic conditions.A number of mechanisms have been hypothesized to explain the potential pathological role of periodontal disease in cardiovascular disease etiology. These include: a) potential direct mechanisms on the vessel wall and the atherosclerotic plaque and b) indirect mechanisms.This hypothesis assumes that bacteria or their products access the vessel wall (endothelium) directly, through the blood stream, and affect either the formation of the plaque and/or its evolution. In support of this hypothesis a number of studies have shown evidence of presence of viable oral bacteria or their genetic material in atheromatous plaque samples from different vascular beds.6Indirect mechanisms through which PD can affect CVD include the potential effects on a number of classical CVD risk factors like total and LDL serum cholesterol, blood pressure, glucose metabolism and platelet aggregation. 
A number of observational studies have shown significant associations between PD and serum total cholesterol14Substantial efforts have been dedicated to investigate the role of inflammation and its biomarkers in the link between PD and CVD.As previously indicated hallmarks of PD are the presence of bacteria and the local inflammatory process.An additional mechanism that has been hypothesized to play a role in the link between PD and CVD is molecular mimicry.The measurements of Periodontal disease/health that have been utilized to date vary substantially, and include: general measurements of oral health, crude measurements of self-reported missing teeth, standardized measurements of gingival detachment , radiographic measurements of bone loss and more recently measurements of bacterial infection. The interpretation of the literature is complicated by the lack of an agreement on a clear definition of PD in the scientific community. In general there is consistency in findings across studies using different indices of PD and the studies that have used more detailed measurements of exposures have, in general, showed stronger association between PD and CVD outcomes.Different outcomes have been investigated and include both clinical outcomes as well as non-invasive measurements of subclinical atherosclerosis [i.e. carotid intima media thickness (IMT) and endothelial dysfunction].In addition to these disease related outcomes a number of studies have investigated the relationship between PD and a wide array of cellular and plasma markers of inflammation .The evidence to date relating PD to clinical outcomes is based solely on observational studies. These studies include clinical comparisons of selected samples, retrospective (case-control) and longitudinal epidemiological studies.A total of approximately sixty studies have focused on cardiac outcomes and provided quantitative assessment of risk. The majority but not all these studies show a significant association between PD and these clinical outcomes. Many of the studies presented multivariate adjusted estimates indicating that the relationship may be independent from the potential confounding effect of socio-economic factors, life style habits like smoking and other important covariates. This association has been consistently found in studies from different parts of the world and in both men and women. The evidence in women is much more limited but a study that focused on both sexes provided evidence for a potentially higher risk of MI in women compared to men.22Several reviews have indicated an overall significant association between PD and coronary outcomes, two more recent systematic reviews indicated that relative risk estimates ranged from a 24% to a 34% increased risk and that these estimates were consistent across the various methods to ascertain both PD and coronary events.24The amount of articles reporting on the link between PD and cerebrovascular outcomes are more limited but, as for coronary outcomes, the overall evidence is in support of a positive association. The evidence appears to be stronger for studies with more detailed measures of PD and to be present for both fatal and non fatal events.26A number of studies have investigated the relationship between structural [carotid intima media thickness (IMT)] and functional measurements of vascular health and PD. 
The most convincing evidence regarding structural changes comes from epidemiological studies showing a significant cross-sectional association between periodontitis and carotid IMT in multivariate analyses.31Functional measurements of vascular health have been the focus of several studies, in general small clinical samples, all have shown a significant positive association between PD and vascular health .34Smoking is an important risk factor for both PD and CVD. The strong association between smoking and these two conditions has raised concern over the true nature of the observed association between PD and CVD outcomes. Some, in fact had argued that because of the strong co-linearity among these three variables it is basically impossible to address the independent nature of the relationship between PD and CVD through multivariate adjustment .No studies to date have been published investigating the role of periodontal treatment on clinical outcomes.Several intervention studies have been conducted to investigate the effect of various periodontal treatments on either biomarkers of inflammation and vascular health.The studies with a randomized design and using a control group do not demonstrate consistent effects of periodontal intervention on inflammatory markers, in particular CRP.Few studies have been conducted to ascertain the effect of PD on a number of indicators of functional vascular health, these includes three small clinical studies (without placebo group) and a randomized controlled trial.44A potential etiological link between PD and CVD may have important public health implications as both the exposure (PD) and the outcomes (CVD) are highly prevalent in industrialized societies. In situations in which both the exposure and the outcome are highly prevalent even modest associations, like those observed in the studies reporting on the link between PD and CVD outcomes, may have relevance.significant, consistent across different study designs and settings, specific to the hypothesized outcome based on postulated mechanisms, longitudinal studies have confirmed the temporal relationship between exposure and outcomes, elegant mechanisms have been hypothesized and confirmed to show the plausibility of the association that is coherent with the results of the laboratory evidence. Unfortunately we are missing, what is considered by many the strongest of the criteria i.e. definite evidence that the outcome can be altered (prevented or treated) by intervening on the exposure. As previously indicated, we have no definite data on the effect of periodontal treatment on CVD clinical outcomes (either in primary or secondary prevention) however it should be pointed out that the limited (both in terms of numbers and study design) experimental evidence in humans suggests a possible beneficial effect of periodontal treatment of indices of functional and structural vascular health. These same studies however have raised concerns regarding the role of inflammation and its markers as an important link between PD and CVD; in particular, the role of CRP has been called to question by the available experimental evidence.If we consider the criteria for establishing the cause-effect relationship of an observed association, we can conclude that the observed association between PD and CVD outcomes satisfies most of these criteria. 
In particular, studies to date have shown the association to be The link between PD and CVD is worth investigating because of its potential public health implications, the evidence to data has been shown to fulfill most of the criteria established for determining a cause-effect relationship. The observed association appears to be independent from the potential confounding role of important covariates ; studies need to be conducted to better understand the pathophysiological links between PD and CVD and this improved knowledge regarding the pathways should guide the design of specific intervention studies aimed at providing definite proof that we can, through periodontal treatment, affect CVD clinical outcomes."} +{"text": "Recent experiments reveal both passive subdiffusion of various nanoparticles and anomalous active transport of such particles by molecular motors in the molecularly crowded environment of living biological cells. Passive and active microrheology reveals that the origin of this anomalous dynamics is due to the viscoelasticity of the intracellular fluid. How do molecular motors perform in such a highly viscous, dissipative environment? Can we explain the observed co-existence of the anomalous transport of relatively large particles of 100 to 500 nm in size by kinesin motors with the normal transport of smaller particles by the same molecular motors? What is the efficiency of molecular motors in the anomalous transport regime? Here we answer these seemingly conflicting questions and consistently explain experimental findings in a generalization of the well-known continuous diffusion model for molecular motors with two conformational states in which viscoelastic effects are included. After the publication of Albert Einstein\u2019s theory of Brownian motion in 1905 The intracellular fluid (cytosol) of biological cells is a superdense same motors in the same cell? Which role is played by the size of the cargo, and what determines the precise behavior of such active transport and its efficiency? These are the questions we answer in this article.What happens to the active motion of particles in the cytosol which are driven by molecular motors A well-established physical approach to anomalous transport phenomena is based on the intrinsic viscoelasticity We study the interplay of the viscoelastic environment of the cytosol with the action of a molecular motor and its cargo. A well-established model of Brownian motors of the kinesin family is based on the continuous diffusion of a Brownian particle in a potential landscape, which randomly fluctuates in time between two realizations, Within the power-stroke idealization, the maximal mean velocity of the motor due to the fluctuations between the potentials Both the motor and its cargo are subjected to friction and random thermal forces from the environment. For normal viscous Stokes friction, the frictional drag force is The above ratchet model is appropriate to describe the motor dynamics at dilute solvent conditions in vitro. Our focus here is different, as we want to study the motor action in living cells, where the following experimental facts have been established: Even in the absence of cargo the effective friction coefficient for the motor is enhanced by a factor of 100 to 1000 in the cytosol compared with the one in water Methods for details. 
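The two-state flashing-ratchet picture described above lends itself to a compact numerical illustration. The sketch below simulates an overdamped Brownian particle in an asymmetric sawtooth potential that is randomly switched on and off at a fixed turnover rate and reports the resulting mean velocity under a constant load. It is a minimal, memoryless (purely viscous) caricature of the model, not the authors' implementation: the viscoelastic memory friction and fractional Gaussian noise that are central to the paper are omitted, and all parameter values (potential amplitude, asymmetry, switching rate, friction coefficient, loads) are illustrative assumptions.

import numpy as np

def sawtooth_force(x, period=1.0, amplitude=10.0, asymmetry=0.2):
    """Force -dU/dx of an asymmetric sawtooth potential: U rises over a fraction
    `asymmetry` of the period and falls over the rest (amplitude in units of kT)."""
    xi = np.mod(x, period)
    a = asymmetry * period
    return np.where(xi < a, -amplitude / a, amplitude / (period - a))

def simulate_flashing_ratchet(n_steps=200_000, dt=1e-4, gamma=1.0, kT=1.0,
                              switch_rate=50.0, load=0.0, seed=0):
    """Overdamped Langevin dynamics of a particle whose binding potential is
    switched on/off (two conformational states) as a Poisson process with rate
    `switch_rate`.  Returns the mean velocity over the run (illustrative units)."""
    rng = np.random.default_rng(seed)
    x = 0.0
    potential_on = True
    noise_amp = np.sqrt(2.0 * kT * dt / gamma)   # thermal noise for viscous friction
    for _ in range(n_steps):
        force = sawtooth_force(x) if potential_on else 0.0
        x += (force - load) / gamma * dt + noise_amp * rng.standard_normal()
        if rng.random() < switch_rate * dt:      # Poissonian flip between the two states
            potential_on = not potential_on
    return x / (n_steps * dt)

if __name__ == "__main__":
    for f in (0.0, 2.0, 4.0):
        print(f"load {f}: mean velocity = {simulate_flashing_ratchet(load=f):.3f}")

Replacing the white-noise term with fractional Gaussian noise and a matching power-law memory kernel (for instance via a Markovian embedding with a few auxiliary Maxwell modes) would turn this memoryless sketch into the viscoelastic setting analysed in the paper.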
On the basis of this theoretically and experimentally well-founded approach the results herein were obtained from numerical analysis.Methods) and The first major surprise is that even carrying large cargo particles like magnetic endosomes with radius about 300 nm our model motor can operate by an almost perfect power stroke mechanism in the normal transport regime, as demonstrated in Indeed, when we decrease the potential amplitude by Depending on the cargo size and the binding potential amplitude the transport can become more normal and thermodynamically highly efficient even for a large turnover frequency, as ven for . Still, For vanishing loading force We proposed a simple basic model which reconciles experimental observations of both normal and anomalous transport by highly processive molecular motors in biological cells. Our model presents an immediate generalization of a well-known two-state model of normally diffusing molecular motors by accounting for the viscoelastic properties of the intracellular fluid. It not only explains how molecular motors may still operate by a power-stroke like mechanism while carrying a large cargo which subdiffuses when left alone, but also why and how an anomalous transport regime emerges for even larger cargo. It is crucial for this explanation that viscoelastic subdiffusion and anomalous transport possess finite moments of residence times in any finite spatial domain. Thus, there exist time scales for sliding down towards the potential minimum within one period of the flashing ratchet potential, for the escape to another potential well, and for the mean turnover time of the potential flashes. When the time scales are well separated, the transport is normal. However, when the sliding time scale starts to interfere with the turnover time, or the interwell potential barrier is lowered so that backsteps can occur, the transport becomes anomalous. These qualitative basic features are expected to survive in more complex models of molecular motors operating in viscoelastic environments.Specifically, we showed that transport by molecular motors becomes anomalous for large cargo particles with large fractional friction coefficient Our research provokes a number of follow-up questions. Thus, what happens if we relaxed the assumption of a rigid motor-cargo linker molecule? In that case, the large subdiffusing cargo is elastically coupled to a molecular motor, that possibly still operates normally in the absence of cargo. We are currently investigating this generalization for realistic spring constants of the linker. However, qualitatively the results remain very similar. Another question is prompted by the experimental results in Ref. Our model presents a good starting point for future research and further generalizations. Understanding how molecular motors perform in the viscoelastic cytosol of living cells despite the subdiffusion of the free cargo is compelling. Our findings open new vistas to the old problem of intracellular trafficking, reconciling seemingly conflicting results for the motor-cargo dynamics under different conditions. Finally, our results will be of crucial importance for the design of new technologies of motor-driven particles and drug delivery in the crowded cytosol of cells. 
We are confident that our findings will prompt a series of new experiments on the dynamics of molecular motors under realistic conditions in living cells.The numerical approach to integrate the generalized Langevin The rate constants The power exponent of anomalous diffusion was fixed to"} +{"text": "RLS) method of feature subset selection was checked for high-dimensional and mixed type (genetic and phenotypic) clinical data of patients with end-stage renal disease. The RLS method allowed for substantial reduction of the dimensionality through omitting redundant features while maintaining the linear separability of data sets of patients with high and low levels of an inflammatory biomarker. The synergy between genetic and phenotypic features in differentiation between these two subgroups was demonstrated.Identification of risk factors in patients with a particular disease can be analyzed in clinical data sets by using feature selection procedures of pattern recognition and data mining methods. The applicability of the relaxed linear separability ( Statistical models for analysis of risk factors for a disease or clinical complications, a main focus of medical research, require that the number of patients is larger than the number of variables (factors) to ensure that the statistical significance of the results can be appropriately established. In practice, most studies assess only the influence of each variable separately rather than the combined importance of a set of variables; the former oversimplistic but yet prevailing approach ignores the possibility of interactions between variables or between groups of variables Medical data sets collected today often have a large number of variables for a relatively low number of patients. This may happen for genetic data sets, where the number of variables can be thousand times greater than the number of patients. Statistical methods are not fully justified in this situation Feature selection methods are used to reduce feature space dimensionality by neglecting features that are irrelevant or redundant for the considered problem. Feature selection is a basic step in the complex processes of pattern recognition, data mining and decision making The feature subset resulting from feature selection procedure should allow building a model on the basis of available learning data sets that can be applied for new problems. In the context of designing such prognostic models, the feature subset selection procedures are expected to produce high prediction accuracy.relaxed linear separability (RLS) method of feature selection for the analysis of data on clinical and genetic factors related to inflammation. These data were obtained from the so called malnutrition, inflammation and atherosclerosis (MIA) cohort of incident dialysis patients with end-stage renal disease CRP, above median) and non-inflamed patients (as defined by a CRP below median). Then, genetic and phenotypic risk factors that may be associated with the plasma CRP levels were identified by exploring the linear separability of the high and low CRP patient groups. 
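The core idea of the relaxed linear separability approach, namely to keep discarding features as long as the two learning sets (here high- versus low-CRP patients) remain linearly separable and to choose the stopping point from a cross-validation criterion, can be sketched compactly. The fragment below is not the authors' CPL/basis-exchange implementation: it tests separability with an ordinary linear-programming feasibility problem and removes features greedily one at a time, which only approximates the cost-parameter-driven reduction of the RLS method, and the data are synthetic placeholders.

import numpy as np
from scipy.optimize import linprog

def linearly_separable(X, y):
    """Feasibility LP: does a hyperplane w.x + b exist with y_i*(w.x_i + b) >= 1?
    Labels y must be coded as +1 / -1."""
    n, d = X.shape
    # Variables z = (w_1..w_d, b); each constraint is -y_i*(x_i.w + b) <= -1.
    A_ub = -y[:, None] * np.hstack([X, np.ones((n, 1))])
    b_ub = -np.ones(n)
    res = linprog(c=np.zeros(d + 1), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (d + 1), method="highs")
    return res.success

def greedy_feature_reduction(X, y):
    """Drop features one by one while the reduced subspace stays linearly separable."""
    kept = list(range(X.shape[1]))
    changed = True
    while changed:
        changed = False
        for j in list(kept):
            trial = [k for k in kept if k != j]
            if trial and linearly_separable(X[:, trial], y):
                kept = trial
                changed = True
    return kept

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n, d_informative, d_noise = 40, 3, 20            # many redundant features
    X = rng.standard_normal((n, d_informative + d_noise))
    y = np.where(X[:, :d_informative].sum(axis=1) > 0, 1, -1)
    print("separable in full space:", linearly_separable(X, y))
    print("features kept:", greedy_feature_reduction(X, y))

In the RLS method proper the reduction is driven by increasing the cost parameter of the modified CPL criterion function, and the stopping rule is the leave-one-out cross-validation error rather than separability alone; the greedy loop above is only meant to make the "reduce while separability is preserved" principle concrete.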
Particular attention was paid in this work to study the complementary role of genetic and phenotypic feature subsets in differentiation between inflamed and non-inflamed patients.We apply here the RLS method on the given clinical data set: 1) ReliefF, based on feature ranking procedure proposed by Kononenko Relief algorithm Correlation-based Feature Subset Selection - Sequential Forward algorithm (CFS-SF) Multiple Support Vector Machine Recursive Feature Elimination (mSVM-RFE) Minimum Redundancy Maximum Relevance (MRMR) algorithm CPL method and four other frequently used classification methods (RF (Random Forests) KNN SVM (Support Vector Machines) NBC (Naive Bayes Classifier) Four benchmarking feature selection algorithms were selected for the comparisons with relaxed linear separability (RLS) method as applied in the present study is provided in A detailed description of the RLS method of feature subset selection is linked to the basic concept of linear separability. The linear separability means possibility of two learning sets separationby a hyperplane perceptron criterion functionconvex and piecewise-linear (CPL) criterion functions The perceptron criterion function was modified by adding a regularization component for the purpose of the feature subset selection task Lasso regressionLasso and the RLS methods is in the types of the basic criterion functions. The basic criterion function used in the Lasso method is that of the least squared method, whereas the perceptron criterion function and the modified criterion function are used in the RLS method. This difference effects the computational techniques used to minimize the criterion functions. The modified criterion function, similarly to the perceptron criterion function, is convex and piecewise-linear (CPL). The basis exchange algorithms allow the identification of the minimum of each of these CPL criterion functions The RLS) method of feature subset selection is based on minimization of the modified perceptron criterion function and allows for successive reduction of unnecessary features while preserving the linear separability of the learning sets by increasing the cost parameter in the modified criterion function. The stop criterion for discarding the unnecessary features was based on the cross-validation error (CVE) rate (defined as the average fraction of wrongly classified elements) estimated by the leave-one-out method.The , which allows to correctly distinguish with 100% accuracy two leaning sets composed of 46 cancer and 51 non-cancer patients.The evaluation of the RLS method of feature subset selection involves generation of the sequence of the reduced feature subspaces The RLS method. One of the selected algorithms, ReliefF, is based on feature ranking procedure proposed by Kononenko Relief algorithm ReliefF searches for the nearest objects from different classes and weighs features according to how well they differentiate these objects. The second one is a subset search algorithm denoted as CFS-SF CFS-SF algorithm is based on a correlation measure which evaluates the goodness of a given feature subset by assessing the predictive ability of each feature in the subset and a low degree of correlation between features in the subset. These two feature selection algorithms are considered as \u201cthe state of the art\u201d tools for feature selection mSVM-RFE, is a relatively new idea. It is an extension of the SVM-RFE algorithm (Support Vector Machine Recursive Feature Elimination). 
The SVM-RFE is an iterative procedure that works backward from an initial set of features. At each round it fits a simple linear SVM, ranks the features based on their weights in the SVM solution, and eliminates the feature with the lowest weight SVM-RFE (mSVM-RFE) extends this idea by using resampling techniques at each iteration to stabilize the feature rankings MRMR (Minimum Redundancy - Maximum Relevance) Four benchmarking feature selection algorithms were chosen for an experimental comparison with the CPL method, were applied:To compare feature selection algorithms and to evaluate the selected feature subspaces, four frequently used classification methods, beside the Weka's implementation Weka's implementation of ReliefF and CFS-SF was used also for the feature selection and cross validation evaluation of designed classifiers. The R implementation of mSVM-RFE was used (SVM-RFE package) MRMR was obtained with the help of the code provided by its author CPL classifiers based on the search for optimal separating hyperplane CPL criterion functions RLS method of feature selection The four first classifiers (1) were designed by using ane see through MIA cohort CRP levels and the set CRP levels . Each patient SNPs) or deletions/insertions). The index) of a patient Two learning sets ce.impute procedure of dprep package of the R programming language was used for the substitution of missing values.These cohort and feature sets were selected from a larger data set and included only those patients for whom at least CRP levels in the feature subspaces CRP levels.During exploration of this database, the computations were performed in feature subspaces Three basic feature spaces RLS procedure of feature selection was carried out in each of the basic feature spaces (2) separately.The The apparent error rate AE) and the cross-validation error (CVE) in feature subspaces phenotypic space CVE) equal to factors) of the components of the optimal weight vector The apparent error rate which is in turn linked to inflammation nutrition . It is well established that an abnormal nutritional status with protein-energy wasting in this patient population is strongly linked to inflammation hormonal status or metabolism ; in general, relations between these features and inflammation have been described previously, but the relation with plasma calcium is not expected. Finally, high age and smoking are factors which are associated with inflammation.Whereas the list of phenotypic features in general appears to be biologically plausible, the ranking of the strength of the association as expressed by the value of the factor coefficient genetic space AE is equal to zero. Moreover, the linear separability was preserved during feature reduction from Feature selection from the eparable .phenotypic and genetic space phenotypic features and genotypes. The minimal value of the average cross-validation error rate was low: The process of feature selection from the combined phenotypic space genetic space phenotypic and genetic factors (features) resulted in a marked reduction of the CVE error rate to phenotypic and genetic factors are not independent and play complementary roles in describing the inflammatory status of the patients in the MIA cohort.The minimal cross validation error rate in the ce was , and the it was . 
Combiniconfusion matricesleave-one-out procedure for the phenotypic and genetic features are presented in RLS method of feature selection.The The optimal parameters scatter diagram (diagnostic map) showed in phenotypic fraction) was obtained by transformation (3) applied for phenotypic features that constitute the optimal feature subspaces phenotypic and genetic space genetic fraction) of the diagram was obtained by transformation (3) applied for genetic features The above transformation described by diagnostic map showed in diagnostic map as the point CRP patients, then we infer that the new patient is inflamed. If most of the CRP patients, then we infer that the new patient is not inflamed. Similar schemes of decision support are called the K-nearest neighbours (KNN) in the pattern recognition or as the Case Based Reasoning (CBR) scheme The similarity measureprecedents from the learning sets . Such scheme of the decision support based on the diagnostic maps has been used successfully in the medical diagnosis support system HeparThe transformation of the multidimensional feature vectors RLS selection method and CPL classifier applied in our study was compared to other selection methods and classifiers (see Section \u201cAlternative methods for feature selection and classification\u201d) using the error rate (fraction of misclassified objects from the test set), CVE, evaluated in the cross-validation (leave-one-out) procedure CFS-FS and mSVM-RFE alongside with RLS select an optimal subset of features and their prediction power can be assessed using different classifiers. In contrast, ReliefF and MRMR methods are ranking procedures and od not provide any intrinsic criteria for selection of any optimal subset of features. Such criterion need to be chosen separately. For the purpose of comparison of all these methods, the optimal sets of features for ReliefF and MRMR were determined for each classifier separately as those with minimal CVE for the applied classifier. Thus, the optimal set (and number) of features for these two methods can vary with the choice of classifier , one can easily reduce further the number of features selected by RLS method as it can be seen in SVM and/or CPL yielded the lowest errors when combined with RLS or CFS-FS selection methods. ReliefF method worked also well with RF and KNN classifiers. The errors related to the application of mSVM-RFE were similar to those related to ReliefF and CFS-FS methods . ReliefF and MRMR needed between a few and a few tens of minutes (depending on the applied classifier). The computation time of the RLS method was of the order of tens of minutes. The mSVM-RFE method had the computation time of about 20 hours. It should be stressed that the relatively long computation time of the RLS, mSVM-RFE, ReliefF and MRMR methods was caused mainly by repeated computation in the framework of the cross-validation procedure used by these methods.Among the four applied feature selection methods, RLS) method is that it may identify directly and efficiently a subset of related features that influences the outcome and that it assesses the combined effect of these features as prognostic factors. This characteristic of the approach presented here is clearly visible in the dataset of phenotypic features with minimal cross validation error rate, CRP in plasma, the clinical biomarker used here for discrimination of inflamed and non-inflamed patients.Feature selection is an integral - but often implicit - component in statistical analyses. 
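Whatever selection method is applied, the yardstick used throughout the comparisons above is the leave-one-out cross-validation error (CVE) of a classifier restricted to the candidate feature subspace. A minimal version of that evaluation step is sketched below using scikit-learn estimators as stand-ins for the RF, KNN, SVM and NBC classifiers named above (the CPL classifier itself is not reproduced); the data matrix, labels and selected feature subset are placeholders, not the MIA cohort data.

import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB

def loo_cve(X, y, classifier):
    """Leave-one-out CVE: fraction of held-out patients misclassified when the
    classifier is refit on all remaining patients."""
    errors = 0
    for train_idx, test_idx in LeaveOneOut().split(X):
        model = classifier.fit(X[train_idx], y[train_idx])
        errors += int(model.predict(X[test_idx])[0] != y[test_idx][0])
    return errors / len(y)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((60, 12))               # placeholder "patients x features"
    y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)   # placeholder high/low CRP labels
    subset = [0, 3, 7]                              # a candidate feature subspace
    for name, clf in [("RF", RandomForestClassifier(n_estimators=100, random_state=0)),
                      ("KNN", KNeighborsClassifier(n_neighbors=5)),
                      ("SVM", SVC(kernel="linear")),
                      ("NBC", GaussianNB())]:
        print(name, "CVE =", round(loo_cve(X[:, subset], y, clf), 3))

Repeating this evaluation for every candidate subspace generated during feature reduction is also what accounts for the long computation times reported above for the cross-validation-based methods.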
An explicit systematic feature selection process is of value for identifying features that are important for prediction, and for analysis on how these features are related, and furthermore it provides a framework for selecting a subset of relevant features for use in model construction. The most common approach for feature selection in clinical and epidemiological research is based so far on evaluation of the impact of single features RLS method of feature selection is based on the minimization of the criterion function CPL criterion function The AE) estimator risk factors that are associated with inflammation was implemented using a clinical database of patients with chronic kidney disease. A few important properties of the computation results obtained from this cohort can be pointed out. The results show, among others, the scale of the bias of the apparent error the number of cases and features, and 2) the repetition of calculations for the cross-validation method. The actual computing time for personal computer implementations was in the order of tens of minutes, and was longer than for some alternative methods (see Results), but all the computational times were reasonably short for the current research purpose. However, the computation time may be a limitation of the phenotypic, genetic, combined) showed that the combined phenotypic and genetic subspace can provide a very low CVE error rate of The comparison between the optimal feature subspaces rate of . Such a rate of . NeverthAppendix S1RLS method of feature selection.Mathematical foundations of the (PDF)Click here for additional data file."} +{"text": "All of the isolates segregated into seven highly significant clades that correspond to the known geographical clades: in particular the two new isolates from northern Albania clustered significantly within the Europe 1 clade. Our phylogeographical reconstruction suggests that the global CCHFV clades originated about one thousand years ago from a common ancestor probably located in Africa. The virus then spread to Asia in the XV century and entered Europe on at least two occasions: the first in the early 1800s, when a still circulating but less or non-pathogenic virus emerged in Greece and Turkey, and the second in the early 1900s, when a pathogenic CCHFV strain began to spread in eastern Europe. The most probable location for the origin of this European clade 1 was Russia, but Turkey played a central role in spreading the virus throughout Europe. Given the close proximity of the infected areas, our data suggest that the movement of wild and domestic ungulates from endemic areas was probably the main cause of the dissemination of the virus in eastern Europe. Crimean-Congo hemorrhagic fever (CCHF) is a zoonosis mainly transmitted by ticks that causes severe hemorrhagic fever and has a mortality rate of 5-60%. The first outbreak of CCHF occurred in the Crimean peninsula in 1944-45 and it has recently emerged in the Balkans and eastern Mediterranean. In order to reconstruct the origin and pathway of the worldwide dispersion of the virus at global and regional (eastern European) level, we investigated the phylogeography of the infection by analysing 121 publicly available CCHFV S gene sequences including two recently characterised Albanian isolates. The spatial and temporal phylogeny was reconstructed using a Bayesian Markov chain Monte Carlo approach, which estimated a mean evolutionary rate of 2.96 x 10 Bunyaviridae, genus Nairovirus. 
It is an enveloped virus with a negative-sense single stranded RNA genome consisting of one small (S), one medium (M) and one large segment (L) that respectively encode the viral nucleocapsid (N), the membrane glycoprotein precursor (GPC), and RNA-dependent RNA polymerase (L) proteins , but the extensive geographical intermixing of the three African clades and the relatively small number of African isolates available prevented us from reaching any more precise conclusions. In our reconstruction, CCHFV left Africa in the second half of the XVII century, reached the Middle East, and then dispersed in two directions in the early XIX century to form the two Asian clades: one spreading in Iran and Pakistan, and the second in China and central Asia . The virus therefore originally spread in an eastward direction to the Middle East and south-east Asia, crossing an area with a constant presence of CCHF susceptible species and the Himalayan mountains as a barrier to natural dispersion. It has recently been speculated that pathogens spread along the Eurasian ruminant route, as in the cases of foot-and-mouth disease ,40 and RTwo highly divergent CCHFV strains entered Europe on at least two occasions: the first in the early 1800s , and the second in the first decades of the XX century, when a more pathogenic strain caused human outbreaks in eastern Europe until recently. In our phylogeographical reconstruction, the most probable location of the MRCA of this European clade 1 was Russia, which suggests that this was the gateway through which genotype 4 CCHFV entered Europe in the early 1900s. This partially conflicts with previous findings suggestiA recent study has suggIn our reconstruction, the virus spread from Turkey to the Balkans, reaching Kosovo in the 1990s and Albania in the last decade. It is possible to hypothesise that the main cause of its dispersion through eastern Europe was the movement of wild and domestic ungulates carrying infected ticks, although outbreaks of CCHFV infection in South Africa have been associated with the passive transportation of infected ticks by birds , and thepeste-des-petits ruminants [Turkey has one of the largest ruminant populations in Europe and the Middle East, and witnesses the movement of large and small ruminants for breeding, transhumance (within and across its borders), slaughter and import/export. The main direction of the flow is from neighboring countries such as Russia and Iran and the uminants . Moreoveuminants , and couThe currently used methods of phylogeographical reconstruction are inherently limited by the availability of sample locations and the numbers of isolates at each location. The sensitivity test performed in this study (which suggested that sampling frequencies had little impact on the root location) cannot exclude the influence of unsampled locations. Nevertheless, the analysed data set included all of the sequences with a known sampling location and year that were available in public databases at the time the study began. In particular, the scarcity of sequences from Bulgaria prevented us from fully clarifying the country\u2019s role in disseminating the infection. Bulgarian Thrace and Thrace as a whole is a high-risk zone for the cross-border spread of animal infectious diseases, as has recently been reported in the case of outbreaks of foot-and-mouth disease in Bulgaria and Greece close to the Turkish border . 
MoreoveThe findings of this study indicate that continuous surveillance of the CCHF epidemic in Turkey and the entire Thracian area may be very important for monitoring and predicting future CCHF outbreaks in the Balkans.Figure S1Evaluation of the impact of sampling heterogeneity on the phylogeographic reconstruction. The figure shows the root state probability as a function of the location sample size. Randomisation analysis of the tip-localities throughout the MCMC analysis revealed a low level of correlation between the number of taxa per locality and the root-location probability.(PPTX)Click here for additional data file.Figure S2Likelihood map of the 121 CCHFV S gene sequences. Each dot represents the likelihoods of the three possible unrooted trees per quartet randomly selected from the data set: the dots near the corners and sides respectively represent tree-like (fully resolved phylogenies in which one tree is clearly better than the others) and network-like phylogenetic signals (three regions in which it is not possible to decide between two topologies). The central area of the map represents a star-like signal . The numbers indicate the percentage of dots in the centre of the triangle.(JPG)Click here for additional data file.Figure S3Maximum likelihood tree of the 121 CCHFV S gene sequences. The numbers on the branches represent bootstrap values . The previously described viral genotypes [enotypes have bee(JPG)Click here for additional data file.Table S1Accession numbers and characteristics of the CCHFV sequences used in the study.(DOCX)Click here for additional data file."} +{"text": "The paradigm of the \u2018pyramid of need\u2019 and the relatively high per unit cost of telehealth has led to its use being targeted at supporting those \u2018high-risk\u2019 patients who it is widely believed account for a significant proportion of unplanned admissions. However, close examination of the frequency distribution of such admissions shows that the number of patients repeatedly admitted is low. This may explain why the dramatic reductions in rates of unplanned admissions reported by many telehealth projects have had little impact on the total number of unplanned admissions and thus healthcare costs.Interactive Voice Response (IVR) reduces the costs of telehealth dramatically, is effective in capturing indicators of decreased well-being and, as the dialogue is symptom based, helps patients self-manage their condition. Low cost and the ubiquity of the telephone (mobile or landline) suggest that this technology is an economically and culturally acceptable means of screening large populations of patients."} +{"text": "Electrophorus electricus. Although the original species description indicated that this fin was a composite of the caudal fin plus the elongate anal fin characteristic of other genera of the Gymnotiformes, subsequent researchers proposed that the posterior region of the fin was formed by the extension of the anal fin posteriorly to the tip of the tail, thereby forming a \u201cfalse caudal fin.\u201d Examination of ontogenetic series of the genus reveal that Electrophorus possesses a true caudal fin formed of a terminal centrum, hypural plate and a low number of caudal-fin rays. The confluence of the two fins is proposed as an additional autapomorphy for the genus. 
Under all alternative proposed hypotheses of relationships within the order Gymnotiformes, the presence of a caudal fin in Electrophorus optimized as being independent of the occurence of the morphologically equivalent structure in the Apteronotidae. Possible functional advantages to the presence of a caudal fin in the genus are discussed.Alternative hypotheses had been advanced as to the components forming the elongate fin coursing along the ventral margin of much of the body and tail from behind the abdominal region to the posterior margin of the tail in the Electric Eel, Hypopygus minissimusElectrophorus electricusThe order Gymnotiformes includes 33 genera and more than 200 extant species of Neotropical electric fishes plus one fossil form from the Late Miocene of Bolivia Electrophorus is unique within the Gymnotiformes in having a third form of discharge of up to 600 volts used for hunting and self-defense Arguably one of the most noteworthy characteristics of all gymnotiforms is their ability to produce electric organ discharges (EODs) which serve dual purposes - communication and exploration of the surrounding environment. Two alternative forms of such discharges occur among these electric fishes: pulse EODs (via myogenic organs) and wave EODs (via myogenic or neurogenic organs). Electrophorus was erected by Gill Gymnotus electricus Linnaeus Electrophorus has a broad distribution in low- and mid-elevation settings across the vast expanse encompassed by the Amazon and Orinoco basins and additionally through the river systems of northern Brazil and the Guianas between the mouths of those two major drainages ElectrophorusElectrophorus also has a highly vascularized oral respiratory organ with multiple folds that greatly increase its surface area Various autapomorphies unique within the Ostariophysi distinguish Electrophorus. Linnaeus Gymnotus electricus (the Electrophorus electricus of this paper) was posteriorly continuous with the rays of the caudal-fin, i.e., the caudal fin is present. Subsequent authors ascribed to the alternative concept of the absence of a caudal fin in the genus. Intriguingly, the details of the unusual tail along the ventral and posterior margins of the body in Electrophorus have not been the subject of analysis to evaluate the two alternative hypotheses \u2013 that the fin at the posterior of the tail is a true caudal fin versus that the terminal portion of the elongate fin in the genus is a posterior extension of the anal fin to form a false caudal fin. 
We herein address that question and evaluate the results within the context of the divergent hypotheses of intraordinal phylogenetic relationships in the Gymnotiformes.Alternative hypotheses have been advanced concerning the components of the elongate fin coursing along the ventral surface of the body and tail of Specimens were examined at, or borrowed from, the following institutions: AMNH, American Museum of Natural History, New York; AUM, Auburn University Museum, Auburn; ANSP, Academy of Natural Sciences of Drexel University, Philadelphia; FMNH, Field Museum of Natural History, Chicago; INHS, Illinois Natural History Survey, Champaign; KU, University of Kansas, Lawrence; MBUCV, Museo de Biologia de la Universidad Central de Venezuela, Caracas; MCZ, Museum of Comparative Zoology, Harvard University, Cambridge; MNRJ, Museu Nacional, Rio de Janeiro; MPEG, Museu Paraense Em\u00edlio Goeldi, Bel\u00e9m; MZUSP, Museu de Zoologia da Universidade de S\u00e3o Paulo, S\u00e3o Paulo; UF, Florida Museum of Natural History, Gainesville; NRM, Swedish Museum of Natural History, Stockholm; and USNM, National Museum of Natural History, Smithsonian Institution, Washington. The abbreviation TL in the text \u200a=\u200atotal length. Caudal fin skeletal morphology was assessed via radiographs and specimens cleared and counterstained for bone and cartilage following the procedure of Taylor & Van Dyke Gymnotus electricus had \u201cPinna caudali obtufiffima anali annexa\u201d . Information in that account indicated that his statement was most likely derived from a detailed description and illustration of a specimen of the species by Gronovius G. electricus. This concept of conjoined anal and caudal fins in what was later termed Electrophorus electricus (hereafter Electrophorus) then vanished without comment from the scientific literature for more than 200 years. The alternative accepted scenario was that the anal fin extended posteriorly to the end of tail in Electrophorus and formed what has been termed a false caudal fin Electrophorus was a false, rather than true, caudal fin may have been, in part, based on the absence of the caudal fin in Gymnotus, a genus showing a number of derived characters with Electrophorus, with those two genera now forming the Gymnotidae. Comments as to a possible contrary arrangement were limited to remarks by Meunier & Kirschbaum Well over two centuries ago, Linnaeus Electrophorus as an alternative to the prevailing concept of an elongate anal fin extending posteriorly to the terminus of the tail. Soon thereafter Meunier & Kirschbaum Electrophorus.Meunier & Kirschbaum Electrophorus proved informative as to this question. Presence of a ventral embryological fin fold in individuals of Electrophorus shorter than approximately 85 mm TL gives a false first impression of a continuous anal-caudal fin during the early stages of the development in the genus. In actuality the anal-fin rays terminate well anterior to the posterior limit of the fin fold in specimens of less than this length. Larvae of Electrophorus of approximately 19 mm TL have anal-fin rays as evidenced by Alcian blue staining plus non-staining rays apparent in transmitted light limited to the anterior one-half of the fin fold that extends the length of the tail. Specimens at that size possess a cartilage body at the posterior end of the tail as evidenced in transmitted light without, however, any obvious associated caudal-fin rays. 
Conversely, fin rays are apparent at the posterior end of the tail in a circa 26 mm TL whole specimen, but with the retention of a distinct gap along the ventral margin of the tail between the posterior most apparent anal-fin ray and the ventral most caudal-fin ray. This condition is comparable to that found in adults of all species of the Apteronotidae , than all members of the Apteronotidae and the hypural complex of Electrophorus remains incompletely ossified to at least circa 300 mm TL.Overall morphology of the complex formed by the terminal centrum and the posterior plate of onotidae ; the oneonotidae . One notElectrophorus and the Apteronotidae due to the reduced nature of the elements in these taxa versus the condition in other lineages in the Otophysi; for example basal members of the Siluriformes, the sister group to the Gymnotiformes. A parhypural plus six separate hypurals Olivaichthys viedmensis was composed of the compound centrum followed posteriorly by a hypural plate (the \u201chp\u201d of that study); a form of the caudal-fin skeleton comparable with that present in adults of Electrophorus other than for two features. The hypural plate in Orthosternarchus is cartilaginous and disjunct from the terminal centrum whereas in adult specimens of Electrophorus the hypural plate and terminal centum are both ossified and broadly conjoined a joining of the two fins at least, in part, as a result of the increase in the number of ventral procurrent rays with a consequent anterior extension of the caudal fin towards the anal fin; versus 2) the posterior extension of the anal fin to contact an unelaborated caudal fin . The anteroventral most ray of the caudal fin serves as an appropriate landmark for the anterior limit of that fin versus the conjoined anal fin. This ray is readily distinguished from the terminal anal-fin ray via the lack of the associated proximal pterygiophore characteristic of anal-fin rays. Additionally, the anteroventral ray of the caudal fin is most often associated with the hypural plate , whePhreatobius which has 11 to 26 ventral procurrent rays Gymnallabes in which there are at least five ventral procurrent rays extending forward to meet an posteriorly extended anal fin . This condition is characterized by the immediate proximity of the posterior most anal-fin ray as evidenced by an associated proximal pterygiophore with the ventral most caudal-fin ray; the condition found in complex . Elsewhe complex . The PloElectrophorus, information on the number of caudal-fin rays for that genus was not included in prior phylogenetic analyses. Within the Apteronotidae, the only other group in the order with a caudal fin, the number of rays ranges from five to 30 with the basal clades, such as that formed by Orthosternarchus plus Sternarchorhamphus, possessing five to nine rays and the other genera in the family 10 to 30 rays and hypural plate (hp) is restricted in the Gymnotiformes to members of the Apteronotidae, the most speciose family in the order Electrophorus separated from the Apteronotidae within the Gymnotiformes by three nodes. Given the phylogenetic distance between the Apteronotidae and Electrophorus, the most parsimonious explanation for the distribution of a caudal fin in the two lineages involves retention of the caudal fin in the basal Apteronotidae, the loss of the fin in the ancestor of the remainder of the order, and a reacquisition of the fin in Electrophorus. 
This involves fewer evolutionary steps than the perhaps intuitively more appealing hypothesis of multiple loses of the fin in the Sternopygidae, the ancestor of the Hypopomidae plus Rhamphichthyidae, plus Gymnotus in the Gymnotidae is the sister group to all other families in the Gymnotiformes was advanced based on morphological mnotidae . The molElectrophorus plus Gymnotus) as the sister clade to the remainder of the order Electrophorus. Within this phylogenetic scheme, the presence of a caudal fin in those taxa again optimizes as separate events, with two alternative equally parsimonious explanations. Under one, the presence of the caudal fin in the Apteronotidae and Electrophorus represents separate acquisitions post the presumed loss of the complex in the ancestor of the Gymnotiformes. The second scheme involves the loss of the fin in Gymnotus (the sister group to Electrophorus) in the Gymnotidae and in the ancestor of the Rhamphichthyidae, Hypopomidae, Sternopygidae and Apteronotidae and the reacquistion of the fin in the Apteronotidae being the sister of the remainder of the Gymnotiformes or acquisition of the fin in a clade sister to the remainder of the order and a secondary presence of the caudal complex in another lineage. The alternatives mirror each other with the presence of a true caudal fin in families and the tiformes .Absence of the caudal fin is common to many components of the Gymnotiformes, but overall is limited to relatively few groups within the Teleostei; a not unexpected situation in so far as the caudal fin provides the majority, or a significant portion, of the propulsive force to the fish along with contributing to steering functions. A universal lack of the pelvic fin across Neotropical electric fishes in addition to the general absence of the caudal fin is also noteworthy. Although the pelvic fins are not a major factor in propulsion across fishes, they contribute to fine movement control. Offsetting the loss of these two fins across the Gymnotiformes is a dramatic lengthening of the anal fin and increased fine motor control of propulsive movements within the fin. Depending on the taxon, the gymnotiform anal fin commences anteriorly within the region between the vertical through the orbit to the posterior limit of the abdominal cavity and continues caudally to varying positions along, or at the end of, the tail . Gymnarchus also swims with a largely rigid body and propels itself via sinusoidal movements along an elongate median fin; the propulsive fin in that genus being, however, the dorsal rather than anal fin yielding an amiiform swimming mode Sinusoidal movements along this elongate anal fin among species of the Gymnotiformes provide the primary propulsive mechanism for the distinctive anterior and posterior movements of these fishes and in conjunction with the pectoral fin, critical fine scale control of such movements Electrophorus versus absent in its sister group, Gymnotus. A potential functional difference underlying this variation may be the rigid body posture in life of species of Gymnotus with sinusoidal movements along the anal fin generating the primary propulsive force Electrophorus demonstrates two alternative swimming modes. The first of these is the straight alignment of the body during obligate gulping of air and in the detection, location and shocking of prey items. 
This is the body orientation general across the Gymnotiformes, e.g., Electrophorus is additionally able to use sinusoidal or anguilliform movements along the length of the entire body to supplement the waves of movements along the anal fin during capture of prey and rapid forward motion. During this swimming mode, the posterior portion of the body undergoes pronounced side-to-side movements; a situation in which a caudal fin would increase the anterior propulsive force and thereby be functionally advantageous as is the case with other groups of fishes using anguilliform swimming modes. Taxa of the Apteronotidae which also have caudal fins lack, however, anal-caudal fin conjunction and is there no indication of alternative swimming modes in the family.The Gymnotidae is unique within the Gymnotiformes in demonstrating intrafamilial variation in the presence versus absence of the caudal fin, with the fin present in List S1List of specimens of Electrophorus and outgroups examined in this study.(DOC)Click here for additional data file."} +{"text": "Spinal cord injury disrupts the connections between the brain and spinal cord, often resulting in the loss of sensory and motor function below the lesion site. The most important reason for such permanent functional deficits is the failure of injured axons to regenerate after injury. In principle, the functional recovery could be achieved by two forms of axonal regrowth: the regeneration of lesioned axons which will reconnect with their original targets and the sprouting of spared axons that form new circuits and compensate for the lost function. Our recent studies reveal the activity of the mammalian target of rapamycin (mTOR) pathway, a major regulator of new protein synthesis, as a critical determinant of axon regrowth in the adult retinal ganglion neurons Injury to the mammalian adult central nervous system (CNS) often results in functional deficits, largely owing to the limited regenerative and repairing capabilities. In the case of spinal cord injury, the disruption of axonal tracts that convey ascending sensory and descending motor information could lead to pronounced and persistent sensorimotor dysfunctions in the body parts below the lesion sites. Although partial spontaneous functional recovery occurs in the patients and animal models at the neonatal stages, this declines in the adult. Presumably, rebuilding the functional circuits may result from two types of axon regrowth: \u2460 true regenerative growth of injured axons and \u2461 compensatory sprouting from spared fibers. While regenerative growth occurs rarely in the adult CNS, compensatory sprouting of the same or different types of axons may form new circuits across the lesion sites and compensate for the function lost as the result of injury. Thus, ideal repair strategies could be to promote these two different forms of axon regrowth for optimal functional recovery.In contrast to robust axon growth during development, both regenerative growth and compensatory sprouting in the adult CNS are very limited and abortive. Many studies in the past decades have been largely focused on characterizing environmental inhibitory molecules in the adult CNSA potentially useful approach to understand the intrinsic mechanisms of axon regeneration is to study how robust axon growth in immature neurons during development is achieved. Many of these studies involve neurotrophin-dependent axon growth of peripheral neurons. 
For example, by using specific chemical inhibitors, Liu and Sniderin vivo is unclear.Despite the progress made in axon growth during development, little is known about what accounts for the transition from the rapid growth mode of immature neurons into the poor growth mode of mature neurons in the CNS. Several potential players have been implicated, such as development-dependent decline of neuronal cAMP levelsFig. 1, ref. 19-21). It catalyzes the conversion from phosphatidylinositol trisphosphate (PIP3) to phosphatidylinositol bisphosphate (PIP2) and antagonizes the effects of PI3K. Thus, inactivation of PTEN results in the accumulation of PIP3 and subsequently the activation of the Akt. A well characterized downstream event of PTEN deletion and Akt activation is the activation of mTOR, which is a central regulator of cap-dependent protein translation initiation and cell growthDevelopment-dependent decline of axon growth ability is reminiscent of cell size control in almost any cell type: active growth during development followed by ceased growth upon the completion of development result in a normally fixed size for individual cell types. Extensive studies in the fields of developmental biology and cancer biology have identified a number of genes critical for regulating cellular growth and many of these are tumor suppressor genes. Because many of these growth-control molecules are expressed in the adult neurons, we hypothesized that the mechanisms preventing individual cell types from over-growth might also play a role in suppressing the axon growth ability of adult neurons. To test this, we utilized an optic nerve crush model to examine the regeneration of axons from RGCs in different mutant mice with the deletion of individual growth control genes in the RGCs. By analyzing more than 10 different conditional deletion mouse lines, we found that deletion of PTEN promotes both the survival of axotomized RGCs and the robust regeneration of injured optic nerve fibers"} +{"text": "The firing rate model in the form of nonlinear integrodifferential equations can characterize spatiotemporal patterns of a continuum neural field. These patterns are associated with a wide range of neurobiological phenomena, such as persistent activity and propagating waves in neural networks.To understand the substrates of neural circuitry that supports the localized stationary patterns, we study the existence of multi-bump pulse solutions of an integral equation that is the equilibrium equation of the firing rate model. If the integral coupling function, which describes the spatial connection among the network of neurons, is even and its positive half solves a second order linear ordinary differential equation (ODE), then the multi-bump pulse solutions of the integral equation are homoclinic solutions of a reversible fourth order ODE. It was known previously that the corresponding ODEs are conservative for a class of oscillatory and decaying coupling functions . We show"} +{"text": "The realization of injury to large motor neurons is embedded within contextual reference to the parallel pathways of apoptosis and necrosis of system-patterned evolution. A widespread loss of cell components occurs intracellularly and involves a reactive participation to a neuroinflammation that potentially is immunologically definable. 
In such terms, sporadic and hereditary forms of amyotrophic sclerosis are paralleled by the components of a reactive nature that involve the aggregation of proteins and conformational misfolding on the one hand and a powerful oxidative degradation that overwhelms the proteasome clearance mechanisms. In such terms, global participation is only one aspect of a disorder realization that induces the development of the defining systems of modulation and of injury that involves the systems of consequence as demonstrated by the overwhelming immaturity of the molecular variants of mutated superoxide dismutase. It is further to such processes of neuroinflammatory consequence that the immune system is integral to the reactive involvement of neurons as patterns of disease recognition and as the system biology of prevalent voluntarily motor character. It is highly significant to recognize various inflammatory states in the nervous system as prototype variability in phenotype expression and as incremental progression in pathogenesis. In fact a determining definition of amyotrophic lateral sclerosis is an incremental phenotype modulation within the pathways of the consequential loss and depletion of motor cell components in the first instance. Neuroinflammation proves a pattern of the contextual spread of such pathogenic progression in the realization of end-stage injury states involving neurons and neuronal networks. Amyotrophic lateral sclerosis (ALS) is manifested by an array of the large neuronal cell loss of motor type as further constituted by a toxic gain of the function of superoxide dismutase in a small percentage of the patients. The evolutionary course of the disease is further constituted by a series of progressive changes that affect clinically both sporadic and hereditary forms of motor neuronal loss . In suchDisease pattern is recognizable in terms of ongoing changes in neuronal cells that comprises a noncell autonomous involvement implicating glial cells ranging from astrocytic glutamate production and microglial reactivity. Nuclear factor-kappaB downregulation in neuronal nuclei in ALS might promote the loss of neuroprotection or else be associated with nuclear loss .The developmental consequences of injury in this disorder are further evidenced by the appearance of defects in axonal transport and in the phosphorylation of light and heavy forms of neurofilament, by the appearance of aggregation or inclusion bodies and also by such structures as hyaline bodies consisting of mutant superoxide dismutase, skein inclusions, Bunina bodies, and at times discrepancy in the evolutionary course when correlated with specific missense mutations of superoxide dismutase gene.The overall dynamics of this heterogeneous group of disorders complicate a derivational body of consequences that results, in the overwhelming majority of cases, in respiratory failure after some 2-3 years of disease course. The skeletal muscle atrophy is a realization of the central nervous system involvement that evolves as an apparent consequence to lesions in motor supplying neuronal axons and to abnormalities in eventual neuromuscular junctions supplying these muscles. Disease-driven changes in ATP-binding cassette drug efflux transporters in the CNS interfere with effective ALS pharmacotherapies [A network basis for the disease is recognizable in terms of evolutionary dynamics in the face of injury to a systemic series of structural components throughout the entire complex of motor neuronal supply to skeletal muscle. 
In such manner, patterns of involvement are realized dimensions of network participation in the delineation of progressive kinetics in skeletal muscle atrophy.The disease recognition patterns of involvement are reflected in the systemic response of neurons to endoplasmic reticulum stress and cytosol saturation by mutant superoxide dismutase in some 20% of the hereditary cases of amyotrophic lateral sclerosis. In such manner, oxidative stress is a recognized component pathway with indices referable to such stress response to a primary injury to neurons. It is in such capacity of attempted containment of the cellular stress that the progression of this disorder proves the true dimensional dynamics of pathogenesis in neuronal cell loss. Oligoclonal bands in the CSF of patients with ALS may be associated with gene mutations . FurtherMicroglia become transformed and neurotoxic in end-stage disease ALS .Aggregation of immature forms of superoxide dismutase and the mechanics of cytosol-nuclear abnormalities of the exchange and transport of antioxidants also are referable to the dimensions of a recognition pattern signature of disease involvement that attests to the overall dynamics of neuronal cell death pathways .Lipid and DNA oxidation correlate with the systems of the aggregation of inclusion bodies as further attested by the oxidation of the superoxide dismutase itself. It is further to ongoing developmental outlines of disease reappraisal that amyotrophic lateral sclerosis is indeed a response pattern of consequence within the contextual reproducible pathways of ongoing aberrant intracellular transport mechanics. Neuroinflammation proves a response to neuronal dysfunction and death and response manipulation might alter disease progression . An esseThe concept of the renewal of the immune response arises in terms of a recurring system of parallel pathways that may especially target the central nervous system. It is with regard to an extensive repertoire of conditioning and reconditioning manoeuvres that the overall dimensionality of targeting provokes a reappraisal of current processes that prove dominant in further injuring the brain in patients with acquired immunodeficiency syndrome . A critiIt is further to be realized that the systems of the involvement of inflammatory reactivity are borne out by a system organ that is participant in provoking systemic by-products as defined by multiple organs such as the reactive spleen and lymph nodes.Such panorama of participating components in AIDS encephalitis is a truly derived phenomenon and also a real originator of further aberrant responsiveness as evidenced by a wide diversity of possible opportunistic infections seen clinically in patients with AIDS.It is within the developmentally aberrant severity of the inflammatory response within the central nervous system that the realization of tissue injury proves a progressive phenomenon of self-sustaining participation in injuring further the native neural tissues . The sigB cells and antibodies are implicated in at least a subset of patients with multiple sclerosis and are related to the production of oligoclonal bands. The CNS also locally produces antibodies .The extreme variability of pathologic events both in patients with AIDS encephalitis and also in patients with different biologic substrate in evolving multiple sclerosis proves a serial representation of multimodal realizations that provoke an immune participation that results in a further augment in the inflammatory response. 
Lipid metabolism and vascular pathology are both implicated in multiple sclerosis .In such manner, the provocation of a targeting inflammatory process is determined and also determines in its turn an alternating but self-sustaining realization of tissue injury that spans the dimensional conditions in AIDS encephalitis or multiple sclerosis. It is further to a serial reconstruction of various forms of injury to neural tissues that the by-product of significant injury is the main criteria in the evidential proposition of inflammatory and immune system induction to further tissue injury . OxidatiThe distinctive cell subpopulations of the involvement of the central nervous system are real component of a confined series of pathways of an organ such as the brain that proves not immune privileged . Thus, pThe constitutive evidence for a further increment in disease activity is borne out by developmental innate immune response and as further signified by the injury to multiple tissue components .The multifocality of injury to the CNS is an integration of evidential pathways of the reconstruction of the injurious events themselves in various modes of the further promotion of such pathways as directed microglial response and as participating vascularity and as also gliosis and subsequent aberrant immune- and inflammatory-mediated responses . HIV treFurther to the significant emergence of tissue injury, the parameters of dimensional involvement in CNS inflammation are dramatically reconstituted in terms of the realization of new, targeted responses to other foci of directed involvement. It is significant to consider the overall dimensions of realization in terms of the participation of the overall self-augmenting or positive feedback loops in CNS inflammatory states .Dual participation of the immune and the inflammatory pathways proves integrative, especially with an increasing severity of these responses towards further foci of neural injury. The dimensions of further cooperative participation are mutually self-identifying motives in the significant emergence to necrotic foci of neural tissue. Microglia express various Pattern Recognition Receptors to identify viral signatures called Pathogen Associated Molecular Patterns to which microglia respond by producing inflammatory mediators .In terms of such ongoing pathway culmination, the persistence of an integrative response is paradoxically self-generating as realized also by the intermittent relapses in multiple sclerosis patients or in the recurrent attacks of opportunistic infection in AIDS patients.Neuroinflammation is a potentially constitutive mode of pathogenesis in various neurodegenerative disorders such as amyotrophic lateral sclerosis that evolves as an apparent primary neuronal cell loss within the additional contexts of the parallel evolution of apoptosis and necrosis of the neurons .In such manner, a beneficial incremental change attests to the possible limitation of injury that is partly contributed to by glial cells such as astrocytes and by microglial reactivity.The corresponding pathologic spread in amyotrophic lateral sclerosis of a putative agent is analogous to dynamics of involvement in prion disease in terms of aggregation that corresponds to mechanics of neuronal cell loss in these disorders. 
The innate immune response is a recognized component pattern of involvement in a disease process such as neurodegeneration whereby also inclusion bodies are the consequences of attested intracellular stress mechanics.Disease recognition patterns as network involvement primarily indicate system participation in pathogenic progression and also as incremental definition of inflammatory pathways. In such terms, the distribution of motor neuron loss indicates a predilection for neurons in reference to such indices as the large size of the cells and as further spread involvement within the neuronal motor systems .Toxic gain of function of mutant superoxide dismutase indicates a realization that is deferentially distributed due to modes of action independent of enzymatic function. In such manner, the promotional realization of evidential pathways includes derived neuronal lesions that arise as neuroinflammatory foci and as distributional realization for the further spread of the neuronal cell loss. Composite idealization is dimensionally ensured as a significant pattern formulation in disease signature definition. It is in view of comparable indices as parameters of evolution that disease progression permits and also enhances susceptibility to stress-induced injury to individual neurons within motor system pathways.Blood brain barrier breakdown leads to a neuroinflammation and oxidative stress .Distributional markers as models of injury indicate a proximate series of changes that account in turn for the oxidative series of modulated lesions within the patterns of the evolutionary progression of the disease. Hereditary motor neuronal lesions prove a susceptibility series of a heightened nature and are formulated by the network reactivity of neuronal subsets.System biology is a source of potential realization in amyotrophic lateral sclerosis that persistently constitutes the determined involvement of systems of the progression of a disease process that is rapid and incrementally realized as a loss of large motor neurons. It is further to such considerations that the outline parameters of overall index involvement include the definition of patterns of disease definition within the distributional reality of the motor neuronal system as a whole.The overactivity parameters of oxidative stress somehow include a toxic gain of function that involves systems of repair or compensation, on the one hand, and also serial modulation of recoverability as system dimensions of the disease entity. It is in terms of such evolutionary course that the further outline of lesion characterization permits the distribution of significant lesions beyond simple oxidative stress.The sporadic form of amyotrophic lateral sclerosis includes oxidative stress in its own right but as definable beyond mutations of the superoxide dismutase enzyme. It is with regard to further neuroinflammatory injury that the immune system appears as a common referential series of pathways in its own right that defines the nature of characterized neuronal cell loss.Neuroinflammation pathogenically links such diverse conditions such as amyotrophic lateral sclerosis, AIDS encephalitis, and multiple sclerosis with the marked activation of astrocytes and microglia and the production of proinflammatory agonists. 
Upregulation of endothelial adhesion molecules and downgrading of tight junction proteins facilitate in particular CNS the ingress of T lymphocytes .In such manner, parametric indices allow for the emergence of overall dimensions as important determinants in the characterization of the progressive course of a disorder that is primarily depletive but also reactive. Neuroinflammation, hence, is constitutionally a superimposed series of gains in toxicity that overcome recoverability parameters on the part of individual neurons as integrated signature networks of pattern recognition.Significant overall dimensions of inclusion allow for the distribution of lesions within the intracellular compartment in modes of aggregation and precipitation as evidenced by forms of mutated superoxide dismutase that lack in particular the Zn ion or of the disulfide bonds. It is further to a compromising realization of immature metabolic phenotype that mitochondrial damage precipitates apoptotic cell death and also other metabolic phenotypes of the realized destabilization of the proteasomal system in misfolded protein clearance. With reference to misfolded moieties of protein aggregation it is significant to recognize the reactive constitution of the motor system disorders within the context of further progression of the neuronal cell loss.Various aspects of the biology of neuronal cell loss permit the global evolution of injury that complicates patterns of potential recoverability in the face of evolutionary system participation. Allowance for participation is significant in terms of further modulation of the lesion that accommodates indices of determining pathogenic influence. Neuroinflammation is a powerfully effective series of revision pathways that incorporate the subsequent realization of the evolutionary patterns of an incremental nature. In terms of serial conformation and as evidential system patterns, the biology of amyotrophic lateral sclerosis consists of an inflammatory reactivity in the face of the serial reconstitution of inflammatory and immune pathways that incrementally challenge the neuronal viability issues. Such processes implicate the significant modification of phenotype as evidential systems of systemic requirement. It is with regard to system pathways that the conclusive phenomenon of neuronal cell death indicates the deliberate termination of system determination.In overall terms, the required pathogenic course dynamics in neuronal cell loss participate as systems of over-riding reactivity. Significant signature recognition patterns of pathogenic progression are potentially implicated in amyotrophic lateral sclerosis. Such incremental indices would indicate the overwhelming systemic central nervous system involvement as predominant neuroinflammatory indices of activity and reactivity."} +{"text": "Power underpins relationships between different actors in health systems and is exercised directly by way of coercion and inducement, or indirectly by controlling the ideas and environments that influence other actors to make decisions. Increasingly, abuse and misuse of power have been implicated as key determinants of poor health systems performance in low-income contexts. However the ephemeral nature of power makes it a difficult subject to study, as do the political connotations of such analysis.We present case studies from three separate research projects conducted in varied settings in India. 
The studies draw from bottom-up theories of policy implementation analysis, which help to locate intangible themes such as power in real-life events and processes: i) a qualitative analysis of the implementation of global guidelines for HIV testing in five cities; ii) a policy analysis of factors influencing the performance of regulatory institutions for health care in two states; iii) a health systems ethnography exploring the decision-making processes of doctors working in remote rural areas in Chhattisgarh state. All three studies utilized qualitative research methodology, including in-depth interviews and document review, and the 'interpretive' approach of analysis to understand the experiences of health systems actors. i) The analysis of the implementation of global guidelines for HIV testing revealed that doctors widely resisted pressures to follow the guidelines, yet could rarely play a role in influencing policy change. A combination of this paradoxical balance of power, conflicts between different actors' interpretations of policies, and a lack of avenues for the exchange of ideas contributes to the rift between written policies and field-level practices. ii) The second study describes how public institutions for health care regulation have been subjected to 'capture' by the medical professional groups that are represented in these institutions. The performance of the core functions of these regulatory bodies is frequently subverted or obstructed by the forces embedded within them, with the objective of protecting or serving specific vested interests. iii) The final case study highlights how doctors performing crucial roles in providing health care in remote rural areas face adversity not only from poor working and living conditions, but also in the form of unsupportive administrative structures, unaccountable promotion and transfer policies, and poor access to continued education. These three studies present a heterogeneous picture of the power of medical professionals in the context of national health systems. While medical professionals continue to hold sway over key administrative and regulatory institutions, this power tends to be directed to the protection of pecuniary and petty political interests, rather than to the upliftment of medical practice. Even as rural doctors struggle to perform under the yoke of unjust administrations, there is evidence that doctors in cities may also be powerless to influence the policies that guide the norms of their practice. The case studies collectively highlight the crucial role of medical professional power and interests in influencing health systems performance in India, and also demonstrate that a closer appreciation of doctors' vulnerabilities is necessary in order to confront the problem of medical dominance.
At the same time they draw attention to the embeddedness of health systems in society – societal norms, structures and balances of power – and consequently to the necessity of societal reforms favouring justice and equity for improving health systems performance. More evidence on the sociology of health systems is called for, as India moves towards sweeping health sector reforms. The author declares that he has no conflict of interest. The author declares that the research studies on which this paper is based were funded by the Aga Khan Foundation and University of London (study 1); the Nossal Institute, University of Melbourne (study 2); the World Health Organization and the Government of India (study 3)."} +{"text": "Due to its conservative morphology and allegedly primitive trunk tagmosis, we have utilized the centipede Strigamia maritima to study the correspondence between the expression of engrailed during late embryonic to postembryonic stages and the development of the dorsal exoskeletal plates (i.e. tergites). The results corroborate the close correlation between the formation of the tergite borders and the dorsal expression of engrailed, and suggest that this association represents a symplesiomorphy within Euarthropoda. This correspondence between the genetic and phenetic levels enables accurate inferences to be made about the dorsoventral expression domains of engrailed in the trunk of exceptionally preserved trilobites and their close relatives, and is suggestive of the widespread occurrence of a distinct type of genetic segmental mismatch in these extinct arthropods. The metameric organization of the digestive tract in trilobites provides further support for this new interpretation. The wider evolutionary implications of these findings suggest the presence of a derived morphogenetic patterning mechanism responsible for the reiterated occurrence of different types of trunk dorsoventral segmental mismatch in several phylogenetically distant, extinct and extant, arthropod groups. Trilobites have a rich and abundant fossil record, but little is known about the intrinsic mechanisms that orchestrate their body organization. To date, there is disagreement regarding the correspondence, or lack thereof, of the segmental units that constitute the trilobite trunk and their associated exoskeletal elements. The phylogenetic position of trilobites within total-group Euarthropoda, however, allows inferences about the underlying organization in these extinct taxa to be made, as some of the fundamental genetic processes for constructing the trunk segments are remarkably conserved among living arthropods. One example is the expression of the segment polarity gene engrailed. Trilobites comprise a very diverse and successful monophyletic group of well-known extinct arthropods characterized by the possession of a biomineralized dorsal exoskeleton (Bergström, 1973), and include some of the oldest known macroscopic metazoans in the fossil record. All trilobites shared the same basic body construction; this consists of a head formed by four limb-bearing segments covered by a cephalic shield, followed by a homopodous trunk with a highly variable number of segments that show a significant diversity in terms of the number of expressed tergites, as well as their degree of differentiation and fusion (i.e.
tagmosis) There is a good understanding on the growth dynamics of the trunk in several trilobite species The main arguments utilized by St\u00f8rmer Clarifying the correspondence between the tergites and the segments that constitute the trilobite trunk carries implications for understanding the origins and early evolutionary history of this important group, as it has been suggested that certain intrinsic aspects of trilobite trunk development show indications of significant variability and plasticity, particularly evident in Cambrian representatives relative to younger forms engrailed (en), which plays a pivotal role during segmentation, as it is essential for the formation and maintenance of the intersegmental borders in Drosophilaen was in all likelihood also involved in segment formation in these extinct arthropods The loss of biological information associated with the process of fossilization makes it impossible to examine directly the relationship of the trilobite\u2019s tergites and their corresponding segmental units. However, several of the mechanisms responsible for the formation and patterning of the segments in extant arthropods are highly conserved, and thus enable making some inferences about the fundamental genetic processes required for the construction of a metameric trunk in these extinct representatives. A prime example can be found in the segment polarity gene en and the position of the tergite boundaries in the larval and adult abdomen of insects such as Oncopeltus fasciatus and Drosophila melanogasterGlomeris marginataen based on the position of the dorsal exoskeletal elements, even in cases in which there is a secondary modification on the exact expression domain of the en stripe, such as the dorsal side of Glomerisen and the formation of the tergite borders in these arthropods is representative of the ancestral state. To address this question, it is necessary to analyze the correspondence between en and the tergites in another extant arthropod model, ideally one that features a plesiomorphic trunk morphology.Some studies have also found a direct correlation between the expression of Strigamia maritima to study the correspondence between the formation of the tergites, during late embryonic to postembryonic development, and the dorsoventral expression of en. We then use the information on the correlation of en expression and tergite border formation as the theoretical foundation from which to make inferences about the expression domains of this segment polarity gene in the trunk of exceptionally well-preserved trilobites, and other closely related fossil arthropods. The trunk segments of Strigamia have several desirable traits for the aims of this study, such as the anteroposterior differentiation of the tergite into a pretergite and a metatergite, and the presence of lateral spiracles, as these morphological features can be utilized to follow the development of the tergites. Furthermore, the embryonic development of Strigamia has recently been described in detail Due to the conservative trunk morphology of centipedes, most notably the homonomy of the segments and direct correspondence between dorsal and ventral sclerites, we utilize the geophilomorph Strigamia were collected near Brora, northeastern Scotland . As noted by several workers In situ hybridizations were mostly performed as described by Chipman et al. 
Information on the morphology of the exoskeleton in trilobites and trilobite-like taxa was extracted from the primary literature, original photographs, and/or by direct inspection of catalogued specimens housed in scientific collections. The Smithsonian Institution (Washington D. C.), the Palaeontological Association (UK), and the Whittington Archives granted permission for the study of collections and figure reproduction. Institutional abbreviations: Smithsonian Institution (USNM); National Museum of Wales (NMW); Early Life Research Centre, Nanjing Institute of Geology and Palaeontology, China (ELRC). No new material was collected for this study.Strigamia embryo develops as an extended flat germ band on the surface of the egg. In this flattened germ band, the ectoderm is on the surface, with the ventral component along the medial area and the right and left laterodorsal components symmetrically at the two sides Strigamia embryo faces the proctodeum, and the anterior part of the ventral ectoderm juxtaposes with the posterior part 1, C1. On tissues 1. This s tissues [54]. A tissues .en is very similar to that previously described for stage 7 embryos, including the correlation between the laterodorsal extent of the developing hemitergite and the corresponding stripe .Prominent changes observed in the cuticle of post-hatchling juveniles include the completed dorsal closure, forming a dorsal midline running longitudinally throughout the trunk , as wellMost of the features described before become more accentuated . The latAll the exoskeletal elements have acquired, or are close to, the mature morphology . The preen in embryos of Strigamiaen in embryonic stages, and represent valuable landmarks for estimating the approximate expression domains of this gene in hatched individuals. Thus it is possible to extrapolate the expression domain of en in the postembryonic trunk segments, which would be largely consistent with the pattern observed in the embryonic stages .The results confirm previous findings on the early expression of c stages . It is cen in the trunk segments of Strigamia is coincident with approximately the posterior third of each tergite. More specifically, the posterior edge of each en stripe is directly correlated with that of the posterior (meta)tergite boundary of its corresponding segment, and does not overlap with the adjacent (pre)tergite tergite 5; 8A. Asral side . While t al. see that en f en see . Regardlen and the formation of the tergites in the trunk segments of phylogenetically distant representatives of Hexapoda and Myriapoda is indicative that this relationship represents a highly conserved, and almost certainly ancestral, patterning mechanism of the ectoderm derivates in Mandibulata. Amongst Chelicerata, detailed information on the expression of segment polarity genes is only available from a few araneaeid species. However, the pattern of hemitergite formation and dorsal closure in the opisthosoma of the cobweb spider Parasteatoda tepidariorumen observed in the posterior region of the orb weaving spider Cupiennius saleien expression and tergite formation represents the symplesiomorphic condition for the development of the laterodorsal exoskeletal elements in crown-group Euarthropoda.The close correspondence between the dorsal expression of PhacopsMisszhouiaMost aspects of the palaeobiology of trilobites are exclusively known from the morphology of their biomineralized dorsal exoskeletons. 
There are little more than 20 trilobite species from which the exceptionally preserved ventral tissues have been described, covering a considerable range of ages, taxonomic groups and preservation styles [1]. AltKiisotoria saperi, an Early Cambrian non-trilobite arthropod from Greenland, show the presence of transverse ligament-like structures comparable to the tendinous bars of trilobites It is very unlikely that the peculiar trunk organization of trilobites is merely the result a taphonomic artefact caused by the decay of the internal tissues or compression after burial. Specimens with soft tissue preservation of \u201cWe will not enter into the detailed discussion on whether the tergite boundaries in these arthropods coincide with (segment) boundaries. Arbitrarily assuming that they do, the (segments) \u2026 must be obliquely inclined, with the ventral part more anterior than the dorsal\u201d. Although the formation and maintenance of the body segments in arthropods is a complex process that requires a precise patterning of the ectodermal and mesodermal derivates, the interpretation for obliquely inclined segments in trilobites and their close relatives would imply a considerable morphogenetic rearrangement of the former. Instead, the plesiomorphic correlation between the position of the tergite borders and the expression of en in extant arthropods can provide an alternative explanation for this exoskeletal organization. The highly conserved ventral segmentation patterning and associated expression of en within Panarthropoda , insects Glomerisen stripe in the trilobite trunk would not have been located in the posterior region of the segment, but rather approximately in the middle of it, just above the site of limb attachment to the body. The same interpretation is also applicable for trilobite-like taxa in which most of the tergite boundaries have become fused into a single trunk shield. In the case of nektaspidids, for example, the only functional articulation is found at the cephalo-thoracic boundary , a segment polarity gene that is normally active immediately anterior to the expression stripe of enwg-like anterior patterning signal could possibly be substituted by the interaction of hedgehog, a segment polarity gene with a similar expression domain to that of en, and a number of additional genes that are also expressed in the dorsal segmental units of Glomeris. This proposed mechanism is largely based on analogy to a process that has been reported in the abdominal ventral pleurae Drosophila, which are characterized by similar patterns of gene and morphogen activity.The proposed dorsoventral mismatch of segment polarity gene expression in the trilobite trunk clearly deviates from the plesiomorphic arthropod condition consisting of a dorsoventrally continuous stripe of GlomerisGlomeris could be the result of evolutionary convergence that would have at least one parallel at the level of gene expression , which consequently results in a close morphological analogue . 
Indeed, among extinct and extant arthropods, genetic and phenetic differences in the dorsoventral segmental patterning of the trunk region can be found in several additional cases is not expressed in the most ventral region of the germ band; however, it may be possible that the widespread dorsoventral expression of the gene Cs-Wnt5-1 is responsible for establishing the anteroposterior polarity of the germ band wg is apparently not expressed on the dorsal side of the notostracan Triops longicaudatus during post-embryonic segmentation , it has an additional morphological parallel with the diplosegments of Glomeris in that the posterior tergites are associated with more than one pair of legs are associated with several pair of legs, unlike those that form anterior part of the trunk that only bear a single leg pair Fuxianhuia protensa, Shankouia shenghei), a controversial lower Cambrian group of rather primitive-looking arthropods that have been commonly interpreted as basal members of the euarthropod stem lineage This consideration can be further applied to extinct taxa that are also characterized by the presence of supernumerary pairs of limbs per tergite on the posterior region of the trunk. Among trilobite species with exceptional limb preservation, there are some confirmed cases in which there are several pairs of walking legs clustered in the posterior portion of the trunk, the so-called pygidium s eatoni trilobites with soft-tissue preservation that could be associated with the reduced body size that is characteristic of agnostids; alternatively, the trunk organization may provide indication that the position of agnostids lies outside Trilobita, and probably within the stem-lineage of Crustacea or even Mandibulata, but have acquired a similar dorsal morphology with eodiscid trilobites convergently. Haug et al. Agnostus pisiformis, coupled with the fact that the mode of trunk development in polymeroid trilobites and agnostids most likely represents an ancestral trait of total-group Mandibulata Within the context of our findings on trilobite segmentation, it is possible to provide two alternative interpretations for the affinities and trunk organization of Strigamia, it has been possible to corroborate the link between the development of the tergites and the associated dorsal expression of the segment polarity gene en, a relationship that can be readily considered as symplesiomorphic for Euarthropoda. The fact that this correlation is ubiquitous and persistent, even in extant representatives in which the trunk segmentation is clearly modified relative to the ancestral arthropod condition, enables to make precise inferences about the expression domains of en in the trunk of exceptionally preserved trilobites. This information leads to the conclusion that the segment and tergite borders of trilobites, as well as some of their close relatives, were not perfectly aligned with each other, which is indicative of a derived and widespread type of dorsoventral segmental mismatch in this diverse and early arthropod group. The interpretations on the segmental patterning of the trilobite trunk drawn from the ectodermal derivates of these arthropods are corroborated by additional information on further aspects of the exceptionally preserved internal anatomy, such as the structure of the digestive tract. 
Conversely, the metameric arrangement of the longitudinal musculature is deemed inadequate to address the segmental organization of the trilobite trunk due to the effect of potential postembryonic morphogenetic movements during ontogeny. These findings carry wider evolutionary implications for understanding the processes of arthropod segmentation in some of the oldest representatives of the group, and suggest the reiterated occurrence of a derived type of dorsal gene expression (i.e. an anteriorly displaced en stripe) that is ultimately responsible for the morphological pattern of trunk segmental mismatch in several disparate groups of extinct (trilobites and closely related taxa) and extant (e.g. the haplosegments of Glomeris) arthropods throughout their long and successful evolutionary history. The difficulties associated with resolving the correlation, or lack thereof, of the segments that comprise the trilobite body with their respective exoskeletal elements stem from the fact that the morphological information available from the fossil record is inevitably incomplete, and thus pose unique challenges to the study of development and segmentation in extinct arthropods. Through the analysis of embryonic and postembryonic stages of Strigamia, it has been possible to corroborate this link between tergite formation and the dorsal expression of engrailed."} +{"text": "The WHO "Guidelines on Evaluation of Similar Biotherapeutic Products (SBPs)" was published in April 2010. In terms of technical features, the WHO guidelines are consistent with those of the EU. Up to now, there are not yet specified regulations for SBPs in China; registration is based on "The Provisions for Drug Registration (SFDA Order 28)". Among the research projects of the "Twelfth Five-Year Plan" significant new drugs creation special, me-too biotherapeutics are still the major part of the projects on biotechnology medicines. Therefore, accelerating the process of establishing our SBP guidelines would be of great benefit for achieving the goal of ensuring the availability and affordability of public medicine and for improving the development of our country's biotechnology industry. Recently, our department has initiated the process of surveying the need to draft a specified SBP guideline. As a suggestion, because reference biotherapeutic products (RBPs) are very hard to obtain and very costly, which would surely increase the difficulty of developing and evaluating SBPs, how to define the requirements for the RBP should be carefully considered during the process of establishing our guidelines. In addition, special attention should be focused on how to perform the comparability exercise with the RBP in the non-clinical and/or clinical studies during the development of SBPs of therapeutic monoclonal antibodies. We believe that an SBP guideline which considers both the actual situation of biomedicine development in China and the general WHO framework will be established in the near future."} +{"text": "The current work is focused on ways in which the manipulation of mosquito mating behaviour has the potential to contribute to integrated programmes for the control of malaria vectors. One line of investigation explores means of reducing vector populations by significantly reducing mating with lure-and-kill and mating disruption measures. The second line is looking at variation in mating success between and within swarms of Anopheles gambiae and its underlying factors, with an ultimate goal of designing mosquito rearing schemes that produce males that are competitive in the An.
gambiae mating system.A complete map of swarm distribution in Vall\u00e9e du Kou was constructed and swarms were physically described. Overall swarms were tightly linked to specific man-made markers within the village and a significant difference in swarm numbers and size was observed between households. The pattern distribution of swarms across space was clustered and hotspots are clearly seen where most of the swarms aggregate. A multivariate analysis allowed identifying a subset of environmental parameters that best correlate to swarm structures and that includes, the number of swarm markers/surface unit, the exposition of the makers to sunlight, the contrast pattern and the openness of the marker to air circulation. Exploration of the energetic budget in relation to swarming and mating showed that sugars and glycogen are the main energetic sources that fuel males mating activities. The distribution of wing size of mated males was focused around a central value suggesting that intermediate size of males is advantageous in A better knowledge of key parameters that account for male mating success will be of significance to control strategies based on the release of genetically modified or sterilised males. Similarly, understanding the ecological parameters that are correlated with the presence or absence of swarms would be valuable for the implementation of mating disruption strategies."} +{"text": "The radiobiological effect of densely ionizing ends of primary or secondary charged particles may be influenced significantly by processes running in the chemical stage of radiobiological mechanism; especially the influence of present oxygen may be very important. The effect of its or of other species (radiomodifiers) present in water medium during irradiation may be studied with the help of corresponding mathematical models. The model based on the use of Petri nets will be proposed and described.Two parallel processes, i.e., diffusion of radicals and their chemical reactions, running in corresponding radical clusters formed during energy transfer may be represented with the help of the given model. A great number of chemical species may be easily taken into account. The model enables to study the concentrations of individual radicals changing during cluster diffusion and to estimate their damaging effects on corresponding DNA molecules in given cells. The results demonstrating the influence of oxygen under different concentrations will be presented."} +{"text": "The ATP-binding cassette (ABC) proteins represent a large family of transmembrane proteins that use the energy of ATP hydrolysis to transport a wide variety of physiological substrates across biological membranes . Of themThe ABCC2 transporter is a transmembrane protein expressed in the apical cell membrane of hepatocytes and epithelial cells of small intestine and kidney, where it is involved in the elimination of many endogenous and exogenous substrates from the cell, including compounds clinically relevant [in silico model based on the Gottesman database [In this scenario, the aim of the present work was the development of an database able to database . Moleculdatabase . Feature"} +{"text": "Parents of children with ASD are increasingly using special diets and dietary supplements, despite a lack of robust evidence of effectiveness. 
The most popular dietary intervention is the gluten free casein free (GFCF) diet which is not without risks for the child and family.To explore the attitudes of parents and health professionals towards dietary interventions and the use of the GFCF diet. To assess the feasibility of an RCT of this diet in preschool children with ASD.Short web-based questionnaire for parents and child health professionals.246/361 parents and 246/317 professionals responded. Of all parent respondents, 46% were currently using dietary supplements for their child, 84% were aware of the GFCF diet and 28% were currently using this diet. Three quarters of parent respondents said they would \u2018definitely', or would \u2018consider' participating in an RCT of the GFCF diet.72% of child health professionals had been approached by parents for advice about the GFCF diet. 50% of professionals reported they did not know enough about the efficacy of the diet to advise families. The majority of professionals strongly supported the need for evaluation of the GFCF diet and 75% would be prepared to recruit children to an RCT.The need to evaluate the GFCF diet has been confirmed. There is support amongst professionals and parents for an RCT of this diet and facilitators and barriers of recruitment and retention of families in a future RCT have been identified.Autism Speaks US."} +{"text": "The past few decades have seen major advances in evaluation and treatment of fetal cardiovascular diseases. Largely due to advances in imaging, recognition of structural pathology in the developing human heart can now be performed as early as the 12th week of pregnancy and can be seen to develop and progress through gestation. Because of the observation that serious structural congenital heart disease may progress from seemingly minor disease if untreated, several centers are now intervening before birth to address such abnormalities and attempt to prevent the further development of structural disease. Furthermore, detailed assessment of cardiac rhythm, function, and myocardial mechanics is now also possible as early as the first trimester. Transplacental treatment for fetal rhythm abnormalities has dramatically changed the outcomes for affected pregnancies in the past decade. More recently, several centers have begun to incorporate routine fetal cardiovascular assessment in the evaluation of diseases such as congenital cystic adenomatiod malformation of the lung, twin-twin transfusion syndrome, and congenital diaphragmatic hernia where structural disease may impose significant comorbidity postnatally, and hemodynamic derangements and functional pathology secondary to the primary process may impact the fetus in utero. Recognition of potentially treatable fetal cardiac disease may alter the prognosis for these patients in the perinatal period as in utero treatment to address the primary abnormality has been shown to improve the hemodynamic derangements observed. Finally, prenatal recognition of fetal cardiac disease in general may be changing the natural history and incidence of disease in the postnatal population. Regardless of training and background, any healthcare professional involved in the diagnosis and management of diseases of the fetus and newborn now needs to be cognizant of the potential contribution of prenatal cardiac assessment and treatment in the congenitally malformed or unwell fetus. 
In this special issue on the fetus as a cardiac patient, we have invited a few papers addressing issues unique to this patient group.The first paper of this special issue addresses ethical issues relating to fetal diagnosis of a major abnormality with special emphasis on cardiac malformations. Presented are discussions of the ethical concept of beneficence and the principle of patient autonomy in the context of counseling expectant mothers and the complex ethical situation which arises in the consideration of the fetus as a patient.The second paper presents a comprehensive review of cardiac findings in twin-to-twin transfusion syndrome (TTTS), a condition which is a severe complication of monochorionic twin pregnancy. TTTS is characterized clinically on ultrasound by polyhydramnios in the \u201crecipient\u201d twin and oligohydramnios in the \u201cdonor\u201d with varying degrees of cardiac dysfunction in the recipient. The pathophysiology of the syndrome itself and of the development of cardiomyopathic changes remains incompletely understood. The review discusses what is known with respect to the cardiac findings at presentation, natural history, and response to treatment and discusses current approaches to a comprehensive cardiac evaluation of affected fetuses. The third paper describes a large series of fetuses presenting with findings consistent with cardiomyopathy or myocarditis and represents a large natural history study of these entities, underlining the particularly high perinatal loss rate with diagnosis of dilated cardiomyopathy or myocarditis, as opposed to many of the hypertrophic myopathies.The issue concludes with two thought-provoking review articles regarding the intrauterine environment and the complex interaction the developing fetal brain and circulatory system have with each other and the placental circulation. In the first of these, the authors present a discussion of intrauterine hypoxia, its various causes, and mechanisms whereby disease in the fetus may result. The final paper presents a comprehensive review of the current understanding of pathologic findings in the developing brain of the fetus and infant with congenital heart disease. Methods for detection, potential etiologies, and implications for neurodevelopmental outcome are discussed. Intriguing speculation regarding the possibility of altering the natural history of developmental brain abnormalities via in utero intervention will leave the reader eagerly anticipating future developments in the rapidly advancing field of fetal cardiovascular assessment and treatment. Anita J. Moon-GradyAnita J. Moon-GradyShinjiro HiroseShinjiro HiroseGreg KesbyGreg KesbySamuel MenahemSamuel MenahemWayne TworetzkyWayne Tworetzky"} +{"text": "Many women in Sub-Saharan African countries do not receive key recommended interventions during routine antenatal care (ANC) including information on pregnancy, related complications, and importance of skilled delivery attendance. We undertook a process evaluation of a successful cluster randomized trial testing the effectiveness of birth plans in increasing utilization of skilled delivery and postnatal care in Ngorongoro district, rural Tanzania, to document the time spent by health care providers on providing the recommended components of ANC.The study was conducted in 16 health units . We observed, timed, and audio-recorded ANC consultations to assess the total time providers spent with each woman and the time spent for the delivery of each component of care. 
T-test statistics were used to compare the total time and time spent for the various components of ANC in the two arms of the trial. We also identified the topics discussed during the counselling and health education sessions, and examined the quality of the provider-woman interaction. The mean total duration for initial ANC consultations was 40.1 minutes (range 33-47) in the intervention arm versus 19.9 minutes (range 12-32) in the control arm (p < 0.0001). Except for drug administration, which was the same in both arms of the trial, the time spent on each component of care was also greater in the intervention health units. Similar trends were observed for subsequent ANC consultations. Birth plans were always discussed in the intervention health units. Counselling on HIV/AIDS was also prioritized, especially in the control health units. Most other recommended topics (e.g. danger signs during pregnancy) were rarely discussed. Although the implementation of birth plans in the intervention health units improved provider-woman dialogue on skilled delivery attendance, most recommended topics critical to improving maternal and newborn survival were rarely covered. Antenatal care (ANC) visits provide an opportunity to reach pregnant women with important preventive and treatment interventions as well as counselling on a variety of topics such as birth and complication readiness and the importance of skilled delivery and immediate postnatal care. The amount of time spent on health education, advice and counselling during ANC consultations is key to the effectiveness of ANC in improving health behaviours and care seeking during pregnancy, labour and delivery and in the immediate postpartum period. The scope of health education and counselling on pregnancy and related complications provided to women during ANC visits in most Sub-Saharan African countries, where over half of all maternal deaths occur, is however often inadequate or nonexistent. Maternal mortality is very high in Tanzania, estimated at 454 deaths/100,000 live births. The focused ANC package based on the WHO model was introduced in Tanzania in 2002. Studies on the content of ANC provided to women are needed in low-resource countries to improve the quality of ANC. Although some information on the quality of ANC services is collected through national surveys such as the Demographic and Health Survey (DHS) and health service assessments, detailed data on counselling, health education or promotion of skilled delivery and postnatal care are often missing. This paper reports the scope and quality of ANC services provided to women in Ngorongoro, a rural district in northern Tanzania. It compares time spent for various components of ANC and quality of care provided in the control and intervention arms of a cluster randomized controlled trial (RCT) that aimed at examining the effectiveness of birth plans in improving utilization of skilled delivery and postnatal care. Although a component of the recommended focused ANC in Tanzania, birth plans are rarely implemented during routine ANC consultations in the study district. Sixteen dispensaries offering maternal and child health (MCH) services in Ngorongoro district, rural northern Tanzania, were included in the study. According to the 2002 Tanzania national census, the total district population was 129,362, of which 29,489 were women in the reproductive age group. The study was descriptive and used mixed methods of data collection. 
Quantitative methods were combined with direct observation of ANC consultations. For the purposes of the cluster randomized control trial, the 16 participating health units were randomly grouped into either the intervention arm where birth plans were introduced or the control arm where the standard of ANC as per district protocol was offered . Antenatal and post natal care services were generally available at all clinics . Providers worked more than eight hours a day in some clinics because women would frequently arrive close to the end of official hours. Clinic attendees included a mixture of women seeking antenatal and postnatal care.Between October-December 2008, providers in the intervention units were trained for a total of two half working days at a training workshop, followed by a second training for two half working days at their respective health units. The training covered the implementation of birth plans during ANC and the importance of involving male partners or others identified by women as participants in their care. Providers were also instructed to encourage women to bring their male partners for future ANC visits if they did not accompany them at the time of study recruitment. Major components of the birth plans included: 1) plans for a place of delivery , 2) calculation of the expected costs and money saving strategies for delivery, 3) plans for identifying someone to accompany the woman to the delivery site and another to look after her household, 4) plans for identifying possible blood donors, and 5) plans for addressing any complications arising during pregnancy, labour, delivery, and in the postnatal period (for mother and newborn), and 6) strategies for overcoming any other barriers to accessing skilled delivery care. One copy of the birth plans was retained by the woman so that she could present it to providers at the delivery site, and a second copy was kept at the health unit.Providers' performances during practical training sessions were timed and recorded on a video camera and later jointly reviewed by trainers and providers. Follow-up training to re-enforce skills learned was conducted at each health unit on clinic days.Providers in the control units were trained for one half working day at a workshop in the concept of the study in their arm followed by another one half working day in their respective health units supervised by MM. Topics covered included general concepts of focused ANC, and collection and recording of study-related data from study participants and participants' follow-up. Health care providers in both arms of the study were also informed of the process evaluation during the training sessions and before data collection commenced.Providers in both the intervention and control arms of the study had previously undergone basic and in-depth training on the concepts and implementation of focused ANC by the district MCH team and various NGOs involved in MCH care. All had at least one year experience in implementing focused ANC.The process evaluation was conducted over a period of approximately ten weeks from January to March 2009 at all 16 health units and took place during the implementation of the cluster randomized trial. ANC consultations were observed, recorded using a digital voice recorder, and timed using a stopwatch. 
The total time spent on the consultation and on each component of care delivered was recorded.A health unit assessment form was used to collect information on clinic characteristics, provider adherence to the focused ANC model in all clinics, and on provider implementation of the birth plans in the intervention clinics. Some of the questions in the assessment form were adapted from the Tanzania Service Provision Assessment Survey, 2006 questionnaire, and theThe health units were visited unannounced. Each unit was visited once or twice for observation of the ANC consultations and once for postnatal care. Information was collected on a maximum of five consultations during any single visit to a health unit. Systematic random sampling was used to select up to five consultations depending upon the volume of patient flow at the health unit. The data collected were reviewed with providers after health unit working hours.Health care providers were asked to hang a digital voice recorder around their necks to record the entire consultation. This strategy was used to protect the woman's privacy during physical examinations. After seeking consent from both care providers and women, MM and one trained assistant, both familiar with Kiswahili and the two major local languages (Ma and Kitemi) observed the consultations except the physical examinations. The total duration of the consultations and the time allotted for delivering each component of care was recorded on the health unit assessment form. The components of care assessed included: history taking, examination for maternal and foetal well being, drug administration, and counselling and health education. Counselling and health education included time spent explaining the importance of skilled delivery care for all women, voluntary counselling and testing (VCT) for HIV, health education for HIV/AIDS, birth plans and other topics as stipulated in the focused ANC guidelines for Tanzania Figure . At the On the same day of the visit or immediately after leaving the health unit, MM recorded the observations of the quality of provider-client interaction findings in a diary. Information on the availability of essential supplies (e.g. drugs and equipment) and other items needed to support for the delivery of quality ANC was collected.The recorded information was analyzed by the principal investigator (MM) assisted by an experienced MCH nurse who reached consensus on the amount of time allotted to each component of care and on the quality of care delivered.The average total time spent for antenatal consultations for each clinic was calculated by dividing the sum of the time for each component of care delivered divided by the number of consultations assessed to obtain a cluster level summary that was entered in the health unit assessment form. Cluster level summaries on the time spent for providing specific components of care were also calculated. The summaries were later used to calculate the time spent for total consultation in each arm of the study as well as for specific components of ANC.The quality of the provider-client interaction was ranked good if the provider allowed women to ask questions and responded to their questions. The quality was ranked fair if the provider allowed women to ask questions but did not respond to their concerns nor try to ascertain if the women understood what was discussed. 
The discussions were ranked bad if the provider neither allowed women to ask questions, nor tried to determine if women understood what they were being told. The interactions were categorized as undetermined if for any reason the ANC sessions were too short to allow assessment of the provider-women interaction. A health unit was categorized to have \"good\" provider-client interaction if all consultations in the particular health unit were ranked fair or good.Quantitative data was entered into SPSS statistical package (version 16.0) and later transferred into STATA (version 9) statistical package for cleaning and analysis. The average time for initial and revisit ANC (2nd visit or higher), and postnatal care consultations were calculated and compared between the two study arms using unpaired t-test statistic. Times for components of ANC were similarly calculated.Ethical approval was obtained from the Institutional Review Boards of the World Health Organization, London School of Hygiene and Tropical Medicine, and the National Institute for Medical Research in Tanzania. Permission was also granted from Arusha region and Ngorongoro district administrative and health authorities and staff at participating health units. Informed verbal consent was obtained from all providers and women who participated in the process evaluation.A total of 23 service providers were observed (11 in the intervention arm and 12 in the control): nursing officers (2), midwives (5), MCH nurses (8), nurse auxiliaries (4) and clinical officers (4).73 initial ANC consultations were observed (36 in the intervention arm and 37 in the control). A total of 35 re-visit ANC consultations were observed (18 in the intervention and 17 in the control). Since not all women were given drugs or immunized at each ANC consultation, a total of 20 consultations were observed for this component of care . The reported average number of pregnant women seen per working day in each clinic did not differ between the two study arms .Table Table Providers in all intervention health units consistently counselled women on the importance of delivering at the available health units when helping them formulate their birth plans. In contrast, providers in only one control health unit briefly counselled women on skilled delivery care (information not shown). Providers in both the intervention and control health units consistently provided VCT for HIV/AIDS during initial ANC visits, and blood test results were communicated to the women before they left the clinics. Some aspects of prevention of maternal to child transmission of HIV (PMTCT) were also discussed during all ANC consultations.Tanzania's focused ANC guidelines recommend that women make plans for postpartum care during the third visit (at 28-32 weeks gestation). The importance of postnatal care was consistently emphasized in 5 out of the 8 health units in the intervention arm of the study. It was not consistently emphasized in three of the intervention health units because women typically initiated ANC late. Consequently, there was insufficient time to cover this topic along with all other recommended topics. In contrast, women in only one control health unit were informed about postnatal care and the amount of information provided was brief and inconsistent across consultations.All health units in the intervention arm and all except one in the control arm received a good score for provider-client interaction. 
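As a concrete illustration of the cluster-level analysis described above, the sketch below computes one summary value per clinic (the mean consultation time) and compares the two trial arms with an unpaired t-test. It is a minimal example only: the clinic names and timing values are invented, and SciPy is assumed in place of the SPSS/STATA packages named in the text.

```python
import numpy as np
from scipy import stats

# Hypothetical per-consultation durations in minutes, grouped by clinic.
# These numbers are invented for illustration and are not the study data.
intervention = {"clinic_A": [41, 38, 44], "clinic_B": [36, 40], "clinic_C": [43, 39, 45]}
control = {"clinic_D": [19, 22, 17], "clinic_E": [25, 18], "clinic_F": [14, 21, 20]}

def cluster_means(arm):
    """Return one summary value per clinic: the mean consultation time for that cluster."""
    return np.array([np.mean(times) for times in arm.values()])

x = cluster_means(intervention)
y = cluster_means(control)

# Unpaired (two-sample) t-test on the cluster-level summaries,
# mirroring the between-arm comparison described in the text.
t_stat, p_value = stats.ttest_ind(x, y)
print(f"intervention mean = {x.mean():.1f} min, control mean = {y.mean():.1f} min")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

Comparing cluster-level summaries rather than individual consultations is the standard way to respect the cluster-randomized design, since consultations within the same clinic cannot be treated as independent observations.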
Women asked questions freely and even participated in some health unit activities like weighing other women and recording this information on the ANC cards. Providers in all health units spoke the local language and needed interpreters only occasionally. All health units experienced stock-outs of drugs and equipment needed for the delivery of routine ANC. The implementation of the birth plan intervention significantly improved the total time for consultations and for most components of ANC, including health promotion and counselling. The increased time spent on counselling in the intervention units suggests that training of providers on birth plans can translate into measurable improvements in provider practices. The study did not improve the quality of care as measured by provider-patient interactions, since consultations in both arms of the study were generally rated as \"good\". This is a somewhat surprising finding, because a logical assumption is that spending more time with women should translate into improved communication between providers and clients. The fact that most providers were able to speak the local languages, were willing to work past regular hours to accommodate women's needs, had served in the health units for sufficient time to build a sense of trust with the surrounding communities, and knew most of the women attending their clinics, however, suggests that the rapport between providers and patients in the study health units was strong prior to the implementation of the intervention. Our findings are consistent with results from a previous study in an urban setting in Tanzania. The average time for the initial ANC consultation of 40 minutes in the intervention health units compares favourably with WHO recommendations. Training of providers on birth plans alone did not result in significant changes in the implementation of other aspects of the focused ANC model. Providers need more training on adequately delivering all components of the model, appropriate supportive supervision and regular evaluation of their performance to improve the quality of care. Arguably, the full implementation of Tanzania's focused ANC model in health units with heavy workloads might require an increase in the number of providers or office hours to cope with the increased time requirements to deliver ANC. The re-organization of health unit activities to give MCH providers more time and resources to provide quality ANC services may be a pragmatic interim solution. Staffing each health unit with an MCH provider responsible for delivering only clinical services could be a long-term objective. This study found that counselling in both control and intervention units was not provided on many topics recommended in Tanzania's national essential health intervention package and focused ANC guidelines. The time spent for counselling in this study in relation to client flow raises some questions about the time providers will need to spend to cover all recommended topics. If more time than is currently spent is needed to deliver all components of recommended care, this may present a challenge for providing individualized counselling in clinics with heavy workloads. Although group counselling for some topics such as danger signs in pregnancy, labour, delivery and postpartum may be an option, other topics such as birth plans may not be amenable to group discussion. 
Women's lack of autonomy and decision-making ability in the study setting, plus the sensitivity of some issues, also mean that a group approach cannot completely replace the need for individualized counselling. This study showed that some clinics experienced stock-outs of essential drugs and equipment needed to provide routine ANC. For example, blood pressure was not routinely checked among women in health units that lacked functioning blood pressure machines, and those with elevated levels could be missed, with serious consequences for both the women and their unborn babies. Such stock-outs may contribute to women's perceptions that services at health clinics in the district are of poor quality and discourage them from utilizing available services for delivery and emergency care. If women choose to bypass the health clinics in favour of hospitals for skilled delivery care, they may not reach them fast enough because the hospitals are located far from most women's residences. Process evaluation is a useful technique for understanding how well programmes are being implemented, and can be useful for identifying factors that contribute to or detract from smooth implementation. The process evaluation had some limitations. The evaluation was not designed to be a comprehensive assessment of the quality of ANC in the study setting, but focused on evaluating the implementation of the counselling component of routine ANC. The Hawthorne effect, that is, the possibility that some providers modified their behaviour by spending more time with their clients than they would have done in the evaluator's absence, may have influenced the study findings, given that providers in both study arms knew that they were being assessed. In a study on the quality of ANC in Tanzania, Boller et al. (2003) found that providers delivered free ANC services to women who, in the absence of the researchers, were usually told to pay for some services. Ngorongoro is a remote district and consists of a predominantly pastoralist population. Care-seeking behaviours in the district are likely to differ from other districts in Tanzania. The relatively low volume of women attending ANC clinics on a given day also allows providers time to give individualized attention to women. This type of individualized care may not be possible in other settings where the number of ANC attendees is higher. The methodological approach used did not allow for blinding of the principal investigator regarding which health units were in the intervention or control arm. His interpretation of the recorded findings might have been influenced by the knowledge of the arm of the study to which the health units were allocated, thereby introducing bias. To reduce this risk, the recorded materials were reviewed by an experienced MCH nurse who was not aware of the health units' allocation to the two study arms, and the final interpretation of the recorded material depended on consensus of the two investigators. The process evaluation commenced a month after the trial implementation and continued for only three months. Repeat assessment at a later date would help determine if results can be sustained in the longer term. Most topics recommended in the ANC guidelines are not routinely discussed during ANC consultations in the study setting. Provider competency and willingness to implement all components of Tanzania's focused ANC model, including all recommended health education/promotion topics, need further examination. 
The need for more training, supportive supervision and monitoring should be addressed to improve the quality of care. The limited time allocated for providing the various components of the focused ANC model in the control arm; the fact that not all recommended health education and counselling topics are being discussed despite prior training on the focused ANC model; and the lack of emphasis on explaining to women the importance of early postnatal care are missed opportunities to realize the full potential of antenatal care to improve obstetrical outcomes in the study setting.The authors declare that they have no competing interests.MM designed the study, collected and analyzed the data, drafted the initial manuscript and reviewed subsequent drafts. JR participated designing the study, developing the data collection tools and reviewing the draft manuscripts at all stages. VF, OMRC and SC participated in designing the study, developing the data collection tools and reviewing the initial and final manuscripts. All authors approved the final version of the manuscript.The pre-publication history for this paper can be accessed here:http://www.biomedcentral.com/1471-2393/11/64/prepub"} +{"text": "Angiotensin II and nitric oxide (NO) can modulate the sensitivity of the TGF mechanism. However, the interaction among these substances in regulating the TGF resetting phenomenon has been debated. Studies in isolated perfused AA have shown a biphasic response to accumulating doses of adenosine alone. In the nanomolar range adenosine has a weak contractile effect (7%), whereas vasodilatation is observed at high concentrations. However, a synergistic interaction between the contractile response by adenosine and that of angiotensin II has been demonstrated. Adenosine in low concentrations strongly enhances the response to angiotensin II. At the same time, angiotensin II in physiological concentrations increases significantly the contractile response to adenosine. Moreover, addition of a NO donor (spermine NONOate) to increase NO bioavailability abolished the contractile response from combined application of angiotensin II and adenosine. These mutual modulating effects of adenosine and angiotensin II, and the effect of NO on the response of AA can contribute to the resetting of the TGF sensitivity.Adenosine, via activation of A The tubuloglomerular feedback (TGF) is a negative-feedback system operating within the juxtaglomerular apparatus that can regulate glomerular filtration rate (GFR) by changing arteriolar resistance and hence blood flow and pressure into the glomerular capillaries. In this control system the tubular load to the distal parts of the nephron is detected via changes in tubular sodium chloride concentration at the macula densa site. This information is then used to determine the contractile state of the afferent arteriole (AA) that is the main effector link of this controller. The sensitivity and reactivity of the TGF system can be modulated via several different factors and via those changing the effector response. Exactly where and how this modulation of the TGF response occurs has not been clear. Recent work from our laboratory has indicated that this modulation to some extent can be carried out by the arterioles themselves.Figure 1 and A2 receptors are expressed on afferent arterioles, and can regulate preglomerular resistance. 
Adenosine in physiological concentrations constricts afferent arterioles via a prominent effect on purinergic A1-receptors and increasing concentrations of adenosine completely abolished the contractile response to angiotensin II and adenosine with L-NAME further amplified the contractile response when added to the combined solution of angiotensin II , oxidative stress (increased) and NO bioavailability (reduced).We suggest that these interactions of vasoactive substances on the afferent arteriolar contractions can explain at least a part of the phenomenon of resetting of the TGF by angiotensin II, ROS and NO. Increased arteriolar reactivity and TGF responses have been described in several models for hypertension, which may be associated to the abnormal regulation of renal angiotensin II The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "A 34-year-old right-hand-dominant man sustained an electrical injury to his left hand while handling a 450-volt cable. He presented with deep burns to the first web space and the dorsal aspect of the index finger. The excision of the eschar resulted in exposure of bare tendons .What is the initial management of an electrical burn?What is the management of the hand injury?What are the reconstructive options?A first dorsal metacarpal artery flap was performed on this patient. What are the common advantages and pitfalls?Electrical injury often causes deep burns involving underlying soft tissue and bone. A careful history should be taken noting voltage and current involved, duration of contact with source, and related events . Other associated complications include vascular and neurologic injuries, fracture, subluxation of joints, rhabdomyolysis and myoglobinuria, renal failure and cardiac arrhythmias.The general appearance of the hand is noted. Any deficit in range of motion, sensation, and regular neurovascular assessment is carried out. Whether or not immediate and definitive reconstruction should be attempted is clearly dependent on the circumstances of the accident, health of the patient, and consideration taken of the zone of injury.,6Rapid resurfacing of the wound is essential to enable early mobilization and optimum recovery of hand function. Early reconstruction is favored when excision of all necrotic tissue is completed and allows for good wound coverage.5The first dorsal metacarpal artery (FDMA) flap has become popular for this injury. This is an axial pattern skin flap extending proximally from the level of the metacarpophalangeal joint and distally to the level of proximal interphalangeal joint. The course of the artery is determined by Doppler and then marked. The flap is outlined on the proximal phalanx of the index finger extending between the mid lateral lines. A line is then drawn along the radial border of the second metacarpal in a lazy-S fashion, beginning from the lateral margin of the proximal base of the skin island to the tip of the first webspace. This represents the pivot point and the arc of rotation of the flap . The useSkin flaps are raised on both sides of the incision. Superficial veins that have the same course as the pedicle and the terminal branch of the dorsal sensory branch are incorporated with a cuff of superficial subcutaneous tissue. The fascia is then incised on the radial half of the first dorsal interosseous muscle proceeding toward the ulnar side in which the FDMA comes into view. 
Dissection proceeds to the periosteum at the dorsal radial edge of the second metacarpal including soft tissue around the FDMA to protect the pedicle. The skin island is elevated just above the extensor hood on the proximal phalanx of the index finger leaving the paratenon intact . The fla"} +{"text": "This issue of the Stem Cell International journal contains papers from many of the leading scientists in the emerging field: ex vivo expansion of hematopoietic progenitor cells into erythrocytes for transfusion.immunology of transfusion and transplantation began with the discovery of the heterogeneity of human blood group antigens by Dr. Karl Landsteiner in 1901 (recognized with a Nobel Prize in 1930). The discovery of clinically relevant infectious diseases transmitted by transfusion played an important role in the development and advancement of virology. The inheritance of certain form of anemias was discovered during blood transfusion practice and led to development of the genetics of human red cell disorders. In the 1940\u20131950s, the establishment of blood banks followed by the development of rigorous donation criteria and standardization of blood manufacturing processes has made transfusion safe and widely available and has provided a paradigm for the development of emerging therapies using ex vivo expansion and differentiation of many cell types. An example of one such therapy is represented by the tumor immunotherapy described by Lapteva and Vera.Blood transfusion, the first form of successful cell therapy and, at least to some, \u201ctransplantation\u201d, was inspired by the discovery of the circulation by Richard Harvey in the 1600s . In vivo functional studies of human red blood cells in animal models will likely allow more complete characterization in many ways [in vivo imaging and cell fate determination of human erythroid cells by labeling the cells before transfusion with a fluorescent reporter gene by retroviral technology.Reprogramming technology is still under development. Therefore, red blood cells expanded lleagues , who havAlthough red blood cells do not have nuclei, their immediate precursors the erythroblasts do. The terminal maturation of erythroblasts into functional red cells requires a complex remodeling process which ends with extrusion of the nucleus and the formation of an enucleated red blood cell . These lAs represented by all the information, data, and in fact vision contained in this issue, we are clearly at the beginning of a rapidly expanding field. The papers herein provide a broad and comprehensive overview of the most relevant areas of research which have been pursued and are needed to advance the field. Still, as state of the art as this issue is presently, the field is moving so rapidly that one may predict that new knowledge will rapidly follow.Anna Rita MigliaccioAnna Rita MigliaccioGiuliano GrazziniGiuliano GrazziniChristopher D. HillyerChristopher D. Hillyer"} +{"text": "The use of lung ultrasound in the detection of pneumothorax is becoming routine in emergency departments and intensive care units in the United States and Europe . The intTo evaluate the sensitivity and specificity of diagnosis of medical students compared with emergency physicians (experts) in identifying pneumothorax by lung ultrasound.n = 40) and emergency physicians (n = 11) with training in emergency medicine and intensive care, called experts, were invited to participate. 
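The group comparison and agreement statistics reported for this ultrasound study in the next paragraph (a Mann-Whitney test on the number of correct diagnoses and a kappa coefficient for inter-rater agreement) could be computed along the following lines. This is an illustrative sketch only: the scores and ratings are invented, SciPy and scikit-learn are assumed rather than the authors' software, and Cohen's kappa is shown for a single pair of raters (agreement across many raters would typically use Fleiss' kappa instead).

```python
import numpy as np
from scipy import stats
from sklearn.metrics import cohen_kappa_score

# Invented example data: number of correct diagnoses out of 20 video clips
# per participant; these are not the study's results.
student_scores = np.array([17, 18, 16, 19, 15, 18, 17, 16])
expert_scores = np.array([18, 19, 17, 18, 16, 19])

# Non-parametric comparison of the two groups (Mann-Whitney U test).
u_stat, p_value = stats.mannwhitneyu(student_scores, expert_scores, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.3f}")

# Agreement between two raters over the same 20 clips
# (1 = pneumothorax present, 0 = absent), summarised with Cohen's kappa.
rater_1 = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1])
rater_2 = np.array([1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1])
print(f"Cohen's kappa = {cohen_kappa_score(rater_1, rater_2):.2f}")
```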
The study subjects were assessed for the correct diagnosis of 20 cases of pneumothorax after training through classroom teaching of lung ultrasound lasting 2 hours addressing the recognition of artifacts in the lung and identification of pneumothorax Lung Sliding Lines B. Prior to training, medical students and emergency physicians had no prior knowledge or practice in emergency ultrasonography. We used video-clips of 10 positive and 10 negative real cases of pneumothorax obtained by an experienced examiner in lung ultrasound. The comparison between the two groups was described by the mean and standard deviation of hits in each group and tested by the nonparametric Mann-Whitney test. The agreement between raters overall and in each group was estimated by the kappa correlation coefficient. The difference between the agreement observers in each group was tested by Z test for proportions.Students of 3 years of medical graduation participating in the module Radiology Emergency Medicine (Students and experts did not have statistically different test scores as shown in Table Medical students and medical experts are able to accurately identify pneumothorax, despite an abbreviated training time with no previous knowledge of ultrasound lung. Therefore the use of a simulation model based on lung ultrasound videos can be implemented in a systematic way to help health professionals and medical students in their training."} +{"text": "Substantial changes in large parts of the developing world have materialised in the last three decades. These are extremely diverse countries with respect to culture, societal values and political arrangements, but sharing one feature - prevalent poverty and limited resources to protect the health of individuals. The control of emerging chronic diseases in low-resource countries is a formidable challenge. For this reason any intervention should be kept logistically simple and incorporated into a general plan aiming at building gradually the infrastructure that is necessary to bring care to the population at large. The present contribution summarizes some of the priorities in cancer prevention in developing countries and the underlying evidence base, and addresses some of the challenges. Many countries in sub-Saharan Africa still struggle with endemic tuberculosis, malaria, AIDS, nutritional deficiencies and perinatal conditions that cause high rates of premature death and permanent disability, a disease burden at least one order of magnitude greater than cancer. But even where substantial economic development has taken place, as in Thailand, Malaysia, China, India or Brazil, it has failed to benefit society at large; rather, new health threats are on the rise with limited control of the long-term burden of prevalent diseases. Moreover, the lack of comprehensive planning of health systems has led to wider inequalities in access to health care.Cancer control encompasses a package of diverse interventions aiming aChoices should be driven primarily by the quantification of the problem combined with the feasibility and cost of different interventions. Means to monitor the occurrence of cancer in developing countries are still very limited, therefore planning relies largely on estimates. Based on the comprehensive GLOBOCAN2008 [Smoking of commercial cigarettes used to be uncommon in developing countries where tobacco smoking is a recent aspect of Westernization of life styles. 
While interventions to reduce the habit in rich countries is now showing positive results, the tobacco industry is pursuing new markets in the developing world ,4. Of alImmunization of infants against hepatitis B virus (HBV) is probably the second most cost-effective option in regions where the infection is still endemic. Lorenzo Tomatis at the International Agency for Research on Cancer saw the potential of such public health measure in the early eighties when he promoted the establishment of The Gambia Hepatitis Intervention Study (GHIS). The main objective of the study was to prove the feasibility of such interventions and quantify the efficacy of immunization in preventing chronic liver diseases and hepatocellular cancer in an African country. Several more years of observation are needed to measure the full impact of the intervention; but high coverage and reduced incidence of chronic hepatitis have been achieved promisinOne of the most celebrated successes of cancer research is the recognition that virtually all cervix cancers are caused by certain types of the Human Papilloma Virus (HPV) with types 16 and 18 accounting for about 80% of the burden [Compared with classical cytology-based screening, the HPV technology offers a valid and possibly more cost-effective strategy in secondary prevention of cervix cancer in low-resource settings, because the sensitivity and specificity of available HPV tests in exfoliated cells are much more reproducible than those of cytology which strongly depends on human expertise and skill . In factIn a large randomised trial in Kerala, India, Sankaranayan and colleagues assessedThere is no single strategy to develop cervix cancer control programmes from scratch. With a careful analysis of the size of the problem, feasibility, costs and expected outcomes against the background of existing infrastructure, plans can be gradually built from a minimal level \u2014e.g. one life-time HPV-based screening test with timely treatment accessible to all women from age 35 years\u2014 and expanded on the medium- or long-term with immunization programmes and repeated screening. The condition for any intervention to be successful and cost-effective is to reach high coverage of the target population; therefore, much attention must be paid to the logistics of how the services are delivered in order to ensure access and high compliance.Other preventive interventions that are the object of much research and activities in the West focus on nutritional habits and energy balance, clearly an increasing problem in emerging economies as shown by rising rates of diabetes . TackliThe increased risk of breast cancer in emerging economies is seen as the direct expression of economical development; yet, our understanding of the modifiable causes of the disease is still very limited leaving little room for primary prevention beyond avoiding excessive body weight. Improving access to timely treatment of early palpable tumours is likely to result in a greater benefit to the population. 
Etiological research in populations still at low or intermediate risk for the disease offer instead powerful opportunities to test hypotheses based on observations made in the high-risk Western world.Finally, an area that is often overlooked among preventive actions in low-resource countries is the uncontrolled use of carcinogens in industrial processes and economical activities, often imported from technologically advanced economies where regulations impose uses that are safe for workers and the environment, but less profitable. Any attempt to estimate the magnitude of the current and future disease burden due to potential carcinogens newly introduced under uncontrolled conditions would be highly controversial as lack of regulations implies also lack of monitoring of the amount, usage and disposal of hazardous substances. Nonetheless, whatever the size of the problem, ethical principles impose the inclusion in any cancer control programme of actions to prevent occupational exposure and environmental contamination with carcinogens. As a first step in this direction both rich and poor countries should be encouraged to sign up to the Rotterdam Convention whose objectives are to promote shared responsibility in the international trade of hazardous chemicals and to contribute to their sound use . Bodies In rich countries the combination of early detection and new treatments that can improve disease outcomes have contributed to a modest but constant decline in cancer mortality rates that started in the 1980s . The maiThe author declare no competing financial or non-financial interests."} +{"text": "In his book Marc Rodwin, Professor of Law at Suffolk University, analyses the regulation of medical interests. He looks more precisely at doctors\u2019 conflicts of interest that can have an influence on their therapeutic choices which are not necessarily in the patients\u2019 best interest. While society and regulators usually expect doctors to be objective in their therapeutic choices, regulating (or the lack thereof) entrepreneurship of private practitioners, their ownership of medical facilities, their type of employment , and forms of remuneration for medical services can create incentives for preferring one medical treatment over another. The initial chapter of the book illustrates these choices by presenting fictional patients from France, the US and Japan who share the same diagnosis, but receive a variety of treatments depending on the economic and regulatory incentive structure of medical practice.The book\u2019s main research question is a normative one, namely how regulation can minimize conflicts of interest between the patients\u2019 interest and physicians\u2019 entrepreneurial goals. On the basis of a political economic perspective, the book sets out to analyse the interplay between several variables: medical associations\u2019 oversight and medical self-regulation, market competition mechanisms, insurance companies\u2019 influence over medical practice, and the state\u2019s practice of regulation. This analytical framework is developed in chapter 1.Chapters 2 and 3 deal with France. The development of the relationship between the organized medical profession, insurance companies and the role of the state are traced back from the medieval times onwards in chapter 2. The last section of the chapter also looks at the influence of European law. Chapter 3 analyses how France aims at avoiding conflicts of interest. 
The author shows the unusual strength of the French Medical Association and how certain relationships between the pharmaceutical industry and doctors are still tolerated. Rodwin concludes that France only shows limited success in dealing with doctors\u2019 conflicts of interest.Chapters 4\u20137 form the core of the book and deal with the US. Rodwin distinguishes four phases of the development of the medical economy showing a high variation in tackling medical conflicts of interest. Chapter 4 covers the period before 1950, chapter 5 the period until 1980 and chapter 6 the logic of medical markets from the 1980s onwards. Chapter 7 deals with the ways in which the US cope with conflicts of interests today. The author shows how insurance companies have come to set incentives to reduce medical services and thus create conflicts of interest. Also, the market orientation of the American healthcare system has reinforced ties between physicians and the pharmaceutical industry. Rodwin recommends federal regulation of medical care and health insurance, in order to develop a coherent approach to coping with conflicts of interest.The following chapters focus on Japan. Chapter 8 depicts the historical development of Japan\u2019s medicine and chapter 9 analyses how Japan copes with conflicts of interest. Rodwin exposes the coexistent roles of Japanese doctors as private and hospital practitioners leading to a situation in which Japanese patients stay longer in hospitals than in other nations and also receive more drugs for medical treatment.Chapter 10 (\u2018Reforms\u2019) deals with the implications of the previous findings for regulation. Neither market competition nor pure public employment of physicians alone does necessarily mitigate conflicts of interests of doctors. Hence, both should coexist. Some of the other suggested solutions are strict regulation of entrepreneurship of private practitioners, of ties between doctors and the pharmaceutical industry, and avoiding intervention of insurance companies in medical standard setting.Chapter 11 is a more sociology-inspired chapter dealing with the concept of professionalism of physicians and its role to play in reducing conflicts of interest. Rodwin argues that the state, doctors and market mechanisms alike should have authority to regulate conflicts of interests, thus effectively providing for the possibility of \u2018checks and balances\u2019 between them.The book provides overall a very detailed analysis of the historical and structural sources for conflicts of interest in the three countries presented. The chapter on professionalism complements the political economic perspective and avoids an overly functionalist view of coping with conflicts of interests. The detailed analysis shows that the state and insurance funds are also no \u2018neutral\u2019 actors and develops therefore to the convincing conclusion that conflicts of interest are best dealt with by a mix of market-driven, professional and public regulation. The detailedness of some chapters complicates however the readability and leaves the reader with the question if the same conclusions and recommendations could not have been developed with a more structured presentation of some developments. from a regulatory perspective. These interests are not necessarily congruent with the individual patient\u2019s interest of receiving the best medical care. 
From a regulator\u2019s perspective patients are one interest group among others, even if they are certainly one of the most important groups given their role as future electors. Yet, their interest has to be reconciled with other legitimate interests. Since the medical profession is the main object of interest for the book, it would also be desirable to inquire about the belief structures of physicians about what their own and what patients\u2019 interests are. Using regulatory incentive structures alone does not necessarily explain why certain doctors themselves criticize the ties between the pharmaceutical industry and their profession, even though the same regulatory incentive structures apply.While the \u2018patients\u2019 interest\u2019 plays a key role for analysis, the book falls short of defining what the patients\u2019 interests would beThanks to its comprehensive analysis of the three countries and their different regulatory frameworks this book is not only useful for legal or economic scholars/experts who are interested in dealing with conflicts of interest, but also for those who would like to study the healthcare systems of France, Japan and the USA. It is also useful as a starting point for sociologists and political scientists for studying the role of the medical profession."} +{"text": "The last decade has seen an exponential increase in research directed to the field of regenerative medicine aimed at using stem cells in the repair of damaged organs including the brain. The therapeutic use of stem cells for neurological disorders includes either the modulation of endogenous stem cells resident in the brain or the introduction of exogenous stem cells into the brain. The final goal of these attempts is to replace damaged dysfunctional cells with new functional neurons. Nevertheless, there are multiple concerns regarding the therapeutic efficacy of the cellular replacement approach both from endogenous and exogenous sources. Indeed the extensive heterogeneity of neuronal subtypes in the brain makes it difficult to drive stem cells to differentiate to specific neuronal subtypes pharmacological or genetic modulation of endogenous neural stem cells (NSCs) and (ii) transplantation of exogenous stem cells.NSCs resident in the adult brain are characterized by the ability to self-renew their own pool through cell proliferation and by the potential to differentiate into the three main cell types of Central nervous system (CNS): neurons, astrocytes, and oligodendrocytes Gage, .Active neurogenesis occurs throughout adulthood in primates and various mammals including; rodents, rabbits, monkeys, and humans increase in the number of newborn neurons in the neurogenic niches as compared to physiological conditions, (ii) migration of new neurons from the neurogenic niches to the damaged area, or (iii) production of the new neurons from local progenitor cells in the vicinity of the damaged brain. Indeed, various reports have demonstrated the occurrence of these three phenomenona following brain damage. 
Specifically, it has been shown that neurogenesis can be upregulated in neurogenic niches in response to different brain insults including ischemia directed migration of the new neurons to the proper site of integration and (ii) directed neurite-growth over long distances, which have not been demonstrated in the adult brain outside the neurogenic niches.Therefore, the introduction of new neurons directly to the site of damage in the brain either by exogenous or endogenous sources faces major challenges such as differentiation to the correct subtype and integration. This leaves to date the newborn neurons in the neurogenic niches as the only cell type shown to be able to functionally integrate in the adult brain circuitry.Consequently, one fundamental question is how we can make use of the reactive pool of neural precursor cells residing in the neurogenic niches to take over the function of a remote damaged brain region. In order to address this question it will be important to gain knowledge from the plastic properties of the older brothers of neural stem cells, the postmitotic neurons.Postmitotic neurons exhibit a certain degree of plasticity following brain ischemia and traumatic brain injuries. Indeed, despite the permanent structural damage and cellular loss, functional recovery is observed to a certain extent following brain damage (Chollet et al., Neuroplasticity is defined as the brain's ability to reorganize itself by forming new functional synaptic connections throughout life. Continuous remodeling of neuronal connections and cortical maps in response to our experiences occurs to enable neurons to adapt to new situations Taupin, . ReorganDespite the consistent reports confirming circuitry reorganization in the brain following injury, the molecular and electrophysiological mechanisms controlling this fascinating phenomenon remain still elusive.Another unexplored aspect of compensatory plasticity includes the question of whether newborn neurons are involved in the reorganization of brain circuitry that occurs following brain injury. However, because of their peculiar cellular and plastic properties, we believe that newborn neurons in the neurogenic niches are important players in this phenomenon.Indeed it has been shown that newly generated neurons, as compared to mature granule cells, exhibit a lower threshold for induction of LTP (Schmidt-Hieber et al., Importantly, following brain ischemia, newborn neurons react with a plastic response enhancing not only their proliferation rate but also exhibiting increased spine density and dendritic complexity as compared to resident hippocampal neurons (Liu et al., So far it has not been investigated whether this plastic response includes changes in the pattern of brain connectivity of newborn neurons. However the recent application of retrograde monosynaptic tracing to study the connectome of the newly generated neurons (Deshpande et al., The next step following the demonstration of the involvement of newborn neurons in brain reorganization would be to increase their plastic potential by increasing their number. 
This may be achieved, taking advantage of the increase in the proliferation rate of NPCs that normally occurs upon brain damage (Liu et al., Previous work has described a number of intrinsic and extrinsic factors required for newborn neurons survival (see Table The vast amount of information that have been gathered in the recent years about the use of neural stem cells in brain repair indicates that cellular replacement alone cannot lead to effective restoration of function due to the complex anatomical, histological, and functional organization of the brain.In this perspective, due to their plastic potential and their innate ability to functionally integrate in brain circuits, newborn neurons produced inside the neurogenic niches are the most suitable targets for brain repair. Moreover, the importance of neurogenesis-related plasticity is further supported by the finding that hippocampal neurogenesis occurs in humans throughout adulthood with a modest decline during aging (Spalding et al., In this scenario strategies that enhance the survival and the plasticity of newly generated neurons in the dentate gyrus may be the most effective to foster the functional reorganization of brain circuits following injury."} +{"text": "The data from DNA microarrays are increasingly being used in order to understand effects of different conditions, exposures or diseases on the modulation of the expression of various genes in a biological system. This knowledge is then further used in order to generate molecular mechanistic hypotheses for an organism when it is exposed to different conditions. Several different methods have been proposed to analyze these data under different distributional assumptions on gene expression. However, the empirical validation of these assumptions is lacking.Best fit hypotheses tests, moment-ratio diagrams and relationships between the different moments of the distribution of the gene expression was used to characterize the observed distributions. The data are obtained from the publicly available gene expression database, Gene Expression Omnibus (GEO) to characterize the empirical distributions of gene expressions obtained under varying experimental situations each of which providing relatively large number of samples for hypothesis testing. All data were obtained from either of two microarray platforms - the commercial Affymetrix mouse 430.2 platform and a non-commercial Rosetta/Merck one. The data from each platform were preprocessed in the same manner.a priori assumption of any of these distributions across all probe sets is not valid. The pattern of null hypotheses rejection was different for the data from Rosetta/Merck platform with only around 20% of the probe sets failing the logistic distribution goodness-of-fit test. We find that there are statistically significant (at 95% confidence level based on the F-test for the fitted linear model) relationships between the mean and the logarithm of the coefficient of variation of the distributions of the logarithm of gene expressions. An additional novel statistically significant quadratic relationship between the skewness and kurtosis is identified. 
Data from both microarray platforms fail to identify with any one of the chosen theoretical probability distributions from an analysis of the l-moment ratio diagram.The null hypotheses for goodness of fit for all considered univariate theoretical probability distributions are rejected for more than 50% of probe sets on the Affymetrix microarray platform at a 95% confidence level, suggesting that under the tested conditions The current biological literature makes extensive use of gene mRNA expression data from experimental systems called gene chips/gene micro-arrays. These data are used to infer genomic level conclusions. For example, to infer the response of an organism or cell culture under treatment or perturbation. Microarrays as an experimental system are very valuable in that they provide a genome-wide picture genes). Unfortunately, because of costs of collecting microarray data, the number of samples per treatment is quite small (~2-10).The data from microarrays are noisy. There are a number of reasons to expect variability in the measurements of the expressions of the genes in mammalian organisms. These include biological causes, or the noise associated with the steps involved in the measurement of the gene expression. Depending on the question that the researcher is trying to answer, he/she would have to control for many of these sources of variation of gene expression. This paper is interested in understanding the variation in the expression data after the known/reported sources of variation have been controlled for.The biological variability could be due to genetic or non-genetic factors studies Among the non-genetic factors explaining the variation in gene expression include the gender of the organism - have been termed intrinsic, extrinsic and pathway-specific or global noises being normalized and preprocessed to get estimates of gene expressions. RNA are isolated from the cells obtained from the tissue sample has been drawn from the organism. The RNA are then subjected to the process of reverse transcription (RT) to obtain cDNA that are then subjected to the vitro transcription (IVT) process to obtain cRNA using polymerases. The cRNA are then hybridized to probes on the microarray platform . The facMost of the current journals require the microarray data to be deposited on a database if thesOne of the main areas where microarray data has had its application is in the identification of differentially expressed genes across varying treatment conditions. The approaches used could be classified based on whether they use parametric assumptions about the underlying distribution the gene expression or not. Kerr et al used a AThe above paragraph describes a snapshot of the analysis done using gene expression data, some of the analyses make use of distributional assumptions and some do not. Since distributional assumptions are made frequently, it appears prudent to validate this assumption. As mentioned above Newton et al and KerrSince 2002 a significant amount of data from sources like the Gene Expression Omnibus GEO and ArraThis paper focuses on identifying and validating empirical distribution fits of genome-wide gene expressions as measured by microarrays. In addition to the normal distribution we empirically tested the empirical fit for a number of well established probability distributions.We analyzed four microarray data sets from the GEO database . 
They weThe microarray samples used in the analyses in this manuscript were based in part on six separate data sets were analyzed in a number of papers -54 whereFor each of the four Affymetrix data sets used, raw CEL format data from GEO was normalized using the R Bioconductor implemenFor the data set of 6219 microarray samples RAM memory limitations prohibit normalizing all samples together. To normalize these, the following steps were followed:1. The samples were partitioned into sets of 75.2. The gene expression data for each of these 75 samples were obtained after running the GCRMA routine.Unpublished manuscript 2001.).3. Using the data from step 2, the gene expression across the whole data set of 6219 samples were normalized with respect to each other using the quantile normalization method as described in with these active genes were also chosen. This combined set of genes is what is used in the analysis in this paper. In the interest of having a relatively large number of genes for analysis, further criteria used by Yang et al [Yang et al that useng et al like preFor each of the data sets \"Craniofacial\", \"Liver\" and \"Brain\", \"Housekeeping\", \"Male\" and \"Female\", Kolmogorov-Smirnov (KS) and Anderson-Darling (AD) hypothesis tests were used to test distributional assumptions. Both test the null hypothesis that a set of data comes from a given distribution, with distributional parameters possibly unknown. The AD test is more sensitive to differences in the tails of the data than the KS distribution. In testing for Normality, the AD test is known to be more powerful than the KS test . Normal,F, each of identified probe sets were tested in the following manner:For each of the data sets and each distribution F1. Use maximum likelihood estimation (MLE) to estimate 2. Use the KS and AD tests at the 90% and 95% to test whether the probe data comes from For each of these distributions, MATLAB version R2009B Statistics Toolbox MLE functions were used. Critical values can be found in .In addition, for housekeeping genes, KS and AD tests were used to test for gamma and Pareto distributions across a set of 6219 samples. For these distributions, tables of critical values do not exist, but a method for generating p-values for KS tests with unknown distribution parameters from was usedThe l-moment ratio diagram ,64 of l-The \"Male\" and \"Female\" data were fitted to mixture of normal distributions using the \"mixdist\" package in the RThe Kruskal Wallis test was used on the logarithm of expression of the probe sets in each of the \"Craniofacial\", \"Liver\" and \"Brain\" data sets in order to identify those that were most likely unaffected by any of the conditions involved in the generation of these data sets. The results in Table The results of the goodness of fit Anderson-Darling distribution tests for the \"Male\" and \"Female\" data sets showed different characteristics from those of the other data sets. Only around 43-46% of the probe sets rejected the normal hypothesis (as compared with 72-82% for the previous data sets). The logistic distribution was rejected less often than the previous data sets (21-24% as compared with 69-79%). The fit of the extreme value distributions were equally bad for both sets of data. 
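To make the per-probe-set testing procedure described above concrete, the sketch below fits each candidate distribution by maximum likelihood and applies a Kolmogorov-Smirnov test, in the spirit of the MATLAB Statistics Toolbox workflow reported in the Methods. It is a minimal illustration on placeholder data, not the study's actual code; the candidate list is restricted to distributions available in SciPy, and, as the Methods note, p-values computed after estimating parameters from the same data are only approximate unless simulated critical values are used.

```python
# Minimal sketch (not the study's MATLAB/R code) of the per-probe-set procedure
# described above: fit each candidate distribution by maximum likelihood, then
# run a Kolmogorov-Smirnov goodness-of-fit test at a chosen confidence level.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
log_expr = rng.normal(loc=8.0, scale=1.2, size=(100, 60))  # placeholder: probe sets x samples

candidates = {
    "normal":   stats.norm,
    "logistic": stats.logistic,
    "gumbel_r": stats.gumbel_r,   # extreme value, right-skewed
    "gumbel_l": stats.gumbel_l,   # extreme value, left-skewed
}

alpha = 0.05
reject = {name: 0 for name in candidates}

for probe in log_expr:
    for name, dist in candidates.items():
        params = dist.fit(probe)                     # maximum likelihood estimates
        # Caveat (as in the Methods): estimating parameters from the same data
        # makes the tabulated KS p-value approximate; the study used simulated
        # critical values where no published tables exist.
        _, p_val = stats.kstest(probe, dist.cdf, args=params)
        if p_val < alpha:
            reject[name] += 1

for name, n_rej in reject.items():
    print(f"{name:9s}: {n_rej / len(log_expr):.1%} of probe sets rejected at alpha={alpha}")
```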
This difference in characteristics of the goodness of fit test results between two different microarray platforms indicates the contribution of technology and/or of the normalization methodology to the distribution characteristics of microarray data.2 values for different polynomial fits . So the mean values of skewness and kurtosis are plotted by the red lines in Figure The dependence of the mean on the higher order product moments are shown in Figures K) and the skewness (denoted by S) have to satisfy the following inequality ,The data in Table Further validation of the lack of fit of gene expression to any of the standard theoretical univariate probability distributions can be seen in the L-moment ratio diagram in Figure The samples from the \"Male\" and \"Female\" datasets could be considered more or less homogenous with respect to sex, tissue, diet and experimental conditions. One reason we could be seeing poor fits to standard distributions could be that there are different modes to the distribution of the expression of a given gene reflecting the genetic variation in the F2 cross animals or a stochastic network influence of its expression . Hence iIn the past several years, there has been an explosion in amount of quantitative biological data either in terms of transcriptomics, sequencing data, genetic structure variation, proteomics or metabolomics. DNA microarrays have been important and valuable resource for understanding perturbations to biological systems in terms of identifying affected gene expressions. The standard statistical methods are being either directly used or modified to work with gene expression data. Unfortunately, only a small number of replicate samples per treatment (2-10) are used for analysis owing to the cost of the experimentsal system. This point plus the fact that the probability distributions of gene expressions as measured by these arrays were not characterized leads one of logically question the use various statistical methods that are based on distributional assumptions. Heuristics are being proposed that attempt to relax the reliance on this distributional assumption. One example of this would be the method of jointly using the p-value from a two sample t-test along with the gene expression fold-change to identify differentially expressed genes. Alternately, there is an increased use of non-parametric methods or permutation-based methods ,38.The essential question that we address in this manuscript is whether the distribution of the logarithm of gene expression as measured by DNA microarrays can be approximated by any of the standard theoretical univariate probability distributions. The results in Table The observed distributional characteristics of gene expression data in this manuscript suggest either the need for the use of non-parametric statistical methods or a need to develop newer statistical/mathematical approaches that are capable of and are optimal for working with these kinds of distributions.Because of the nature of the data used in this paper, we are unable to separate the contribution to the variation of the data due to biological reasons from those induced by the microarray technology. The noise we observe is probably the result of the convolution of these two factors. 
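The moment relationships reported above can be reproduced schematically as follows: per-probe-set means, coefficients of variation, skewness and kurtosis are computed, a linear model is fitted between the mean and the logarithm of the CV, and a quadratic model between skewness and kurtosis. The data matrix is a placeholder, and the bound checked at the end (kurtosis >= skewness^2 + 1, using the non-excess kurtosis) is the classical moment inequality that the text appears to invoke.

```python
# Sketch of the moment relationships discussed above, on placeholder data:
# per-probe-set mean, CV, skewness and (non-excess) kurtosis, a linear model
# between the mean and log(CV), and a quadratic model between skewness and
# kurtosis. The classical bound kurtosis >= skewness**2 + 1 also holds for the
# empirical (biased) moment estimators, so the final check should report zero.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
log_expr = rng.gamma(shape=9.0, scale=1.0, size=(500, 60))   # placeholder log expression

mean = log_expr.mean(axis=1)
cv = log_expr.std(axis=1, ddof=1) / mean
skew = stats.skew(log_expr, axis=1)                          # biased (moment) estimator
kurt = stats.kurtosis(log_expr, axis=1, fisher=False)        # non-excess kurtosis

b1, b0 = np.polyfit(mean, np.log(cv), deg=1)                 # log(CV) ~ mean
c2, c1, c0 = np.polyfit(skew, kurt, deg=2)                   # kurtosis ~ skewness (quadratic)

print(f"log(CV) ~ {b0:.3f} + {b1:.3f}*mean")
print(f"kurtosis ~ {c0:.3f} + {c1:.3f}*S + {c2:.3f}*S^2")
print("probe sets violating K >= S^2 + 1:", int(np.sum(kurt < skew**2 + 1)))
```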
In light of the increasing use of newer technologies like those based on Next Generation Sequencing, the analyses of the empirical probability distribution of gene expressions from five publicly available data sources with a relatively large number of samples have been described in this manuscript. The failure of the distributions to follow any of the known theoretical univariate probability distributions has been demonstrated, though the data suggest consistent relationships between the different moments of the distributions. These moment relationships should motivate the development of Bayesian methods with appropriately chosen priors. RT and SM designed the study. LdT carried out all the goodness-of-fit analyses and contributed to the write-up. XC contributed to checking the annotations of the entire set of microarray data samples used and also to the interpretation of the results. RT performed the l-moment analysis and drafted the manuscript. SM also contributed to the drafting of the manuscript. All authors read and approved the final manuscript. Table S8: Detailed description of the data in Table. Table S3: List of housekeeping genes analyzed. Table S1: GEO microarray samples. Table S2: GEO microarray samples. Table S6: Results of goodness of fit tests for the Anderson-Darling tests for the six analyzed probability distributions obtained by varying the cutoff for the Kruskal-Wallis test. Table S4: Best fit distribution Kolmogorov-Smirnov test results for \"Craniofacial\", \"Liver\", \"Brain\", \"Male\" and \"Female\" data sets. Table S5: Probe set level best fit distribution results for the Anderson-Darling (AD) and Kolmogorov-Smirnov (KS) tests at 90 and 95 percent confidence levels. Figure S1: Diagnostic plot for the linear model between the logarithm of the coefficient of variation (CV) and the mean of the distribution of the logarithm of gene expression for the \"Craniofacial\" data set. Note the residual plots (subplots (b) and (c)) also provide the Pearson correlation (denoted by \u03c1) between the absolute value of the residuals and the mean and logarithm of the CV, respectively. Figure S2: Diagnostic plot for the linear model between the logarithm of the coefficient of variation (CV) and the mean of the distribution of the logarithm of gene expression for the \"Liver\" data set. Note the residual plots (subplots (b) and (c)) also provide the Pearson correlation (denoted by \u03c1) between the absolute value of the residuals and the mean and logarithm of the CV, respectively. Figure S3: Diagnostic plot for the linear model between the logarithm of the coefficient of variation (CV) and the mean of the distribution of the logarithm of gene expression for the \"Brain\" data set. Note the residual plots (subplots (b) and (c)) also provide the Pearson correlation (denoted by \u03c1) between the absolute value of the residuals and the mean and logarithm of the CV, respectively. Figure S4: Diagnostic plot for the quadratic model between the kurtosis and the skewness of the distribution of the logarithm of gene expression for the \"Craniofacial\" data set. Note the residual plots (subplots (b) and (c)) also provide the Pearson correlation (denoted by \u03c1) between the absolute value of the residuals and the skewness and kurtosis, respectively. Figure S5: Diagnostic plot for the quadratic model between the kurtosis and the skewness of the distribution of the logarithm of gene expression for the \"Liver\" data set. Note the residual plots (subplots (b) and (c)) also provide the Pearson correlation (denoted by \u03c1) between the absolute value of the residuals and the skewness and kurtosis, respectively. Figure S6: Diagnostic plot for the quadratic model between the kurtosis and the skewness of the distribution of the logarithm of gene expression for the \"Brain\" data set. Note the residual plots (subplots (b) and (c)) also provide the Pearson correlation (denoted by \u03c1) between the absolute value of the residuals and the skewness and kurtosis, respectively. Figure S7: l-moment ratio diagram for the four data sets as in Figure 6 but including all probe sets, including those with mean expression less than 6. Figure S8: l-moment ratio diagram for the four data sets as in Figure 7 but including all probe sets, including those with absolute value of mean of log expression ratio less than 0.05. Table S7: Spearman rank correlation between the mean and standard deviation of the measured data for \"Craniofacial\", \"Liver\", \"Brain\", \"Male\" and \"Female\" data sets."} +{"text": "We analyzed the dynamics of cumulative severe acute respiratory syndrome (SARS) cases in Singapore, Hong Kong, and Beijing using the Richards model. The predicted total SARS incidence was close to the actual number of cases; the predicted cessation date was close to the lower limit of the 95% confidence interval."} +{"text": "We have used high-energy x-ray scattering to map the strain fields around crack tips in fracture specimens of a bulk metallic glass under load at room temperature and below. From the measured strain fields we can calculate the components of the stress tensor as a function of position and determine the size and shape of the plastic process zone around the crack tip. Specimens tested at room temperature develop substantial plastic zones and achieve high stress intensities.
This allows some metallic glasses to achieve high values of fracture toughness despite the lack of macroscopic ductility In any material there is a competition between flow (driven by shear stresses) and cleavage , the outcome of which influences the stress intensity at the crack tip and determines the fracture toughness Several groups have previously examined the plastic zone in metallic glasses using shear band patterns on the surface in situ high-energy synchrotron x-ray scattering to map out the strain field around the crack tip as a function of applied stress intensity at various temperatures well below the glass transition. This technique allows us to probe the entire volume of material, including the plane-strain region in the interior of the specimen around the crack tip. An analysis of the strain maps (and corresponding maps of stress) allows us to determine the size and shape of the plastic process zone around the crack tip. We observe that the extent of the plastic zone increases, as expected, with applied stress intensity. The plastic zone is reduced at cryogenic temperatures, an observation that is correlated with a reduction in fracture toughness.The present work is motivated by a desire for a deeper understanding of the fracture behavior of metallic glasses, and in particular the influence of stress and strain fields on competing mechanisms of deformation and fracture near the crack tip. To this end, we have conducted fracture toughness tests on specimens of a Zr-based metallic glass using Although plastic deformation of metallic glasses has been studied extensively, fracture has been less thoroughly investigated even though it is obviously of central importance for structural applications. Much of the early experimental work, limited as it was to studies of thin specimens not well suited to mechanical testing, focused on phenomenology and in particular on the development of the characteristic \u201criver\u201d patterns observed on fracture surfaces, and on the tendency for annealing to foster brittle behavior due to either structural relaxation or devitrification. A more fundamental understanding of fracture of metallic glasses developed from the work of Spaepen The development of bulk metallic glasses has enabled new studies with specimens appropriate for fracture mechanics experiments, and in particular proper plane-strain fracture toughness measurements. Gilbert and coworkers On the basis of this work it is clear that the strain state around the crack tip is of central importance for understanding fracture of metallic glasses. To probe the strain state in a spatially-resolved way we employ high-energy x-ray scattering. Although there is a long history of using scattering techniques to measure elastic strains in crystalline materials An important feature of the high-energy x-ray technique is that it allows us to make a direct assessment of the strain state in a thick specimen, including the interior region ahead of the crack tip that is in a state of plane strain. 
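As an illustration of the strain-mapping approach introduced above (and detailed later in this record's Methods), the sketch below recovers in-plane strain components and principal strains from azimuthal shifts of the first scattering maximum. The peak positions, the relation eps = (q0 - q)/q, and the least-squares fit of the strain-transformation equation are generic assumptions for the purpose of the example, not the authors' actual calibration or analysis code.

```python
# Illustrative reconstruction of in-plane strains from azimuthal shifts of the
# first scattering maximum (generic example, not the authors' calibration):
# eps(phi) = (q0 - q(phi)) / q(phi), fitted to the normal-strain transformation
# eps(phi) = exx*cos^2(phi) + eyy*sin^2(phi) + exy*sin(2*phi).
import numpy as np

q0 = 2.68                                        # unstrained peak position (1/Angstrom), assumed
phi = np.deg2rad(np.arange(0.0, 180.0, 15.0))    # azimuthal bins on the detector
q_phi = q0 * (1.0 - 0.002 * np.cos(2.0 * phi))   # synthetic strained peak positions

eps_phi = (q0 - q_phi) / q_phi                   # normal strain along each azimuth

A = np.column_stack([np.cos(phi) ** 2, np.sin(phi) ** 2, np.sin(2.0 * phi)])
(exx, eyy, exy), *_ = np.linalg.lstsq(A, eps_phi, rcond=None)

# Principal strains and rotation of the principal axes in the detector plane.
principal, axes = np.linalg.eigh(np.array([[exx, exy], [exy, eyy]]))
theta = np.degrees(np.arctan2(axes[1, 0], axes[0, 0]))
print("exx, eyy, exy:", exx, eyy, exy)
print("principal strains:", principal, "| principal-axis rotation (deg):", theta)
```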
This is in contrast to earlier work that examined the plastic zone using shear band patterns on the surface The stress intensity at fracture from our tests on The degree of scatter in Fracture surfaces for specimens tested at room temperature and low temperature are shown in Using the analysis outlined in the As a check on our results, we compare the stresses determined from the x-ray data with a calculation based on a simple model for the stresses around a crack tip in mode I loading in an infinite, fully elastic body For all four quantities compared (It is particularly interesting to note that directly ahead of the crack tip in there isi.e. for a given stress intensity on a given specimen), and then compile these into plots showing the trends with stress intensity, as shown in With the ability to measure strains and stresses in a spatially-resolved way, we can examine trends that develop as a function of load, temperature, or both, using data collected from multiple specimens. We have found it convenient to look at the maximum value of the von Mises stress, hydrostatic stress, and stress triaxiality from each map that is nominally elastic-perfectly plastic until fracture, with no strain hardening. We would expect a linear increase in the stress components with increasing stress intensity, except for the region immediately around the crack tip where plastic deformation can occur. Only when the size of the plastic zone exceeds the spatial resolution of the measurement would we expect to see the saturation in stress values apparent for The plateau in von Mises stress in The observation of a plateau in the stress components in A quantitative estimate of the plastic zone size can be made by calculating the radius of gyration . In particular, we observe an Our results show that the fracture toughness of amorphous eratures . Based oOur observation that the crack-tip stresses do deviate from s stress and thatOur data clearly show that a reduction in fracture toughness at low temperatures is associated with a reduction in the size of the plastic zone. Part of the explanation for this is that the flow stress of Zr-based metallic glasses increases with decreasing temperature, by amounts on the order of 15\u201320% in size and probTandaiya and coworkers recently proposed that fracture in ductile metallic glasses is not stress-controlled but rather is a strain-controlled process If attainment of a critical strain on an individual band is required for fracture, the number density of shear bands around the crack tip becomes an important consideration because a larger number of bands implies a smaller strain on each individual band, at least on average, for a given overall level of strain. Ravichandran and Molinari Although the bending geometry is not directly applicable to the mode I opening around the crack tip in our experiments, we assume that a similar scaling relationship holds. With increasing temperature the flow stress of metallic glasses decreases significantly slightly . Taken tSeveral groups have reported that tough metallic glasses tend to have high values of However, we also note that rack tip . The sigMore generally, we believe that the x-ray strain mapping technique demonstrated here can be broadly applied to amorphous materials in many contexts. 
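A minimal version of the plastic-zone estimate discussed above might look like the following: build the von Mises equivalent stress from the mapped stress components, flag the pixels where it reaches the flow stress, and characterize the flagged region by its radius of gyration. The synthetic crack-tip stress field, the assumed flow stress of roughly 1.8 GPa, and the choice of the crack tip as the reference point are illustrative assumptions, not values taken from the study.

```python
# Sketch of the plastic-zone estimate discussed above: threshold the von Mises
# stress map at the flow stress and characterize the flagged region by its
# radius of gyration about the crack tip. The synthetic crack-tip field, the
# flow stress (~1.8 GPa) and the reference point are illustrative assumptions.
import numpy as np

ny, nx, h = 200, 200, 20e-6                    # map size and pixel spacing (m)
y, x = np.mgrid[0:ny, 0:nx] * h
tip = (ny * h / 2.0, 0.0)                      # crack tip at mid-height on the left edge
r = np.hypot(y - tip[0], x - tip[1]) + h       # distance to the tip (regularized)

K = 80e6                                       # applied stress intensity (Pa*sqrt(m)), assumed
s_yy = K / np.sqrt(2.0 * np.pi * r)            # schematic mode-I amplitudes
s_xx = 0.8 * s_yy
s_xy = 0.2 * s_yy
s_zz = 0.36 * (s_xx + s_yy)                    # plane strain, Poisson ratio ~0.36

s_vm = np.sqrt(0.5 * ((s_xx - s_yy) ** 2 + (s_yy - s_zz) ** 2 + (s_zz - s_xx) ** 2)
               + 3.0 * s_xy ** 2)

flow_stress = 1.8e9
plastic = s_vm >= flow_stress                  # pixels inside the plastic process zone

dy, dx = y[plastic] - tip[0], x[plastic] - tip[1]
r_gyr = np.sqrt(np.mean(dy ** 2 + dx ** 2))
print(f"plastic pixels: {plastic.sum()}, radius of gyration: {r_gyr * 1e6:.0f} um")
```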
A key concern for such studies will be the conditions under which shifts in the scattering peak position can be correctly interpreted in terms of elastic strain, and the precise nature of the relationship between the strains measured from such peak shifts and the stress state of the material. Although the elastic constants inferred from x-ray scattering strain measurements are in rough agreement with those measured via ultrasound, there may be systematic differences in some cases The specimens for this study were single edge-notch bend (SENB) samples of a Zr-based metallic glass. We prepared amorphous specimens of nominal composition specimen is flattThe x-ray scattering experiments were performed at beamline 1-ID of the Advanced Photon Source (APS), with the SENB specimens loaded in three-point bending requires that the specimen be sufficiently thick that plane-strain conditions predominate. The specified minimum thickness is The real-space structural information obtained in a scattering measurement is from a direction parallel to the scattering vector velength . BecauseElastic strain in amorphous alloys can be determined directly from shifts in the position of the first maximum in the scattering pattern We determined the positions of the first scattering maximum We can determine the normal component of elastic strain v.Although there is considerable uncertainty in the individual For more complex strain states we determine the principal strains as well as the rotation of the principal axes relative to the laboratory coordinate system from theperiment . FurtherWith these expressions we can now write a new expression for the experimentally-observed component of normal strain along the his is where theTo find the principal strains and Due to limitations in the geometry of the experiment we cannot access any information about components of strain parallel to the x-ray beam in . Calculasurfaces and in pOnce we have the principal strains (High-energy x-ray scattering can be used to examine the strain state locally around crack tips in metallic glasses, and presumably in other amorphous materials as well. Care must be taken in interpreting the data due to the complicating effects of plastic deformation and a nonlinear relationship between peak shifts and stress. In amorphous"} +{"text": "How sensory stimuli are encoded in neuronal activity is a major challenge for understanding perception. A prominent effect of sensory stimulation is to elicit oscillations in EEG and Local Field Potential (LFP) recordings over a broad range of frequencies. Belitski et al. recordedTo understand better how different frequency bands of the LFP are controlled by sensory input, we computed analytically the power spectrum of the LFP of a theoretical model of V1 (a network composed of two populations of neurons - excitatory and inhibitory), subjected to time-dependent external inputs modelling inputs from the LGN, as a function of the parameters characterizing single neurons, synaptic connectivity, as well as parameters characterizing the statistics of external inputs.Our model consists in a two populations network of excitatory and inhibitory leaky integrate-and-fire neurons. Standard analytical methods using the Fokker-Planck formalism can be used to compute average firing rates of both populations in the asynchronous state of the network, as well as the region of parameters for which this state is stable ,3). The . The 3])We then used the analytical expression of the LFP power to fit the experimental data of . 
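For the two-population model just described, the Fokker-Planck (diffusion-approximation) rate calculation can be illustrated with the standard mean-first-passage-time formula for a leaky integrate-and-fire neuron, iterated to self-consistency for the excitatory and inhibitory populations. The sketch below uses generic placeholder parameters rather than the values fitted in the study, gives both populations the same input statistics for simplicity, and does not reproduce the analytical LFP-power expression itself.

```python
# Numerical illustration of the Fokker-Planck (diffusion approximation) rate
# calculation mentioned above: self-consistent stationary firing rates of an
# excitatory-inhibitory LIF network via the mean-first-passage-time (Siegert)
# formula. All parameters are generic placeholders, not the fitted values, and
# both populations receive the same input statistics in this toy version.
import numpy as np
from scipy.special import erfcx
from scipy.integrate import quad

tau_m, tau_ref = 20e-3, 2e-3      # membrane time constant, refractory period (s)
V_th, V_r = 20.0, 10.0            # threshold and reset (mV above rest)

def lif_rate(mu, sigma):
    """Stationary LIF firing rate for Gaussian white-noise input (mu, sigma)."""
    lo, hi = (V_r - mu) / sigma, (V_th - mu) / sigma
    integral, _ = quad(lambda u: erfcx(-u), lo, hi)    # exp(u^2)*erfc(-u), stable form
    return 1.0 / (tau_ref + tau_m * np.sqrt(np.pi) * integral)

# Hypothetical recurrent connectivity and external (LGN-like) drive.
C_E, C_I, J, g = 400, 100, 0.2, 5.0        # inputs per neuron, EPSP size (mV), I/E ratio
mu_ext, var_ext = 18.0, 4.0                # external mean (mV) and variance (mV^2)

nu_E = nu_I = 5.0                          # initial guess (Hz)
for _ in range(500):                       # damped fixed-point iteration
    mu = mu_ext + tau_m * J * (C_E * nu_E - g * C_I * nu_I)
    sigma = np.sqrt(var_ext + tau_m * J**2 * (C_E * nu_E + g**2 * C_I * nu_I))
    nu_new = lif_rate(mu, sigma)
    nu_E = 0.95 * nu_E + 0.05 * nu_new
    nu_I = 0.95 * nu_I + 0.05 * nu_new

print(f"asynchronous-state rates: nu_E = nu_I = {nu_E:.2f} Hz")
```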
The model provided excellent fits of the data. The fitting procedure allowed us to extract the values of the firing rates of the excitatory and inhibitory populations and the parameters characterizing the external input for most of the scenes of the movie. These outcomes could then be correlated with the experimental firing rates and with the features of the movie itself, such as temporal and spatial contrast as well as orientation. We found a significant correlation both between the firing rates extracted from the fits and the multi-unit activity recorded during the movie, and between the parameters characterizing the external input and the features of the movie. These results show how an analytical approach can be used to estimate the key parameters underlying changes in the LFP spectral dynamics."} +{"text": "The Department for Congenital Heart Surgery at the Mother and Child Health Institute of Serbia dates from 1982. The team consists of 2 surgeons, one resident, three anaesthesiologists and 2 intensivists. The number of operated cases at the Mother and Child Health Institute of Serbia is between 130 and 160 per year. In 2003, four projects commenced: the arterial switch operation, Norwood I, redo surgery and valvular surgery. For the arterial switch, after initial successes, came a series of deaths resulting in a high mortality of over 40%. The analysis of the risk factors showed that, besides surgical errors, the reasons were varying levels of motivation and dedication among different members of the surgical team. Only after reorganization did the results change dramatically: the mortality rate for ASO is now under 8%. The crucial risk factors identified were long anaesthesia induction and the management of immediate postoperative care. The results of treatment of HLHS, with an overall mortality above 75%, are still distant from current standards. Inadequate preoperative management and late referrals are the main reasons for mortality. Children with valvular anomalies are predominantly treated with valve replacements. Univentricular heart patients have a longer postoperative recovery; mortality varies between 12% and 16%. The results of routine paediatric cardiac surgery are consistent with the results reported in the major cited literature. ECMO and heart transplantation are not available in our country. The lack of financial and donor support and the potentially low number of patients gravitating towards our institution call the cost-benefit ratio into question. Compared to the surrounding countries, the services offered at our hospital can be regarded as cheaper, accessible and up to the standards of contemporary mid-developed European countries."} +{"text": "Contrast-enhanced ultrasonography (CEUS) is a dynamic digital ultrasound-based imaging technique, which allows quantification of the microvascularisation up to the capillary vessels. As a novel method for the assessment of tissue perfusion, it is ideally suited for use in the ICU. CEUS is cost-effective and safe and can be repeatedly performed at the bedside without radiation or nephrotoxicity. The frequency of CEUS use in the multidisciplinary surgical ICU was retrospectively evaluated for the period from 1 September 2011 to 1 September 2012. Furthermore, the contributions of this novel method to the management of critically ill ICU patients, as well as its accuracy, were assessed. In total, 33 CEUS studies were performed in critically ill ICU patients. 
The most frequent indications included: assessment of the liver perfusion, assessment of the pancreas and kidney perfusion after pancreas and kidney transplantation, assessment of the renal perfusion in acute kidney injury (AKI), assessment of active bleeding and assessment of the bowel perfusion. In all studies, the correct diagnosis was achieved and the transport of critically ill patients to the radiology department for further diagnostic procedures as well as application of iodinated contrast agents was avoided. In 16 cases significant new findings were detected. Twelve of them were missed by conventional standard Doppler ultrasound prior to CEUS. In assessment of seven cases with AKI, impaired or delayed perfusion and microcirculation of the kidney was observed in six patients. In three patients urgent surgical intervention was performed because of CEUS results. In three cases active bleeding was excluded at the bedside due to absence of contrast agent extravasation into hematoma (thigh and perihepatic) or into abdominal cavity, without need for complementary CT imaging or angiography. In one case the regular perfusion of intestinal anastomosis was confirmed with no need for surgical exploration. None of patients undergoing CEUS manifested any adverse reactions or developed any complications associated with the imaging technique.Contrast-enhanced ultrasonography clearly improves visualization of the perfusion in various tissues. It is very likely to be superior to standard Doppler ultrasound, and is safe and well tolerated in critically ill patients. Promising indications for the use of CEUS in the ICU may be the assessment of kidney microcirculation and assessment of liver perfusion in liver transplant and liver trauma patients."} +{"text": "We investigated the association of the intensity of newspaper reporting of charcoal burning suicide with the incidence of such deaths in Taiwan during 1998\u20132002. A counting process approach was used to estimate the incidence of suicides and intensity of news reporting. Conditional Poisson generalized linear autoregressive models were performed to assess the association of the intensity of newspaper reporting of charcoal burning and non-charcoal burning suicides with the actual number of charcoal burning and non-charcoal burning suicides the following day. We found that increases in the reporting of charcoal burning suicide were associated with increases in the incidence of charcoal burning suicide on the following day, with each reported charcoal burning news item being associated with a 16% increase in next day charcoal burning suicide (p<.0001). However, the reporting of other methods of suicide was not related to their incidence. We conclude that extensive media reporting of charcoal burning suicides appears to have contributed to the rapid rise in the incidence of the novel method in Taiwan during the initial stage of the suicide epidemic. Regulating media reporting of novel suicide methods may prevent an epidemic spread of such new methods. 
We assessed whether the intensity of newspaper reporting was associated with subsequent increases in charcoal burning suicides.Because no personal data were involved, this study was exempted from ethical review by the Human Research Ethics Committee, Taipei City Hospital, Taiwan.During 1998\u20132002, three major newspapers, China Times (CT), United Daily (UD) and Liberty Times (LT) accounted for more than 90% of newspaper sales in Taiwan with approximately equal market shares for these three newspapers The electronic archives for LT did not cover the study period. To assess possible bias from the exclusion of this paper, we randomly selected two months in 2001/2002 and hand searched the newspaper archives to investigate the concordance of suicide reporting in LT with UD and CT. The reporting of charcoal burning suicides in the LT was lower than the other two daily newspapers. In the two months selected in 2001, we identified a total of ten news items on charcoal burning suicide in UD, seven in CT and only one news item in LT. In 2002, 23 news items on charcoal burning suicide were identified for UD and 19 for CT, but only 3 news items on charcoal burning suicide were reported in LT, indicating the reporting of charcoal burning suicide in LT was probably lower than in the other two newspapers. Throughout the study period, suicide news stories were never reported on the front pages of the papers reviewed.Data on suicide deaths, classified according to the International Classification of Disease (ICD-9) were obtained from official death records in Taiwan. The ICD-9 codes used were deaths registered in E950\u2013959 and E980\u2013989 (intent undetermined). Deaths certified as undetermined intent were also included because previous research indicated that many suicide deaths were included in this category t, and the intensity function The cumulative number of charcoal burning news items about suicide from time 0 (beginning of 1998) up to time t. The same approach was used for suicides by methods other than charcoal burning. The adjusted models are obtained by including q lagged terms (up to one week) of the corresponding covariates. Our primary hypothesis concerned the impact of news reporting on the incidence of charcoal burning suicides, but of course if there were no suicides there would be no reporting, so we also investigated the strength of association between the incidence of suicides and subsequent reporting of these deaths. To do this we used a Poisson generalized linear autoregressive model similar to (1) except that the number of reported news items acted as the dependent variable. For interpretability, the lag parameter q was set to 7 in all the aforementioned models as prior studies tended to assume that the effect of news reporting of non-celebrity suicides lasted for about 1 week To formally assess the potential impact of news reporting of charcoal burning suicides on suicide incidence, analyses using conditional Poisson generalized linear autoregressive models were performed. We first fitted an unadjusted Poisson generalized linear autoregressive model to examine the effect of newspaper reporting of charcoal burning suicide on the incidence of charcoal burning deaths. In this model only lag terms of the predictor and outcome were included. We then fitted an adjusted model by adding potential confounders to the model. 
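A sketch of the kind of adjusted Poisson autoregressive regression described here (with the confounders introduced in the next sentence: the non-charcoal suicide series and day of week) is given below. It uses statsmodels on synthetic daily counts, with one-to-seven-day lags of the news-item and suicide series; the exact specification and conditioning of the published model may differ, and the column names are hypothetical.

```python
# Sketch (synthetic data, statsmodels) of the adjusted Poisson autoregressive
# regression described above: next-day charcoal-burning counts on 1-7 day lags
# of news items and of the suicide series, plus the non-charcoal series and
# day of week. Column names are hypothetical; the published model's exact
# specification and conditioning may differ in detail.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n_days = 5 * 365
df = pd.DataFrame({
    "cb_suicides":    rng.poisson(1.0, n_days),   # daily charcoal-burning suicides
    "cb_news":        rng.poisson(0.5, n_days),   # daily news items on charcoal burning
    "other_suicides": rng.poisson(6.0, n_days),   # background (non-charcoal) suicides
    "dow":            np.arange(n_days) % 7,      # day of week
})

q = 7
X = pd.DataFrame(index=df.index)
for lag in range(1, q + 1):
    X[f"news_lag{lag}"] = df["cb_news"].shift(lag)      # lagged reporting intensity
    X[f"cb_lag{lag}"] = df["cb_suicides"].shift(lag)    # autoregressive terms
X["other_suicides"] = df["other_suicides"]
X = X.join(pd.get_dummies(df["dow"], prefix="dow", drop_first=True).astype(float))
X = sm.add_constant(X)

mask = X.notna().all(axis=1)
fit = sm.GLM(df.loc[mask, "cb_suicides"], X[mask], family=sm.families.Poisson()).fit()

# exp(beta) for news_lag1 is the multiplicative change in next-day incidence per
# additional news item (the study reports about 1.16, i.e. a 16% increase).
print(np.exp(fit.params["news_lag1"]))
```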
The potential confounders were the daily count of non-charcoal burning suicide (which served as an indicator of background suicide rate) and day of the week (as the number of newspaper pages and suicide count could vary according to different days of the week). Equivalent models for the impact of news reporting of non-charcoal burning suicide on non-charcoal burning suicide deaths were fitted as well. Specifically, the Poisson generalized linear autoregressive model we used for the count of charcoal burning suicides The incidence of charcoal burning suicide began to increase in 2000, with a more rapid and prominent rise in 2001, whereas the incidence of suicide using all other methods was relatively stable during the study period with a slight fall towards the end of 2002 , the peak of charcoal burning suicide (June 2002) occurred after the peak of reporting of charcoal burning suicide (Feb. 2002), suggesting that news reporting could have contributed to the momentum for the spread of charcoal burning suicide in the community. Furthermore, when the rate of change of charcoal burning suicide increased, the rate of change of other methods declined (panel 3) indicating some possible substitution of methods; similarly, when charcoal burning suicide was excessively reported, the reporting intensity (number of reports per case) of other methods of suicide decreased and vice versa (panel 4). It is important to note that the incidence of charcoal burning remained at a high level even after the reporting intensity has reduced. This suggests that by this stage the method of charcoal burning suicide had penetrated into the community and become self-perpetuating, regardless of further reporting.To assess potential mutual causation (i.e. increased suicide led to increased reporting), Newspaper reporting of charcoal burning suicide was associated with an increase in the number of charcoal burning suicides in Taiwan during the early period of the epidemic. Based on our estimation, each reported charcoal burning news item was associated with a 16% increase in next day charcoal burning suicide; whereas the reporting of all other methods of suicide was unrelated to their incidence on the following day. The period when the rate of increase in the newspaper reporting of charcoal burning suicides was at its highest preceded the peak rate for charcoal burning suicide, providing some evidence for a causal effect of news reporting, although other explanations are possible. The reporting intensity of charcoal burning suicide decreased after early 2002 but the incidence of charcoal burning suicide remained high, although the rate of increase of charcoal burning suicides decreased slightly during this period. This finding indicates that when a novel method is made well known, it may become self-sustaining and further reporting may not influence its relative incidence.Our study provides the first empirical evidence of the possible role of newspaper reporting on the charcoal burning suicide epidemic in Taiwan. This is also the first report that illustrates the dynamic and competitive relationships of news coverage for different types of suicide news. However, our study results should be interpreted in light of the following limitations. First, a challenge in the assessment of the association between media reporting and suicide incidence is the problem of reverse causation, i.e. 
media reporting of suicide events is in part a reflection of its increasing incidence, rather than media \u2018causing\u2019 the increase in suicide rates. We assessed the causal direction by exploring the temporal association between media reporting and suicide incidence, i.e. whether rises in reporting preceded rises in suicide and found some evidence that this was the case. Second, suicide rates in Taiwan have been rising since 1993, it is possible that the increasing use of charcoal burning suicide is a reflection of underlying trends in suicide rather than an effect of media reporting, although the rise in charcoal burning deaths occurred 5 years after the more general increase in Taiwan\u2019s suicide rates. Third, ICD codes did not allow us to distinguish charcoal burning suicides from other suicide by gassing, but as the great majority of such suicides were by charcoal burning th\u201d; on UD the news read \u201cTwo students from Taipei First Girls High School left death notes and said goodbye to the world together\u201d. Our finding suggests that cumulative media reporting of a new method adopted by the general population may have the potential to induce a suicide epidemic through a gradual diffusion process.Although charcoal burning suicide has become one of the most common methods of suicide in some Asian countries The finding that after Feb. 2002, the reporting intensity of charcoal burning suicide decreased (second panel, Our analysis demonstrates the mutual causation between news reporting of charcoal burning suicide and its actual incidence; i.e. the incidence of charcoal burning suicide was associated with its reporting and the reporting of the novel method was related to its future incidence. The process of self-perpetuation through mutual causation may have been a crucial factor for the emergence and the diffusion of the novel method in Taiwan.Our analysis reveals that reporting of different types of suicide news are competitive. When one type of suicide news gets media attention, other types of suicide news items get less reported.This analysis indicates that newspaper reporting may have fuelled Taiwan\u2019s charcoal burning suicide epidemic. Repetitive reporting appears to be harmful, despite the fact that the news items were not placed in the front page, and no celebrity suicides were linked to the use of the method at the early stage of the epidemic. However, once the method had become rooted in the community, the impact of media reporting becomes less prominent; the method takes its own path and becomes self-sustaining. Hence, working proactively with the media to improve the quality of reporting of these tragic deaths and regulate potential harmful reporting are particularly important in the early stage of a suicide epidemic. When a new method becomes widely used, prevention focusing only on regulating media reporting is not adequate. Other intervention measures such as method restriction, gatekeeper training and improve mental health of high risk groups should all be considered."} +{"text": "Matched-Molecular Pair (MMP) analysis has recently emerged as a data analysis technique in medicinal chemistry. It quickly gained scientific momentum because it tackles key questions in lead optimization. 
In contrast to classical global QSAR models that attempt to predict the absolute numbers of ADME and toxicological properties, MMP analyses predict the difference in (bio-) chemical properties that can be expected due to small chemical modifications to lead structures, with a much smaller and well-controlled error than global QSAR models.The power of MMP analysis depends on the number of previously documented similar molecular transformations, whereas the definition of chemical similarity plays a key role: the more generous the definition of similarity of the anchoring region, the more examples are available. The more strict the definition of similarity, the lower the variability and thus the clearer the effect on ADME-Tox parameters, but also the less data pairs will be available .The (bio-) chemical effect and the significance of the results depends on the experimental uncertainty (=noise) in the data. There is a clear mathematical association between the noise level and the minimum activity difference necessary for statistical significance. Here we demonstrate how the experimental uncertainty and variability,3 affect"} +{"text": "Recent findings of aldosterone-independent stimulation of ENaC by vasopressin challenge the completeness of dogmatic understanding where ENaC serves solely as an end-effector of the RAAS important for control of sodium balance. Rather the consequences of activating ENaC in the distal nephron appear to depend on whether the channel is activated in the absence or presence [by vasopressin (AVP)] of simultaneous activation of aquaporin 2 water channels. Thus, a unifying paradigm has ENaC at the junction of two signaling systems that sometimes must compete: one controlling and responding to changes in sodium balance, perceived as mean arterial pressure, and the other water balance, perceived as plasma osmolality.Due to the abundance of seminal discoveries establishing a strong causal relation between changes in aldosterone signaling, the activity of the epithelial Na The epithelial NaNormal ENaC function is required for proper sodium balance and thus, normal blood pressure. Gain-of-function mutations in ENaC cause inappropriate renal sodium retention and consequent increases in mean arterial pressure Figure Because of the abundance of seminal discoveries establishing this strong causal relation between changes in aldosterone, ENaC activity and blood pressure, the role of ENaC in health and disease is understood almost exclusively through the concept of feedback regulation by the RAAS being shifted by AVP from protecting plasma sodium to facilitating water reabsorption. This exciting idea raises several questions centered on the hypothesis that the function of ENaC in the ASDN dependents on whether the channel is activated in the absence or presence of simultaneous activation of AQP2 water channels where AVP activates both ENaC and AQP2.+]: a question that largely remains to be answered. Insight, though, can be gleamed from several recent findings as discussed below.Upon initial consideration, the idea that AVP-activated ENaC facilitates free water reabsorption seems counter-intuitive to the established role for this channel in protecting plasma sodium. 
However, the essential question here is whether ENaC activated in the ASDN in the presence of AQP2 contributes more to the draw for free water reabsorption or plasma [Na+] is significant for it will define the paradigm renal sodium excretion, transport and the activity of ENaC in the ASDN are in agreement that increases in aldosterone are sufficient to increase ENaC activity results form two events. There is loss of negative-feedback regulation by glucocorticoids of the hypothalamic-pituitary axis controlling AVP release and there is strong non-osmotic stimulation of AVP release resulting from volume depletion due to sodium and water wasting by the kidney secretion (SIADH) is not associated with hypovolemia. No results are available currently regarding ENaC activity in this condition and thus, it is unclear if increases in AVP in this setting also stimulate ENaC. Prolonged agonism of the VWhile several key questions remain about the physiological and pathological consequences of AVP-stimulation of ENaC, it is clear that AVP can increase the activity of this channel. This positions ENaC to be an end-effector of both aldosterone, as the final signal in the RAAS and AVP. Signaling through RAAS, in part because it activates ENaC, influences plasma sodium and thus, blood pressure; and signaling by AVP influences free water reabsorption, in part, because it activates AQP2. As argued above, AVP-stimulated ENaC likely facilitates water reabsorption. This contributes to protection of vascular volume as likely is the case in adrenal insufficiency, but does it do so at the expense of facilitating hyponatremia? Such a question highlights the potential limitations imposed by the mechanics of water movement only through osmosis in biological systems and by the evolution of the mammalian renal tubule where ENaC sits at the intersection of two homeostatic control systems one responding to and influencing plasma sodium, and the other plasma osmolality that must by their nature sometimes be in competition.The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "The transport and accumulation of anticancer nanodrugs in tumor tissues are affected by many factors including particle properties, vascular density and leakiness, and interstitial diffusivity. It is important to understand the effects of these factors on the detailed drug distribution in the entire tumor for an effective treatment. In this study, we developed a small-scale mathematical model to systematically study the spatiotemporal responses and accumulative exposures of macromolecular carriers in localized tumor tissues. We chose various dextrans as model carriers and studied the effects of vascular density, permeability, diffusivity, and half-life of dextrans on their spatiotemporal concentration responses and accumulative exposure distribution to tumor cells. The relevant biological parameters were obtained from experimental results previously reported by the Dreher group. The area under concentration-time response curve (AUC) quantified the extent of tissue exposure to a drug and therefore was considered more reliable in assessing the extent of the overall drug exposure than individual concentrations. 
The results showed that 1) a small macromolecule can penetrate deep into the tumor interstitium and produce a uniform but low spatial distribution of AUC; 2) large macromolecules produce high AUC in the perivascular region, but low AUC in the distal region away from vessels; 3) medium-sized macromolecules produce a relatively uniform and high AUC in the tumor interstitium between two vessels; 4) enhancement of permeability can elevate the level of AUC, but have little effect on its uniformity while enhancement of diffusivity is able to raise the level of AUC and improve its uniformity; 5) a longer half-life can produce a deeper penetration and a higher level of AUC distribution. The numerical results indicate that a long half-life carrier in plasma and a high interstitial diffusivity are the key factors to produce a high and relatively uniform spatial AUC distribution in the interstitium. Delivery of chemotherapeutic nanodrugs to the targeted tumor cells from intravenous injection includes transport and distribution of nanodrug to tumors and other organs via system circulation, extravasation from tumor vasculature, and interstitial transport to reach individual tumor cells. The chemotherapeutic efficacy depends on the spatial and temporal concentration distribution of nanodrugs in the entire tumor, which is related to tumor micro-environments and physicochemical properties of nanodrug carriers. In addition, the toxicity of nanodrugs to normal tissues should also be taken into consideration.The characteristics of tumor vasculature and interstitial space significantly influence drug delivery in solid tumors. In contrast to normal tissues, tumor vessels are leaky, chaotic, and non-homogeneously distributed et al.Modeling the effects of critical factors on the spatial and temporal responses of nanodrug carriers in tumor tissues can offer an insight into the efficiency of tumor chemotherapy. A mathematical model describing the delivery of monoclonal antibodies (mAb) in prevascular tumor nodule was constructed by Banerjee et al.et al.In 2008, Goodman et al. were used to obtain the transport parameters of macromolecular carriers for the proposed mathematical model and we then extended these transport parameters to study their effects on the spatiotemporal distribution of macromolecular concentration in tumor tissues In this study, the vascular density of tumors and transport parameters, such as vascular permeability, interstitial diffusivity, and half-life time of macromolecular nanodrug carriers, were investigated by a mathematical model. Our model displays the spatial and temporal responses of macromolecular carriers with different transport parameters and vascular density in tumor tissues. The experimental results published by Dreher The primary goal of drug delivery is to increase the concentration and accumulative exposure of therapeutic agents in the tumor tissue. In other words, the larger the AUC in the tumor, the better. The delivery efficiency of a therapeutic agent is highly dependent on the internal structure of tumors. Different from normal tissue, the tumor structures are much more complicated and vary largely with respect to the size and type of tumors et al. showed that the pressure rises rapidly in the periphery and soon reaches a maximum plateau value throughout the rest of a tumor. They also indicated that the maximum values were reached at a distance of 0.15 to 1.2 mm from the surface of most of the isolated tumors studied. 
Zero pressure gradient was maintained throughout the plateau within the tumor The heterogeneity of blood perfusion in tumors is caused by the uneven distribution of vasculature in neoplastic tissues vice versa. Applying symmetry, a square element of this periodic structure with a pair of arterial and venous microvessels located at two opposite corners of the square would be the geometric configuration for the current problem naturally, as shown in In this work, we do not investigate the drug transport across the entire tumor, which is a very difficult problem both in modeling and simulation, but rather study the drug delivery behavior locally. Under the assumption that the microvessel network can be considered tightly arranged and periodically distributed like crystals, a small zoom-in region is considered here with negligible pressure gradient imposed from the surrounding tissue. In such a small region, the pressure gradient occurs only due to the source and sink of local, leaky microvessels. The two-dimensional cross-sectional view of the distribution of these microvessels is illustrated in Here, as shown in We assume that the injected drug is well circulated, and thus the concentrations along microvessels are set equal. The change of concentration is caused by tissue absorption, elimination through the lymphatic system and other physiological uptakes. The joint effects of these drug uptakes result in a temporal change of the drug concentration inside blood vessels, which can be approximated by fitting the experimental data. Consequently, the concentration decay, combining all the elimination effects in the body, can be described in terms of half-life in plasma of each drug carrier as,In our work, the nanodrug delivery in tumor tissues was investigated. For convection, the drug is transported across the wall of arterial microvessel (high pressure area), travels through the tumor interstitium and is finally drained by venous microvessel (low pressure area). For diffusion, drug is diffused from high-concentration area (blood microvessels) to low-concentration one (interstitium). The whole mechanism of drug transport is the interplay between convection and diffusion with the wish that the AUC of drug is as large as possible in tumor interstitium. Convection would be the dominant transport mechanism for large molecules since they move slowly by diffusion. In contrast, small molecules diffuse faster and therefore their dominant transport mechanism is diffusion The leaky microvessels can be modeled as hollow cylinders with semi-permeable walls embedded in a porous medium (tumor interstitium). According to Starling's hypothesis, the net fluid flow across a vessel wall is given byThe flow in the interstitial area satisfies Darcy's law as Provided the velocity distribution in the interstitium, determined from To compute It is important to have an anticancer nanodrug access all tumor cells in lethal quantities to avoid tumor recurrence caused by certain cells that remain alive after treatments. Therefore, the penetration depth and accumulative exposure of an anticancer nanodrug in the interstitial region of tumors are crucial to the tumor's overall drug exposure, in particular for distant cancer cells residing away from the vasculature. This means we wish the spatial distribution of AUC to be of a high level and as uniform as possible at the same time. 
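The transport relations referred to above can be summarized with standard textbook forms: an exponential (half-life) decay of the intravascular concentration, Starling's expression for transmural fluid flux, and Darcy's law for interstitial flow. The sketch below encodes these assumed forms with purely illustrative parameter values; the paper's exact expressions and coefficients are not reproduced here.

```python
import numpy as np

# Standard textbook forms are assumed below; the paper's exact expressions
# may differ in notation or detail.

def vascular_concentration(t, c0, t_half):
    """Drug concentration inside the microvessel, decaying with plasma half-life t_half."""
    return c0 * 0.5 ** (t / t_half)

def starling_flux(Lp, delta_p, sigma, delta_pi):
    """Net fluid flux across the vessel wall (Starling's hypothesis):
    Jv = Lp * (hydrostatic pressure difference - sigma * oncotic pressure difference)."""
    return Lp * (delta_p - sigma * delta_pi)

def darcy_velocity(K, grad_p):
    """Interstitial fluid velocity from Darcy's law: u = -K * grad(p)."""
    return -K * np.asarray(grad_p)

# Hypothetical parameter values, chosen only to exercise the functions.
print(vascular_concentration(t=6.0, c0=1.0, t_half=3.0))        # -> 0.25
print(starling_flux(Lp=1e-7, delta_p=10.0, sigma=0.9, delta_pi=5.0))
print(darcy_velocity(K=1e-8, grad_p=[100.0, 0.0]))
```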
In general, the nanodrug distribution within a tumor is determined by its supply, the vascular permeability, interstitial transport of the nanodrug, and nanodrug-cell interactions. Below, we investigate and discuss how AUC distribution depends on vascular density, vascular permeability, interstitial diffusivity, and half-life in plasma. Though the computational domain shown in To determine the physiological parameters of macromolecular dextrans in tumor tissues, we used the current model to fit the experimental results of spatial-temporal distributions for different molecular weights of dextrans The physiological parameters of macromolecular dextrans shown in Furthermore, To study the influence of vascular permeability of dextran on their AUC distributions, we simulated the conditions with the vascular permeability both increased and decreased by different multiples of their respective values in The vascular permeability depends on the properties of particles and the vessel wall . As the particle size increases, the permeability decreases and becomes zero when the particle size is larger than the pore cut-off size. Nanoparticles that are larger than albumin are most likely to transport through intercellular junctions since inter-endothelial junctions in tumors can be as large as hundreds of nanometers to a few micrometers To investigate the influence of interstitial diffusivity of dextran macromolecules on their AUC distributions, we simulated the conditions with various interstitial diffusivities while the other parameters remain the same. The spatial distributions of AUC A higher vascular density means a shorter transport distance between vessels for sources of dextran macromolecules, and hence a higher and relatively more uniform spatial AUC distribution can be produced. Also the effect of vascular density on the spatial AUC distribution is more pronounced for large dextran macromolecules than small ones. 2) A medium-sized dextran macromolecule has the best performance among all three kinds of dextrans with various molecular weights considering both the level and uniformity of spatial AUC distribution in tumor interstitium. 3) A large dextran macromolecule possesses a long half-life while it results in a shallow penetration with a very high AUC in the perivascular region due to its low diffusivity, and the condition gets much worse for a tumor region with lower vascular density. 4) A small dextran macromolecule has the most uniform distribution of AUC among all due to its high interstitial diffusivity, but has the lowest level of AUC distribution due to short half-life in plasma. 5) Enhancement of vascular permeability implying more leaky vessels helps elevate the level of AUC for all three kinds of dextrans, but has little effect on the uniformity of distribution of AUC. 6) Enhancement of interstitial diffusivity would elevate the level of AUC and at the same time make AUC more uniform for all three kinds of dextrans. 7) A longer half-life of dextran macromolecules in plasma can produce a deeper penetration and a much higher AUC distribution.et al. developed a nanoparticle-based drug modified by multicomponent nano chains, which help anticancer drug delivery into deep interstitial and avascular regions A small dextran macromolecule generally has much better uniformity of spatial distribution of AUC due to large interstitial diffusivity compared to larger macromolecules. However, the level of its AUC is rather low because of its short half-life in plasma. 
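To make the dependence of the spatial AUC distribution on interstitial diffusivity and plasma half-life concrete, the following one-dimensional, diffusion-only sketch integrates the concentration between two vessels and accumulates AUC at each point. It is a simplified illustration, not the authors' model: convection is neglected, transvascular exchange is assumed fast, and all parameter values are arbitrary.

```python
import numpy as np

def auc_profile(D, t_half, nx=51, t_end=24.0):
    """Spatial AUC between two vessels located at x = 0 and x = 1.

    Diffusion-only sketch of dc/dt = D * d2c/dx2 with the concentration at
    both vessel walls pinned to a decaying plasma level (transvascular
    exchange assumed fast). Time in hours, distance scaled by the
    inter-vessel spacing, D a scaled diffusivity (1/h); all values are
    illustrative rather than the parameters fitted in the study.
    """
    x = np.linspace(0.0, 1.0, nx)
    dx = x[1] - x[0]
    dt = 0.4 * dx ** 2 / D                     # explicit-scheme stability limit
    c, auc, t = np.zeros(nx), np.zeros(nx), 0.0
    while t < t_end:
        c[0] = c[-1] = 0.5 ** (t / t_half)     # plasma concentration at the vessels
        c[1:-1] += D * dt / dx ** 2 * (c[2:] - 2.0 * c[1:-1] + c[:-2])
        auc += c * dt
        t += dt
    return x, auc

# Higher diffusivity and a longer half-life yield a higher, flatter AUC profile.
for D, t_half in [(0.05, 2.0), (0.5, 8.0)]:
    x, auc = auc_profile(D, t_half)
    print(f"D={D}, t_half={t_half} h: mid-gap/near-vessel AUC ratio = "
          f"{auc[len(auc) // 2] / auc[1]:.2f}")
```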
Appropriate surface modification of dextran macromolecules may be able to extend the dextran macromolecule half-life in blood. Alternatively, we may physically or chemically transform large nanoparticles, when they enter tumor interstitium, to small ones that have greater vascular permeability and interstitial diffusivity to enable transport deeper into tumor tissues We present a small-scale mathematical model to study the concentration and AUC distributions for localized tumor regions under various conditions. The results clearly depict 1) the limitation of nanoparticle delivery in the local tumor tissues, 2) the treatable domain for a nanoparticle when the tumor and particle properties are given, and 3) the potential improvement when vascular permeability, interstitial diffusivity and half-life in plasma are enhanced. The current model is a two-dimensional model and can only be used to analyze the delivery and transport of nanoparticle-based drugs in a local tumor region with uniformly distributed blood vessels, which is well justified by its small scale. With the assorted results from various vascular densities in The effectiveness of cancer tumor treatment depends on the delivery of therapeutic agents to all tumor cells in different regions of a tumor in order to help avoid tumor regrowth and development of resistant cells. This study numerically elucidates the barriers to drug transport in the tumor tissues to assess methods that aim to achieve a higher and more uniform AUC distribution in the tumor tissues within regions of different vasculature. Thus we provide a better understanding of significant factors that contribute to therapeutic strategies aiming to improve passive and/or active tumor chemotherapy.Supporting Information S1Table S1, The physiological parameters of the normal tissue used to validate our numerical method.(PDF)Click here for additional data file."} +{"text": "This research forms the dissertation for an MSc in Podiatry.The Intermetatarsal (IM) angle forms an integral component in the assessment of Hallux Abducto Valgus (HAV) surgery. Commonly used to assess the severity of such deformity and guide surgical selection, the degree of accuracy is debatable for both clinical and research purposes.A common fault of previously described IM angle assessment techniques is that when the shaft of the metatarsal, or the metatarsal head its self has been altered through the surgery there is inaccuracy associated with the measurement of the post operative radiograph when these reference points are used.A new method of assessment is proposed which can be applied for accurate assessment of both pre and post operative radiographs using the same method of examination. This new technique utilises the lateral most margins of the head of the metatarsal and most lateral margin of the base of the metatarsal. It is inherently applicable to all base, mid-shaft and distal osteotomy selections with or without medial eminence cheilectomy, a feature its predecessors have had difficulty in assessing accurately.All patients whom underwent a surgical procedure to change the IM angle with in our surgical department were recruited into our study. There were no restrictions on the severity of deformity, osteotomy selection or preoperative examinations.Pre and post operative radiographs were assessed blinded to the surgery three times by three different members of the surgical team. 
The inter- and intra-tester reliability of the new examination was compared against commonly used examination techniques.It is anticipated that this new method of IM angle assessment will provide grounding for a uniform pre- and post-operative radiographic interpretation for a variety of surgical procedures. It is hoped this simple technique will be applicable to future research and clinical settings."} +{"text": "In the Indian scenario, research in mental health is notably deficient, but available clinical and epidemiological data suggest a significant co-morbidity between mental diseases and cardiovascular diseases. Mental health problems do not have the precision of other biological sciences due to the complex phenotypes. In India the effects of culture and the transition in symptomatology of psychiatric patients are also important. Equally important are the family influences, traditional Indian herbal ethno-pharmacology and the community care perspective. The Indian systems of medicine give a lot of attention to visceral functioning and psychiatric research look into the metabolic substrate."} +{"text": "In this paper we analyse the impact of financial liberalization and reforms on the banking performance in 17 countries from CEE for the period 2004\u20132008 using a two-stage empirical model that involves estimating bank performance in the first stage and assessing its determinants in the second one. From our analysis it results that banks from CEE countries with higher level of liberalization and openness are able to increase cost efficiency and eventually to offer cheaper services to clients. Banks from non-member EU countries are less cost efficient but experienced much higher total productivity growth level, and large sized banks are much more cost efficient than medium and small banks, while small sized banks show the highest growth in terms of productivity. The opening to the outside and the internal structural reforms of the financial sector are two interdependent processes, both having as a purpose the development of a financially competitive and efficient system in order to facilitate economic growth and financial system stability.In the present days, in the context of recent turmoil on the financial markets, there is a dispute regarding the benefits of financial liberalization. There are opinions that the financial deregulation and the increasing of the process of globalization were the main causes what amplified the recent financial crisis. Many studies evaluate the direct impact of financial deregulation on banking performance, their empirical results are also rather controversial. Some authors, such as Combining insights from the liberalization \u2013 efficiency and financial openness \u2013 stability literatures, we develop a unified framework to assess how regulation, supervision and other institutional factors may affect the performance of banking systems in 17 countries from Central and Eastern Europe for the period 2004\u20132008. This study seeks to address two key questions. What variables influence the performance of banks from Central and Eastern European countries? Did the financial liberalization and reforms in the banking system have a notable influence on bank performance?Actually, we analyze the impact of financial liberalization and reforms in the banking system as well as the associated changes in the industry structure on the banking performance, measured in terms of cost efficiency and total productivity growth index. 
To do this, we develop a two-stage empirical model that involves estimating banks\u2019 performance in the first stage and assessing its determinants in the second one.The importance and originality of this paper consist in assessing the CEE banking systems in a period when there were two waves of EU enlargements and the first influences of the recent international financial crises had appeared. Our sample of countries could be split into three categories: EU members, EU candidates and other potential EU candidates. The results of our papers are important in the context of the present financial turmoil; therefore, in the end of the paper, we try to develop some policy recommendations for both policy makers from CEE countries and EU ones. The evidence of our research could also be useful for banks\u2019 strategies of internationalization.Cross-country efficiency studies in the banking industry have attracted a lot of attention. For banks, efficiency implies improved profitability, greater amount of funds channeled in, better prices and services quality for consumers and greater safety in terms of improved capital buffer in absorbing risk Studies of the impact of deregulation upon efficiency have found different results. Evidences from Taiwan Studies focused on the case of developing countries from Central and Eastern European countries explore various issues including the impact of ownership and privatization The creation of an effective and solid financial system constituted an important objective of the process of reform and transition from a centralized economy to a market economy in CEE countries. The liberalization of prices, the liberalization of the circulation of goods, services and capital, the deregulation of financial systems, globalization and the mutations on the level of the economic, social and political environment had a significant impact on the development of the CEE banking system Most studies focused on the banking system in Central and Eastern Europe (CEE) are only performed at the level of one state and do not offer comparative information regarding the efficiency and productivity growth of banks in these states. However, in recent years, several papers have published comparative analyses highlighting the impact of banking system reform, the evolution of banking structure, competition and privatization on banks\u2019 efficiency see e.g. ,.Fang et al. find that the institutional development, proxied by progress in banking regulatory reforms, privatization and enterprise corporate governance restructuring, has a positive impact on bank efficiency Brissimis et al. examine the relationship between banking system reform and bank performance \u2013 measured in terms of efficiency, total factor productivity growth and net interest margin \u2013 accounting for the effects through competition and bank risk-taking Pasiouras et al. uses stochastic frontier analysis to provide evidence on the impact of regulatory and supervision framework on bank efficiency based on a dataset consisting of 2853 observations from 615 publicly quoted commercial banks operating in 74 countries during the period 2000\u20132004 The rest of the paper is organized as follows: in section 2 we explain the methodology used to measure the impact of financial liberalization on the bank efficiency and productivity growth and we discuss the data and the variable selection. Thereafter, the results of the empirical analysis are presented and discussed in section 3. 
The main conclusions are drawn in section 4.In this section we discuss the empirical model used to investigate the impact of financial liberalization on bank performance. Then we explain our measures of bank performance: cost efficiency and productivity growth. The discussion of data and control variables follows afterwards.The purpose of the estimable model outlined in this section is to capture the effects of financial liberalization, reforms in the banking system and the associated changes in the industry on bank performance. We also include a range of bank-specific variables that have been used in previous empirical studies that examine the drivers of bank performance. The model is specified as:Bank performance is proxied alternatively by cost efficiency (EFF) and total productivity growth index (TFPCH). These indicators have been used widely in previous empirical literature concerned with the measurement and determinants of the bank performance in developing countries In line with ity\u2013 outputs vector; t \u2013 time component.In the estimation of the cost efficiency level of the banks in CEE countries we used the SFA Method and applied the model developed by The cost frontier indicates the minimum cost, The SFA method assumes that the inefficiency component of the error term is positive and thus the high costs are associated with a high level of inefficiency.In the order to quantify the total productivity growth we estimated the Malmquist index with the help of the DEA-type linear programming method, a method that was introduced by F\u00e4re et al. proposed in In the empirical analysis of the mutations on the level of the productivity of banks we have to calculate four distance measures that occur in The linear programming problems must be solved N times, once for each company in the ensemble. The introduction of solutions to the problems in relation (4) allows for the estimation of the Malmquist index of productivity.Because the purpose of this analysis is to analyze the connection between the performance of banks and the degree of financial liberalization of the banking system, the first set of banking system characteristics considered in the model includes the following variables: Banking reform and interest rate liberalization indicator (BREF), Financial Openness Index (KOPEN), Asset share of state-owned banks (ASSB) and Asset share of foreign-owned banks (ASFB).The Banking reform and interest rate liberalization indicator is compiled by the EBRD with the primary purpose of assessing the progress of the banking systems of formerly communist countries and quantifies and qualifies the degree of liberalization of the banking industry In order to assess the level of financial openness we use the Chinn-Ito index that measures the country\u2019s degree of capital account openness. 
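The Malmquist index described above is built from the four distance measures evaluated for periods t and t+1 against the two corresponding frontiers. The original formula did not survive extraction, so the standard geometric-mean form, together with its usual decomposition into efficiency change and technical change, is assumed in the short sketch below; the distance values shown are hypothetical.

```python
import math

def malmquist(d_t_t, d_t_t1, d_t1_t, d_t1_t1):
    """Malmquist TFP index from the four distance measures.

    d_a_b = distance of the period-b observation evaluated against the
    period-a frontier (obtained from the DEA linear programs).
    Standard textbook form assumed; values above 1 indicate productivity growth.
    """
    tfpch = math.sqrt((d_t_t1 / d_t_t) * (d_t1_t1 / d_t1_t))
    effch = d_t1_t1 / d_t_t            # catching-up (efficiency change)
    techch = tfpch / effch             # frontier shift (technical change)
    return tfpch, effch, techch

# Hypothetical distance values for one bank, chosen only for illustration.
print(malmquist(d_t_t=0.85, d_t_t1=0.95, d_t1_t=0.80, d_t1_t1=0.90))
```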
The index is based on the binary dummy variables that codify the tabulation of restrictions on cross-border financial transactions reported in the IMF\u2019s Annual Report on Exchange Arrangements and Exchange Restrictions Following previous studies that focus on banks\u2019 performance We measure bank stability using Z-score, which is a very popular indicator in recent literature concerned with the measurement and determinants of soundness and safety of banks ROA is the bank\u2019s return on assets, E/A represents the equity to total assets ratio and The data used to quantify these indicators have been taken from EBRD and ECB reports.The economic literature pays a great deal of attention to the performance of banks, expressed in terms of efficiency, productivity, competition, concentration, soundness and profitability.The use of risk indicators in the analysis of bank performance has gained in the past decades a special attention because the control on banks\u2019 risks is one of the most important factors the profitability of the bank depends on Following the empirical literature, we use the Return on Assets (ROA) to reflect the bank\u2019s management ability to use the resources the bank disposes of for the purpose of optimizing profit. Bank capital adequacy is measured as the equity to assets ratio, quantified as the value of total equity divided by the value of total assets.To express the risk profile of the banks we use two different types of risk: credit risk measured as ratio of loan-loss provisions to total loans (LLR_GL) and liquidity risk measured as ratio of liquid assets to total deposits and borrowing funds (LA_TD). Another variable used in the analysis is the bank\u2019s size measured as logarithm of total assets (TAL).The data used in the analysis are taken from the annual reports of the banks and from the Fitch IBCA\u2019s BankScope database.In line with the previous literature In order to quantify the effects of structural reforms, we also use two governance indicators developed by Kaufmann et al. to proxy institutional differences: rule of law (ROL) and regulatory quality (RQ) Improvements in the regulatory quality help banks if it is accompanied by more adequate banking supervision. The quality of the rule of law affects cost efficiency through the effectiveness and predictability of the judiciary. There is a growing literature that points to the importance of institutions for an efficient operation of the financial system. This literature argues that better institutions positively affect bank efficiency see also . The datThis study seeks to undertake this assessment by examining banking efficiency and productivity growth in 17 countries from Central and Eastern Europe . We omit Belarus and Ukraine from our study because we could not obtain sufficient data. All bank-level data used are obtained from the BankScope database and are reported in Euros. To be included in our sample, a bank has to have a minimum of 3 years of continuous data to obtain reliable efficiency estimates In the literature in the field there is no consensus regarding the inputs and outputs that must be used in the analysis of the efficiency and productivity growth of commercial banks When analyzing the means of determinants of efficiency value we can observe that the degree of financial liberalization of the banking system has continuously increased during the assessed period. 
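The Z-score formula itself was lost in extraction; the standard definition, mean ROA plus the equity-to-assets ratio divided by the standard deviation of ROA, is assumed in the following sketch with hypothetical figures.

```python
import numpy as np

def z_score(roa_series, equity_to_assets):
    """Bank Z-score: (mean ROA + equity/assets) / std(ROA).

    Higher values mean more standard deviations of ROA would be needed to
    exhaust equity, i.e. lower insolvency risk. Standard definition assumed.
    """
    roa = np.asarray(roa_series, dtype=float)
    return (roa.mean() + equity_to_assets) / roa.std(ddof=1)

# Hypothetical figures: five years of ROA and an equity-to-assets ratio of 9%.
print(round(z_score([0.012, 0.015, 0.010, 0.008, 0.014], 0.09), 1))
```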
Thus the level of the banking reform and interest rate liberalization indicator (BREF), Financial Openness Index (KOPEN) and asset share of foreign-owned banks (ASFB) increased and the level of asset share of state-owned banks (ASSB) due to the privatization process and the increase of foreign capital (the last two determinants are correlated). The number of banks was relatively stable, the concentration ratio of the first 3 banks continuously grew, but the evolutions of HHI denote a moderate competition towards high competition, being relatively stable. The stability of the entire banking systems, from the perspective of insolvency probability, has increased continuously as Z-score relieves. The explanations could be the process of harmonization with the EU acquis, which implies a better banking regulation framework. We consider that the evolutions of these determinants were influenced by the process of European integrations, because some of the countries assessed are EU members, some of them are EU candidates and others potential EU candidates.The bank-specific variables had different evolutions. Thus we can observe a decrease of ROA in the context of an ample growth of total bank assets. The risk profile of the banks evaluated as following: the ratio of loan-loss provisions to total loans (LLR_GL) and liquidity risk measured as ratio of liquid assets to total deposits and borrowing funds (LA_TD) have decreased, indicating a loss in bank liquidity, but a better credit risk situation.The empirical models used in the specialty literature use a two-stage procedure: in the first stage the level of cost efficiency and total productivity growth is estimated and in the second stage the regression analysis is applied in which the levels of cost efficiency and total productivity index are dependent variables.The empirical model specified in the equation is estimated using the panel least square fixed effects methodology. We use the fixed effects model, since we focus on a limited number of countries, for which we want to assess country-specific differences with respect to the relationship between financial liberalization and bank performance. For this purpose, performance scores are regressed on a set of common explanatory variables; a positive coefficient implies efficiency increase whereas a negative coefficient means an association with an efficiency decreases. The empirical model is tested for each of the two measures of banking performance, i.e. cost efficiency and total productivity growth.The research strategy follows the specific-to-general approach. We start by investigating the relationship among cost efficiency and Banking reform and interest rate liberalization indicator (BREF) and Financial Openness Index (KOPEN). Next, we include all other banking system characteristics, bank-specific variables and macroeconomic variables one by one to test the stability of the main independent variables BREF and KOPEN. A second set of models is estimated using total productivity growth index as dependent variable.From empirical results we see that the average cost efficiency of banks in Central and Eastern European countries grew in the period analyzed, from an average value of 0.8866 in 2004 to 0.9099 in 2008, but there is significant variation across the banking systems of the Central and Eastern European countries in terms of cost efficiency level. Similar to Table no. 4 also shows the average cost efficiency and productivity growth results for banks of different size. 
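As a sketch of the second stage described above, the regression of the stage-1 efficiency scores on the liberalization indicators and controls can be estimated as a least-squares dummy-variable fixed-effects model. The data file, column layout, and exact set of effects below are assumptions made for illustration; only the variable abbreviations follow the paper.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data frame: one row per bank-year with the stage-1 cost
# efficiency score (EFF) and explanatory variables abbreviated as in the paper.
panel = pd.read_csv("bank_panel.csv")   # assumed file, not provided with the paper

# Least-squares dummy-variable (LSDV) version of the fixed-effects estimator:
# country and year dummies absorb unobserved heterogeneity. The paper's exact
# specification of effects and controls may differ.
model = smf.ols(
    "EFF ~ BREF + KOPEN + ASSB + ASFB + ROA + LLR_GL + LA_TD + TAL"
    " + C(country) + C(year)",
    data=panel,
).fit()

print(model.params[["BREF", "KOPEN"]])   # liberalization and openness effects
```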
Following Thus the results show that, on average, banks from a non-member country are less cost efficient but experienced much higher total productivity growth level during 2004\u20132008 period. In non-member countries, these productivity gains could be due to technological progress, rather than to an improvement in efficiency. Large sized banks are much more cost efficient than medium and small banks, while small sized banks show the highest growth in terms of productivity. This suggests that small sized banks are able to generate strong profits possibly by operating in the high value added segments of the markets while incurring higher costs at the same time.As for the effect of banking system characteristics, we found that a higher level of the Banking reform and interest rate liberalization indicator (BREF) and Financial Openness Index (KOPEN) improves cost efficiency, suggesting that banks in countries with higher level of liberalization and openness are able to increase cost efficiency and finally to offer cheaper services to clients. Our results are in line with The results show that the level of Banking reform and interest rate liberalization indicator (BREF) and Financial Openness Index (KOPEN) have a positive impact on the total productivity growth. The Z-score is positively correlated with total productivity, demonstrating that the total productivity depends on the soundness and safety of banks.With regard to the impact of structure of banking systems, results show that higher concentration quantified by means of the Herfindahl-Hirschmann index (HHI) improves cost efficiency, while the percentage share of the three largest banks (CR3) has a negative impact on the cost efficiency level. The mean value for these two indicators during the period assessed does not prove significant changes in the banking structure and level of competition. This evidence could suggest that the competition was not one of the most important factors of improving cost efficiency, being in contradiction with the traditional view and previous results As regards the impact of bank-specific variables, the results show that the level of Return on Assets (ROA) has a statistically significant and negative impact on both cost efficiency and total productivity growth. The level of credit risk measured as the ratio of loan-loss provisions to total loans (LLR_GL) negatively influences cost efficiency.Turning to the effect of macroeconomic variables, we observe that GDP growth rate had a negative impact on cost efficiency, maybe because under expansive demand conditions, managers are less focused on the expenditure control and therefore become less cost efficient. Another explanation could be that the increase in credit markets involves higher capital cost, an increase in operating expenses and cost with fixed assets. This results are in line with From another point of view, the decrease of GDP growth rate improves the total productivity of banks. This could be a reason for foreign-owned banks to maintain their exposure on these markets in case of economic decrease, but with the condition of maintaining the soundness and safety of banks. 
We also found a negative and significant relationship among Inflation rate (IR), Interest rate spread (IRS) and level of Rule of law (ROL) and bank cost efficiency.Our results show that the level of Financial intermediation has a positive effect on the bank performance, meaning that a low level of financial intermediation hampers banking performance.From our analysis it results that the Financial liberalization improves cost efficiency of banks from Central and Eastern European countries with higher level of liberalization and openness are able to increase cost efficiency and finally to offer cheaper services to clients. These facts are in compliance with the Single European Market principles and demonstrate that EU new member states, candidate states and potential candidate states banking market mechanisms could achieve their objective of lowering and harmonization of banking services prices. In this case, from a banking policy perspective, we consider that the EU enlargement could continue in Central and Eastern European countries and could add benefits for the EU banking market.In exchange, the level of Asset share of foreign-owned banks has no statistically significant impact on the level of bank cost efficiency. This could mean that the dominance of foreign banks on the market does not increase cost efficiency, but the best practices that they brought in the banking systems. From the policy perspectives, these results suggest that, in the case of new member countries, foreign-owned banks have no influence on increasing cost efficiency by means of their own activity and dominance on the market, but perhaps by means of their best practices that domestic banks must adopt for competing them.In what concerns the effect of financial reform on the total productivity growth of banks from CEE countries, the results show that the level of Banking reform and interest rate liberalization indicator has a positive impact on the total productivity growth. Also, the results suggest that the important factors shaping the total productivity are merely the banking system characteristics and bank-specific variables, and the only macroeconomic variable with impact is the GDP growth rate.Overall, in order to promote efficiency and productivity, monetary authorities from CEE countries should enhance their efforts to continue the reform of the financial services regulatory and supervisory framework. At the same time, banking markets should remain open, encouraging the entry of foreign banks for improving best practices and for increasing the benefit from technological spillovers brought by them. For a sustainable improvement of cost efficiency and total productivity of banks, the focus should be on the improvements of managerial practices, especially in domestic small and medium banks. Policy makers should also be concerned about improving the liquidity level.Furthermore, our results indicate that policy makers in EU could take into account the follow-up of the process of enlargement in some countries from CEE, because their banking markets have a good potential in adapting the Single European Markets principles. 
Foreign banks could maintain their exposures or enter the CEE markets because there is a good perspective for total productivity growth and the stability of the banking systems has increased."} +{"text": "The growing number of large-scale neuronal network models has created a need for standards and guidelines to ease model sharing and facilitate the replication of results across different simulators. To foster community efforts towards such standards, the International Neuroinformatics Coordinating Facility (INCF) has formed its Multiscale Modeling program, and has assembled a task force of simulator developers to propose a declarative computer language for descriptions of large-scale neuronal networks.The name of the proposed language is \"Network Interchange for Neuroscience Modeling Language\" (NineML) and its initial focus is restricted to point neuron models.The INCF Multiscale Modeling task force has identified the key concepts of network modeling to be 1) spiking neurons 2) synapses 3) populations of neurons and 4) connectivity patterns across populations of neurons. Accordingly, the definition of NineML includes a set of mathematical abstractions to represent these concepts.NineML aims to provide tool support for explicit declarative definition of spiking neuronal network models both conceptually and mathematically in a simulator independent manner. In addition, NineML is designed to be self-consistent and highly flexible, allowing addition of new models and mathematical descriptions without modification of the previous structure and organization of the language. To achieve these goals, the language is being iteratively designed using several representative models with various levels of complexity as test cases.The design of NineML is divided in two semantic layers: the Abstraction Layer, which consists of core mathematical concepts necessary to express neuronal and synaptic dynamics and network connectivity patterns, and the User Layer, which provides constructs to specify the instantiation of a network model in terms that are familiar to computational neuroscience modelers.As part of the Abstraction Layer, NineML includes a flexible block diagram notation for describing spiking dynamics. The notation represents continuous and discrete variables, their evolution according to a set of rules such as a system of ordinary differential equations, and the conditions that induce a regime change, such as the transition from subthreshold mode to spiking and refractory modes.The User Layer provides syntax for specifying the structure of the elements of a spiking neuronal network. This includes parameters for each of the individual elements and the grouping of these entities into networks. In addition, the user layer defines the syntax for supplying parameter values to abstract connectivity patterns.The NineML specification is defined as an implementation-neutral object model representing all the concepts in the User and Abstraction Layers. Libraries for creating, manipulating, querying and serializing the NineML object model to a standard XML representation will be delivered for a variety of languages. The first priority of the task force is to deliver a publicly available Python implementation to support the wide range of simulators which provide a Python user interface . 
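Purely as an illustration of the layered design described above, and emphatically not the actual NineML XML schema or Python API, a regime-based description of a leaky integrate-and-fire point neuron and a user-layer population referencing it might be organized along the following lines; every name here is hypothetical.

```python
# Illustrative pseudo-definition only; NOT real NineML syntax. It merely shows
# the kinds of objects the Abstraction and User layers describe: state
# variables, per-regime ODEs, condition-triggered transitions, and parameters.
leaky_iaf_abstraction = {
    "state_variables": ["V"],
    "regimes": {
        "subthreshold": {
            "odes": {"V": "(g_leak*(E_leak - V) + I_syn) / C_m"},
            "transitions": [
                {"condition": "V > V_threshold",
                 "emit": "spike",
                 "assign": {"V": "V_reset"},
                 "target_regime": "refractory"},
            ],
        },
        "refractory": {
            "odes": {"V": "0"},
            "transitions": [
                {"condition": "t > t_spike + tau_refrac",
                 "target_regime": "subthreshold"},
            ],
        },
    },
}

# User-layer counterpart: concrete parameter values and a population built
# from the abstraction above (names again hypothetical).
excitatory_population = {
    "component": "leaky_iaf_abstraction",
    "size": 1000,
    "parameters": {"C_m": 1.0, "g_leak": 0.05, "E_leak": -65.0,
                   "V_threshold": -50.0, "V_reset": -70.0, "tau_refrac": 2.0},
}
```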
These libraries will allow simulator developers to quickly add support for NineML, and will thus catalyze the emergence of a broad software ecosystem supporting model definition interoperability around NineML."} +{"text": "Upon publication of the article entitled \u201cHeat stress-induced response of the proteomes of leaves from Salvia splendens Vista and King\u201d the authThe co-authors would like to apologise for this omission. All authors have agreed to the addition of Dr Hen-Mu Zhang to the revised author list as shown above, for his provision of the original raw experimental materials, contribution to the design of the experimental procedure and quantitative experiments of the plant physiology and biochemistry index."} +{"text": "The increasing trend of atypical form of FMF among the Armenian population is one of the actual problems in the Armenian medicine. Hence, it induced and promoted the necessity of studying molecular mechanisms for correction of disturbed metabolic processes and for elaboration of new methods at pathogenic therapy of FMF. The specific features of the clinical symptoms in atypical forms of FMF give the rise of complications at differential diagnosis.In this research the informativity of clinical-laboratory and biochemical indicators has been studied. Also the relations between individual phospholipids of biomembranes and leuko/erythroid cells during atypical FMF were investigated.The intensity of 14C-glycerol and 14C-glucose incorporation was studied in vitro in the contents of individual phospholipids of erythrocytes and lymphocytes membranes at children with atypical FMF. The phospholipids were fractionated by thin layer chromatography.14C-glucose incorporation rate in the lysophosphatidylcholines (LPC) with simultaneous decrease of rate for incorporation in the contents of phosphatidylcholines (PC) and sphyngomyelines. It is observed an increase of activity of phospholipase A2 and the reduction of the activity of glycerolkinase and glycerol phosphate dehydrogenase. Also an increase of LPC/PC relation coefficient and decrease of PC/phosphatic acid relation were established.The substantial increment in the myeloid cell number was observed in all investigated patients. The leuko/erytroid sells relation was 4:1 instead of normal 3:1. The erythroid cell maturation index was low. The number of leukocytes was high in all patients. The basophilic erythronormoblasts predominated over polychromatic and oxyphilic ones. In all patients the expressed thrombocytosis and megakaryocytosis accompanied with active platelet formation were described. It is established that FMF is characterized by a sharp increase of Apparently the revealed changes are inherent atypical FMF. The membrane aspects of hemostasis disorders mechanisms at atypical FMF are discussed.None declared."} +{"text": "The advent of sensitive laboratory tools to detect and study the genetic evolution of these viruses has uncovered their critical role in the etiology of AGE. The flow of information is now so great that in each year since 2008, >800 scientific papers have been published on this topic as determined by a search of PubMed using the term acute gastroenteritis.The field of viral gastroenteritis is in the midst of an extraordinary period of rapid development and transition. 
Vaccines to prevent rotavirus, the leading cause of severe childhood AGE worldwide, are being rolled out globally and have already achieved remarkable success in reducing the burden of this pathogen in many countries, including the United States. In addition, the application of sensitive molecular assays is reaffirming the central etiologic role of noroviruses in both endemic and epidemic AGE, and vaccines against this pathogen are undergoing clinical testing. This issue of Emerging Infectious Diseases highlights recent developments in the field with a collection of timely findings from domestic viral gastroenteritis surveillance, which will further our understanding of disease effects, viral evolution and structure, implications of vaccination, and progress with other preventive measures."} +{"text": "The mechanosensitive channel of large conductance (MscL) is capable of transducing mechanical stimuli such as membrane tension into an electrochemical response. MscL provides a widely-studied model system for mechanotransduction and, more generally, for how bilayer mechanical properties regulate protein conformational changes. Much effort has been expended on the detailed experimental characterization of the molecular structure and biological function of MscL. However, despite its central significance, even basic issues such as the physiologically relevant oligomeric states and molecular structures of MscL remain a matter of debate. In particular, tetrameric, pentameric, and hexameric oligomeric states of MscL have been proposed, together with a range of detailed molecular structures of MscL in the closed and open channel states. Previous theoretical work has shown that the basic phenomenology of MscL gating can be understood using an elastic model describing the energetic cost of the thickness deformations induced by MscL in the surrounding lipid bilayer. Here, we generalize this elastic model to account for the proposed oligomeric states and hydrophobic shapes of MscL. We find that the oligomeric state and hydrophobic shape of MscL are reflected in the energetic cost of lipid bilayer deformations. We make quantitative predictions pertaining to the gating characteristics associated with various structural models of MscL and, in particular, show that different oligomeric states and hydrophobic shapes of MscL yield distinct membrane contributions to the gating energy and gating tension. Thus, the functional properties of MscL provide a signature of the oligomeric state and hydrophobic shape of MscL. Our results suggest that, in addition to the hydrophobic mismatch between membrane proteins and the surrounding lipid bilayer, the symmetry and shape of the hydrophobic surfaces of membrane proteins play an important role in the regulation of protein function by bilayer membranes. A fundamental property of living cells is their ability to detect mechanical stimuli. Microbes, in particular, often transition between different chemical environments, leading to osmotic shock and concurrent changes in membrane tension. The tension of microbial cell membranes is detected and controlled by membrane molecules such as the widely-studied mechanosensitive channels which, depending on the tension exerted by the surrounding lipid bilayer, switch between closed and open states. Thus, the biological function of mechanosensitive channels relies on an interplay between bilayer mechanical properties and protein structure. 
Using a physical model of cell membranes it was shown previously that the basic phenomenology of mechanosensitive gating can be understood in terms of the bilayer deformations induced by mechanosensitive channels. We have generalized this physical model to allow for the molecular structures of mechanosensitive channels reported in recent experiments. Our methodology allows the calculation of protein-induced membrane deformations for arbitrary oligomeric states of membrane proteins. We predict that distinct oligomeric states and hydrophobic shapes of mechanosensitive channels lead to distinct functional responses to membrane tension. Our results suggest that the shape of membrane proteins, and resulting structure of membrane deformations, plays a crucial role in the regulation of protein function by bilayer membranes. The biological function of membrane proteins is determined by a complex interplay between protein structure and the properties of the surrounding lipid bilayer Acinetobacter baumanniiA paradigm of mechanosensation is the prokaryotic mechanosensitive channel of large conductance (MscL) In this article we address the above questions on the basis of the continuum elasticity theory of lipid bilayer membranes The basic experimental phenomenology of mechanosensitive gating is captured by a two-state Boltzmann model A deeper understanding of The continuum elasticity theory of membranes As mentioned above, the determination of the oligomeric state and, more generally, molecular structure of MscL in different conformational states is a problem of intense experimental interest Staphylococcus aureus (SaMscL) and Myobacterium tubercolosis (MtMscL). In particular, Escherichia coli (EcoMscL), hexameric A variety of different approaches have been employed The contour lines approximating the cross sections of the transmembrane domains in Following the approach summarized in et al.In In addition, The left-hand panel of We now turn to the dependence of the channel opening probability in In order to facilitate the systematic investigation of the connection between the oligomeric state and the gating energy of MscL in In analogy to Inspired by structural studies of MscL Our mathematical approach for determining the energetic cost of membrane deformations associated with different oligomeric states and hydrophobic shapes of MscL is general and directly applicable to other membrane proteins. Thus, the methodology developed here establishes a quantitative relationship between the oligomeric state and hydrophobic shape of a membrane protein and the elastic energy required to accommodate the membrane protein within the lipid bilayer membrane. However, the quantitative details of our predictions depend on the parameter values characterizing the hydrophobic shape of the membrane protein under consideration. In particular, crucial inputs for our model are the hydrophobic thickness and cross section of membrane proteins. Recent experimental results The physiologically relevant oligomeric states and molecular structures of MscL remain a matter of debate In accordance with the standard framework for describing elastic bilayer-protein interactions The terms The specific properties of MscL enter The membrane-mechanical model of bilayer-MscL interactions outlined above yields a qualitative framework for understanding MscL gating, is in broad agreement While the elastic model in We follow Refs. 
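The two-state Boltzmann description referred to above gives the channel opening probability as a function of membrane tension; the membrane contribution to the gating energy computed from the elastic model enters the free-energy difference between the open and closed states. The standard two-state form is assumed in the sketch below, with parameter values that are only roughly of the order reported for MscL rather than taken from the paper.

```python
import numpy as np

kT = 4.1e-21   # thermal energy at room temperature, J (approximate)

def p_open(tension, delta_G, delta_A):
    """Two-state Boltzmann opening probability of a mechanosensitive channel.

    tension  : membrane tension (N/m)
    delta_G  : gating free-energy difference (open minus closed) at zero
               tension (J); the membrane thickness-deformation energy
               discussed above contributes to this term
    delta_A  : in-plane area change of the protein upon opening (m^2)
    """
    return 1.0 / (1.0 + np.exp((delta_G - tension * delta_A) / kT))

# Hypothetical values of roughly the right order for MscL: a gating energy of
# ~60 kT and an area change of ~20 nm^2 give a gating tension near 12 mN/m.
tensions = np.linspace(0.0, 20e-3, 9)          # 0 to 20 mN/m
print(p_open(tensions, delta_G=60.0 * kT, delta_A=20e-18))
```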
Boundary curves are obtained by fitting the Fourier representation of The molecular structures in The clover-leaf shapes in In general, Thus, using The membrane deformation energy associated with the equilibrium deformation profile in To evaluate the integrals in The deformation profiles in The primary accession numbers (in parentheses) from the Protein Data Bank are: Pentameric MscL Click here for additional data file.Figure S2Membrane deformation energy of model inclusion shapes. Thickness deformation energy in (EPS)Click here for additional data file.Figure S3Gating energy of model inclusion shapes. Difference in thickness deformation energy between the open and closed states of generalized shapes of MscL obtained from (EPS)Click here for additional data file.Figure S4Gating probability of model inclusion shapes. Membrane contribution to the opening probability of generalized shapes of MscL obtained from (EPS)Click here for additional data file."} +{"text": "The authors measured the results of three oxidative stress markers in blood serum, not plasma. In the second sentence of the \"Methods\" section of the Abstract, \"Plasma levels\" should be \"Serum levels.\" In the penultimate sentence of the last paragraph of the \"Recording Clinical Parameters and Collecting Blood Samples\" section of the \"Subjects and Methods\", \"Plasma samples\" should be \"Serum samples.\" In the last sentence of the first paragraph of the \"Oxidative Stress Measurements\" section of the \"Subjects and Methods\", \"plasma levels\" should be \"serum levels.\" In the first sentence of the third paragraph of that same section, \"blood plasma\" should be \"blood.\" In the first sentence of the second paragraph of the \"Results\", \"plasma\" should be \"serum.\" Lastly, in the first sentence of the second paragraph of the \"Discussion\", \"plasma\" should be \"serum.\" Since the measurement system the authors used is compatible with both plasma and serum, any data presented and conclusions described in the article are not affected by the correction."} +{"text": "Angular deformities of the lower limbs are common during childhood. In most cases this represents a variation in the normal growth pattern and is an entirely benign condition. Presence of symmetrical deformities and absence of symptoms, joint stiffness, systemic disorders or syndromes indicates a benign condition with excellent long-term outcome. In contrast, deformities which are asymmetrical and associated with pain, joint stiffness, systemic disorders or syndromes may indicate a serious underlying cause and require treatment.Little is known about the relationship between sport participation and body adaptations during growth. Intense soccer participation increases the degree of genu varum in males from the age of 16. Since, according to some investigations, genu varum predisposes individuals to more injuries, efforts to reduce the development of genu varum in soccer players are warranted. In this article major topics of angular deformities of the knees in pediatric population are practically reviewed. This possibility is pertinent, but not yet investigated for the development of genu varum and playing soccer. However, several studies showed that the presence of genu varum predisposes an individual to various injuries. 
For example, genu varum has been associated with the deterioration of the articular cartilage in the knee's medial tibiofemoral compartment both to experimentally induced osteoarthritis and as a risk factor for osteoarthritis in a patient cohort \u20134.The presence of genu varum alters the forces at the knee so that the line of force shifts farther medially from the knee joint center intensifying the medial compartment load and creating a medial joint reaction force that is nearly three and a half times that of the lateral compartment . In addition to the development of tibio femoral osteoarthritis, the presence of genu varum seems also to predispose subjects to the occurrence of injuries at the patellofemoral joint. Several studies have identi?ed the presence of genu varum as a risk factor for the development of the patellofemoral pain syndrome in athletes \u201310.A lthough from clinical experiences an association between intense sports activities like soccer players and genu varum seems evident, today no scienti?c data regarding this association are available. Chantraine Since the knee malalignment is important in athletes, the development of normal tibiofemoral angle and common angular deformities in children and adolescents are explained in this article.. Finally the genu valgum spontaneously correct by the age of 7 years to that of the adult alignment of the lower limbs of 8 degrees of valgus in the female and 7 degrees in the male. The greater degree of valgus in females may be due to their wider pelvis.Genu varum (bow legs) and medial tibial torsion are normal in newborn and infants and maximal varus is present at 6 to 12 months of age. With normal growth, the lower limbs gradually straighten with a zero tibiofemoral angle by 18 to 24 months of age . With further normal development, knees gradually drift into valgus (knock-knee). This valgus deformity is maximal at around age 3\u20134 years with an average lateral tibiofemoral angle of 12 degrees Extrinsic and intrinsic factors may interfere with this normal angular alignment of the lower limbs.Bowlegs after 2 years of age are considered abnormal. It may be due to persistence of severe physiologic bowlegs (the most common etiology), a pathologic condition, or a growth disorder.History: Family history of bowlegs or other limb deformities and the presence of short stature may indicate the possibility of bone dysplasia or a generalized growth disorder. Physiologic genu varum improves with growth, whereas pathologic bowing of the legs increases with skeletal growth; therefore it seems important to ask the parents about:When they first noticed the deformity in the child.Were the legs bowed at birth and in infancy, or did the bowlegs develop later on when the child started walking?Is the deformity improving, staying the same, or increasing in severity?When did the child begin to stand and walk? Children with tibia vara (Blount's disease) are early walkers. 
Inquire as to the previous treatment and response to it.Etiologic factors: The physician should be aware of the dietary and vitamin intake of the patient and also should consider any allergy to milk, history of trauma or infections and inquire the possibility of exogenous metal intoxication, specifically lead and fluoride .Examination: Short stature suggests the possibility of vitamin D refractory (hypophosphatemic) rickets or bone dysplasia, such as achondroplasia or metaphyseal dysplasia ..In stance and supine the physician should measure the distance between the femoral condyles at the joint level with the ankles just touching each other. Ruling out the deformity of the feet especially pes or metatarsus varus or valgus which may represent torsional deformity of the limb is mandatory ; whereas in ligamentus hyperlaxity it is at the knee joint. In Blount's disease it is commonly at the proximal tibial metaphysis with an acute medial angulation immediately below the knee and in the congenital familial form of tibia vara it is at the lower tibia at the junction of the middle and the lower thirds . In the very rare distal femoral vara the site of angulation is in the distal femoral metaphysis. When the lower tibiae are the sites of varus angulation, the upper tibial segment is straight and the lower segment angulated.The site of varus angulation should be determined. In physiologic genu varum there is a gentle curve involving both the thigh and the leg with more pronounced bowing in the lower third of the femur and at the juncture of the middle and upper thirds of the tibia .Next, inspect the gait and determine the foot progression angle; in genu varum the foot progression angle may be medial or normal. When laxity and incompetence of the lateral collateral ligament of the knee are present, the fibular head and upper tibia shift laterally during gait; whereas, in physiologic bowlegs there is no such lateral thrust It is important to assess symmetry of involvement. In physiologic genu varum and congenital tibia vara it is usually bilateral and symmetric, whereas in Blount's disease it may be unilateral or bilateral, and when both tibiae are involved, the degree of affection is often asymmetric., the involved or more severely affected limb is shorter than the contralateral one; in physiologic genu varum the lower limb lengths are even.Measure both the actual and apparent limb lengths. In Blount's disease and in congenital longitudinal deficiency of the tibia .In the medially bowed leg, determine the level of the proximal fibula in relation to that of the tibia. Normally the upper border of the proximal fibular epiphysis is in line with the upper tibial growth plate \u2013 well inferior to the joint horizontal orientation line; whereas Blount's disease, congenital longitudinal deficiency of the tibia, and achondroplasia demonstrate relative overgrowth of the fibula, and the fibular epiphysis is more proximal, near the joint line Palpate the epiphysis of the long bones at the ankles, knees, and wrists. In rickets (vitamin D refractory or vitamin deficiency) they are enlarged. Inspect the thoracic cage. 
Is there a \u201crachitic rosary\u201d of the ribs, pectus carinatum deformity, or Harrison's groove?Imaging: Take radiograms when:A child is 3 years and older and the varus deformity is not improving or is getting worse,The medial bowing is unilateral or asymmetric,The site of varus angulation is acute in the proximal tibial metaphysis immediately below the knee,, short tibia and relatively long fibula, and history of possible metal intoxication (lead or fluoride).The possibility of a pathologic condition is suggested by other clinical findings. The clinical stigmata suggesting pathologic genu varum are short stature (bone dysplasia), enlarged epiphysis and physis (rickets), history of trauma or infection (meningococcemia) Standing long films should be made to include the hips, knees, and ankles. Proper positioning is important \u2013 knees straight and patella facing forward..The growth plates of the distal femur and proximal and distal tibia should be considered carefully. In physiologic genu varum they are normal. In rickets the physes are markedly thickened, the physeal borders of the epiphyses are frayed with a brush-like pattern of the bone trabeculae, the epiphyses are enlarged, the bone trabeculae are coarse, and the cortices of the diaphyses of the femurs and tibiae show decreased bone density Then, the epiphyses, metaphyses, and diaphyses should be inspected. In physiologic genu varum the bone seems normal without any sign of bone dysplasia. The medial bowing of the lower limb is a gentle curve, taking place at the junction of the middle and the proximal thirds of the tibiae and the distal thirds of the femurs. The horizontal joint lines of both the knee and ankle are tilted medially..List of conditions that cause pathologic tibia vara is given in Measure the metaphyseal-diaphyseal angle. In the physiologic genu varum it is less than 11 degrees, whereas in tibia vara it is greater than 11 degrees .In physiologic genu varum education and assurance of the parents is important and just follow its natural course by reassessing the child in 6 months. Orthopedic shoes are not effective in its prevention or management Metabolic deformities such as rickets could simply be corrected with medical treatment, i.e. calcium and vitamin D supplements..When severe genu varum is associated with severe medial tibial torsion and the metaphyseal-diaphyseal angle is 11 degrees or greater, a Denis Browne splint is prescribed with the feet (shoes) rotated laterally and with an 8 to 10-inch bar between the shoes. This is ordinarily worn only at night for a period not more than 3 to 6 months in order to correct excessive medial tibial torsion . It is difficult to calculate the exact age for hemiepiphysiodesis. Stapling is preferred by some authors .In the adolescent with severe genu varum with marked malalignment of the mechanical axis of the lower limbs, occasionally osteotomy of the tibia or hemiepiphysiodesis of the distal femur and/or proximal tibial physis is indicated to correct the deformity . The problem is the adolescent or the child over 8 years of age who present with moderate to severe knock-knees. The patient complains of pain in the thigh and/or calf and easy fatigability, the child walks with his knees rubbing together, feet apart and one leg swinging around the other. Frequently, parents are concerned with this form of gait. 
Due to malalignment and an increased Q angle of the quadriceps extensor mechanism, the patella subluxates laterally; hence, the patellofemural joint seems to be unstable. The shoes shows medial collapse of the upper parts and it is the result of abnormal weight-bearing forces on the ankle and foot . The parents seek active treatment and commonly believe that the deformity will result degenerative, crippling arthritis of the knee . In order to manage the problem properly, first we should determine the cause of abnormal genu valgum by careful history taking, physical examination, and appropriate imaging studies. The various causes of genu valgum are listed in Exaggerated genu valgum up to 7 years of age is physiologic and not pathologic History: The presence of positive family history and short stature in other members of the family will suggest the presence of bone dysplasia, such as multiple epiphyseal dysplasia, multiple metaphyseal dysplasia, multiple enchondromatosis (Ollier's disease), multiple hereditary exostosis, Ellis Van Creveld syndrome , or Morquio's disease . History of swollen and hot knees indicates rheumatoid arthritis . With increased circulation in the knee the tibia overgrows relative to the fibula. In congenital longitudinal deficiency of the fibula, genu valgum is common .Examination: For assessment of short stature and bony dysplasia the standing and sitting height of the patient should be measured . Inspect the alignment of the lower limb in stance. Measure the degree of genu valgum with a goniometer on the lateral side of the thigh-leg, the distance between the medial maleolli with the knee just touching. Genu requrvatum, if present, causes an apparent increase in the degree of deformity ..Asymmetry or unilateral involvement of the knee is suggestive of pathologic genu valgum. In walking, there is protective toeing-in to shift the foot medially so that the center of gravity falls in the center of the foot Ligamentous laxity may be the cause of knock-knees; therefore one should determine the stability of the collateral and cruciate ligaments of the knee. whereas in ligamentus hyperlaxity, congenital longitudinal deficiency of the fibula, or rheumatoid arthritis of the knee, it is at the knee and in metabolic bone disease and bone dysplasia at the distal femur or in both femur and the tibia. Tibia valga is usually associated with excessive lateral tibiofibular torsion thus the degree of tibial torsion should be assessed .The site of valgus angulation should be determined. Tibia valga, or greenstick fracture of the medial part of the proximal tibial metaphysic cause genu valgum at the proximal tibia . Exostosis and lower limb length inequality should be assessed by careful palpation of the epiphysis and metaphysis and limb length measurements, respectively .Iliotibial band contracture may cause tibia valga and its presence should be ruled out by an Ober test Imaging findings: In developmental genu valgum the epiphysis, physis, and metaphysis are normal. The horizontal axis of the knees and ankles is tilted laterally. No intrinsic bone disease is present..In pathologic genu valgum the radiographic features are usually characteristic, and diagnosis is readily made. When an osseous bridge across the lateral physis of the distal femur and proximal tibia is suspected, MRI should be performed. Bone age should be determined if a hemiepiphysiodesis is being planned . When the iliotibial band is contracted, passive stretching exercises for its stretching should be done. 
This may relief valgus deforming force of the knee.Special shoes are ineffective in prevention or treatment of genu valgum. If the feet are in valgus and foot strain is a complaint, foot orthotics, such as University of California Biomechanics Laboratory (UCBL) orthotics, are appropriate to support the foot. They do not correct the genu valgum but relieve foot strain, easy fatigability, and foot-calf pain The role of orthotics to control or correct genu valgum has not been proven and is controversial. Some authors do not recommend them. According to the body of literature, the only indication for a Knee Ankle Foot Orthosis (KAFO) is to support the knee ligaments and prevent them from overstretching. It is used in pathologic genu valgum..Hemiepiphysiodesis is done by stapling or fusing the medial part of the distal femoral and/or proximal tibial growth plates. Appropriate time is crucial. Despite all precautions, one may end up with overcorrection or undercorrection. Some authors prefer stapling over epiphysiodesis because it allows a certain amount of flexibility of timing \u201348.Osteotomy of the distal femur and proximal tibia is performed in the skeletally mature patient. Neurovascular structures, particularly the common peroneal nerve and tibial vessels, are at definite risk for injury. Gradual correction with an external fixator lessens the degree of neurovascular change In the adolescent with severe genu valgum and marked mechanical axis deviation, surgical correction is indicated. Two methods of surgical management are available:Angular deformities of the lower limbs are common during childhood and usually make serious concern for the parents. Most commonly these deformities represent normal variations of the growth and development of the child and needs no treatment except for observation and reassurance of the parents. Despite of the benign nature of physiologic or exaggerated physiologic genu varum and genu valgum, most of the pathologic causes need proper management by an orthopedic surgeon; hence, the importance of careful evaluation of the patients and determination of these pathologic causes is evident."} +{"text": "This is a descriptive and cross-sectional study done in 2008. The target pharmacies of this study were all the 3 teaching pharmacies affiliated with the Isfahan University of Medical Sciences. The data collecting template was prepared using the standard scientific methods according to the goals of this research The goals also nominated necessary items needed in economic profit evaluation. The data collection template was completed by reference to the teaching pharmacies financial documents and reports, used as a base for calculating the total income and the total costs in 2007-2008 financial year. The difference between these two balances showed the value of profits or loss. The profit/cost ratio was also calculated, using the proportion of the total income to the total costs. The collected data was statistically analyzed using the Excel software (Microsoft 2007). For the financial year 2007-2008, the difference between the total income and the total costs was -831.6 million Rials (excess costs to income) for the SHM pharmacy, + 25.4 billion Rials for the ISJ pharmacy and -429.5 million Rials for the AZH pharmacy. 
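The balance and ratio used in the evaluation reduce to two lines of arithmetic per pharmacy. The snippet below is only a sketch of that calculation, written in Python rather than the spreadsheet software used in the study; the income and cost figures are hypothetical placeholders, not the actual ledger values of SHM, ISJ or AZH.

# Hypothetical annual figures in million Rials; the real ledger values are not reproduced here.
pharmacies = {
    "SHM": {"income": 1050.0, "costs": 1881.6},
    "ISJ": {"income": 29000.0, "costs": 3600.0},
    "AZH": {"income": 2870.5, "costs": 3300.0},
}

for name, p in pharmacies.items():
    balance = p["income"] - p["costs"]   # profit (+) or loss (-), as reported in the study
    ratio = p["income"] / p["costs"]     # income/cost ratio used to compare the pharmacies
    print(f"{name}: balance = {balance:+10.1f} million Rials, income/cost ratio = {ratio:.2f}")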
According to our findings there is a strong requirement to improve the financial performance of all the three teaching pharmacies while maintaining a high standardard of teaching and educational affairs.Teaching pharmacies are amongst the important cornerstones of a healthcare system for drug supplying, pharmacy education and pharmacy practice research. Assessment of the Iranian healthcare system costs shows that after personnel charges, drug outlay is the second expensive factor. This great financial mass requires integral audit and management in order to provide costumers satisfaction in addition to financial viability. Teaching pharmacies are required to realize financial viability as well as providing several educational and drug servicing goals, which makes microeconomic analysis important. The aim of this study was to evaluate the financial performance of the teaching pharmacies affiliated with the Isfahan University of Medical Sciences (with the Health economics is a new branch of economic science, started in the early 1950s when health care system costs were noticed as a general basic need and therapeutic services became an industry . Health To the best of our knowledge, a limited number of studies have been performed and published that economically evaluate critical foci of the Iranian health care system including pharmacies \u201312. TowfNowadays, the ever increasing demand of patients for obtaining information about their physician-prescribed drugs create new expectations from community pharmacists. Pharmacist\u2019s high potentiality for delivering clinical information and their communication skills at the time of facing patients seem to help the drugstore managers to improve their financial profitability. The increasing demand of the pharmacy clients and patients who are seeking their needed specialized information about rational use of drugs made this ring of health chain much more important than it used to be. Most of the faculties of pharmacy in Iran, own at least one or more not-for-profit teaching pharmacy for the purpose of teaching and simulating the real situation of pharmaceutical care provision. These drugstores are known by the people as the governmental pharmacies. People generally believe that these drugstores are more trustable and they offer some drugs that are not ordinary found in other pharmacies and this fact has an undeniable direct positive effect on their financial turn over and economic balance.The importance of financial performance and its direct effects on income, teaching and service provision level in educational pharmacies, made good reasons for the design and conduction of present study which evaluates the above mentioned objectives for the teaching pharmacies affiliated with Isfahan University of Medical Sciences (IUMS). The results of present study are hoped to be useful for future management goals and the pharmaceutical policymakers and authorities.abbreviated for the confidentiality of the financial data), are located in three different geographical area of the city and each is next to a teaching medical center again affiliated with IUMS and the pharmaceutical services are open to public. SHM and ISJ pharmacies are the two most important drug servicing centers for special cases of drug supply and needed drugs in Isfahan province.The research protocol and methodology of this study as well as publication of the data was approved by the higher research committee of the School of Pharmacy and Pharmaceutical Sciences of the IUMS. 
This descriptive and cross-sectional study was conducted in 2008. The target pharmacies included the three teaching community pharmacies owned by the above mentioned school of pharmacy. These out-patient pharmacies named SHM, ISJ and ALZ (\u00ae (2003) software and commented according to the study aims.In order to prepare the evaluation data collection template, the goals and objectives of the study were initially reviewed by all authors carefully and the nominated necessary items needed in this financial and economic assessments were determined . Then thThe calculated financial indices of the pharmacies studied are summarized in As an introduction to this section, it should be mentioned that because of the lack of similar published studies on financial evalu-ation of teaching (out-patient) pharmacies and novelty of this issue in Iran, the results and ratios derived from them could not be discussed by comparing to other studies. Therefore, we have discussed the results with regard to the differences between the three teaching pharmacies as below.According to our findings the functional income/functional costs ratio for the SHM pharmacy was 56% . This raThe ISJ pharmacy with a substantial functional income of 29 billion Rials value had the Economical evaluation of ALZ pharmacy in the same way showed the ratio of functional income/functional costs equal to 87% . This shComparison of the mentioned sextet ratios in SHM pharmacy, ALZ drugstore and ISJ pharmacy revealed that the ISJ pharmacy not only has the highest income but also can cover its expenses by its income. This evidence is sufficient for representing the efficiency in enterprising. Considering the forth ratio , used to compare the organization financial properties with other organizations, helps to rank the three pharmacies economic performance as following:ISJ Pharmacy >ALZ Pharmacy >SHM PharmacyThe authors believe that similar economic evaluation is necessary as an important decision making element in other similar units.In support of improving the financial performance of above mentioned teaching pharmacies , according to our literature survey, analysis of findings and discussions with relevant experts, these solutions are offered which maybe regarded as topics for further study:Study the effects of functional emoluments intensification.Designing and performing plans for a more efficient role in the drug market.Search for means to increase the non-functional revenue and its availability assessment.The use of value engineering technique to help in avoiding unnecessary expenses and probable repeats.Strategic planning for effective human resources management of these pharmaciesRecognizing wastage of drug expenditure.Using efficient supervision ways to control pharmacy disbursements."} +{"text": "Traumatic damage to the central nervous system (CNS) destroys the blood\u2013brain barrier (BBB) and provokes the invasion of hematogenous cells into the neural tissue. Invading leukocytes, macrophages and lymphocytes secrete various cytokines that induce an inflammatory reaction in the injured CNS and result in local neural degeneration, formation of a cystic cavity and activation of glial cells around the lesion site. As a consequence of these processes, two types of scarring tissue are formed in the lesion site. One is a glial scar that consists in reactive astrocytes, reactive microglia and glial precursor cells. 
The other is a fibrotic scar formed by fibroblasts, which have invaded the lesion site from adjacent meningeal and perivascular cells. At the interface, the reactive astrocytes and the fibroblasts interact to form an organized tissue, the glia limitans. The astrocytic reaction has a protective role by reconstituting the BBB, preventing neuronal degeneration and limiting the spread of damage. While much attention has been paid to the inhibitory effects of the astrocytic component of the scars on axon regeneration, this review will cover a number of recent studies in which manipulations of the fibroblastic component of the scar by reagents, such as blockers of collagen synthesis have been found to be beneficial for axon regeneration. To what extent these changes in the fibroblasts act via subsequent downstream actions on the astrocytes remains for future investigation. After damage to the central nervous system (CNS) of adult mammals, regeneration of transected axons barely occurs. There is a growing view that severed central axons are capable of regeneration and that the failure to regenerate is due to the blocking effect of the scar formed at the lesion site. This scar consists in both glial (mainly astrocytic) and fibrotic components (Fitch and Silver Various kinds of inhibiting factors that are upregulated around the lesion site have been postulated to prevent the regrowth of severed axons beyond the lesion site. These include molecules of the chondroitin sulfate proteoglycan (CSPG) family , an inhibitor of Type IV collagen synthesis containing neurons in the hypothalamic arcuate nucleus. Since arcuate NPY neurons exert a potent orexigenic function, many experiments have been performed to examine the effect of their destruction by electrolytic or chemical lesions and surgical deafferentation of the projection. Alonso and Privat surgicalAdministration of gold thioglucose, a neurotoxic glucose analog, to mice increased their body weight and produced a hypothalamic lesion that extended from the ventromedial part of the hypothalamus . TRII binds to its specific ligand but TRI requires the presence of bound TRII to interact with TGF-\u03b2s . By sealing off the damage and restoring the BBB, the astrocytic reaction is protective. Both astrocytes and fibroblasts express abundant axon-repelling molecules. Suppression of TGF-\u03b2 signaling has been shown to be an effective tool for preventing formation of the fibrotic scar and has been reported to promote axonal regeneration without detrimental effects on the sealing process of damaged CNS."} +{"text": "Foci of tick species occur at large spatial scales. They are intrinsically difficult to detect because the effect of geographical factors affecting conceptual influence of climate gradients. Here we use a large dataset of occurrences of ticks in the Afrotropical region to outline the main associations of those tick species with the climate space. Using a principal components reduction of monthly temperature and rainfall values over the Afrotropical region, we describe and compare the climate spaces of ticks in a gridded climate space. The dendrogram of distances among taxa according to occurrences in the climate niche is used to draw functional groups, or clusters of species with similar occurrences in the climate space, as different from morphologically derived groups. We aim to further define the drivers of species richness and endemism at such a grid as well as niche similarities (climate space overlap) among species. 
Groups of species, as defined from morphological traits alone, are uncorrelated with functional clusters. Taxonomically related species occur separately in the climate gradients. Species belonging to the same functional group share more niche among them than with species in other functional groups. However, niche equivalency is also low for species within the same taxonomic cluster. Thus, taxa evolving from the same lineage tend to maximize the occupancy of the climate space and avoid overlaps with other species of the same taxonomic group. Richness values are drawn across the gradient of seasonal variation of temperature, higher values observed in a portion of the climate space with low thermal seasonality. Richness and endemism values are weakly correlated with mean values of temperature and rainfall. The most parsimonious explanation for the different taxonomic groups that exhibit common patterns of climate space subdivision is that they have a shared biogeographic history acting over a group of ancestrally co-distributed organisms. Factors that affect the life cycle of parasitic arthropods, like ticks, have been proposed as possible limiting factors for their ranges, and include host availability Differences in niches for tick taxa are quantified using observed occurrences of species and reflect a yet unknown conjunction of the environmental space of the species, the biotic interactions they experience and the habitats available to species and colonized by them Recent concerns over the impacts of climate trends on the distribution of ticks This paper is aimed to define the climate envelope for tick species recorded in the Afrotropical region and the relationships between groups of species, without explicit consideration of the geographical space. The study is not focused around the range of the ticks in the geographical space, but in the climate one. We describe and compare climate envelopes of ticks recorded in the Afrotropical region in a gridded climate space and we aim to define features of species richness and endemism at such a grid. We thus explicitly sought to describe the relationships among morphologically recognized taxa according to their strict positions in the climate niche and to consider how diversity of available niches may relate to speciation and divergence from a common pool of lineages. This study is thus intended to characterize the relationships of ticks and climate, without the restrictive effects of the geographical space. We sought to disentangle the relationships among species and the climate space, as a starting point for further research in the geographical space, which should stand on the dispersion mechanisms as descriptors of the distribution of these taxa as we know today.A multivariate analysis of monthly interpolated climate traits in the Afrotropical region produced 3 main axes, which accounted for the 89.1% of total variability. Axis 1 was loaded by and inversely related to the average annual temperature. Axis 2 was inversely correlated with the range between maximum and minimum temperatures: a large seasonal amplitude in temperature is related to negative values in this axis. Axis 3 was inversely related to total rainfall. Gradients of the first and second PCA axes draw the occurrences of the species, which are restricted to specific portions of such as axes.Hyalomma associates to portions of the climate space with high temperature and a large thermal amplitude, otherHyalomma. 
Systematic similarities among species in the same taxonomic group are not mirrored by the similarities of occurrences along climate gradients. Speciation processes of each morphologically-based, supraspecific group, derived into multiple branches colonizing different portions of the available climate conditions.The occurrence of Afrotropical ticks along gradients of the climate space produced a dendrogram of relationships among the species of ticks grouped Rhipicephalus display low within-group niche equivalency values, other supraspecific taxonomic clusters having a greater niche overlap. Species within each taxonomic lineage in Rhipicephalus tend to occur on portions of the climate space more different than expected by random occurrences, therefore maximizing the occupancy of the climate space and avoiding overlaps with other species of the same taxonomic group.The The highest index of species richness is associated to the lowest seasonality (lowest variation of temperature) in temperature values . AbsolutThe tick fauna associated to the Afrotropical region has long been recognized as to represent an unpaired chance to investigate the many factors giving shape to its current relationships. This framework accounted for biases introduced by spatial resolution and corrects observed occurrence densities for each region considering the availability of climate space. Such a characterization outcomed details about the composition of \u201cfunctional\" groups of species and their phyloclimatic relationships, hypothesizing about the nature of the species-environment relationships. As such, it would be appropriate to construct a general area cladogram (representing a single history of place) based on congruence among the area cladograms derived from phylogenetic analyses of the multiple co-distributed taxa The Afrotropical fauna of ticks is the result of a diversification of lineages into a variety of taxa currently associated with many biomes Hyalomma has, in some extent, both a taxonomic and functional identity, most probably because its high specialization towards warmer and drier sites. Every supraspecific group, as morphologically recognized today, has species associated to different portions of the climate niche. In some cases, species within the same taxonomic group share a small portion of climate space. These findings could be interpreted as a strategy by groups of close species evolving from the same genetic pool to exploit the portions of the climate space as much separated as possible. This would minimize competition among species genetically related (same taxonomic group), being restricted by geographical barriers that could effectively operate on the spread patterns over the climate space by restricting movements in the geographical level. The most parsimonious explanation for the different taxonomic groups that exhibit common patterns of climate space subdivision is that they have a shared biogeographic history. In other words, a common set of historical vicariant events has geographically structured a group of ancestrally co-distributed organisms In the case of ticks of the Afrotropical region, groups of taxa have evolved from the primitive pool along different lines of climate pressures and they have occupied different portions of the available climate space. 
The observed pattern of occurrences along climate gradients suggest that supraspecific assemblages of morphologically similar taxa have a long-standing association with one another and have attained a common pattern of climatically driven subdivision as a result of being subjected to the same environmental history. Specialization is still evident, and species belonging to the same taxonomic group do not tend to occur along similar gradients of climate space. The study of the niche-segregating clusters shows that functional groups of ticks are dissimilar to the taxonomic groups. Only the genus The mechanism by which the tick species may colonize and spread into new gradients of the climate space is largely unknown. Support for rapid niche shifts is found in diverse fields of ecology and evolution, its evidence reported in empirical studies of invasive species ecology, phylogenetic analysis and community ecology Highest richness of Afrotropical tick species has been found to be correlated with low thermal seasonality and, ranked second, warmer temperatures. Thus, the area of highest species richness is associated with zones where high species richness of birds and mammals is also concentrated The framework built in this paper may be potential drawbacks regarding the ecology of the ticks, the resolution of the climate dataset and the potential bias in reporting of tick collections. The first and the second are strongly linked. The immature stages of some species may have an endophilous behavior, i.e. connected to hosts that live in burrows or protected from hard environmental conditions Species composition is influenced by the combined effects of environmental traits, the portion of the geographical space dominated by those climate gradients and the dispersal properties of the species We determine climate niche occupancy for a set of tick records in Africa. The framework applies to comparison among any taxonomic, geographic or temporal group of occurrences, and involves the calculation of the density of occurrences and of climate gradients along axes. Those are the result of a multivariate analysis along the climate variables. We then calculated the niche overlap between species together with statistical tests of niche equivalency and the evaluation of richness and endemism of species as associated to coherent portions of the environmental space. Most of the analyses were done in R 2.7.2 www.tickbornezoonoses.org). Data from the later were used to update the taxonomic overview in the former and to produce a coherent dataset of records in Africa. The newest available records (after approximately the year 2000) was completed by a systematic search of the peer-reviewed literature between the years 2000 and 2010. The basic criteria included in both searches were the systematic generic or family names of the ticks, as found in the title or the abstract of the paper. Further reading of each reference searched for information about the specific name of the tick, and a strict reference to a site of capture. The site can be defined by its coordinates or by a name providing enough information to locate it in digital gazetteers or regional maps. Records were not included if they referred only to a generic name or if reported from a large administrative division (i.e. province) or if the information regarding the locality was ambiguous . Literature searches were concluded on 31 October 2010 and all citations meeting our search criteria were reviewed. A total of 10,628 records were included in this study. 
All the literature data were curated and determinations of taxa replaced with current taxonomical views if necessary, after a consultation to experts. The tick name as appearing in the final dataset is the one agreed in a recent review on the systematics of ticks Two compilations were primarily used regarding earlier surveys of ticks in the area of interest (since around the year 1950), namely the one reported out in ref. Haemaphysalis leachi and H. elliptica, see ref. Ixodes remains largely unexplored in the Afrotropical region Rhipicephalus adhered to the views summarized in ref. Some species were not included in this study because lack of agreement among experts Climate data were obtained from WorldClim r\u00d7r cells of a three dimensional volume each cell being a unique vector of climate conditions present at one or more sites of the geographic space.We consider the first three axes of the principal components analysis of the monthly climate variables as definition of the climate space of the ticks ijkO for each cell is calculated as:The occurrences of each species in each unique cell of the grid was used to map the occurrence of the ticks in the climate space. The number of occurrences of a tick species is dependent on sampling strategy, and a dataset may not entirely reflect the actual distribution of a species. We adhered to published methods Most of the calculations were done in R 2.7.2 Niche overlap between any two species involves three steps: (1) calculation of the density of occurrences and of climate factors along the axes of the multivariate analysis before (2) measurement of niche overlap along the gradients of this multivariate analysis and (3) statistical tests of niche equivalency The comparison of We built from the methodology previously described Measures of species richness are statistical measures. There is no consensus on best estimators of species richness. In this study we sought to investigate how a range of climate factors drive taxonomic diversity in the ticks reported in the Afrotropical region. More specifically, we tested whether regions with a given gradient in the axes of climate space may harbor different diversity. We purposely remain general in scope to explore how hypotheses can be extrapolated to the taxonomic diversity of a group of ectoparasites.We computed species richness weighting the number of smoothed occurrences at each vector of the climate space with the total values of occurrences. To compute species endemism in the climate space, we estimated the number of species present at every grid squares of the three dimensional climate space. Each species is down-weighted by the number of grid squares in which it occurs, and the index is then the sum of the range-down-weighted species values for each grid. The down-weighting was calculated by dividing each grid-occurrence by the total number of grids in which that species occurs. Thus a species restricted to a single grid would be scored as \u20181\u2019 for that grid, and \u20180\u2019 for all other grids; a species found in two grids, would be scored as \u20180.5\u2019 for each of the two grids, and \u20180\u2019 for all other grids; a species found in three grids would be scored as \u20180.333\u2019 for each of the grids, and \u20180\u2019 for all other grids, etc."} +{"text": "The later waves are thought to originate from indirect, trans-synaptic activation of PTNs and are termed \u201cI\u201d waves. 
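The gridded climate space, niche overlap and range-down-weighted endemism calculations described above for the Afrotropical tick dataset can be sketched in a few lines. The example below is illustrative only: the original analyses were performed in R 2.7.2, whereas this sketch uses Python with numpy and scikit-learn; the climate matrix and occurrence records are random placeholders, only two principal components are kept instead of three, kernel smoothing of the occurrence densities is omitted, and Schoener's D is assumed as the overlap statistic, which the text does not name explicitly.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Hypothetical inputs: 'climate' holds monthly temperature and rainfall values for each
# map pixel, and 'records' maps each species to the pixels where it was recorded.
climate = rng.normal(size=(5000, 36))
records = {
    "species_A": rng.choice(5000, 300, replace=False),
    "species_B": rng.choice(5000, 250, replace=False),
    "species_C": rng.choice(5000, 120, replace=False),
}

# 1. Reduce the monthly climate variables to principal components
#    (the study kept the first three axes; two are kept here for brevity).
scores = PCA(n_components=2).fit_transform(climate)

# 2. Divide the climate space into an r-by-r grid and express each species'
#    records as a density of occurrences per grid cell (smoothing omitted).
r = 25
edges = [np.linspace(scores[:, k].min(), scores[:, k].max(), r + 1) for k in range(2)]

def occupancy(pixel_idx):
    counts, _, _ = np.histogram2d(scores[pixel_idx, 0], scores[pixel_idx, 1], bins=edges)
    return counts / counts.sum()

density = {sp: occupancy(idx) for sp, idx in records.items()}

# 3. Niche overlap between two species, assumed here to be Schoener's D.
def schoener_D(p, q):
    return 1.0 - 0.5 * float(np.abs(p - q).sum())

print("overlap D(A, B) =", round(schoener_D(density["species_A"], density["species_B"]), 3))

# 4. Richness and range-down-weighted endemism per grid cell: a species occupying
#    n cells contributes 1/n to each of them, following the weighting rule above.
presence = {sp: (d > 0).astype(float) for sp, d in density.items()}
richness = sum(presence.values())
endemism = sum(p / p.sum() for p in presence.values())
print("maximum richness per cell:", int(richness.max()))
print("maximum endemism score per cell:", round(float(endemism.max()), 3))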
The anatomical and computational characteristics of a canonical microcircuit model of cerebral cortex composed of layer II and III and layer V excitatory pyramidal cells, inhibitory interneurons, and cortico-cortical and thalamo-cortical inputs can account for the main characteristics of the corticospinal activity evoked by TMS including its regular and rhythmic nature, the stimulus intensity-dependence and its pharmacological modulation. In this review we summarize present knowledge of the physiological basis of the effects of TMS of the human motor cortex describing possible interactions between TMS and simple canonical microcircuits of neocortex. According to the canonical model, a TMS pulse induces strong depolarization of the excitatory cells in the superficial layers of the circuit. This leads to highly synchronized recruitment of clusters of excitatory neurons, including layer V PTNs, and of inhibitory interneurons producing a high frequency (~670 Hz) repetitive discharge of the corticospinal axons. The role of the inhibitory circuits is crucial to entrain the firing of the excitatory networks to produce a high-frequency discharge and to control the number and magnitude of evoked excitatory discharge in layer V PTNs. In summary, simple canonical microcircuits of neocortex can explain activation of corticospinal neurons in human motor cortex by TMS.Although transcranial magnetic stimulation (TMS) activates a number of different neuron types in the cortex, the final output elicited in corticospinal neurones is surprisingly stereotyped. A single TMS pulse evokes a series of descending corticospinal volleys that are separated from each other by about 1.5 ms . This evoked descending corticospinal activity can be directly recorded by an epidural electrode placed over the high cervical cord. The earliest wave is thought to originate from the Transcranial magnetic stimulation (TMS) and transcranial electrical stimulation (TES) can activate the human brain through the intact scalp and was therefore termed \u201cD\u201d wave. The later waves evoked by cortical stimulation required the integrity of the cortical gray matter, and were thought to originate from indirect, trans-synaptic activation of PTNs and were therefore termed \u201cI\u201d waves direction, the descending volleys have slightly different peak latencies and/or longer duration than those induced by PA stimulation, and the order of recruitment of the descending corticospinal waves may change with late I-waves already evoked at TMS intensity close to MEP threshold depressed by enhancement of neurotransmission through the inhibitory gamma-aminobutyric type A receptor (GABAAR) by benzodiazepines to the test stimulus, and delivered either through the same stimulating coil for exploration of circuitry within M1, or through another stimulating coil for exploration of cortico-cortical connections to M1 [for review, of MEPs in hand muscles is produced by conditioning the cortical magnetic stimulus with electrical stimulation of sensory peripheral nerves of the hand repetitive discharge of the corticospinal axons. The role of the inhibitory circuits is crucial to entrain and control the firing of the excitatory networks to produce a high frequency discharge (Douglas et al., It should be considered, however, that the attempt to explain the physiological basis of TMS using the canonical cortical circuit has several major limitations because the model used in this study is extremely simplistic. 
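The ~1.5 ms spacing of the descending volleys and the roughly 670 Hz discharge rate quoted in this article are two statements of the same number. The short sketch below only makes that arithmetic explicit; the D-wave latency is a hypothetical placeholder, not a recorded epidural value.

# Idealized latencies of the D-wave and the first four I-waves after a single TMS pulse,
# assuming the ~1.5 ms inter-volley interval cited above.
d_wave_latency_ms = 2.0      # hypothetical placeholder
inter_volley_ms = 1.5
volley_latencies = [d_wave_latency_ms + k * inter_volley_ms for k in range(5)]  # D, I1..I4

discharge_rate_hz = 1000.0 / inter_volley_ms  # about 667 Hz, matching the ~670 Hz quoted
print("volley latencies (ms):", volley_latencies)
print("implied repetitive discharge rate: %.0f Hz" % discharge_rate_hz)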
It should be considered that the canonical circuit we adopted is composed of a minimum of elements and connections and thus it can capture only the essence of the function of cerebral cortex. The interaction between TMS and cerebral cortex is much more complex in that there is a great number of classes of cortical neurons and connections that can be activated by TMS but were not considered in the present paper. Moreover, this simple model cannot easily been used to explain the interaction between TMS and cortical circuits in pathological conditions characterized by structural or functional changes in cerebral cortex.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "One of the more perplexing ground water problems currently facing Bangladesh is the high concentration of arsenic in drinking water, which poses a relatively large risk to human health of this region. Traditional health practitioners (THPs) of Bangladesh primarily use medicinal plants for treatment of various ailments. The selection of any medicinal plant is a closely guarded secret and is usually kept within the family. As a result, the use of medicinal plants varies widely between THPs of different areas within the country, and is based on both medicinal plant availability and the THP\u2019s unique knowledge derived from practice. The aim of this present study was to conduct a survey amongst the THPs to learn more about the medicinal plants used to treat one or more arsenic related infections in the Satkhira district of Bangladesh. This area is unique in its proximity to the Sunderbans forest region and contains quite different medicinal plants compared to other parts of the country because of high salinity in the soil and water.Semi-structured questionnaires were administered to twenty-four traditional health practitioners to evaluate the THPs' perceptions and practice relating to causation and treatment of one or more arsenic related infections. The THPs described the signs, symptoms, and cause of one or more arsenic related infections. Details of the preparation and use of medicinal plants for management of one or more arsenic related infections were recorded.In the present study, forty-one medicinal plant species belonging to thirty-nine genera and twenty-eight families were found to be used to treat one or more arsenic related infections in the Satkhira district.Information on indigenous use of medicinal plants has led to discovery of many medicines in use today. Scientific studies conducted on the medicinal plants may lead to discovery of more effective drugs than in use at present."} +{"text": "Objective: A hemisplit turndown tibialis anterior muscle flap is described for coverage of distal leg wounds with preservation of active extensor function for open wounds of the distal ankle is presented. This is a new flap not previously described and is another local option for coverage of selected distal leg wounds. Methods: A description of the operative procedure and a clinical successful example is presented. Results: The split hemitibialis anterior turndown muscle flap was successful and preserved function of the muscle and tendon. Conclusions: This is another option for coverage of difficult wounds of the lower extremity without sacrifice of function of the donor muscle. 
There are many options for the coverage of soft tissue defects in the distal third of the leg ranging from local flaps to microvascular free-tissue transfer.12 ulcer with exposed tibialis anterior tendon. Initially, the wound was managed conservatively with debridement and local wound care. This resulted in a 2 \u00d7 2 cm2 defect at the level of the ankle with exposure of the tendon of the anterior tibialis muscle that would not heal after several months of conservative wound care (2 full-thickness defect of the distal ankle with exposed tibialis anterior tendon. Distal pulses of the foot were intact and palpable. He underwent surgical treatment consisting of a split hemitibialis turndown muscle flap to cover the wound (A 53-year-old man sustained a severely comminuted fracture of the left ankle, from a crush injury at a construction site, which required open reduction and internal fixation with screws and plate. Subsequently, skin slough of the dorsal aspect of the ankle incision resulted in a 6 \u00d7 6 cmund care . He was he wound . The mushe wound .The split hemianterior tibialis turndown muscle flap was designed on the basis of the anatomic characteristics of its vascular supply, and the procedure is as follows.With the patient supine on the operating room table, under spinal anesthesia, the left lower extremity was prepped and draped in a sterile fashion. A pneumatic tourniquet was applied. The dorsal incision was opened and carried cephalad along the anterior tibia, providing access to the anterior tibialis muscle. The wound was debrided. The lateral portion of the tibialis anterior muscle was split longitudinally, taking about one half of the muscle bulk commencing cephalad and coursing inferiorly and distally while preserving the tibialis anterior tendon. One-half of the muscle bulk was turned inferiorly on a series of distal intermuscular pedicles, by incising the muscle from cephalad to caudad, taking as few intermuscular perforators as possible to maintain blood flow and still permit turndown of the muscle flap to reach the defect. Care was taken not to completely separate the split muscle and to maintain a muscular connection with the remaining muscle bulk to permit preservation of the intermuscular vascular anatomy. Once the split turndown muscle flap reached the defect, the tourniquet was released. Bleeding points were clamped and tied. There was evidence of arterial vascular circulation in the muscle flap and the most distal portions of it appeared to have vascular circulation. The flap was inserted and the incision was closed. Xenograft dressing was placed over the muscle flap temporarily to enable flap viability assessment. Four days later, a split-thickness autologous skin graft was applied to the split turndown tibialis anterior muscle flap, which was completely viable and resulted in complete skin graft adherence to the muscle and graft survival with dorsiflexion function of the tibialis anterior tendon maintained.Wounds or defects in the lower leg are difficult to treat as the vital structures are covered with only skin and minimal soft tissue. The repair of these wounds either requires tissue with adequate blood supply or a microvascular free-tissue transfer. Over the last 3 decades, free flaps have been the treatment of choice for these hard to cover wounds. 
Transpositions of local muscles on a distal pedicle are not consistently reliable and alternatives include the cross-leg flap and the use of free flaps.The choices of local flaps available for the distal third of the leg include the dorsalis pedis island,,The tibialis anterior partial muscle flap has been described to cover defects localized in the upper third and middle third of the tibial shaft, but not for wounds of the distal leg.1Several muscles have been split for coverage of difficult wounds. Robbins was the first to turn over the superficial part of the muscle as a fasciomuscle flap to cover an exposed tibia in the middle third of the leg, without causing a functional deficit.Splitting a muscle flap is not a new concept and has been described for other muscle flaps, which have been split including the latissimus dorsi and the pectoralis major muscle flap. The concept is based on the intermuscular vascular anatomy. The reach of the tibialis anterior muscle flap based on a proximal pedicle to cover the defects in the distal third of the leg is limited due to segmental short vascular pedicles. Extending the reach by the complete interruption of the musculotendinous unit leaves a considerable functional deficit and therefore is not recommended in ambulatory patients.14,The tibialis anterior muscle originates from the lateral condyle, upper half of the lateral surface of tibia, interosseous membrane, and crural fascia. The tendinous structure runs in the center of the muscle, becomes cordlike at the lower part, and inserts onto the first metatarsal and medial cuneiform bone, and it is responsible for dorsiflexion and inversion of the foot.1An ankle soft tissue defect with exposed tendon and bone was covered by splitting the tibialis anterior muscle longitudinally and turning down the split muscle flap on a distal intermuscular pedicle, while preserving its function. This turndown flap technique, to our knowledge, has not been previously described but is based on sound anatomical concepts. The split hemitibialis anterior turndown muscle flap was successful in covering an ankle wound with exposed tendon and bone and preserved active dorsiflexion of the ankle and active extension of the great toe despite using a portion of the muscle for wound coverage. The wound coverage remained stable at 6 months follow-up with active ambulation and no wound healing issues. Active dorsiflexion of the foot and active extension of the great toe were present.The split hemitibialis anterior turndown muscle flap, for distal lower leg wounds, adds another local procedure to the armamentarium of a reconstructive plastic surgeon, a simple but effective local flap procedure to manage these difficult soft tissue defects in selected patients with adequate vascular supply of the limb."} +{"text": "Early olfactory deprivation in rodents is accompanied by an homeostatic regulation of the synaptic connectivity in the olfactory bulb (OB). However, its consequences in the neural sensitivity and discrimination have not been elucidated. We compared the odorant sensitivity and discrimination in early sensory deprived and normal OBs in anesthetized rats. We show that the deprived OB exhibits an increased sensitivity to different odorants when compared to the normal OB. Our results indicate that early olfactory stimulation enhances discriminability of the olfactory stimuli. 
We found that deprived olfactory bulbs adjusts the overall excitatory and inhibitory mitral cells (MCs) responses to odorants but the receptive fields become wider than in the normal olfactory bulbs. Taken together, these results suggest that an early natural sensory stimulation sharpens the receptor fields resulting in a larger discrimination capability. These results are consistent with previous evidence that a varied experience with odorants modulates the OB's synaptic connections and increases MCs selectivity. Neuronal representations of sensory stimuli are shaped by sensory experience and the modification of these representations may underlie changes in perceptual abilities. The neuronal representations in vertebrates initiate with the activation of the olfactory receptor neurons (ORN) by odorants. The ORNs, expressing the same receptor molecule In this study we examined the properties of the MC activity changes induced by early sensory deprivation in terms of neural sensitivity. Sensitivity is defined as the fraction of neurons that show positive responses (excitatory and inhibitory) to Our results show that despite the remarkable anatomical changes in the early deprived OB, MCs ongoing and odorant triggered activity is comparable in both the normal and deprived olfactory bulb. Specifically, in the absence of olfactory stimulation, the MCs firing rate is similar in deprived and normal OBs, consistent with the homeostatic hypothesis Surgical and experimental techniques described in detail in Unitary activity was recorded with a Olfactory stimuli were presented with a custom made olfactometer by a PC controlled solenoid valves. Pressurized air, from commercially purified tanks, previously humidified was streamed to an empty tube or a tube with an odorant diluted in mineral oil for the MCs that did not exhibited odorant responses were different in the deprived and normal OB. As expected, the mean ratios were not significantly different in both conditions see . These rTaken together, these results indicate that early sensory deprivation likely induces an homeostatic adjustment of the level of excitatory and inhibitory sensory induced activation in the OB. Notwithstanding, there is an increase sensitivity to different odorants in the deprived OB.Odorants activate a distributed combination of glomeruli representing a spatial code We define a set of binary numbers which represents the probability of having The main objective of this work was to compare the properties of the MCs discharge from deprived and normal OBs in anesthetized rodents and estimate, from the theoretical standpoint, the discriminability and storage capacity of deprived and normal OBs. Our results show that the deprived OB maintains the basal level of activity in the absence of odorant stimulation, in agreement with homeostatic mechanisms that keep the system within a sensitive range to external stimulation. Homeostatic mechanisms for activity dependent excitability and synaptic strength regulation have been previously described in invertebrates as a result of action potential blockade In the presence of olfactory stimulation we found an increase in the incidence of excitatory and inhibitory responses in MCs from deprived OB when compared to the normal OB, indicating regulation of the activity levels during odorant stimulation. In summary, these results demonstrate an overall increase in the sensitivity of the deprived OB to olfactory stimuli. 
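The discrimination argument developed in the following discussion, namely that a lower activation level reduces the overlap between odor-evoked activity patterns, can be illustrated with a few lines of simulation. The sketch below is not the authors' model: it simply draws random binary activation patterns at two activation probabilities, standing in loosely for the normal and deprived bulbs, and compares their mean pairwise overlap.

import numpy as np

rng = np.random.default_rng(1)

def mean_pairwise_overlap(n_patterns, n_cells, p_active):
    # Mean fraction of cells that are co-active in two random binary odor representations.
    patterns = rng.random((n_patterns, n_cells)) < p_active
    overlaps = [np.mean(patterns[i] & patterns[j])
                for i in range(n_patterns) for j in range(i + 1, n_patterns)]
    return float(np.mean(overlaps))

n_patterns, n_cells = 20, 500
for label, p in [("sparse (normal-like)", 0.1), ("dense (deprived-like)", 0.3)]:
    print(f"{label}: activation p = {p:.1f}, mean pairwise overlap = "
          f"{mean_pairwise_overlap(n_patterns, n_cells, p):.3f}")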
Despite the regulation of the overall OB activity levels during baseline and odorant stimulation, the deprived OB MCs activation patters are consistent with a reduced discrimination ability. In other words, the number of neurons involved in stimulus coding is larger in the deprived OB when compared to the normal OB. This reduction in the sensitivity of MCs is due to an increase in the excitatory as well as the inhibitory responses.The adjustment of the overall OB activity levels during baseline and odorant stimulation is apparently inconsistent with a reduction in the inhibitory input onto MCs Other studies about the effect of early sensory deprivation on the olfactory pathway have shown an increase in the epithelial response to odorants The consequences of the differences in the sensitivity of deprived and normal MCs can be explained in terms of stimuli discrimination, where the normal OB has a clear advantage in this sense. In a system with low activation levels, like the normal OB, the percentage of overlap is significantly reduced as shown in As described in the last section, the OB needs to balance between the ability to discriminate different odorants and the potential to store different odorants, i.e. storage capacity. We show that there is a negative relation between discrimination and storage capacity, the higher the system discrimination the lower the system storage capacity . A systeThe olfactory system detects, discriminate and identifies hundreds of different odorants which could be a single molecule type or a combination of several compounds. Our study aimed to compare the functional responses of MCs in normal and deprived OBs. The low number of odorants and the use of anesthetized animals are limitations of this study. We used a low number of odorants because the time necessary to test a higher number of odorants would substantially reduce the number of sites recorded for each animal, and increase the number of animals required. Furthermore, the use of anesthetized animals in this study minimized the firing rate variability due to the animals active modulation of the respiratory cycle. It is well known that the respiratory cycle is highly modulated in awake rodents by several factors such as novelty, previous learning, stimulus meaning such as appetite or aversive, etc. In our recordings, there was a constant respiratory rate reducing the variation of the firing rate due to the respiratory rate see .In summary, we compared the ongoing and odorant induced MCs activity in the normal and deprived OBs from the same animal. We have shown that the deprived OB retains a basal level of activity suggesting an homeostatic mechanism to keep the system in a sensitive range to external stimulation. Furthermore, the deprived MCs increase their excitatory and inhibitory responses when compared to the normal MCs during odor stimulation. We show an overall increase in the sensitivity of the deprived OB to olfactory stimuli versus normal OB. This means, that the number of neurons involved in stimulus coding is larger in the deprived OB when compared to the normal OB. Finally, we show from the theoretical standpoint, that in a system with low activation levels , the percentage of overlap is significantly reduced, increasing the discrimination between activity patterns induced by different odorants."} +{"text": "There are a large number of commercially available milk formulae labelled as \"hypoallergenic\". 
However, only a minority of these comply with the criteria established in the guidelines of Subcommittee on Nutrition and Allergic Disease of the American Academy of Pediatrics.As far as the treatment of cow's milk allergy is concerned, the extensive hydrolysed protein formulae and aminoacid-based formulae are the only two preparations that meet the standards required for hypoallergenicity, defined as absence of reactions in 90% allergic patients with 95% confidence. However, even in these cases there is great variability in the content of the extensive hydrolysed formulas on the market and, for some of them, the clinical data in support of the claim of hypoallergenicity are missing. In addition, other products known as \"partially hydrolysed formulae\" have previously been advertised as being safe for cow\u2019s milk allergic patients but turned out to be inadequate and responsible for anaphylactic reactions in many cases. These data underline the fact that it is mandatory to define the criteria in terms of peptidic content and preclinical profile of any formula put on the market as hypoallergenic formula for treatment of cow's milk allergy.In the case of prevention of cow\u2019s milk allergy, the data currently available are incomplete since no study has yet been published that meets all the criteria recommended by the American Academy of Pediatrics. Nonetheless, the studies conducted to date seem to indicate a greater efficacy of extensive hydrolysed protein formulae over partially hydrolysed formulae, although the latter may present nutritional advantages and lower cost.In conclusion, further efforts are required in the characterisation of the commercially available milk formulae used for treatment and prevention of cow\u2019s milk allergy. In the absence of well-documented studies proving the prophylactic value of partially hydrolysed formulae, children at high risk of atopy should be fed with a prophylactic hypoallergenic diet based on extensive hydrolysed formulas."} +{"text": "A prominent feature of many intracellular compartments is a large membrane surface area relative to their luminal volume, i.e., the small relative volume. In this study we present a theoretical analysis of discoid membrane compartments with a small relative volume and then compare the theoretical results to quantitative morphological assessment of fusiform vesicles in urinary bladder umbrella cells. Specifically, we employ three established extensions of the standard approach to lipid membrane shape calculation and determine the shapes that could be expected according to three scenarios of membrane shaping: membrane adhesion in the central discoid part, curvature driven lateral segregation of membrane constituents, and existence of stiffer membrane regions, e.g., support by protein scaffolds. The main characteristics of each scenario are analyzed. The results indicate that even though all three scenarios can lead to similar shapes, there are values of model parameters that yield qualitatively distinctive shapes. Consequently, a distinctive shape of an intracellular compartment may reveal its membrane shaping mechanism and the membrane structure. The observed shapes of fusiform vesicles fall into two qualitatively different classes, yet they are all consistent with the theoretical results and the current understanding of their structure and function. 
Many intracellular compartments, such as the Golgi apparatus and the endoplasmic reticulum, exhibit flattened shapes with a large membrane surface area relative to the luminal volume, i.e., they have a small relative volume. Since the small relative volume may well be intertwined with the function of these organelles, understanding the mechanisms of their shape generation is of great interest. Different organelles often show similar morphological features despite expressing very different sets of proteins. A large part of the morphological analyses of organelles has thus focused on the mechanisms of shaping of the lipid membrane, which is their universal structural backbone Theoretical studies of membrane shapes have proved fruitful, yet they were primarily focused to lipid membrane compartments of relatively large relative volumes, e.g., lipid vesicles and red blood cells A remarkable example of intracellular compartments with a small relative volume are the fusiform vesicles (FVs) of urinary bladder umbrella cells, which constitute the blood-urine barrier tissue . In the In this study we present a systematic theoretical analysis of discoid membrane shapes with small relative volumes, and then we quantify the morphology of FVs and compare it to the theoretical results. Specifically, we build on three existing extensions of the standard approach to lipid membrane shape calculation and determine the shapes that could be expected according to three proposed scenarios of membrane shaping, i.e., membrane adhesion, curvature driven lateral segregation of membrane constituents and emergence of stiffer membrane regions with a defined spontaneous curvature. We find that both classes of observed FV shapes are consistent with the theoretical results and the current understanding of FV structure.Animals were treated in accordance with European guidelines and Slovenian legislation. The experimental protocol was approved by the Veterinary Administration at the Ministry of Agriculture, Forestry and Food of Republic of Slovenia (permit number: 34401\u20135/2009/4).http://imagej.nih.gov/ij/). On each FV we measured the maximum length of the vesicle profile, thickness of the vesicle lumen in the central part of the vesicles, the lengths of thickened and unthickened membranes.Three albino ICR mice (CD-1) were fed standard laboratory chow and water was available ad libitum. At the age of five weeks, they were killed with COIn this section, we present the standard theoretical framework of lipid membrane shapes and discuss the challenges related to assessing shapes with small relative volumes. In the next three sections we will then employ three established extensions of the standard theory which will serve to describe the discoid shapes according to three scenarios that have been proposed for organelle membrane shaping .area difference elasticity (ADE) model The shapes of closed lipid membrane compartments in equilibrium are the shapes that correspond to the minimum of the elastic energy of the membrane at given values of external parameters, e.g., the compartment volume. 
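For orientation, one commonly quoted form of the ADE elastic energy and the usual definition of the relative (reduced) volume are written out below. These expressions are supplied as an assumption for the reader, since the original equations did not survive extraction, and conventions for the numerical prefactor of the non-local term vary between papers. Here k_c and k_r are the local and non-local bending constants, C_1 and C_2 the principal curvatures, C_0 the spontaneous curvature, A the membrane area, h the distance between the neutral surfaces of the two leaflets, \Delta A the leaflet area difference and \Delta A_0 its preferred value.

% A commonly used form of the ADE bending energy (prefactor conventions vary between papers):
W_{\mathrm{ADE}} = \frac{k_c}{2} \oint_A \left( C_1 + C_2 - C_0 \right)^2 \, \mathrm{d}A
                 + \frac{k_r}{2 A h^2} \left( \Delta A - \Delta A_0 \right)^2

% Relative (reduced) volume: the enclosed volume V normalized by the volume of a
% sphere with the same membrane area A:
v = \frac{V}{\tfrac{4\pi}{3} R_s^{3}}, \qquad R_s = \left( \frac{A}{4\pi} \right)^{1/2},
\qquad \text{i.e.} \quad v = \frac{6 \sqrt{\pi}\, V}{A^{3/2}}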
According to the standard model of elasticity of the lipid bilayer, the Membrane elastic energy is scale invariant, i.e., it does not depend on the actual size of a compartment but rather on its relative shape The Gaussian bending term is not explicitly present in the Euler-Lagrange equations, rather it is a part of the boundary conditions that arise from the variation of the functional The homogeneous membrane ADE model was a basis for a number of successful studies of membrane shapes with a very good agreement between theoretically calculated shapes and experimentally observed shapes of giant lipid vesicles and red blood cells, which have relative volumes contact . At relaAdhesion between flat membrane regions has been proposed as one of the possible stabilizing mechanisms for the flattened compartment geometry in the Golgi Minimization of the total energy, The coexistence of a highly curved membrane in the rim and a relatively flat membrane in the central discoid part can be stabilized by a nonhomogeneous distribution of membrane constituents. For example, segregation of conical molecules into the rim and cylindrical molecules into the central membrane regions leads to a local spontaneous membrane curvature that matches the actual membrane curvature and thus relaxes the bending stresses in the membrane . In the Here, we will focus on the dependence of the local membrane spontaneous curvature on the intrinsic shape of membrane constituents, which can vary considerably among different lipid species The coupling between the membrane composition and its spontaneous curvature can be described as A seminal study of the curvature driven segregation Note that although no additional parameters are needed to describe the membrane shapes within the weak lateral segregation scenario, the calculation of the shapes with large The third scenario addressed in the present analysis involves membranes that are composed of distinct regions with markedly different mechanical properties and well-defined boundaries. For example, large proteins accumulating in certain regions of the membrane may act as a protein scaffold and impose their intrinsic curvature to the membrane Within this scenario, the discoid membrane compartments can be described by two connected ADE regions, one being stiffer with a defined spontaneous curvature, and the other being normal with a vanishing spontaneous curvature. The minimization of the total elastic energy of the membrane then leads to two coupled sets of the standard ADE Euler-Lagrange equations with separate sets of ADE parameters. It turns out, that the shape is not affected by the absolute stiffness of the two regions but rather by the relative stiffness of the stiffer one Clearly, the parameter space in this case is rather large, and therefore the present analysis will focus to the effects of the three most relevant parameters: the relative stiffness of the stiffer membrane region, its relative size and its spontaneous curvature. All other parameters will be held in their plausible range, e.g., the value of the Gaussian bending constant will be The analysis presented does not take into account two properties that generally play a role in the mechanics of protein scaffolding and multicomponent membranes. First, a possible shear rigidity of the protein scaffold has been neglected. 
It can be shown theoretically that the shear rigidity does not influence the equilibrium membrane shape in the limit of small deformations of a nearly flat scaffold and slightly raised unthickened membrane (hinge) regions. The cytoplasm of umbrella cells contained numerous flattened FVs . We anal2 nm . Vesicle2 nm .c, In order to obtain an insight into the variety of possible shapes at small relative volumes, we will focus to the discoid shapes calculated at the relative volume shape c, . Typicald and e) and lateral segregation on the standard ADE shape (shape c).We start by examining the effects of adhesion and curvature driven lateral segregation, as these two models are straightforward extensions of the standard homogeneous ADE model. d and e) is an increase in the membrane contact surface area. While the surface area of membrane in contact within the standard ADE model in absence of intermembrane adhesion is approximately 32% of the total membrane surface area, it reaches approximately 49% in the limit of strong adhesion. Correspondingly, an increasing adhesion makes the shape of the rim more and more round. The maximal contact surface area could increase further with a decreased relative volume. The main effect of adhesion (f and g). Weak lateral segregation effectively increases the bilayer couple effect in the membrane, i.e., increases the effective difference in lateral tensions between the membrane leaflets, described by the dimensionless parameter f in g).The effect of weak lateral segregation of membrane constituents is presented in the bottom part of see f in . A furthAccording to the third proposed scenario, the discoid shapes are stabilized by regions of stiffer membrane with a defined spontaneous curvature. The results will focus to the three parameters: the relative stiffness, the relative size and the spontaneous curvature of the stiffer membrane region. Some of the typical calculated shapes are presented in h through m) and 5A. If the stiff region is small, the membranes in the central part are in contact, just as with the homogeneous membrane. As the stiff region grows in size, the central discoid part first separates when the stiffer region occupies 46% of the total membrane, comes into contact again at 68%, only to detach once more at 85%. Finally, as the stiff membrane region grows towards 100% of the total membrane, the membrane comes back in contact and the shape approaches the standard homogeneous ADE shape. The point of contact is always in the discoid center. The impact of a very stiff membrane region in the central discoid part is presented in n through p). As the size of the stiffer region increases, the central discoid sides first separate at the discoid center (shape n) and then come into contact again away from the center (shape p). In other words, the curved rim forces the central discoid part to undulate. 
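The adhesion results above are reported as fractions of the total membrane surface area that lie in the flattened, adhered central part (roughly 32% for the plain ADE shape, approaching 49% in the strong-adhesion limit). The sketch below illustrates how such an area fraction can be evaluated from a calculated axisymmetric contour; the separation cutoff and the input format (meridian coordinates of the upper half of an up-down symmetric shape) are assumptions for illustration, not part of the published procedure.

```python
import numpy as np

def contact_fraction(r, z, tol=1e-2):
    """Fraction of total membrane area in the adhered central region.

    r, z : coordinates of the upper meridian of an axisymmetric, up-down
           symmetric shape (z >= 0), sampled along arclength.
    tol  : separation below which the two apposed membranes are counted as
           being in contact (an assumed cutoff, in the same reduced units).
    """
    r, z = np.asarray(r, float), np.asarray(z, float)
    ds = np.hypot(np.diff(r), np.diff(z))               # meridian arclength elements
    r_mid = 0.5 * (r[:-1] + r[1:])                      # radius at segment midpoints
    dA = 2.0 * np.pi * r_mid * ds                       # area of each surface strip
    in_contact = (z[:-1] + z[1:]) < tol                 # upper-lower separation 2*z below cutoff
    return dA[in_contact].sum() / dA.sum()
```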
Interestingly, The effects of a stiffer membrane region with a large spontaneous curvature supporting the discoid rim are presented in Finally, The aim of this work was to theoretically analyze three scenarios for membrane shaping in discoid intracellular compartments at small relative volumes c in The first conclusion of the theoretical analysis is that all three scenarios of membrane shaping can lead to qualitatively similar shapes with a flattened central part and a drop-like cross-section at the rim, a shape comparable to the shape theoretically associated with simple homogeneous membrane (shape e in e) corresponds to an adhesion strength of approximately 10000 kT per On the other hand, at certain values of model parameters the three scenarios can also yield different shapes. In these cases, the shape of the compartment can in fact indicate its underlying shaping mechanism. For example, with an increasing adhesion between the membranes in the central part, the rim becomes more and more round , and in the limit of strong adhesion, the lumen of the compartment would become spherical with all the excess membrane adhered and wrapped up. In addition, one can expect small energy differences between different shapes at small relative volumes The shape of FVs in umbrella cells can be best studied by transmission electron microscopy. We aimed to prepare mouse urothelium in a way that preserved the ultrastructure closest to its native state. Therefore we applied high pressure freezing for tissue fixation, which immobilized cellular structures within a few milliseconds s and t in r in The shapes of FVs with a small relative volume Click here for additional data file."} +{"text": "Tunnels are access paths connecting the interior of molecular systems with the surrounding environment. The presence of tunnels in proteins influences their reactivity, as they determine the nature and intensity of the interaction that these proteins can take part in. A few examples of systems whose function relies on tunnels include transmembrane proteins involved in small molecule transport and signal transduction, peptide exit channels through which ribosomes release newly synthesized proteins during transcription Knowledge of the location and characteristics of protein tunnels can find immediate applications in rational drug design, protein engineering, enzymology etc.Identification and characterization of tunnels has been the focus of several studies, and various algorithms and software tools have been developed for these purposes -4. TheseIn the presented study we perform a benchmarking study of the most known approaches and software tools for finding tunnels in proteins . We focused on proteins from the cytochrome P450 family, which are very important from the biological point of view. We provide a critical discussion of the strong and weak points of the analyzed approaches and software tools."} +{"text": "In non-human primates a scheme for the organization of the auditory cortex is frequently used to localize auditory processes. The scheme allows a common basis for comparison of functional organization across non-human primate species. However, although a body of functional and structural data in non-human primates supports an accepted scheme of nearly a dozen neighboring functional areas, can this scheme be directly applied to humans? 
Attempts to expand the scheme of auditory cortical fields in humans have been severely hampered by a recent controversy about the organization of tonotopic maps in humans, centered on two different models with radically different organization. We point out observations that reconcile the previous models and suggest a distinct model in which the human cortical organization is much more like that of other primates. This unified framework allows a more robust and detailed comparison of auditory cortex organization across primate species including humans. One of the oldest and best characterized organizational features in the auditory system is its cochleotopic or tonotopic organization. Tonotopy is the ordered representation of sound frequency in auditory areas. It has been shown at all levels of the auditory pathway including the cochlea, the auditory brainstem nuclei, and the auditory cortex in at least mammals and birds. In the cortex of non-human primates, multiple areas can be defined neurophysiologically by gradients of neuronal sound frequency preference and by reversals of the frequency gradient between neighboring auditory cortical areas areas along the length of Heschl's gyrus and fMRI did not provide the resolution and power for detailed tonotopic maps in the auditory cortex. However, a number of PET and fMRI studies consistently demonstrated significantly activated clusters or voxels responding to high frequency tones in the vicinity of the medial HG and to low frequency tones in the lateral HG T1 mapping and gradient quantification.The goal of this work is a robust scheme for the definition of functional areas in humans that might in future properly justify the application of primate nomenclature to human studies and allow the development of better-defined primate models for human auditory cognition. Here we suggested a unified primate model of core and belt fields which provides testable hypotheses for future functional and anatomical comparative studies in primates.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Post traumatic osteonecrosis of distal pole of scaphoid is very rare. We present a case of 34 years old male, drill operator by occupation with nontraumatic osteonecrosis of distal pole of the scaphoid. The patient was managed conservatively and was kept under regular follow-up every three months. The patient was also asked to change his profession. Two years later, the patient had no pain and had mild restriction of wrist movements (less than 15 degrees in either direction). The radiographs revealed normal density of the scaphoid suggesting revascularization. Osteonecrosis is one of the common complications of a scaphoid fracture. Its incidence has been reported to be ~10 \u2013 15%,1246We present here a case of nontraumatic osteonecrosis of the distal pole of the scaphoid in a 34-year-old drill operator.A 34-year-old male patient, a drill operator by occupation, presented to us with complaints of pain in the right wrist. The radiographs were not suggestive of any fracture or pathology and the patient was managed with analgesics for four weeks. The patient, however, complained of persistent pain. Four weeks later, the patient revisited us with complaints of persistent pain, along with restriction of activity. 
The patient was reinvestigated radiologically including radiographs and magnTwo years later, the patient had no pain and had mild restriction of wrist movements (less than 15 degrees in either direction). The radiographs revealed normal density of the scaphoid suggesting revascularization .Although osteonecrosis of the whole scaphoid can be seen in the absence of trauma (Preiser\u2019s disease), osteonecrosis of the distal pole of scaphoid in the absence of trauma has not been reported to date. Osteonecrosis of the distal pole in our case may have occurred as a result of repeated microtrauma, leading to damage of all the dorsal vessels entering the bone distal to the waist. Cumulative micro trauma has been incriminated as a cause of osteonecrosis of other carpal bones such as lunate (Kienbock\u2019s disease).7et al.213Vibration exposure is another recognized cause of osteonecrosis of carpal bones.71A fracture running proximal to the insertion of the dorsal ridge vessels, or one which damages these vessels predisposes the vascularity of the proximal fragment. The site of entry of the dorsal vessels is variable \u2014 they enter distal to the waist in 14%, at the waist in 79%, and proximal in 7%.45Avascular necrosis of the scaphoid has been reported in patients with collagen vascular disease.Osteonecrosis of the distal pole in our case probably occurred as a result of repeated microtrauma leading to a damage of all the dorsal vessels entering the bone proximal to the waist. A change of profession and conservative management has given good results in our case."} +{"text": "Dynamic drop test for studying the temporal lowering of hydrophobicity on the surface of silicone rubber with direct current voltage application was carried out. In this study, we evaluated the influence of the temporal lowering of hydrophobicity under various conductivities and dropping rates for water droplets. As a result, it was found that the dropping rate and the conductivity of water droplets greatly influenced the hydrophobicity loss time on the surface of silicone rubber. The development of insulating materials used in electric-powered apparatuses plays an essential role in a stable power supply. Polymer materials, for example, have attracted attention in recent years. The use of polymer materials for housings of insulators and arresters has increased. The widely accepted polymer materials for housings are silicone rubber [SiR] and ethylene vinyl acetate. These polymer materials have some excellent properties such as being lightweight, hydrophobicity, and antiweatherability . AdditioIncidentally, electric-powered apparatuses are utilized not only by alternating current [AC] voltage applications, but also by direct current [DC] voltage applications. The polymer material is made from organic matters; therefore, the aged deterioration due to electric discharges and acidic products is worrying. Studies on evaluation methods of insulation deterioration of polymer materials are continuously made by the International Council on Large Electric Systems [CIGRE]. A dynamic drop test [DDT] which can easily evaluate the characteristics of temporal lowering of hydrophobicity on the surface of the polymer material was proposed through a continuous research effort . Here, iIn this study, we carried out DDT for evaluating the temporal lowering of hydrophobicity on the surface of silicone rubber with DC voltage application. 
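In the dynamic drop test described above, the central quantity is the hydrophobicity loss time under a given electrolyte conductivity and dropping rate. Purely as an illustration of how such a loss time could be extracted from a time series of wettability measurements (for instance contact angles estimated from the CCD images), a minimal sketch follows; the 90-degree cutoff and the data format are assumptions, not values taken from the study.

```python
import numpy as np

def hydrophobicity_loss_time(times_s, contact_angles_deg, threshold_deg=90.0):
    """Return the first time at which the measured contact angle falls below the
    chosen threshold, taken here as the moment hydrophobicity is lost.
    times_s, contact_angles_deg : 1-D sequences ordered in time.
    threshold_deg : assumed cutoff separating hydrophobic from hydrophilic behaviour."""
    t = np.asarray(times_s, float)
    a = np.asarray(contact_angles_deg, float)
    below = np.where(a < threshold_deg)[0]
    return t[below[0]] if below.size else None      # None if hydrophobicity is never lost

# Example with made-up readings: the angle decays as voltage application proceeds.
print(hydrophobicity_loss_time([0, 600, 1200, 1800], [105, 98, 86, 70]))  # -> 1200.0
```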
Our results have contributed to the evaluation of polymer materials' reliability with DC voltage application.The temporal decrease of hydrophobicity on the surface of the polymer material for the DDT is evaluated by dropping water in small amounts under DC voltage application in DDT. Table We evaluated the influence of the temporal lowering of hydrophobicity under various conductivities and dropping rates. The changes of hydrophobicity and discharges generated at the surface of the test sample were observed by a CCD camera and shown in Figure Figure Figure Figure Thus, due to the influence of the high electric field, the increase of dropping rate was confirmed after impressed voltage was applied. Such increase of droplets is remarkable with the increase of the initial dropping rate of electrolyte. The increase of droplets promoted the lowering of hydrophobicity, and the hydrophobicity loss time became shorter. The surface of the test sample has a greater opportunity to attach NaCl and keeps electrification on the surface of SiR with the increase of the dropping rate.We carried out DDT in investigating the temporal lowering phenomena of hydrophobicity of the SiR surface with DC voltage application. Here, the influence of the temporal lowering of hydrophobicity under various conductivities and dropping rates was evaluated. With the progress of the lowering of hydrophobicity, small discharges on the surface of SiR could be seen. Additionally, in the final stage, the hydrophobicity lowered, and an obvious water channel was confirmed. Such hydrophobicity loss was influenced by the conductivity and dropping rate of the electrolyte.The authors declare that they have no competing interests.TS and TA conceived the experiments. YS performed the experiments and analyzed the data together with NO and TM. TS and TA provided valuable advice. TS and TA co-wrote the paper. All authors discussed the results and commented on the manuscript."} +{"text": "Hepatocellular carcinomas (HCCs) have different etiology and heterogenic genomic alterations lead to high complexity. The molecular features of HCC have largely been studied by gene expression and proteome profiling focusing on the correlations between the expression of specific markers and clinical data. Integration of the increasing amounts of data in databases has facilitated the link of genomic and proteomic profiles of HCC to disease state and clinical outcome. Despite the current knowledge, specific molecular markers remain to be identified and new strategies are required to establish novel-targeted therapies. In the last years, mathematical models reconstructing gene and protein networks based on experimental data of HCC have been developed providing powerful tools to predict candidate interactions and potential targets for therapy. Furthermore, the combination of dynamic and logical mathematical models with quantitative data allows detailed mechanistic insights into system properties. To address effects at the organ level, mathematical models reconstructing the three-dimensional organization of liver lobules were developed. In the future, integration of different modeling approaches capturing the effects at the cellular up to the organ level is required to address the complex properties of HCC and to enable the discovery of new targets for HCC prevention or treatment. Hepatocellular carcinoma (HCC) represents one of the most frequent cancers with the highest incidence in developing countries. 
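The abstract above notes that dynamic and logical mathematical models, combined with quantitative data, can give mechanistic insight into HCC signalling networks. As a purely illustrative example of the logical-model idea, a Boolean network can be updated synchronously until it reaches an attractor; the node names, rules and wiring below are hypothetical and are not taken from any of the studies reviewed.

```python
# Minimal synchronous Boolean network: a state is a dict of node -> 0/1,
# and each rule computes a node's next value from the previous state.
rules = {
    "GrowthSignal":  lambda s: s["GrowthSignal"],                 # treated as a fixed input
    "RAS":           lambda s: s["GrowthSignal"],
    "ERK":           lambda s: s["RAS"],
    "p53":           lambda s: 1 - s["ERK"],                      # toy inhibition
    "Proliferation": lambda s: s["ERK"] and not s["p53"],
}

def step(state):
    return {node: int(rule(state)) for node, rule in rules.items()}

def run_to_attractor(state, max_steps=50):
    seen = []
    for _ in range(max_steps):
        if state in seen:                  # fixed point or cycle reached
            return state
        seen.append(state)
        state = step(state)
    return state

print(run_to_attractor({"GrowthSignal": 1, "RAS": 0, "ERK": 0, "p53": 1, "Proliferation": 0}))
```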
Due to its aggressiveness it is third in causing cancer-related deaths worldwide , vascular invasion .A functional genomic study was performed applying siRNA to identify tumor suppressor genes in a mosaic mouse model is a database for genomic changes of several cancer types, including 99 HCC samples and their normal liver tissues. TCGA includes gene and miRNA expression data and the epigenetic DNA methylation status.The integrated Clinical Omics Database (iCOD) collects all available information of 140 cases of HCC, ranging from gene expression profiles to relevant clinical data that contains links of gene expression to specific liver diseases model of liver lobule was developed (Hoehme et al., Taken together, these studies show that even for complex situation such as HCC, systems properties can be addressed by combining experimental data with mathematical modeling.HCC represents a particularly complex disease and the integration of all features of HCC including the tumor stage, the etiology, the mutational status, the response to therapy, and tumor recurrence are required to better understand its development and to design a most efficient treatment. There is an urgent need for molecular markers specific for HCC to facilitate early diagnosis in order to improve the prognosis after treatment. Systems-wide studies begin to show evidence for HCC classifiers and for the impact of alterations in hub genes for these classifiers. Additionally, first steps have been taken to provide a deeper understanding of dynamic properties of signaling networks in the liver. A summary of the reviewed results is given in Table Future developments require the integration of data at different scales, connecting the genomic information to the signaling regulation and finally to tumor behavior. To this aim, model integration linking intracellular events to responses at the organ level is essential. The major challenge is to develop mathematical formalisms allowing connecting events occurring at different time scales. In conclusion, the combination of clinical and experimental data with mathematical modeling promises to provide means to handle the complexity that is characteristic for HCC and to facilitate the development of personalized targeted therapy.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "The following article appeared on page 372 of the July 2012 issue of the Annals. Unfortunately the images that accompanied the tip were incorrect. We reproduce the image below with the correct images displayed. The Editor apologises for any confusion caused.The development of pancreatic necrosis is a significant complication of acute pancreatitis and can result in progressive multiple organ failure and death. Recently, in an attempt to reduce the high morbidity and mortality from open necrosectomy, minimal access techniques have been developed.2 pressure (8mmHg) permits visualisation of the retroperitoneum, and standard laparoscopic graspers and a suction device can be placed through additional port sites in the unit to allow removal of necrotic tissue can be used to gain retroperitoneal access and alloc tissue . Post-opThe technique of minimally invasive necrosectomy has been well described previously"} +{"text": "Use of thrombolytic therapy in pulmonary embolism is restricted in cases of massive embolism. 
It achieves faster lysis of the thrombus than the conventional heparin therapy thus reducing the morbidity and mortality associated with PE. The compartment syndrome is a well-documented, potentially lethal complication of thrombolytic therapy and known to occur in the limbs involved for vascular lines or venepunctures. The compartment syndrome in a conscious and well-oriented patient is mainly diagnosed on clinical ground with its classical signs and symptoms like disproportionate pain, tense swollen limb and pain on passive stretch. However these findings may not be appropriately assessed in an unconscious patient and therefore the clinicians should have high index of suspicion in a patient with an acutely swollen tense limb. In such scenarios a prompt orthopaedic opinion should be considered. In this report, we present a case of acute compartment syndrome of the right forearm in a 78 years old male patient following repeated attempts to secure an arterial line for initiating the thrombolytic therapy for the management of massive pulmonary embolism. The patient underwent urgent surgical decompression of the forearm compartments and thus managed to save his limb. Acute massive pulmonary embolism (PE) is an uncommon clinical entity but carries an exceptionally high mortality. A rapid diagnosis of massive PE is very crucial to initiate the potentially life-saving therapy. The use of thrombolysis in conjunction with standard anticoagulation in the acute phase has been shown to reduce the mortality rate in this group of patient . AccordiDue to its fatal haemorrhagic complications, the thrombolytic therapy has been strictly recommended for the patients with proven massive PE. The rate of significant bleeding has been reported to be around 22\u201345%. The bleeding most commonly occurs at the vascular catheter site, viscera, and intracranium. Although management for minor bleeding has been supportive, serious bleeding does warrant withdrawal of the thrombolytic therapy. Isolated cases of compartment syndrome after thrombolytic therapy have been reported in the literature [A 78-year-old male presented to the hospital with the history of fever and breathlessness. After initial assessment and investigations, the diagnosis of chest infection was made, and the patient was admitted to short stay unit for further management. The patient was a known case of myasthenia gravis and had history of asbestosis in the past. Whilst being on ward, the patient suddenly collapsed and required resuscitation. The anaesthetist was called to intubate the patient and was then transferred to the Intensive care Unit (ICU) for further management. After initial stabilisation, the patient was investigated to find the cause of sudden collapse. The ECG showed the right bundle branch block. The patient had raised Troponin T level, and the echocardiogram revealed well-preserved left ventricular function with reduced right ventricular function and bright mass in pulmonary artery, thus confirming the diagnosis of massive pulmonary embolism.The patient was thrombolysed using 100\u2009mg of tissue plasminogen activator (tPA) over two hrs and Heparin 1000 units per mL at 1\u2009mL/hr given in separate lines. The nor-adrenaline was used to maintain mean arterial pressure of around 85. The patient had femoral arterial line after numerous failed attempts to have a right radial artery line. The patient responded very well to the thrombolytic treatment leading to stable haemodynamic condition. 
Eight hours following the initiation of thrombolytic therapy, the right forearm of the patient was noted to be very swollen and tight. The orthopaedic team was called immediately to assess the forearm in view of compartment syndrome. As patient being intubated, it was difficult to diagnose a compartment syndrome on just clinical ground, and hence a universally accepted, calibrated handheld device was used to measure the compartment pressure in the involved forearm compartments . The pre The fasciotomy of the forearm was performed by extensile Henry's approach along with decompression of carpel tunnel and abductor compartment of the hand . The mus At the end of 3-month followup, the patient had full range of movements in elbow, terminal restriction of movement in wrist and hand. He had functional muscle power in his intrinsic muscles of the hand. Apart from mild tingling in the hand, there was no sensory deficit.Acute compartment syndrome occurs as a result increased interstitial pressure in closed osseous fascial compartment thus compromising the circulation and function of the tissues within that compartment. The initial insult either in the form of trauma, internal bleeding, or ischaemia leads to swelling within the closed compartment and compromising the tissue perfusion. This sequence of event causing tissue ischaemia if not reversed in time by decompression of compartment can lead to necrosis of the tissue . In our The diagnosis of compartment syndrome is usually made on clinical suspicion when patient complains of a disproportionate pain in the involved limb and elicits the severe pain on passive stretch of the muscle within the involved compartment. It is very difficult to make this judgement in a patient with a head injury, unconscious or on artificial ventilation like our case. In such scenario, measurement of intracompartmental pressure with the calibrated device along with clinical suspicion helps to make the diagnosis of compartment syndrome . The StrMcCarthy et al. in theirIt has been well documented in literature about the complications of bleeding with thrombolytic therapy administered to patients with myocardial infarction and stroke but not much data about the patients with massive pulmonary embolism. The complication rate for bleeding due to thrombolytic therapy seems to be higher in elderly population \u201312. In mMcQueen et al. emphasizIt is imperative for medical staff working on ICU setup to be aware of the possibility of compartment syndrome in a patient on thrombolytic therapy. One has to be very careful in taking vascular access in such group of patient to avoid extravasation of blood leading to increased intracompartmental pressure. Prompt referral should be made to the orthopaedic team to assess the tense swollen limb in a patient treated with thrombolysis for massive pulmonary embolism."} +{"text": "Both in vivo and in vSpike patterns associated with cells assemblies can be identified by clustering the spectrum of zero-lag cross-correlation between all pairs of neurons in a network . Other mHere we investigate how the identification of cell assemblies is dependent on the methodology chosen, and to what extent the statistical properties of the cell assemblies make them suitable for representation of system states in the striatum during reinforcement learning."} +{"text": "The nail unit is constructed by distinctly regulated components. 
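The passage above describes identifying cell assemblies by clustering the spectrum of the zero-lag cross-correlation between all pairs of neurons. One common concrete realisation of this idea, sketched below with assumed binning and significance conventions rather than the exact pipeline of the cited studies, is to bin the spike trains, form the neuron-by-neuron correlation matrix, and treat eigenvectors whose eigenvalues exceed the Marchenko-Pastur bound for uncorrelated data as candidate assemblies:

```python
import numpy as np

def detect_assemblies(spike_counts):
    """spike_counts : array of shape (n_neurons, n_bins) with binned spike counts.
    Returns eigenvalues and eigenvectors of the zero-lag correlation matrix that
    exceed the Marchenko-Pastur upper bound, i.e. candidate cell assemblies."""
    n_neurons, n_bins = spike_counts.shape
    z = (spike_counts - spike_counts.mean(1, keepdims=True)) / spike_counts.std(1, keepdims=True)
    corr = z @ z.T / n_bins                               # zero-lag pairwise correlation matrix
    eigval, eigvec = np.linalg.eigh(corr)
    lambda_max = (1 + np.sqrt(n_neurons / n_bins)) ** 2   # Marchenko-Pastur bound for uncorrelated data
    keep = eigval > lambda_max
    return eigval[keep], eigvec[:, keep]                  # significant patterns = assemblies

rng = np.random.default_rng(0)
counts = rng.poisson(2.0, size=(50, 2000))
counts[:10] += rng.poisson(1.0, size=(1, 2000))           # ten co-active neurons form one assembly
vals, patterns = detect_assemblies(counts)
print(vals.shape, patterns.shape)                         # roughly one significant component expected
```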
The nail isthmus is a lately proposed region as a transitional zone between the most distal part of the nail bed and the hyponychium. It is difficult to recognize the nail isthmus in the normal nail, but it is easy to identify the region in nail disorders such as pterygium inversum unguis and ectopic nail. We describe structure and putative function of the nail isthmus via histopathologic features of pterygium inversum unguis and ectopic nail. The nail unit has distinct structure. The concept of the nail isthmus was recently proposed by Perrin in 2007 . The regThe nail isthmus is composed of two distinct parts. A histopathological study of the nail isthmus with a case of pterygium inversum unguis identified two substances: (i) a marked, highly eosinophilic, keratinized substance attaching the distal and visceral nail plate and (ii) a whorled, highly eosinophilic, keratinized substance into the horny layer of the finger tip Figures and 3 44. The foThe highly eosinophilic structures are identical to semihard keratin. The nail apparatus is sequentially composed of soft keratin in the proximal nail fold, semihard keratin in the cuticle, hard keratin in the nail plate from the nail matrix and the nail bed, semihard keratin in the nail isthmus, and soft keratin in the hyponychium .Immunohistochemical study of the regional keratin and filaggrin in a case of ectopic nail showed keratin 1 (K1) and K10 expression in the suprabasal layers of the nail isthmus Figures , K14 in The nail isthmus showed two regions; a proximal and narrow part and a distal and wide part. The proximal and narrow region has supposed function as an anchor for the inferior border of the nail plate. The distal and wide region produces semihard keratins possibly against repeated trauma toward the separated area between the nail plate and the hyponychium. Pterygium inversum unguis may occur after a cerebral vascular event resulted in hemi-paralysis . EctopicThe nail isthmus expresses a profile of transitional keratins and is probably constructed by two regions. One is a proximal region producing a marked, highly eosinophilic, keratinized substance attaching the distal and ventral nail plate. The region produces a peculiar and thin compartment of pale and nucleated corneocytes via the granular layer and probably maintains the longitudinal ridge pattern of the nail bed. Another is the distal region producing a whorled, highly eosinophilic, keratinized substance and may protect the binding between the nail plate and the proximal nail isthmus from repeated trauma. The nail isthmus is one of the distinctly regulated regions of the nail apparatus. A recent proposal of the nail isthmus brings us to reevaluate the pathogenesis of the nail disorders. In the future, further study will elucidate more precise structure and function of the nail isthmus."} +{"text": "Protein-protein interaction (PPI) network analysis presents an essential role in understanding the functional relationship among proteins in a living biological system. Despite the success of current approaches for understanding the PPI network, the large fraction of missing and spurious PPIs and a low coverage of complete PPI network are the sources of major concern. In this paper, based on the diffusion process, we propose a new concept of global geometric affinity and an accompanying computational scheme to filter the uncertain PPIs, namely, reduce the spurious PPIs and recover the missing PPIs in the network. 
The main concept defines a diffusion process in which all proteins simultaneously participate to define a similarity metric ) to robustly reflect the internal connectivity among proteins. The robustness of the GGA is attributed to propagating the local connectivity to a global representation of similarity among proteins in a diffusion process. The propagation process is extremely fast as only simple matrix products are required in this computation process and thus our method is geared toward applications in high-throughput PPI networks. Furthermore, we proposed two new approaches that determine the optimal geometric scale of the PPI network and the optimal threshold for assigning the PPI from the GGA matrix. Our approach is tested with three protein-protein interaction networks and performs well with significant random noises of deletions and insertions in true PPIs. Our approach has the potential to benefit biological experiments, to better characterize network data sets, and to drive new discoveries. Current development in high-throughput measurement techniques such as tandem affinity purification, two-hybrid assays, and mass spectrometry have resulted in vast amounts of pertinent elements and the biological networks of their interactions As measurements cover different aspects of a biological systems with different characteristics of the PPI network Similar to the method in In addition, we are aware of the relationship between the GGA-method and Markov clustering (MCL) In The computation of a global geometric metric from the local metric by geometric embedding operators has been recently established in machine learning research field The diffusion metric revealed in a diffusion propagation process reflects the intrinsic and geometric relationship among data points in the embedded diffusion space. In practice, the diffusion metric is in a form of geometric distance (dissimilarity), called diffusion distance, which is represented by the Euclidean distance in the embedded space As the mapping function We introduce a new definition of geometric metric, called global geometric affinity (GGA), to overcome the weakness of current geometric based methods for PPI network. Different from the GGD, the GGA is in form of affinity (similarity), which is computed by the dot product of a pair of high-dimensional vectors (correlation coefficient). The Initialization of the weight matrix:Propagation process:The GGA reflects the internal affinity (geometric similarity) between node The parameter, The assumption of the correctness of geometry based approaches in analysis of PPI network is that a pair of proteins will be assigned an interaction if they are close in an embedded space whereas NPPIs correspond to points that are further away in that space In the original PPI network, we denote True positive (TP) functionFalse positive (FP) functionTrue negative (TN) functionFalse negative (FN) functionMatch functionGiven a threshold, the TP function measures the intersection between the new assigned PPIs set and the ground truth PPIs set, FP denotes the assigned edges which are not in the set of ground truth, TN denotes the intersection of new assigned NPPIs and ground truth of PPNs, and FN denotes new assigned PPIs in the ground truth of PPIs set. The match function evaluates how well the new assigned PPI network match with original local PPI network. 
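The initialization and propagation equations referred to above are not reproduced in this extract, so the sketch below should be read only as one plausible realisation of the described idea: start from the degree-normalised adjacency matrix of the observed PPI network, propagate it for t steps by repeated matrix products, and take the affinity between two proteins as the dot product of their propagated profiles. The normalisation, the choice of t and the function name are assumptions for illustration.

```python
import numpy as np

def global_geometric_affinity(adj, t=3):
    """adj : symmetric 0/1 adjacency matrix of the observed PPI network.
    Returns a matrix of global affinities obtained by diffusing the local
    connectivity for t steps and correlating the resulting node profiles."""
    adj = np.asarray(adj, float)
    deg = adj.sum(1)
    p = adj / np.where(deg > 0, deg, 1)[:, None]     # row-normalised transition matrix
    m = np.linalg.matrix_power(p, t)                 # t-step propagation: only matrix products
    m = m / (np.linalg.norm(m, axis=1, keepdims=True) + 1e-12)
    return m @ m.T                                   # dot products of the diffused profiles

adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]])
print(np.round(global_geometric_affinity(adj, t=2), 2))
```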
From the mathematical point of view, we are about to solve the following optimization problem to find the optimal threshold value.The algorithm for solving the optimization problem is outlined below and We vary the threshold from minimum to maximum found in GGA matrix among all pairs of proteins.For a given threshold t, we compute true positive (TP) function, true negative (TN) function, false positive (FP) function and false negative (FN) function.Based on the values obtained from the previous step, we compute match function.The optimal threshold is the one with maximum match function value ) and specificity rate (TN/(TN+FP)), precise (TP/(TP+FP)) and recall (TP/(TP+FN)). To plot the ROC curve, the horizontal axis represents (1 - specificity), and the vertical axis represents sensitivity. To plot the PR curve, the horizontal axis represents recall, and the vertical axis represents precision.The ROC curves are shown in the Following the experimental setting in In this experiment, we demonstrate the performance of GGA-method in prediction of missing PPI and identification of spurious PPI in the noisy network. For an incomplete observed PPI network, we determine GGA for each pair of proteins in the network. We are interested in the pairs of proteins that have high GGA but are not connected in the observed network, and the pairs of proteins that have low GGA but are connected in the observed network. The first type of pairs of proteins are most likely candidates for missing PPIs, and the second type of pairs are most likely candidates for spurious PPIs. Our method is compared to MDS-methods in We assess the performance of our method from two perspectives according to the tests on three PPI networks . First, we want to compare the performance in identification of spurious PPI using our method with that of MDS-method. We evaluate the comparison by gradually increase the insertions of the false PPI and attempt to identify those links using the topology information remaining in the network. Second, we want to compare the performance in predictions of missing PPI using our method with that of MDS-method. We evaluate the comparison by gradually increase the deletions of the true PPI and attempt to predict them using the topology information remaining in the network. The comparison result is displayed in the We explain why MDS-method performs worse than GGA-method against insertion and deletion noise. The metric revealed by MDS is based on the shortest path traveled from one protein to another protein in the network To demonstrate the computation efficiency of GGA-method, we compare the computation time for both methods. The The Optimal Local Fitting algorithm (OLF) is used to determine the threshold for the PPI assignment from the GGA matrix. The test result is shown in the In the first test, we randomly insert the a certain amount of PPIs (Ins Ratio is In the first test, we randomly insert the a certain amount of PPIs (Ins Ratio is We analyze the results in The result for the first test is presented in the first three rows (labeled The result for the second test is presented in the second three rows (labeled The limitations of the current high-throughput measurements techniques inherently give rise to a large amount of spurious and missing PPIs. To clean the network, people often try to integrate multiple data sources, such as gene expression arrays and proteomics to improve the quality of PPIs in a network. 
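The threshold optimisation described earlier in this passage sweeps candidate thresholds over the affinity values and keeps the one whose implied network best matches the observed local PPI network. Since the closed form of the match function is not reproduced here, the sketch below uses overall agreement (TP plus TN over all protein pairs) as a stand-in criterion; that choice, and the input conventions, are assumptions.

```python
import numpy as np

def optimal_threshold(gga, observed_adj, n_grid=100):
    """Sweep thresholds over the GGA values and return the one whose implied
    interaction network best matches the observed adjacency matrix.
    gga, observed_adj : symmetric (n, n) arrays; observed_adj contains 0/1."""
    iu = np.triu_indices_from(gga, k=1)              # consider each protein pair once
    scores, truth = gga[iu], observed_adj[iu].astype(bool)
    best_t, best_match = None, -1.0
    for t in np.linspace(scores.min(), scores.max(), n_grid):
        pred = scores >= t
        tp = np.sum(pred & truth)
        tn = np.sum(~pred & ~truth)
        match = (tp + tn) / truth.size               # assumed stand-in for the match function
        if match > best_match:
            best_t, best_match = t, match
    return best_t, best_match
```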
Recently, geometric based approaches, which are only based on the topology of the PPI network, are very promising as those approaches are independent from other prior knowledge except for topology of the PPI network. However, the large amount of noisy PPIs poses a great challenge to the geometric based computational approaches. Robust geometric structural understanding methods are the prerequisite for capturing the intrinsic geometric structure which is hidden behind the noisy PPI network data.Biological data, like the PPI data, are often observed in an incomplete manner with high noise. Any method, if simply based on the metric of a small number of local PPI network, is likely to be overwhelmed by the noise and incompleteness. It is of great importance to place the data in a statistical model and take into account all the pieces of local information simultaneously, in order to generate the knowledge behind the overall global structure of the data. Globally consistent metric, like the global geometric affinity proposed in this work, measure the relationship considering the optimal arrangement of all the data samples. Therefore, even if the local incomplete and noisy pairs of PPIs are not able to reveal the internal global structure, given sufficient samples of PPIs and considering the entire set of pair-wise linkages simultaneously, our GGA-method is able to reveal the intrinsic metric hidden in the very noisy and incomplete PPI data. The excellent robustness against noise is highlighted by its good performance at a large number of insertions and deletions introduced.Biological experimental measurements are usually time consuming and costly. The computational approaches are proposed to benefit biological experiments and to better characterize network data sets by minimizing the use of the prior knowledge from biological experiments. In recent years, the geometric features of PPI network have been proven to provide new insights into the function of individual proteins, protein complexes and cellular machinery as a complex system. These approaches take advantage of the geometric characteristics behind the PPI network, which enables to evaluate the relationship among the protein-protein interactions and analyze the network characteristics. These approaches based only on the geometric topology formed by connections among proteins and has been recently developed to de-noise the observed PPIs network. The common hypothesis for these methods is that the existence of the geometric structure for PPI networks and the topology knowledge is crucial in determining the PPIs in the network The efficiency of a computational approach for most biological problems is vital in real applications due to the high-throughput nature of the data. The existing geometry based methods for the de-noising the PPI network, for example, MDS-method, are computationally expensive and even intractable for large and incomplete PPI networks. This is because these methods include the eigen-decomposition to compute the explicit embedding coordinates and then compute the global geometric distance. In our method, we completely decouple the global metric from the eigen-decomposition problem by proposing the GGA. The eigen-decomposition-free method give our method a distinct advantage that we can totally get rid of the issues caused by the eigendecomposition. Therefore, our proposed method is able to apply on the sparse and huge size PPI network and finish the de-noising in an efficient way. 
These virtues account for the superiority of our proposed in the real applications for de-noising PPI networks.This paper presents our first implementation, with very promising results in the completed tests. However, the noise properties in raw PPI data can be different from the simulated random deletions and insertions used in existing tests. The performance of our method and its general applicability in de-noising a PPI network generally confirm the robustness of our methods but still need further work to improve by testing more real challenging PPI data. The parameter of propagation step plays a critical role in looking through the geometric structure from multiple scales. Although we provide a probability based algorithm to determine the optimal parameter, we have not given a rigorous proof. In our future work, we will come up a good strategy based on this parameter to investigate the raw PPI network at different level of details. The OLF algorithm is proposed to numerically determine the optimal threshold without a closed form solution or proof to that optimization problem. Furthermore, GGA-method is a general method and applicable to a wide range of problem domains, for example, the reconstruction of the air transportation network."} +{"text": "Cardiac surgeries are sometimes followed by significant blood loss and transfusions may be necessary. However, indiscriminate use of blood components can result in detrimental effects for the patient. In this study, we evaluated the short-term effects of the implementation of a protocol for the rational use of blood products in the postoperative period of cardiac surgery.P <0.05 was considered statistically significant.Between April and June 2011 an institutional protocol was implemented in a private hospital specialized in cardiology to encourage rational use of blood products with the consent and collaboration of seven cardiac surgery teams. Clinical and demographic data of patients were collected, and the use of blood products and clinical outcomes during in-hospital period 6 months before and after implementation of the protocol were analyzed. The protocol consisted of an institutional campaign with educational intervention in the surgical, intensive care and anesthesiology teams aiming to spread the practice of blood transfusion based on clinical goals , as well as making routine prescription of epsilon aminocaproic acid (EACA) intraoperatively. Comparisons between categorical variables were performed with the chi-square test and P <0.001). Clinical outcomes related to blood transfusion are presented in Table After 3 months of implementation of the protocol, the use of EACA rose from 31 to 100%. The surgeries requiring any blood transfusion were 67% before the implementation of the protocol, and 40% in the subsequent months of the same year after implantation (The rational use of blood products associated with infusion of \u03b5-aminocaproic acid has the potential to reduce the number of blood transfusions in postoperative of cardiac surgery, which can impact the risk of complications."} +{"text": "The role of the amygdala in regulating emotional neural processing has been well-acknowledged by both animal and clinical studies LeDoux, , particuThe role of the lateral amygdala in the retrieval and maintenance of fear-memories formed by probabilistic reinforcement\u201d published in Frontiers in Behavioral Neuroscience, Erlich and colleagues (In an interesting article \u201clleagues provide lleagues and clinlleagues . 
HoweverThe main outcome of this study is the demonstration that lateral amygdala activity is essential to the expression of fear-behavior in probabilistic paradigms and that CCK2 receptor activation may lead to impaired recovery from fearful memories. This is the first demonstration of this role and sheds light on new therapeutic targets in the modulation of the fear response in a post-encounter approach.Yet, the most remarkable finding reported by Erlich and colleagues is the role of lateral amygdala in the encoding of uncertain information using a probabilistic presentation of the aversive stimulus. It has been well-characterized that the lateral amygdala has a paramount role and is the neuroanatomical substrate (Phelps and LeDoux, In conclusion, the findings reported by Erlich and colleagues represent an appealing topic of investigation from both behavioral and pharmaceutical points of view in that it provides a rationale of developing in the future chemical compounds that, by manipulating lateral amygdala function in particular through the modulation of CCK2 receptors, may ameliorate negative and aversive emotional states."} +{"text": "Vascular variations of the penis are very rare. Awareness of its variations is of utmost importance to the urologists and radiologist dealing with the reconstruction or transplants of penis, erectile dysfunctions, and priapism. We report an extremely rare variation of the artery of the penis and discuss its clinical importance. The artery of the penis arose from a common arterial trunk from the left internal iliac artery. The common trunk also gave origin to the obturator and inferior vesical arteries. The artery of the penis coursed forward in the pelvis above the pelvic diaphragm and divided into deep and dorsal arteries of the penis just below the pubic symphysis. The internal pudendal artery was small and supplied the anal canal and musculature of the perineum. It also gave an artery to the bulb of the penis. The artery of the penis is the distal continuation of the internal pudendal artery after the origin of its perineal branch. It runs anteriorly below or above the inferior fascia of urogenital diaphragm to reach the area just below the inferior pubic ligament, where it terminates by dividing into deep and dorsal arteries of the penis . Artery During dissection classes for undergraduate medical students, a rare variation in the origin and course of the artery of the penis was noted. The variation was found in an adult male cadaver aged approximately 70 years. The left internal iliac artery did not divide into anterior and posterior divisions. The main trunk of the internal iliac artery gave iliolumbar, lateral sacral, superior gluteal, middle rectal, and superior vesical arteries. In addition to these arteries two common trunks arose from it. The first common trunk bifurcated into inferior gluteal and internal pudendal arteries, whereas the second common trunk gave two inferior vesical arteries, obturator artery, and the artery of penis . The artA detailed knowledge of origin, course, and distribution of the vessels of the penis is essential during the planning, management, or surgical treatment of erectile dysfunctions and trauma of the penis. Internal iliac artery embolization is standard selective technique in arresting the bleeding from the penis and perineal region. 
Instead of this procedure, a superselective embolization of the internal pudendal artery with a stainless steel mini coil can be done to preserve the blood supply from the uninjured branches of the internal iliac artery . HoweverTraumatic laceration of penile arteries may result in high flow priapism caused by a pathologically increased arterial flow to the cavernous bodies. Clinically, it is identified as a persistent painless erection with a flaccid glans that results within hours or days after blunt perineal trauma \u20139. PatieAs per our knowledge, this is the first report on combined origin of artery of the penis, obturator artery, and inferior vesical arteries from a common trunk above the pelvic diaphragm. What makes the case unique is the forward course of artery of the penis in relation to the pelvic surface of the levator ani muscle, prostate, and puboprostatic ligaments. Its iatrogenic injuries are possible in any procedure involving prostate and bladder. The artery might get injured in pelvic fractures as well. Hence, this report might be useful for the radiologists, orthopaedic surgeons, and urologists working in and around the pelvic cavity."} +{"text": "Since then, antibiotics have represented virtually the only effective treatment option for bacterial infections. However, their efficacy has been seriously compromised by over-use and misuse of these drugs, which have led to the emergence of bacteria that are resistant to many commonly used antibiotics. Bacteria present three general categories of antibiotic resistance: acquired, intrinsic and adaptive and/or peptidoglycan biosynthesis, and act synergistically with the antibiotic. Authors conclude that cell shape alterations likely disturb the influx/efflux machinery of Gram-negative bacteria and thereby enable the accumulation of otherwise excluded antibiotics. This finding provides an attractive strategy to combat the intrinsic antibiotic resistance of Gram-negative bacteria and can aid the development of new therapies that enhance the activity of existing antibiotics against them.The combination of these antibiotic resistance mechanisms has led to the emergence of multidrug-resistant pathogens, which are a serious threat for medical care. Among other strategies, the discovery or development of new antibiotic agents had been thought to be a solution to overcome the deficiencies of the existing ones. However, development and marketing approval of new antibiotics have not kept pace with the increasing public health threat of bacterial drug resistance. An alternative to the development of new antibiotics is to find potentiators of the already existing ones, a less expensive alternative to the problem inhibition of antibiotic resistance elements; (ii) enhancement of the uptake of the antibiotic through the bacterial membrane; (iii) direct blocking of efflux pumps; and (iv) changing the physiology of resistant cells (Kalan and Wright, et\u2009al., et\u2009al., et\u2009al., In conclusion, the use of antibiotic adjuvants has two beneficial outcomes: enhancement of the antimicrobial effect and reduction of the occurrence of mutations that result in resistance. In this context, efforts to find such molecules should be intensified. Since environmental organisms are the source of most resistance genes and antibiotics (D'Costa None declared."} +{"text": "Although there are significant numbers of people displaced by war in Africa, very little is known about long-term changes in the fertility of refugees. 
Refugees of the Mozambican civil war (1977\u20131992) settled in many neighbouring countries, including South Africa. A large number of Mozambican refugees settled within the Agincourt sub-district, underpinned by a Health and Socio-demographic Surveillance Site (AHDSS), established in 1992, and have remained there. The AHDSS data provide a unique opportunity to study changes in fertility over time and the role that the fertility of self-settled refugee populations plays in the overall fertility level of the host community, a highly relevant factor in many areas of sub-Saharan Africa.To examine the change in fertility of former Mozambican self-settled refugees over a period of 16 years and to compare the overall fertility and fertility patterns of Mozambicans to host South Africans.Prospective data from the AHDSS on births from 1993 to 2009 were used to compare fertility trends and patterns and to examine socio-economic factors that may be associated with fertility change.There has been a sharp decline in fertility in the Mozambican population and convergence in fertility patterns of Mozambican and local South African women. The convergence of fertility patterns coincides with a convergence in other socio-economic factors.The fertility of Mozambicans has decreased significantly and Mozambicans are adopting the childbearing patterns of South African women. The decline in Mozambican fertility has occurred alongside socio-economic gains. There remains, however, high unemployment and endemic poverty in the area and fertility is not likely to decrease further without increased delivery of family planning to adolescents and increased education and job opportunities for women. Africa is home to about a fifth of the world's refugees, most of whom have been victims of forced migration .\u2021 HoweveWar and resettlement can place both upward and downward pressure on fertility in the short term. Upward pressure may come from the desire to replace those lost in war, while downward pressure on fertility may come from the disruption of life and relationships caused by war . StudiesHowever, most studies on refugee fertility are conducted in refugee camps and the situation may differ for refugees not living in camps. Populations that settle in host countries without residing in camps are likely to be different from those in refugee camps since they are not served directly by aid programs. Many studies of the fertility of self-settled refugees exist in developed countries with vital registration systems. However, studies of self-settled refugee populations in Africa where vital registration systems are lacking are rare. Prospective data from the Agincourt sub-district in Mpumalanga Province in rural northeast South Africa provide an opportunity to examine the change in fertility of self-settled Mozambican refugees over a period of 16 years (1993\u20132009) and to examine their impact on overall fertility levels in the area. Earlier research using data from the Agincourt health and socio-demographic surveillance site (AHDSS) found that Mozambican refugees in Agincourt contributed to a noticeable increase in the average number of children borne by women in the 1980s measured retrospectively through birth histories . Subsequapartheid regime to the spectre of rapid population growth among the African population. 
The programme provided free modern contraceptives in public health clinics, including oral and injectable contraceptives , Gazankulu, where African South Africans were resettled as part of the apartheid regimes strategy of \u2018separate development\u2019 and the level of fertility measured by the TFR are used to examine fertility trends. The latter is defined as the average number of children that a woman would have by the end of her reproductive life if the current age pattern of fertility were to remain unchanged. Descriptive statistics are used to describe changes in the age pattern of fertility over time. A discrete time event framework is used to evaluate women's progression from a first to a second birth within five years and smoothed survival curves are presented. Other socio-economic trends are examined by estimating levels of employment, household wealth, and formal education.Fertility levels were quite different in the two populations during the 1990s, with Mozambican women maintaining higher fertility than South Africans. Thereafter, the two populations increasingly exhibit similar fertility levels, converging from 2000 when the confidence intervals around the fertility estimates for the two groups started overlapping. The convergence of total fertility of the two population groups is driven primarily by the decline in fertility among Mozambican women to the levels of South African women. This suggests that Mozambican women were adopting fertility behaviours similar to those of the host population. To test this hypothesis we compared age-specific fertility rates and the timing of first and second births between the two populations at the beginning and end of the observation period.However, by 2009, Panel B of The age-specific fertility rates suggest similarly high levels of adolescent fertility for Mozambican and South African women. Further analysis of the age distribution of first births for Mozambican women shows inFurther analysis also suggests lower contraceptive use by Mozambican women prior to their first birth. At the time of their first birth, Mozambican women consistently reported lower contraceptive use prior to conceiving than South African women. Five per cent of Mozambican women compared to 9.5% of South African women with first births from 1995 to 1999 reported using contraception at some time before their first birth. These figures were 23% and 28%, respectively, for first births occurring from 2003 to 2005. High adolescent fertility has been a source of concern in South Africa and so it is important to recognise the lower use of contraception before a first birth as well as the increase in the percentage of first births to adolescents for Mozambican women .Previous research on the fertility of host South Africans has shown that fertility decline for African South Africans has been driven by significant widening of birth intervals explained primarily by increases in the use of modern contraception . Wide biThe changes in age-specific fertility rates, timing of first births and extended first birth intervals indicate that Mozambican women are achieving lower fertility by adopting patterns of childbearing typical for South African women in Agincourt.To further explore the fertility decline and the convergence of fertility in the two populations, we examine select socio-economic factors that may be \u2018underlying\u2019 drivers of the decline in the TFR among the Mozambicans. 
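The total fertility rate defined above is conventionally obtained by cumulating age-specific fertility rates over the reproductive age span. The following is a purely illustrative Python sketch of that calculation; the age groups and rates are hypothetical placeholders, not the study's data.

```python
# Illustrative sketch only: a TFR is the sum of age-specific fertility rates
# (births per woman-year in each age group) weighted by the width of the age
# interval, here the usual five-year groups from 15-19 to 45-49.
def total_fertility_rate(asfr_per_woman, interval_width=5):
    """Cumulate age-specific fertility rates over the reproductive age span."""
    return interval_width * sum(asfr_per_woman)

# Hypothetical age-specific rates (births per woman-year), NOT the study's data.
example_asfr = {
    "15-19": 0.060, "20-24": 0.140, "25-29": 0.120, "30-34": 0.090,
    "35-39": 0.060, "40-44": 0.025, "45-49": 0.005,
}

tfr = total_fertility_rate(example_asfr.values())
print(f"TFR = {tfr:.2f} children per woman")  # 5 * 0.50 = 2.50 for these rates
```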
Increases in education, labour force participation and income have been found to reduce fertility \u201322. HistPanel B of An analysis of the labour force participation of women of reproductive ages shows that formal employment increased slightly during the past decade for South Africans (from about 28% in 2000 to 30% in 2008) but decreased for Mozambicans (from about 27% to 23% over the same period). The very high unemployment of both groups suggests limited formal economic opportunities for women, which might have contributed to the recently observed stall in fertility decline.Education and wealth indicators suggest that over the period of study Mozambican women's status improved and converged with that of South African women. However, these gains are relatively modest and Mozambican women remain disadvantaged, particularly in relation to formal employment, within the relatively poor population of the rural setting.Approximately 20 years after the civil war in Mozambique, demographic characteristics of self-settled refugees of Mozambican origin in Agincourt are converging with those of their South African hosts. While the TFR in Mozambique itself has remained near 5 , the MozThe findings of this study suggest adaptation of the Mozambican refugees in the AHDSS to the fertility patterns of their host community. Adaptation theory states that exposure to cultural norms and local costs of childbearing will lead migrants to change their fertility behaviour to converge with that of natives in the destination . This apThe adaptation of Mozambican refugees to the lower fertility regime in South Africa has important implications for many areas of sub-Saharan Africa hosting refugee populations. The adaptation of Mozambicans in South Africa is likely facilitated by a shared language and culture. Self-settled refugees are also probably more likely to be exposed to and adjust to the local norms of childbearing compared to refugees living in camps.Access to contraception through the South African health system is a key component of the decrease in fertility of Mozambicans. Another important component is the improvement in socio-economic status partly attributable to access to education and host government social grants. Reducing the economic disadvantage of refugees and integrating refugees into local programmes and services encourages adaptation and can compensate for other factors that may otherwise increase the fertility of refugees such as poverty, lack of education, and lack of reproductive health services. Integration encourages adaptation and will likely benefit host communities by lowering the fertility of refugees.Overall fertility decline in Agincourt over the past few decades has been driven primarily by the decline in fertility of Mozambican women. South African women's total fertility declined primarily in the early 1990s and has been wavering around 2.5 since 1995. Fertility decline has also been minimal for Mozambican women since 2002. With fertility decline stalling in both groups it remains to be seen if fertility will go below replacement level (2.5 in South Africa) as predicted by earlier research . FurtherFindings presented here suggest a few areas of future intervention that would be helpful in settings such as Agincourt. The pattern of childbearing in Agincourt shows that delaying first births could reduce overall fertility rates. 
Others have argued that family planning programmes in South Africa need to be reoriented to address the contraceptive needs of adolescents before first births . Since cIn other settings, increasing access to family planning and reproductive health programmes for all women has been shown to improve women's economic and health outcomes and to enhance economic growth . HoweverThe primary limitations of our study are data driven. We do not have information on important variables such as prospective data on marriage, fertility desires, or detailed information on contraceptive use, to run models examining the proximate determinants of fertility."} +{"text": "In eukaryotic organisms clathrin-coated vesicles are instrumental in the processes of endocytosis as well as intracellular protein trafficking. Hence, it is important to understand how these vesicles have evolved across eukaryotes, to carry cargo molecules of varied shapes and sizes. The intricate nature and functional diversity of the vesicles are maintained by numerous interacting protein partners of the vesicle system. However, to delineate functionally important residues participating in protein-protein interactions of the assembly is a daunting task as there are no high-resolution structures of the intact assembly available. The two cryoEM structures closely representing intact assembly were determined at very low resolution and provide positions of C\u03b1 atoms alone. In the present study, using the method developed by us earlier, we predict the protein-protein interface residues in clathrin assembly, taking guidance from the available low-resolution structures. The conservation status of these interfaces when investigated across eukaryotes, revealed a radial distribution of evolutionary constraints, i.e., if the members of the clathrin vesicular assembly can be imagined to be arranged in spherical manner, the cargo being at the center and clathrins being at the periphery, the detailed phylogenetic analysis of these members of the assembly indicated high-residue variation in the members of the assembly closer to the cargo while high conservation was noted in clathrins and in other proteins at the periphery of the vesicle. This points to the strategy adopted by the nature to package diverse proteins but transport them through a highly conserved mechanism. Intracellular transport of biomolecules is an important event for the functioning of a cell. Both, endocytic as well as exocytic pathways of trafficking in eukaryotic cells involve formation of caged vesicles that communicate between the organelles of the same cell or to the exterior of the cell Clathrin, a cytosolic protein, was identified as the major component of CCVs and hence the name Protein-protein interactions play a crucial role in maintaining the structural integrity and functional state of the assembly In the present analysis, we have made use of these low resolution cryo-EM fitted models to gain better insights onto the protein-protein interactions made by clathrin chains. Towards this, we have used the method developed by us earlier, that can predict protein-protein interactions interface residues with high sensitivity and accuracy, starting from low resolution structures providing C\u03b1 atom positions only Protein-protein interaction interfaces of the components of CCV were recognized using accessibility criterion. 
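The accessibility criterion mentioned above is commonly implemented by flagging residues whose solvent-accessible surface area (SASA) decreases when the binding partner is present. The sketch below is a generic illustration using Biopython's Shrake-Rupley SASA on full-atom coordinates; it does not reproduce the authors' own method, which was developed for the C-alpha-only cryo-EM models, and the file name, chain identifiers and 1 Å² cutoff are arbitrary assumptions.

```python
# Generic accessibility-criterion sketch (assumes full-atom coordinates).
from Bio.PDB import PDBParser
from Bio.PDB.SASA import ShrakeRupley

def interface_residues(pdb_file, chain_a="A", chain_b="B", cutoff=1.0):
    """Residues of chain_a whose SASA drops by more than `cutoff` A^2
    when chain_b is present, i.e. residues buried upon complex formation."""
    structure = PDBParser(QUIET=True).get_structure("complex", pdb_file)
    model = structure[0]
    sr = ShrakeRupley()

    # Per-residue SASA of chain_a computed within the full complex.
    sr.compute(model, level="R")
    sasa_in_complex = {res.id: res.sasa for res in model[chain_a]}

    # Per-residue SASA of chain_a after removing the partner chain.
    model.detach_child(chain_b)
    sr.compute(model, level="R")

    return [res.id for res in model[chain_a]
            if res.sasa - sasa_in_complex[res.id] > cutoff]

# Hypothetical usage: interface_residues("clathrin_adaptor_complex.pdb", "A", "B")
```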
As can be seen in the The homologues of human clathrin chains as well as adaptins were identified across eukaryotic organisms by carrying out sequence search using PSI-BLAST 1] Tree construction- Using the multiple sequence alignments mentioned above, the phylogenetic trees were constructed for clathrin chains as well as the components of the adaptor protein complexes. The tree constructions were carried out using PHYML programme 2] Correlation of genetic distances- Using the trees constructed as mentioned above and the multiple sequence alignments mentioned previously, the genetic distance matrices of n\u00d7n orthologous sequences was computed using TREE-PUZZLE Structure of clathrin cage. As mentioned earlier, there is no structure available for the intact assembly of clathrin coated vesicles. The structures that resemble the overall assembly closely are the two cryo-EM structures of empty clathrin cage, with or without part of auxilin J domain Adaptor proteins. Out of the four different types of adaptor proteins structural information is available for only two complexes namely adaptor protein 1 Apart from the above mentioned structures that were used in the main analysis, a number of other structures were used as supporting structures to confirm our predictions. The complete list of the structures analyzed is given in the Recognition of protein-protein interaction interface in case of clathrin cage was a twofold problem; a] The structures available for the clathrin cage provide positions of only C\u03b1 atoms and hence recognition of interface was a non-trivial task and b] To further add to the complexity, the structural models comprise eighteen polypeptide chains . Using the standard method harboring accessibility criterion the residues of adaptor protein subunits involved in protein-protein interactions were recognized. Two structures available of the cores of adaptor protein AP1 (PDB code 1w63) The above mentioned structures lacked the appendage region in the beta chains of both the adaptins. This gap in the information was filled by analyzing the high resolution structures of these regions namely the PDB ids: 2iv8 In a given polypeptide chain the residues participating in protein-protein interactions are often conserved better over the course of evolution compared to their non-interface solvent exposed regions. The residues identified as interface residues in case of clathrin chains when tested for residue conservation were also found to be better conserved compared to the non-interface, surface exposed residues of the same chain as shown in the When conservation of interface residues were compared between different components of the assembly it was observed that the interfaces were maximally conserved in clathrin heavy chain with B chains of adaptor proteins ranking next. Minimum residue conservation was observed in the interfaces of the chains of the adaptor proteins that directly interact with the cargo receptors (\u03bc chains of both the adaptor protein complexes). To investigate this observed pattern further and to attain a quantitative picture, a detailed analysis of evolutionary constraints over these protein chains was carried out subsequently.Using the multiple sequence alignments obtained using ClustalW, phylogenetic trees were constructed using PHYML, which constructs maximum likelihood tree based on the alignment. 
Comparative analysis of the constructed phylogenetic trees unfolded some of the interesting facets of the evolutionary divergence pattern amongst the subunits of the two prominent hubs of the clathrin coated vesicle assembly namely clathrins and adaptor protein complexes. The orthologous sequences that were compared were taken from the identical set of organisms. The key observations of the analysis were as follows;1] When the functionally equivalent subunits of the two adapter proteins were compared, it was observed that the B chains, that interact with clathrin heavy chain directly, showed identical clustering pattern (as shown in the 2] Between the two A chains of the adaptors it was noted that the sequence of the A chain of the AP1 is largely conserved across eukaryotes while that of AP2 much diverged. This difference can be attributed to the differences in the modes of biological actions of the two complexes. AP1 largely operates between golgi complex to endosomes while AP2 operates at plasma membrane. Thus, it can be imagined that AP2 caters to larger variety of cargo and hence, to a larger variety of accessory proteins compared to AP1.To investigate the possibility of correlated evolution between the subunits of adaptins and clathrin heavy chain, genetic distance matrices were constructed using TREE-PUZZLE. Comparison was carried out between the matrices of adaptor protein subunits and that of clathrin heavy chain and Pearson correlation coefficients were computed. As shown in the Thus, if different components of the Clathrin coated assembly can be imagined to be arranged in spherical fashion with clathrin heavy chain being at the periphery and the cargo molecules at the center of the sphere, as depicted in the The structures of clathrin coat with and without auxilin peptide bound to clathrin heavy chain are the only available structures that represent the intact clathrin coated vesicle assembly the best. However, these structures were solved at very low resolution and provide C\u03b1 atom positions only. Hence, deriving in-depth knowledge about the residues participating in protein-protein interactions had been a difficult task. Recently, we have developed a method which can perform the above mentioned task with high accuracy and sensitivity Adaptor proteins interact with almost every member of the vesicle and the tasks are very well shared by all the four subunits of the adaptin complex. Every subunit comprises two distinct interacting interfaces namely the one for interactions within the adaptin complex to form core and the other to interact with its non-adaptin interacting partner. The interfaces holding the subunits of the complex together seemed to be located largely towards the center of the polypeptide while in case of \u03b1 and \u03b2 subunits the appendages towards the N-termini harbored the interfaces holding the accessory proteins and clathrin heavy chain respectively. The interface residues inferred in the present analysis showed better residue conservation over their non-interface, surface exposed counterparts, thus validating our findings.Owing to the functions performed by the assembly, the importance of the assembly to almost all the eukaryotic organisms can very well be imagined. Such assemblies will have a few commonalities such as the presence of clathrin like molecule to form cage in order to carry the proteins safely from place to place. 
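For the correlated-evolution comparison described above, a Pearson correlation between two genetic distance matrices can be computed from their upper-triangular entries, provided both matrices are indexed by the same ordered set of organisms. A minimal sketch follows, with made-up matrices standing in for TREE-PUZZLE output.

```python
# Hedged sketch of the correlated-evolution test: two n x n genetic distance
# matrices (rows/columns ordered identically by organism) are reduced to their
# pairwise entries and compared with Pearson's r. Matrices below are invented.
import numpy as np

def distance_matrix_correlation(d1, d2):
    d1, d2 = np.asarray(d1, float), np.asarray(d2, float)
    iu = np.triu_indices_from(d1, k=1)     # pairwise distances, diagonal excluded
    return np.corrcoef(d1[iu], d2[iu])[0, 1]

chc = [[0.0, 0.2, 0.6],
       [0.2, 0.0, 0.5],
       [0.6, 0.5, 0.0]]                    # hypothetical clathrin heavy chain
mu2 = [[0.0, 0.4, 1.1],
       [0.4, 0.0, 0.9],
       [1.1, 0.9, 0.0]]                    # hypothetical adaptor subunit

print(round(distance_matrix_correlation(chc, mu2), 3))
```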
However, due to the varying sizes and natures of the cargo there will be significant changes in the structures of the assembly. In order to understand the evolutionary trends in the components of clathrin vesicles detailed phylogenetic analysis was carried out and data was compared across the members of the assembly. In an organism, if members of the vesicular assembly can be imagined to be arranged in a sphere with the clathrin heavy chains being at the periphery while the cargo were being at the center, the adaptor proteins will occupy the space in between, connecting the two layers. This is the simplified model to visualize the arrangement of the components of clathrin coated vesicles. Here, we are not differentiating clathrin coated pits from plaques as elegantly shown by Saffarian et.al experimentally In conclusion, an extensive and non-trivial task of interface determination from a low resolution structure of clathrin coat, followed by a systematic sequence analysis and visualizing the results in the context of 3D structure, enabled us to dissect out a complex pattern of radial distribution of evolutionary constraints. Given the low resolution structures, such an analysis can be extended to other large biomolecular assemblies in the cell that play crucial roles in various cellular pathways."} +{"text": "Escherichia coli and Bacillus subtilis are the most thoroughly studied cell division mechanisms. The earliest visible event in cell division is the formation of a Z ring by FtsZ, a tubulin like protein, at the future septum site. The Z-ring appears to be an accurate marker for the position of the division site and is furthermore recognized by set of cell division proteins\u2014the divisome. At least two distinct mechanisms contribute to the placement of the division machinery: nucleoid occlusion and the Min system. The mechanism of Min system action is fundamentally different in both model organisms [reviewed in Barak and Wilkinson . This is likely accomplished by DivIVA switching its binding partner from MinJ to the DNA-binding RacA protein of MinCD (Jamroskovic et al.,"} +{"text": "In some hospitals, urinalysis is done routinely for all patients scheduled for cardiac surgery. Occasionally pyuria or bacteria is reported in the microscopic urinalysis of these patients that are clinically asymptomatic for urinary tract infections.We were seeking answer to this question: is the presence of a different number of bacteria in preoperative microscopic urinalysis of asymptomatic patients scheduled for cardiac surgery indicative of potential postoperative complications and as a result, a good reason to postpone the operation?We conducted a retrospective cross-sectional study based on the review of the medical records of 1165 patients who underwent open-heart surgery.One hundered and fifty one patients were eligible in our established criteria. There were no significant difference between their demographic characteristics and the same number of randomly selected patients with normal urinalysis who had underwent open-heart surgery. In the bacteriuria group, two patients, and in the control group, three patients had an infection at the operation sites in the post-operative period, which was not a significant finding between two groups (P = 0.503).We recommend that in the absence of symptoms of urinary tract infection, urinalysis is not necessary and not cost beneficial in the preoperative evaluation of patients scheduled for open-heart surgery. 
Open-heart surgery is one of the major operations in which the use of cardio-pulmonary bypass circulation by extracorporeal devices causes wide-spread of pathophysiological changes in the vital organs of the human body during and following the operation. Because of these vast changes in the mechanical, physiological and biochemical functions of the patient's body in perioperative period, the operation team especially anesthesiologists and cardiac surgeons need comprehensive preoperative evaluations of patient's status. In some hospitals such as our hospital, urinalysis is one of the laboratory tests that is done routinely prior to cardiac surgery routinly. The urine culture does not include routine laboratory tests in the absence of any symptom of urinary tract infection in the preoperative assessment of the candidates for open-heart surgery and it is only ordered when symptoms of urinary tract infection are presented. Occasionally pyuria or bacteria is reported in the microscopic urinalysis of patients who are clinically asymptomatic for urinary tract infection. There are some references regarding preoperative management of asymptomatic bacteriuria in different kinds of surgeries such as joint operations , 2. HoweThe urinary tract is normally sterile except for the flora in the last few centimeters of the distal urethra, which is varied and reflects the existence of digestive flora , the skin flora , and the genital flora . AsymptoWe were seeking an answer to this question: is the presence of a different numbers of bacteria in preoperative microscopic urinalysis of asymptomatic patients scheduled for cardiac surgery an indicator for the likelihood of postoperative complications and a reason for postponing the operation?We conducted a retrospective cross-sectional study based on medical records of Ahvaz Imam Khomeini hospital. We reviewed the medical charts of 1165 patients who underwent open-heart surgery between March 2010 and March 2012. We enrolled the medical charts of patients that were asymptomatic for urinary tract infections, but had bacteriuria with or without pyuria in microscopic urinalysis and compared them with the same number of randomly selected patients with normal urinalysis who had underwent open-heart surgery in our hospital during the same period as the control group. In microscopic urinalysis, microorganisms are usually reported as \"none\", \"few\", \"moderate\" and \"many\" present per high power field (HPF). We included those medical charts that had \"moderate\" or \"many\" bacteria in urinalysis reports. Men normally have fewer than two white blood cells (WBCs) per HPF; and women normally have fewer than five WBCs per HPF in urinalysis were eligible according our established criteria and compared with the same number of randomly selected patients with normal urinalysis who had underwent open-heart surgery in our hospital during the same time duration. The demographic characteristics between two groups are presented in Bacteria are common microorganisms in urine specimens because of their abundant normal microbial flora of the vagina or external urethral meatus and because of their ability to rapidly multiply in urine standing at room temperature. Therefore, microbial organisms found in all but the most scrupulously collected urines should be interpretive in view of clinical symptoms. The most accurate test for bacteriuria is urine culture. The most commonly used tests for detecting bacteriuria in asymptomatic persons are dipstick urinalysis and direct microscopy . 
Microsc"} +{"text": "Presently the training in tricuspid valve (TV) surgery is difficult due to limited exposure of the TV, 3D complexity of TV components and repair procedures. Therefore we propose a portable simulator for cardiac trainees and junior surgeons to be proficient in TV surgical techniques and to minimize the learning curve. The simulator can be used for an unrestricted number of procedures. It is made with the lowest possible fidelity focusing on availability and cost-containment of the exercise.The TV simulator was made from a sponge that can be placed inside of any box or on any board. The simulator can be equipped with a drain pipe for minimal invasive purposes. The rings and valves were made from available materials. The sponge was covered with a surgical tape for effective manipulation.The self-construction of the TV simulator results in improvement of the understanding of 3D TV anatomy. The usage of the sponge results in effective creation of TV components with similar properties to TV tissue. The usage of the simulator results in performing of all surgical procedures including TV ring annuloplasty, De Vega plasty, bicuspidization of the tricuspid valve, Edge to Edge, neo artificial chordae, pericardial augmentation of the leaflets and TV replacement as well. Unrestricted number of procedures can be performed after covering the sponge with surgical tape. Flexibility and cavity of the drain pipe allow the trainees to use the simulator for minimal invasive also.The surgical skills in TV surgery can be improved by usage of low fidelity simulator in classic and minimal invasive techniques. The high cost of the training residents and junior surgeons in TV surgery can be reduced effectively through the use of this low cost simulator and its accessories . The familiarity in performing surgical procedures results in reducing time consumed in the operation room and reduction of the learning curve."} +{"text": "Important advances in the development of smart biodegradable implants for axonal regeneration after spinal cord injury have recently been reported. These advances are evaluated in this review with special emphasis on the regeneration of the corticospinal tract. The corticospinal tract is often considered the ultimate challenge in demonstrating whether a repair strategy has been successful in the regeneration of the injured mammalian spinal cord. The extensive know-how of factors and cells involved in the development of the corticospinal tract, and the advances made in material science and tissue engineering technology, have provided the foundations for the optimization of the biomatrices needed for repair. Based on the findings summarized in this review, the future development of smart biodegradable bridges for CST regrowth and regeneration in the injured spinal cord is discussed. The corticospinal tract (CST) is considered the ultimate challenge to demonstrate if a repair strategy is succesfull in regeneration of the injured mammalian spinal cord. It all started from the observations made after the use of peripheral nerve grafts into the lesioned spinal cord: a significant regrowth of CNS axons was noted, although various population of axons tracts including the CST did not respond and, in particular, the spinal cord has already been a major challenge for neuroscientists for many decades. 
The complexity of the spinal cord with its many ascending and descending fiber tracts, the numerous spinal cell populations, the inter-connectivity between various levels of the spinal cord combined with the characteristic response after an injury in the adult, as well as the variability in the location and impact of the lesion in the clinical situation, makes the development of appropriate repair strategies very complex and difficult. Furthermore, in most cases of human SCI, there is a significant loss of spinal cord tissue and cavity formation is an important obstacle, impeding axonal regeneration is related to material science and tissue engineering technology. Here, most know-how is based on studies on repair after peripheral nervous system (PNS) injury. Although autologuous peripheral nerve grafts still result in a superior regenerative performance or \u201cgold standard\u201d for peripheral nerve repair into the spinal cord of the rat is characterized by two phases, which both occur postnatally: a white matter tract formation on the one hand and the spinal gray matter target innervations on the other. Both phases are closely related and have been shown in various anterograde tract-tracing studies molecules and myelin-associated proteins may act as outgrowth inhibitory molecules and be implicated in the outgrowth and restriction of CST pioneer fibers to leave the DF during their descent , in the environment of the vDF upon and during the arrival of the first CST pioneer fibers, is important for guidance. For correct target innervation (back-branching of fibers) and contact formation of the CST fibers, both outgrowth stimulating as well as outgrowth inhibitory molecules including various CAMs, EphA4, growth associated protein 43 (GAP-43), chondroitin sulphate proteoglycans and identified (NT-3) (or as yet unidentified) neurotrophins are needed. The know-how on cells and molecules involved in CST outgrowth during development is important in the design and creation of optimal bridging structures, which are used to enhance and direct the regrowth of injured CST fibers in adult mammalian spinal cord.A damaged peripheral nerve is able to regrow its axons through the distal stump, mainly via the bands of Bungner that are composed of tubes of basal lamina enclosing Schwann cells (SC). A prerequisite for this regrowth of injured PNS axons is the presence of a physical continuity. Hence, in large peripheral lesion gaps, many bridging materials have been tested to allow and stimulate the regrowth of injured fibers. In contrast to the injured PNS axons, those of the CNS fail to regenerate, although local sprouting can occur. The pioneering work of Ramon Y Cajal indicateA variety of biodegradable natural polymers have been used as implants and cell carriers for repair after SCI. Among them collagen type I and alginate have received most attention.Developmental studies have shown that environmental factors such as extracellular matrix molecules (ECM) components are transiently expressed during periods of axonal elongation using collagen as a vehicle that is not sufficient for CST regrowth and re-establishment of functional connections. Although collagen can serve as a bridge to connect the rostral and caudal portions of the injured spinal cord , which is a very promising approach in repair and bridging the injured spinal cord. 
Future research should be aimed at the creation and transplantation of aliginate-encapsulated cells producing substantially more of the neurotrophic factor.Synthetic biodegradable implants tested for spinal cord repair include matrigel matrix, fibrin and fibronectin (mats) and poly .Matrigel is a soluble basal membrane extract of the Engelbreth-Holm-Swarm tumor cell line. It forms a nonporous hydrogel at room temperature and the matrigel matrix contains laminin, collagen IV and heparin sulphate proteoglycan, as well as growth factors such as insulin growth factor-I (IGF-I). The implantation of Matrigel alone into the injured spinal cord does not stimulate regeneration of injured axons Bunge , an effeAmong those cells most often used for transplantation are the Schwann cells (SC) Bunge . The usecomplete transection) did result in an improved cell survival, the additional use of fibrin glue and acidic fibroblast growth factor (aFGF) was needed to demonstrate regrowth of injured anterogradely labeled CST fibers cells, in particular Schwann cell implantation, have shown promise in overcoming many of the obstacles facing successful repair of the injured spinal cord including the successful survival of transplanted cell (or cell suspensions). The implantation of Schwann cells as cell suspensions with in situ gelling Matrigel but also with collagen as a vehicle, after spinal cord contusion significantly enhances long-term cell survival but not proliferation, as well as improvement of graft vascularization and the degree of axonal in-growth over the standard implantation vehicle was injected into the lesioned site. Five weeks after transplantation, the application of Matrigel and hBMSC-SC resulted in a reduced cystic cavitation and at the same time promotion of the functional recovery has been studied using Matrigel-containing vascular endothelial growth factor (VEGF). With the use of Matrigel as a vehicle, exogenous vascular endothelial growth factor (VEGF165), either as recombinant protein alone or combined with an adenovirus coding for VEGF165, was applied into the lesion gap of the transected spinal cord is noted but at the same time there is an absence of re-entry of the CST fibers into the host in areas distal to the transplant. The latter maybe in part related to the presence of CSPGs, which inhibit axon growth.As already discussed see \u201c\u201d, the adDespite significant improvements in design of fibrin gels/mats and controlled delivery of neurotrophins (like NT-3), a major disadvantage of the fibronectin mats and fibrin gels is their relatively rapid degradation. Hence, timing of the implantation of (the presently available) fibronectin/fibrin bridges into the lesioned spinal cord is important; timed implantation at chronic stages may result in significant regeneration of injured CNS fibers including the CST.The aliphatic polyesters derived from lactide, glycolide and E-caprolactone are completely resorbable and biocompatible in the central nervous system with the main aim to re-create a continuum for regrowth of injured CST fibers may result in significant ingrowth into the bridge and re-entry of injured CST fibers into the host. The use of PLGA nanoparticles for a controlled and sustained release of neurotrophins is a very promising development in the field of biomatrices and bridging the injured spinal cord and has already resulted in improved hindlimb motor function in SCI rats. 
Follow-up studies are needed to demonstrate the effect of local delivery of the PLGA nanoparticles encapsulating neurotrophins on CST regrowth.Polyethylene glycol (PEG) is a highly water soluble polymer and has served as a therapeutic agent to reconstruct the phospholipid bilayers of damaged cell membranes (or fusogenic activity). Based on this fusogenic property, the systemic intravenous application of 30% PEG in rats that underwent 35-g clip compression at cervical eight (C8) resulted in a reduced neurofilament degradation and apoptotic cell death in the lesion area finally resulting in a modest neurobehavioral recovery after SCI : the injection of SWNT-PEG into the lesion at T9 1\u00a0week after a complete transection decreased the lesion volume, did not increase reactive gliosis and at the same time increased the number of neurofilament and anterogradely labeled CST fibers in the lesion , the intravenous application has been shown to result in axonal preservation and a significant neurological recovery after spinal cord injury. PEGylation of neurotrophins, functionalization of carbon nanotubes with PEG, or the development of injectable hydrogel scaffolds based on PEG are interesting and important improvements in design of successful biomatrices and bridges for repair of injured spinal cord, including the CST.The use of degradable biomatrices for implantation into the lesioned mammalian spinal cord has led to interesting and important findings on CNS regeneration in general and also in view of CST regrowth. As can be deduced from the approaches chosen, the implementation of factors and or cells important during the development of the spinal cord and combined and/or integrated into a biodegradable matrix often formed the fundaments of repair strategies. With respect to the injured CST, main efforts have been taken to bridge the lesion based on the re-creation or reconstruction of a 3-D alignment of outgrowth promoting cells , as this typical structure is known to be present and important during guidance and development of this tract. Furthermore, bridging the lesion and stimulation of the regrowth of injured CST fibers is often triggered by application of the neurotrophin-3 (NT-3), a molecule also known to be important during the development of this tract.In general, it can be concluded that various approaches have led to a significant ingrowth of anterogradely labeled regrowing CST into the bridges. It is difficult to compare the quality of the various bridges used but at the present moment not one single approach stands out. Furthermore, the major drawback of all bridges currently used and developed is the fact that no re-entry of CST fibers from the graft into the host tissue is noted. It is here where most research should be focused at. Despite the fact that various degradable biomaterials in themselves do not enhance the formation of scar tissue or even minimize the impermissiveness of the host tissue, the CST axons do not re-enter; obviously, the CST fibers like it too much in the graft and are not triggered (enough) to re-enter the host. Here, various aspects, based on our developmental know-how, might be directive and inform us about which approach should be taken in future design of CNS bridges and CST regrowth. The presence of neurotrop(h)ic molecules but also the 3D alignment of the glial cells during development, is restricted in time and only needed during the various phases of CST outgrowth. 
In future designs of bridges and biomaterials, mimicking a developmental CST environment is of the utmost importance; not only the presence of the cue needed for regrowth but equally important is the disappearance or decrease in concentration of the guidance molecule after some time. If we again use the developmental CST as the most optimal model needed for regrowth of injured CST fibers in the adult mammalian spinal cord, we have learned that the development of the CST tract is characterized by cellular interactions, which are changing during the spinal outgrowth in place and in time. It has been shown, for instance, that for spinal target innervation the most common mechanism used is collateral formation or back-branching, a phenomenon, at least to my knowledge that has never been observed (or studied) in regrowing injured CST fibers. This back-branching occurs after a waiting period of several days at a defined spinal level and the neurotrophin NT-3 might be important, although the precise mechanism underlying the formation of the CST-collaterals is not yet known. Whereas the injured CST axons should be triggered to re-enter the host tissue, it should be stressed that the failure of (CST) axons to re-enter the distal spinal host tissue maybe in part related to the presence of the so-called glial scar, including the chondroitin sulfate proteoglycans (CSPGs), which inhibit axon growth (Fawcett and Asher ON (\u201cingrowth of CST fibers into the bridge\u201d) and (maybe even more important!!) an OFF mode . In this, the use of a bridge does not stand alone: the formation of a continuum based on outgrowth promoting capacities not only related to the bridge but also including the rostral and caudal host tissue point of motor restoration, the regeneration of individual fiber systems is needed in order to develop an optimal repair strategy and motor recovery. In this respect, one important question needs to be addressed: what is the relationship between an anatomically observed response related to the function repair and recovery of function? Or, do we need all injured (CST) fibers to regenerate?With respect to the CST, the question of which behavioral deficits are due to CST lesioning in adult rats is still not fully answered. Although the transection of the CST in the rat leads to, for instance, a loss of contact placing, the role of this tract in the control of distal and proximal limb movements is still not fully understood. The good news is that it has been documented that a very low percentage of regrowing fibers at the anatomical level may account for a complete recovery of the placing reflex (Bregman et al."} +{"text": "There were multiple errors in the Author Contributions statement. The second author (YL) was incorrectly included in the list of authors who \"Wrote the paper.\" In addition, the first author (XC) should be included in the list of authors who \"Wrote the paper.\""} +{"text": "TheVirtualBrain (TVB) is the first integrative neuroinformatics platform for the modeling of full brain network dynamics. TVB simulator, written in Python, enables the systematic model-based inference of neurophysiological mechanisms on different brain scales that underlie the generation of macroscopic commonly used neuroimaging signals . In the framework of TVB we build full brain network models by incorporating biologically realistic large scale couplings of neural populations. 
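The large-scale couplings referred to here, described next as the space-time structure of the connectivity matrix, can be illustrated with a toy delay-coupled network in which each node receives input from every other node scaled by a connection weight and shifted by a conduction delay. This is only a schematic numerical sketch, not TheVirtualBrain's actual API or neural-mass models; the weights, delays, damping and drive terms are arbitrary choices.

```python
# Toy illustration of a connectivity matrix with weights W and delays D:
# node i integrates input from node j scaled by W[i, j] and delayed by
# D[i, j] time steps. Not TVB code; all parameter values are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_steps, dt = 4, 1000, 0.1
W = rng.uniform(0.0, 0.3, size=(n_nodes, n_nodes))   # connection strengths
np.fill_diagonal(W, 0.0)
D = rng.integers(1, 50, size=(n_nodes, n_nodes))     # delays, in time steps
I = 1.0                                               # constant drive to each node

x = np.zeros((n_steps, n_nodes))                      # node activity over time
x[0] = rng.standard_normal(n_nodes)

for t in range(1, n_steps):
    delayed = np.array([
        sum(W[i, j] * x[max(t - D[i, j], 0), j] for j in range(n_nodes))
        for i in range(n_nodes)
    ])
    # Damped linear node dynamics driven by delayed, weighted network input.
    x[t] = x[t - 1] + dt * (-x[t - 1] + delayed + I)

print("steady-state node activity:", np.round(x[-1], 3))
```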
The couplings within the network are captured by the space-time structure of the connectivity matrix, which defines the connection strengths and the time delays of signal transmission between all network nodes. Researchers from computational, theoretical and clinical neuroscience can benefit from this tool. The web interface allows users without programming knowledge to access a powerful computing platform as well as visualization and analysis tools. Users with strong programming skills benefit from all the advantages of the Python programming language through a shell interface. We propose TVB as a novel computing tool for large-scale brain simulations. TVB makes systematic parameter space exploration tractable, allowing users to rapidly gain insight into the full repertoire of brain dynamics as a function of structure. It moves away from the investigation of isolated regional responses and considers the function of each region in terms of the interplay among brain regions. Full brain simulations open the possibility of exploring the consequences of pathological changes in the system, enabling us to identify and potentially counteract those unfavorable processes."} +{"text": "The Eating Disorders Program (EDP) based at Royal Melbourne Hospital offers inpatient, day patient and outpatient services to adults from western Melbourne and regional Victoria and receives over 150 referrals per year. Despite streamlined referral processes to the program and access to training for front-line clinicians, there are still gaps in the management of eating disorders in the community. Most affected are patients who have been discharged from the inpatient or day programs and those from regional centres. The EDP set up a pilot outreach service in 2013 following extensive service mapping and needs analysis. There were two initial goals: individualised community treatment for patients in the recovery phase of their eating disorder, and a consultation liaison role to advise local and regional clinicians and inpatient units on the effective management of eating disorders. The outreach service consisted of a multi-disciplinary team supported by the EDP. The goals of treatment for individual work were improved quality of life, weight restoration and decreased eating disordered behaviour. The consultation liaison role aimed to reduce the need for admission to specialist beds and build the capacity of local clinicians. This presentation aims to describe the background, setup and outcomes of this new service. This abstract was presented in the Care in Inpatient and Community Settings stream of the 2013 ANZAED Conference."} +{"text": "FOXP2 involved in specific language impairments and neuroligin genes (NL-3 and NL-4) involved in autism spectrum disorders. Knockout of FoxP2 leads to reduced vocal behavior and eventually premature death. Introducing the human variant of FoxP2 protein into mice, in contrast, results in shifts in frequency and modulation of pup ultrasonic vocalizations. Knockout of NL-3 and NL-4 in mice diminishes social behavior and vocalizations. Although such studies may provide insights into the molecular and neural basis of social and communicative behavior, the structure of mouse vocalizations is largely innate, limiting the suitability of the mouse model to study human speech, a learned mode of production. Although knockout or replacement of single genes has perceptible effects on behavior, these genes are part of larger networks whose functions remain poorly understood. 
In humans, for instance, deficiencies in NL-4 can lead to a broad spectrum of disorders, suggesting that further factors contribute to the variation in clinical symptoms. The precise nature as well as the interaction of these factors is yet to be determined.Comparative analyses used to reconstruct the evolution of traits associated with the human language faculty, including its socio-cognitive underpinnings, highlight the importance of evolutionary constraints limiting vocal learning in non-human primates. After a brief overview of this field of research and the neural basis of primate vocalizations, we review studies that have addressed the genetic basis of usage and structure of ultrasonic communication in mice, with a focus on the gene A classic theme in natural philosophy is the question of what distinguishes our own species from others , particuFoxP2 with particular regard to its impact on structural properties of vocalizations, whereas the second study set assessed the importance of neuroligin genes on the usage of vocalizations. For comparative purposes, we will make some reference to research on bird song, another important study system to elucidate the foundations of vocal learning.The purpose of the present review is to explore ways in which genetic studies in mouse models can contribute to a better understanding of the evolution of human communication. One specific aim is to elucidate the limitations in vocal communication of non-human primates. We therefore begin with a review of the vocal communication of non-human primates, including the neural circuits underlying call usage and structure. This background knowledge is essential to understand the derived features of neural circuitry in the human lineage that are seen as a precondition for vocal learning in our own species and to place the studies of mouse ultrasonic vocalizations (USVs) into an appropriate context. We begin this central part with a brief introduction to the structural and functional properties of mouse USVs, and then summarize the results of two exemplary sets of studies. The first study set focused on the effects of Language in general is characterized by a set of features that distinguish it from other means of communication involves a number of different subsystems, contributing to different degrees in the initiation of vocalization and the structural properties of the calls. In a recent review, The second vocalization control pathway described in the The role of the basal ganglia in controlling motor output has long been recognized . Recent The most important derived feature in the human lineage regarding the ontogeny of speech appears to be the evolution of the direct pathway from the motor cortex to the motoneurons, enabling volitional control over the oscillations of the vocal folds. Together with the intricate coordination of breathing and articulation, this feature allows for the precise control over speech production. The role of the basal ganglia in the modulation of vocal behavior, in contrast, appears to be an ancestral feature. The detailed investigations of the brain mechanisms underlying vocal control now call for the elucidation of the genes that might be involved in the reorganization of the brain that enabled humans to talk .USVs occur in a wide range of taxa such as rats as well Interest in mouse vocal behavior goes back quite some time were able to show that females emit USVs during social encounters with intruding females. 
The number of calls seemed to be modulated by the motivational state of the emitter during the estrous cycle, and there was a positive correlation between the number of calls and the time spent by the resident sniffing the intruder female confirmeThe above overview describing some key questions in the evolution of language debate, as well as the most significant features of mouse USVs, serves as the framework for the following section that reviews exemplary studies addressing specific genes that have been implicated in language impairments and socio-cognitive deficits.FOXP2 gene was identified in a British family whose specific language impairments appeared to be inherited in an autosomal dominant fashion (FOXP2 gene (FoxP2 appears to be highly conserved . Analyses of the evolution of the FoxP2 gene in primates have identified two amino acid substitutions believed to have become fixed in the human lineage after its separation from the chimpanzee and which appear to have been subject to positive selection affecting the DNA binding domain of the protein, is truncated due to a nonsense mutation (R328X) or is disrupted by a chromosomal rearrangement, the development of speech and language is impaired and not in males who sang to females (directed singing) ; this waFoxP2 support the view that this gene is closely linked to vocal behavior. The different clades of echolocating bats show significant changes in the FoxP2 gene sequence hypothesized that this pattern of gene modification is related to the fact that bats rely on extremely precise vocalizations for predation. Mice homozygous for non-functional FoxP2 alleles produce significantly fewer isolation calls than their wild-type (WT) littermates ; the WT mouse FoxP2 protein can be used as a model for the ancestral version of the human FoxP2 protein and vasopressin \u2013 hypothalamic neuropeptides excreted by the neurohypophysis . Mice laThe usage of ultrasonic vocalizations appears to also be influenced by the dopaminergic reward system involved in a variety of behaviors, including affective responses, positive reinforcement, foraging and sexual behavior . To giveAs mentioned early on, our research interest lies in the elucidation of the evolution of communicative behavior with special emphasis on the evolution of speech. Because of the link between communicative behavior and the development of perspective taking and mental state attribution in human children, genes that have been implicated in autism spectrum disorders (ASD) are of particular interest. Typical symptoms of ASD are social deficits such as impairments in the ability to take the perspective of others, language deficits, as well as restricted interests . These fWe focused on the vocal communication of NL-3 and NL-4 KO mice and analyzed the ultrasonic vocalizations of male mice during courtship behavior. We found in both cases a significant reduction in the number of USV calls . Indeed,The rare cases where the KO male mice uttered calls showed that both WT and KO mice were able to produce the same call types . AlthougOther mouse models for autism have reported a very different pattern in terms of the structural property of calls. 
Mice of the BTBR T+ tf/J strain, which exhibit social abnormalities and repetitive behaviors, were found to have an abnormal vocal repertoire (The case studies reviewed here indicate that the ultrasonic vocalizations of mice appear to constitute a valuable readout in studies of the genetic foundations of social and communicative behavior, perhaps even giving some preliminary clues to the evolution of speech. Call rates, durations and response consistencies, in particular, appear to be sensitive variables in studies of genes involved in the modulation of social behavior. However, to date the interaction of different factors that contribute to variation in the propensity to vocalize remains largely unclear.Before we can fully understand how different genes contribute to changes in the structure of vocalizations, we need to develop a better understanding of the sound production mechanisms. For instance, how are mouse calls with \u2018pitch jumps' being produced, what role do non-linear phenomena play and what is the contribution of the vocal tract filtering (FoxP2 gene show significant differences in the local architecture of the striatum is in-line with the view that this area is important in the fine-grained control of motor behavior. However, in addition to changes at the synaptic and local level, there is also a global reorganization of the fiber tracts that connect the brain areas involved in motor sound production and perception (Despite the present optimism regarding the value of ultrasonic vocalizations in transgenic mice as readouts in clinical studies, some important restrictions apply in terms of their applicability to study the foundations of human speech. It is of utmost importance to be aware of the differences in the neural circuitry underlying innate vs. learned vocalizations. In other words, in the FoxP2 studies in mice reviewed here, the effects of genes are studied largely in the context of innate behavior. The ultimate goal is to understand a learned mode of vocalization production because only in this context will we enhance our understanding of the origins of speech. The finding that laboratory-produced mice carrying the human variant of the rception .We are just beginning to grasp the complexity of the genetic networks contributing to regulations between vocal and social behavior. Studies on the genetic foundation of mouse ultrasonic vocalizations can help to put some pieces of the puzzle of language evolution in the proper place. At the same time, other issues such as the understanding of the link between mental state attribution and language and its role in the evolution of speech still remain largely elusive."} +{"text": "Over the past five decades ovarian cancer has been of considerable interest to clinical cancer investigators due to the fact that it is among the most chemosensitive of all solid tumors . Unfortutargeting biological pathways relevant in a particular tumor type and even within a specific patient with that type of cancer . Research in this arena in ovarian cancer remains in its early stages although a number of quite exciting developments have recently been reported in the peer-reviewed medical literature that suggest the realistic potential that this novel general class of drugs will soon become important components of \u201cstandard-of-care\u201d in the management of this difficult malignancy. 
In recent years there has been a particular focus in the cancer research community to discover and subsequently develop clinically active drugs that are capable of specifically In this special issue, investigators from around the world have contributed to this literature by summarizing a number of important developments. In the papers of this special issue, an initial discussion of the role of surgical cytoreduction in the malignancy is followed by an overview of the management of recurrent ovarian cancer and the relevance of molecular abnormalities in specific ovarian cancer subtypes. This is followed by several excellent and comprehensive overviews of the possible roles of targeted therapy in ovarian cancer, the potential impact of antiangiogenic drugs, epidermal growth factor and PARP inhibitors, disruption of insulin and glucose pathways, and novel treatments affecting histone deacetylase and metastatic colonization, as well as an innovative approach to immunotherapy in the malignancy. The peer-reviewed papers in this special issue provide important insight into both the current and future management of epithelial ovarian cancer.M.MarkmanM.MarkmanJalid SehouliJalid SehouliCharles F. LevenbackCharles F. LevenbackDennis S. ChiDennis S. Chi"} +{"text": "In a recent review article we reconsidered the hypothesis that neurogenic vasodilatation is a key factor in the genesis of the headache of the migraine attack according to an updated and critical analysis of past and current literature. Cited papers span from pioneering studies in experimental animals of more than a century ago, to very recent investigations in humans in whom vasodilatation of cranial arteries has been accurately measured with highly sophisticated and reliable techniques. Results of neurovascular imaging studies have strongly corroborated previous pharmacological acquisition with antagonists of the calcitonin gene-related peptide receptor. Findings from clinical trials with these drugs underlined the role of neurogenic vasodilatation in migraine. In a comment to our review , Elliot On the other hand, we acknowledge that the complex pathophysiology of migraine and the mystery that still covers the initiating factors/mechanisms of the migraine attack should cast caution in refusing the contribution of triggers located in the central nervous system."} +{"text": "Globally, trauma represents a growing and significant burden of disease. Many health systems have limited metrics with which to guide development and appropriately inform policy and management decisions with regard to trauma related health care delivery.This paper outlines the establishment of need for improved trauma related metrics in the country of Bhutan and the process of development of a trauma registry at Jigme Dorji Wangchuck National Referral Hospital to meet that need.Trauma registries are important tools allowing health systems to respond to the shifting burden of disease; successful establishment of a trauma registry requires an understanding of the health system and broad institutional support. This project outlines the concepts and process involved in establishing a trauma registry at Jigme Dorji Wangchuck National Referral Hospital Thimphu, Bhutan (JDWNRH) as a model for the country of Bhutan. 
A need for improved emergency medical care was established by the Royal Government of Bhutan and collaborating partners, and improved trauma related metrics were identified as critical to informed development. Recent changes in the understanding of trauma related outcomes and demographics have led to a trend in international policy, funding and programmatic implementation emphasizing trauma care and injury prevention in the developing world. With limited health and economic infrastructure, Bhutan faces significant challenges with regard to health care delivery and to meeting the health-related development goals of the United Nations. Injuries and the burden of injuries on the health care system are increasing in Bhutan, resulting in the prioritization and need for increased data on the part of the MoH. Data from the Monthly Morbidity Report (MMR) presented in the 2009 Annual Health Bulletin show the total number of Injuries and Poisoning increasing from 19,117 in 2004 to 26,330 in 2008, an increase of 37.7%. In addition, the number of deaths attributed to Injuries and Poisoning increased from 13 in 2004 to 30 in 2008, an increase of 130%. These data correlate with global studies of trends that show dramatically increasing rates of injuries and associated death and disability. While in Bhutan, the two staff members helped build a core group of staff, the \u2018trauma team\u2019, which focused on the development and implementation of the trauma registry. This group included management, clinical staff and administrators involved in hospital services related to trauma patients. Definitions were agreed upon and the data points were chosen based on addressing the important elements of demographics, prehospital care, vital signs representing clinical condition, mechanism or nature of injury, treatment and disposition. A trauma registry form was developed and piloted with subsequent implementation in September 2010. Establishment of the trauma registry at JDWNRH was met with some specific challenges. First, while there was interest in the benefits of the registry, as evidenced by MoH efforts and a previous attempt at establishing a registry several years earlier, clinical workload and commitment of resources hindered the process. Communication between the MoH and clinical staff with regard to data use and priority setting was not clear. Arrival of trauma patients occurred at several venues, requiring many staff to be trained and adding logistical difficulties. Three clinic departments \u2013 the emergency department, orthopedics, and forensics \u2013 represented the bulk of trauma patients and were trained to administer the registry. The data processing was conducted by the medical records office and staff of the trauma team, with the goal of eventually transferring the data to the MoH for further analysis and use in policy and management decision making. Document flow through the hospital was a great challenge, requiring layers of shifting responsibility. A lack of clarity concerning policy implications of data points was also likely an important limitation. Implementation of the paper trauma registry before the planned implementation of the computerized medical record was necessary but less than ideal. Finally, given the limited number of clinical staff, the untoward effect of using staff time to develop a tool of unclear clinical significance and longevity was of great concern. The project outlined above represented initial attempts to confront the unmet needs of the people of Bhutan and the desire of the government to improve health outcomes with regard to trauma and emergency care. 
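The element groups chosen for the registry form (demographics, prehospital care, vital signs, mechanism or nature of injury, treatment and disposition) can be illustrated with a minimal record structure. The Python sketch below is purely illustrative; the field names and example values are hypothetical and do not reproduce the actual JDWNRH form.

# Illustrative sketch only: a minimal registry record covering the element
# groups described above. Field names are hypothetical assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TraumaRegistryRecord:
    record_id: str                      # registry identifier
    age: Optional[int] = None           # demographics
    sex: Optional[str] = None
    arrival_mode: Optional[str] = None  # prehospital care (ambulance, private vehicle, walk-in)
    systolic_bp: Optional[int] = None   # vital signs representing clinical condition
    pulse: Optional[int] = None
    gcs: Optional[int] = None
    mechanism: Optional[str] = None     # mechanism or nature of injury
    treatment: Optional[str] = None     # initial treatment given
    disposition: Optional[str] = None   # admitted / discharged / referred / died

# Example entry, as it might be transcribed from a paper form (hypothetical values)
example = TraumaRegistryRecord(
    record_id="JDW-2010-0001", age=34, sex="M", arrival_mode="ambulance",
    systolic_bp=110, pulse=96, gcs=15, mechanism="road traffic crash",
    treatment="fracture reduction", disposition="admitted (orthopedics)",
)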
Many of the challenges facing Bhutan\u2019s goals for improving emergency medical care are similar to those found in other resource-poor settings: economic and logistical challenges to scaling up care, lack of trained health care personnel and limited training opportunities, and difficulty prioritizing interventions and health care systems investment. Improved metrics, as a goal of this project, have the potential to confront some of the limitations inherent within systems with limited resources. In addition, given Bhutan\u2019s reliance on partners for development, the country\u2019s ability to guide its development may be influenced by external priorities. Improved metrics may allow the MoH greater ability to influence development partners towards more impact-driven interventions."} +{"text": "Because of the rapidly increasing prevalence of obesity among children worldwide and the realization that this global epidemic is closely related to changing lifestyle, especially relating to diet and exercise, research into the effects of exercise on children's health and the effects of children's health on their ability to exercise becomes timely and imperative. Promoting scientific research in the field of exercise medicine in children has therefore been the primary motive behind this special issue of the International Journal of Pediatrics, which is dedicated to publishing important works in the field. To the great satisfaction of our team of editors, a great number of good quality research results were submitted to the issue from all four corners of the world, which indicates a strong interest in this very important field of medicine worldwide. The first section of this issue contains cross-sectional population studies that provide evidence confirming the strong correlation between obesity and lack of activity in children of different populations and different age groups. Furthermore, a 12-month interventional study from Sweden showed that longer exercise periods performed by school age boys resulted in more muscle mass and increased muscle strength (the fourth paper). The second section of the issue contains several articles studying the different genetic, environmental, parental, and other psychosocial factors that can potentially affect children's level of exercise activity. These articles collectively provide evidence for the variety and complexity of factors that affect children's predilection for exercise. They also identify potential areas for future intervention to promote exercise early in life (the ninth paper). One example of such opportunities involves the use of video gaming. Even though the overuse of video gaming has been largely blamed for the decreasing level of physical activity in children, the case might be completely reversed with the recent advent of interactive video gaming, which tends to be preferred by children over conventional video gaming and is associated with a higher level of physical activity (the tenth paper). 
The third section deals with research relating to exercise in children with known chronic illnesses such as diabetes, cystic fibrosis, neuromuscular diseases, arthritis, and congenital heart diseases, with emphasis not only on the limitations these illnesses impose on children's ability to exercise and become physically fit but also on how increased fitness in these patients can modulate their disease process, and therefore on ways exercise can be performed and promoted. The fourth section discusses hemodynamic responses to exercise in children as compared to adults (the seventeenth paper) and explores the hormonal and inflammatory profile of overweight and normal weight children and relates them to cardiovascular fitness (the eighteenth paper). The fifth and final section has one article which evaluates the validity of different accelerometric measurements used in exercise research to objectively grade the level of physical activity as compared to the gold standard of directly measuring energy expenditure (the nineteenth paper). We hope that this special issue will contribute substantially to the existing body of knowledge of this new and growing field of exercise medicine in children and stimulate further needed research. Mutasim Abu-Hasan, Neil Armstrong, Lars B. Andersen, Miles Weinberger, Patricia A. Nixon"} +{"text": "Malaria is still one of the most important infectious diseases in the world. The disease is also a public health problem in the south and southeast of Iran. This study was designed to show the correlation between regular malaria microscopy training and refresher training courses and the control of malaria in Iran. Three types of training courses were conducted in this programme: five-day, ten-day and bimonthly training courses. Each of the training courses contained theoretical and practical sections, and training impact was evaluated by practical examination and multiple-choice quizzes through pre and post tests. The distribution pattern of the participants in the training and refresher training courses showed that most participants were from Sistan & Baluchistan and Hormozgan provinces, where malaria is endemic and from which most cases of the infection arise. A total of 695 identified individuals participated in the training courses. A significant inverse correlation was found between conducting malaria microscopy training courses and annual malaria cases in Iran. Conducting a suitable programme for malaria microscopy training and refresher training plays an important role in the control of malaria in endemic areas. Obviously, the decrease in malaria cases in Iran has been achieved through several activities, of which malaria diagnosis training was one. Malaria is still one of the most important infectious diseases in the world and is life threatening in malarious areas. According to the report of WHO, 216 million malaria cases with an estimated 655000 deaths were officially recorded in 2010. Malaria is a public health problem in the south and southeast of Iran. The annual reports of the Communicable Disease Centre (CDC) show that the number of malaria cases decreased from 19129 cases in 2001 to 2656 cases in 2011. The School of Public Health, TUMS, Iran has been designated as a research and training focal point of malaria programs including malaria microscopy. 
Three types of training courses were conducted in this programme: five-day and ten-day refresher courses and a bimonthly training course. The objectives of the training course for the first and second items were to strengthen the knowledge of participants in the field of malaria microscopy, and for the third item to train new eligible participants in malaria microscopy. Each day of the courses covered seven training hours. Training impact of the courses was evaluated by practical and multiple-choice quizzes through pre and post tests. This study was designed as a retrospective study to consider the number of participants, their geographical distribution, their academic grades, the quality of the training and also the influence of a decade (2001\u20132011) of malaria microscopy training courses on the control of malaria in Iran. The refresher training courses were designed for microscopists and laboratory technicians, and the bimonthly course was conducted for eligible high school graduated students. The theoretical section included: introduction to malaria microscopy, malaria parasites, life cycle and morphology of human plasmodia, morphology of blood cells, antimalarial drugs and treatment of uncomplicated malaria, drug resistance in malaria parasites and Rapid Diagnostic Tests (RDTs). The theoretical subjects were presented according to the basic malaria microscopy manual. The practical section included: blood collection, preparing thick and thin blood smears and staining them with Giemsa stain, detection of malaria parasites, differentiating between human plasmodia, artifacts, counting malaria parasites, keeping and storing the examined slides, recording the results and preparing relevant reports. The practical subjects were performed according to the standard operating procedures (SOPs) of WHO manuals. According to this table, most of the courses were conducted at the Bandar-Abbas Health Research Station in Hormozgan Province. The correlation between conducting malaria microscopy training courses and annual malaria cases in Iran is illustrated in the corresponding figure. In addition to well-adopted administrative and vector control activities, prompt case finding in malaria infection and accurate treatment of the disease play an important role in the control and elimination of malaria. On the other hand, timely treatment of malaria infection depends on the exact and swift diagnosis of malaria parasites. Although clinical signs, particularly in malarious areas, can guide physicians to suspect malaria infection, they are not sufficiently sensitive and specific. To avoid presumptive treatment for malaria in cases presenting with fever, the current WHO recommendation emphasizes the systematic testing of all fever cases. Although some other methods such as PCR, Quantitative Buffy Coat (QBC), Indirect Fluorescent Antibody (IFA) and light microscopy can be used for diagnosis of malaria parasites, light microscopy remains the conventional method of choice for detection of malaria parasites in most malarious areas. In the light microscopical method, prompt and exact diagnosis also depends on two requirements: well-trained and skillful microscopists, and good materials and equipment. The latter is usually available in the markets, but training competent microscopists takes time and needs adequate expenditure. Such an aim can be achieved by designing a reasonable programme. The results of this study show that employing the trained malaria microscopists in their right place will lead to the desirable results. 
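The kind of relationship reported here, an inverse correlation between yearly training activity and annual malaria cases, can be tested with a rank correlation. The Python sketch below is only an illustration: the choice of Spearman's rank correlation is an assumption (the text does not name the statistical method), and the yearly figures are placeholders apart from the 19129 (2001) and 2656 (2011) case counts quoted above.

# Sketch of the kind of analysis described above: testing for an inverse
# relationship between yearly training activity and annual malaria cases.
# All intermediate yearly values are hypothetical placeholders.
from scipy.stats import spearmanr

years = list(range(2001, 2012))
courses_per_year = [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6]          # hypothetical
annual_cases = [19129, 17500, 16000, 14800, 13200, 11900,
                9800, 7600, 5400, 3900, 2656]                  # partly hypothetical

rho, p_value = spearmanr(courses_per_year, annual_cases)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")  # a negative rho indicates an inverse correlation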
Preparing a suitable programme for training and refresher training courses in the field of malaria microscopy can have a considerable impact on the knowledge and competency of microscopists and laboratory technicians, and plays an important role in the control of malaria in endemic areas. Obviously, the decrease in malaria cases in Iran has been achieved through several activities, of which malaria diagnosis training was one."} +{"text": "Molecular epidemiology is a science which utilizes molecular biology to define the distribution of disease in a population (descriptive epidemiology) and relies heavily on integration of traditional epidemiological approaches to identify the etiological determinants of this distribution. The study of viral pathogens of aquaculture has provided many exciting opportunities to apply such tools. This review considers the extent to which molecular epidemiological studies have contributed to better understanding and control of disease in aquaculture, drawing on examples of viral diseases of salmonid fish of commercial significance including viral haemorrhagic septicaemia virus (VHSV), salmonid alphavirus (SAV) and infectious salmon anaemia virus (ISAV). Significant outcomes of molecular epidemiological studies include:
Improved taxonomic classification of viruses
A better understanding of the natural distribution of viruses
An improved understanding of the origins of viral pathogens in aquaculture
An improved understanding of the risks of translocation of pathogens outwith their natural host range
An increased ability to trace the source of new disease outbreaks
Development of a basis for ensuring development of appropriate diagnostic tools
An ability to classify isolates and thus target future research aimed at better understanding biological function
While molecular epidemiological studies have no doubt already made a significant contribution in these areas, the advent of new technologies such as pyrosequencing heralds a quantum leap in the ability to generate descriptive molecular sequence data. The ability of molecular epidemiology to fulfil its potential to translate complex disease pathways into relevant fish health policy is thus unlikely to be limited by the generation of descriptive molecular markers. More likely, full realisation of the potential to better explain viral transmission pathways will be dependent on the ability to assimilate and analyse knowledge from a range of more traditional information sources. The development of methods to systematically record and share such epidemiologically important information thus represents a major challenge for fish health professionals in making the best future use of molecular data in supporting fish health policy and disease control. Epidemiology, or the study of factors affecting the health of populations, is as old as science itself, with Hippocrates (460-377 BC) being the first to examine the relationship between disease occurrence and environmental issues. More recent technological developments have facilitated the identification and exploitation of molecular biomarkers, whose use alongside traditional epidemiological approaches has led to a better understanding of the underlying mechanisms of disease transmission in populations. 
This young science of \"molecular epidemiology\" has emerged since the 1970s, when the term was first coined in relation to the study of influenza virus. Descriptive molecular epidemiology most often involves an attempt at establishing the evolutionary history of a given viral species, called its phylogeny. A phylogenetic tree is a graphical summary of this inferred evolutionary relationship between isolates, from which the pattern and in some cases timing of events that occurred as viruses diversified can be hypothesised. Since all life is related by common ancestry, viruses (or other organisms) in current circulation display genetic diversity which reflects their evolutionary history due to the accumulation and inheritance of mutations when genetic material is copied. In the absence of historical information or isolates, which are often unavailable, the evolutionary history of viruses in current circulation can be hypothesised based on the sampling of current genetic markers and by working backwards to infer the most likely series of events which best explain the observed relationships. Fundamental to reconstructing such an evolutionary history is the comparison of homologous characters (often nucleotide or amino acid sequence positions); i.e. those which descend from a common ancestor. In practice, this involves creating an alignment of sequences which represents a hypothesis of positional homology and provides the basis for reconstructing evolutionary history based on a mathematical model of evolution. This review aims to focus on the contribution that the application of such descriptive tools has made to the practical understanding and control of viruses of salmonid aquaculture, rather than the techniques themselves, which have been extensively reviewed by others. Analytical epidemiology is based on observations of disease trends and incidence in different populations that turn into testable hypotheses, and entails rigorous collection of data for all aspects of study. Fish, unlike many other domestically reared farmed animals, are most often reared in open systems (cages) where they are exposed to a wide diversity of naturally occurring environmental pathogens. Recent estimates suggest the presence of a staggering 10^8 viruses/mL in coastal seawater. The advent of intensive fish husbandry and associated international movements of fish has fundamentally altered the natural equilibrium by exposing animals to new environments and their viruses to new host species. The accumulation of relatively high levels of genetic variation in viruses associated with aquaculture provides excellent and exciting opportunities for understanding the relative contribution of molecular epidemiology to their understanding and control. In order to identify the extent to which this capacity has been exploited in this field, this review focuses on three of the most significant and well studied RNA pathogens of salmonid aquaculture, viral haemorrhagic septicaemia virus (VHSV), infectious salmon anaemia virus (ISAV) and salmonid alphavirus (SAV). 
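As a concrete illustration of the descriptive approach outlined above, building a tree from a sequence alignment, the following Python sketch uses Biopython's distance-based tools. The input file name is hypothetical and the identity-distance/neighbour-joining combination is simply one straightforward model choice; it is not the specific method used in the studies reviewed here.

# Minimal sketch of distance-based tree reconstruction from an alignment,
# using Biopython. File name and model choice are illustrative assumptions.
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = AlignIO.read("vhsv_glycoprotein_alignment.fasta", "fasta")  # aligned sequences (hypothetical file)

calculator = DistanceCalculator("identity")     # pairwise distances from the alignment
distance_matrix = calculator.get_distance(alignment)

constructor = DistanceTreeConstructor()
tree = constructor.nj(distance_matrix)          # neighbour-joining tree

Phylo.draw_ascii(tree)                          # text rendering of the inferred relationships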
In this area the application of molecular epidemiology has provided important new knowledge of fundamental importance to disease management and control in the following key areas:
\u2022 Viral taxonomy and classification
\u2022 Understanding origins of diseases of aquaculture
\u2022 Viral phylogeography and risks associated with viral translocation
\u2022 Outbreak tracing and disease control
\u2022 Genetic diversity and viral surveillance
Viral haemorrhagic septicaemia (VHS), which is characterised by extensive degeneration and necrosis of the internal tissues, was until the late 1980s thought to be a disease exclusive to the freshwater rainbow trout industry of Continental Europe. VHSV is one of several important fish viral species that were originally identified as belonging to the family Rhabdoviridae based on the distinctive morphology of viral particles in electron micrographs. Until relatively recently, insufficient data existed to assign any of the fish rhabdoviruses to novel genera, with tentative groupings of \"unassigned\" and \"vesiculovirus-like\" viruses being proposed based on protein electrophoretic data and antigenic relatedness. Sequence data subsequently supported the assignment of VHSV to the genus Novirhabdovirus. To date, four main genetic groups of VHSV are recognised worldwide. VHS was traditionally thought to be a disease exclusive to freshwater rainbow trout farming, where it has historically been responsible for significant economic loss. The first and subsequent identification of marine isolates of VHSV of differing genetic type in Europe and North America has changed this view. Until recently, only the adaptation of marine Genotype I strains of VHSV in rainbow trout had been shown to occur. Much like the Genotype Ib marine isolates, other naturally occurring genotypes of VHSV have been experimentally shown to be of low virulence to rainbow trout. While the above examples assume the input and subsequent adaptation of naturally non-pathogenic viruses in aquaculture systems, naturally occurring viruses can in some cases also cause disease without requirement for change. Outbreaks of disease in marine farmed turbot in the British Isles caused by VHSV provide one such example. Understanding the phylogenetic relationships between VHSV isolates highlights the existence of major viral genotypes confined largely to different geographical areas as discussed above. The physical separation of these areas has resulted in limited gene flow and independent evolution of the viruses circulating in these regions. It stands to reason that VHS viruses in these regions have not evolved independently but have rather co-evolved with their hosts to ensure continued survival. Since viruses rely on the availability of hosts to avoid extinction, it is perhaps not surprising to find many examples where viruses within their natural range do not cause disease in their hosts. Anthropogenic factors, including those associated with international aquaculture, have in general led to an increased potential for VHSV viruses to be translocated outwith their naturally occurring range. An extreme example of the potential consequence of such occurrence is that of the recent series of VHS epidemics in the Great Lakes region of North America. This series of epidemics, which spread rapidly through the region and resulted in dramatic fish kills, was shown to have been caused by a virus whose natural range appeared to be marine species inhabiting the eastern coastal areas of the USA or Canada. 
Molecular epidemiology is a powerful tool, which alongside more traditional epidemiological tools has potential for tracing the origins and spread of new disease outbreaks. Such an approach relies not only on the availability of large genetic datasets relating to phylogenetically informative and defined regions of the genome but also on the availability of data relating to potential epidemiological contact. The latter is often more challenging to obtain. An example of the application and potential of such tools is the recent occurrence of VHS disease in rainbow trout in the British Isles, which until this point had a history of freedom from VHS. The disease was first identified and contained on a farm in North Yorkshire, England in May 2006, and initial investigations into the likely origin of introduction proved inconclusive. Subsequent molecular epidemiological analysis of the causative virus identified a Genotype Ia virus which was very similar to isolates from Denmark and Germany circulating between 2004 and 2006, and suggested likely introduction from this region. Understanding the natural sequence variation and divergence among isolates is of fundamental importance to ensuring adoption of an appropriate surveillance regime for VHSV and in interpretation of its results. Molecular detection methods such as real-time PCR are increasingly being employed in this field due to benefits including sensitivity, specificity, high throughput, ease of interpretation and ability to include appropriate controls. Molecular methods rely on the specificity of primers and probes to ensure detection of their intended targets. Targeting such methods to the required well characterised and conserved regions of the genome thus requires a thorough knowledge of molecular variation of the species. Even with this knowledge, the risk that highly specific single-plex assays such as probe-based real-time PCR could fail to detect new emerging variants of pathogens should be recognised. Such methods as developed for VHSV have to date, however, proven to be robust in detecting all known variants of the virus in known circulation. Since phenotypic properties of viruses may in some cases be consistent with genetic origin, selecting isolates representative of the different genotypes can be a sensible strategy for further research into establishing their biological properties. Previous work on VHSV has investigated the consistency of pathogenic properties of different genotypes for different species. Salmonid alphaviruses are responsible for salmon pancreas disease (SPD) and sleeping disease (SD) conditions, primarily of farmed Atlantic salmon and rainbow trout, respectively. Clinical signs associated with SPD include abnormal swimming behaviour and lack of appetite, while characteristic histopathological signs include severe degeneration of exocrine pancreas, cardiomyopathy and skeletal myopathy. Molecular analyses of salmonid alphaviruses have recently demonstrated that they represent a group of viruses within the genus Alphavirus in the family Togaviridae, and differ markedly from the previously established New World viruses of Venezuelan equine encephalitis virus (VEEV) and eastern equine encephalitis virus and the Old World viruses of Aura and Sindbis virus. Classification in this way can give clues as to the potential biological properties and functions of salmonid alphaviruses based on the established properties of their better studied relatives. 
Interestingly, most alphavirus lifecycles involve an obligate, commonly arthropod, intermediate host. Such an intermediate host does not appear to be a requirement of salmonid alphavirus transmission, since these viruses have been shown to efficiently transmit horizontally via direct water-borne transmission. The most comprehensive molecular epidemiological analysis of the genetic relationships within the salmonid alphaviruses was recently conducted by Fringuelli et al. Sequences closely related to SAV have also been reported from wild marine flatfish, including dab (Limanda limanda) and plaice (Pleuronectes platessa). The monophyletic nature of the freshwater rainbow trout subtype of SAV is consistent with a common origin of these isolates. Molecular epidemiology has been applied to the study of SAV and has provided evidence to support farm to farm transmission in both Scotland and Ireland, where clusters of identical sequences from isolates from different farms located within the same bodies of water have been demonstrated. Knowledge of molecular epidemiology and genetic diversity of SAV has led to the development of sensitive and specific molecular diagnostic methods. Infectious salmon anaemia (ISA) is a disease of farmed Atlantic salmon that has been responsible for extensive losses in all major regions where this species is farmed, including Norway, Canada, Scotland, the Faroe Islands and Chile. The disease is characterised by severe anaemia, ascites and haemorrhagic liver necrosis and congestion. Molecular epidemiological study has made a significant contribution to our understanding of ISAV. Following identification and initial characterisation, the virus was recognised as an orthomyxovirus and was ultimately assigned to a distinct genus, Isavirus. North American and European evolutionary lineages of ISAV appear to have been present and diverged long before the development of commercial Atlantic salmon aquaculture. Molecular epidemiological study has also been applied to understanding the appearance of ISA in Chile, which has been linked to a likely introduction of the virus from Norway. Molecular detection methodologies have been developed to cover the broad range of ISAV viruses in global circulation. The application of molecular epidemiology to the study of fish viruses has to date largely focussed on the use of descriptive techniques, and interpretation of genetic relationships has often been hampered by a lack of associated analytical epidemiological information. The development of next generation sequencing technology promises a revolution in the ability to generate sequence data and thus information of potential epidemiological relevance. Descriptive molecular data alone is, however, inadequate in tracing pathogen spread, especially when variation is limited and evolution does not occur in a clock-like or regular fashion. The ability of molecular epidemiology to fulfil its potential to translate complex disease pathways into relevant fish health policy is thus unlikely to be limited by the generation of descriptive molecular markers. More likely, full realisation of the potential to better explain viral transmission pathways will be dependent on the ability to assimilate and analyse knowledge from a range of more traditional information sources. The development of methods to systematically record and share such epidemiologically important information thus represents a major challenge for fish health professionals in making the best future use of molecular data in supporting fish health policy and disease control. Making best use of this generated data to better understand the molecular basis of virulence also remains an important area for future research. 
Significant progress has been made in developing reverse genetic approaches for fish viruses (for review see ) which oDespite the limitations in available knowledge, molecular epidemiology has led to an improved understanding of the origins and spread of viruses in aquaculture and wild fish, which in turn has lead to significant practical improvements in disease management strategies and policy. A common theme in the examples explored seems to be the prevalence of probably benign viruses in the environment, which undergo an adaptation in association with aquaculture, where they adopt a pathogenic lifestyle. Major challenges for sustainable aquaculture are to manage the risk associated with such occurrence by reducing the opportunity for pathogens to reside and thus potentially adapt within aquaculture systems, coupled to their rapid identification and containment following disease emergence. Both these factors can be facilitated through implementation of biosecurity measures and the use of synchonous fallowing strategies to break potential long term cycles of infection and ensure rapid and appropriate containment of disease in discrete management areas. The implementation of surveillance programs based on the best available molecular epidemiological information offers great potential to support such measures in further developing a healthy and sustainable aquaculture industry that is necessary to satisfy an increasing world demand for cultured fish products.The author declares that they have no competing interests."} +{"text": "A biobank is a depository for biomaterials from a representative portion of a human population and acts as a vault with intricate detailed information pertaining to the individuals from whom biological materials have been collected. Biobanks can be classified into population biobanks, biobanks for molecular epidemiology and biobanks for disease biology.RGCB has been involved in biobanking with respect to three large studies in India namely i)Molecular Epidemiology of HPV in India, ii) A Randomized 2 versus 3 dose HPV Vaccination Clinical trial and iii) HPV AHEAD- : Role of Human Papilloma Virus Infection and other co-factors in the aetiology of Head & Neck Cancer.A cervical cytology biobank has been established at the centre which is an almost inexhaustible resource for fundamental and applied biological research. It helps to understand the natural history of HPV infection and HPV induced lesions and cancers, screening effectiveness, exploration of new biomarkers, surveillance of the short- and long-term effects of the introduction of HPV vaccination. However legal and ethical principles concerning personal integrity and data safety must be respected strictly and biobank based studies require approval of ethical review boards.The HPV vaccination study involves 10 different sites & different collaborating institutes across the country and all samples are shipped to RGCB, Trivandrum, which acts as a Central storage facility/ biobank. Sample shipment was an important procedure involving communication between base laboratory and biobank, import of registration database, strict temperature control, tracking of shipment and ready storage freezers on arrival. The sample verification process involves a temperature logger, freeze control tubes and bar code verification. The amount of clinical data linked to the samples determinate the availability and biological value of the sample. 
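The shipment and sample verification steps described above (bar code reconciliation against the registration database and review of the temperature logger) lend themselves to a simple automated check. The Python sketch below is purely illustrative; the function name, acceptance threshold and field names are assumptions rather than the centre's actual procedure.

# Illustrative only: an automated version of the shipment checks mentioned
# above. Thresholds and field names are hypothetical assumptions.
def verify_shipment(manifest_barcodes, scanned_barcodes, temperature_log_c, max_allowed_c=-20.0):
    missing = set(manifest_barcodes) - set(scanned_barcodes)        # listed but not received
    unexpected = set(scanned_barcodes) - set(manifest_barcodes)     # received but not listed
    temperature_ok = all(t <= max_allowed_c for t in temperature_log_c)
    return {
        "missing_samples": sorted(missing),
        "unexpected_samples": sorted(unexpected),
        "temperature_excursion": not temperature_ok,
        "accepted": not missing and not unexpected and temperature_ok,
    }

report = verify_shipment(
    manifest_barcodes=["BB-0001", "BB-0002", "BB-0003"],
    scanned_barcodes=["BB-0001", "BB-0002", "BB-0003"],
    temperature_log_c=[-78.0, -77.5, -76.9],   # dry-ice shipment readings (hypothetical)
)
print(report)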
Thus acquiring the background information of each sample and cataloguing the available information systematically and meticulously is a must.The major problems faced in the management of cervical cytology biobanks have been reclassification of smears before entry into database, failure of temperature logger, bar codes slipping off and maintaining uninterrupted power supply.Publicly funded biobanks aim to promote the development of new knowledge by giving the research community access to data and samples. The most efficient way to acquire these benefits is to first maximize the use of biobanks in research and, second, to maximize the dissemination of knowledge developed by the research projects that used the biobanks. However there are issues with regard to the use of such stored materials especially when the demand occurs from the private sector. The decision is finally made by a scientific management committee consisting of members of World Health Organization (WHO) and the international agency for research on cancer (IARC)."} +{"text": "Half a billion years ago, the first four-legged land animal crawled out of the sea onto dry land. How did the limbs that creature crawled on evolve from the fins of its fishy ancestors? This question has long intrigued biologists.PLOS Biology article, Denis Duboule, Joost M. Woltering, and colleagues shed new light on these questions from a comparative analysis of the regulatory mechanisms that control when and where certain members of the Hox gene family are turned on and off in zebrafish fins and mouse limbs.Fossil records suggest that tetrapod legs evolved step by step from fins, and comparative gene expression studies have provided some insights into how mutation and natural selection derived long limb bones from fin precursors. But the evolutionary path that lies between the structural elements of fish fins and the toes and fingers of tetrapod digits has remained obscured. Are tetrapod digits homologous to fish radials? Did the genetic capacity for digit differentiation exist in fish ancestors, or is it unique to tetrapods? In their recent HoxA and HoxD, are known to function in patterning the developing vertebrate limb. In land animals, but not in fish, HoxD has what is known as a \u201cbimodal expression pattern,\u201d meaning that one subset of Hoxd genes directs the development of the long bones on the proximal (body) side of the wrist or ankle, while another subset directs the development of the long bones on the distal side . This bimodal expression pattern is due to the preferential interaction of the 3\u2032 and 5\u2032 genes in these Hox clusters with flanking regions of DNA on their own side of the cluster. These flanking regions contain non-genic DNA in which are located so-called enhancers\u2014DNA loci that control the transcription of genes. On the 3\u2032 side of the HoxA and Hox D clusters sit the proximal enhancers, which regulate the expression of proximal genes in the cluster in the proximal part of the limb, and on the 5\u2032 side are the distal enhancers, which regulate the expression of distal genes in the cluster, leading to the segregated pattern of 3\u2032 and 5\u2032 genes. Duboule and colleagues decided to find out whether the HoxA cluster shares this regulatory characteristic with HoxD, reasoning that if it does, that the bimodality arose before the Hox clusters duplicated, indicating also that the regulatory capacity to form digits was in place before land animals' evolutionary emergence from the sea. 
Such a finding would lend strength to the argument for homology between digits and fin radials.Two Hox gene clusters, Hoxa genes in mouse embryos and by comparing them with the expression patterns of Hoxd genes, the researchers determined that the HoxA gene cluster does indeed exhibit biomodality, with one regulatory module directing the development of the digits and another orchestrating the development of the proximal segment of the limb. Although they noted some differences in the details of the bimodal expression of HoxA and HoxD genes, the researchers concluded that this pattern was common to both these Hox clusters and therefore predated the evolution of tetrapods from their fish ancestors.By looking at the expression patterns of HoxA and HoxD clusters share the bimodality that is associated with the differential regulation of the proximal and distal development of tetrapod limbs, is the same true of the Hox clusters that direct fin development in fish? By looking at the interaction profiles of Hox genes in zebrafish embryos, the researchers discovered a partitioning pattern similar to that seen in the mouse, in which Hox genes located at the edge of the clusters tended to interact more with their nearest flanking DNA regions while those in the middle of the cluster interacted with the flanking regions on both sides, confirming the existence of this bimodal pattern. Their findings support the idea that the chromatin structure that underlies this regulatory mechanism existed before the evolution of tetrapod limbs and that digits, therefore, may be homologous to distal fin structures in fish.If both HoxA and HoxD clusters, together with their flanking 5\u2032 regions, into mice. Surprisingly, in the resulting transgenic mouse embryos, zebrafish Hox gene expression was specific to the proximal and not distal (digit-associated) developing limb tissues. On the basis of these findings, the authors conclude that the bimodal regulatory landscape that controls HoxA and HoxD expression was indeed in place before fish and tetrapods diverged, and that the subsequent evolution of novel enhancers allowed it to be repurposed to bring about the development of tetrapod digits.The team then inserted (in separate experiments) zebrafish Returning to the original question: does this make fin radials and digits homologous? That, the authors decided, depends on definitions. Their findings clearly demonstrate that fish have both the genes and the regulatory architecture needed to form digits. However, they also show that the development of digits depends on additional genetic alterations occurring in the context of that preexisting regulatory landscape. Duboule and colleagues suggest that although fish radials are not homologous to digits in the classical sense, biologists should consider thinking in terms of regulatory circuitries rather than expression patterns when considering whether traits have arisen from a common ancestral characteristic. Hox Loci and the Origin of Tetrapod DigitsWoltering JM, Noordermeer D, Leleu M, Duboule D (2014) Conservation and Divergence of Regulatory Strategies atdoi:10.1371/journal.pbio.1001773"} +{"text": "Populus genome, the recently released Eucalyptus grandis genome and the concerted efforts towards the generation of genome sequences for spruces (Picea sp.) and pines (Pinus sp.) by several groups worldwide, are fueling a multitude of inter-disciplinary studies and applications in sustainable forest production and conservation. 
Time now calls for the integration of scientific fields with an increased sense of urgency for delivery of effective biotechnologies.Forest trees have unquestionably entered the genomic era. The updated version of the The IUFRO Tree Biotechnology biannual conference has established a solid tradition for over 20 years as the official meeting of the IUFRO working group 2.04.06 \u2013 Molecular biology of forest trees. This conference has convened scientists and foresters interested in the genetics, genomics, molecular biology and physiology of forest trees, and the application of this knowledge to tree improvement and conservation. The Tree Biotechnology Conference has undoubtedly been the premiere international forum where the most cutting edge research in tree biotechnology developed both in academia and industry is presented. \u201cFrom genomes to integration and delivery\u201d, this was the theme chosen for the 2011 edition of the IUFRO Tree Biotechnology Conference, first time to be held in South America. Our intention was to promote a more integrated and applied dialogue on tree biotechnology and genomics, beyond the mainstream discussion of the fundamental advances on the genetic mechanisms that underlie tree phenotypes.In nine scientific sessions some of the current advances of genomics applied to forest conservation, tree physiology, stress response, molecular breeding, in vitro and propagation technologies, wood development and genetically modified (GM) trees were highlighted. With 340 registered participants, the Conference brought to Brazil most of the world\u2019s brain power in forest tree genomics and biotechnology. An outstanding team of international scientists shared their results and visions on the present and future of this fast moving area of forest science, while a brilliant group of young scientist and students delivered a very energetic and diverse collection of high-quality scientific presentations. Forty two countries were represented at the Conference with almost 100 different laboratories from tens of Universities, research institutions and private companies.During the seven days of the Conference 26 invited lectures, 63 oral and 185 poster presentations were delivered, totaling 274 papers made available as extended abstracts into this BMC Proceedings supplement. The special workshop on the hot topic of \u201cGenomic Selection in tree breeding\u201d and the several reports on whole-genome studies, made this conference edition inaugurate a deliberate effort towards a better integration between the quantitative genomics, the \u201csingle-gene\u201d and the system biology approaches to more efficiently unravel the complex relationships between genotypes and phenotypes in forest trees. A field trip to the forest plantations, nurseries and mill of VERACEL Cellulose was a definite highlight and a welcome break from the scientific sessions, providing an overview of some of the advances and challenges facing the translation of research into plantation forestry.In closing this introductory statement, acknowledgements are due to the outstanding financial support provided by the competitive grants of the Brazilian Ministry of Science and Technology through the National Research Council (CNPq) and the Ministry of Education through its agency for graduate studies (CAPES). Major support was also provided by EMBRAPA , and VERACEL Cellulose, the host organizations, together with an exceptional suite of private sponsors. 
Besides the organizations that backed this conference and an active Scientific Committee involved in abstract review a number of people were involved in the organization and logistics. The conference would not have been possible without the valuable contributions of all these players.Given the rewarding feedback received after the Conference, the original goal of providing an exceptional mix of science, social activities and field exploration in a relaxed atmosphere was truly accomplished. The IUFRO Tree Biotechnology Conference 2011 made a significant contribution to advance the forest biotechnology research community one step ahead on the challenging task of moving from gene and genome discoveries to the delivery of valuable technologies into sustainable forestry."} +{"text": "Felis catus subject. The interactions present in the simulated data were predicted with a high degree of accuracy, and when applied to the real neural data, the proposed method identified causal relationships between many of the recorded neurons. This paper proposes a novel method that successfully applies Granger causality to point process data, and has the potential to provide unique physiological insights when applied to neural spike trains.The ability to identify directional interactions that occur among multiple neurons in the brain is crucial to an understanding of how groups of neurons cooperate in order to generate specific brain functions. However, an optimal method of assessing these interactions has not been established. Granger causality has proven to be an effective method for the analysis of the directional interactions between multiple sets of continuous-valued data, but cannot be applied to neural spike train recordings due to their discrete nature. This paper proposes a point process framework that enables Granger causality to be applied to point process data such as neural spike trains. The proposed framework uses the point process likelihood function to relate a neuron's spiking probability to possible covariates, such as its own spiking history and the concurrent activity of simultaneously recorded neurons. Granger causality is assessed based on the relative reduction of the point process likelihood of one neuron obtained excluding one of its covariates compared to the likelihood obtained using all of its covariates. The method was tested on simulated data, and then applied to neural activity recorded from the primary motor cortex (MI) of a Recent advances in multiple-electrode recording have made it possible to record the activities of multiple neurons simultaneously. This provides an opportunity to study how groups of neurons form functional ensembles as different brain areas perform their various functions. However, most of the methods that attempt to identify associations between neurons provide little insight into the directional nature of the interactions that they detect. Recently, Granger causality has proven to be an efficient method to infer causal relationships between sets of continuous-valued data, but cannot be directly applied to point process data such as neural spike trains. Here, we propose a novel and successful attempt to expand the application of Granger causality to point process data. The proposed method performed well with simulated data, and was then applied to real experimental data recorded from sets of simultaneously recorded neurons from the primary motor cortex. 
The results of the real data analysis suggest that the proposed method has the potential to provide unique neurophysiological insights about network properties in the cortex that have not been possible with other contemporary methods of functional interaction detection. Neurons in the brain are known to exert measurable, directional influences on the firing activities of surrounding neurons, and a detailed analysis of these interactions improves our understanding of how the brain performs specific functions Granger causality has proven to be an effective method for the investigation of directional relationships between continuous-valued signals in many applications To address these issues, this paper proposes a point process framework for assessing Granger causality between multiple neurons. The spiking activity of each neuron is simultaneously affected by multiple covariates such as its own spiking history and the concurrent ensemble activity of other neurons. The effect of these factors on a neuron's spiking activity is characterized by a statistical framework based on the point process likelihood function, which relates the neuron's spiking probability to the covariates The proposed framework was used in an attempt to identify the causal relationships between simulated spike train data, and accurately estimated the underlying causal networks presented in the simulations. It was also applied to real neural data recorded from the cat primary motor cortex (MI) in order to assess the causal relationships that occur between multiple simultaneously recorded neurons during performance of a movement task.The experiments that were performed for the collection of real neural spiking data were approved by the Animal Ethics Committee of the University of Western Australia, and the National Health and Medical Research Council of Australia (NH&MRC) guidelines for the use of animals in experiments were followed throughout.Statistical analysis of the potential causal relationships between neurons was performed based on a point process likelihood function. The likelihood function related a neuron's spiking probability to possible covariates, such as its own spiking history and the concurrent activity of all simultaneously recorded neurons. The causal relationships between associated neurons were assessed based on the point process likelihood ratio, which represents the extent to which the likelihood of one neuron is reduced by the exclusion of one of its covariates, compared with the likelihood if all of the available covariates are used. The Granger causality measure based on the point process likelihood ratio also enabled us to detect significant causal relationship through a hypothesis testing based on the likelihood ratio statistic.A point process is a time series of discrete events that occur in continuous time To model the effect of its own and ensemble's spiking histories on the current spiking activity of a neuron, a GLM framework is often used to model the CIF. 
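A minimal sketch of this GLM-based likelihood-ratio idea is given below in Python. It is an illustration of the general approach, not the authors' implementation (their own Matlab software is referenced later in the article): spike trains are binned, each neuron's counts are fitted with a Poisson GLM on spiking-history covariates, and the influence of neuron j on neuron i is scored by the drop in log-likelihood when j's history terms are removed. The bin size, history length and the use of statsmodels are assumptions.

# Sketch only: binned-spike-count Poisson GLM with a likelihood-ratio score,
# standing in for the point-process Granger causality measure described here.
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

def history_matrix(counts, lags):
    """Columns are lagged spike counts, one column per neuron and lag."""
    n_bins, n_neurons = counts.shape
    cols = [np.roll(counts[:, n], lag) for n in range(n_neurons) for lag in range(1, lags + 1)]
    X = np.column_stack(cols)
    X[:lags, :] = 0                                    # zero out wrapped-around rows
    return X

def granger_score(counts, target, source, lags=3):
    y = counts[:, target]
    X_full = sm.add_constant(history_matrix(counts, lags))
    drop = [1 + source * lags + k for k in range(lags)]   # source-neuron history columns (+1 for the constant)
    keep = [c for c in range(X_full.shape[1]) if c not in drop]
    ll_full = sm.GLM(y, X_full, family=sm.families.Poisson()).fit().llf
    ll_red = sm.GLM(y, X_full[:, keep], family=sm.families.Poisson()).fit().llf
    lr = 2 * (ll_full - ll_red)                        # likelihood-ratio statistic
    p = chi2.sf(lr, df=lags)                           # nominal p-value (before any FDR correction)
    return lr, p

# Toy usage with random data standing in for binned spike counts
rng = np.random.default_rng(0)
counts = rng.poisson(0.3, size=(2000, 3))              # 2000 bins, 3 neurons (synthetic)
print(granger_score(counts, target=0, source=1))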
In the GLM framework, the logarithm of the CIF is modeled as a linear combination of the functions of the covariates that describe the neural activity dependencies A point process likelihood function was used to fit the parametric CIF and analyze Granger causality between neurons since it is a primary tool used in constructing statistical models and has several optimality properties Given the ensemble spiking activity in The Granger causality matrix We can test In any attempt to identify the causal relationships between multiple neurons simultaneously, the total number of the possible causal interactions to be investigated is usually large. Thus, the use of common statistical thresholds cited above to assess the causal interactions would lead to an unacceptably large number of false causal interactions where the null hypothesis is incorrectly rejected Combining the multiple hypothesis testing results with the sign of In order to evaluate the proposed framework's ability to identify Granger causality for ensemble spiking activity, we analyzed synthetically generated spike train data. Simulated spike train data were synthetically generated based on the nine-neuron network of In order to select a model for each neuron we fit several models with different history durations Based on the estimated model, two kinds of causality maps were obtained using the proposed method. Firstly, the Granger causality map The FDR procedure was used as a solution for the multiple comparisons problem when considering a set of statistical inferences simultaneously. When controlling the FDR at a specific significance level seen in , each haTo illustrate the application of the proposed method to real spike train data, 15 neurons were simultaneously recorded from the cat MI shown in Using the AIC, an optimum model for each neuron is selected to minimize the criterion. The non-overlapping spike counting window The GOF of the estimated model is assessed by using the Kolmogorov-Smirnov (KS) plots The causal connectivity between the recorded neural spike train data was assessed using the proposed framework, and the results are illustrated in The estimated GLM parameters We proposed a point process framework for identifying causal relationships between simultaneously recorded multiple neural spike train data. Granger causality has proven to be an effective method to test causality between signals when using the MVAR model, but to date it has been used for continuous-valued data Other model-based methods for assessing the directional relationships between neurons have been recently developed Some of the neurons included in this analysis showed no evidence of either self-interaction, or interactions with other neurons. Although these neurons also had non-zero GLM parameters for self-interaction, as indicated with green asterisks in The identification of excitatory self-interactions for some of the analyzed neurons was an unexpected and interesting finding. Analysis of the spiking features of these neurons verified that they were not engaged in any manner of bursting behavior that may explain the self-excitation result. Based on the high history orders that were also seen in those neurons as shown in http://www.neurostat.mit.edu/gcpp).The Matlab software and the data sets used to implement the methods presented here are available at the website ("} +{"text": "Various murine models are currently used to study acute and chronic pathological processes of the liver, and the efficacy of novel therapeutic regimens. 
The increasing availability of high-resolution small animal imaging modalities presents researchers with the opportunity to precisely identify and describe pathological processes of the liver. To meet the demands, the objective of this study was to provide a three-dimensional illustration of the macroscopic anatomical location of the murine liver lobes and hepatic vessels using small animal imaging modalities. We analysed micro-CT images of the murine liver by integrating additional information from the published literature to develop comprehensive illustrations of the macroscopic anatomical features of the murine liver and hepatic vasculature. As a result, we provide updated three-dimensional illustrations of the macroscopic anatomy of the murine liver and hepatic vessels using micro-CT. The information presented here provides researchers working in the field of experimental liver disease with a comprehensive, easily accessable overview of the macroscopic anatomy of the murine liver. Within the last twenty years the number of publications describing the use of mouse models has steadily increased. Animal models of human disease have become an integral part of virtually all areas of medical research. Consequently, various murine models of liver disease have been developed including models of inflammatory liver disease To gain a better understanding of the macroscopic anatomy of the murine liver and its associated microvasculature we compiled information from our own previous research in C57BL/6 mice using micro-CT and compared our observations to the literature.Micro-CT images were acquired as described recently Our literature search utilized, the search engines Medline, Google, and Vetseek Nomina Anatomica Veterinaria (NAV) which is the standard reference in veterinary science for anatomical terminology Searching Medline we found four articles describing the macroscopic anatomy of the murine liver In addition to the journal articles available online, we identified seven books that included a description of the murine liver anatomy During our literature search we found no articles or books that comprehensively reviewed the murine liver anatomy with regard to the classical slice orientations used in small animal in vivo-imaging, nor did we find any articles that described the adjacent liver vessels in this context.Impressio gastrica) of the caudal surface of the left lateral liver lobe is caused by the stomach, the right kidney lies within the renal impression located on the caudal surface of the caudate lobe. Other impressions of the liver are the duodenal , oesophageal (Impressio esophagea), and the jejunal impression , which are difficult to identify in small animal imaging.The murine liver has a convex shaped cranial surface conforming to the vault of the diaphragm and a concave shaped caudal surface, which is adapted to the surface of the abdominal organs. While the gastric impression (Ligamentum coronarium) and the triangular ligament (Ligamentum triangulare sinistrum), respectively. According to the literature, mice have no ligaments stabilizing the right medial and right lateral liver lobe. On the right side, only the caudate process is fixed to the right kidney by the hepatorenal ligament . 
Ventrally, the falciforme ligament containing the teres ligament (Ligamentum teres hepatis), is considered a relic of embryonic development rather than a ligament of fixation and the left lateral lobe (red color-coding), with the smaller left medial lobe lying cranially and medially to the larger left lateral lobe , 2, 3, 4Similarly, the right liver lobe can be subdivided into the right medial lobe (blue color-coding), which is located directly below the diaphragm and lateral to the right side of the gall bladder, and the right lateral lobe (green color-coding), which is smaller and located more caudally than the right medial lobe.The caudate lobe is subdivided into the larger caudate process (cyan color-coding) and the smaller papillary process (magenta color-coding). The larger caudate process lies directly caudal to the right lateral lobe and overlaps the right kidney ventrally and laterally. The papillary process in general is relatively small, and can be subdivided into two smaller parts (no specific nomenclature exists for these two subdivided parts) and is located between the stomach, the right lateral lobe, and the caudal caval vein.Finally, the quadrate lobe is described in the NAV. This lobe is located at the medial edge of the left lateral lobe and is not further subdivided. While we have not been able to identify this small lobe of the liver macroscopically or with micro-CT images, other authors depicted the quadrate lobe in their schematic drawings without labeling it within the images While anatomical variations of the liver do exist both within and between different mouse strains, to date, we did not evaluate these differences in-depth. However, we observed variations in our own measurements at the fusion of the two middle lobes. This variation has been described, by Rauch et al. To compliment the anatomical description of the liver we also gathered additional information on the anatomical relations of the adjacent perihepatic organs and liver-related vessels.Only separated by the diaphragm, the heart and the lungs are cranial to the liver. While these organs are often easily distinguished from the liver, in one case we observed transdiaphragmatic herniation of the right medial liver lobe, which was clearly identifiable just after administration of a liver-specific contrast agent .Impressio renalis) of the liver. Between the medial cranial pole of the kidneys and the liver the triangularly shaped adrenal glands are located on both sides (Imressio gastrica) adjacent to the surface of the left lateral lobe, while the spleen is located between the left kidney, the stomach, and left abdominal wall.The half-moon shaped spleen can be found caudal to the liver, kidneys, adrenal glands, and stomach . As mentth sides . On the Arteria coeliaca; A. mesenterica cranialis, in humans the superior mesenteric artery) and among others branches into the hepatic artery which runs cranially. Due to the limitations of in vivo micro-CT of the hepatic artery we have been able to identify the proximal hepatic artery, but cannot trace this vessel into the liver. However, the branching pattern of the hepatic artery has been reported to correspond with the branching of the portal vein and the billiary ducts The coeliac artery (Ramus dexter) draining into the caudate process and the right lateral lobe (Ramus sinister). The left branch provides blood-flow into the right medial lobe and divides into the umbilical part for the left medial lobe and the transversal part for the left lateral lobe. 
The NAV provides no specific nomenclature for the small branch providing blood supply to the papillary process.The portal vein drains blood from the gastrointestinal tract and spleen to the liver. The portal vein divides into the right branch . For example, some authors described the caudate process (of the caudate lobe) as the caudate lobe Nomina Anatomica Veterinaria, which was introduced in 1955 and since then has continuously been updated by an international group of veterinarians. Regarding the murine liver lobes we found the NAV to serve almost perfectly (with the only exception that we found it difficult to identify the quadrate lobe). However, there were some problems in the most recent edition of the NAV from 2005 To overcome the problems that may arise from the inconsistent designation of the liver lobes we recommend using the nomenclature in accordance with the Photographs and schematic drawings are primarily used in virtually all other publications dealing with the anatomy of the murine liver. Although different three-dimensional mouse atlases are available in book form While our study aimed at providing a general overview of the macroscopic anatomy of the murine liver, we did not investigate differences related to mouse strain, gender, weight, or age. These differences, however, have to be considered when trying to refer to the anatomical descriptions presented in our manuscript.The provided three-dimensional illustrations of the macroscopic anatomy of the murine liver using micro-CT may promote clarity and precision among scientists and veterinarians working in the field of liver research and may be helpful as a reference for future experimental research in this field."} +{"text": "The global threat of antimicrobial resistance (AMR) needs to be addressed urgently. The global surveillance of AMR pathogens is patchy and limited by financial and technical constraints. Without an early-warning system, the emergence and spread of AMR often goes unnoticed until a given strain has become endemic.Enterobacteriaceae (CRE), we analyzed the potential role of the International Health Regulations (IHR), a legally binding agreement between 194 States Parties, whose aim is \u201cto prevent, protect against, control and provide a public health response to the international spread of disease\u201d with respect to AMR and assess whether selected CRE events fulfil the four criteria of Annex 2 of the IHR.Using the example of carbapenem-resistant Certain events marking the emergence and international spread of KPC and NDM-1-producing CRE fulfil the criteria for notifiability to WHO. This can be extrapolated to other types of AMR. At the same time, ambiguities in Annex 2 and limited specific WHO guidance may make notification decisions a matter of debate. Obstacles for the application of the IHR to AMR include a lack of capacities within WHO.The global threat posed by the spread of AMR requires a coordinated international response. Recognizing the applicability of the IHR to AMR could serve as a \u201cwake-up call\u201d and obligate WHO and States Parties to strengthen surveillance and response, which could in turn contribute to containing the spread of AMR and preserve the efficacy of antimicrobials. 
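For readers unfamiliar with the Annex 2 decision instrument invoked above: it poses four yes/no questions about an event (serious public health impact; unusual or unexpected; significant risk of international spread; significant risk of travel or trade restrictions), and an event answering yes to at least two of them must be notified to WHO. The snippet below is only a schematic rendering of that logic and omits the diseases that Annex 2 routes directly to notification.

```python
# Schematic rendering of the IHR Annex 2 decision logic (simplified):
# the always-notifiable listed diseases are omitted; only the four criteria remain.
def annex2_notifiable(serious_impact: bool,
                      unusual_or_unexpected: bool,
                      risk_of_international_spread: bool,
                      risk_of_travel_trade_restrictions: bool) -> bool:
    """Notify WHO when at least two of the four Annex 2 criteria are met."""
    return sum((serious_impact, unusual_or_unexpected,
                risk_of_international_spread,
                risk_of_travel_trade_restrictions)) >= 2

# e.g. a CRE event judged unusual and likely to spread internationally:
print(annex2_notifiable(False, True, True, False))  # True
```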
Although States Parties and WHO share a collective responsibility in the process, WHO must clearly delineate its position regarding AMR and the intended role of the IHR in this context.None declared."} +{"text": "The habit of removing the nail cuticles of the hands and feet is a typical cultural practice in Brazil and can be an important factor for hepatitis B and C infection. We conducted a seroepidemiological survey of hepatitis B and C among professional manicures/pedicures in salons in Sao Paulo - Brazil, with the aims of estimating the prevalence of serological markers of HBV and HCV infection in manicures/pedicures; assessing their level of knowledge about the transmission routes and prevention of hepatitis B and C; evaluating their degree of perceived risk of accidental exposure to infectious agents; and checking the use of biosafety norms in the work routine of these professionals.This is a descriptive, cross-sectional prospective study. The survey involved 100 manicure/pedicure professionals from beauty salons, selected by random drawing. An individual questionnaire was applied to collect information about participant characteristics; simultaneously, a blood sample was collected from each participant for the detection of serological markers of HBV and HCV.Prevalence estimates were 8% for HBV and 2% for HCV. Adherence to biosafety standards among these professionals was relatively low and inadequate. It was found that their degree of knowledge about routes of transmission, prevention, biosecurity standards and risk perception of infectious agents in their professional activity was low. Manicures and pedicures are a group with increased risk factors, which determine a likely greater exposure to infection with viral hepatitis than the general population, and all available means of prevention must be used to protect their health.It is important to raise awareness among manicures and pedicures of the use of individual protection in their routine work to prevent future disease.None declared."} +{"text": "Recent research has elucidated several different mechanisms for acupuncture. However, the inter-relationship between these mechanisms and how acupuncture affects complex physiological systems is still not understood. Heart rate variability (HRV), the beat-to-beat fluctuations in the rhythm of the heart, results from the regulation of the heart by the autonomic nervous system (ANS). Low HRV is associated with increased risk of all-cause mortality and is a marker for a wide range of diseases. Coherent HRV patterns are associated with increased synchronization between the two branches of the ANS, and when sustained for long periods of time result in increased synchronization and entrainment between multiple body systems. This presentation is a systematic review of the clinical trials that have been undertaken examining the effect of acupuncture on HRV and the implications for HRV representing a systems level mechanism for acupuncture.The literature was reviewed using Medline, Science Citation Index, Cochrane, the New England School of Acupuncture library databases, cross-reference of published data, personal libraries and Chinese medicine textbooks.Results from randomized placebo controlled trials strongly suggest that acupuncture can improve HRV, especially when acupuncture is delivered in clinically valid dosages to subjects with a medically diagnosed condition and with the inclusion of an inert placebo control.There is sufficient evidence in the literature to support the conclusion that acupuncture improves HRV. 
Acupuncture may function by mediating global physiological regulation through improvement of HRV and synchronization of the two branches of the ANS. As a complex intervention, such a view of acupuncture mechanism is conceptually aligned with systems and complexity theory and is more compatible with traditional East Asian medical theory."} +{"text": "While generating the stream of consciousness and driving our actions in the world the brain largely relies on implicit forms of information processing. Conscious and unconscious factors are closely coupled and can be seen as complementary since unconscious processing can be sensitive to regularities within signals prior to conscious awareness, suggesting that the content of consciousness can be biased by unconscious factors.In the context of the Distributed Adaptive Control (DAC) theory of the mind, brain and body nexus , we prevBuilding on an initial large scale model of thalamo-cortical dysrhythmia , the pre"} +{"text": "The objective of this experimental study was to develop a new combination technique for electrode placement and the histomorphological evaluation of its effectiveness for the electrochemical lysis of large intraocular tumours.The ECL was conducted on two freshly enucleated eyes containing large tumours, with maximal prominence of 11 and 12 mm and maximal base diameter of 16 and 19 mm, respectively. The ECL was carried out using an ECU-300 apparatus generating an electrochemical charge of 30-35 K. In the course of the ECL procedure we used a new original combination technique of electrode placement, i.e., the anode was a surface electrode and the cathode was an intrastromal electrode. The anode had an original design.A greyscale B-scan performed after the ECL completion showed decreased echogenicity and heterogeneity of the echo-structure of the tumour. According to the bioimpedancemetry data, the average duration of the ECL session was from 20 to 30 minutes depending on the tumour size. The results of pathomorphological examination performed after the ECL on two freshly enucleated eyes appeared to be similar. Thus, in both cases after lysis the eyeball did not change its size or shape. In both cases the tumour originated from the choroid plexus and showed subtotal necrosis. There was a pronounced boundary between the intact and electrochemically damaged tumour regions which attests to the local effect of the ECL restricted to the electrode placement area only.The growing interest in the ECL procedure is due not only to its availability and low cost but mainly to its real clinical effect demonstrated in numerous publications. The absence of a developed ECL technology for the treatment of intraocular tumors, and, hence, reports on its clinical effectiveness, gave us the impetus to conduct this study.The proposed ECL method is promising and can be considered as optional for the organ-sparing treatment of large-sized intraocular tumours. Further optimization of the ECL parameters, as well as the development of sets of surface and intrastromal electrodes for different types of tumours, is required. Choroidal melanoma (CM) is the most common primary malignant intraocular tumour that accounts for nearly 80% of all choroid plexus tumours. 
Owing to the high risk of metastases 3\u201316%) [6% 1\u20134],,4], the Nowadays, the preferred concept in ocular oncology for CM treatment is the use of organ-sparing methods which basically require radical surgery against the tumour while causing minimal damage to surrounding normal tissues.Currently, the range of organ-sparing methods available for CM treatment is wide enough and includes laser photocoagulation, brachytherapy, cryodestruction, transpupillary thermotherapy, photodynamic therapy, surgical resection of the tumour (en block resection), etc. .The possibility of the use of organ-sparing treatment of CM will largely depend on the tumour size and location . Enucleation is performed to remove large-sized eye tumours.In view of the above, the development of new minimally invasive and organ-sparing methods for the treatment of large CMs is becoming a topical issue.A good example of the organ-sparing trend in oncology is the electrochemical lysis (ECL) method which induces destructive chemical reactions occurring during direct current flow between two bipolar electrodes introduced into the tumour (HCl is produced at the anode and NaOH at the cathode), that subsequently results in tissue coagulation and colliquative necrosis around the electrodes.The ECL method has been fairly successfully used for the treatment of breast cancer, hepatic carcinoma, and tumour metastases in the liver, benign prostatic hyperplasia, cancer of oesophagus, lungs, pancreas, and skin \u201311.In general oncology, the standard ECL technique involves a parallel introduction of two or more needle electrodes into the tumour. Using a similar approach in ocular oncology, the electrodes should be introduced in the intraocular tumour ***transsclerally in the zone of projection of the tumour base onto the sclera. In order to obtain an adequate necrosis of large tumours, three or more electrodes must be introduced intrastromally and correctly positioned under ultrasound control which is fraught with risks and several complications, such as iatrogenic retina tear, hemophthalmos, subretinal and subchoroidal haemorrhages, etc.Therefore, the problems with electrode placement and inability to predict the optimal electric field\u2019s effect on the tumour make the search for new approaches to ECL application in ocular oncology a topical undertaking.The objective of this experimental study was to develop a new combination technique for electrode placement and the histomorphological evaluation of its effectiveness for the ECL of the large-sized intraocular tumours.The ECL was conducted on two freshly enucleated eyes containing the large-sized tumours, with maximal prominence of 11 and 12 mm and maximal base diameter of 16 and 19 mm, respectively.The ECL was carried out using the \u201cECU-300\u201d apparatus generating an electrochemical charge of 30\u201335 K. In the course of the ECL procedure, we used a new original combination technique of electrode placement, i.e., the anode was a surface electrode and the cathode was an intrastromal electrode. The anode had an original design: it was made of the platinum wire to form a round mesh of 9 mm in diameter and with a hole in the centre. The cathode was a needle electrode made of 0.5 mm platinum wire .At the ECL preparation step, we determined the boundaries of the tumour base projection onto the sclera by their marking with 1% water\u2013 alcohol solution of brilliant green. 
To perform the ECL, the anode was applied onto the sclera within previously marked boundaries of the tumour base and stitched with two interrupted sutures. The cathode was introduced perpendicularly to the sclera into the centre of the tumour base through the hole in the anode electrode. To introduce the cathode into the tumour, a trocar with a screw-type length adjustment and a 25-G cannula were used.The length of trocar was adjusted to accommodate the length of the extra-scleral part of the 25-G cannula + sclera thickness + electrode penetration depth into the tumour. Then, sclerotomy was performed using the trocar inserted into the lumen of the cannula, by inserting it to its full depth into the tumour perpendicularly to the sclera followed by trocar withdrawal from the lumen of the cannula and its replacement with the electrode of a predetermined length.The depth to which the electrode was inserted into the tumour had been determined in advance based on the results of previous ultrasonic scanning by subtracting 1.5\u20132 mm from the value of the maximum prominence in the centre of the tumour. The length of the electrode active part was calculated in the same manner as the trocar length .The active positioning of the intrastromal electrode was carried out in the course of surgery under the trans-corneal and trans-scleral ultrasound control using a 10-MHz probe of the Ultrascan apparatus.Z) was conducted in an automated mode, and the software produced the chart of changes of tissue resistance in real-time (Z) flattened out and appeared to be hardly affected by time, it was an indication for completion of the ECL procedure.To evaluate the ECL effectiveness, we used the bioimpedancemetry method which is the measurement of total electrical resistance of tumour tissue placed between two electrodes in the direct electric field of variable frequency. Multiple impedance measurements of lysed tissue in the course of ECL were performed using an experimental setup at 2 and 10 kHz. To this end, the ECL procedure was interrupted every 3 min for 1\u20132 s. The same electrodes were used for both ECL and bioimpedancemetry. Impedance measurement . In the centre of the area there was a canal resulted from the insertion of the intrastromal electrode (cathode), its opening containing dark fluid expelled after the electrode withdrawal. When the eyeball was cut open, the impurity-free liquid of the vitreous body leaked out through the base of the tumour. The location of eye integuments and tumour was in accord with the results of clinical and instrumental examination. On section, the tumour is dark and has small slit-shaped spaces through which a slightly frothy, gel-like liquid, with admixture of reddish-brown, blood-tinged fluid, is oozing.In both cases the tumour was originated from the choroid plexus and showed subtotal necrosis.For the sake of convenience of description of the ECL-induced tumour morphological changes, the tumour was conditionally divided into three parts: the apex, middle part and periscleral part.The apex part showed necrosis with cell fragmentation, nucleus contractions and stiffing (caryopicnosis) and nucleus fragmentations (caryorexis), pigment condensation, appearance of slit-shaped spaces in place of the vessels filled with lysed blood and gaps along the contour of the palisade structures .The middle part demonstrated total cell necrosis. The contours of the gaps resemble those of individual cells and lumens of destroyed vessels . 
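A worked restatement of the length rule given earlier in this record, namely trocar length = extra-scleral part of the 25-G cannula + scleral thickness + electrode penetration depth, with the penetration depth taken as the maximal prominence minus 1.5-2 mm. Only the setback rule comes from the text; the cannula and sclera figures below are assumed purely for illustration.

```python
# Worked example of the trocar/electrode length rule described above.
# Only the 1.5-2 mm setback from maximal prominence is taken from the text;
# the extra-scleral cannula length and scleral thickness are assumed values.
max_prominence_mm   = 12.0   # maximal tumour prominence from ultrasound
setback_mm          = 2.0    # subtract 1.5-2 mm so the tip stays below the apex
extrascleral_mm     = 4.0    # assumed extra-scleral part of the 25-G cannula
sclera_thickness_mm = 0.6    # assumed scleral thickness at the insertion site

penetration_depth_mm = max_prominence_mm - setback_mm                 # 10.0 mm
trocar_length_mm = extrascleral_mm + sclera_thickness_mm + penetration_depth_mm
print(penetration_depth_mm, trocar_length_mm)                         # 10.0 14.6
```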
The sepIn the periscleral part of the tumour located next to the cathode canal the morphological picture resembled that of the middle part except for the smaller size cavities, owing to the absence of large vessels in this part of the tumour .The lumen of the canal formed after withdrawal of the intra-scleral cathode electrode) is filled with pigmented debris with an admixture of lysed blood .Given that the area of the surface electrode is smaller than the tumour base area, some periscleral spots of the intact tumour tissue have been found. These spots were characteristic for CM, i.e., they were intensively pigmented and composed mainly of spindle cell type A melanocytes which showed moderate polymorphism and minimal infiltration of the inner layers of the sclera.There was a pronounced boundary between the intact and electrochemically damaged tumour regions which atThe growing interest in the ECL procedure is due not only to its availability and low cost but mainly to its real clinical effect demonstrated in numerous publications , 7\u201314.The absence of developed ECL technology for treatment of intraocular tumours, and, hence, reports on its clinical effectiveness, gave us the impetus to conduct this study.We set out to develop a new method making use of combination of the surface and intrastromal electrodes, as well as their original placement, which hitherto has never been described. To evaluate the ECL effectiveness, we performed the bioimpedancemetry of the ECL-treated tissues and morphological examinations of the large-sized intraocular tumours. The experiment was aimed at developing an integrated and manageable ECL method based on unbiased impedancemetry measurements, thus allowing the evaluation of its dynamics and determination of the point of its completion.The direct current flow between the electrodes results in tissue devitalization owing to its electrolysis. In the course of the ECL procedure the resistance of tissue (Z) positioned between the electrodes drops indicating the occurrence of tissue necrosis. The registration of therapeutic effect is based on the impedancemetry measurements which are consistent and little changeable with time.The histological picture of the ECL-induced CM necrosis demonstrates different patterns of tumour destruction, primarily of its vessels, around each electrode depending on their polarity.Under the cathode electrode a pronounced vasodilatation and engorgement of large vessels occur accompanied by the capillary wall destruction and extensive hemorrhages into the necrotized tissue due to an increased turgor pressure caused by the electro-osmotic fluid flow. On the anode side the capillary reaction was only slightly noticeable.\u00a0Therefore, the electromagnetic field effects in biological tissues are due to the obstruction of the microvascular bed. In the cathode region the capillaries are blocked owing to the electro-osmotic fluid flow, and in the anode region the pathological changes occur due to the micro-thrombotic events. 
Owing to the remote placement of the electromagnetic field source the final area of the ECL-induced tissue lesions must exceed the total area of primary necrosis.The presence of the periscleral sites of intact tumour separated by a pronounced boundary from the electrochemically damaged tumour spots indicates to the importance of the accurate placement of surface electrode, as well as its selection, taking into account the size of tumour base projection onto the sclera and avoiding the use of electrodes with smaller contact area.One of the particular features of ECL treatment of the intraocular tumours is the increased intraocular pressure occurred in the course of the ECL procedure. This is due to active gas bubble formation within the tumour and hampered evacuation of liquid debris via the electrode canal. Therefore, the removal of cell-free products of tumour necrosis using the vitreotom secures the maintenance of the original level of intraocular pressure, does not interfere with the ECL procedure and enables process stability owing to the replenishment of evacuated volume with BSS saline.The updated pattern of the electrode placement geometry using surface extrascleral and intrastromal electrodes and adherence to correct polarity while placing electrode in the tumour, opens up new horizons in achieving total intraocular tumour destruction. The design feature of this ECL technique is an individual selection of surface electrode which covers the whole tumour base over its projection onto the sclera, while a centrally positioned hole enables to introduce the intrastromal electrode to any depth allowing its placement maximally close to the tumour apex and preventing electrode dislocation within this position.When setting the parameters of ECL, we were guided by the charge value which would knowingly cause necrosis of a given tumour volume. This value was experimentally established to be 30 K per 1 cm3 of tumour tissue, and the increase of charge above this value does not practically cause the expansion of necrosis zone , 16. HowFurther modifications, refinement and customizing of the ECL method to address the specific needs of ophthalmologic oncology will allow the pre-modelling of tumour necrosis patterns aided by using the specifically designed software, and thus will help to achieve higher treatment effectiveness. Working out of the objective method of real-time assessment of pathologic changes occurring within the tumour, e.g., by measuring active and reactive tissue resistance by means of bio impedancemetry, provides the effective control and regulation of the ECL procedure.Also, the above method offers additional benefits in investigating tumour morphology and growth pattern, as the fine-needle aspiration biopsy of the intraocular tumour can be performed simultaneously with electrode insertion.Undoubtedly, the size of the intraocular tumour destruction occurred after clinical use of the ECL method will depend not only on the parameters of the procedure but also on the duration of the post-ECL time period. To elucidate these issues, the insightful information from clinical trials will be required.Our experimental study indicates that the new ECL method employing the original combination of electrode placement combines minimal tissue injury and complete tumour destruction under the area of electrode application. 
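The dosing rule quoted above, roughly 30 K of charge per 1 cm3 of tumour tissue, can be written as a one-line function; the charge unit is kept as reported and the example volume is arbitrary.

```python
# Dosing rule stated above: ~30 charge units ("K") per cm^3 of tumour tissue.
def required_charge(volume_cm3, dose_per_cm3=30.0):
    """Charge needed for a given tumour volume under the 30-K-per-cm^3 rule."""
    return dose_per_cm3 * volume_cm3

print(required_charge(1.0))   # 30.0, consistent with the 30-35 K range used here
```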
The use of surface electrode enables to direct and evoke destruction of the whole tumour base area.The combination of surface and intrastromal electrodes allows the minimal damage to the scleral integrity in the site of tumour base projection. Further experiments dealing with variations in depth of the intrastromal electrode insertion and the amount of current, coupled with bioimpedancemetry, will enable the regulation of morphological changes within the tumour.The proposed ECL method is promising and can be considered as optional for the organ-sparing treatment of large-sized intraocular tumours. Further optimization of the ECL parameters, as well as the development of sets of surface and intrastromal electrodes for different types of tumours, is required."} +{"text": "From climatology to biofluidics, the characterization of complex flows relies on computationally expensive kinematic and kinetic measurements. In addition, such big data are difficult to handle in real time, thereby hampering advancements in the area of flow control and distributed sensing. Here, we propose a novel framework for unsupervised characterization of flow patterns through nonlinear manifold learning. Specifically, we apply the isometric feature mapping (Isomap) to experimental video data of the wake past a circular cylinder from steady to turbulent flows. Without direct velocity measurements, we show that manifold topology is intrinsically related to flow regime and that Isomap global coordinates can unravel salient flow features. The characterization of complex flows is a major challenge in climatology, biology, and engineering Here, we propose the implementation of a machine learning framework for unsupervised characterization of fluid flows. Different from established flow visualization techniques that require a-posteriori intensive processing of high resolution images D positioned vertically at the cross-section of a water tunnel. A dye-injection system is developed for improved visualization of the flow streaklines around the cylinder through a digital camera (see the Methods for further details). We vary the flow regime by changing the free stream velocity, U.To demonstrate our approach, we study the flow past a circular cylinder by processing flow visualization video data with Isomap for Reynolds numbers ranging from 50 to 1725. For such range, the fluid experiences steady separation, the formation of regular vortex patterns , and the initiation of turbulence. We anticipate Isomap to detect flow regimes through varying dimensionality of the embedding manifolds, similarly to the problem of collective behavior of animal groups, where dimensionality is showed to relate with the degree of coordination between individuals In the framework of nonlinear machine learning, we regard experimental video frames as the Isomap ambient space and seek to characterize the flow by studying the embedding manifolds. We demonstrate that the topology of the embeddings can be associated with the flow regime, whereby lack of flow separation is manifested through one dimensional manifolds and the presence of coherent structures through higher dimensionality. Further, we show that manifold inspection can be used to estimate the frequency of vortex shedding and study flow pattern variations due to externally-induced perturbations.Re . 
The Reynolds number is defined as We process experimental video data recorded with a commercial camcorder with the Isomap algorithm and study the relationship between the topological features of the embedding manifolds and the flow regime, controlled by the Reynolds number We further find that the topology of the embedding is related to two major features underlying the experimental data set. Specifically, in the two dimensional projection in We quantify the vortex shedding frequency by inspection of the annular projections recovered for Re, the flow can be described through nearly one dimensional embeddings, which capture the translational motion in the video feed. On the other hand, as coherent structures are shed by the cylinder, data points are fit on higher dimensionality manifolds, which also account for the shape of the vortices. We observe that increasing the degree of turbulence of the flow corresponds to \u201chiding\u201d periodic fluctuations in the flow. Indeed, Isomap captures the prevalently translational nature of the data.Our analysis of the dimensionality of Isomap embeddings demonstrates a close correspondence between the algorithm outputs and the flow physics. We further elucidate such relations by studying the residual variances for the first three dimensionalities of the data sets, which capture the vast majority of the experiments . The tunnel cross-section is The Isomap algorithm is a nonlinear manifold learning methodology for dimensionality reduction problems Construction of the neighbor graphto approximate the manifold. The elements of the set of vertices Computation of the graph geodesic matrixto approximate the geodesic of the manifold. Floyd's algorithm Approximation of the manifold distance bynearest neighbor distance. The matrix Computation of the projective variablesapplying the classical MDS on the matrixThe outputs of Isomap are the transformed data points on an embedding manifold for the input data set Experimental videos are decompressed into \u201c.jpg\u201d image files and sequences of The vectors of the residual variances for the first three embedding dimensionalities are plotted against the respective Vortex shedding frequency is evaluated for experiments conducted at"} +{"text": "The study of structural and functional connectivity and dynamics in spontaneous brain activity is a rapidly growing field of research . The exiHere, we extended a recent neurophysiologically realistic spiking-neuron model of spontaneous fMRI activity to exhibIn the presence of noisy oscillations on the same order of magnitude as system delays, the temporal connectivity structure plays a role in shaping the functional network connectivity. By effectively decreasing strong synchronous inputs to nodes, the network is stabilized and the need for fine-tuning of global coupling reduced when compared to the absence of delays."} +{"text": "The present study analyzed multiple morphological attributes of the adductor mandibulae in representatives of 53 of the 55 extant teleostean orders, as well as significant information from the literature in order to elucidate the homologies of the main subdivisions of this muscle. The traditional alphanumeric terminology applied to the four main divisions of the adductor mandibulae \u2013 A1, A2, A3, and A\u03c9 \u2013 patently fails to reflect homologous components of that muscle across the expanse of the Teleostei. 
Some features traditionally used as landmarks for identification of some divisions of the adductor mandibulae proved highly variable across the Teleostei; notably the insertion on the maxilla and the position of muscle components relative to the path of the ramus mandibularis trigeminus nerve. The evolutionary model of gain and loss of sections of the adductor mandibulae most commonly adopted under the alphanumeric system additionally proved ontogenetically incongruent and less parsimonious than a model of subdivision and coalescence of facial muscle sections. Results of the analysis demonstrate the impossibility of adapting the alphanumeric terminology so as to reflect homologous entities across the spectrum of teleosts. A new nomenclatural scheme is proposed in order to achieve congruence between homology and nomenclature of the adductor mandibulae components across the entire Teleostei.The infraclass Teleostei is a highly diversified group of bony fishes that encompasses 96% of all species of living fishes and almost half of extant vertebrates. Evolution of various morphological complexes in teleosts, particularly those involving soft anatomy, remains poorly understood. Notable among these problematic complexes is the The infraclass Teleostei adductor mandibulae usually is by far the most striking cranial muscle of teleosts adductor mandibulae is composed of a massive facial segment positioned lateral to the suspensorium and usually connected anteriorly via tendinous tissue to a smaller mandibular segment of the muscle attached to be the medial surface of the lower jaw adductor mandibulae ranges from a simple, undivided muscle mass to an intricate architecture encompassing up to ten discrete subdivisions adductor mandibulae given its position on the lateral surface of the head and its pronounced plasticity across the spectrum of teleostean taxa resulted in this muscle being the focus of multiple studies. These analyses range across comparative morphology The adductor mandibulae complex, the terminology advanced by Vetter in 1878 adductor mandibulae) was combined with Arabic numbers and Greek letters. In combination these yielded a unique identifier for each of the subunits of the adductor mandibulae which Vetter encountered in the four teleosts he examined \u2013 the cypriniforms Barbus and Cyprinus, the esocoid Esox and the perciform Perca. The entire mandibular segment of the adductor mandibulae positioned medial to the lower jaw in these fishes was termed the A\u03c9, whereas the main subdivisions of the facial segment located lateral to the suspensorium were designated as the A1, A2 and A3 sections. Under this identification system, the A1 section was a superficial muscle division inserting onto the maxilla, the A2 an external division attaching to the dorsal portion of the lower jaw and the A3 a more medially positioned component of the muscle inserting onto the inner aspects of the lower jaw proximate to the posterior terminus of Meckel's cartilage. Additional subdivisions of these main facial components were designated by the incorporation of a Greek letter as a suffix of the primary indicator for a particular section of the adductor mandibulae .Although Owen adductor mandibulae post Vetter adductor mandibulae (the A1 and A2 sections of his terminology) in the four teleosts examined in his study were derived from the more medially positioned A3. 
Subsequent studies based on broader surveys across teleosts alternatively proposed that A3 was derivative of A2 and eventually also lost in some taxa 1 or a subdivision of that muscle 1 was a superficial portion of the adductor mandibulae with an insertion on the maxilla. Use of point of insertion on the maxilla as the overarching basis for homology hypotheses thereby resulted in the untenable assumption that positionally dramatically different muscles sections within the the adductor mandibulae were, nonetheless, homologous.Myological surveys involving the e.g., A3\u2032 and A3\u2033 e.g., Aw for A\u03c9 ramus mandibularis trigeminus nerve as a landmark useful for the purposes of identifying the facial divisions of the muscle Other minor alterations of the original terminology proposed by Vetter e.g., adductor mandibulae across the expanse of the Teleostei or for that matter often between many closely related orders within that infraclass. Inadequacy of the Vetter terminology for broad homology statements at higher phylogenetic scales has been long recognized by various researchers . As a consequence, even the most detailed and comprehensive synonymy of the teleostean skeletal muscles ever produced, that by Winterbottom adductor mandibulae. That author instead retained the alphanumeric terminology for descriptive purposes rather than as indicative of homology.In retrospect, the traditional alphanumeric terminology proposed by Vetter adductor mandibulae in the Teleostei and identifying the homologies of its main components across that infraclass. In order to address these questions, we undertook a comparative analysis of the adductor mandibulae and its associated soft and hard anatomical structures in representatives of 53 of the 55 currently recognized orders of the Teleostei adductor mandibulae and prior hypotheses of evolution of the muscle across the infraclass.The present study centers on elucidating the morphological diversification of the adductor mandibulae across the Teleostei due to multiple factors discussed below. An alternative nomenclature that reflects these homologies across the entire Teleostei is proposed to facilitate this discussion along with future myologically based analyses in the infraclass.The evidence demonstrates that the present alphanumeric nomenclature fails to identify homologous components of the i.e., hyopalatine arch plus opercular series) follows Grande and Bemis The classification of the Teleostei proposed by Wiley and Johnson Specimens that served for the analysis of the musculature were double-stained for cartilage and bone prior to dissection following the procedure outlined by Datovo and Bockmann Anatomical drawings were based on photographs and direct stereomicroscopic observations of specimens in order to capture fine anatomical details. Drawings are bidimensional and were all produced with a Wacom Intuos4 pen tablet . Outlines were generated in Adobe Illustrator CS5 and the shading and coloring in Adobe Photoshop CS5 .adductor mandibulae and related structures across a morphologically dramatically diverse group such as the Teleostei is difficult. The general features presented herein are intended to serve as guidelines to facilitate the recognition of the primary components of the muscle and associated soft tissues occurring in most teleosts and apparently reflect the myological patterns generalized for most teleostean orders. 
It is crucial to appreciate that these basic configurations are often altered among highly derived teleosts characterized by greatly restructured jaws with associated significantly modified musculature.An enumeration of the invariant features that characterize the adductor mandibulae that apply to all species of the morphologically and taxonomically diverse infraclass Teleostei are an unachievable goal. As is the case for virtually all morphological traits, an elucidation of the homologies of the components of the highly modified adductor mandibulae muscle can in many lineages be only achieved via comparisons with less derived but comparatively closely related taxa a rostrolateral component termed the buccopalatal membrane adductor mandibulae; and (2) a medially positioned posteroventral component termed the buccopharyngeal membrane occasionally associated with the intramandibular segment of the adductor mandibulae.Most teleosts have the The first of these major components of the buccal membrane, the buccopalatal membrane, forms the anterodorsolateral boundary of the buccal cavity. Ventrally, the buccopalatal membrane is limited by the lower jaw, anteriorly and anterodorsally by the premaxilla and maxilla and posteromedially by the anterodorsal margin of the suspensorium . The bucThe superior labial lamina extends between the posterior and posterodorsal margins of the premaxilla and the anterior and anteroventral margins of the maxilla . As wouljugum, an adjectival form meaning structures connected or yoked or pertaining to the cheek], is nearly invariably delimited posteriorly by the paramaxillar and preangular ligaments and ventrally by the coronomaxillar ligament (see below). In the closed mouth, the projugal lamina folds on itself and lies mostly internal to the retrojugal lamina and jugal laminas. Gosline The portion of the buccopalatal membrane situated immediately posterodorsal to the maxilla similarly undergoes significant retraction and expansion in the course of the operation of the mouth. This portion, termed the projugal lamina , [8]adductor mandibulae associate with the buccal ligaments, which are thereby coopted to act as tendons of this muscle. Under the traditional definitions, a ligament interconnects two or more osseous structures, whereas a tendon joins a muscle to a bone, another muscle, or any other anchoring structure. The application of these standard definitions to the buccal ligaments would lead to the recognition of homologous structures via alternative qualifiers (ligament vs. tendon) in different taxa depending on the presence versus absence of a muscular association. As discussed by Johnson and Patterson In several teleosts, portions of the cf.Nine discrete primary ligaments within the buccopalatal membrane were identified among examined teleosts . By way adductor mandibulae muscle.Three ligaments may be associated with the dorsal portion of the maxilla which typically is situated proximate to the mesethmoid . The paradductor mandibulae muscle. The endomaxillar ligament attaches to the medial surface of the dorsalmost portion of the maxilla. From that attachment area, this ligament proceeds posteriorly and becomes associated with the adductor mandibulae muscle , thus, forming a compound labial ligament that surrounds most of the gape of the mouth. 
The name labial ligament was previously applied by some authors to these combined ligaments The supralabial ligament extends from the posteroventral region of the premaxilla to the distal portion of the maxilla and forms the anteroventral border of the superior labial lamina. This ligament is often absent or poorly differentiated in examined taxa; a condition especially prevalent among basal teleosts. Terminology previously applied to this ligament includes the maxillary-premaxillary The coronomaxillar, infralabial and on occasion the supralabial ligaments are sometimes very stout and fibrocartilaginous Stauroglanis gouldingi . This ligament is herein named the faucal ligament .The posteroventral portion of the buccal membrane is the buccopharyngeal membrane which is situated internal to the suspensorium and lower jaw. This membrane lines most of the buccopharyngeal cavity and connects the lower jaw and often the mandibular segment of the adductor mandibulae in the Teleostei is into facial and mandibular muscle segments. These segments, termed the segmentum facialis and segmentum mandibularis, respectively, interconnect via a strong tendinous complex, the intersegmental aponeurosis segmentum mandibularis and the anteroventral component \u2013 the meckelian tendon \u2013 directly attaches anteriorly to the lower jaw have the segmentum mandibularis expanded posteriorly and directly contacting the anterior portion of the segmentum facialis. In such cases a raphe marks the limits between the segmenta mandibularis and facialis. This raphe, herein termed the mandibular raphe, is always continuous medially with the mandibular tendon or attach to several of the components of the lower jaw including the medial portion of the coronomeckelian bone , the ventral region of the dentary . Based on this configuration, Datovo and Castro e.g., polymixiforms) or even arise independent of both the meckelian and mandibular tendons , may also be present. The facial tendon is a posteroventral division of the intersegmental aponeurosis that parallels the ventral border of the segmentum facialis and attaches to the ventrolateral surface of the suspensorium, usually onto the quadrate. The facial tendon is known only in some aulopiforms, characiforms Posteriorly the intersegmental aponeurosis may be expanded and subdivided in a mode comparable to the anterior portion of that connective tissue band, albeit with these subdivisions less common and less significant for the purposes of our discussion. A posterodorsal branch of the intersegmental aponeurosis, the subocular tendon, runs along the dorsal rim of the stegalis (see next section).It is worthy of note that the aforementioned tendons derived from the intersegmental aponeurosis may associate with different muscle sections in different teleostean groups. Some associations are conversely highly conserved in various cases as exemplified by the invariable association of the meckelian tendon with the segmentum facialis of the adductor mandibulae is situated on the cheek and originates primarily from the lateral surface of various elements of the suspensorium and are named rictalis, malaris and stegalis. Therefore, the terms section or pars of a muscle refers to any identifiable muscular subunit whose homology and evolutionary history can be traced and studied across the examined taxa regardless of the degree of separation/differentiation between that and other sections.The d; Elops and the ower jaw or onto ower jaw , 6A. 
Thrrictalis, malaris and stegalis sections although a differentiation between these sections is readily apparent. For example, the osteoglossomorph Hiodon has all the facial sections fully continuous with one another but the stegalis is unambiguously differentiable from the remaining sections of the segmentum facialis by its more anterior area of origin may be total or partial (restricted to a portion of the muscle). Often, some facial sections are continuous with each other at their origin but gradually differentiated .It is critical to appreciate that the muscle sections detailed below are subdivisions of the segmentum facialis is composed of two primary sections; a ventral component termed the pars rictalis and a dorsal element named the pars malaris of the preopercle . Many ostariophysans, smegmamorpharians, anabantiforms, gobiesociforms and a few perciforms have the rictalis, or a part of that section, inserting onto the maxilla , the rictalis is differentiated into an external subsection, herein termed the ectorictalis, and an internal subsection, named the endorictalis of the preopercle , smegmamorpharians, anabantiforms and a few perciforms, the malaris inserts primarily or exclusively on the lower jaw via the intersegmental aponeurosis also inserts on the posterodorsal region of the retrojugal lamina. This condition is found, for example, in the elopomorphs Elops .The eyeball , 4A, 6A.eopercle , 6B. As neurosis , 6A. In hs Elops and Megaligament , 6A. In ower jaw , 6A. Thimalaris over the retrojugal lamina is yet more pronounced in some neoteleosts in which the muscle nearly directly reaches the maxilla but can be continuous and, thus, form a compound section at the other extremity completely lack the segmentum mandibularis distinguishable posteriorly and non-differentiable anteriorly .This section, which is named in reference to its proximity to the coronoid process of the lower jaw, usually originates from the dorsal part of the mandibular tendon , 9. In sfacialis . The cormentalis whose name is derived from the Latin mentum, meaning chin, in reference to its relative position, may extend ventrally beyond Meckel's cartilage and rarely continues caudally beyond the posterior limits of the lower jaw . In this configuration these muscle sections form a compound corono-prementalis.The ramus mandibularis trigeminus nerve is a branch of the truncus infraorbitalis of the trigemino-facialis nerve complex adductor mandibulae across the Teleostei. Our analysis, in contrast, demonstrates that the course of the nerve towards the inner portion of the lower jaw takes many alternative paths (see Discussion). 
These include different passages of the ramus mandibularis trigeminus lateral, medial or through different sections of the segmentum facialis , occupies almost the same portion of the cheek, and invariably inserts on the lower jaw in members of all teleostean orders a ventrolateral set of fibers originating from the quadrate and the ventral portion of the preopercle and inserting onto the ventral part of the mandibular tendon; (2) a dorsolateral set of fibers arising from the posteroventral region of the hyomandibula and the dorsal portion of the preopercle and inserting onto the dorsal part of the mandibular tendon and the retrojugal lamina and, thus, indirectly connecting to the maxilla; and (3) a medial set of fibers originating from the metapterygoid and the anterior region of the hyomandibula and inserting on the meckelian tendon .Friel and Wainwright segmentum facialis to the maxilla are another frequently occurring modification of the adductor mandibulae of teleosts. In many lower teleosts (non-Neoteleostei), the entire segmentum mandibularis inserts solely on the intersegmental aponeurosis and via that connective tissue sheet onto the lower jaw with the posterodorsal region of the retrojugal lamina pass lateral to the ramus mandibularis trigeminus and insert on the posterolateral portion of the retrojugal lamina, primarily on the preangulo-paramaxillar ligament have been considered to be derived from a superficial A1 section ramus mandibularis trigeminus. Some authors adductor mandibulae in the Teleostei. This premise is a likely an extrapolation from the classical study of Luther ramus maxillaris trigeminus and ramus mandibularis trigeminus.Some authors have operated under the assumption of the primacy of certain morphological attributes for the identification of the homologies among sections of the adductor mandibulae morphology across the Teleostei revealed that dependence on a single morphological attribute as the sole or primary indicator of the homologies of any muscle section could lead to arbitrary and unjustifiable homology proposals. Hypotheses of homology of any morphological character, as in this case the sections of the adductor mandibulae, should take into account as many attributes as possible. An informative example involves the attachment described above of facial sections to the maxilla: regardless of their insertions, the rictalis, malaris, and epistegalis exhibit nearly identical respective sites of origin, positions, and relationships with most surrounding structures across all examined teleosts uriforms , siluriframus mandibularis trigeminus. An assumption of invariance of the course of the nerve through the adductor mandibulae across the Teleostei necessitates highly non-parsimonious hypotheses of homology for some muscle complexes. In one of the more extreme situations, the entire segmentum facialis in the blenniiform Scartella, which lies fully external to the ramus mandibularis trigeminus, would be considered non-homologous with any part of the segmentum facialis of many other teleosts, including Albula (Albuliformes), Denticeps (Clupeiformes), Diaphus (Myctophiformes), Elops (Elopiformes), Maurolicus (Stomiatiformes), Neoscopelus (Myctophiformes), Oncorhynchus , Osmerus , Pellona (Clupeiformes) and Xenodermichthys (Argentiniformes) in which the entire segmentum facialis is situated fully internal to the same nerve in Chanos and in the middle of the endorictalis versus between the endoricto-malaris and the stegalis of muscle sections. 
Gain or loss of specific facial or mandibular sections was not detected in any teleost, but the entire segmentum mandibularis is absent in several lineages .Examined teleosts, as well as virtually all reliable data available in the literature, demonstrate that the known alterations in the adductor mandibulae may be a non-issue or prove merely inconvenient for myological and/or phylogenetic investigations centered on smaller subgroups of the Teleostei. Such imprecise terminology conversely poses serious problems when it comes to homology statements in phylogenetic reconstructions of more inclusive groupings. Our analysis amply demonstrated that the coding of phylogenetically informative characters derived from the sections of the adductor mandibulae via the present alphanumeric terminology is virtually impossible across the expanse of teleosts. Progressive modification of the terminology first implemented by Vetter adductor mandibulae across the Teleostei. A notable example is the A1 which was traditionally defined by its insertion on the maxilla; a form of attachment which has in retrospect proved to have arisen independently in various lineages within the infraclass. The consequence of this attachment-centered definition was the designation of non-homologous sections of the adductor mandibulae as an A1 (see discussion above). Due to the resultant confusion the name A1 has been applied to at least the following facial muscle sections:Nomenclatural schemes that fail to reflect the primary homologies of the components of the rictalis of characiforms the ectorictalis of cypriniforms the endorictalis of anabantiforms the malaris of osteoglossiforms the promalaris of carangiforms the retromalaris of carangiforms the 1 to multiple sections within the adductor mandibulae across diverse teleostean groups, it should follow that the term A2 was comparably applied inappropriately to the same, or nearly the same, number of non-homologous structures. In actuality, application of the term A2 proved to be even more ambiguous than was the case with A1 due to an additional complication. The A3 section, which in most cases corresponds to the stegalis herein, is often poorly differentiated or indistinguishable from the adjoining lateral section of the adductor mandibulae which inserts on the lower jaw . In such morphologies some authors applied composite identifiers such as A2A3 in an attempt to reflect the compound nature of the sections inserted on the lower jaw 3 was absent. Thus, the term A2 was applied to both simple and compound facial sections in the literature since the above detailed misapplications of the terms A1 and A2 amply document the magnitude of the problems involved with the present alphanumeric terminology. It is noteworthy that these nomenclatural ambiguities derive not only from different authors who published across the spectrum of groups in the Teleostei, but on occasion involve different taxa within a single analysis . Alternatively, in ostariophysans a ventrolateral portion of the same segment would initially acquire an attachment to the posterolateral region of the lower jaw and, in a more derived evolutionary stage, an attachment to the maxilla. Thus, according to Gosline adductor mandibulae for the Neoteleostei, but introduced the terms \u201cinternal division\u201d and \u201cexternal division\u201d for the main sections resultant from the ostariophysan pathway of subdivision in order to emphasize the incompatibilities between the ostariophysan and neoteleostean arrangements. 
Under Gosline's rictalis and malaris in Ostariophysi . The problems associated with these erroneous assumptions were exhaustively detailed above and are not repeated herein. Furthermore, a broader analysis across teleosts demonstrated that the alternative muscle patterns described by Gosline Although we agree with Gosline segmentum facialis of the basalmost teleosts should be termed A2, whereas the muscle divisions of neoteleosts should retain the traditional alphanumeric terminology. Alternatively, the two main sections yielded by the supposedly unique subdivision pattern in the Ostariophysi were designated by Diogo and Chardon 2 and non-comparable (A1-OST and A0) to those generated via the subdivision in neoteleosts. These shortcomings in conjunction with other erroneous factors such as the definition of muscle sections based on variable attributes (insertion on the maxilla and the path of the ramus mandibularis trigeminus) and the adoption of an equivocal evolutionary model assuming the gain and loss of muscle sections, resulted in a totally unsatisfactory terminology. Not to belabor the point, but as an example, reference to only two of the nearly 30 studies dealing with the teleostean adductor mandibulae authored by Diogo 2 was explicitly used to refer to at least five different portions of the adductor mandibulae . These are:The nomenclatural scheme of Diogo and Chardon the entire segmentum mandibularis of Alepocephalus, Clupea, Denticeps, Elops, Hiodon and Salvelinus;the stego-malaris of Chanos, Cromeria, Danio, Hepsetus and Salminus;the ricto-stegalis of Aulopus;the malaris of Brycon and Diplomystes;the rictalis of Perca.endorictalis of cypriniforms and the rictalis of characiforms and siluriforms. Moreover, in Diogo and Chardon , the characiform Distichodus and the gymnotiform Sternopygus , whereas the A0 section is inexplicably considered to be \u201cexclusively found\u201d solely in cypriniforms : p. 261.adductor mandibulae which failed to reflect homologous components \u2013 the core critical aim of any naming convention. Symptomatic of the irreparable state of this nomenclatural system was the fact that the rictalis in the order Siluriformes has received at minimum 11 different designations despite having the same basic position, origin, and insertion in almost all members of the order. Curiously these identifiers span all the three available terms of the alphanumeric terminology for the facial sections:The sum of the above discussed problems perpetuated across more than a century resulted in a progressively complex alphanumeric terminology for the sections of the 1 or \u201clateral fibers of muscle b\u201d in loricariids AA1-OST in auchenipterids, callichthyids and diplomystids A1-OST+A2A3\u2032\u03b2 in trichomycterids A2\u2032 in trichomycterids A2\u03b1 in bagrids 2A3\u2032\u03b2 in clariids AA2ventral in loricariids 1 in bagrids Adadductor mandibulae superficialis in sisorids external division in diplomystids partie lat\u00e9rale or \u201cmuscle a\u201d in silurids e.g., A1\u03b2b\u2033m\u03b1 adductor mandibulae among teleosts. Since most of the problems associated with the alphanumeric terminology are inherent to mistaken underlying original premises, an adaptation of this nomenclature to reflect the homologies of the adductor mandibulae is impossible. 
Retention of the terms A1, A2, and A3 would only increase nomenclatural confusion, more so post the publications of Diogo and Chardon Authors were frequently forced to coin inordinately complex terms reflective of the basic position of each muscle component, a definite advantage over the uninformative vague alphanumeric codes in the present naming convention. In this, the new nomenclature parallels the naming conventions applied to most other anatomical systems. Short names were selected for primary muscle components to facilitate combinations into relatively brief composite terms designating compound sections and to allow easy aggregation of prefixes and adjectives to indicate subdivisions .We found this nomenclature could be successfully employed without complications in all examined teleosts ranging from the simple architecture of the adductor mandibulae in some basal teleosts lacking any trace of differentiation in the segmentum facialis .Confronted with the quandary resultant from the inherent problems with the alphanumeric terminology, it is preferable to create a new terminology for the facialis to the hfacialis and tetrTable S1Material examined.(PDF)Click here for additional data file."} +{"text": "Prediction of the location of culprit lesions responsible for ST-segment elevation myocardial infarctions may allow for prevention of these events. A retrospective analysis of coronary artery motion (CAM) was performed on coronary angiograms of 20 patients who subsequently had ST-segment elevation myocardial infarction treated by primary or rescue angioplasty and an equal number of age and sex matched controls with normal angiograms.There was no statistically significant difference between the frequency of CAM types of the ST-segment elevation acute myocardial infarction and control patients (p = 0.97). The compression type of CAM is more frequent in the proximal and mid segments of all three coronary arteries. No statistically significant difference was found when the frequency of the compression type of CAM was compared between the ST-segment elevation acute myocardial infarction and control patients for the individual coronary artery segments (p = 0.59).The proportion of the compression type of coronary artery motion for individual artery segments is not different between patients who have subsequent ST-segment elevation myocardial infarctions and normal controls. The three-dimensional motion of the heart is characterized by rotation (around the centre of gravity), radial displacement (towards or away from the center of gravity), and translational motion . The totMotion of individual segments of coronary arteries reflects the motion of the underlying myocardium. 
The classification system for different patterns of coronary artery motion (CAM) used in this study is derived from a system where CAM was classified into 10 patterns, which were grouped into 3 types: (1) compression type: the length of the arterial segment is shortened without vertical deviation of the artery; (2) displacement type: the location of the coronary artery shifts without change of the length or shape of the arterial segment; and (3) bend type: the coronary artery flexes into a curve[The compression type of CAM for individual artery segments is associated with stenosis and is aThe hypothesis to be tested in this study is that the compression type of CAM is more likely to be present in patients who have subsequent STEMI than in age and sex matched control patients with normal coronary angiograms.Twenty patients were identified who had coronary angiography after March 1998 and subsequently re-presented with a STEMI. STEMI was defined as ischemic chest pain with ST segment elevation of 1 mm in 2 contiguous limb leads or 2 mm in 2 contiguous chest leads. Patients were excluded if they had previous coronary artery bypass surgery or had stent thrombosis as the cause of STEMI. Twenty age and sex matched control patients were identified with normal coronary angiograms.The CAM patterns of coronary segments were assessed retrospectively in both the STEMI and control patients. For the STEMI patients, the coronary angiography performed before the STEMI was used. The assessment was made blinded to the location of the future culprit segment. The CAM classification of Konta and Bett was usedAssessment of CAM was made in up to fourteen segments of the coronary arterial tree. The segments were given a numerical label as shown in Figure Clinical risk factors of all patients were obtained from the medical records.Chi-squared tests were used for comparison of frequencies between groups. All statistical analyses were performed using Stata .The Royal Prince Alfred Hospital Ethics Review Committee approved the research protocol (reference X10-0159). The research protocol did not include obtaining patient consent.The demographics for the STEMI and control patients are shown in Table The proportion of the compression type of CAM for individual artery segments for both patient groups is shown in Figure This study shows that the proportion of the compression type of coronary artery motion for individual artery segments is not statistically significantly different between patients who have subsequent STEMIs and age and sex matched controls.The main limitations of this study are its small sample size and the potential observer bias in the qualitative assessment of CAM. The technique relies on a visual assessment. Knowledge of the asymmetrical frequency distribution of culprit lesions in patients with STEMIs and the Although the exact pattern of CAM varies amongst individual patients, there are consistent themes of motion differences between the different coronary arteries,8 and alPreviously published work has shown that the compression type of CAM is a predictor of the location of stenosis and the The authors declare that they have no competing interests.AO and PK conceived the study. AO undertook the ethics application, data collection, data analysis, and manuscript preparation. KB recommended the statistical methods and supervised data analysis. JF, DR, AH and RD prepared the ethics application, performed manuscript revision and supervised the activities of AO. 
All authors except KB read and approved the final manuscript."} +{"text": "To the Editor:Mycobacterium ulcerans disease, commonly called Buruli ulcer, is an emerging infectious disease in West Africa used water directly from the Couffo River. Other villages employed protected water sources for domestic purposes . These results are similar to Barker's findings in Uganda, which showed that families who used unprotected sources of water for domestic purposes had higher prevalence rates of Buruli ulcer than those who used boreholes . ConsequDetermining the complex relationship between distance from the Couffo River and the numbers of cases and level of protection of water supply is difficult. Our findings argue for the need to perform additional epidemiologic studies to understand more completely the key factors that determine the distribution of the disease in the entire commune of Lalo."} +{"text": "Neuromuscular electrical stimulation (NMES) for treating dysphagia is a relatively new therapeutic method. There is a paucity of evidence about the use of NMES in patients with dysphagia caused by stroke. The present review aimed to introduce and discuss studies that have evaluated the efficacy of this method amongst dysphagic patients following stroke with emphasis on the intensity of stimulation (sensory or motor level) and the method of electrode placement on the neck. The majority of the reviewed studies describe some positive effects of the NMES on the neck musculature in the swallowing performance of poststroke dysphagic patients, especially when the intensity of the stimulus is adjusted at the sensory level or when the motor electrical stimulation is applied on the infrahyoid muscles during swallowing. Diverse paramedical treatments for swallowing disorders usually carried out by speech and language pathologists (SLPs) are introduced in the literature. It is expected that these treatment methods help to recover the swallowing functions, improve nutritional status, and prevent from developing the dysphagia consequences . But, whNeuromuscular electrical stimulation (NMES) of the swallowing muscles is a relatively new therapeutic modality that is of great interest to the SLPs recently , 4. SeveThe present review aimed to introduce and discuss studies that have evaluated the efficacy of this method amongst dysphagic patients following stroke with emphasis on the intensity of stimulation (sensory or motor level) and the method of electrode placement on the neck. These two parameters can have important effects on the outcomes, but they had not been considered specifically by the previous published reviews , 5\u20137. MoThe external electrical stimulation on the swallowing muscles is applied with two general purposes: to cause muscle contractions and to stimulate the sensory pathways . In the In the sensory approach, the sensory threshold is usually identified as the lowest current level at which the patient feels a tingling sensation on his/her neck skin . ConsideElectrode placement always is a major challenge in surface electrotherapy especially on the small and overlapping swallowing muscles. Suprahyoid muscles including the anterior belly of the digastric, the mylohyoid, and the geniohyoid pull the hyoid upward and toward the mandible. Most of the infrahyoid muscles such as sternohyoid, omohyoid, and sternothyroid muscles lower the hyolaryngeal complex toward the sternum. 
Therefore, when the electrodes are placed on and around the thyroid cartilage, the motor electrical stimulation will pull the larynx downward . If the Freed et al. introducn = 25), there were no statistically significant differences in the outcomes between groups of patients who suffered from other musculature disorders respond appropriately to the inactive electrotherapy devices , 34. TheThe majority of the reviewed studies describe some positive effects of the NMES on the neck musculature in the swallowing performance of poststroke dysphagic patients, especially when the intensity of the stimulus is adjusted at the sensory level or when the motor electrical stimulation is applied on the infrahyoid muscles during swallowing. However, we still need to know more about the physiologic and neurologic effects of these therapeutic methods on the stroke patients' swallowing function. The identification of stimulation effects on the underlying pathophysiology of the swallowing disorders and on the central nervous system organization will help to design specific and individual treatment protocols."} +{"text": "P.falciparum resistance gene markers have been investigated during the malaria transmission October 2011 to February 2012. More than 1500 children and 700 pregnant women were recruited. The prevalence of malaria was comprised between 9% and 30% depending of the localization of the health center. It was found that acceptability of technicians to use rapid diagnostic tests was low and microscopy is still considered as the reference. A comparison between health centers and reference lab technicians showed similar level of in malaria diagnosis performance. The P.falciparum genetic diversity and multiplicity of infection (1.7) do not show any reduction but parasite densities were lower than reported in studies conducted before the introduction of ACTs. As a conclusion: in order to improve quality of care and the acceptability of RDT, there is a need to provide a targeted training to heath workers.Since the introduction of artemisin-based combination therapies in the Republic of Congo, limited number of investigations have been conducted to evaluate the burden of the disease at the district level. The main goal of this study was to document laboratory-confirmed cases using rapid diagnostic tests of malaria in children and pregnant women attending health facilities located in Northern districts of Brazzaville and Pointe Noire which is the second main city of the Republic of Congo. As objective2, the malaria diagnostic performance of each health facility has been assessed and as objective3, genetic diversity, multiplicity of infection and the prevalence of"} +{"text": "We consider the problem of sensorimotor delays in the optimal control of (smooth) eye movements under uncertainty. Specifically, we consider delays in the visuo-oculomotor loop and their implications for active inference. Active inference uses a generalisation of Kalman filtering to provide Bayes optimal estimates of hidden states and action in generalised coordinates of motion. Representing hidden states in generalised coordinates provides a simple way of compensating for both sensory and oculomotor delays. The efficacy of this scheme is illustrated using neuronal simulations of pursuit initiation responses, with and without compensation. 
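In practice, this compensation amounts to applying a Taylor-series time-shift operator to the states represented in generalised coordinates of motion. The following minimal sketch is only meant to illustrate that operation; the function names, the three-order state and the delay values are our own illustrative choices and are not taken from the simulations reported here.

```python
import numpy as np
from scipy.linalg import expm

def shift_operator(n_orders):
    """Derivative (shift) operator D acting on a generalised state (x, x', x'', ...)."""
    D = np.zeros((n_orders, n_orders))
    for i in range(n_orders - 1):
        D[i, i + 1] = 1.0
    return D

def time_shift(x_gen, tau):
    """Shift a generalised state by tau seconds: x(t + tau) ~ expm(tau * D) @ x(t).
    Because D is nilpotent, this is exactly a truncated Taylor expansion."""
    x_gen = np.asarray(x_gen, dtype=float)
    return expm(tau * shift_operator(len(x_gen))) @ x_gen

# Illustrative generalised target state: position, velocity, acceleration.
x_gen = [1.0, 0.5, -0.2]
print(time_shift(x_gen, tau=0.05))    # estimate 50 ms ahead (e.g. a sensory delay)
print(time_shift(x_gen, tau=-0.03))   # estimate 30 ms in the past (e.g. a motor delay)
```

Applying such shifts to the sensory and motor streams (forwards for one, backwards for the other) is, in outline, how schemes of this kind can infer and act as if the visuo-oculomotor loop had no latency.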
We then consider an extension of the generative model to simulate smooth pursuit eye movements - in which the system believes both the target and its centre of gaze are attracted to a (fictive) point moving in the visual field. Finally, the generative model is equipped with a hierarchical structure, so that it can recognise and remember unseen (occluded) trajectories and emit anticipatory responses. These simulations speak to a straightforward and neurobiologically plausible solution to the generic problem of integrating information from different sources with different temporal delays and the particular difficulties encountered when a system - like the oculomotor system - tries to control its environment with delayed signals."} +{"text": "Management of diabetic foot problems presents a series of complex challenges for patients and for health care professionals. Appropriate guidance on best practice is essential to delivering high-quality care. The NHMRC guidelines for the management of diabetic foot problems in primary care have just been updated to address this issue. The guidelines have been developed using systematic reviews of the literature, and broad consultation of organisations representing the relevant health care professionals and patients. The development of the guidelines, the evidence considered and the recommendations developed will be reviewed."} +{"text": "Clarias gariepinus fish were also collected from the experimental Asa River and from the control Asa Dam water and were analysed for comparative histological investigations and bacterial density in the liver and intestine in order to evaluate the impact of pollution on the aquatic biota. The water pH was found to range from 6.32 to 6.43 with a mean temperature range of 24.3 to 25.8 \u00b0C. Other physicochemical parameters monitored including total suspended solids, total dissolved solids, biochemical oxygen demand and chemical oxygen demand values exceeded the recommended level for surface water quality. Results of bacteriological analyses including total heterotrophic count, total coliform and thermotolerant coliform counts revealed a high level of faecal pollution of the river. Histological investigations revealed no significant alterations in tissue structure, but a notable comparative distinction of higher bacterial density in the intestine and liver tissues of Clarias gariepinus from Asa River than in those collected from the control. It was inferred that the downstream Asa River is polluted and its aquatic biota is bacteriologically contaminated and unsafe for human and animal consumption.Water is a valued natural resource for the existence of all living organisms. Management of the quality of this precious resource is, therefore, of special importance. In this study river water samples were collected and analysed for physicochemical and bacteriological evaluation of pollution in the Unity Road stream segment of Asa River in Ilorin, Nigeria. Juvenile samples of Water is vital to the existence of all living organisms, but this valued resource is increasingly being threatened as human populations grow and demand more water of high quality for domestic purposes and economic activities . The quavia the food chain p < 0.05)athogens ,39.Clarias gariepinus collected from experimental Asa River section depicted no significant alterations or pathological effects in the livers and intestines when compared with those collected from the control Asa Dam water. 
This result does not corroborate with similar studies on fish species of Tilapia zillii and Solea vulgaris from Lake Qarun in Egypt which observed histological alterations in the muscles including vacuolar degeneration and aggregation of necrosis in gills and inflammatory cells in hepatocytes of liver tissues [The histological findings on the tissue structures of the four different parts of tissues .The observed histological alterations in the fishes studied in Lake Qarun was possible simply because the lake is a closed system which acts as a reservoir for agricultural and sewage drainage water and whose components are found to be polluted with heavy metals ,41. Howein situ river pollution impact of non-point source pollution, though it has been for chronic exposure studies on specific metal concentrations [Previous toxicological evaluation studies on the Amilegbe segment of Asa River by haematological and enzyme studies of rats fed the polluted river water revealed marked haematological changes and effects on enzymes of the rat after a period of eight weeks . This watrations \u201346.Nevertheless, the high microbial load observed on the gram-stained histological slides of the fish liver and intestine and 8 isThe comparative study done between the catfish of the Asa Dam reservoir water (control) and the polluted Asa River segment revealed, by bacterial examination in the fish tissues observed through special staining, a comparatively significant alteration of the normal microbial flora of the catfish from the polluted river.4 of faecal coliforms (Escherichia coli) and that high concentrations of pathogenic microorganisms might occur in the digestive tract and intraperitoneal fluid of the fish even at low numbers of indicatory bacteria [In his work, Strauss reported that invasion of fish flesh by pathogenic bacteria was very likely if the fish were reared in water containing over 10bacteria . The valThis pollution impact study has proven that the downstream Asa River segment is indeed polluted. The study has been able to track the type of pollution to be more of faecal contamination by the evaluation of the bacteriological quality of the river water samples. Non-point sources of pollution which include the agricultural activities (pesticides and crop wastes) and domestic activities by the poorly planned settlers nearby the river as well as the point source discharge of industrial effluents along the industrial estate have been implicated to be causative to the poor quality of the river and its aquatic life.Clarias gariepinus caught from the polluted Asa River and the control Asa Dam water. However, this does not imply good quality of the aquatic life of the river as further investigation of the bacteria present in the fish tissues portrayed very high bacterial density in the tissues of Clarias gariepinus collected from the Asa River segment when compared to the tissues of Clarias gariepinus caught from the Asa Dam water. The implications of these findings may be that people dependent on this river water for domestic use including cooking, bathing, washing and even drinking or for agricultural uses like fishing and farming may be exposed to public health risks. 
Proper treatment is imperative for the river to be appropriate for potable, domestic and industrial purposes.This study has also shown that there were no significant tissue alterations observed during the comparative histological studies carried out on juvenile live samples of It is therefore recommended to coordinate different efforts at the level of the community dwellers and the government to rescue the downstream Asa River segment and its aquatic life from the current hazard-posing environmental problems."} +{"text": "We study the ever more integrated and ever more unbalanced trade relationships between European countries. To better capture the complexity of economic networks, we propose two global measures that assess the trade integration and the trade imbalances of the European countries. These measures are the network (or indirect) counterparts to traditional (or direct) measures such as the trade-to-GDP (Gross Domestic Product) and trade deficit-to-GDP ratios. Our indirect tools account for the European inter-country trade structure and follow (i) a decomposition of the global trade flow into elementary flows that highlight the long-range dependencies between exporting and importing economies and (ii) the commute-time distance for trade integration, which measures the impact of a perturbation in the economy of a country on another country, possibly through intermediate partners by domino effect. Our application addresses the impact of the launch of the Euro. We find that the indirect imbalance measures better identify the countries ultimately bearing deficits and surpluses, by neutralizing the impact of trade transit countries, such as the Netherlands. Among others, we find that ultimate surpluses of Germany are quite concentrated in only three partners. We also show that for some countries, the direct and indirect measures of trade integration diverge, thereby revealing that these countries trade to a smaller extent with countries considered as central in the European Union network. The structure of European trade has undergone deep transformations last two decades, such as the integration of former socialist economies and the introduction of the Euro. Across the years, under these events and the evolution of the world trade, the structure of European trade has been modified thoroughly, generally speaking in the sense of higher mutual trade but also in terms of acuter trade imbalance.Among the indicators that can quantify the structure of trade, one finds direct measures of bilateral trade such as the trade-to-GDP ratio (where trade is defined as the sum of imports and exports of goods and services) and the deficit-to-GDP ratio (where trade deficit is defined as the difference between imports and exports). The first one is a measure of the trade integration of a country within its environment. The second one is a measure of trade imbalance between a country and its partners. These measures are direct in the sense that they only take into account the interaction of one country with its trade partners, neglecting the remainder of the network.In this paper, we propose two complementary methods for analysing the global structure of a trade network. These methods take part in the development of complex networks theory for economics . Such a flow may for instance exhibit a four -way cycle where country The relevance of considering the indirect flows can be illustrated with the example of the Netherlands. 
This country plays the role of a hub in the trade network, with imports from the rest of the world transiting through the Netherlands to the other European countries. The direct measure would only note that the Netherlands have a trade surplus with all European countries, neglecting that most of the trade only transits through the Netherlands from an initial exporter to a final importer. The indirect measure cleans these network-related effects to reveal the ultimate trade debtors and creditors. The indirect measure is thus also a relevant tool to assess the competitiveness of a country. This example illustrates the case of a chain of production with goods transiting through the Netherlands, but the relevance of the indirect measure is more general, and not confined to chains of production. The case of a country A exporting wheels to country B, which will then export cars (and the wheels herewith) to a country C is not fundamentally different from the case of a country A exporting beer to country B, letting him devote more resources (the ones previously affected to beer production) to wheel production and cars export to country C. At the end, country A is the ultimate creditor and country C the ultimate debtor, given the resources the former used for the final consumption of the latter.This Flow Decomposition Method and the associated notion of indirect trade deficit are the main conceptual innovations of this paper. These tools bear some superficial resemblance with the cycle, middleman, in and out statistics developed in In the second method, we make a thought experiment in which we follow the random tribulations of a dollar travelling randomly through the trade network. At stationarity, the fraction of time spent by the random dollar, called the PageRank Trade integration between two countries is commonly apprehended through bilateral trade-to-GDP ratio. The larger the ratio the more integrated the country with its partners. This traditional measure is direct in the sense that this statistics does not take into account the position, or centrality, of the trade partners in the network. Let us consider a mostly autarkic country As a case study, we show how these indirect measures complement their direct counterparts in the context of the European trade. We especially study the role of the Euro in the development of trade within Europe. The benefit of sharing a common currency (by decreasing the cost of cross-border transactions) depends on the degree of trade integration. The higher the trade integration, the larger the benefits. It is therefore crucial to dispose of fine measures of trade integration. We show how our methods can better capture the complexity of the trade network as complement to the usual direct measures.We study the network of trade flows between 24 countries of the European Union, using data provided by the International Monetary Fund (IMF) and the World Bank (WB). The two datasets provide different information about the network: the IMF dataset contains the total exports between each pair of countries of the European Union in current US Dollar value, while the WB dataset provides the gross domestic product (GDP) of each country and the consumer price index. By combining those two datasets, we build a network of trade flows between European countries. Since each export from a country The data ranges from 1993 to 2007. 
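As a baseline for the comparisons that follow, the two direct measures can be computed in a few lines once the bilateral export matrix and the GDP vector have been assembled from these sources. The sketch below is purely illustrative: the array names and the toy numbers are ours and do not come from the IMF or World Bank series used here.

```python
import numpy as np

def direct_measures(F, gdp):
    """Direct (bilateral) trade statistics from an export matrix F
    (F[i, j] = exports from country i to country j) and a GDP vector,
    both expressed in the same currency and year."""
    F = np.asarray(F, dtype=float)
    gdp = np.asarray(gdp, dtype=float)
    exports = F.sum(axis=1)                    # total exports of each country
    imports = F.sum(axis=0)                    # total imports of each country
    trade_to_gdp = (exports + imports) / gdp   # trade integration
    deficit_to_gdp = (imports - exports) / gdp # trade imbalance
    return trade_to_gdp, deficit_to_gdp

# Toy three-country example (arbitrary units, purely illustrative).
F = np.array([[0.0, 10.0, 5.0],
              [8.0, 0.0, 12.0],
              [6.0, 9.0, 0.0]])
gdp = np.array([100.0, 150.0, 120.0])
print(direct_measures(F, gdp))
```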
By starting in 1993, we avoid the potential structural break related to the removal of EU internal customs (which had an impact on the recording of trade flows). By ending in 2007, we avoid the specific impact of the financial crisis. We therefore have a time period of 15 years. Our dataset originally contains the data of the 27 members of the European Union. However, Malta and Cyprus have been removed from the dataset since large parts of their data were missing. Also, Belgium and Luxembourg are recorded together for the so-called Belgium-Luxembourg Economic Union to (6) take the form of linear or convex quadratic programs that can be solved efficiently with usual methods. An example of such a decomposition is given on The trade integration of a country with another country or group of countries is usually measured by the bilateral trade-to-GDP ratio. The larger the ratio the more integrated the country with its partners. This measure is direct in the sense that this statistics does not take into account the position, or centrality, of the trade partners in the network. We propose to use commute times of a random walking dollar in the trade network as a measure of integration of countries with respect to one another taking into account their direct and indirect ties. A small commute time between two countries implies a small trade distance between those countries, i.e. a high mutual integration.The commute-time distance, defined in the general context of random walks over graphs or Markov chains We imagine that all the money in circulation in the world is materialised in one-dollar notes. We identify one particular note and we track it as it switches from hand to hand. We suppose that every dollar paid out by an individual is randomly uniformly chosen among all the dollars in her possession. For the moment we suppose also time invariance: every two individual who have economic contacts, have regular economic contacts whose frequency and intensity (measured for instance by the number of dollars exchanged in a year) is constant in time, thus resulting in a constant expected fortune (number of dollars possessed) of individuals through time. The probability of presence of a random walker in a given node of a network is sometimes called the PageRank of the node, in analogy with a well known search tool on the World Wide Web In our case, nodes represent individual countries, and we assume that the random dollar is exchanged at regular discrete-time steps, rather than in continuous time. We can choose a restricted network and perform a random walk on restricted network, which corresponds to the random walk in the global network conditional to the fact that the random dollar stays in the restricted network. In this paper, we study the random dollar walk conditional to the fact that it corresponds to a trade of goods and services produced in the year within Europe. This excludes the non-European countries but also the trade of second-hand goods already accounted for in previous years.In this network, the PageRank is a truly global quantity which gives a useful indication of the centrality of a country in the network of exchanges, as exemplified in Let The Euclidean commute-time distances embeds the national economies as points in a geometric space, where proximity of two countries indicates that a large fraction of the GDP of both countries participes to trade between them, either directly or through a chain of intermediate partners. 
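Both quantities introduced here, the PageRank-like stationary share of the random dollar and the commute-time distances, can be computed directly from the row-normalised flow matrix. The sketch below takes the fundamental-matrix route, which is valid for any ergodic chain; it is one standard way of obtaining these quantities rather than necessarily the computation used by the authors, and the function and variable names are ours.

```python
import numpy as np

def random_dollar_statistics(F):
    """Stationary (PageRank-like) shares and commute-time distances of the
    'random dollar' walk on a trade-flow matrix F (F[i, j] = exports i -> j).
    Assumes the trade graph is strongly connected and every country exports."""
    F = np.asarray(F, dtype=float)
    n = F.shape[0]
    P = F / F.sum(axis=1, keepdims=True)          # row-stochastic transitions

    # Stationary distribution pi: left eigenvector of P for eigenvalue 1.
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    pi = pi / pi.sum()

    # Fundamental matrix Z = (I - P + 1 pi^T)^(-1)  (Kemeny-Snell construction).
    Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), pi))

    # Mean first-passage times m[i, j] = (Z[j, j] - Z[i, j]) / pi[j].
    M = (np.diag(Z)[None, :] - Z) / pi[None, :]

    # Commute times and their square roots (Euclidean commute-time distances).
    C = M + M.T
    return pi, np.sqrt(C)

# Toy example (three countries, arbitrary units, purely illustrative).
F = np.array([[0.0, 4.0, 1.0],
              [3.0, 0.0, 2.0],
              [2.0, 2.0, 0.0]])
pi, D = random_dollar_statistics(F)
print(pi)   # share of time the random dollar spends in each country
print(D)    # pairwise integration distances (smaller = more integrated)
```

The variance of the countries in the resulting embedding, obtainable for instance by classical multidimensional scaling of these distances, then summarises how tightly integrated the region is as a whole, which is the kind of statistic discussed further below.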
Assuming that Commute times can be further interpreted in terms of the propagation of a small perturbation of one country's activity to another country, as the following simplistic thought experiment suggests. Imagine indeed that a new activity is created in country So far we have presented commute-time distance as a pairwise relationship indicating mutual integration of two countries. However we are often more interested in assessing how a country is integrated in the regional or global economy, rather than with a specific partner. Clearly, a regional economy is tightly integrated as a whole, if all the national economies are close to each other in terms of commute-time distances, which translates into a quick propagation of shocks across the network through direct or domino effects. This can be assessed by the variance of the countries in their Euclidean space representation, which is the average square commute-time distance to the centre of mass of the region. The square distance of one particular country to the centre of mass indicates how integrated this country is with the regional network as a whole, rather than with a specific country.We now explain how to compute the commute-time distances. Let In order to facilitate the visualisation or analysis of data, one can perform a principal component analysis of the We now illustrate how the tools derived from the flow decomposition approach can help to better understand the European trade network. The symmetric, cyclic and acyclic components of the inter-country trade structures represent three types of economic interactions. The symmetric part reflects the pairwise exchanges where both countries have neither trade deficit nor surplus. The cyclic part reflects the same kind of interaction, but circulating over a larger number of actors. Symmetric interactions can be seen as cycles of length 2. Finally, the third part contains the transfers of money that flow from countries that accumulate deficits to surplus-making countries. The study of those three components provides insights on the evolution of the European trade network.Firstly, the largest fraction of the European trade is made of symmetric exchanges , which rThis acyclic component provides a new way to consider the trade imbalances in a network of countries. Contrarily to the traditional approach focusing on the trade deficit-to-GDP between two countries, a direct measure, the tool we propose goes farther by considering the indirect exposures to get at the end a finer understanding of the ultimate debtors and creditors of trade flows. The example of the Netherlands, see The relevance of the indirect measure of the trade deficit is double. First, it reveals ultimate deficits/surpluses, cleaned from transit node effects. As such, the indirect measure is an analytic complementary tool to the direct trade deficit-to-GDP measure. To the extent that a direct trade deficit-to-GDP can reveal both a relative competitiveness problem and a specific role in the world trade network , its interpretation might be delicate. The indirect measure by neutralizing the network component of the deficit can thus be considered as a more precise measure of competitiveness. It better captures specific competitiveness problems since it controls for country specializations in the trade network. Secondly, it reduces the size of the information matrix. The indirect measures reduce the bilateral trade surpluses/deficits matrix to a relatively small subset of ultimate bilateral trade deficits/surpluses. 
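A minimal implementation of the three-way split underlying these indirect measures is sketched below: the balanced bilateral part is the elementwise minimum of reciprocal flows, the cyclic part is taken as a maximal circulation fitted under the remaining one-way flows by linear programming, and the acyclic remainder carries the ultimate surpluses and deficits. This is one convenient formulation among the several linear or quadratic programs evoked earlier, not necessarily the one used by the authors, and the cyclic/acyclic split it returns is in general not unique; the function names are ours.

```python
import numpy as np
from scipy.optimize import linprog

def decompose_trade_flows(F):
    """Split a trade-flow matrix F (F[i, j] = exports from country i to j)
    into symmetric, cyclic and acyclic components, F = S + C + A."""
    F = np.asarray(F, dtype=float)
    n = F.shape[0]

    # (1) Symmetric part: balanced reciprocal exchanges between pairs.
    S = np.minimum(F, F.T)
    np.fill_diagonal(S, 0.0)
    R = F - S                                   # remaining one-way flows

    # (2) Cyclic part: the largest circulation that fits under R,
    #     i.e. maximise total circulating flow subject to conservation.
    edges = [(i, j) for i in range(n) for j in range(n) if R[i, j] > 0]
    C = np.zeros_like(R)
    if edges:
        cost = -np.ones(len(edges))             # maximise <=> minimise the negative
        A_eq = np.zeros((n, len(edges)))        # out-flow minus in-flow = 0 at each node
        for k, (i, j) in enumerate(edges):
            A_eq[i, k] += 1.0
            A_eq[j, k] -= 1.0
        res = linprog(cost, A_eq=A_eq, b_eq=np.zeros(n),
                      bounds=[(0.0, R[i, j]) for i, j in edges], method="highs")
        for k, (i, j) in enumerate(edges):
            C[i, j] = res.x[k]

    # (3) Acyclic part: what ultimately flows from deficit to surplus countries.
    A = R - C
    return S, C, A

# Toy example: a three-country cycle plus residual one-way flows.
F = np.array([[0.0, 5.0, 0.0],
              [0.0, 0.0, 8.0],
              [3.0, 0.0, 0.0]])
S, C, A = decompose_trade_flows(F)
print(C)    # a 3-unit cycle 0 -> 1 -> 2 -> 0
print(A)    # residual one-way flows: 2 units 0 -> 1 and 5 units 1 -> 2
```

The row-minus-column sums of the acyclic component reproduce each country's overall trade balance, while the handful of non-zero entries that remain, once transit flows have been netted out, are the ultimate bilateral imbalances just mentioned.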
Countries who measure the evolution of their competitiveness can thus focus on the sole countries with which indirect trade deficits prevail. For example, More generally, the rising complexity and imbalances of the trade network, where some countries record large surpluses and others large deficits awake the debate on the need of a global coordinated adjustment plan involving all countries is one of the main benefits derived from the common currency. We here illustrate how the network tools provided by the commute-time distance approach help to deepen the understanding of the European trade network, the degree of integration of the countries in the network and their integration dynamics.These distances, built upon the information available in the whole European network, are global, or indirect, measures of the country trade integration, as opposed to the direct trade integration computed as bilateral trade-to-GDP ratio.We first present in In the same figure, we also represent individual trajectories through their square distance to the centre of mass of the European Union, in order to show the variety of situations aggregated into the variance. For example, the United Kingdom, albeit outside the Euro zone, is actually better integrated to Euro zone than some Euro zone countries such as Spain. Nevertheless from the years 2000 it tends to split away from the centre of mass. Interestingly, this tendency is also observed for some Euro zone countries as we observe in another visualization hereafter. The comparison reveals that the average inter-country trade distances, as measured by their variance, is much smaller for Euro zone countries than for European countries (in or out of the Euro zone), revealing a larger integration among Euro countries. We also note that the variance decreases, mostly at the beginning of the period, with no specific break in 1999 for the introduction of the Euro. Some preliminary studies estimated the effect of the launch of the Euro on the volume of trade within the Euro zone countries to be in the range of 8% to 16% or deficits (\u2212) of Austria toward countries listed on the rows. The colors correspond to the indirect measures of trade imbalances, as computed by the Flow Decomposition Method, with ultimate surpluses in green and ultimate deficits in red.(TIFF)Click here for additional data file.Figure S2Evolution of the direct and indirect measures of trade imbalances for Belgium and Luxembourg. The figures in each cell correspond to direct trade surpluses (+) or deficits (\u2212) of Belgium and Luxembourg toward countries listed on the rows. The colors correspond to the indirect measures of trade imbalances, as computed by the Flow Decomposition Method, with ultimate surpluses in green and ultimate deficits in red.(TIFF)Click here for additional data file.Figure S3Evolution of the direct and indirect measures of trade imbalances for Bulgaria. The figures in each cell correspond to direct trade surpluses (+) or deficits (\u2212) of Bulgaria toward countries listed on the rows. The colors correspond to the indirect measures of trade imbalances, as computed by the Flow Decomposition Method, with ultimate surpluses in green and ultimate deficits in red.(TIFF)Click here for additional data file.Figure S4Evolution of the direct and indirect measures of trade imbalances for Czech Republic. The figures in each cell correspond to direct trade surpluses (+) or deficits (\u2212) of Czech Republic toward countries listed on the rows. 
The colors correspond to the indirect measures of trade imbalances, as computed by the Flow Decomposition Method, with ultimate surpluses in green and ultimate deficits in red. (TIFF) Figure S5: Evolution of the direct and indirect measures of trade imbalances for Denmark. The figures in each cell correspond to direct trade surpluses (+) or deficits (\u2212) of Denmark toward countries listed on the rows. The colors correspond to the indirect measures of trade imbalances, as computed by the Flow Decomposition Method, with ultimate surpluses in green and ultimate deficits in red. (TIFF) Figure S6: Evolution of the direct and indirect measures of trade imbalances for Spain. The figures in each cell correspond to direct trade surpluses (+) or deficits (\u2212) of Spain toward countries listed on the rows. The colors correspond to the indirect measures of trade imbalances, as computed by the Flow Decomposition Method, with ultimate surpluses in green and ultimate deficits in red. (TIFF) Figure S7: Evolution of the direct and indirect measures of trade imbalances for Estonia. The figures in each cell correspond to direct trade surpluses (+) or deficits (\u2212) of Estonia toward countries listed on the rows. The colors correspond to the indirect measures of trade imbalances, as computed by the Flow Decomposition Method, with ultimate surpluses in green and ultimate deficits in red. (TIFF) Figure S8: Evolution of the direct and indirect measures of trade imbalances for Finland. The figures in each cell correspond to direct trade surpluses (+) or deficits (\u2212) of Finland toward countries listed on the rows. The colors correspond to the indirect measures of trade imbalances, as computed by the Flow Decomposition Method, with ultimate surpluses in green and ultimate deficits in red. (TIFF) Figure S9: Evolution of the direct and indirect measures of trade imbalances for France. The figures in each cell correspond to direct trade surpluses (+) or deficits (\u2212) of France toward countries listed on the rows. The colors correspond to the indirect measures of trade imbalances, as computed by the Flow Decomposition Method, with ultimate surpluses in green and ultimate deficits in red. (TIFF) Figure S10: Evolution of the direct and indirect measures of trade imbalances for Germany. The figures in each cell correspond to direct trade surpluses (+) or deficits (\u2212) of Germany toward countries listed on the rows. The colors correspond to the indirect measures of trade imbalances, as computed by the Flow Decomposition Method, with ultimate surpluses in green and ultimate deficits in red. (TIFF) Figure S11: Evolution of the direct and indirect measures of trade imbalances for Greece. The figures in each cell correspond to direct trade surpluses (+) or deficits (\u2212) of Greece toward countries listed on the rows. The colors correspond to the indirect measures of trade imbalances, as computed by the Flow Decomposition Method, with ultimate surpluses in green and ultimate deficits in red. (TIFF) Figure S12: Evolution of the direct and indirect measures of trade imbalances for Hungary. The figures in each cell correspond to direct trade surpluses (+) or deficits (\u2212) of Hungary toward countries listed on the rows.
The colors correspond to the indirect measures of trade imbalances, as computed by the Flow Decomposition Method, with ultimate surpluses in green and ultimate deficits in red. (TIFF) Figure S13: Evolution of the direct and indirect measures of trade imbalances for Ireland. The figures in each cell correspond to direct trade surpluses (+) or deficits (\u2212) of Ireland toward countries listed on the rows. The colors correspond to the indirect measures of trade imbalances, as computed by the Flow Decomposition Method, with ultimate surpluses in green and ultimate deficits in red. (TIFF) Figure S14: Evolution of the direct and indirect measures of trade imbalances for Italy. The figures in each cell correspond to direct trade surpluses (+) or deficits (\u2212) of Italy toward countries listed on the rows. The colors correspond to the indirect measures of trade imbalances, as computed by the Flow Decomposition Method, with ultimate surpluses in green and ultimate deficits in red. (TIFF) Figure S15: Evolution of the direct and indirect measures of trade imbalances for Latvia. The figures in each cell correspond to direct trade surpluses (+) or deficits (\u2212) of Latvia toward countries listed on the rows. The colors correspond to the indirect measures of trade imbalances, as computed by the Flow Decomposition Method, with ultimate surpluses in green and ultimate deficits in red. (TIFF) Figure S16: Evolution of the direct and indirect measures of trade imbalances for Lithuania. The figures in each cell correspond to direct trade surpluses (+) or deficits (\u2212) of Lithuania toward countries listed on the rows. The colors correspond to the indirect measures of trade imbalances, as computed by the Flow Decomposition Method, with ultimate surpluses in green and ultimate deficits in red. (TIFF) Figure S17: Evolution of the direct and indirect measures of trade imbalances for the Netherlands. The figures in each cell correspond to direct trade surpluses (+) or deficits (\u2212) of the Netherlands toward countries listed on the rows. The colors correspond to the indirect measures of trade imbalances, as computed by the Flow Decomposition Method, with ultimate surpluses in green and ultimate deficits in red. (TIFF) Figure S18: Evolution of the direct and indirect measures of trade imbalances for Poland. The figures in each cell correspond to direct trade surpluses (+) or deficits (\u2212) of Poland toward countries listed on the rows. The colors correspond to the indirect measures of trade imbalances, as computed by the Flow Decomposition Method, with ultimate surpluses in green and ultimate deficits in red. (TIFF) Figure S19: Evolution of the direct and indirect measures of trade imbalances for Portugal. The figures in each cell correspond to direct trade surpluses (+) or deficits (\u2212) of Portugal toward countries listed on the rows. The colors correspond to the indirect measures of trade imbalances, as computed by the Flow Decomposition Method, with ultimate surpluses in green and ultimate deficits in red. (TIFF) Figure S20: Evolution of the direct and indirect measures of trade imbalances for Romania. The figures in each cell correspond to direct trade surpluses (+) or deficits (\u2212) of Romania toward countries listed on the rows.
The colors correspond to the indirect measures of trade imbalances, as computed by the Flow Decomposition Method, with ultimate surpluses in green and ultimate deficits in red.(TIFF)Click here for additional data file.Figure S21Evolution of the direct and indirect measures of trade imbalances for Slovakia. The figures in each cell correspond to direct trade surpluses (+) or deficits (\u2212) of Slovakia toward countries listed on the rows. The colors correspond to the indirect measures of trade imbalances, as computed by the Flow Decomposition Method, with ultimate surpluses in green and ultimate deficits in red.(TIFF)Click here for additional data file.Figure S22Evolution of the direct and indirect measures of trade imbalances for Slovenia. The figures in each cell correspond to direct trade surpluses (+) or deficits (\u2212) of Slovenia toward countries listed on the rows. The colors correspond to the indirect measures of trade imbalances, as computed by the Flow Decomposition Method, with ultimate surpluses in green and ultimate deficits in red.(TIFF)Click here for additional data file.Figure S23Evolution of the direct and indirect measures of trade imbalances for Sweden. The figures in each cell correspond to direct trade surpluses (+) or deficits (\u2212) of Sweden toward countries listed on the rows. The colors correspond to the indirect measures of trade imbalances, as computed by the Flow Decomposition Method, with ultimate surpluses in green and ultimate deficits in red.(TIFF)Click here for additional data file.Figure S24Evolution of the direct and indirect measures of trade imbalances for UK. The figures in each cell correspond to direct trade surpluses (+) or deficits (\u2212) of UK toward countries listed on the rows. The colors correspond to the indirect measures of trade imbalances, as computed by the Flow Decomposition Method, with ultimate surpluses in green and ultimate deficits in red.(TIFF)Click here for additional data file."} +{"text": "Retrograde ureteropyelography plays a vital role in imaging of the upper tract, particularly in aiding the diagnosis of upper tract urothelial cancers.Under radiological imaging, a retrograde ureteric catheter is placed in the upper ureter/renal pelvis. The urethral catheter is placed into the urinary bladder, the balloon inflated and gently pulled down, taking care not to displace the ureteric catheter. The tips of the small artery forceps are used to puncture a hole into the distal end of the urethral catheter . The disThis technique allows the ureteric catheter to be secured firmly until the formal retrograde studies are performed in the radiology department. Furthermore, the closed system minimises the risks of introducing infection into the urinary tract."} +{"text": "Recent findings illustrate how changes in consciousness accommodated by neural correlates and plasticity of the brain advance a model of perceptual change as a function of meditative practice. During the mind-body response neural correlates of changing awareness illustrate how the autonomic nervous system shifts from a sympathetic dominant to a parasympathetic dominant state. Expansion of awareness during the practice of meditation techniques can be linked to the Default Mode Network (DMN), a network of brain regions that is active when the one is not focused on the outside world and the brain is restful yet awake has been shown to decrease activation of the DMN as compared to DMN activation in techniques requiring no effort. 
The TM technique utilizes automatic self-transcending to allow the mind to settle to a state of quiescence (Alexander et al., Activation of the DMN has been found to be increased during periods of low cognitive load and decreased under greater executive control. Findings reflect differences in specific meditation practices employed, which may also be attributed to methodological differences in measurement of fluctuating cognitive states (Hasenkamp et al., Recent research illustrates how the neural correlates and plasticity of the brain accommodate changes in consciousness. Neural correlates of expanding awareness during the mind-body response illustrate how the autonomic nervous system shifts from a sympathetic dominant to a parasympathetic dominant state. We have attempted to synthesize the literature in order to advance a model illustrating meditation's dynamic mind-body response, and associations are made with the cardiac and respiratory center, the thalamus and amygdala, the DMN and cortical function connectivity.Synchronization of the hemodynamic response during meditation was shown to lead to inhibition of the limbic system, and deactivation of DMN leads to an increase in cortical functional connectivity. This article provides evidence to support the mechanism of neurophysiological changes during meditation at the cellular level based on neurovascular coupling, and at the global brain activity level from the autonomic response generated by cardiorespiratory synchronization. Future research will benefit from use of a standardized set of psychophysiological variables and imaging protocols, for comparison of studies different kinds of meditation, that would foster clearer understanding of the relationship between the DMN and experiences during meditation (Travis et al.,"} +{"text": "Standard methods including pharmacotherapy, radioiodine and surgery cannot always be applied in treatment of thyroid diseases. The limitations of these methods are: drug intolerance and their side effects, too low radioiodine uptake, high risk of surgery. The obliteration of thyroid arteries seems to be an alternative method which could be used in the stiuation of ineffective standard treatment. It is based on shut down of blood flow in chosen thyroid arteries by injection of embolizative material into the vessels. The consequence of acute ischemia is a septic necrosis of the glandular tissue in a field being supplied by this particular artery. Further repair processes and fibrosis lead to the reduction of an active thyroid hormone synthesis and decrease of thyroid volume. Effects of the embolization on apoptosis induction and modulation of autoimmune reactions are also observed. Preoperative selective embolization of a huge goiter or thyroid cancer improves surgery outcomes, reduces the risk of haemorrhage and damage to surrounding tissue. Palliative use of embolization in advanced stages of thyroid cancer reduces symptoms and improves quality of life. Little invasive nature of this procedure in comparison to surgery, the lack of serious undesirable coincidence makes the embolization of thyroid arteries an attractive form of therapy, which may become a therapeutic option in many difficult clinical situations and may improve the effectiveness of treatment of thyroid disease. The authors present the experiences gained in their institution with conclusions as follows:1. 
As a result of using embolization of thyroid arteries in the treatment of selected thyroid diseases the following observations have been made: a significant reduction in goiter volume with resolution of compression symptoms of adjacent organs, reduction of the concentration of thyroid antibodies characteristic for patients with Graves' disease (TRAb), cure of hyperthyroidism in 71% of patients with thyrotoxicosis. Embolization procedure was not associated with the occurrence of undesirable side effects, such as the clinical symptoms associated with increased concentration of free thyroid hormones, and induction or exacerbation of pre-existing autoimmune thyroid disease.2. Thyroid artery embolization had no significant influence on activity of the parathyroid glands, regardless of the number and quality of closed vessels.3. Thyroid artery embolization is an effective and safe treatment of selected thyroid diseases as an alternative to conventional forms of therapy."} +{"text": "Diflavin reductases are essential proteins capable of splitting the two-electron flux from reduced pyridine nucleotides to a variety of one electron acceptors. The primary sequence of diflavin reductases shows a conserved domain organization harboring two catalytic domains bound to the FAD and FMN flavins sandwiched by one or several non-catalytic domains. The catalytic domains are analogous to existing globular proteins: the FMN domain is analogous to flavodoxins while the FAD domain resembles ferredoxin reductases. The first structural determination of one member of the diflavin reductases family raised some questions about the architecture of the enzyme during catalysis: both FMN and FAD were in perfect position for interflavin transfers but the steric hindrance of the FAD domain rapidly prompted more complex hypotheses on the possible mechanisms for the electron transfer from FMN to external acceptors. Hypotheses of domain reorganization during catalysis in the context of the different members of this family were given by many groups during the past twenty years. This review will address the recent advances in various structural approaches that have highlighted specific dynamic features of diflavin reductases. Flavoproteins represent 1%\u20133% of all proteins present in prokaryotic and eukaryotic genomes and abouThis diflavin reductases family includes the NADPH-cytochrome P450 reductase , the flavoprotein subunits of bacterial sulfite reductase and of the mammalian methionine synthase reductase , the reductase part of the nitric oxide synthase and of cytochrome P450-BM3 and the novel reductase 1 cytoplasmic protein (NR1).c (cyt c) reductase . A. A130]. ntal CPR .More recently, the analysis of chimeric CPR combining domains from yeast and human CPR highlighted the role of interdomain interface and recognition in diflavin reductase activity and conformation . The eleInterestingly, the two opposite chimeras exhibit kinetic behaviors analogous to those of the rat nNOS mutants in which the modification of a single residue, Arg1229, located on the interface between the flavinic domains, destabilized the interdomain recognition by disrupting one specific electrostatic interaction . Like chThese chimeras combining the three NOS isoforms, different CPR or the NOS and the CPR clearly demonstrate the capacity of distant diflavin reductases to exchange individual domains while totally or partially preserving their catalytic activities. 
CPR and NOS, and probably other diflavin reductases, are not only sharing high structural homologies. The presence of acceptable ET capacities in these different chimeras prove that, despite the architectural complexity of the regulatory elements present in NOS but absent in the other members of the family, the ET mechanisms of all diflavin reductases may involve the same structural reorganization steps. It seems then reasonable to propose that the dynamical events hypothesized from the various biochemical analyses of NOS and CPR may as well exist in all other diflavin reductases.The above-mentioned results did not constitute a solid proof of the dynamical behavior of diflavin reductases, but they eventually cast a strong support of the existence of domains movements. In addition, the parameters controlling those movements, if partially understood in the case of the NOS, remained totally unknown for the other members of the diflavin reductase family ,92. Someet al.[236TGEE239, PDB ID: 3ES9), crystallized with three molecules in the crystallographic cell. As illustrated in c reductase activity, most probably due to the incapacity of the catalytic domains to come to a position competent for interflavin ET. Consequently, the detection of this new conformation on an incompetent form of the enzyme raises the question about its existence in the course of the catalytic cycle.In 2009, two groups published novel crystallographic structures of mutant or chimeric CPR demonstrating the existence of alternate conformations different than the hitherto unique closed conformation ,137. Thec), but nevertheless remained capable of reducing other artificial and natural acceptors such as ferricyanide and P450-3A4 with a complete conservation of the electron pathway. Interestingly, the backbone of YH is almost completely conserved (RMSD 0.9 \u00c5 and 0.46 \u00c5 for the FAD and FMN domains respectively) except for the linker domain for which only a limited number (3) of residues have their Psi and Phi angles completely changed. An analysis of the interdomain interface of YH CPR [On the other hand, our group published the structure of one of the abovementioned yeast-human chimeric CPR (YH) values and predicted RDC values using the closed structure. This NMR study unambiguously proved that the CPR adopts the same conformation in solution and in the crystal environment. Interestingly, we also showed the high flexibility of the linker region between the FMN and connecting domain in the nanosecond timescale. The CPR can thus be described as two beads (FAD and FMN domains) forming a stable interface bridged by a flexible string. Our results are in sharp contrast with the previous SAXS/NMR analysis. This might be explained by slightly different (but yet unidentified) experimental conditions. Since the CPR enzyme used for the NMR study is fully active, we postulated that the opening of the structure required in the course of the reaction must be provoked by some (yet unknown) molecular trigger such as substrate binding or flavin reduction.Lastly, our groups revisited the solution properties of CPR by NMR . This la+) binding while open states are favored in the absence of the bound coenzyme. However, ELDOR spectroscopy describes conformational distributions exhibited by the proteins just before their internal motions are quenched by rapid freezing at 80 K. 
Therefore, the observed distance distribution may not represent distances between points on heavy atoms, but rather a weighted mean of the unpaired electron spin density distribution over the flavosemiquinones in absence of domain mobility. Nonetheless, the distance variations observed upon NADP+ binding and salt effects were in agreement with the SAXS experiments [Analysis of CPR dynamics using electron-electron double resonance (ELDOR) allowed a fine description of the conformational sampling of human CPR in the di-semiquinone state (one electron reduced FMN and one electron reduced FAD) . The auteriments .+-bound CPR are in good agreement with the observed distances in the available closed crystal structures and wit. The la that converge towards the existence of a wide conformational space that the human CPR may sample in solution. Furthermore, the energy landscape appears to be largely influenced by substrate binding or flavin reduction, which may reveal the complex domain rearrangement during the catalytic reaction. However, despite the huge recent progress, a thorough characterization of the CPR is still lacking that may include a better understanding of the structure and dynamics of the extended conformation and its interactions with partners and a more precise description of the time-dependent conformational change during the catalytic cycle. Similarly, the concepts developed for the CPR will have to be tested for the other related proteins.kInt and that the deshielded state is the only one competent for external ET (rate constant kExt). Using such a simple kinetic model linking for the first time conformational sampling and electron transfer they have been able to reproduce some experimental data on NOS. For example, this model explains how the slow conformational equilibrium, and hence the supposed conformational motion of the FMN domain, directly controls the intensity of the slow and fast phase of FAD reduction and the electron flux to cyt c[versus the fraction of (de)shielded state which nicely fits experimental data obtained on various nNOS forms [+ release steps are all integrated in a single step (kInt). In addition, there are experimental evidences for significant effects of flavin reduction and cofactor binding on the conformational equilibrium, which are not taken into account in the model. It seems then possible to improve the kinetic model at the expense of additional data collection for validating the increased number of elementary steps. There is no doubt that future investigations on the coupling between conformations sampling and kinetic properties of diflavin reductase will provide invaluable clues on the microscopic behavior of the enzyme at work.Different models have been proposed in the past to describe the successive electrons transfers ,144, how to cyt c. In addiOS forms . HoweverStarting from the first published CPR structure, in 1997, the scientific community has tracked avidly alternate architectures of diflavin reductases that would be compatible with ET to external acceptors. We have now plenty of evidences that, at least for CPR and NOS, reorientation of the FMN and FAD domains occurs at some stage during the electronic cycles. The alternative model described with the yeast CPR and involving the rotation of the FMN cofactor instead of the FMN domain seems not to fit with the very recent evidences of domain movements in human and rat CPR. As of today, the model involving conformational changes is really favored. 
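The two-state scheme sketched above, in which the FMN domain samples a "shielded" conformation that alone permits interflavin electron transfer (kInt) and a "deshielded" conformation that alone permits external electron transfer (kExt), can be made concrete with a small numerical model. The following sketch is purely illustrative: the rate constants, the assumption that the FAD side can always re-reduce the FMN, and the neglect of NADP+ binding and flavin thermodynamics are simplifying assumptions for this sketch and are not taken from the studies discussed here.

```python
# Illustrative two-state kinetic sketch (assumed parameters, not from the cited work):
# the FMN domain exchanges between a "shielded" (closed) and a "deshielded" (open)
# conformation; interflavin ET (k_int) only occurs when shielded, external ET to an
# acceptor such as cyt c (k_ext) only when deshielded.
import numpy as np
from scipy.integrate import solve_ivp

k_open, k_close = 20.0, 40.0   # s^-1, conformational exchange rates (assumed)
k_int, k_ext = 55.0, 100.0     # s^-1, internal and external ET rates (assumed)

def rhs(t, y):
    c_ox, c_red, o_ox, o_red, delivered = y
    return [
        -k_open * c_ox + k_close * o_ox - k_int * c_ox,    # shielded, FMN oxidized
        -k_open * c_red + k_close * o_red + k_int * c_ox,  # shielded, FMN reduced
        k_open * c_ox - k_close * o_ox + k_ext * o_red,    # deshielded, FMN oxidized
        k_open * c_red - k_close * o_red - k_ext * o_red,  # deshielded, FMN reduced
        k_ext * o_red,                                      # cumulative electrons to acceptor
    ]

y0 = [k_close / (k_open + k_close), 0.0, k_open / (k_open + k_close), 0.0, 0.0]
sol = solve_ivp(rhs, (0.0, 0.5), y0, max_step=1e-3)

print(f"deshielded fraction at 0.5 s ~ {sol.y[2][-1] + sol.y[3][-1]:.2f}")
print(f"electrons delivered per enzyme in 0.5 s ~ {sol.y[4][-1]:.1f}")
```

Shifting the conformational equilibrium in such a toy model (for example by raising k_close relative to k_open) lowers the steady-state electron flux to the acceptor even when both ET steps are intrinsically fast, which is the qualitative behavior that the conformational-sampling models described above are meant to capture.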
The fact that several structural organizations were characterized by different methods seems to indicate that diflavin reductases may exist as a mixture of some discrete or continuous conformations in equilibrium. This equilibrium appears to be easily disturbed by mild change in working conditions, at least for the oxidized form of the CPR, suggesting that great care should be taken when comparing structural studies carried out in different laboratories. As a result the published crystallographic structures probably represent the most populated conformations depending on the enzyme considered (wild-type or mutant). Even if these crystallographic structures do not correspond to highly populated conformers in solution, they still demonstrate that reorientation of the catalytic domains occur with almost no change within the backbone of the domains themselves. This is also true for the connecting domain of CPR for which no specific role has yet been assigned. This connecting domain, as well as the various regulatory domains of NOS or MSR having more complex catalytic regulation, probably plays a very important role in the sensing mechanisms and in the control of the domains structural reorganization. The surfaces of these regulatory domains, being in direct contact with both catalytic domains, are probably the place where this information exchange takes place.That being said, we still lack information on the possible sensing mechanisms and how they are transmitted within the enzyme prior any structural reorganization. The involvement of cofactors and redox potential were described. Whether these factors are involved in changing existing conformational equilibria (by selectively binding to and lowering the energy of extended conformations) or directly impact structural points that could trigger domain reorganization is still under debate and will probably be the subject of intense and ardent research in the next years. Another open question concerns the role of the physiological acceptors in the control of these movements. From the simpler model of CPR to the more evolved NOS, the regulation of ET in diflavin reductases is of crucial importance for the enzyme and most probably to avoid electron leakage to oxygen and the consequent production of toxic ROS. It is likely that the presence or absence of these acceptors drive the equilibrium between the various conformations and therefore may also constitute some of the necessary triggers. Such propositions will probably constitute future research directions in the field of diflavin reductase mechanisms."} +{"text": "This continued growth has resulted in environmental problems such as coastal wetland loss, habitat degradation, and water pollution, gas flaring, destruction of forest vegetation as well as a host of other issues. This underscores the urgent need to design new approaches for managing remote costal resources in sensitive tropical environments effectively in order to maintain a balance between coastal resource conservation and rapid economic development in developing countries for sustainability. Notwithstanding previous initiatives, there have not been any major efforts in the literature to undertake a remote sensing and GIS based assessment of the growing incidence of environmental change within coastal zone environments of the study area. 
This project is an attempt to fill that void in the literature by exploring the applications of GIS and remote sensing in a tropical coastal zone environment with emphasis on the environmental impacts of development in the Niger Delta region of Southern Nigeria. To deal with some of the aforementioned issues, several research questions that are of great relevance to the paper have been posed. The questions include: Have there been any changes in the coastal environment of the study area? What are the impacts of the changes? What forces are responsible for the changes? Has there been any major framework in place to deal with the changes? The prime objective of the paper is to provide a novel approach for assessing the state of coastal environments, while the second objective seeks a contribution to the literature. The third objective is to provide a decision support tool for coastal resource managers in the assessment of environmental impacts of development in tropical areas. The fourth objective is to assess the extent of change in a tropical ecosystem with the latest advances in geo-spatial information technologies and methods. In terms of methodology, the paper draws from primary and census data sources analyzed with descriptive statistics, GIS techniques and remote sensing. The sections in the paper consist of a review of the major environmental effects and factors associated with the problem, initiatives and mitigation measures. The project offers some recommendations as part of the conservation strategies. In spite of concerted efforts by managers to address the problems, results reveal that the study area experienced some significant changes in its coastal environments. These changes are attributed to socio-economic and environmental variables. In the last decades, the Niger Delta region has experienced rapid growth in population and economic activity with enormous benefits to the adjacent states and the entire Nigerian society. As the region embarks upon an unprecedented phase of economic expansion in the 21st century. In the last few decades, the Niger Delta region has experienced rapid growth in population and economic activity with enormous benefits to the adjacent states and the entire Nigerian society. From the time vast oil deposits were discovered in commercial quantities in the Delta in 1956 to the present, the Nigerian state profited immensely from the oil fortunes of the region. Since then the area has grown to become the source of foreign exchange earnings for the country. Starting from the fiscal year 1975 to the new century, oil from the Delta region has on the average accounted for more than 90 percent of Nigerian exports and about 80% of government revenues as of December 1981. Although oil production activity in the Delta has carved a remarkable economic landscape for the country with an enormous contribution to foreign exchange earnings, there is, however, a negative side. Petroleum exploration has triggered adverse environmental impacts in the Delta through incessant environmental, socioeconomic and physical disasters that accumulated over the years due to limited scrutiny and lack of assessment. Elsewhere, the United States Department of Energy (USDOE) shows that since the inception of oil and gas activities, severe environmental pollution in the Niger Delta involving uninterrupted gas flaring and oil spillage remains rampant in the region. According to the US DOE, the area has experienced 4,000 oil spills since 1960.
One of the most noticeable impacts of the various oil spills and production activities has been the loss of mangrove trees. The mangrove was once a source of both fuel-wood for the local people and habitat for the area\u2019s biodiversity, but now it is unable to withstand the high toxicity levels of the petrochemicals ravaging its habitat. The spills have had adverse effects on marine life, which has become contaminated, and that in turn poses enormous risks to human health from consuming contaminated seafood. The state of the Niger Delta\u2019s environment has attracted national and international attention due to the enormous deposits of oil and gas, and the resultant impacts coupled with a controversial revenue sharing formula that has impoverished the communities. Because of massive exploitation of oil, the ramifications for human health, local culture, indigenous self-determination and the environment are severe. Accordingly, the economic and political benefits are given more emphasis at the expense of environmental health. Thus, as the region embarks upon an unprecedented phase of economic expansion in the 21st century, it faces several environmental challenges fuelled partly by the pressures caused by human activities through oil and gas exploration, housing development, and road construction for transportation, economic development and demographic changes. This paper stresses a mix scale approach involving the integration of primary and secondary data provided through government sources and databases from other organizations. The raw spatial data and satellite images used in the research came from the United States National Aeronautics and Space Administration (NASA). To answer some of the research questions pertaining to the study area, the spatial data were analyzed with descriptive statistics and remote sensing technology. The first step involves the identification of the variables needed to assess environmental change at the regional level. The variables consist of socioeconomic and environmental information, including the amount of cropland, human settlement, water bodies, forest types, and population. Two Landsat Thematic Mapper (TM) and Enhanced Thematic Mapper Plus (ETM+) images of 4 May 1985 and 12 June 2000 were obtained for this study. The Landsat TM and ETM+ satellite data were processed using ERDAS IMAGINE 8.7 image processing software. The images were imported into ERDAS using the ERDAS native file format GEOTIFF. Since the images were in single bands, they were stacked together using the ERDAS layer stack module to form a floating scene. The 1985 image was co-registered with the year 2000 image and later geo-linked to allow for subsetting of both images to the study area. Enhancement of all the images using histogram equalization techniques was later performed. The images were classified using an unsupervised classification technique to identify land cover features within the study area (a minimal scripted sketch of this classification step is given below). The remaining procedure involves spatial analysis and output (maps-tables-text) covering the study period, using ARCVIEW GIS. The spatial units of analysis consisted of the states located in the Delta region. This section presents the results of the data analysis by first providing a brief synthesis of the descriptive statistics and geospatial analysis of the assessment.
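As a rough, scripted analogue of the unsupervised classification step described above, the sketch below clusters the pixels of a layer-stacked multi-band scene into land-cover classes. It is not the ERDAS IMAGINE/ArcView procedure actually used in the study: the file names, the number of classes, and the use of the rasterio and scikit-learn libraries are assumptions made only for this illustration.

```python
# Minimal sketch: unsupervised land-cover classification of a stacked multi-band
# Landsat subset (an illustrative stand-in for the ERDAS IMAGINE workflow above).
import numpy as np
import rasterio
from sklearn.cluster import KMeans

def classify_scene(path, n_classes=6, seed=0):
    """Cluster the pixels of a multi-band GeoTIFF into n_classes land-cover groups."""
    with rasterio.open(path) as src:          # e.g. a layer-stacked TM/ETM+ subset
        stack = src.read()                    # array of shape (bands, rows, cols)
    bands, rows, cols = stack.shape
    pixels = stack.reshape(bands, -1).T.astype("float32")   # one row per pixel

    labels = KMeans(n_clusters=n_classes, random_state=seed).fit_predict(pixels)
    classified = labels.reshape(rows, cols).astype("uint8")
    counts = np.bincount(classified.ravel(), minlength=n_classes)
    return classified, counts

# Hypothetical file names for the two dates used in the study:
# cls85, counts85 = classify_scene("landsat_tm_1985_subset.tif")
# cls00, counts00 = classify_scene("landsat_etm_2000_subset.tif")
# change = counts00 - counts85   # crude per-class change summary between the two dates
```

Per-class pixel counts, multiplied by the pixel area, give the kind of land-cover change summaries that the results in this section draw on.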
Later, it highlights the factors associated with environmental change in the study area and the frameworks in place to reverse the trends. The results of the 1985 and 2000 classified images are presented in the accompanying figures and tables. The frequency of oil spills from fire disasters, quite rampant in the area, has led to the death of thousands of community residents, contamination of water, explosions and destruction of vegetation and the freshwater ecosystem. The last decade has witnessed the influx of people to the area in search of oil jobs and other economic opportunities, causing degradation of most of the vital environmental resources. Over the period of 1975 to date, more than 90 per cent of the nation\u2019s export earnings have, on average, been generated from the region\u2019s oil resources. The first systematic surveys of the Delta\u2019s flora and fauna conducted during the past decade showed that the forests and the animal populations of the Delta are under serious stress. The area\u2019s second most important timber species (Abura), once common in the region, has been removed by logging activities. Widespread disturbance of the Delta\u2019s remaining forests originates from a rapidly growing Nigerian population as well as the proliferation of development infrastructure that makes it easier to access the remaining areas of swamp forest. Considering the limited efforts in protected area planning in the Delta, the rapid levels of destruction epitomize a bleak future awaiting its diverse habitats and species. The framework for oil operations in Nigeria is stipulated in the Petroleum Act and other relevant legislation. This includes the Oil in Navigable Waters Act of 1968, the Oil Pipelines Act of 1956 as well as the Associated Gas Act of 1979, and the Petroleum (Drilling and Production) Regulations of 1969, promulgated under the Petroleum Act of 1988. Since 1988, the Federal regulations promulgated through the Federal Environmental Protection Agency (FEPA) govern environmental activities in the oil sector and other industries. The Department of Petroleum Resources (DPR) has also formulated various environmental guidelines and standards for the Petroleum Industry in Nigeria. Under the existing laws, oil companies are mandated to use precautions, including the provision of state-of-the-art technologies to prevent pollution, and to act appropriately in eradicating the problems arising from petroleum. Part of the oil sector\u2019s legal obligations under the Environmental Impact Assessment (EIA) Act of 1992 requires an EIA on the location of a proposed project that is likely to significantly affect the environment. Previous policy efforts in economic development planning in the region have had a local flavour for years. This emphasis took for granted the socio-economic and ecological impacts of development programs on the ecosystem of adjoining states, counties and communities where oil companies operated. Because of the gravity of these problems that accumulated over the years in the Delta area, the federal government initiated a series of regional development programs to meet the challenges. The Niger Delta Development Commission (NDDC) that emerged from the process provides a platform for the focused development of the region.
The NDDC\u2019s community level process consists of three main parts: state-wide awareness and capacity building workshops to identify community problems and formulate projects for solving them, followed by community consultation and implementation, supported by a network of community research centers. In spite of the frameworks in place, the results not only reveal that the study area experienced some significant changes in its coastal environments, but also that the region remains an ecosystem under stress. The nature and extent of this change showed some variations across time and space. The changes attributed to socioeconomic and environmental variables also reflect a host of other factors. Overall, the results point to a decline in water bodies and mangrove forests, and an increase in human settlements, mixed forests, cropland and agricultural intensification, as well as several cases of oil spillages which posed a major threat to the environment and natural systems of the region. Other interesting findings touch on an impending population explosion in the coastal region of the Niger Delta in Southern Nigeria. Of particular concern is the growth rate in the Niger Delta and the massive concentration of human settlements in the Port Harcourt area of Rivers state. This trend is gradually turning the Delta region into an ecological time bomb waiting to explode. This will not only threaten the carrying capacity of an already fragile ecosystem but also pose enormous challenges for both environmental and natural resource managers and policy makers in the region if not confronted with the urgency it deserves. In light of this finding, the practical use of a mix scale approach involving primary and census data, GIS and remote sensing in tracking coastal environmental change stands as an update to the current literature on coastal resources management in the Niger Delta of Nigeria. Considering the limited efforts in the past to assess the Delta ecosystem, GIS technology as used in this paper has fulfilled a useful purpose for the storage, manipulation and mapping of coastal data with a spatial reference. It also stands as an effective tool for coastal resource management. Integrated data analysis using remotely sensed satellite imagery and GIS modeling facilitated the analysis of the spatial distribution of environmental change involving land use, land cover classification, forest and hydrology and demographic issues facing the Niger Delta environment. Such information technology is highly indispensable for decision makers in Nigeria as they grapple with the future of development activities along the Delta ecosystem in the 21st century. Four recommendations for conservation strategies and environmental protection are listed below. The series of controversies surrounding the ecological decline of the Niger Delta have been partly attributed to several factors. Considering the role of policy lapses, inaction towards the environmental safety of the communities, and inequity, protecting the coastal environment will require active community input and equitable distribution of natural resource revenues. Thus, conserving the coastal environment of the Niger Delta demands an understanding of the socio-economic needs of local communities who are constantly faced with the threat of a poisoned environment triggered by development activities.
Under this setting, the authorities in Nigeria, the oil and gas industry and others should encourage the implementation of a participatory approach in matters associated with oil and gas activities during the leasing process, so that those communities closest to the problems can have a say on economic development matters and decisions likely to affect their local environment. This could also be achieved through meetings and training sessions on the assessment of the major environmental and social problems facing the Delta region, with the participation of local universities, non-governmental organizations and the communities themselves, as well as the oil sector and government agencies. The development activities along the Niger Delta region have for several decades escaped the rigorous environmental scrutiny required of any fragile ecosystem in advanced nations of the world. Surprisingly, the petroleum sector of the economy undertook oil exploration without proper environmental impact assessments, a situation that is untenable in the advanced countries that are home to the major oil companies. To deal with these anomalies, environmental impact assessment of oil and gas development projects in the region should be required before the commencement of oil and gas activities. This approach can be developed by comparing the future costs and benefits of projects on the environment before approval. The proposed model places the highest priority on sets of criteria and objectives in order to gauge how the proposed activities can impact wildlife, natural habitats, hydrology, soil, wetlands and the social environment. This EIA process also requires the classification of preferred alternatives to reduce identified risks to the ecosystem of the Niger Delta. While the EIA process as suggested here is frequently used in advanced nations, it is a valuable benchmark for gauging where policy interventions can be most efficiently directed given the scanty resources and support system existing in the Niger Delta. An integrated coastal resource management approach is needed to address such a broad range of social and environmental issues facing the Delta in a sustainable manner. Integrated coastal zone management implies a holistic planning and coordinating process capable of guaranteeing that the large economic and social benefits from resources in the Niger Delta are not dissipated by environmentally destructive policies. ICZM represents an ecologically and socially sensitive approach to environmental management with a major divergence from the more traditional technocratic rational planning models that have proven to be out of touch and ineffective in dealing with the complexities associated with coastal problems. To accomplish its purposes, ICZM builds from several actions at the national and regional level as part of an action plan to correct past environmental degradation and to modify current activities that are environmentally harmful. These include the establishment of an appropriate policy framework to support coastal resources management and environmental conservation. Current attempts to assess the state of the environment and the environmental stewardship of economic development along the Niger Delta ecosystem are often handicapped by the lack of complete access to a comprehensive regional environmental information system.
The design of a regional environmental information system on the ecology of the Niger Delta will serve as a decision support tool for policy makers, the oil sector and researchers by facilitating the data collection capabilities of users and access to a state-of-the-art technical infrastructure of relevance to the management of the coastal zone. The expectation is that such an information system will help in displaying the interactions between the fragile ecosystem of the region and human activities and then sharpen the response mechanisms in dealing with the problems. It has the potential to offer a viable system for reviewing and implementing new coastal zone development projects as well as the development of effective technical infrastructure to oversee environmental monitoring of the coastal zone on a permanent basis. In this case, the paper recommends the design of a regional information system to stem the ongoing environmental decline of the region, as proposed by Merem. This project has explored the applications of GIS and remote sensing in a tropical coastal zone environment with emphasis on the environmental impacts of development in the Niger Delta region of Southern Nigeria. The paper presented a vivid overview of the issues in the literature, a review of the major environmental effects and factors associated with the problem, initiatives, and mitigation measures. Notwithstanding previous initiatives, there has not been any major effort in the literature to undertake a remote sensing and GIS based assessment of the growing incidence of environmental change within coastal zone environments of the study area. In spite of concerted efforts and initiatives to address the problems, results reveal that the study area experienced some significant changes in its coastal environments. These changes are attributed to socio-economic and environmental variables and a host of other factors. The results point to a decline in water bodies and mangrove forests, and an increase in human settlement, mixed forests, cropland and agricultural intensification, as well as several cases of oil spillages. The other interesting findings touched on the potential for rapid population growth in the region and its implications. This will not only threaten the carrying capacity of an already fragile ecosystem, but it poses enormous challenges for environmental and resource managers and policy makers in the region if not confronted with urgency. To deal with these problems, the project offers some recommendations as part of the conservation strategies for the region. The recommendations consist of a participatory approach, periodic assessment, coastal zone management and the design of a regional information system. The practical use of a mix scale approach involving GIS and remote sensing tools for the assessment of environmental change provided some interesting results for coastal resources management in the Niger Delta. Moreover, it is evident that GIS technology as used by scientists for the storage, manipulation and mapping of data with a spatial reference stands as an effective tool for resource management. Using remotely sensed satellite imagery and GIS modeling quickened the analysis of the spatial distribution of environmental change involving land use, land cover classification, forest and hydrology and demographic issues facing the Niger Delta.
In closing, it is our belief that successful implementation of some of the strategies could lead to effective management of the coastal environment in the Niger Delta region. Furthermore, the paper serves as an essential tool for the design of geo-spatial decision support systems for coastal resource managers in the assessment of environmental impacts of development in tropical areas. This is highly indispensable if the Niger Delta is to recover from decades of environmental decline inflicted on the area by various development activities as we move further into the opening decades of the 21st century."} +{"text": "The aim of this study is to report the control of lymphorrhea in the intensive treatment of elephantiasis, using an Unna boot. The case of a 29-year-old female patient is reported. This young patient evolved with the most severe form of lymphedema, elephantiasis, after surgical treatment of an abdominal neoplasm and radiotherapy. Warty excrescences were present on both legs and genitalia, where lymphorrhea was constant. The patient arrived at Godoy's Clinic for treatment. She was weighed and perimetric evaluations were made at the start of treatment and thereafter every day during an intensive outpatient treatment of eight hours daily for three weeks. Treatment included manual lymph drainage, mechanical lymph drainage using the RA Godoy device, and the continuous use of compression stockings with adjustments made every three hours. An Unna boot was employed as compression at sites of dermal lesions (warty excrescences) with the overlapping use of individually adapted compression stockings. The Unna boot was renewed every two days during the first week and every 3 days during the second and third weeks. By the end of this course of treatment, most of the warty excrescences had reduced in size or even disappeared and the lymphorrhea was controlled. Lymphedema usually affects poor populations; there is no cure and little prospect of therapies being developed by the private health sector. This situation is aggravated in less developed countries, where the lack of government resources and specialized health care professionals has led to the marginalization of this disease. An association of therapies, which generally includes manual lymph drainage, compression therapy, exercises, and hygienic care, is recommended for the treatment of lymphedema. Intensive treatment of lymphedema, which offers the possibility of rapid control of swelling, has been reported in the literature. The aim of this study is to report on the use of an Unna boot that allowed the use of an associated compression mechanism with a resulting faster reduction in leg volume, thereby offering a new perspective in the treatment of warty excrescences and lymphorrhea in this most severe form of lymphedema. We report the case of a 29-year-old female patient with lymphedema of the lower limbs. The patient arrived at the Clinica Godoy in Sao Jose do Rio Preto for treatment in January 2011. The patient reported that the lymphedema started at the age of 12 or 13 years, when she underwent surgery to remove a tumor in the abdomen, which started with pain and a diagnosis of appendicitis. During surgery a lymphoma was identified. After surgery, the patient underwent chemotherapy and radiotherapy sessions; she does not remember how many sessions due to her age at that time; her mother, who accompanied her, has passed away.
The patient reported that the swelling began in the thigh region and spread to the feet; initially the edema got better with rest, but eventually this improvement was no longer noted. Her vascular physician at the time prescribed lymph drainage, pressure therapy, and elastic compression hosiery. Even with treatment she noted that the edema increased and fibrosis developed in the abdominal region, and eventually she abandoned treatment. With time the edema worsened further and warty excrescences began to develop on both legs and the genitalia, with constant discharge of lymph. The patient was weighed and perimetric evaluations were made at the start of treatment, and it was noted that the patient had difficulty moving her legs. Treatment consisted of an intensive program with mechanical lymph drainage using the RAGodoy apparatus, Godoy and Godoy manual lymph drainage, and compression therapy. The RAGodoy is an electromechanical device that performs passive movements of the ankle joint (dorsiflexion and plantar flexion), adapted to the treatment of lymphedema. This device promotes deep lymphatic drainage, and the Godoy and Godoy lymphatic drainage technique performs manual compression followed by sliding over the lymphatic collectors to the corresponding lymph nodes. The Unna boot (Unnaflex) is an elastic bandage composed of zinc dioxide (which does not become stiff), glycerin, starches, castor oil, and white petrolatum. It adapts to the contour of the leg, stretching softly and remaining flexible, and is applied in the same way as a bandage, in spiral movements. Unna boots were employed on both legs for protection and compression at the sites of dermal lesions (warty excrescences), with overlapping individualized low-stretch compression stockings made from a cotton-polyester fabric and adapted to take the deformities into account. Daily assessments of body weight and leg perimeter were made. The Unna boot was renewed every two days during the first week and every 3 days during the second and third weeks. The boot was employed for three weeks until most of the warty excrescences had reduced in size or even disappeared and the lymphorrhea was controlled (Figures 1 and 2). Major deformities and the skin of the patient were monitored monthly. The patient was advised continuously about the need for hygienic care, to control the edema, and about the normalization of skin lesions. This study describes an alternative for treating major deformities where warty excrescences and lymphorrhea are aggravating factors. The use of compression without skin protection in these patients is associated with infection and worsening of the condition. Friction between the skin and the compression mechanism, without protection using creams, is associated with excoriation and infection. Recently an Unna boot has been employed in these cases, as it protects against infections and allows a reduction in the volume of lymphedema, with most warty excrescences disappearing during treatment. Even large lesions generally disappear spontaneously; when they do not, they can be eliminated by cauterization. In this case the approach was possible because the lesions were isolated; however, when there are several lesions together, the friction between them may result in infections due to the development of skin injuries. The compression exerted by Unna boots is nonelastic and so useful in the treatment of lymphedema. The patient was prescribed penicillin-based antibiotics during this period as prophylaxis.
This intensive approach allows large volume reductions of the legs; this patient lost about 31 kilos in 10 days of treatment. Thus, this quick reduction in weight helps to avoid injuries related to the warty excrescences, which initially become flat and gradually disappear. This clinical approach reduced the necessity of surgical interventions for these excrescences, controlling lymphorrhea and protecting against the use of nonelastic compression mechanisms. We concluded that the Unna boot is an option for the protection of warty excrescences and the control of lymphorrhea in patients with elephantiasis during the treatment of edema."} +{"text": "The surgical management of breast cancer has evolved over the years from extensive radical mastectomy to breast conservation surgery. Until the introduction of the sentinel lymph node biopsy (SLNB), all patients with invasive breast cancer would undergo complete axillary lymph node dissection (ALND) and thus be at risk of suffering from its associated high morbidity. SLNB has become the standard of care and represented significant progress toward reducing the invasive procedures for the management of the axilla. The most recent clinical trials (NSABP-B32 and ACOSOG Z0011) performed in patients that underwent SLNB support this procedure as an accurate predictor of the risk of further axillary node involvement and of breast cancer recurrence. Additionally, the ACOSOG Z0011 trial challenged the standard of practice in the management of the axilla, in which ALND is mandated for all the patients with a positive sentinel node. A similar outcome was demonstrated for a selected group of patients (treated with breast conserving surgery and radiotherapy) with positive SLNB then followed by ALND or SLNB only. In light of these results, the role of ALND in the current management of breast cancer is being reevaluated for specific patient subpopulations. In this special issue, this timely subject is provocatively reviewed in addition to other relevant topics, such as the controversial meaning of the presence of micrometastasis and isolated tumor cells in the SLN in relation to local recurrence and overall survival, and the feasibility of performing SLNB after neoadjuvant treatment and transaxillary breast augmentation. Despite the advances in lymphatic mapping and in the intraoperative methods for SLN analysis, the accurate identification of tumor cells in this node continues to be a challenge in clinical practice, and significant false-negative results in SLNB are still observed. In a research study published in this issue, a preferential cellular distribution of the malignant cells in the SLN is reported, suggesting that pathological analysis directed to this area may contribute to a more precise identification of nodal metastasis. Additional progress in this direction involves the development of molecular markers, which would tackle not only the misdiagnosed SLN-negative patients but also the ones with a low risk of recurrence that are unnecessarily submitted to SLNB. In this sense, the Bayesian-based nomogram developed by Westover et al. is proposed to be particularly useful, especially in cases where the SLNB assessment is predicted to be less sensitive. This nomogram would also lead to the identification of individuals at high risk for recurrence, based on the calculation of residual axillary disease risk, despite a negative SLNB.
Several new reports, mostly based on gene expression profiling, have suggested that the different rates of recurrence can be due to the distinct molecular types of breast cancer. The development of a genomic signature that effectively discriminates patients by lymph node status, such as the one proposed by Ellsworth et al., could stratify patients based on their need for surgical evaluation of the lymph nodes, sparing the ones in which disease will probably be limited to its primary site. The assessment of specific tumor markers in the lymph nodes is also discussed in this special issue, including a review of the prognostic and/or predictive implications of lymph node metastasis in tumors with elevated levels of CXCR4 (a chemokine receptor protein) and VEGF-C. From the Halsted radical mastectomy to the commercial gene expression profiling tests, axillary lymph node management and recurrence prediction are still evolving topics for patients with breast cancer. The continued improvement of molecular tumor profiling and bioinformatics from larger and better defined patient cohorts will certainly provide answers to many challenging questions regarding the axillary metastatic process. The clarification of this complex molecular mechanism and the identification of novel and integrative molecular markers that can reliably predict lymph node involvement affecting the risk of recurrence and survival will continue to form the basis of the contemporary approach for breast cancer management, where an early prediction of axillary metastasis and a personalized cancer treatment can be achieved. Luciane R. Cavalli, Rachel E. Ellsworth, Christoph Klein, Giuseppe Viale"} +{"text": "Use of ultrasound, introduced as part of intensive care therapy, makes bedside invasive procedures and diagnosis viable. Due to its portability, combined with team training, its use ensures fewer complications related to insertion, as well as patient safety. It also reduces severe catheter-related complications, such as pneumothorax. The aim of this study was to evaluate the accuracy of ultrasound-guided venous catheter insertion in a low-cost phantom among third-year medical students compared with experienced doctors and medical residents. We evaluated the success rate of insertion, the number of puncture attempts and the time from contact of the needle with the surface of the phantom to its correct placement in the vein. Study participants were 25 undergraduate students of medicine (third year) participating in the curriculum of emergency medicine and intensive care, nine medical residents and nine critical care physicians. All participants had no previous experience with ultrasound-guided procedures, and the medical students had no previous experience with central venous access puncture. A 2-hour lecture on ultrasound-guided venous cannulation was given prior to the study. Evaluation of the average time between groups was performed by ANOVA on rank-transformed data, due to lack of homogeneity of variance, with the Tukey test for multiple comparisons (a minimal sketch of this analysis is given below).
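A minimal scripted sketch of the statistical comparisons just described (one-way ANOVA on rank-transformed insertion times with Tukey's post hoc test), together with the Spearman correlation described in the next sentence, might look as follows. The group sizes mirror the study, but the data values and the experience variable are hypothetical placeholders, not the study's data.

```python
# Illustrative sketch of the reported analysis: ANOVA on rank-transformed insertion
# times, Tukey's test for pairwise group comparisons, and a Spearman correlation
# with experience. All data below are hypothetical placeholders.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
times = {                                    # hypothetical insertion times (seconds)
    "student": rng.normal(60, 20, 25),
    "resident": rng.normal(40, 10, 9),
    "physician": rng.normal(35, 8, 9),
}
values = np.concatenate(list(times.values()))
groups = np.repeat(list(times.keys()), [len(v) for v in times.values()])

ranks = stats.rankdata(values)               # rank transform (variances not homogeneous)
f_stat, p_value = stats.f_oneway(*[ranks[groups == g] for g in times])
print(f"ANOVA on ranks: F = {f_stat:.2f}, p = {p_value:.4f}")
print(pairwise_tukeyhsd(ranks, groups))      # Tukey multiple comparisons on the ranks

experience_years = rng.uniform(0, 15, values.size)   # hypothetical experience data
rho, p_rho = stats.spearmanr(experience_years, values)
print(f"Spearman rho = {rho:.2f}, p = {p_rho:.4f}")
```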
A possible relationship between the time needed until the puncture is performed and the length of experience was assessed by Spearman correlation, due to lack of normality in the data. We found a success rate of 100% in the insertion of a catheter in the phantom among all participants, with a longer time in the group of graduate students (Table ). The use of ultrasound-guided cannulation is a reliable training method associated with a high rate of success among graduate students and experienced professionals."} +{"text": "After the publication of this work, we beca... In the 'Methods' section of the Abstract, the first sentence should have read: "The clinical and follow-up data of 124 patients who were referred to our department for SIRT between June 2008 and October 2010 were evaluated retrospectively." In the 'Patients' section of the Patients and Methods, the first sentence should have read: "The clinical and follow-up data of 124 patients who were referred to our department for SIRT between June 2008 and October 2010 were evaluated retrospectively." In the 'Patients' section of the Results, the first sentence should have read: "78 patients received intra-arterial radionuclide treatments with 90 Y for liver metastasis or primary HCC between June 2008 and October 2010."} +{"text": "How can groups of neurons selectively encode different memories? We investigated a possible mechanism for the selective activation of regions of a network based on the resonance properties of individual neurons and heterogeneities in the network connectivity. In network simulations of coupled resonate-and-fire neurons, we incorporated the experimentally observed phenomenon of resonance frequency shift based on membrane voltage changes. We aim to understand to what extent the resonance frequency shift allows for the separation of signals. We find that the formation of neuronal subgroups, whether through higher connection strength or a greater number of connections, can lead to different activation properties from the rest of the network."} +{"text": "Cardiovascular diseases are among the most important causes of morbidity and mortality in the world. One of the important risk factors of cardiovascular disease is hyperlipidemia, especially high levels of serum cholesterol. Because hypercholesterolemia is a serious condition, various treatments are used to control it; regardless of the cause, most treatments focus on reducing the level of serum lipids. This study aims to determine various viewpoints on hypercholesterolemia in Iranian traditional medicine. We used several Iranian traditional medicine resources and literature; then, based on these texts, a pilot study was designed to assess 10 patients with high plasma cholesterol. The signs and symptoms in the main digestive organs (stomach and liver) were also evaluated. Some patients showed hepatic temperament but all patients had gastric temperament. With reference to Iranian traditional medical texts and literature, the organs involved in the process of digestion, particularly the stomach and the liver, play the most important role. Yet the proper function of the stomach as the first step involved in the digestion chain should be emphasized. Lipids are body components which play important roles in energy storage and in the structure of the cell membranes, hormones and secondary mediators. Plasma lipid concentration depends on the balance between production, consumption and deposition of lipids.
The important point in the atherosclerosis disorder is to modify the level of lipids, because improving lipid levels is the foundation of atherosclerosis prevention. The Iranian traditional medicine system makes an effort to propose the best possible ways by which a person can lead an optimally healthy life with the least illness. The Iranian traditional medicine framework is based on some principles. One of the important principles is "Umore Tabiya". Umore Tabiya includes different parts: Arkan (Rokn), Amzaj (Mizaj), Akhlat (Khelt), Aza (Ozv), Arvah (Rooh), Qova (Qova) and Afal (Fel), respectively. "Mizaj" (temperament) is a quality which is a consequence of the mutual interaction of the four contradictory primary qualities residing within the elements. These elements are so meticulously intermixed with each other that they lie in a very intimate relationship to one another. Their opposite powers intermittently conquer and are conquered until a state of balance is reached which is uniform throughout the whole. This result was given the name of temperament (Mizaj). Mizaj is one of the most important canons of the Iranian traditional medicine system. It has an important function in maintaining the ideal healthy state of an individual. Alteration of this temperament, which is called dystemperament (Sui' e Mizaj), leads to several different types of diseases. "Humor", which is called "Khelt" in the Iranian traditional medicine texts, is a wet and fluid substance into which foodstuffs change in the first stage of permutation. Normally there are four humors in the human body: "Phlegm or Balgham, Blood or Dam, Yellow bile or Safra and Black bile or Sauda". Each of the humors was related to a pair of qualities: cold and wet, hot and wet, hot and dry, and cold and dry, respectively. In the Iranian traditional medicine texts, there is no concept of hypercholesterolemia as such, but in many cases it has been described as a disorder. As far as the presence of fat (lipids) in blood is concerned, Ibn Sina (Avicenna), an ancient Iranian traditional physician, referred to the Dosoomat of blood, or the oily substance, which could be the lipids; but as the biochemical analysis of blood was not available, they could not describe it as per modern parameters. According to the fundamentals of Iranian traditional medicine, the blood circulating in the vessels is a combination of four humors. Based on the available evidence and the detailed descriptions in the Iranian traditional medicine resources, ingested food undergoes various stages of digestion before reaching the tissues. Each stage of the digestion is composed of specific processing of the food material that must be carried on until it becomes suitable for use by the body. In each digestion process, the following actions take place: 1. In the gastric digestion, some of the traits and characteristics of the food material change and an appropriate absorbable material named "Chylous" is absorbed via the mesenteric vessels to the liver for further digestion. 2. In the hepatic digestion stage, the chylous is changed into the "Chymous", which is composed of four humors that will circulate in the vessels. 3. In the vascular digestion stage, the food state gets closer to the tissue state. 4.
During the tissue digestion stage, food becomes quite similar to the end organ tissue.As it was expressed, the humors are the final products of the hepatic digestion and in order for the humors, to be of good quality two conditions must be met including (i) Normal liver function for proper digestion and (ii) The (Gastric chylous), which is used by the liver for the production of humors that must have an appropriate composition. The second condition would not be present unless the stomach function properly and the ingested food is of good quality.According the book \"The Qanon of Medicine\" (by Avicenna), the foundation of Iranian traditional medicine system was based on the balancing humors in the human body.57] Thei Thei57] As it was expressed previously, the continuity between the stages of digestion and even the organs involved is such that any disorder in one of them would bring about a disorder in other organs and stages. Abnormal vascular content may be a consequence of hepatic maldigestion. Also the hepatic maldigestion, itself may be a consequence of gastric maldigestion. In other word, abnormal gastric chylous results in abnormal hepatic chymous and abnormal hepatic chymous results in abnormal humors . It shouThe most important point is that due to the close relationship between different organs that contribute to the digestive chain, the impairment in each of the stages of digestion will ultimately cause production of abnormal humor. This abnormal humor does not have the quality to become the desired end product. In later stages of digestion, the abnormal humor affects the function of organs consuming it and furthermore may gradually cause dysfunction of the next responsible organ in the digestion chain.Based on the above hypothesis, to find the responsible organ causing hypercholesterolemia and to provide scientific basis for deeply studying the relation between body humors and hypercholesterolemia, a pilot study was designed to assess 10 patients with high plasma cholesterol. The sign and symptoms of the dystemperament (Sui' a Mizaj) of the main digestive organs (Stomach and liver) were obtained. Symptoms which according to them the medical history was taken, were derived from the traditional diagnostic and treatment books, like The Qanon of Medicine of Ibn Sina (Avicenna), the KholOur study indicated that only some of the patients had hepatic dystemperament (Sui' a Mizaj) and surprisingly all of the patients had gastric dystemperament. The significant difference between the results of gastric and hepatic dystemperament in our patients, suggests that the responsible organ involved in hypercholesterolemia is the stomach. Furthermore; the hepatic dysfunction, if not considered to be a consequence of gastric failure, will be the second responsible organ. Results of this study provide information for using Iranian traditional medicine in clinic and developing Iranian traditional medicine.With reference to Iranian traditional medical texts, one can reach the basic point that organs involved in the process of digestion, particularly the stomach and the liver play the most important role in determining the blood composition. However, the proper function of stomach as the first step involved in the digestion chain should be emphasized. 
If the gastric digestion is impaired, the rest of the organs involved in digestion like the liver, blood vessels and tissues cannot be provided with the necessary qualified raw material (chylous) and therefore may suffer dysfunction, thus causing abnormal product formation such as abnormal humors and blood composition.It can be concluded that the first and best action in the treatment of hypercholesterolemia is to treat the stomach dystemperament (Stomach Sui' a Mizaj) and if then the disorder persists, the dystemperament of the liver should be considered. Further clinical study is recommended to investigate this issue."} +{"text": "In 2006, even with the external support, most of the districts in Jharkhand state were unable to do health planning beyond an initial prospective planning. The key impediment was the lack of capacity to analyse the health situation in their districts and develop strategies and propose budgets to implement these. The Public Health Resource Network (PHRN) in partnership with the National Health Systems Resource Centre (NHSRC) developed a training curriculum and a fast-track capacity building programme in 2006 to address this gap.Jharkhand state adopted this training programme and trained selected officials from all districts in several batches beginning in the year 2008. Social workers from various civil society organisations were also trained on district health planning through a parallel distance-learning programme by PHRN. The trained personnel from both health department (n=155) and civil society groups (n=85) supported the district programme management unit in preparing district health action plan during November 2009 to March 2010.This study was done in order to assess the significance of the PHRN-led capacity building in enhancing the individual and institutional technical capacities at the district level. Study aimed to understand how the capacity building enhanced the district level planning and management of health service deliveries under the National Rural Health Mission (NRHM).A case study on district health planning processes was prepared through a desk review of reports, review of district health plan documents, focus group discussions with the district teams, and interviews with stakeholders.We found that at least 10 of the 24 districts developed in-house capacity to take forward the health planning processes at district level as a result of the capacity building programme. This Improved planning capacity at district level has enhanced the enthusiasm for decentralised planning replacing the earlier notion that planning is a normative compulsion under NRHM.The situation analysis and planning was better in the districts where the selection of personnel for the training was done. Involvement of civil society members improved compliance to the decentralized planning process. The capacity building intervention improved the focus on inclusive planning and equity through special plans for vulnerable areas and groups.All districts were able to prepare and submit their action plans on their own for the first time, with limited guidance and appraisal inputs from experts at the state level. Most of the plan proposals from the districts were incorporated in the state project implementation plan for 2010-11. The subsequent 2011-12 planning process has adopted the same strategy.There is a need to improve the competence of the district team for planning and management of public health systems through systematic capacity building. 
The selection of personnel for such capacity building programmes is crucial for the success of the programme. Even though several years have passed since the launch of NRHM, and there have been some capacity-building inputs, there are gaps especially in the regions where this is most needed. The key challenge is how these improvements can be institutionalised and maintained. Also, there is a need to ensure resource allocation based on the needs indicated by the district health plans. In the absence of this, the enthusiasm created by the capacity building initiatives may diminish."} +{"text": "The health risk assessment from exposure to a particular agent is preferred when the assessment is based on a relevant measure of internal dose rather than simply the administered dose or exposure concentration. To obtain such measurements, the relevant biology, physicochemical properties, and biochemical mechanisms of a specific agent are used to construct biologically based models that can be used to predict its uptake, disposition, target tissue dose, and ensuing tissue responses in test animals and humans. The focus of this special issue is the state of the science underlying the development and application of a specific type of biologically based model (physiologically based pharmacokinetic, or PBPK, models) in risk assessment. The fourteen papers presented herein address critical issues and advances relating to their use in current risk assessment approaches, with a focus on their use in emerging toxicology paradigms as well. The first paper in this issue presents an overview that (1) briefly introduces the papers contained in this special issue; (2) provides context for how they inform best modeling practices and state-of-the-art risk assessment applications of PBPK models; and (3) discusses limitations and bridges of modeling approaches for future applications and how papers within the issue fit into that emerging science. Specifically, novel approaches for estimation/characterization of metabolic parameters for PBPK models are examined in two articles, with application of PBPK models for determination of exposure and interpretation of biomarker data examined in three articles. The special issue contains several case studies that illustrate the development and/or use of quantitative models to address specific needs of risk assessment of selected chemicals and chemical mixtures. The impact of human population subgroup variability on the magnitude of the uncertainty factor developed as part of risk assessment of volatile organic chemicals is also described. A broader perspective on the use of PBPK models for addressing contemporary issues in risk assessment is presented along with a progress report on the development of a PBPK \u201ctool-box\u201d for more general applications. In summary, the collection of papers by leading experts in this field that comprise this special issue provides insight and tools for a wide spectrum of risk assessment applications. These papers can be used for (1) facilitation of the screening and prioritization of chemicals, (2) linkage of exposure to internal dose, (3) analysis of mechanistic information and prediction of risk from chemical mixtures, and (4) provision of computational techniques and tools to address uncertainty and variability questions related to identification and prediction of responses and pharmacokinetics in potentially sensitive subpopulations. 
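To make the kind of model discussed here concrete, the sketch below integrates a minimal, flow-limited PBPK model with a liver compartment (where clearance occurs), a lumped rest-of-body compartment, and a well-mixed blood pool after an intravenous bolus. The compartment structure, parameter names, and values are illustrative assumptions only; they are not taken from any paper in this special issue.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (assumed values, not from the special issue)
V = {"blood": 5.0, "liver": 1.8, "rest": 40.0}   # compartment volumes (L)
Q = {"liver": 90.0, "rest": 260.0}               # blood flows (L/h)
P = {"liver": 2.0, "rest": 1.5}                  # tissue:blood partition coefficients
CL_int = 30.0                                    # hepatic intrinsic clearance (L/h)
dose = 100.0                                     # IV bolus (mg) into the blood pool

def pbpk(t, A):
    """Flow-limited PBPK: A = amounts (mg) in blood, liver, rest of body."""
    A_bl, A_li, A_re = A
    C_bl = A_bl / V["blood"]                     # arterial = venous (well-mixed blood)
    C_li, C_re = A_li / V["liver"], A_re / V["rest"]
    dA_li = Q["liver"] * (C_bl - C_li / P["liver"]) - CL_int * C_li / P["liver"]
    dA_re = Q["rest"] * (C_bl - C_re / P["rest"])
    dA_bl = (Q["liver"] * C_li / P["liver"] + Q["rest"] * C_re / P["rest"]
             - (Q["liver"] + Q["rest"]) * C_bl)
    return [dA_bl, dA_li, dA_re]

sol = solve_ivp(pbpk, (0, 24), [dose, 0.0, 0.0], t_eval=np.linspace(0, 24, 7))
print(np.round(sol.y[0] / V["blood"], 3))        # blood concentration (mg/L) over 24 h
```

A target-tissue output from such a model (for example, the liver concentration-time profile) is the internal-dose measure that the risk assessment applications described above build on.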
We hope that this \u201csnapshot\u201d of the current state of the science, best practices, and challenges for application of these tools in emerging risk assessment approaches and data will be of tremendous interest to the regulatory and scientific communities internationally. This information can build a foundation for use of these predictive models for a variety of applications and for future reduction of uncertainty in those predictions."} +{"text": "Coherent optical phonon spectra show that femtosecond laser-irradiated crystallization threshold of the multilayer films relies obviously on the periodic number of the multilayer films and decreases with the increasing periodic number. The mechanism of the periodic number dependence is also studied. Possible mechanisms of reflectivity and thermal conductivity losses as well as the effect of the glass substrate are ruled out, while the remaining superlattice structure effect is ascribed to be responsible for the periodic number dependence. The sheet resistance of multilayer films versus a lattice temperature is measured and shows a similar periodic number dependence with one of the laser irradiation crystallization power threshold. In addition, the periodic number dependence of the crystallization temperature can be fitted well with an experiential formula obtained by considering coupling exchange interactions between adjacent layers in a superlattice. Those results provide us with the evidence to support our viewpoint. Our results show that the periodic number of multilayer films may become another controllable parameter in the design and parameter optimization of multilayer phase change films.The periodic number dependence of the femtosecond laser-induced crystallization threshold of [Si(5nm)/Sb The frequency of COP modes before and after crystallization appears at approximately 3.90 and 4.55 THz, respectively. The former is assigned to the phonon mode of amorphous Sb80Te20 layer, while the latter is assigned to the A1g optical phonon mode of crystalline Sb and is not related to the periodic number. In addition, the crystallization power threshold decreases with the increase of the periodic number. The periodic number dependence of the power threshold is attributed to the crystallization temperature decrease with the increase of the periodic number and is considered originating from superlattice structure effects. The periodic number of the multilayer films provides another additional degree of freedom to simultaneously optimize multiple parameters of the multilayer phase change films.The periodic number dependence of the femtosecond laser-induced crystallization of [Si/SbFFT: Fast Fourier-transformed; COP: Coherent optical phonons; CPS: Coherent phonon spectroscopy.The authors declare that they have no competing interests.CW, MS, and JZ carried out the sample preparation and electrical characterizations. WZ,SL, and TL conceived of the laser-induced crystallization studies and coordinated the experiment. All of the authors participated in the analysis of the data. WZ and TL wrote the manuscript. All authors read and approved the final manuscript."} +{"text": "Ixodes scapularis) vectors infected with Borrelia burgdorferi, are complex and vary across eastern North America. Despite study sites in the Thousand Islands being in close geographic proximity, host communities differed and both the abundance of ticks and the prevalence of B. burgdorferi infection in them varied among sites. 
Using this archipelago in a natural experiment, we examined the relative importance of various biotic and abiotic factors, including air temperature, vegetation, and host communities on Lyme disease risk in this zone of recent invasion. Deer abundance and temperature at ground level were positively associated with tick abundance, whereas the number of ticks in the environment, the prevalence of B. burgdorferi infection, and the number of infected nymphs all decreased with increasing distance from the United States, the presumed source of this new endemic population of ticks. Higher species richness was associated with a lower number of infected nymphs. However, the relative abundance of Peromyscus leucopus was an important factor in modulating the effects of species richness such that high biodiversity did not always reduce the number of nymphs or the prevalence of B. burgdorferi infection. Our study is one of the first to consider the interaction between the relative abundance of small mammal hosts and species richness in the analysis of the effects of biodiversity on disease risk, providing validation for theoretical models showing both dilution and amplification effects. Insights into the B. burgdorferi transmission cycle in this zone of recent invasion will also help in devising management strategies as this important vector-borne disease expands its range in North America.In the Thousand Islands region of eastern Ontario, Canada, Lyme disease is emerging as a serious health risk. The factors that influence Lyme disease risk, as measured by the number of blacklegged tick ( Borrelia burgdorferi, which is transmitted to humans by the bite of an infected blacklegged tick (Ixodes scapularis) B. burgdorferi and its vectors.In eastern North America, Lyme disease is a serious emerging health risk caused by the bacterium B. burgdorferi transmission cycle involves a number of mammalian and avian tick hosts Odocoileus virginianus) are essential for the establishment and maintenance of endemic I. scapularis populations B. burgdorferi infection. The effect of deer abundance on tick populations varies across eastern North America In addition to the tick vector, the Peromyscus leucopus) are considered to be the most competent reservoir for B. burgdorferi (most likely to transmit the pathogen to a tick while it is feeding) Tamias striatus) and shrews (Blarina brevicauda and Sorex cinereus) are also important competent hosts Sciurus carolinensis, Tamiasciurus hudsonicus), voles (Microtus spp.), raccoons (Procyon lotor), and some ground-foraging birds also provide blood meals to many I. scapularis larvae and nymphs, but are less competent reservoirs for B. burgdorferiWhite-footed mice can affect the number of ticks, the prevalence of infection in ticks, and thus, human risk Environmental conditions, including temperature, relative humidity, canopy cover, and leaf litter, can affect the survival and development of ticks B. burgdorferi infection may be lower than the prevalence in bird-dispersed adventitious ticks B. burgdorferi transmission can cause the prevalence of infection in the local tick population to increase and eventually exceed the prevalence in adventitious ticks I. scapularis populations recognized in Ontario have been restricted to localized areas along the north shore of Lake Erie, Lake Ontario, and the St. Lawrence River B. 
burgdorferi are spreading across northeastern North America from original endemic foci in the United States and are increasingly being confirmed as endemic in parts of eastern Canada B. burgdorferi were discovered in the adjacent Thousand Islands region of eastern Ontario (unpublished data).Elevated tick numbers often represent established tick populations in which, in the early stages of establishment, the prevalence of I. scapularis nymphs, and relative abundance of mice in explaining the number of nymphs (NON), the prevalence of B. burgdorferi infection in nymphs (NIP), and the number of infected nymphs (NIN) in the heavily-visited Thousand Islands region. Given the diversity of small mammal communities in the Thousand Islands archipelago, we were also interested in whether species richness would affect NON, NIP and NIN differently depending on the relative abundance of mice. This information can guide management to control blacklegged ticks and reduce the risk of Lyme disease in the Thousand Islands as this vector-borne disease system expands its range in North America. Our study is one of the first to consider the interaction between the relative abundance of small mammal hosts and species richness in the analysis of the effects of biodiversity on disease risk, providing validation for theoretical models exploring dilution and amplification effects.Despite being distributed within an area about 28 km long and 8 km wide, there is variation among sites in the Thousand Islands in the number of blacklegged tick vectors, the prevalence of infection in those ticks, and the diversity of the small mammal host community 2 (one hectare) grid at each study site . The same trap locations were used for the duration of the study.The study area was comprised of 12 sites within Thousand Islands National Park in the Canadian Thousand Islands following the protocols described for B. burgdorferi in Ixodes tick species All nymphs collected by drag sampling in 2009 and 2010 were counted and identified to species using the taxonomic key of Durden and Keirans Small mammals were trapped and handled in Thousand Islands National Park under a Research and Collection Permit approved by the Parks Canada Agency and in accordance with an Animal Utilization Protocol (09R039) approved by the University of Guelph Animal Care Committee following the guidelines of the Canadian Council on Animal Care.Peromyscus spp.) based on phenotype Peromyscus sp. were all confirmed as P. leucopus by genetic analysis of ear punch samples using methods described by Thompson Peromyscus were assumed to be white-footed mice. Ticks were collected from all animals as part of a concurrent study and animals were then released at their point of capture were set at marked grid locations at each site for four consecutive nights in June and August. Traps were baited with sunflower seeds and contained polyester or natural cotton bedding. Traps were set in the evening and checked before 08\u223600 the following morning. Animals were identified to species positioned one to two cm above the ground near the centre of each one-hectare study site recorded air temperature and humidity at 30-minute intervals from July 2009 to the spring of 2011. Data were collected from the data loggers periodically (approximately every 300 days) and compiled using HOBOware Pro version 3.2.1 . 
The average daily minimum temperatures were calculated for three-month periods because ticks are sensitive to extremes in microclimatic variation and climate variables are typically grouped seasonally to coincide with tick activity periods e.g., .2/100 square feet) around each trap location was recorded and survey plots were cleared of pellets. Each survey plot was assessed for pellet groups again in the spring of 2010. The accumulation of pellets from fall 2009 to spring 2010 was used to calculate the number of pellet groups per hectare, which was used as a relative index of deer abundance at each site.Relative white-tailed deer abundance was estimated by pellet counts Average leaf litter depth at each site was determined in September 2010 by inserting a ruler into the litter layer until bare ground was reached at 11 randomly selected trap locations. The percent canopy cover at each trap location was categorized and used to calculate the average overall canopy cover for the study site.Distance to the United States was measured in ArcGIS as the shortest straight-line distance to the American mainland from the shore of the island on which the study site was located or from the point on the Canadian shore of the St. Lawrence River closest to the mainland study site. Because some of the sites were located in close proximity to one another and model residuals showed significant autocorrelation by a Moran\u2019s I test, an autocovariate term was used to account for the spatial non-independence of sites, calculated as an inverse distance-weighted function of the response variable in neighboring sites B. burgdorferi infection in ticks captures of eight small mammal species were recorded during 22,895 trap nights in 2009 and 2010 , 127 models were considered; 19 were included in the 95% confidence set . BetweenB. burgdorferi transmission cycle in the Thousand Islands. The structure of the small mammal community was an important factor in modulating the effects of species richness on the number of nymphs and the prevalence of B. burgdorferi infection: depending on the relative abundance of mice, high biodiversity did not always reduce the factors that contribute to the measurement of human disease risk. Dilution was evident in the effect of small mammal species richness on the overall number of infected nymphs.Deer abundance, distance to the United States, small mammal species richness, relative mouse abundance, canopy cover, and air temperature were all associated with the complex B. burgdorferi are expanding from foci in the northeastern and north-central United States through the movement of migratory birds, deer, and possibly small mammal hosts B. burgdorferi and allow an assessment of changing disease risk. The pattern of fewer ticks farther from the United States suggests an expansion front that could be studied to learn more about the mechanisms of tick and B. burgdorferi establishment Geography played an important role in Lyme disease risk in the Thousand Islands. Distance to the United States was an important predictor and negatively correlated with NON, NIP, and NIN. This is consistent with previous studies that report blacklegged ticks and The positive effect of climate warming on expansion of tick populations was supported by our results, with higher average daily minimum summer temperature at ground level positively correlated with tick abundance. 
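A minimal sketch of how the inverse distance-weighted autocovariate term described in the methods above can be computed, assuming site coordinates in kilometres, distinct site locations, and a neighbourhood consisting of all other sites; the exact weighting scheme used in the study is not reproduced here.

```python
import numpy as np

def autocovariate(response, coords):
    """Inverse-distance-weighted mean of the response at all other sites.

    Assumes all site coordinates are distinct (no zero distances off the diagonal).
    """
    coords = np.asarray(coords, dtype=float)
    response = np.asarray(response, dtype=float)
    # pairwise Euclidean distances between sites
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    weights = np.zeros_like(dist)
    off_diag = ~np.eye(len(coords), dtype=bool)
    weights[off_diag] = 1.0 / dist[off_diag]     # self-weight stays zero
    # weighted average of the response observed at the other sites
    return (weights * response[None, :]).sum(axis=1) / weights.sum(axis=1)

# toy example: four sites with x/y positions (km) and nymph counts
coords = [(0, 0), (1, 0), (5, 2), (9, 1)]
nymphs = [12, 30, 3, 0]
print(autocovariate(nymphs, coords))
```

The resulting vector can then be entered as an additional site-level covariate to account for spatial non-independence, as described above.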
The topography and geography of the islands and mainland sites created varied microclimatic conditions and the mean daily minimum temperature in the summer months ranked as an important predictor of the number of nymphs. The moderating effect of the St. Lawrence River may have played a role in this pattern, as mean daily minimum temperature in the summer was also negatively correlated with island size, and the three mainland sites had the lowest mean temperatures The inclusion of island and mainland sites in our study area provided an opportunity to explore the effects of deer populations on Lyme disease risk The risk of encountering a nymphal tick can also be reduced by avoiding closed-canopy areas.The positive correlation we found between canopy cover and the number of nymphs is consistent with previous studies; ticks are more likely to inhabit forested areas than open fields B. burgdorferi infection in the population of nymphs decreased. This outcome may be the result of the recent emergence of the Lyme disease cycle in the Thousand Islands: tick populations may become established at new sites before B. burgdorferi, diluting the overall prevalence of B. burgdorferi infection that is largely restricted to adventitious ticks arriving on migratory birds Borrelia) B. burgdorferi infection, as evidenced by the large amount of variation in the data explained by the NON models in comparison to the NIP models. Therefore, an increase in the number of nymphs overwhelmingly increases the odds of encountering an infected nymph, despite a decrease in the overall prevalence of infection. Several studies examining the factors that affect disease risk have focussed on the prevalence of B. burgdorferi infection in nymphs as the outcome variable, assuming a strong link to the number of infected nymphs and, therefore, human risk Unexpectedly, as the number of nymphs increased, the overall prevalence of The small amount of variation in the data explained by the NIP models suggests that random variation or other factors that were not measured influence the prevalence of infection in nymphs. We did not examine the effects of the time since tick establishment B. burgdorferi infection in nymphs was lower with increasing biodiversity when the relative abundance of mice was low. As more species are added, some of those new species are likely to be less competent hosts and because hosts that are not mice make up a large proportion of the community, fewer ticks become infected with B. burgdorferi. The dilution theory was also evident when examining the effects of biodiversity on the number of nymphs. When mice comprised more than half of the small mammal community, higher species richness was correlated with lower nymph abundance. This may be because the other small mammals that comprised the species-rich communities where mice dominated were poor quality hosts, removing and eating a large proportion of feeding ticks through grooming Higher species richness was associated with a decrease in human disease risk, as measured by the number of infected nymphs. Also consistent with the dilution theory, the prevalence of B. burgdorferi infection in nymphs. Species richness had little effect on the prevalence of infection in ticks at sites with a high relative abundance of mice. This could be because mice are very competent hosts B. burgdorferiWe also found evidence for a context-dependent role of biodiversity, and situations that were not consistent with the dilution theory. 
The structure of the small mammal community, specifically the relative abundance of mice, modulated the effects of biodiversity on the number of nymphs and the prevalence of B. burgdorferi in a zone of Lyme disease emergence in eastern North America, these results contribute empirical evidence to the debate about the role of biodiversity in reducing human disease risk worldwide.Our study is one of the few to evaluate the predictions of theoretical models that explore dilution and amplification effects in the Lyme disease system in eastern North America"} +{"text": "There is evidence of the growth of substance consumption throughout the world. However, there continues to be a broad gap between the scope of the problem and the use of preventive and treatment services. This situation is particularly acute near the border between Mexico and the United States, where drug use and availability are greater. Funding was obtained in both countries to conduct a feasibility study about the early identification of consumption at primary health care centers and to test the Quit Using Drugs Intervention Trial (QUIT) of brief intervention. Early identification and brief intervention have proven to be cost-effective and to encourage timely care and reduce the care gap. Men and women aged 18+ (approximately 1000 in each country) will be screened with the Alcohol, Smoking and Substance Involvement Screening Test (ASSIST). Risky users (about 65 participants in each country) will be enrolled and randomized to either QUIT or a control condition. An electronic material management application (EMMA) and \u201ctalking touch-screen\u201d wireless tablets will be used for screening, randomization, and data monitoring. The main results of the implementation phase are the creation of a binational research team; consolidation of the research design; adaptation of instruments and strategies of the ASSIST-QUIT intervention and the EMMA system; technology exchange; and identification of areas requiring cultural adaptation. An ethnographic approach to health centers in Tijuana, Mexico and Los Angeles was used to identify the differences and similarities between centers. Comparisons of the prevalence of drug consumption among participants in both countries will be presented. The main contribution is the identification of the challenges, scope, and cultural differences in the implementation of a program for early detection and treatment of drug consumption in primary health care centers in the border region. The results will underpin new research to apply this model among similar populations."} +{"text": "Successful food-safety management relies on a clear identification of the hazards to be addressed. In the case of issues relating to food allergens the hazard may be defined as: The inadvertant consumption of a food allergen by a sensitive individual. Ensuring that the risk (probability) of this hazard occuring is maintained at an appropriately low level within the context of modern food-manufacturing often presents issues. These reflect the facts that modern food-processing businesses rely on the efficient use of both equipment and the associated manufacturing environment. Such reliance often predicates against the dedicated use of a particular manufacturing line, still less a manufacturing facility, for a single product. This multiplicity of products is often the source of many of the issues relating to food allergen cross-contaminantion. 
At a philosophical level, the issue of cross contamination presents a unique hazard in terms of food safety management, since the contaminant is an integral part of a food and is considered nutritious by most consumers. Furthermore, levels of cross contamination capable of eliciting an adverse reaction are often lower than those associated with loss of consumer acceptability. Risk assessment cannot be seen in isolation and has to be considered within the additional context of risk management and risk communication. Risk management therefore requires an understanding of events taking place not only within the manufacturing facility but also within the suppliers of the raw materials used to make the finished product. As such, successful management of food allergen issues places heavy reliance on pre-requisite programmes operating within the food business. Finally, it is necessary to ensure that relevant information concerning any remaining hazards presented to the food allergic consumer is clearly communicated on the wrapper."} +{"text": "The management of empyema is still debated. Video-assisted thoracoscopic surgery (VATS) has revolutionized the surgical management of patients with empyema. Currently, the thoracoscopic approach is probably the best choice in the management of second- or third-stage empyema, particularly in patients with chronic empyema and poor performance status. Multiloculated empyemas generally make the thoracoscopic procedure difficult; in these cases open procedures would be preferable. We evaluate the effectiveness of transthoracic ultrasound (TUS) in planning the best thoracoscopic approach in the treatment of chronic empyema. From January 2007 to March 2013, 28 patients with pleural empyema underwent preoperative transthoracic ultrasound. Stage II empyema was present in 23 patients. Patients were examined in the surgical position. TUS was used to analyze and localize multiloculated pockets of pleural effusion and pneumonia with consolidation; the hemidiaphragm and the major pockets of the pleural effusion were marked on the skin. Taking into account the TUS findings and the skin markings, a small working window and a thoracoscopic access were drawn on the skin of the patient. TUS accuracy was determined for empyema staging, topography of multiloculated pleural effusion, hemidiaphragm localization and, finally, planning of the thoracoscopic accesses. All patients underwent thoracoscopic treatment with a working window of about 10 cm and a second access of 10 mm. In 4 cases open conversion was needed because of the extent of the pathology. Topography was correct in 26 cases and a good thoracoscopic approach was achieved in 24. The presence of small and multiple loculations and a hyperechoic effusion were the main causes of failure. TUS is a very useful method in the planning of thoracoscopic treatment of pleural empyema. The use of this method should be further extended in the treatment of this pathology."} +{"text": "To the Editor: The European Working Group for Legionella Infections (EWGLINET) conducts epidemiologic surveillance of Legionnaires' disease cases associated with travel by using the protocol and database of EWGLINET. Both isolates showed identical SBT patterns. As soon as cultures from the 2 patients were available, the National Reference Laboratories of France and Spain shared their respective microbiologic results. Both strains were identified as L. pneumophila serogroup 1 and had identical SBT patterns. 
Collaboration between public health authorities in France and Spain enabled us to eliminate the association of patient 2 with the Zaragoza outbreak and to establish an association of both patients with the same site in France. Control measures were taken at the hotel, but we could not obtain environmental cultures for comparison with those of the patients. Lack of environmental data prevented investigation of the relationship with the other accommodation sites visited. Isolates from 4 patients in the Zaragoza outbreak were identified at the Spanish Reference Laboratory as L. pneumophila. The availability of an online database with accessible information is key for sharing results and determining the geographic distribution of isolates that cause Legionnaires' disease. This study demonstrates the critical role of sharing results between countries that participate in a network. Agreement is essential on a standardized questionnaire that includes more information on the patient's exposure to a disease. Moreover, despite the performance of the urine antigen test, cultures of clinical samples should be encouraged by clinicians and microbiologists. This step would permit the use of techniques, such as SBT, in reference laboratories and the sharing of results. Our investigation would have been more difficult without this technique in identifying the site where the infection potentially originated."} +{"text": "We report a case of mucoid degeneration of the anterior cruciate ligament (ACL). Mucoid degeneration of the ACL is a very rare cause of knee pain. There have been only a few reported cases of mucoid degeneration of the ACL in the English literature. We reviewed previous reports and summarized clinical features and symptoms, including those found in our case. Magnetic Resonance Imaging is the most useful tool for differentiating mucoid degeneration of the ACL from an intraligamentous ganglion or other lesions in the knee joint. If this disease is considered preoperatively, it can be diagnosed easily based on characteristic findings. The first description of cysts of the anterior cruciate ligament (ACL) was made by Caan in 1924. MRI identified two forms of cyst of the ACL. The first contains cyst fluid; this is the cystic form proper, well described in the literature. A 45 year old woman who had no prior significant trauma was seen at our hospital for evaluation of left knee pain and flexion difficulty. Upon examination, there was no swelling, ballottement of the patella or instability. The range of motion of the knee was limited from 0\u00b0 to 120\u00b0 with terminal flexion pain. There was a point of tenderness at the medial joint line of the knee. A plain X-ray showed a slight degenerative change in the medial side of the knee. MRI of the right knee showed an increased intraligamentous signal for the ACL with high inhomogeneous intensity and an undisplaced crack of the external meniscus. Mucoid degeneration of the ACL is a very rare cause of knee pain. This lesion of the ACL was first reported by Kumar et al in 1999. This case shows the presence of intraligamentous mucoid degeneration in the ACL as a source of knee pain without instability. To assist in diagnosing the source of atypical knee pain, we recommend the use of MRI in conjunction with diagnostic arthroscopy. Partial ACL debridement does not preclude adequate knee stability. 
The chief complaint of knee pain correlated with the intraligamentous nature of mucoid degeneration in the ACL and may suggest involvement of the small nerve fibers found in the ACL. After arthroscopic resection of the cyst, no laxity was found. In summary, MRI yields preoperative information that is useful in the diagnosis of and surgical treatment planning for mucoid degeneration of the ACL. If this disease is considered preoperatively, it can be easily diagnosed based on these characteristic findings. Partial excision of the yellow, sclerotic lesions of the ACL results in immediate pain relief and improves the range of movement without any symptom of instability. Arthroscopic resection of a symptomatic degenerative ACL gives good results but leads to a subjective progressive postoperative laxity in some cases. The prognosis depends on the age and associated injuries. The diagnosis of mucoid degeneration of the ACL should be suspected in cases of unusual posterior pain with limitation of flexion. MRI and arthroscopy confirmed the diagnosis."} +{"text": "The modern pharmaceutical industry in India emerged in the 1900s and, with it, the regulation of pharmaceuticals also evolved. The reforms of the 90s, while improving exports, also brought in improved standards of quality to be followed in the entire supply chain of drugs. While for the manufacturers this meant documentation, increased expenditure, and more export opportunities, for the consumer it has resulted in improved drugs, as the public health system also started emphasizing the selection of only those manufacturers with quality standards. However, governance and enforcement mechanisms have to improve to benefit consumers and the entire healthcare system. Based on a review of the literature on pharmaceutical regulations governing manufacturing standards, pharmacy education, and drug prices, this paper discusses the positive outcomes of certain regulations and points out where explicit policy focus is needed. Specifically, the paper discusses (a) the positive outcomes of implementing good manufacturing practices (GMP), which have improved the standards of production; (b) the need for aggressive pharmacovigilance practices to enhance awareness of adverse drug reactions; (c) the need to integrate pharmacy education with the needs of the health system, which functions with far too few trained human resources, particularly in preventive health care; and (d) the impact of issues related to the drug price control orders on consumers. The modern pharmaceutical industry in India emerged in the 1900s. Regulation of pharmaceuticals also emerged with the industry. The ultimatum given to the industry to adhere to Schedule M has resulted in improved standards of the medicines produced. While the industry-centric regulations emphasize quality, little attention is paid to pharmacovigilance practices that systematically document adverse drug reactions. Pharmacy education in India has an industry bias. Every year, because of the spate of permissions for pharmacy colleges from the All India Council of Technical Education, pharmacists are churned out in thousands in the country. However, they do not find jobs in the industry. 
If the pharmacy education is oriented towards community pharmacy or hospital pharmacy, the human resources shortfall in the preventive health care could be improved.The Drug Price Control Order presently covers only 75 essential drugs and leaves out the new innovator drugs and the biotechnology based drugs, which are prohibitively expensive. While manufacturers wriggle out of price control by producing combinations so that they do not come under the purview of price control, consumers are inundated with irrational an combination that push up the cost of treatment but delays the course of treatment.The Drugs and Cosmetics Act needs to be revamped taking into consideration the changing industrial scenario. While manufacturers wriggle out of price control by producing combinations so that they do not come under the purview of price control, consumers are inundated with irrational combinations that push up the cost of treatment but delay the course of treatment. Pharmaceutical governance can be vastly improved if the regulatory authorities have more manpower with them.Jan-Aushadhi stores need to be addressed.In India generic drugs are sold in brand names, which push up the cost of treatment as doctors prescribe branded generics. Instead if drugs are sold in generic names, the cost could be contained. The bureaucratic bottlenecks particularly at the state level that delays the spread of The curriculum of pharmacy education should be revised thoroughly. Pharmacy education in India should include courses like community/hospital pharmacy, where the pharmacist actually assists the doctor in choosing the right medicine after analyzing patients\u2019 case history.None declared.None declared"} +{"text": "The human immune system functions to provide continuous body-wide surveillance to detect and eliminate foreign agents such as bacteria and viruses as well as the body's own cells that undergo malignant transformation. To counteract this surveillance, tumor cells evolve mechanisms to evade elimination by the immune system; this tumor immunoescape leads to continuous tumor expansion, albeit potentially with a different composition of the tumor cell population (\u201cimmunoediting\u201d). Tumor immunoescape and immunoediting are products of an evolutionary process and are hence driven by mutation and selection. Higher mutation rates allow cells to more rapidly acquire new phenotypes that help evade the immune system, but also harbor the risk of an inability to maintain essential genome structure and functions, thereby leading to an error catastrophe. In this paper, we designed a novel mathematical framework, based upon the quasispecies model, to study the effects of tumor immunoediting and the evolution of (epi)genetic instability on the abundance of tumor and immune system cells. We found that there exists an optimum number of tumor variants and an optimum magnitude of mutation rates that maximize tumor progression despite an active immune response. Our findings provide insights into the dynamics of tumorigenesis during immune system attacks and help guide the choice of treatment strategies that best inhibit diverse tumor cell populations. Immunologic surveillance is a function of the immune system which serves to constantly monitor the body for microorganisms, foreign tissue, and cancer cells. 
To evade this surveillance and subsequent elimination, cancer cells evolve strategies to prevent being recognized and killed by immune system cells; one mechanism is to increase the rate at which genetic and/or epigenetic variability is generated. The benefits of an increased variability of cancer cells to counteract immune surveillance, however, stands in contrast to the costs associated with such heightened mutation rates: the risk of an inability to maintain essential genome structure and functions. To study such situations arising in tumorigenesis, we designed a novel mathematical framework of tumor immunosurveillance and the evolution of mutation rates. We then utilized this framework to study how increased mutation rates and immunologic surveillance affect the abundance of tumor and immune system cells. We found that there exists an optimum number of tumor variants and an optimum magnitude of mutation rates that maximize tumor progression despite the presence of actively proliferating and functioning immune system cells. Our study contributes to an understanding of cancer development during immune system attacks and also suggests treatment strategies for heterogeneous tumor cell populations. In 1909, Paul Ehrlich was the first to propose the idea that the immune system scans for and eliminates nascent transformed cells in the human body Tumor immunoescape is driven by the generation of tumor cell variants An increased chance of accumulating (epi)genetic alterations during cell divisions enhances the rate of generating tumor cell variants that may evade the immune response; however, high rates of alterations may in turn lead to an error catastrophe in that a functioning genome cannot be sustained when error-prone replication produces excess damage Several mathematical models have been designed to provide insights into the dynamics of tumorigenesis under immunosurveillance or an error catastrophe of tumor cells During the early phases of tumorigenesis, immune system cells such as NK and CD8+ T cells attack tumor cells and may succeed in suppressing their expansion; this outcome is referred to as \u201ctumor dormancy\u201d. However, if the immune system cannot successfully eradicate a tumor, then eventually a subset of tumor cells will acquire the phenotypes necessary for immunoescape. Depending on the magnitude of the mutation rate of these cells, the tumor cell population may then be at risk of going extinct due to the generation of excess damage \u2013 the event of an error catastrophe. To investigate the dynamics, conditions, and likelihood of these events, we designed a mathematical model of tumor and immune system cells.In the context of our mathematical model, initially there is only a single type of tumor cells \u2013 those cells that originally founded the tumor. Denote the abundance of these original tumor cells by In addition to tumor cells, we also consider immune system cells that launch a specific immune response against each particular tumor variant. 
Denote the abundance of immune system cells specific to the original tumor clone by With these considerations, we define the basic mathematical model including tumor variants and their specific immune responses byBaseline values of model parameters and their respective ranges used for simulations are presented in Let us now discuss the possible outcomes of interactions between the immune system and the tumor cell population: there may be tumor dormancy, partial immunoescape, complete immunoescape, and the event of an error catastrophe. In the dormancy state, immunosurveillance serves to effectively suppress the tumor cell population. In the partial immunoescape state, some tumor variants achieve immunoescape while in the complete immunoescape state, the immune response is completely unsuccessful. Finally, in the error catastrophe state, the original tumor clone, which has the highest division rate, goes extinct due to the accumulation of excess alterations. We now outline how the original tumor clone, the tumor cell variants, and the specific immune system cells behave during the accumulation of alterations and the evolution of higher mutation rates.The four qualitative outcomes of the interaction between tumor cells and the immune system \u2013 dormancy, partial and complete immunoescape as well as error catastrophe \u2013 are most significantly influenced by two systems parameters: the mutation rate generating tumor variants while maintaining a functioning genome.Let us now investigate the system dynamics for varying mutation rates and identify those regimes in which the total tumor cell number is maximized. Every time a new tumor variant arises, the dynamics of tumor evolution rapidly converges to its steady state; we therefore analyze the dynamics in steady state. The total number of tumor cells depends on the number of variants as well as the mutation rate, and an optimum combination of these parameter values exists that maximizes the total tumor cell number. In In situations in which all tumor cells are effectively suppressed by the immune response (tumor dormancy), the total number of tumor cells increases with the number of variant types. In situations in which some tumor cell types manage to escape from immune surveillance, the total number of tumor cells increases as both the number of variant types and the mutation rate increase . Howeverand see . In thisncreases , but theOur results demonstrate that there are two strategies to maximize the rate of tumor evolution so that the total tumor cell mass is maximally large: one is to maintain a low mutation rate, since then the tumor cell population can increase the number of variant types along the threshold see ; anothersee see . When boLet us now investigate how the division rate of variant tumor cells affects the evolution of tumor cells during their interaction with immune system cells. Recall that in the basic model, the division rate is Let us next consider additional effects arising during tumor progression such as competition among tumor cells of different variant types, the presence of an innate immune response such as NK cells, which non-specifically target all tumor variants, and differential growth rates among tumor cell variants. 
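The model equations themselves are not reproduced in this extract, so the sketch below should be read only as a generic quasispecies-style stand-in for the kind of system described: tumor variants arise by mutation from the previous variant, grow logistically toward a shared carrying capacity, are killed by variant-specific immune cells, and stimulate the expansion of those cells. All functional forms, parameter names, and values are assumptions for illustration and are not the authors' equation (1).

```python
import numpy as np
from scipy.integrate import solve_ivp

# x[i] = abundance of tumor variant i, z[i] = its variant-specific immune response
n = 4          # number of tumor variants tracked
r = 0.5        # tumor division rate
u = 1e-3       # mutation rate per division
K = 1e6        # shared carrying capacity (total tumor burden)
k = 1e-6       # killing efficiency of immune cells
c = 1e-7       # immune stimulation rate
d = 0.1        # immune cell death rate

def rhs(t, y):
    x, z = y[:n], y[n:]
    total = x.sum()
    growth = r * x * (1 - total / K)
    dx = (1 - u) * growth - k * x * z
    dx[1:] += u * growth[:-1]          # mutational inflow from the previous variant
    dz = c * x * z - d * z
    return np.concatenate([dx, dz])

y0 = np.concatenate([[1e3] + [0.0] * (n - 1),   # start with the founding clone only
                     [10.0] * n])               # small pool of each specific immune clone
sol = solve_ivp(rhs, (0, 500), y0, t_eval=np.linspace(0, 500, 6))
print(sol.y[:n, -1])   # variant abundances at the final time point
```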
In order to investigate the conditions for outcomes such as tumor immunoescape and error catastrophe in these more complex scenarios, we established an extended model, given byThe dynamics of tumor progression considering these situations are shown in Finally, let us discuss the effects of different treatment modalities on the rates of cancer progression and the chance of immunoescape. Since the behavior of tumor cells and thus patient outcomes are to a considerable extent driven by the interactions between tumor and immune system cells, we considered both traditional chemotherapy and treatment options that stimulate the immune system to launch or sustain an attack against the tumor cell population. In general, immune therapies have not been proven to be very effective against many tumor types; one of the few exceptions is represented by adoptive cell therapy, which is used in the treatment of metastatic melanoma and causes regressions in about 50To study the effects of chemotherapy, immune therapy, and combination therapy on the dynamics of tumor evolution, we introduced a series of different treatment types into the mathematical framework and identified optimal treatment strategies for diverse tumor cell populations . These dWe then utilized this system to investigate optimum treatment strategies. First, let us consider the effects of chemotherapeutic agents which reduce the number of tumor cells by inducing cell deaths at a rate proportional to the cell number present within the tumor. Administration of such treatments decreases the total cell number, but may not be capable of leading to complete eradication of all tumor cells unless iIn conclusion, our mathematical model predicts successful outcomes of combination therapy when (i) chemotherapy is administered which induces tumor cell death at a significantly large rate, or (ii) combination therapy is administered which reduces the number of tumor variants, induces tumor cell death, and replenishes immune cell populations. When the mutation rate of tumor cells is small, combination therapy is more effective than when variations arise at a large rate. An explanation of these findings can be found in In this paper, we have investigated the dynamics of tumor progression under immune system surveillance while considering the effects of increasing rates at which (epi)genetic alterations are generated. We defined specific situations that can arise due to the interactions of immune system cells and tumor cells. When the tumor cell population is able to persist under immunosurveillance without leading to tumor growth, then a state of tumor dormancy ensues. Should the immune system not be capable of efficiently suppressing the tumor cell population, then partial or complete immunoescape is possible, depending on whether some or all tumor clones evade immune system inhibition. Finally, an error catastrophe occurs when the tumor cells evolve mutation rates that are incompatible with the maintenance of a functioning genome due to excess error.The dynamics of the system and likelihood of these different states depend on the rate at which variability emerges in the population , in detail. The basic model can be considered qualitatively as a 4-dimensional ODE system is a We investigated the existence conditions of the equilibria of model (1). The model has seven possible equilibria:While the equilibria Consider the situation in which This result implies that Let us now calculate the total number of tumor cells at equilibrium. 
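In the same assumed generic setting (again, not the authors' extended model), treatment effects can be layered on as extra terms: chemotherapy as an additional per-cell kill rate applied during a dosing window, and immune therapy as a constant inflow of tumor-specific immune cells. The sketch below compares the final tumor burden under three illustrative regimens.

```python
import numpy as np
from scipy.integrate import solve_ivp

# one tumor clone x and its specific immune response z; illustrative parameters only
r, K, k, c, d = 0.5, 1e6, 1e-6, 1e-7, 0.1

def rhs(t, y, h, s):
    x, z = y
    chemo = h if 100 <= t <= 200 else 0.0        # chemotherapy dosing window
    dx = r * x * (1 - x / K) - k * x * z - chemo * x
    dz = c * x * z - d * z + s                   # s = immune cell replenishment
    return [dx, dz]

for h, s, label in [(0.0, 0.0, "untreated"),
                    (0.3, 0.0, "chemo only"),
                    (0.3, 5.0, "chemo + immune boost")]:
    sol = solve_ivp(rhs, (0, 400), [1e3, 10.0], args=(h, s))
    print(f"{label:22s} final tumor burden ~ {sol.y[0, -1]:.3g}")
```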
Note that the dynamics of the basic model, equation (1), might not converge to an equilibrium but may oscillate for some parameter combinations."} +{"text": "In order to draw patterns in helminth parasite composition and species richness in Mexican freshwater fishes we analyse a presence-absence matrix representing every species of adult helminth parasites of freshwater fishes from 23 Mexican hydrological basins. We examine the distributional patterns of the helminth parasites with regard to the main hydrological basins of the country, and in doing so we identify areas of high diversity and point out the biotic similarities and differences among drainage basins. Our dataset allows us to evaluate the relationships among drainage basins in terms of helminth diversity. This paper shows that the helminth fauna of freshwater fishes of Mexico can characterise hydrological basins the same way as fish families do, and that the basins of south-eastern Mexico are home to a rich, predominantly Neotropical, helminth fauna whereas the basins of the Mexican Highland Plateau and the Nearctic area of Mexico harbour a less diverse Nearctic fauna, following the same pattern of distribution of their fish host families. The composition of the helminth fauna of each particular basin depends on the structure of the fish community rather than on the limnological characteristics and geographical position of the basin itself. This work shows distance decay of similarity and a clear linkage between host and parasite distributions. The helminth parasite fauna of freshwater fish of Mexico ranks amongst the best characterised parasite faunas in Latin America. In this paper, we analyse the distributional patterns of adult helminth parasites of freshwater fishes of Mexico, with regard to the main hydrological basins of the country. We examine the linkage between host and parasite distributions, evaluating the relationships among drainage basins in terms of helminth diversity. Considering the points explained before, if each family of fish has a typical helminth fauna whose distribution corresponds to that of their hosts, then the ichthyological composition of the basin may be an important determinant of the patterns of distribution of the helminths. Moreover, the geographical distance amongst basins can be an important determinant of similarity in helminth faunas. Because frequent contacts and exchanges of parasites between host populations of nearby basins should lead to highly homogeneous faunas, we would expect the similarity in species composition among basins to decay with increasing distance between them. A presence-absence matrix, representing every species of adult helminth parasites of freshwater fishes from 23 Mexican hydrological basins, was assembled (Table 1: major drainage basins of freshwater systems of Mexico). Relationships between the helminthofauna inhabiting each drainage basin were examined via a similarity matrix constructed using the S\u00f8rensen coefficient. Multivariate classification is a useful method to examine biogeographic patterns exhibited by species and to distinguish and characterise biogeographic entities. 
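A minimal sketch of how a Sørensen similarity matrix and the resulting groups of basins can be computed from a presence-absence matrix; the toy matrix and the use of average-linkage (UPGMA) clustering are assumptions for illustration, since the clustering algorithm and software actually used in the study are not specified in this extract.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

# toy presence-absence matrix: rows = basins, columns = helminth species (hypothetical)
basins = ["Usumacinta", "Papaloapan", "Panuco", "Balsas"]
X = np.array([[1, 1, 1, 0, 1, 0],
              [1, 1, 0, 0, 1, 0],
              [0, 1, 0, 1, 0, 1],
              [0, 0, 1, 1, 0, 1]])

def sorensen(a, b):
    """Sorensen similarity: 2 * shared species / (richness_a + richness_b)."""
    shared = np.logical_and(a, b).sum()
    return 2.0 * shared / (a.sum() + b.sum())

n = len(X)
S = np.array([[sorensen(X[i], X[j]) for j in range(n)] for i in range(n)])
print(np.round(S, 2))                      # pairwise similarity matrix

# average-linkage (UPGMA) clustering on the complementary dissimilarity 1 - S
D = squareform(1.0 - S, checks=False)      # condensed distance vector
tree = linkage(D, method="average")
print(dendrogram(tree, labels=basins, no_plot=True)["ivl"])   # leaf order of the dendrogram
```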
In this work, drainage basins were considered the main units of comparison, and individual species occurrence represented the attributes of the individual drainage units Bothriocephalus acheilognathi Yamaguti, 1934 (Cestoda) that was found in 12 river basins The compiled database includes a total of 170 species of adult helminths from 85 genera and 34 families recorded from 16 families of freshwater species of fishes in Mexico this database has been already examined for taxonomical composition and endemism by Salgado-Maldonado and Quiroz-Mart\u00ednez Helminth species richness varies widely throughout drainage basins of the country . NeotropAnalysis of similarity, based on S\u00f8rensen\u2019s coefficient, of the relationships between helminth faunas of the 21 drainage basins for all species (excluding introduced species) showed three large clusters that exhibit low similarity (<10%) between them at species level. Results of the analyses carried out using the full species list (with the introduced species) show a somewhat disrupted pattern. The three main groups are still clearly represented in the dendrogram and the MDS ordination . However, it is worth noting that the sampling effort has been different for each of the Mexican biogeographical provinces. The tropics have been studied far more intensively, with a greater number of basins explored, more frequency in sampling, and more fish families and species sampled et al.Our analyses allowed the distinction of Neotropical, Nearctic and Pacific Mexican drainage basins. Fish parasite fauna shows that half (11/21) of the Mexican basins analysed in this work classify as Neotropical. The close relationship between the helminths of Chiapas-Usumacinta and Tabasco hydrological systems was expected as most of these records belong indeed to the R\u00edo Usumacinta drainage basin, sampled form its middle part in Chiapas and from its lower part in Tabasco. The bodies of water sampled from the Yucat\u00e1n Peninsula appeared as the area most associated with them, followed by the R\u00edo Papaloapan basin. This is further confirmed as distance between basins is taken into account: geographical proximity of the basins would allow frequent contacts and exchange of parasites that would explain high values of similarity between neighbouring basins. These similarities in helminth fauna are in accordance to the concept of the Usumacinta Ichthyological Province proposed by Miller In the same way, the close similarities observed between the second group of neotropical basins can be explained by the helminthological and the ichthyological composition of the faunas and by the geographical proximity of the basins. All these data confirm that parasite communities show decay in compositional similarity as a function of distance separating them. As shown by our results nearby host populations tend to have many parasite species in common, whereas distant ones share few Ictalurus balsanus and the goodeid Ilyodon whitei, in the R\u00edo Balsas basin, coupled with the record of three North American generalist helminths, provide a component of nearctic species in this basin. This is counterbalanced by several neotropical species of helminths associated to the cichlid Cichlasoma istlanum and the characid Astyanax aeneus. The neotropical species component is even larger in the three remaining basins mentioned , and is responsible for the strong similarity between these basins, while the presence of an ictalurid species in each basin contributes to the nearctic component. 
As for the main two neotropical groups of helminths discussed previously, the composition of the helminthofauna in the Balsas, P\u00e1nuco, Ayuquila and Santiago basins is explained because the geographical distribution of the helminth fauna reflects the distribution of the ichthyological fauna. Traditionally, the Lerma and Santiago basins have been considered as a single biogeographic unit for different taxa particularly freshwater fishes The Balsas and P\u00e1nuco river basins as well as the Ayuquila and Santiago river basins have a mixture of nearctic and neotropical helminth species associated with the fish families that inhabit them. The presence of the ictalurid host I. punctatus The helminth composition and the position in the multivariate analyses of the bodies of water of the Cuatro Ci\u00e9negas Valley, geographically located in the Nearctic realm, can be explained by the fact that only cichlids and characids have been examined in this area, and that these are essentially neotropical fish families Agonostomus monticola, which migrate along the coast, entering continental bodies of water from brackish environments. The absence of these eleotrids and mugilids from nearctic environments results in a strong differentiation of this set of basins compared to the Pacific ones. These Pacific basins can also be differentiated from the neotropical ones by the peripheral distribution and the number of species of eleotrids and mugilids, given the fact that species of either families are rarely examined in more inland neotropical basins.The third set of basins, the Pacific affinities group, includes the oases of Baja California Sur, rivers and streams near Chamela Jalisco, and bodies of water belonging to either, the R\u00edo Papagayo or R\u00edo Atoyac basins. This group can be explained by the distribution of the basins along the Pacific versant of Mexico, by the geographic proximity between the Papagayo and Atoyac basins and between the Baja California and Ayuquila basins and also by the way their fish hosts families and their parasites disperse. The records of helminths in these basins are primarily associated with eleotrids and a species of mullet What this paper shows is that the helminth fauna of freshwater fishes of Mexico can characterise hydrological basins the same way as the fish families do. The data presented above show that the basins of south-eastern Mexico harbour a predominantly Neotropical helminth fauna whereas the basins of the Mexican Highland Plateau and the Nearctic area of Mexico harbour Nearctic fauna, following the same pattern of distribution of fish host families. The analysis shows that the composition of the helminth fauna of each particular basin depends on the structure of the fish community rather than on the limnological characteristics and geographical position of the basin itself and that the similarity decreases with increasing distance between drainage basins, the same that holds for their host families Figure S1Dendrogram resulting from the similarity matrix based on the S\u00f8rensen Similarity Index for all river basins with introduced species. 
Groups are based on parasite species composition of the basin.(TIF)Click here for additional data file.Figure S2Non Metric multidimensional scaling (nMDS) ordination plot resulting from the resemblance matrix of the river basins, based on the S\u00f8rensen Similarity Index, with introduced species.(TIF)Click here for additional data file.Table S1Most widespread species helminths of freshwater fishes of Mexico.(DOCX)Click here for additional data file."} +{"text": "There are substantial geographic variations in coronary heart disease (CHD) mortality rates in England that may in part be due to differences in climate and air pollution. An ecological cross-sectional multi-level analysis of male and female CHD mortality rates in all wards in England (1999\u20132004) was conducted to estimate the relative strength of the association between CHD mortality rates and three aspects of the physical environment - temperature, hours of sunshine and air quality. Models were adjusted for deprivation, an index measuring the healthiness of the lifestyle of populations, and urbanicity. In the fully adjusted model, air quality was not significantly associated with CHD mortality rates, but temperature and sunshine were both significantly negatively associated (p<0.05), suggesting that CHD mortality rates were higher in areas with lower average temperature and hours of sunshine. After adjustment for the unhealthy lifestyle of populations and deprivation, the climate variables explained at least 15% of large scale variation in CHD mortality rates. The results suggest that the climate has a small but significant independent association with CHD mortality rates in England. Geographical inequalities in coronary heart disease (CHD) mortality rates in England are substantial and persistent. Since the late 1970s, male CHD mortality rates have been at least 30% higher in the North of England than in the South East, and the differences between North and South for female rates have been even larger It is unclear how much of the geographic variation in CHD in England is a result of differences in the physical environmental. This paper explores the impact of climate and air pollution on geographic variation in CHD mortality rates. Plausible mechanisms for the effect of these factors on CHD have been suggested. Cold weather increases blood pressure, blood cholesterol, blood viscosity (thereby increasing the risk of thrombosis), and could induce a mild inflammatory response thereby increasing blood coagulability st January 2003. Henceforth these areas are referred to simply as \u2018wards\u2019. There are 7,929 wards in England, which can be grouped into 355 local authorities (LAs). Mortality data were provided by the Office for National Statistics for the years 1999 to 2004 (inclusive) stratified by sex, ward and five year age group. The mortality data included all deaths in England where CHD was recorded as the primary cause of death . Change in ICD coding over the data collection period is thought to have had little impact on reporting of CHD mortalities The analyses reported in this paper utilise ecological regression models, with all standard table wards as the unit of analysis. Standard table wards are a statistical set of boundaries based on the electoral ward boundaries as of 1Data on mean maximum temperature and total hours of sunshine were provided by the Meteorological Office for 37 English weather stations for every month between 2000 and 2002. 
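These station records were interpolated to ward level with second-order trend surface modelling, as described in the next paragraph. As a rough, self-contained illustration of that interpolation step (not the authors' implementation), the sketch below fits a quadratic surface in easting and northing by least squares and evaluates it at ward centroids; all coordinates and temperature values are invented.

```python
import numpy as np

# Invented station coordinates (easting/northing, km) and mean maximum temperatures (°C);
# real station locations and monthly values would be substituted here.
rng = np.random.default_rng(0)
stations = rng.uniform(0, 500, size=(37, 2))
temp = 15.0 - 0.01 * stations[:, 1] + rng.normal(0.0, 0.3, 37)  # cooler toward the north

def quadratic_terms(xy):
    """Design matrix for a second-order trend surface: 1, x, y, x^2, xy, y^2."""
    x, y = xy[:, 0], xy[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x ** 2, x * y, y ** 2])

# Least-squares fit of the surface to the station values.
coef, *_ = np.linalg.lstsq(quadratic_terms(stations), temp, rcond=None)

# Evaluate the fitted surface at (invented) ward centroids to get ward-level estimates.
ward_centroids = rng.uniform(0, 500, size=(5, 2))
print(quadratic_terms(ward_centroids) @ coef)
```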
The data were used to generate model-based ward-level estimates of mean maximum temperature and total hours of sunshine for each month between 2000 and 2002 using second order trend surface modelling Air pollution data were collected in 2001 for the development of the physical environment domain of the Index of Multiple Deprivation 2004 2An index of unhealthy lifestyle was used as a measure of the behavioural risk factor profile of populations. This index was derived from a principal components analysis of five sets of ward-level synthetic estimates of the prevalence of cardiovascular risk factors, specifically consumption of less than five portions of fruit and vegetables per day Deprivation and urbanicity are other potential confounders of the relationship between climate, air pollution and CHD mortality rates. Deprivation in England is higher in the North than in the South (following a similar gradient to mean temperature and hours of sunshine), and air pollution is higher in more urban areas. The deprivation index used in these analyses was the ward-level Carstairs index Initially exploratory data analysis techniques were used to investigate correlations between the exposure variables and assess the distribution of the outcome variables. Then baseline multi-level regression models ) were built with male and female CHD mortality rates as outcome variables, in order to get a baseline measurement of residual variance at ward-level and LA-level. Then univariate and multivariate multi-level models were built with the physical environment, unhealthy lifestyle index and deprivation variables as exposure variables. Inclusion of variance at ward-level and LA-level is important as climate and air pollution vary on different spatial scales. Finally, equivalent spatial error regression models were built with the same exposure and outcome variables. These were built to assess whether the associations derived in the multi-level models were adversely affected by spatial autocorrelation bias. Results from the multi-level models were the primary outcomes, as they allow for an assessment of how much variance is explained by the exposure variables both at ward-level and LA-level. These results are used as proxies for explanation of \u2018small-scale\u2019 variation and \u2018large-scale\u2019 variation . The estimation technique used for the multi-level modelling was iterative generalised least squares (IGLS), and the spatial error modelling used maximum likelihood techniques, ensuring that the results of the multi-level models and the spatial error models are comparable.Both male and female ward-level age-standardised CHD mortality rates were reasonably normally distributed, and hence suitable for regression analyses. The ward-level and LA-level variance in the baseline models is shown in The physical environment variables contribute little to the explanation of ward-level variation. However, they clearly contribute to the explanation of LA-level variance in mortality, even after adjustment for urbanicity, the unhealthy lifestyle and deprivation indices: the models containing only the confounding variables (model E) explained around 65% of the LA-level variance, whereas this increased to nearly 80% in the final model (model F).The spatial error univariate and multivariate models showed good agreement with the multi-level models, suggesting that spatial autocorrelation bias has not substantially affected these findings. 
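A minimal sketch of the kind of two-level model described above, with ward-level CHD rates, a random intercept for each local authority, and climate plus confounder covariates, fitted to synthetic data. All variable names and values are invented, and statsmodels' default likelihood-based fitting stands in for the IGLS estimation used in the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic data: 200 wards nested in 20 local authorities (LAs); all values invented.
rng = np.random.default_rng(42)
n_wards, n_las = 200, 20
la = rng.integers(0, n_las, n_wards)
df = pd.DataFrame({
    "la": la,
    "temperature": rng.normal(13.0, 1.5, n_wards),   # mean maximum temperature
    "sunshine": rng.normal(1450.0, 120.0, n_wards),  # hours of sunshine
    "deprivation": rng.normal(0.0, 1.0, n_wards),    # Carstairs-style index
})
la_effect = rng.normal(0.0, 8.0, n_las)[la]          # LA-level variance component
df["chd_rate"] = (250.0 - 4.0 * df["temperature"] - 0.02 * df["sunshine"]
                  + 15.0 * df["deprivation"] + la_effect
                  + rng.normal(0.0, 10.0, n_wards))  # ward-level residual

# Ward-level outcomes with a random intercept for each local authority.
model = smf.mixedlm("chd_rate ~ temperature + sunshine + deprivation",
                    data=df, groups=df["la"])
print(model.fit().summary())
```

The estimated group (LA-level) and residual (ward-level) variances from such a fit are what allow large-scale and small-scale explained variation to be reported separately, as in the tables referred to above.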
The parameter estimates in the spatial error models tended to be closer to zero than in the multi-level models, demonstrating that spatial autocorrelation (when unaccounted for) tends to result in a bias away from the null hypothesis. The difference in the parameter estimates between the multi-level and spatial error models was generally in the region of around 10% to 20% (results not shown).Two local climate measures and a measure of air pollution were found to explain - without accounting for other factors - nearly 60% of large scale geographic variation in CHD mortality rates but did little to explain small scale geographic variations in CHD rates. The strength of the relationships was strongly attenuated when deprivation, urbanicity and behavioural risk factor profiles of populations were added as explanatory variables. A substantial amount of large scale geographic variation in CHD rates is explained by physical environment variables even after adjustment for deprivation, urbanicity and behavioural risk factor profiles of populations \u2013 at least 10% of large scale variation in mortality rates. These models suggest that the climate has a small but independent association with CHD mortality rates in England \u2013 a ward with the lowest observed temperature had 40 more male deaths per 100,000 and 25 more female deaths per 100,000 than a ward with highest observed temperature, all else being equal. In comparison, applying excess winter mortality from CHD for England in 2004/05 This is the first instance of a study of geographic variation in small area CHD rates that accounts for behavioural risk factor profiles of populations, deprivation, and a number of measures of the physical environment within the same set of analyses. The multi-level design of the analyses allowed for the explanation of large scale and small scale geographic variation in CHD rates simultaneously, which allowed for disentanglement of the influence of variables that are effective at the different scales. The spatial error models allowed for an assessment of whether the multi-level models were prone to spatial autocorrelation bias, which was shown not to be the case. The systematic approach to building models that was utilised here allowed for a comprehensive assessment of the impact of confounding, and for some disentanglement of the amount of geographic variation that is explained by the climate and air pollution variables.The results presented in this paper are derived from ecological cross-sectional analyses. Because of the cross-sectional nature of the studies, the results cannot confirm causal relationships. The relationship between climate and CHD rates presented here may be a result of residual confounding. Economic deprivation, unhealthy lifestyles and the climate generally follow the same North-South gradient in England, and the associations shown in the analyses may be a result of errors in the measurement of economic deprivation and unhealthy lifestyles, or could be due to unmeasured and potentially confounding factors such as utilisation and quality of health care. A previous study of women in 23 towns in Great Britain suggested that controlling for aspirin and statin use removed the residual variance in adjusted cardiovascular prevalence rates in England (but not in Scotland) The results presented here are in general agreement with the UK literature on geographic variation in CHD rates, in that not all of the variation in CHD rates can be explained by lifestyle factors alone. 
The British Regional Heart Study (BRHS) provides the most comparable results for the impact of climate on geographic variation in heart disease in England, despite the widely differing methodology employed in the study compared with the analyses reported here. Analysis of phase one of the BRHS suggested that in 1969\u201373 climate variables had a modest effect on variation in local CHD mortality rates after adjustment for deprivation The results of this paper extend the results of phase one of the BRHS in the following ways: all wards in England were included in the analysis; a measure of the behavioural risk factor profile of populations of areas was included; an exploration of both small scale and large scale geographic variation in CHD rates was conducted; including wards from rural areas allowed for urbanicity to be included as a potential explanatory variable; more sophisticated estimates of air pollution and climate were used, which allowed for modelled estimates of these measures to be applied to all wards in England. The results of this paper complement the results of phase two of the BRHS, but extend the interpretations to women and to men of all ages. In addition, the analyses reported here were sufficiently powered at the area-level to allow for inclusion of several environmental variables in the models simultaneously.The results presented here suggest that air pollution has a small positive association with CHD mortality rates in small areas. A similar finding was shown by Maheswaran et al. in an analysis of census enumeration districts in Sheffield The analyses reported here suggest that, on top of excess winter mortality, CHD mortality rates in the coldest parts of England are generally higher compared to the warmest parts . Whilst this difference is small compared to differences in the lifestyle of populations, if the relationship is shown to be causal then it is an area which could be targeted in order to reduce geographic inequalities in CHD. Analyses of excess winter mortality in different regions of Europe have shown that the excess mortality is generally greater in countries with milder climates and this has led researchers to suggest that the impact of a cold climate on cardiovascular health can be substantially reduced if the population were better prepared for cold weather by improving household heating and insulation and wearing more appropriate clothing during cold periods of the year The work reported in this manuscript has not previously been published elsewhere, or submitted for publication elsewhere."} +{"text": "It is believed that immunosuppression is the sole cause for the occurrence of rhinosinusitis in hematopoietic stem cell transplant (HSCT). There is a high incidence of sinusitis in recipient's patients, especially those with Chronic Graft Versus Host disease (GVHD). Histopathological abnormalities were described in recipient's sinus mucosa comparing to the immunocompetents patients. There are also mucosal abnormalities related to the cytotoxicity in the transplanted patients with chronic GVHD, but no difference in ultrastructure between HSCT patients with and without GVHD, except increased goblet cells in patients without GVHD. 
The relation between the sinunasal mucosa abnormalities of patients with and without GVHD and rhinosinusitis is not well established yet.To verify the ultrastructure of the sinunasal mucosa of HSCT with and without GVHD with rhinosinusitis in order to understand the cause of high sinusitis incidence in recipients with and without GVHD.A preliminary prospective study with statistical analysis of data obtained from the evaluation of the mucosa of the uncinate process by transmission electron microscopy of those recipients with (10) and without GVHD (9) with rhinosinusitis.93% of transplanted patients with GVHD and 62% of those without GVHD had 2 or more rhinosinusitis. Only the presence of microvilli was significantly increased in patients without GVHD. There was no significant difference in the cilia number, cilia ultrastructure, squamous metaplasia, goblet cells or citoplasmatic vacuolization between those groups.The recurrence of rhinosinusitis seems to be higher in chronic GVHD patients, however no abnormalities were found in the ultrastructure of their sinunasal mucosa, except increased microvilli in those without GVHD. This is a preliminary study and an increased sample might modify statistical analysis, as well as the comparison of recipients without rhinosinusitis."} +{"text": "Plasmodium berghei pbNK65) the cytokine profile of infected animals is significantly altered in relation to the percent parasitemia. Preliminary data from our laboratory have shown alterations of the fatty acid profile in lungs of mice infected by pbNK65. However, it is not known if the molecular organization and lipid composition of pulmonary surfactant change during malaria ARDS. Surfactant once secreted forms organized lipid structures referred as large aggregates (LA). During respiration an inactive form of surfactant is also produced, the small aggregate form (SA). We explored the lipid profile both of the aggregates and of the lung tissue from non infected and infected animals with pbNK65 and with Plasmodium chabaudi (PcAS), a Plasmodium strain that does not induce lung pathology.One of the lethal complications of malaria is acute lung injury and in its more severe form, acute respiratory distress syndrome (ARDS). In the murine model of malaria-associated ARDS . The membrane enriched fractions of the lungs from NK65 mice are characterized by a significant increase of phosphatidylcholine and phosphatidylethanolamine. No differences are present in the other classes of PL and in the AS group.The increase in PL in the lung tissue is a common response to alveolar inflammation. This modification, absent in AS mice, appears to be correlated with malaria ARDS and consistent with the eosinophilic hyaline membrane deposition and cell infiltration observed in the alveoli of NK65 mice. The BAL fluid of NK65 mice is characterized by high increase of protein levels indicative of oedema and alveolar leakage due to the lung pathology. On the contrary, the total PL increase present also in AS groups, seems related to malaria infection but not to lung pathology. Increased total protein levels are present in the LA fraction of NK65 mice, probably due to blood-derived proteins being incorporated into or associated with these microstructures in the alveolar hypophase. 
The increase of LPC, a known inhibitor of surfactant activity, in the SA fraction of NK65 mice is consistent with the action of phospholipases which are known to be present in the lungs during inflammatory injury."} +{"text": "There is extensive research into eating disorder risk factors, and recently the focus has moved to investigating the mechanisms underlying these factors. The current study examines the interrelationships between eating disorder symptoms and two proposed risk factors: perfectionism and media internalisation. This study uses data collected as part of the Prevention Across the Spectrum randomized controlled trial, which involves approximately 2000 Grade 7 and 8 adolescents across Australia. Students were randomly allocated to one of three eating disorder prevention programs or a control group. Students were assessed in 4 waves and the assessment included measures of perfectionism , media internalisation , and shape and weight concerns (Eating Disorder Examination Questionnaire). Preliminary analyses using a sample of baseline data suggest that the relationship between perfectionism and eating disorder symptoms is mediated by media internalisation, with differential effects depending upon the dimension of perfectionism and the outcome measure used in the analysis. Part two of this study will investigate the effects of the intervention programs on this relationship and outcome. The findings presented will have implications for our understanding of the development and prevention of eating disorder symptomatology.Prevention stream of the 2013 ANZAED Conference.This abstract was presented in the"} +{"text": "Dear Editor,We enjoyed reading the excellent article by Abenavoli and colleagues on the clinical role of Ribavirin (RBV), and particularly the selection and maintenance of the optimal RBV dosing strategy that are required to achieve sustained viral suppression in patients with chronic hepatitis C (CHC) infection. They concluded that contemporary therapy for CHC infection is to deliver doses of both RBV and pegylated interferon-alpha (PEG-IFN) that confer optimal antiviral efficacy for a sufficient time to minimize viral relapse. At the same time, it is important to minimize the impact of side effects that might erode the effectiveness of therapy due to dose reductions below the level of therapeutic efficacy, or because the patient is unable to complete an optimal treatment course . The ear"} +{"text": "To illustrate results of both classical and Bayesian methods of analysis, I compared Bayes and empirical Bayes esimates of abundance and density using recaptures from simulated and real populations of animals. Real populations included two iconic datasets: recaptures of tigers detected in camera-trap surveys and recaptures of lizards detected in area-search surveys. In the datasets I analyzed, classical and Bayesian methods provided similar \u2013 and often identical \u2013 inferences, which is not surprising given the sample sizes and the noninformative priors used in the analyses.In capture-recapture and mark-resight surveys, movements of individuals both within and between sampling periods can alter the susceptibility of individuals to detection over the region of sampling. In these circumstances spatially explicit capture-recapture (SECR) models, which incorporate the observed locations of individuals, allow population density and abundance to be estimated while accounting for differences in detectability of individuals. 
In this paper I propose two Bayesian SECR models, one for the analysis of recaptures observed in trapping arrays and another for the analysis of recaptures observed in area searches. In formulating these models I used distinct submodels to specify the distribution of individual home-range centers and the observable recaptures associated with these individuals. This separation of ecological and observational processes allowed me to derive a formal connection between Bayes and empirical Bayes estimators of population abundance that has not been established previously. I showed that this connection applies to In capture-recapture and mark-resight surveys, movements of individuals both within and between sampling periods can alter the susceptibility of individuals to detection over the region of sampling. In these circumstances spatially explicit capture-recapture (SECR) models, which incorporate the observed locations of individuals, allow population density and abundance to be estimated while accounting for differences in detectability of individuals.http://www.mrc-bsu.cam.ac.uk/bugs/winbugs/contents.shtml) and JAGS (http://mcmc-jags.sourceforge.net)). Program SPACECAP provides a more specialized implementation of this approach A variety of SECR models have been developed to accommodate different kinds of sampling protocols and capture methods If the region occupied by the population is sufficiently large, the Poisson limit theorem suggests an asymptotic equivalence between the Poisson and binomial SECR models and their estimators of The asymptotic equivalence of binomial and Poisson SECR models may not apply in small regions or small populations. In a simulation study where recapture locations were simulated for area-search surveys The source(s) of the apparent difference in performance of classical and Bayesian estimators of density cannot be inferred from this simulation study. Were the differences associated with differences in modeling assumptions or with differences in methods of inference and point estimators (MLE vs. posterior mean or mode)? Also, owing to the sizeable difference in number of data sets analyzed in each simulation scenario , the estimated bias and coverage of the Bayesian estimators may have included substantial Monte Carlo error, casting doubt on the statistical significance of the reported difference in performance of classical and Bayesian methods.every Poisson point-process model of SECR data and provides theoretical support for an estimator of abundance proposed for recaptures in trapping arrays To shed light on this issue and to establish a formal relationship between classical and Bayesian estimators of abundance, I developed two Bayesian models of SECR data using Poisson point processes. One model is for the analysis of recaptures observed in trapping arrays; the other is for the analysis of recapture locations observed in area searches. In formulating these models I used distinct submodels to specify the distribution of individual home-range centers and the observable recaptures associated with these individuals. This separation of ecological and observational processes allowed me to derive a formal connection between Bayes and empirical Bayes estimators of population abundance that has not been established previously. I showed that this connection applies to To illustrate the results of both classical and Bayesian methods of analysis, I compared Bayes and empirical Bayes esimates of abundance and density for both simulated and real populations of animals. 
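To make the trapping-array observation model concrete before the formal development, the sketch below simulates activity centers from a homogeneous Poisson process over a unit square and binomial detections at a grid of traps, with a half-normal function of the trap-to-center distance governing detection. The density, detection parameters, and trap layout are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

# Region: the unit square (1 km x 1 km) with a homogeneous density of activity centers.
density = 50.0                                   # expected centers per km^2 (illustrative)
n_individuals = rng.poisson(density)             # Poisson number of individuals
centers = rng.uniform(0.0, 1.0, size=(n_individuals, 2))

# A 5 x 5 grid of traps in the interior of the region.
grid = np.linspace(0.2, 0.8, 5)
traps = np.array([(x, y) for x in grid for y in grid])

# Half-normal detection: p(d) = p0 * exp(-d^2 / (2 * sigma^2)) per trap and occasion.
p0, sigma, n_occasions = 0.3, 0.1, 5
dists = np.linalg.norm(centers[:, None, :] - traps[None, :, :], axis=2)
p = p0 * np.exp(-dists ** 2 / (2.0 * sigma ** 2))

# Binomial numbers of detections of each individual at each trap over the occasions.
counts = rng.binomial(n_occasions, p)
detected = counts.sum(axis=1) > 0
print(f"{n_individuals} simulated individuals, {detected.sum()} detected at least once")
```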
Surprisingly few analyses of SECR data have assumed spatial variation in individual density In this section I describe two SECR models, one for the analysis of recaptures observed in trapping arrays and another for the analysis of recaptures observed in area searches. These models can be specified hierarchically using one component to describe the process that generates the spatial distribution of individuals in the population and a second component to describe the process that leads to an observable sample of individuals. The first component, a Poisson point-process model, is identical for both SECR models; therefore, I describe this component first and follow with descriptions of the underlying assumptions and likelihood functions of the two observation models.This component of the SECR models is similar to the model of home-range centers proposed by I assume that the activity centers are a realization of an inhomogeneous Poisson point process parameterized by a first-order intensity function expected density of individuals at location In the context of SECR models, We are now equipped with all of the components needed to derive the likelihood function of the model. Let In classical (non-Bayesian) statistics, the likelihood function is often written as a function of not detected in any of the In reality, only The likelihood function of the ''complete data'' locations, I assume that each individual moves randomly around its activity center jth search of region jth survey cannot be used in the likelihood function of the observable data. Instead, we must factor the joint density into two components, one for the locations of individuals that are detected and another for the marginal probability of the non-detections obtained by integrating the unobserved recapture locations from (7).The first component of the joint density is associated with the value of The second component of the joint density is associated with the value of Note that (9) is based on the assumption that jth search of region ith individual during the jth area-search survey. Our objective is to estimate not detected during the We now have all of the elements needed to derive the likelihood function of the model. Let The likelihood function of the \u201ccomplete data\u201d (wherein The maximum-likelihood estimator (MLE) of Classical (non-Bayesian) and Bayesian estimators of population size To develop the Bayesian estimator of The MCMC algorithm used to fit this model is given in Summaries of the posterior distribution of The density of individuals can vary substantially in regions of spatially varying habitat. To mimic this situation, I simulated a population of individuals living in a square , but few individuals were observed more than twice at the same trap (average\u200a=\u200a1.2 detections per trap).I fitted the SECR model for recaptures in trapping arrays to these data by partitioning the region occupied by the population into a grid of square pixels (width\u200a=\u200a160 m) for numerical integration. Further reductions in pixel width did not change the value of the maximized log likelihood. I used maximum-likelihood estimates of the model's parameters to initialize the Markov chain used in the Bayesian analysis. In addition, I used sampler to estimClassical and Bayesian estimates of the model parameters and of their 95% confidence or credible intervals are quite similar . Strong quality and 3. 
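One way the pixel-based calculation behind an empirical Bayes abundance estimate can be organised under a Poisson point-process formulation is sketched below: the number of detected individuals plus the integral, over a grid of candidate activity-center pixels, of the intensity times the probability of never being detected. The function, grid, and all parameter values are illustrative assumptions, not the paper's code or its estimates.

```python
import numpy as np

def eb_abundance(n_detected, pixel_xy, pixel_intensity, pixel_area,
                 traps, p0, sigma, n_occasions):
    """Detected individuals plus the expected number of undetected ones,
    integrated over a pixel grid of candidate activity-center locations."""
    d = np.linalg.norm(pixel_xy[:, None, :] - traps[None, :, :], axis=2)
    p_trap = p0 * np.exp(-d ** 2 / (2.0 * sigma ** 2))
    # Probability that an individual centered in a pixel is never detected at any trap.
    p_never = np.prod((1.0 - p_trap) ** n_occasions, axis=1)
    expected_undetected = np.sum(pixel_intensity * p_never) * pixel_area
    return n_detected + expected_undetected

# Toy example: homogeneous intensity of 50 centers/km^2 on a 20 x 20 pixel grid of the
# unit square, a 5 x 5 trap grid, and made-up detection parameters and detections.
grid = np.linspace(0.025, 0.975, 20)
pixels = np.array([(x, y) for x in grid for y in grid])
traps = np.array([(x, y) for x in np.linspace(0.2, 0.8, 5)
                          for y in np.linspace(0.2, 0.8, 5)])
n_hat = eb_abundance(n_detected=32, pixel_xy=pixels,
                     pixel_intensity=np.full(len(pixels), 50.0),
                     pixel_area=(1.0 / 20) ** 2, traps=traps,
                     p0=0.3, sigma=0.1, n_occasions=5)
print(round(n_hat, 1))
```

In a fitted model the pixel intensities and detection parameters would be replaced by their maximum-likelihood estimates (for empirical Bayes) or integrated over their posterior (for a fully Bayesian estimate).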
Tognormal .I fitted the SECR model for recaptures in trapping arrays to a set of data that have been analyzed previously using both non-spatial and spatial capture-recapture models 2 (20.7 km \u00d7 37.3 km). To fit the model, I assigned In the absence of spatial covariates of tiger density, I fitted the model based on a homogeneous Poisson point process using both classical and Bayesian methods of analysis. The minimum rectangular region needed to encompass the entire trapping array spanned an area of 772 km sampler to estim\u22122. The probabilities of detecting these tigers appears to be very low since the estimated maximum detection probability is only about 0.02. For comparison with a previous analysis of these data 2). 2) . The empirical Bayes and Bayes estimates of N are also quite similar since the posterior distribution appears to be approximately lognormal within a 9-ha square plot (width\u200a=\u200a300 m) located in southwestern Arizona, USA. During each survey, this plot was searched thoroughly for lizards, which were marked and released after noting their capture locations. A total of 14 surveys were conducted over 17 days producing 134 captures I fitted the SECR model for area-search surveys to a set of data that have been described and analyzed previously In the absence of spatial covariates of lizard density, I fitted the model based on a homogeneous Poisson point process using both classical and Bayesian methods of analysis. To fit the model, I assigned sampler to estim\u22121. For comparison with previous analyses of these data, I estimated the number of lizards present in the 9-ha surveyed plot. The empirical Bayes and Bayes estimates of Classical and Bayesian estimates of the model parameters and their 95% confidence or credible intervals are quite similar . The ave similar .I repeated the simulation study of To perform classical and Bayesian analyses of each simulated data set, I partitioned sampler to estimClassical and Bayes estimates of population abundance and average density were similar in most of the scenarios of the simulation study and 8. TIn formulating the SECR models I used a Poisson point process to specify the spatial distribution of individual activity centers. This modeling assumption allowed me to derive estimators of population abundance and density for use in classical or Bayesian analyses and to compare the operating characteristics of these estimators in analyses of real and simulated data sets.every Poisson point-process model of SECR data. The only difference between models is the functional form of The Bayes and empirical Bayes estimators of population abundance are closely related. Both estimators depend on the full conditional distribution of the number An estimator of abundance proposed by An empirical Bayes approach for estimating abundance is also feasible if a binomial point-process model is fitted to SECR data. In this case the conditional distribution of the number In the datasets I analyzed, classical and Bayesian methods provided similar \u2013 and often identical \u2013 inferences, which is not surprising given the sample sizes and the noninformative priors used in the analyses. 
For example, in the analysis of recaptures of tigers in trapping arrays and of lizards in area searches, classical and Bayesian estimates of the model parameters and their 95% confidence or credible intervals were quite similar and were consistent with previous analyses of these data The Bayesian SECR models that I described can be extended in a variety of useful and important ways. For example, the ecological process submodel can be extended to specify stochastic sources of spatial variation in density that induce clustering of individual activity centers . The Bayesian SECR models also can be revised to include spatial or non-spatial sources of variation in detectability of individuals or alternative functional forms for models of detectability Appendix S1Derivation of empirical Bayes estimator of Var((PDF)Click here for additional data file."} +{"text": "Food allergies are adverse immune reactions to food proteins that can range from immediate, potentially life-threatening reactions to chronic disorders such as atopic dermatitis and allergic gastrointestinal disorders. These adverse reactions can be IgE-mediated; cells mediated or result from a combination of both. No effective preventive strategy or curative protocol is currently established. Various mouse models have been developed which mirror some of the key elements of food allergies to a high degree. These models have extended our understanding on the immunological and patho-physiological mechanisms of the allergic immune response and have been used for the initial testing of preventive and therapeutic agents. In particular the prenatal and early postnatal period seem to be a critical window for the establishment and maintenance of a normal immune response towards food allergens. This presentation will focus on the role of the maternal adaptive immune response and the nature of the diaplacental antigen transfer during the prenatal period in preventing the onset of allergies in the offspring. In addition, during the early postnatal period several host factors can influence the acquisition of oral tolerance. The interaction of the developing immune system with microbial structures seems to play a decisive role for the induction of local and systemic tolerance. Several studies demonstrated that continuous administration of live Lactobacillus rhamnosus GG (LGG) during gestation and the breastfeeding period inhibited the onset of allergen-induced sensitization and airway disease in the offspring which is associated with the induction of T-regulatory cells. Recent findings suggest that heat treated and soluble factors may also have the ability to suppress the allergic immune response. These data may help to interpret previous data from successful clinical trials and provide an outlook on future intervention strategies."} +{"text": "The trans-Atlantic slave trade dramatically changed the demographic makeup of the New World, with varying regions of the African coast exploited differently over roughly a 400 year period. When compared to the discrete mitochondrial haplotype distribution of historically appropriate source populations, the unique distribution within a specific source population can prove insightful in estimating the contribution of each population. Here, we analyzed the first hypervariable region of mitochondrial DNA in a sample from the Caribbean island of Jamaica and compared it to aggregated populations in Africa divided according to historiographically defined segments of the continent's coastline. 
The results from these admixture procedures were then compared to the wealth of historic knowledge surrounding the disembarkation of Africans on the island.In line with previous findings, the matriline of Jamaica is almost entirely of West African descent. Results from the admixture analyses suggest modern Jamaicans share a closer affinity with groups from the Gold Coast and Bight of Benin despite high mortality, low fecundity, and waning regional importation. The slaves from the Bight of Biafra and West-central Africa were imported in great numbers; however, the results suggest a deficit in expected maternal contribution from those regions.When considering the demographic pressures imposed by chattel slavery on Jamaica during the slave era, the results seem incongruous. Ethnolinguistic and ethnographic evidence, however, may explain the apparent non-random levels of genetic perseverance. The application of genetics may prove useful in answering difficult demographic questions left by historically voiceless groups. Of the estimated ten million people captured in Africa between the 16The island of Jamaica was sparsely inhabited by indigenous sea faring peoples when it was established as a Spanish settlement in 1509. These peoples either fled the island or were eradicated by the time of the English conquest of Jamaica in 1655, the result of forced labour and European diseases imposed by the Spanish . Due to th century.Between 1655 and 1807, captive Africans embarked from the most westerly shores of Senegambia and along the coast eastward, as far as Madagascar. These regions is passed entirely matrilineally and accumulates mutations along the maternal line, providing a unique opportunity to explore the matriline of select groups. Once considering the relatively rapid mutation rate of mtDNA when compared to the nuclear genome, it is possible to create detailed phylogenies to explore the matrilineal relatedness of people . GroupinUsing historically defined geographical parameters, the distribution of mtDNA haplogroup profiles in discrete founding populations can be combined in hopes of identifying and replicating the source populations found on Jamaica during the slave trade era. This approach has been used with success in comparing great swaths of New World populations of African origin to various macro-regions in Africa, showing a concurrence with the historical literature, as well as differing contributions of African source populations in North, Central, and South America . Using aThe aim of this study is to apply similar admixture modelling to the mtDNA distribution of Jamaica with an eye toward geographic sensitivity, using historical data on the coasts of origin as source populations in an attempt to investigate the genetic vestiges of population constraints present during the slave era in Jamaica. 
Considering the overwhelming proportion of slaves imported from the Bight of Biafra and West-central Africa just before the end of the slave trade, as well as the continuously high levels of mortality among slaves, it is hypothesized that the mtDNA haplogroup profile distribution will resemble these latter sources more closely than regions exploited earlier in the slave trade.The mitochondrial haplogroup profiles observed in the sample of Jamaicans are presented in Additional file Only one Jamaican profile could be undoubtedly classified as belonging to typical Native American haplogroups A2; several exact matches were observed in El Salvador , Costa RAs shown in Additional file The results of the exact test of population differentiation performed on the haplogroup profile distributions confirm the discrete nature of each African coast (results not shown). In order to estimate the proportion of maternal ancestry from each major slaving coast present in the population of modern Jamaican mtDNA, admixture models using both haplogroup profiles and haplotype similarities from the African coasts were fitted to the pool of sampled Jamaicans. Combinations excluding marginal populations were also explored to investigate any contribution these groups may have on the regional haplogroup profile distribution. These estimated admixture coefficients are summarized in Tables th century and the 1790 s.Varying parts of the western coast of Africa contributed to the British trans-Atlantic slave trade in differing intensities over differing periods; however, the Gold Coast embarked slaves for the New World in consistently great numbers between the beginning of the 18Using haplogroup distributions to calculate parental population contribution, the largest admixture coefficient was associated with the Gold Coast (0.477 \u00b1 0.12), suggesting that the people from this region may have been consistently prolific throughout the slave era on Jamaica. The diminutive admixture coefficients associated with the Bight of Biafra and West-central Africa is striking considering the massive influx of individuals from these areas in the waning years of the British Slave trade. When excluding the pygmy groups, the contribution from the Bight of Biafra and West-central rise to their highest levels , though still far from a major contribution. When admixture coefficients were calculated by assessing shared haplotypes, the Gold Coast also had the largest contribution, though much less striking at 0.196, with a 95% confidence interval of 0.189 to 0.203. Interestingly, when haplotypes are allowed to differ by one base pair, the Jamaican matriline shows the greatest affinity with the Bight of Benin, though both Bight of Biafra and West-central Africa remain underrepresented.The results of the evaluation on genetic diversity and demography of the parental populations and of the Jamaican mitochondrial gene pool are summarized in Table The African Diaspora in Jamaica is the result of a well-documented trade in human lives for just over 150 years motivated almost entirely by the rise in demand for luxury goods in Western Europe. By taking historical African embarkation points into account, we compared estimates of maternal contribution of each parental population with historical disembarkation records. 
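The admixture coefficients discussed below were estimated with a Markov-chain Monte Carlo posterior sampling scheme, as described in the Methods. As a simpler stand-in that conveys the underlying idea, the sketch below decomposes a pooled haplogroup frequency profile into a non-negative mixture of source-region profiles constrained to sum to one, by constrained least squares. All haplogroup classes and frequencies are invented for illustration and do not reproduce the study's data.

```python
import numpy as np
from scipy.optimize import minimize

# Invented haplogroup frequency profiles (each column sums to 1) for four coastal
# source regions over five pooled haplogroup classes, plus a pooled target profile.
regions = ["Gold Coast", "Bight of Benin", "Bight of Biafra", "West-central Africa"]
F = np.array([
    [0.30, 0.25, 0.20, 0.18],   # haplogroup class 1
    [0.25, 0.30, 0.35, 0.22],   # haplogroup class 2
    [0.20, 0.20, 0.25, 0.30],   # haplogroup class 3
    [0.15, 0.15, 0.10, 0.20],   # haplogroup class 4
    [0.10, 0.10, 0.10, 0.10],   # remaining haplogroups
])
target = np.array([0.27, 0.28, 0.22, 0.14, 0.09])   # invented pooled sample profile

# Non-negative mixing proportions that sum to one and best reproduce the target profile.
n = len(regions)
fit = minimize(lambda a: np.sum((F @ a - target) ** 2),
               x0=np.full(n, 1.0 / n), method="SLSQP",
               bounds=[(0.0, 1.0)] * n,
               constraints={"type": "eq", "fun": lambda a: a.sum() - 1.0})
for name, proportion in zip(regions, fit.x):
    print(f"{name}: {proportion:.2f}")
```

A posterior sampling approach additionally yields credible intervals for each proportion, which is why the study reports coefficients with uncertainty (e.g. 0.477 ± 0.12) rather than point values alone.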
The results of the admixture analysis suggest the mtDNA haplogroup profile distribution of Jamaica more closely resembles that of aggregated populations from the modern day Gold Coast region despite an increasing influx of individuals from both the Bight of Biafra and West-central Africa during the final years of the trade. When taking what is known about the negative rate of natural population growth of slaves on Jamaica, these results add an additional layer of complexity to demographic history of Jamaica. Planters found it more economical to import new labour rather than invest in natural reproduction within their existing groups. Coupling low fecundity with the high mortality leads to the expectation of a fluid demographic shift through time to a haplogroup profile distribution more closely resembling those groups arriving later during the slave trade. Present results do not show this, hinting instead at non-random processes in the creation of modern Jamaican matrilineal demography.The admixture results may suggest a preference among Jamaican planters. Historic evidence suggests the Jamaican planting class held the Akan of the Gold Coast in very high regards , althougThe entire acclimatization process was understandably both mentally and physically stressful for newly arriving Africans. Between a quarter to a half of newly landed Africans died within the first three years on the island . The disThe slave society on Jamaica also operated in a very rigid social hierarchy; creole slaves had much greater life expectancy, fecundity, and upward social mobility than those born in Africa . The entThe development of modern Jamaican English may also provide insight into the demographic development of the island. The modern creolized English spoken on the island has been traced to relatively uneducated Northern British and Irish overseers and bookmakers and the early African slaves they interacted with. During the initial era of slavery on the island (1655-1700), slave acculturation was a process characterized by direct contact between newly arrived Africans and their European overseers. Though the Gold Coast contributed marginally to the slave trade prior to 1700, the Akan speaking groups from modern Ghana were thought to be the largest concentrated linguistic groups . These eIn summation, despite the historical evidence that an overwhelming majority of slaves were sent from the Bight of Biafra and West-central Africa near the end of the British slave trade, the mtDNA haplogroup profile of modern Jamaicans show a greater affinity with groups found in the present day Gold Coast region. Caution must be paid however to the scope of the analyses performed here. The Jamaican slave markets were the largest in the West Indies and sporadic accounts exist of slaves being purchase in Jamaica for plantations in other part of the New World; however, it is difficult to accurately trace the ancestry of the resold slaves. Additionally, after the abolition of slavery in 1834, the island is treated here as roughly a closed system with regards to the African continent. The trajectory of the mtDNA distribution is assumed to have stayed relatively consistent since emancipation; however, constraints imposed on the Jamaican population may have changed through time, influencing modern demography. 
The end of the slave trade in Jamaica brought about a change in economic climate, with a small albeit recognizable amount of Jamaicans emigrating to other parts of the world, as well as foreign migrant labours arriving from around the globe. Whether any these constraints have significantly affected the mtDNA distribution on the island is difficult to say.The study was approved by the local ethics committees of the University of West Indies in Mona, Jamaica. After providing informed consent, over 400 Jamaican volunteers were then asked to complete a simple questionnaire stating their birth place, their parents' birth places, and--if known--the birth places of both sets of grandparents. Individuals born outside the country or with any reported maternal relatives born outside the country were excluded from the study. Additionally, where more than one maternal relative was sampled, only one individual was used in further analysis.A database of 9,265 African first hyper variable segment (HVS-1) sequences was amassed from the literature in order to investigate any life history constraints present on the island during and after slavery. African ethnic groups were assigned locations based on present day ranges of the population imposed on the historic guidelines for the differing coasts. To account for transit of slaves from inland Africa to the coast, these regions were then segregated into the hinterland roughly perpendicularly from the coast, an assumption explored previously is used.After exclusion criteria, analyses were preformed on 400 Jamaican individuals from around the island. Buccal swabs were collected from all subjects and stored in cell lysis solution . Total DNA was then extracted using the Qiagen buccal cell spin protocol . The HVS-1 was then amplified using PCR. A reverse primer was used to generate all sequences (for details see ). A forwAdmixture coefficients based on haplogroup profile distributions were estimated using a Markov-Chain Monte Carlo posterior sampling method assuming a multinomial distribution for the mtDNA haplogroup profiles, a method best explained in detail elsewhere . AdditioIn order to investigate the internal diversity of the studied regions' mtDNA pools, the following diversity indices were calculated: number of different haplotypes (k), number of polymorphic sites (S), mean number of pairwise differences (\u03b8\u03c0) . DemogramtDNA: mitochondrial DNA.MLD conceived the study, performed the genetic laboratory analysis, data analysis, and drafted the manuscript. AS aided in the data analysis, interpretation and presentation. SPN assisted in the social science interpretation and validation of historical content. VAM helped in the design, conception and analysis. EYstAM aided in sample collection and help in the study design. YPP was the principal sample collector and aided in the study design. All authors reviewed and commented on the manuscript during its drafting and approved the final version.Table S1. HVS-I sequences of the Jamaican individuals analyzed in the present study.Click here for fileTable S2. Haplogroup frequencies of Jamaica, different African regions and different iterations considered and analyzed in the present study.Click here for fileTable S3. Haplotypes shared (relative frequencies) between Jamaica and the different African regions analyzed in the present study.Click here for fileTable S4. Number of shared haplotypes between the different regions considered in the present study. 
Numbers in the diagonal are the number of different haplotypes per region. In brackets is the proportion of the Jamaican haplotypes that are shared with each of the African regions considered.Click here for fileTable S5. Mitochondrial DNA HVS-I sequences included in this study [is study ,53-80.Click here for file"} +{"text": "Collateral damage to esophagus with ablative therapies for atrial fibrillation (AF) remains a major concern with percutaneous catheter based therapies. Endoscopically documented thermal injuries to the esophageal mucosa such as ulcerations or hemorrhages are seen in a significant percentage of patients undergoing AF ablation. ,2 The reThe introduction of balloon based high intensity focused ultrasound therapy (HIFU) was received with considerable enthusiasm, primarily due its ability to deliver therapy without tissue contact, and absence of requirement for 3 dimensional mapping systems. AdherencThe burden of evidence points to the mobile anatomy of the retro cardiac esophagus being responsible for the high prevalence of thermal injuries leading to LAEF which is fortunately a rare but nevertheless catastrophic complication of AF ablation. The proximity of esophagus to posterior left atrium which is accentuated in supine position contributes to mucosal thermal injury, particularly because of absence of serosa in this portion of the gut. The mobile and variable location of the esophagus with respect to each of the pulmonary veins and the source of energy is critical in determining its predilection to injury. The study by Naven et al in this issue of the journal elegantly shows the inverse relation of the luminal temperature of the esophagus to the distance from HIFU balloon. The remo"} +{"text": "In the last century peroxisomes were thought to have an endosymbiotic origin. Along with mitochondria and chloroplasts, peroxisomes primarily regulate their numbers through the growth and division of pre-existing organelles, and they house specific machinery for protein import. These features were considered unique to endosymbiotic organelles, prompting the idea that peroxisomes were key cellular elements that helped facilitate the evolution of multicellular organisms. The functional similarities to mitochondria within mammalian systems expanded these ideas, as both organelles scavenge peroxide and reactive oxygen species, both organelles oxidize fatty acids, and at least in higher eukaryotes, the biogenesis of both organelles is controlled by common nuclear transcription factors of the PPAR family. Over the last decade it has been demonstrated that the fission machinery of both organelles is also shared, and that both organelles act as critical signaling platforms for innate immunity and other pathways. Taken together it is clear that the mitochondria and peroxisomes are functionally coupled, regulating cellular metabolism and signaling through a number of common mechanisms. However, recent work has focused primarily on the role of the ER in the biogenesis of peroxisomes, potentially overshadowing the critical importance of the mitochondria as a functional partner. In this review, we explore the mechanisms of functional coupling of the peroxisomes to the mitochondria/ER networks, providing some new perspectives on the potential contribution of the mitochondria to peroxisomal biogenesis. 
Over the past decade we have learned a great deal about peroxisomal biogenesis and function, much of this using the genetic power of model organisms like yeast and Dnm1 (in yeast)] led to elongated peroxisomes and mitochondria lead to the oligomerization and \u201cpriming\u201d of the mitochondrial fusion GTPases Mfn1 and Mfn2 (Shutt et al., Perhaps the most surprising links between the mitochondria and signaling pathways came a number of years ago with the identification of the mitochondrial anti-viral signaling protein MAVS. MAVS was identified from 4 independent groups simultaneously as an essential protein for the viral-induced transcription of Nf-kB (Kawai et al., MAVS has also been seen to signal even earlier from the peroxisomes, again linking these two organelles as unique signaling platforms (Dixit et al., In this review we have highlighted a series of observations that illustrate the very tight functional, spatial, and regulatory links between the peroxisomes and the mitochondria. Evolutionary analysis coupled with the emergence of a vesicular transport route between the mitochondria and peroxisomes propels us to consider a role for mitochondria in peroxisomal biogenesis. Since ER-derived pre-peroxisomes are fusogenic (Boukh-Viner et al., The critical importance of peroxisomes in physiology is chronically underappreciated within the wider scientific community. Along with their established links to the ER, we hope that increasing awareness of the obligate coupling of the peroxisomes to the mitochondria will encourage researchers to more carefully consider the contribution of peroxisomal dysfunction to disease progression. For example, a great deal of attention is currently being paid to the role of mitochondria in neurodegeneration, cancer and immunology, yet the impact of mitochondrial dysfunction on peroxisomes is virtually unexplored in these disease pathologies. There is a great deal of work to be done before we will fully understand the role of peroxisomal dysfunction in human disease. A first step will require a better characterization of the molecular mechanisms that regulate the behavior and biochemistry of peroxisomes as a dynamic and tightly integrated organelle.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "The gastroesophageal reflux disease (GERD) is one of the main causes of dental erosion. The aim of this case presented is to describe the prosthetic rehabilitation of a patient with GERD after 4 years of followup. A 33-year-old male patient complained about tooth sensitivity. The lingual surface of the maxillary anterior teeth and the cusps of the upper and lower posterior teeth presented wear. It was suspected that the feeling of heartburn reported by the patient associated with the intake of sports supplements (isotonics) was causing gastroesophageal changes. The patient was referred to a gastroenterologist and was diagnosed with GERD. Dental treatment was performed with metal-free crowns and porcelain veneers after medical treatment of the disease. With the change in eating habits, the treatment of GERD and lithium disilicate ceramics provided excellent cosmetic results after 4 years and the patient reported satisfaction with the treatment. 
Nowadays, the incidence of dental erosion has become a clinical reality , 2 with Among the causes of dental erosion, the gastroesophageal reflux disease (GERD) can be highlighted , which bThe modalities of rehabilitator treatment vary according to the degree of tooth wear , 9. It iThe aim of this case presented is to describe the clinical manifestations of GERD, its diagnosis, and the medical and dental treatment of a patient with GERD after 4 years of followup.A 33-year-old male patient was admitted to Aracatuba Dental School-UNESP, complaining about tooth sensitivity to temperature variations caused by the ingestion of different foods and acidic substances.Wear on the lingual surface of the maxillary anterior teeth and the shortening of the cusps of upper and lower posterior teeth were observed. This change was just not observed in the lower anterior teeth. Slight reduction of occlusal vertical dimension of the patient was observed without the need of its restoration Figures , and 5. The patient was referred to a gastroenterologist and the presence of hiatal hernia and gastroesophageal reflux disease was diagnosed after performing some specific tests. The medical treatment was based on the use of 40\u2009mg of Omeprazol, twice a day for 30 days, and also change in eating habits such as the reduction of isotonic drinks and other foods that could exacerbate the symptoms. There was a decreased sensation of heartburn after medication treatment and only a regular medical followup was required.The dental treatment could be performed once the symptoms were controlled and the patient was treating the disease with medicines. Initial photographs were taken and impressions of both arches were performed with stock trays and alginate to obtain the study models. The models were positioned on semiadjustable articulator for drafting and proposing the treatment plan.Among the proposed treatment options, the patient chose to restore the dental anatomy with metal-free crowns and porcelain veneers. Initially, the affected teeth were prepared . SubsequThe zirconia cores were confectioned and lithium disilicate ceramic was used as veneering ceramic for posterior crowns and laminate veneers . The innThe porcelain veneers were cemented only with base paste of resin cement , whereas the crowns were cemented with both pastes of dual resin cement .The patient reported no sensitivity after dental treatment and a homemade topical application of sodium fluoride gel 2% was recommended every 15 days. After 4 years of followup the restorations showed no visible deterioration and the periodontal tissue was free of gingival inflammation Figures , and 15.Dental erosion has become an increasingly frequent and important clinical reality with multiple causes . Among tWe could confirm that the patient regularly practiced physical activities and made abusive use of acidic drinks as a supplement and their suspension was recommended by the gastroenterologist. People with healthy lifestyles may consume acidic drinks frequently in low salivation conditions or making excessive use of the same day by day, trying to keep body weight .Treatment for patients with GERD is to change eating habits and drug therapy to increase esophageal muscle activity or reduce the amount of stomach acid. 
In these cases, patients should avoid consumption of foods that irritate the gastric mucosa such as spicy and fatty foods, citrus fruits, coffee, tea, chocolate, alcohol, and soft drinks and get used to walk after meals .Some studies show that ceramics are degraded on their surface when exposed to acidic solutions, which could compromise the longevity of these materials \u201312. ThusIn order to achieve excellent cosmetic results associated with a good mechanical behavior, zirconia copings were chosen . LikewisMonitoring of patients and a multidisciplinary approach should be taken in rehabilitation treatments, since systemic changes might directly influence the final results of treatment. The modification of eating habits and treatment of GERD associated with the use of lithium disilicate ceramic offered excellent aesthetic result after 4 years and the patient reported satisfaction with the treatment."} +{"text": "The study of traveling waves of activity in neural tissue can provide deep insights into the functions of the brain during sensory processing or during abnormal states such as epilepsy, migraines or hallucinations. Computational models of these systems usually describe the tissue as a vast interconnected network of neurons comprised of large number of units with similar properties, for example integrate and fire neurons. It is also widely assumed that while the strength of the connections between neurons changes as a function of distance, this interaction does not depend on other local parameters.These assumptions allow for formulation of a set of integro-differential equations describing the propagation of the traveling wave fronts in a one-dimensional integrate-and-fire network of synaptically coupled neurons, allowing for investigation of the network dynamics during wave initiation and propagation. Equations for the transition between initiation and transition toward constant speed traveling waves have been derived for Gaussian connectivity and finiWe extended our previous models that exhibit constant-speed traveling waves to investigate how the presence of these inhomogeneities affects the relationship between the speed of the activity propagation and its acceleration. We determine that the estimates from homogenization theory do not accurately capture the conditions for propagation failure. More precisely, just prior to stopping, the activity propagates at a higher average speed than predicted from the theoretical results of the homogenization theory. We derive more precise estimates for the conditions when propagation failure occurs. Furthermore, our study points to directions where researchers can obtain additional tools for analyzing experimental data in order to infer details of synaptic connectivity."} +{"text": "Resonance describes the ability of neurons to respond selectively to inputs at preferred frequencies . When meLVA voltage-dependence was linearized and the transfer impedance of in model consisting of a soma coupled to a cylindrical cable was derived analytically for each location along the dendrite. Changing the total density of KLVA, gave rise to different resonant frequencies along the dendrite. 
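The resonance behaviour summarised above can be illustrated with a few lines of code. The following is a minimal sketch, assuming a standard quasi-active (linearized) membrane in which a slow, restorative K_LVA-like conductance contributes an inductive branch to the admittance; it shows how increasing that conductance shifts the resonant frequency and changes the Q-factor (peak impedance relative to the near-DC impedance). All parameter values and names are illustrative assumptions, not values taken from the study.

import numpy as np

def impedance(f_hz, c_m=200e-12, g_leak=10e-9, g_k=20e-9, tau_k=50e-3):
    """|Z(f)| of a linearized membrane patch with one slow restorative conductance.

    c_m    : membrane capacitance [F]
    g_leak : leak conductance [S]
    g_k    : steady-state conductance of the slow K_LVA-like current [S]
    tau_k  : activation time constant of that current [s]

    The slow current contributes g_k / (1 + i*w*tau_k) to the admittance, which
    behaves like an inductive branch and can create a resonant impedance peak.
    """
    w = 2.0 * np.pi * f_hz
    admittance = g_leak + 1j * w * c_m + g_k / (1.0 + 1j * w * tau_k)
    return np.abs(1.0 / admittance)

freqs = np.linspace(0.1, 40.0, 4000)                       # Hz
for g_k in (5e-9, 20e-9, 60e-9):                           # increasing K_LVA density
    z = impedance(freqs, g_k=g_k)
    f_res = freqs[np.argmax(z)]                            # resonant frequency = impedance peak
    q = z.max() / impedance(np.array([0.1]), g_k=g_k)[0]   # resonance strength (Q)
    print(f"g_k = {g_k * 1e9:4.0f} nS -> f_res = {f_res:5.1f} Hz, Q = {q:.2f}")

Sweeping g_k in this way reproduces the qualitative point made above: the local density of the low-threshold potassium conductance sets the local resonant frequency, while the Q-factor limits how sharp that frequency preference can be.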
This enables us to identify the membrane features that influence the range of resonances along the dendrites to characterize the trade-off between the range of frequencies and the Q-factor .First, an analytical approach was used whereby the KSecond, we used a numerical approach to optimize dendritic features to create neuron models with a large range of resonant frequencies along their dendrites. We thus confirmed the analytical results and addressed the more complicated dendritic structures including branching. We found that dendritic branches (bifrucations) may increase the range of resonant frequencies in dendrites and, at least partially, may overcome the strong trade-off between resonant strength and the possible range of resonance frequencies found it unbranched structures.We argue that the computational complexity of neurons is increased significantly by dendrites endowed with a whole range of resonant frequencies and discuss the advantage of having a bank of differential dendritic resonances that act as dynamic filters which, following plastic processes enable neurons to resonate in a particular desirable frequencies."} +{"text": "The functional significance of correlations between action potentials of neurons is still a matter of vivid debate. In particular, it is presently unclear how much synchrony is caused by afferent synchronized events and how much is intrinsic due to the connectivity structure of cortex. The available analytical approaches based on the diffusion approximation do not allow to model spike synchrony, preventing a thorough analysis. Here we theoretically investigate to what extent common synaptic afferents and synchronized inputs each contribute to correlated spiking on a fine temporal scale between pairs of neurons. We employ direct simulation and extend earlier analytical methods based on the diffusion approximation to pulse-coupling, allowing us to introduce precisely timed correlations in the spiking activity of the synaptic afferents. We investigate the transmission of correlated synaptic input currents by pairs of integrate-and-fire model neurons, so that the same input covariance can be realized by common inputs or by spiking synchrony. We identify two distinct regimes: In the limit of low correlation linear perturbation theory accurately determines the correlation transmission coefficient, which is typically smaller than unity, but increases sensitively even for weakly synchronous inputs. In the limit of high input correlation, in the presence of synchrony, a qualitatively new picture arises. As the non-linear neuronal response becomes dominant, the output correlation becomes higher than the total correlation in the input. This transmission coefficient larger unity is a direct consequence of non-linear neural processing in the presence of noise, elucidating how synchrony-coded signals benefit from these generic properties present in cortical networks. Whether spike timing conveys information in cortical networks or whether the firing rate alone is sufficient is a matter of controversial debate, touching the fundamental question of how the brain processes, stores, and conveys information. If the firing rate alone is the decisive signal used in the brain, correlations between action potentials are just an epiphenomenon of cortical connectivity, where pairs of neurons share a considerable fraction of common afferents. 
Due to membrane leakage, small synaptic amplitudes and the non-linear threshold, nerve cells exhibit lossy transmission of correlation originating from shared synaptic inputs. However, the membrane potential of cortical neurons often displays non-Gaussian fluctuations, caused by synchronized synaptic inputs. Moreover, synchronously active neurons have been found to reflect behavior in primates. In this work we therefore contrast the transmission of correlation due to shared afferents and due to synchronously arriving synaptic impulses for leaky neuron models. We not only find that neurons are highly sensitive to synchronous afferents, but that they can suppress noise on signals transmitted by synchrony, a computational advantage over rate signals. Simultaneously recording the activity of multiple neurons provides a unique tool to observe the activity in the brain. The immediately arising question of the meaning of the observed correlated activity between different cells In the other view, on the contrary, theoretical considerations The role of correlations entails the question whether cortical neurons operate as integrators or as coincidence detectors The pivotal role of correlations distinguishing the two opposing views and the appearance of synchrony at task-specific times One particular measure for assessing the transmission of correlation by a pair of neurons is the transmission coefficient, i.e. the ratio of output to input correlation. When studying spiking neuron models, the synaptic input is typically modeled as Gaussian white noise, e.g. by applying the diffusion approximation to the leaky integrate-and-fire model Understanding the influence of synchrony among the inputs on the correlation transmission requires to extend the above mentioned methods, as Gaussian fluctuating input does not allow to represent individual synaptic events, not to mention synchrony. Therefore, in this work we introduce an input model that extends the commonly investigated Gaussian white noise model. We employ the multiple interaction process (MIP) \u201cUnderstanding and Isolating the Effect of Synchrony\u201d we study the impact of input synchrony on the firing properties of a pair of leaky integrate-and-fire neurons with current based synapses. Isolating and controlling this impact allows us to separately study the effect of input synchrony on the one hand and common input on the other hand on the correlation transmission. In section \u201cCorrelation Transmission in the Low Correlation Limit\u201d and \u201cCorrelation Transmission in the High Correlation Limit\u201d we present a quantitative explanation of the mechanisms involved in correlation transmission, in the limit of low and high correlation, respectively, and show how the transmission coefficient can exceed unity in the latter case. In section In section The neuronal dynamics considered in this work follows the leaky integrate-and-fire model, whose membrane potential We investigate the correlation transmission of a pair of neurons considering the following input scenario. Each neuron receives input from ng rate . Fig. 1CLet us now consider the case of The output firing rates and output spike synchrony shown in These two observations \u2013 the increase of input correlation and output firing rate induced by input synchrony \u2013 foil our objective to understand the sole impact of synchronous input events on the correlation transmission of neurons. 
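The input scenario described above can be made concrete with a small simulation. The sketch below is an illustrative toy version rather than the authors' implementation: afferent spike trains are generated with a MIP-like scheme (a Poisson mother train whose events are copied into each afferent with a fixed probability, which sets the pairwise input correlation), two leaky integrate-and-fire neurons are driven by disjoint sets of these afferents so that their input correlation comes purely from spiking synchrony, and the output spike-count correlation over trials is compared with the input correlation. All parameter values, rates and names are assumptions chosen for demonstration only.

import numpy as np

rng = np.random.default_rng(0)

def mip_population(n_trains, rate_hz, copy_prob, n_steps, dt):
    """n_trains correlated spike trains from a multiple interaction process (MIP).

    A Poisson mother train of rate rate_hz/copy_prob is thinned independently into
    each child train with probability copy_prob, so every child has the target rate
    and every pair of children has a count correlation of roughly copy_prob.
    Returns a boolean array of shape (n_trains, n_steps).
    """
    mother = rng.random(n_steps) < (rate_hz / copy_prob) * dt
    return mother[None, :] & (rng.random((n_trains, n_steps)) < copy_prob)

def lif_spike_count(drive, dt, tau_m=0.02, v_thresh=1.0):
    """Spike count of a leaky integrate-and-fire neuron; drive is the summed
    synaptic input (voltage jump) delivered in each time step."""
    v, n_spikes = 0.0, 0
    for x in drive:
        v += dt * (-v / tau_m) + x
        if v >= v_thresh:
            v, n_spikes = 0.0, n_spikes + 1
    return n_spikes

# illustrative parameters only
dt, t_sim = 1e-4, 2.0
n_steps = int(t_sim / dt)
n_aff, weight = 50, 0.04        # afferents per neuron, synaptic weight
rate_hz, c_in = 20.0, 0.3       # afferent rate [Hz] and MIP copy probability

n_trials = 200
in_counts, out_counts = np.zeros((2, n_trials)), np.zeros((2, n_trials))
for k in range(n_trials):
    population = mip_population(2 * n_aff, rate_hz, c_in, n_steps, dt)
    aff_a, aff_b = population[:n_aff], population[n_aff:]   # disjoint afferent sets
    in_counts[:, k] = aff_a.sum(), aff_b.sum()
    out_counts[0, k] = lif_spike_count(weight * aff_a.sum(axis=0), dt)
    out_counts[1, k] = lif_spike_count(weight * aff_b.sum(axis=0), dt)

rho_in = np.corrcoef(in_counts)[0, 1]
rho_out = np.corrcoef(out_counts)[0, 1]
print(f"input count correlation  ~ {rho_in:.2f}")
print(f"output count correlation ~ {rho_out:.2f} (transmission ratio {rho_out / rho_in:.2f})")

Raising the copy probability (input synchrony) or moving the working point closer to threshold pushes the transmission ratio up, which is the regime in which the study reports ratios above unity; with weak synchrony the same script stays in the lossy, below-unity regime.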
In the following we will therefore first provide a quantitative description of the effect of finite sized presynaptic events on the membrane potential dynamics and subsequently describe a way to isolate and control this effect.The synchronous arrival of Again, In order to isolate and control the effect of the synchrony parameter In scenario 1 suggests, that the decisive properties of the marginal input statistics are characterized by the first . In order to obtain a correlation coefficient, we need to normalize the integral of (9) by the integral of the auto-covariance of the neurons' spike trains. This integral equals Before deriving an expression for the correlation transmission by a pair of neurons, we first need the firing rate deflection of a neuron In order to understand how the neurons are able to achieve a correlation coefficient larger than one, we need to take a closer look at the neural dynamics in the high correlation regime. We refer to the strong pulses caused by synchronous firing of numerous afferents as MIP events. Let us now recapitulate these last thoughts in terms of a pair of neurons: In the regime of synchronized high input correlation . Due to the non-linearity of the neurons we expect the effect of synchronous input events on their firing to depend on the choice of this working point. We therefore performed simulations and computed (2) using four different values for the mean membrane potential . This waconstant , inset. A further approximation of (15) and (16) confirms the intuitive expectation that the mean size of a synchronous event compared to the distance of the membrane potential to the threshold plays an important role for the output synchrony: if synchrony is sufficiently high, say Measuring the integral of the output correlation over a window of A qualitatively new behavior is observed in the intermediate range of input correlation So far, for windows a correlIn this work we investigate the correlation transmission by a neuron pair, using two different types of input spike correlations. One is caused solely by shared input \u2013 typically modeled as Gaussian white noise in previous studies To model correlated spiking activity among the excitatory afferents in the input to a pair of neurons we employ the Multiple Interaction Process (MIP) Given a fixed input correlation, the correlation transmission increases with Hitherto existing studies argue that neurons either loose correlation when they are in the fluctuation driven regime or at most are able to preserve the input correlation in the mean driven regime We presented a quantitative description of the increased correlation transmission by synchronous input events for the leaky integrate-and-fire model. Our analytical results explain earlier observations from a simulation study modeling synchrony by co-activation of a fixed fraction of the excitatory afferents As for our spiking model, iven by . At the iven by .The situation illustrated in Several aspects of this study need to be taken into account when relating the results to other studies and to biological systems. The multiple interaction process as a model for correlated neural activity might seem unrealistic at first sight. However, a similar correlation structure can easily be obtained from the activity of a population of The correlation transmission coefficient can only exceed unity if the firing of the neurons is predominantly driven by the synchronously arriving volleys and disjoint input does not contribute to firing. 
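The normalization described above, dividing the integrated cross-covariance of the output spike trains by their integrated auto-covariances, corresponds to the familiar spike-count correlation coefficient. Written out in generic notation (a hedged rendering in standard symbols, not the paper's own numbered equations):

\rho(T) \;=\;
\frac{\operatorname{Cov}\!\left[n_{1}(T),\, n_{2}(T)\right]}
     {\sqrt{\operatorname{Var}\!\left[n_{1}(T)\right]\,\operatorname{Var}\!\left[n_{2}(T)\right]}}
\;\approx\;
\frac{\int_{-T}^{T} c_{12}(t)\,\mathrm{d}t}
     {\sqrt{\int_{-T}^{T} a_{1}(t)\,\mathrm{d}t \;\int_{-T}^{T} a_{2}(t)\,\mathrm{d}t}},

where n_i(T) are the output spike counts in a window of length T, c_{12} is the cross-covariance density of the two output trains and a_i are their auto-covariance densities; the approximation becomes exact when the counting window is long compared with the width of these covariance functions.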
The threshold then acts as a noise gate, small fluctuations caused by disjoint input do not penetrate to the output side. In the mean driven regime, i.e. when The boost of output correlation by synchronous synaptic impulses relies on fast positive transients of the membrane potential and strong departures from the stationary state: An incoming packet of synaptic impulses brings the membrane potential over the threshold within short time. Qualitatively, we therefore expect similar results for short, but non-zero rise times of the synaptic currents. For long synaptic time constants compared to the neuronal dynamics, however, the instantaneous firing intensity follows the modulation of the synaptic current adiabatically The choice of the correlation measure is of importance when analyzing spike correlations. It has been pointed out recently that the time scale It has been proposed that the coordinated firing of cell assemblies provides a means for the binding of coherent stimulus features Though in the limit of weak input correlation the correlation in the output is bounded by that in the input, in agreement with previous reports We here derive an approximation for the integral of the impulse response of the firing rate with respect to a perturbing impulse in the input. A similar derivation has been presented in The first four moments of the binomial distribution"} +{"text": "Comparative genomics has put additional demands on the assessment of similarity between sequences and their clustering as means for classification. However, defining the optimal number of clusters, cluster density and boundaries for sets of potentially related sequences of genes with variable degrees of polymorphism remains a significant challenge. The aim of this study was to develop a method that would identify the cluster centroids and the optimal number of clusters for a given sensitivity level and could work equally well for the different sequence datasets.Nocardia species) and highly variable (VP1 genomic region of Enterovirus 71) sequences and outperformed existing unsupervised machine learning clustering methods and dimensionality reduction methods. This method does not require prior knowledge of the number of clusters or the distance between clusters, handles clusters of different sizes and shapes, and scales linearly with the dataset.A novel method that combines the linear mapping hash function and multiple sequence alignment (MSA) was developed. This method takes advantage of the already sorted by similarity sequences from the MSA output, and identifies the optimal number of clusters, clusters cut-offs, and clusters centroids that can represent reference gene vouchers for the different species. The linear mapping hash function can map an already ordered by similarity distance matrix to indices to reveal gaps in the values around which the optimal cut-offs of the different clusters can be identified. The method was evaluated using sets of closely related (16S rRNA gene sequences of The combination of MSA with the linear mapping hash function is a computationally efficient way of gene sequence clustering and can be a valuable tool for the assessment of similarity, clustering of different microbial genomes, identifying reference sequences, and for the study of evolution of bacteria and viruses. 
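The gap-based clustering idea summarised above lends itself to a compact illustration. The sketch below is a rough Python rendering of the concept, not the authors' implementation: scores that are already ordered by similarity (as delivered by an MSA-derived distance matrix) are mapped linearly onto integer hash codes, clusters are cut wherever consecutive codes jump by more than a chosen sensitivity, and the member closest to each cluster mean is reported as a centroid (candidate reference voucher). The function names, the sensitivity value and the toy scores are all assumptions.

import numpy as np

def linear_map_codes(scores, n_bins=100):
    """Map similarity scores (already ordered by similarity) onto integer codes 0..n_bins-1."""
    s = np.asarray(scores, dtype=float)
    return np.round((s - s.min()) / (s.max() - s.min()) * (n_bins - 1)).astype(int)

def cluster_by_gaps(scores, sensitivity=5, n_bins=100):
    """Cut the ordered score sequence wherever consecutive codes differ by more than
    `sensitivity`; return a list of index arrays, one per cluster."""
    codes = linear_map_codes(scores, n_bins)
    cut_after = np.where(np.abs(np.diff(codes)) > sensitivity)[0]
    bounds = np.concatenate(([0], cut_after + 1, [len(codes)]))
    return [np.arange(bounds[i], bounds[i + 1]) for i in range(len(bounds) - 1)]

def centroids(scores, clusters):
    """For each cluster, pick the member whose score is closest to the cluster mean."""
    picks = []
    for idx in clusters:
        vals = np.asarray(scores)[idx]
        picks.append(int(idx[np.argmin(np.abs(vals - vals.mean()))]))
    return picks

# toy ordered similarity scores with two obvious gaps (three clusters expected)
scores = [0.99, 0.98, 0.97, 0.96, 0.80, 0.79, 0.78, 0.55, 0.54, 0.52]
clusters = cluster_by_gaps(scores, sensitivity=5)
print("clusters :", [list(c) for c in clusters])
print("centroids:", centroids(scores, clusters))

Because the already-ordered scores are processed in a single pass, the clustering step scales linearly with the number of sequences, which matches the scaling behaviour claimed for the method above.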
The distance matrix can then be clustered using the Unweighted Pair Group Method with Arithmetic Mean (UPGMA) \u2190 min(distSum)centroid (i) \u2190 sCluster (i)+idx-1elsecentroid(i) \u2190 sCluster (i)endendk-mers) that orders the sequences by similarity as , and clusters them by UPGMA as shown in the tree. The remaining steps proposed by this method are the diagonal extraction, the conversion of these scores into hash codes, and the clustering of these codes by identifying the natural gaps between the codes based on the sensitivity parameters chosen.The main steps of this process are summarised in Figure \u00a9 \"princomp\" function. The highest scoring first (x-axis) and second (y-axis) coordinates were used for the plotting of the sequences on a two dimensional plane, while the symbols representing the different clusters reflected the clustering results produced by the linear mapping method.To visualize the spatial distribution of the points on two-dimensional plots, Principal Component Analysis (PCA) was applied on the distance matrix using the MatlabThe authors declare that they have no competing interests.MH conceived and designed the experiments, carried out experiments and drafted the manuscript. FK participated in the design of the study, in sequence alignment and data analysis. SCAC participated in the design of the study, provided sequencing data and drafted the manuscript. FZ provided sequencing data and participated in the data analysis. DED participated in the data analysis and drafted the manuscript. JP contributed to the development of methods and data analysis. VS conceived and designed the experiments, participated in the data analysis and drafted the manuscript. All authors have read and approved the final manuscript."} +{"text": "This paper reports on an investigation of mass transport of blood cells at micro-scale stenosis where local strain-rate micro-gradients trigger platelet aggregation. Using a microfluidic flow focusing platform we investigate the blood flow streams that principally contribute to platelet aggregation under shear micro-gradient conditions. We demonstrate that relatively thin surface streams located at the channel wall are the primary contributor of platelets to the developing aggregate under shear gradient conditions. Furthermore we delineate a role for red blood cell hydrodynamic lift forces in driving enhanced advection of platelets to the stenosis wall and surface of developing aggregates. We show that this novel microfluidic platform can be effectively used to study the role of mass transport phenomena driving platelet recruitment and aggregate formation and believe that this approach will lead to a greater understanding of the mechanisms underlying shear-gradient dependent discoid platelet aggregation in the context of cardiovascular diseases such as acute coronary syndromes and ischemic stroke. Pathological thrombus formation underlies a number of major health problems with significant economic impact. Usually thrombus formation occurs where blood vessels become narrowed (stenosis) as a result of atherosclerosis. This narrowing produces changes in blood-flow parameters which in turn trigger cell adhesion and aggregation. The way in which the geometry of the blood vessel changes blood-flow parameters and these in turn affect blood cell responses has been the focus for decades. 
Clinically, however it has been difficult to investigate, under controlled conditions, the role of mechanical parameters (geometry and flow) on thrombosis and platelet aggregation. The challenge arises from the fact that the geometrical and flow parameters, are difficult to isolate in-vivo. Recent clinical studies still continue to neglect the geometry of the stenosis treams / and, twoeams //. Comparieams // presentsIn order to simplify our proof-of-concept studies and to isolate the mechanical effects of blood flow from biochemically driven platelet activation, all experiments were performed in the presence of pharmacological inhibitors of the canonical platelet amplification loops as was described in the methods section. For the first experiment with blood, a two flow geometry as presented in his case . This daThe experiments presented in the previous section were useful as a proof of concept to visualize different blood streams, to begin gaining insight into which components of the blood flow contribute to the aggregate as also to observe cell interaction. To further resolve the platelet streams contributing to aggregation and explore in greater detail the cellular interactions occurring within defined stenosis, we conducted a series of experiments to generate asymmetric streams in a ratio of 75\u223625 and then 85\u223615. In order to investigate the role of the hematocrit in the upper streams and the effect of red cell interactions at the stenosis, model experiments on the microfluidics platform were conducted where the upper layer was changed as a function of the amount of red cells suspended in solution. The hematocrit in the upper stream of 80% . Our miture see , demonsteamlines ). This aeamlines 288, vorThis paper presents model experiments on a microfluidic platform incorporating hydrodynamic flow focusing to examine blood cell transport investigating the mechanical flow processes governing pathological platelet aggregation at stenosis (micro-contractions). These initial proof-of-concept experiments suggests that aggregate growth in acute stenosis can generate enhanced advective transport zones within the local flow effectively increasing platelet mass transport to the wall, further accelerating aggregate growth. This feedback effect of platelet aggregation on advective transport may be a function of the change in angle of deceleration of the stenosis or the already formed aggregate. Future studies will focus on further delineating this phenomenon and in particular will focus on the effect of initial streams acceleration as a significant parameter determining platelet aggregation dynamics. We have demonstrated the importance of the effects of hemodynamic forces present in a stenosis driving platelet aggregation. The experiments presented in this manuscript reveal a significant role of role of lift forces at stenosis. These forces appear to be important in modulating the delivery rate of platelets to the vessel wall. Delivery rate is mediated by the action of red cells which because of their relative size experience a higher imbalance of lift forces in comparison to platelets. We hypothesize that this imbalance of forces, is a generalised phenomena and consequence of stenosis and that the experimental approach outlined in this study will lead to a greater understanding of the mechanisms underlying shear-gradient dependent discoid platelet aggregation. 
These new experimental insights suggest a complex, time-varying role of enhanced advection through particulate flow in controlling the extent and rate of platelet aggregation. Although it may be speculative to apply the insight gained here to more inertially dominated flows, a more detailed understanding of these aggregation phenomena should greatly extend our understanding of the mechanical flow processes governing pathological thrombus formation in the context of coronary artery disease and stroke, with particular relevance to inertially dominated stent-related thrombosis. In order to model the flow focusing behavior of the micro-fluidic devices, the mass transport equation for two species was solved numerically. For the case of this paper, no chemical reaction was considered and the only phenomena present for the chemical species were the transport of mass and momentum. The governing equation of the mass transport for each species can be expressed in the general convection-diffusion form \partial c_i/\partial t + \mathbf{u} \cdot \nabla c_i = D_i \nabla^2 c_i, where c_i is the concentration of species i, \mathbf{u} the velocity field and D_i its diffusion coefficient. Text S1: Vortex formation as a function of platelet aggregate size (DOCX). Video S1: Supplementary Material (AVI)."}
+{"text": "Sir, In the past few years, a new optical intubation device, i.e. the laryngeal mask airway (LMA) CTrach, has emerged as a useful alternative for the facilitation of tracheal intubation in difficult airway situations provided the interdental distance is \u226525 mm. The LMA CTrach is inserted in neutral head position using a one-handed rotational technique.
We therefore suggest that this technique could be employed in cases of shearing of epidural catheter during the process of subcutaneous tunnelling."} +{"text": "The discovery of HIV in 1983 originated from a collective adventure, which mobilized clinicians, researchers and patients altogether. This collaboration was crucial to rapidly expand the knowledge of the virus and develop the first diagnostic tests and antiretroviral therapy (ART).More than 25 years after the discovery of the etiological agent responsible for AIDS, research priorities still remain care, treatment and prevention with the major objective of developing a preventive vaccine.Today, we have gained significant insight into the virus pathogenesis. The evolution and progression of the disease caused by HIV is closely linked to a number of determinants of the virus itself and the host. We also know today that, very early after exposure to the virus, a massive depletion of CCR5+ CD4+ T memory cells associated with microbial translocation occurs in the gastrointestinal tract of HIV infected patients. HIV infection is clearly inducing an inflammation and a generalized and persistent T cell activation, which may play a role in the persistence of HIV infection, resulting from the establishment of permanent reservoirs into host cells and in different host compartments. The reduction of the size of these reservoirs is representing one of the main challenges for the development of future therapeutic strategies.Among other challenges in therapy, we also need to better understand the mechanisms leading to the severe complications observed in some patients on long-term ART. Again, whether the inflammatory response is contributing or not to these complications remains an opened question.The early acute phase of HIV infection appears therefore to be crucial in determining disease progression. Given the importance of the innate immune responses in this very early phase following infection and in driving adaptive immunity, further research on innate immunity in HIV infection are certainly among priorities for elaborating future therapeutic and vaccine strategies.New technologies are today available to address all these scientific challenges. But they will only be overcome with a multidisciplinary and translational research for the global benefit of humanity."} +{"text": "Journal of Adolescent Health is devoted to understanding the health and well-being of adolescents by taking the long view of this period of the life cycle. The articles herein analyze data collected from conception through age 15 from 4,500 individuals born in the city of Pelotas, Brazil, in 1993. The analyses provide us with not just a snapshot of adolescents' current health status, but rather an entire movie depicting the life-course trajectories and the emergence of health outcomes of this population during adolescence.This supplement to the The first four articles of the supplement focus on early-life predictors of later health and behavior The next three articles focus on the health effects of behavioral changes during adolescence The next three articles further explore environmental factors that influence obesity and perceptions of obesity from the perspective of parents and adolescents The supplement closes with a study showing an alarmingly prevalence of self-medication (29%) among 15-year-old adolescents, with girls being more likely to self-medicate than boys \u2218First, several factors arise early in development. 
Further, the behaviors and social context of the family affects not just the young child but also the adolescent \u2218Second, behaviors tend to track from childhood to adolescence, suggesting once again that interventions need to start earlier, preferably before individuals get on the wrong track \u2218Third, some exposures might have both benefits (lung function) and harmful effects (school failure), depending on the dose \u2218Fourth, qualitative studies provide us with an opportunity to hear the nuances that come from adolescents and families that are difficult to extract from quantitative studies Although these studies collectively provide valuable insights into child development, it is important to note that the Pelotas cohort only provides us with information through mid-adolescence (15 years). Whereas it is clear that many of the behaviors and health outcomes examined in these studies have their roots in early childhood, the brain continues to develop well into young adulthood This collection of articles examining the Pelotas cohort provides us with a rare opportunity to understand clearly the critical role of a life-course perspective in promoting the health and well-being of adolescents. We look forward to seeing these findings replicated in other settings, but the results presented here are robust and have relevance to populations throughout the world. Consider the collective critical findings:"} +{"text": "Antenatal care (ANC) presents important opportunities to reach women with crucial interventions. Studies on determinants of ANC use often focus on household and individual factors; few investigate the role of health service factors, partly due to lack of appropriate data. We assessed how distance to facilities and level of service provision at ANC facilities in Zambia influenced the number and timing of ANC visits and the quality of care received.Using the 2005 Zambian national Health Facility Census, we classified ANC facilities according to the level of service provision. In a geographic information system, we linked the facility information to household data from the 2007 DHS to calculate straight-line distances. We performed multivariable multilevel logistic regression on 2405 rural births to investigate the influence of distance to care and of level of provision on three aspects of ANC use: attendance of at least four visits, visit in first trimester and receipt of quality ANC .We found no effect of distance on timing of ANC or number of visits, and better level of provision at the closest facility was not associated with either earlier ANC attendance or higher number of visits. However, there was a strong influence of both distance to a facility, and level of provision at the closest ANC facility on the quality of ANC received; for each 10 km increase in distance, the odds of women receiving good quality ANC decreased by a quarter, while each increase in the level of provision category of the closest facility was associated with a 54% increase in the odds of receiving good quality ANC.To improve ANC quality received by mothers, efforts should focus on improving the level of services provided at ANC facilities and their accessibility. Even though significant progress has been made in reducing maternal deaths globally since the 1990s Childbirth is the time when most deaths occur, and thus increased attention has been paid to intrapartum care in recent years. 
Antenatal care (ANC), while not sufficient to reduce maternal mortality on its own, still presents an important opportunity to reach women with a number of interventions crucial for their health and that of their babies Zambia has adopted the focused ANC strategy recommended by WHO. The proportion of pregnant women in Zambia who attend ANC at least once with a skilled provider stands impressively high at 94%, while 74% attend at least the recommended four antenatal visits Studies of the determinants of ANC use focus mostly on individual and household factors. According to a recent systematic review, only one study One explanation for the scarcity of studies considering the role of the health service environment in determining service use is the lack of information on the level of service provision at health facilities in existing population survey datasets We used this approach of linking user and provider datasets to quantify the influence of distance and level of service provision at facilities on ANC use and on quality of ANC received in rural Zambia, taking other important individual-, household-, and community-level determinants into account. The specific objectives of this study were to assess (1) how distance to an ANC facility influences number and timing of women's ANC visits; (2) how level of service provision at the ANC facility influences number and timing of women's ANC visits; (3) how distance to care and level of service provision at the closest ANC facility influence the quality of care received by women.The 2007 Zambia Demographic and Health Survey (DHS) was a nationally representative household cluster survey that interviewed 7,146 women aged 15\u201349 years The 2005 Zambia Health Facility Census (HFC) The exposure variables of interest are distance to the closest facility providing ANC and level of service provision at the closest facility or within 10 km thereof. Straight-line distances in meters from each DHS cluster to the closest ANC facility of a given level of provision were calculated in the GIS platform ArcView 3.2 (ESRI) using the user-written extension \u201cNearest Neighbor 3.6\u201d.For our classification of health facilities according to their level of service provision, we were guided by Donabedian's framework on quality of care assessment Based on the recommended interventions for pregnancy care We explored the effect of distance and level of provision on three binary outcome variables obtained from the DHS: (1) attendance of at least the four recommended ANC visits; (2) ANC visit in the first trimester and (3) receipt of \u201cgood quality ANC\u201d. Having received good quality ANC was defined as attending at least four ANC visits with a skilled health worker and receiving more than eight of the eleven following antenatal interventions We performed frequency tabulations to describe outcome and exposure variables in the rural DHS subsample and compared the rural with the urban subsample. Crude measures of association of distance and level of provision with the three binary outcome variables were computed, using two-level random effects logistic regression models to account for clustering at the village level. Similarly, crude measures of association for all important potential confounding variables with the outcomes were computed. 
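As a concrete illustration of the pipeline described above, the sketch below computes straight-line (great-circle) distances from cluster coordinates to the nearest ANC facility and fits a logistic model for receipt of good-quality ANC. It is a hedged stand-in only: the data are simulated, the column names are hypothetical, and cluster-robust standard errors are used as a simple surrogate for the two-level random-effects models actually reported.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between points given in decimal degrees."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2.0) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2.0) ** 2)
    return 2.0 * 6371.0 * np.arcsin(np.sqrt(a))

# toy stand-ins for the facility census and the DHS clusters
facilities = pd.DataFrame({"lat": rng.uniform(-17, -9, 60),
                           "lon": rng.uniform(23, 33, 60),
                           "level": rng.integers(1, 5, 60)})
clusters = pd.DataFrame({"cluster_id": np.arange(120),
                         "lat": rng.uniform(-17, -9, 120),
                         "lon": rng.uniform(23, 33, 120)})

# distance from each cluster to its closest ANC facility, expressed per 10 km
nearest = [haversine_km(r.lat, r.lon, facilities["lat"].values, facilities["lon"].values).min()
           for r in clusters.itertuples()]
clusters["dist10"] = np.array(nearest) / 10.0

# one row per rural birth, linked to its cluster; outcome simulated for the example
births = clusters.loc[rng.integers(0, 120, 2405)].reset_index(drop=True)
births["level"] = rng.integers(1, 5, len(births))
logit_p = -1.0 - 0.3 * births["dist10"] + 0.4 * births["level"]
births["good_anc"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

# logistic regression with cluster-robust standard errors (surrogate for the
# two-level random-effects specification described above)
fit = smf.logit("good_anc ~ dist10 + level", data=births).fit(
    disp=False, cov_type="cluster", cov_kwds={"groups": births["cluster_id"]})
print(np.exp(fit.params))   # odds ratios per 10 km of distance and per provision category

An odds ratio of about 0.76 on the dist10 term would correspond to the "decreased by a quarter for each 10 km" figure reported in the abstract above, and the exponentiated level coefficient to the per-category increase; with real DHS and facility-census data the simulated coordinates and outcomes would simply be replaced by the observed ones.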
Variables that showed an association with the outcome and were not considered to be on the causal pathway were added to the models containing both distance to the closest ANC facility and the highest ANC quality available at that distance or 10 km thereof, thus evaluating both distance and level of provision simultaneously and adjusting for each other. We used a forward-fitting approach, including mother's education and household wealth as a priori confounders and adding the other variables in the order of the strength of their separate confounding effects. Variables that caused at least a 10% change in the logOR of distance or quality were considered to be confounders and kept in the model. Three models were built, one for each of the three binary outcome variables. All statistical analysis was done using Stata version 11.All urban mothers had access to an ANC facility within 15 km and the majority (57%) had access to an ANC facility offering an optimum level of service within this distance. In contrast, 88% of rural mothers lived within 15 km of an ANC facility and only 9% had access to a facility with an optimum level of provision within this distance ; indicatThe cumulative distribution of distances to facilities offering various levels of ANC for rural mothers in Zambia is shown in While 1461 of 2405 rural mothers (61%) attended the recommended four or more visits, only 505 (21%) also received good quality ANC; i.e. attended four or more visits with a skilled health professional and received eight or more important antenatal interventions. Only 414 rural mothers (17%) attended ANC in the first trimester of pregnancy.Distance to care did not seem to significantly influence whether a mother had her first antenatal visit in the first trimester or later in either the crude or the adjusted models. By contrast, in the crude model of the effect of level of provision on ANC use in the first trimester, better level of provision at the nearest facility was associated with 26% decreased odds of attending ANC in the first trimester. This surprising association was diminished by adjusting for ethnic group and for women's media use in the cluster and there was only weak evidence for a 23% decrease in odds in the multivariable model. In the crude (bivariable) model, we found that the odds of attending the recommended four or more ANC visits reduced by about 20% for each 10 km increase in distance. Adjusting for the mother's individual fertility attitudes, her household wealth, and women's health-care seeking autonomy in the cluster and men's opinion on women's autonomy in the cluster weakened this effect, and there was no evidence that distance decreased the odds of having four or more ANC visits in the multivariable model. There was no evidence that the level of service provision at the closest ANC facility increased the odds of a mother attending the recommended minimum four antenatal visits. There was evidence for an effect of distance to the closest facility on the quality of ANC received by a mother: In the final model, adjusted for all confounders at individual, household and cluster level, the odds of a woman receiving good quality ANC (at least four visits with a skilled provider and at least eight interventions) decreased by a quarter (24%) for each 10 km increase in distance. Level of service provision at the closest ANC facility also strongly influenced whether the mother received good quality ANC (i.e. 
had at least four visits with a skilled provider and received at least eight interventions). For each category improvement in the level of ANC provision at the closest facility, there was a 54% increase in the odds of receiving good quality ANC in the adjusted model. Linking data from a national health facility census with data from a national household survey in a geographic information system allowed us to analyze and quantify the relationship between the level of service provision at a facility and women's ANC use, while considering various quality dimensions. We furthermore quantified the influence of distance on ANC use, adjusting for a wide range of confounders, and we studied three different ANC use outcomes .Our study showed that most rural women in Zambia lived far away from facilities providing an optimum level of service, and that both farther distance to and level of provision at the closest ANC facility were associated with the quality of ANC received by expectant mothers in rural Zambia. Timing of first ANC visit, however, did not seem to be influenced by distance to the closest ANC facility, nor did better level of provision at the closest facility lead to earlier ANC attendance or a greater number of ANC visits.Our findings on distance to a health facility and ANC use agree with the results of several previous qualitative studies and some quantitative studies The effect of distance on delivery in a health facility in Zambia, using the same datasets, was stronger Studies investigating the influence of distance on the quality of ANC received are scarce, although the information to construct such an outcome variable is available in most DHS datasets. Since focused ANC was introduced, the aim is not only to achieve a sufficient number of visits, but also relevant content during visits The level of service provision, reflecting aspects of quality of care provided at the facility is considered an important determinant of service use Another quantitative study on quality of care and ANC use found that in Sudan, besides shorter walking time, better quality of care at the facility was associated with a higher proportion of women using ANC monthly from the second trimester onwards The overall poor first trimester attendance of ANC in Zambia raises questions about whether women are aware of the importance and expected content of an early first antenatal visit or whether ANC is rather still seen as a check-up before birth to ensure everything is normal. One could speculate that the technical quality of care at ANC facilities may not influence women's early care-seeking behaviour because they don't appreciate the meaning of the ANC interventions, but are rather influenced by interpersonal quality of care. That is to say that the personal experiences a woman had earlier with facility staff, or experiences her friends and family had, may greatly impact on her care-seeking behaviour. Our unexpected finding that a higher level of provision at the closest facility (adjusting for distance) was weakly associated with a later first antenatal visit is difficult to explain. If it is not a chance finding , it could be due to the fact that technically assessed quality of care captured as the level of service provision is not the same as quality of care as perceived by women, and higher-level facilities may offer less personal care \u2013 or charge higher costs. 
A study on ANC use among women in rural Kenya found that lower perceived quality of care was significantly associated with a later first ANC visit The interpretation of our findings needs to take into account certain study limitations. The 2005 Zambia Health Facility Census provides only a snap-shot picture of health facility quality at one point in time. The availability of services may have varied during the five-year period (2002\u20132007) for which the 2007 Zambia DHS recorded ANC use, and the situation today is also likely to be different. On the other hand, using secondary data allowed us to study all of Zambia with a large variety of facilities being considered. This made it possible to overcome previous limitations in investigating the influence of level of service provision, which requires a large range of facilities to be studied. While the DHS covered a large national sample of women, the restriction of ANC information in the DHS on last births and our exclusion of movers left a rural subsample of 2405 births. A larger sample size would have enabled us to draw firmer conclusions.Non-differential misclassification of distance, due to errors in the geographic coordinates (to which the geoscrambling practice of Measure DHS contributes This study provides evidence that quality of ANC received by expectant mothers in rural Zambia is strongly influenced by the level of services provided at the closest ANC facility and to a lesser degree by distance to care. This suggests that to improve ANC quality received by mothers, efforts and resources should focus on improving the level of service provision at ANC facilities. The need to focus more on quality of care received by expectant mothers has been echoed in a number of recent calls This implies scaling-up effective screening tests and interventions towards universal coverage, especially in rural health centers which make up the majority of ANC providers. As most rural mothers live within reasonable distance of an ANC facility providing an adequate level of service , it may Further research may be needed to better understand the factors influencing ANC care-seeking in the first trimester of pregnancy, as well as the difference between perceived quality of care and technically assessed quality of care at a facility, as this could help understand client behavior and form the basis for future behavior change campaigns. As antenatal care-seeking overall is rather good, the \u201cquality gap\u201d for ANC is much larger than the \u201ccoverage gap\u201d"} +{"text": "Rhamphorhynchus that lie adjacent to the rostrum of a large individual of the ganoid fish Aspidorhynchus. In one of these, a small leptolepidid fish is still sticking in the esophagus of the pterosaur and its stomach is full of fish debris. This suggests that the Rhamphorhynchus was seized during or immediately after a successful hunt. According to the fossil record, Rhamphorhynchus frequently were accidentally seized by large Aspidorhnychus. In some cases the fibrous tissue of the wing membrane got entangled with the rostral teeth such that the fish was unable to get rid of the pterosaur. Such encounters ended fatally for both. Intestinal contents of Aspidorhynchus-type fishes are known and mostly comprise fishes and in one single case a Homoeosaurus. Obviously Rhamphorhynchus did not belong to the prey spectrum of Aspidorhynchus.Associations of large vertebrates are exceedingly rare in the Late Jurassic Solnhofen Limestone of Bavaria, Southern Germany. 
However, there are five specimens of medium-sized pterosaur Aspidorhynchus and the long-tailed pterosaur Rhamphorhychus from the Late Jurassic Solnhofen Limestone that proves evidence that Rhamphorhynchus fell victim the fish of prey, likely as a result or possibly of a lethal accidental interaction of both animals. At last four additional Rhamphorhynchus specimens tightly entangled with the rostrum of a large Aspidorhynchus have been discovered but in contrast to the new specimen none of them proves that the Rhamphorhynchus was alive when it was seized. Today aerial vertebrate prey, such as birds and bats is recorded for sharks Pterosaurs, actively flying reptiles of the Mesozoic era, are well documented by an extraordinary fossil record, with the most informative specimens coming from the Middle Jurassic to Early Cretaceous lacustrine deposits of the Jinlingsi and Jehol Group of the western Liaoning Province, China Eudimorphodon ranzii Parapholidophorus, which abundantly occurs in the Eudimorphodon bearing beds. Two specimens of Rhamphorhynchus muensteri contain remnants of small leptolepidid fishes Pteranodon and fish debris in the gular area of a Pterodactylus respectively have been tentatively interpreted as the contents of a throat pouch Preondactylus specimen from the Middle Triassic of Northern Italy is preserved as a crushed agglomeration of bone that likely formed the regurgitation pellet produced by a large fish While much is known on the skeletal and soft tissue anatomy of pterosaurs evidence for their position in the Mesozoic food chains is exceedingly sparse in the fossil record. Remnants of fishes inside the ribcage of a pterosaur have been reported for a single specimen of Aspidorhynchus acutirostris closely associated with Rhamphorhynchus muensteri. In three of these slabs the virtually complete skeleton of a subadult or adult Rhamphorhynchus lies immediately adjacent to the jaws of the Aspidorhynchus , Thermopolis, U.S.A. under the collection number WDC CSG 255. The specimen was found in the year 2009 in a plattenkalk quarry NW of the town of Eichst\u00e4tt within the Solnhofen Lithographic Limestone , Rhamphorhynchus had just caught a small fish and was about to swallow it head first, when the Aspidorhynchus attacked. The fish tail yet sticking in the pharyngeal region of the throat and the excellent preservation of the tiny fish without any trace of digestion suggests that swallowing was not completed and that the Rhamphorhynchus was alive and airborne during the attack. A regurgitation of the leptolepidid fish tail first into the pharyngeal region as a result of agony of the pterosaur would not have been possible, because the fins and the opercula of the fish would have braced themselves against the wall of the narrow esophagus. Instead, the fins of the fish lay smoothly alongside the body ande the lepidotrichs of the caudal fin are folded together and lay perfectly straight because the caudal fin apparently had just passed the larynx of the Rhamphorhynchus. Furthermore, there is no evidence that the half digested fish debris in the abdominal area .For the ultraviolet-light investigation of the specimen we used a set of UVA lamps with a wavelength of 365 to 366 nanometers and an intensity between 4000 and more than 50000 microwatts per 10 mm"} +{"text": "Even in small neuronal systems sensory stimuli evoke precise behavioral responses. 
The leech, possessing one of the smallest neuronal systems with ~10,000 neurons, responds to a touch of the skin with a locally limited bend away from this touch. Thomson and Kristan investigated in their study the encoding of touch location. Based on this study, we investigate how neurons of the medicinal leech encode information about the combination of stimulus location and intensity. We have recorded intracellularly from P cells and interneurons of the local bending network, while stimulating the skin mechanically with tactile stimuli of varied intensity and location. The neuronal responses of simultaneously recorded pairs of mechanosensory neurons were analyzed by calculating the difference of latencies and of spike counts of both cells. The discrimination performance was evaluated based on a pairwise classification of location distances and intensity differences. For the estimation of touch location, we found results in agreement with Thomson and Kristan. For the estimation of stimulus intensities, spike counts provide significantly better classification results than response latencies. The classification of this stimulus property does not improve when using the difference between the responses of two cells compared to using the spike count of a single P cell. Preliminary results show that features of graded interneuron responses like the slope, the amplitude and the area under the EPSP depend clearly on the touch intensity and in a more complex way on the touch location. 1. The touch location is accurately encoded by the latency difference of two P cells when higher stimulus intensities are used. 2. Stimulus intensity can be classified best based on the spike count of single cells. 3. The combined encoding of stimulus location and intensity is more complex for low intensity stimuli. This topic needs further investigation on the level of interneuron responses."}
+{"text": "The vast amount of research over the past decades has significantly added to our knowledge of phantom limb pain. Multiple factors including site of amputation or presence of preamputation pain have been found to have a positive correlation with the development of phantom limb pain. The paradigms of proposed mechanisms have shifted over the past years from the psychogenic theory to peripheral and central neural changes involving cortical reorganization. More recently, the role of mirror neurons in the brain has been proposed in the generation of phantom pain. A wide variety of treatment approaches have been employed, but mechanism-based specific treatment guidelines are yet to evolve. Phantom limb pain is considered a neuropathic pain, and most treatment recommendations are based on recommendations for neuropathic pain syndromes. Mirror therapy, a relatively recently proposed therapy for phantom limb pain, has mixed results in randomized controlled trials. Most successful treatment outcomes include multidisciplinary measures. This paper attempts to review and summarize recent research relative to the proposed mechanisms of and treatments for phantom limb pain. The concept of phantom limb pain (PLP) as being the pain perceived by the region of the body no longer present was first described by Ambrose Pare, a sixteenth century French military surgeon; the term phantom limb itself was later coined by Silas Weir Mitchell. Stump pain is described as the pain in the residual portion of the amputated limb whereas phantom sensations are the nonpainful sensations experienced in the body part that no longer exists. PLP was once thought to be primarily a psychiatric illness.
With the accumulation of evidence from research over the past decades, the paradigm has shifted more towards changes at several levels of the neural axis, especially the cortex . PeripheDuring amputation, peripheral nerves are severed. This results in massive tissue and neuronal injury causing disruption of the normal pattern of afferent nerve input to the spinal cord. This is followed by a process called deafferentation and the proximal portion of the severed nerve sprouts to form neuromas . There iThe axonal sprouts at the proximal section of the amputated peripheral nerve form connections with the neurons in the receptive field of the spinal cord. Some neurons in the areas of spinal cord that are not responsible for pain transmission also sprout into the Lamina II of the dorsal horn of the spinal cord which is the area involved in the transmission of nociceptive afferent inputs , 19. ThiCortical reorganization is perhaps the most cited reason for the cause of PLP in recent years. During reorganization, the cortical areas representing the amputated extremity are taken over by the neighboring representational zones in both the primary somatosensory and the motor cortex , 25, 26.Another proposed mechanism of PLP is based on the \u201cbody schema\u201d concept that was originally proposed by Head and Holmes in 1912. The body schema can be thought of as a template of the entire body in the brain and any change to the body, such as an amputation, results in the perception of a phantom limb . A furthThe assumption that PLP is of psychogenic origin has not been supported in the recent literature even though stress, anxiety, exhaustion, and depression are believed to exacerbate PLP . A crossA number of different therapies relying on different principles have been proposed for the management of PLP as shown in Preemptive use of analgesics and anesthetics during the preoperative period is believed to prevent the noxious stimulus from the amputated site from triggering hyperplastic changes and central neural sensitization which may prevent the amplification of future impulses from the amputation site . HoweverA cross sectional study found that acetaminophen and NSAIDs were the most common medications used in the treatment of PLP . The anaOpioids bind to the peripheral and central opioid receptors and provide analgesia without the loss of touch, proprioception, or consciousness. They may also diminish cortical reorganization and disrupt one of the proposed mechanisms of PLP . RandomiTricyclic antidepressants are among the most commonly used medications for various neuropathic pains including PLP. The analgesic action of tricyclic antidepressant is attributed mainly to the inhibition of serotonin-norepinephrine uptake blockade, NMDA receptor antagonism, and sodium channel blockade . The rolGabapentin has shown mixed results in the control of PLP with some studies showing positive results while others not showing efficacy \u201359. CarbThe mechanism of action of calcitonin in treatment of PLP is not clear. Studies relative to its therapeutic role have been mixed , 62.The mechanism of action of NMDA receptor antagonism in PLP is not clear. Memantine has shown some benefits in some case studies but controlled trials have shown mixed results , 64. A rThe beta blocker propranolol and the calcium channel blocker nifedipine have been used for the treatment of PLP . HoweverTranscutaneous electrical nerve stimulation has been found to be helpful in PLP . 
HistoriMirror therapy was first reported by Ramachandran and Rogers-Ramachandran in 1996 and is suggested to help PLP by resolving the visual-proprioceptive dissociation in the brain , 70. TheAlthough there are earlier reports suggesting temperature biofeedback to be helpful for burning sensation of PLP, there is no specific evidence to match specific types of PLP with specific biofeedback techniques . There iSurgical interventions are usually employed when other treatment methods have failed. A case report relates the effectiveness of lesioning the dorsal root entry zone (DREZ) on upper limb phantom pain resulting from brachial plexus avulsions . AnotherA case report of positive outcome has been published even though the mechanism and role of ECT relative to PLP is not well understood .PLP is a relatively common and disabling entity. We have learned much about the pathophysiology and management of PLP since it was first described about five centuries ago. However, there is still no one unifying theory relative to the mechanism of PLP. Specific mechanism-based treatments are still evolving, and most treatments are based on recommendations for neuropathic pain. The evolution of the mechanistic hypothesis from body schema and neuropathic theories to the recently proposed role of mirror neurons in the mechanism of pain have added to our understanding of PLP. Further research is needed to elucidate the relationship between the different proposed mechanisms underlying PLP. A synthesized hypothesis explaining the phenomenon of PLP is necessary in the future for the evolution of more specific mechanism-based treatment recommendations."} +{"text": "Heart failure is seen as a complex disease caused by a combination of a mechanical disorder, cardiac remodeling and neurohormonal activation. To define heart failure the systems biology approach integrates genes and molecules, interprets the relationship of the molecular networks with modular functional units, and explains the interaction between mechanical dysfunction and cardiac remodeling. The biomechanical model of heart failure explains satisfactorily the progression of myocardial dysfunction and the development of clinical phenotypes. The earliest mechanical changes and stresses applied in myocardial cells and/or myocardial loss or dysfunction activate left ventricular cavity remodeling and other neurohormonal regulatory mechanisms such as early release of natriuretic peptides followed by SAS and RAAS mobilization. Eventually the neurohormonal activation and the left ventricular remodeling process are leading to clinical deterioration of heart failure towards a multi-organic damage. It is hypothesized that approaching heart failure with the methodology of systems biology we promote the elucidation of its complex pathophysiology and most probably we can invent new therapeutic strategies. Systems biology is a term selected for the study of the vast amount of experimental data advanced from current technologies in genomics and proteomics . It is aThe explanation given to the clinical syndrome of heart failure is limited if only the pathophysiological mechanisms are described. Also the specific clinical models developed to interpret the variety of clinical phenotypes are not sufficiently coherent to explain the progressive deterioration of heart failure and some of the successful aspects of the new therapeutic agents. 
Targeting the involved molecular mechanisms, only the biomechanical model predicts that the novel therapeutic strategies interrupt the viscous cycle of myocardial dysfunction and cardiac remodeling .The present paper debates the biomechanical model of heart failure under the current concept of systems biology methodology exploring the connection of the complex metabolic and regulatory networks with the clinical phenotypes of heart failure. It is accepted that early mechanical stresses applied in the myocytes and/or mechanical changes in the left ventricular cavity comprise the primary initiating causes for cardiac remodeling and neurohormonal activation.It is hypothesized that only through systems biology methodology can we explain the complex phenomenon of heart failure and defend the concept of the biomechanical model. This approach will increase the knowledge of the pathogenesis of heart failure and eventually will enable the development of effective drug therapy.The nature and the biological world constitute a complex system compiled of many interacting components that produce complex physical and biological phenomena. The research on complex phenomena extends from physics to biological and the medical world . The phyA network of interactions is considered modular if it is made of autonomous interconnected components that act together in performing some discrete physiological function , 5. The Systems biology integrates diverse areas of science targeting the underlying principles of hierarchical metabolic and regulatory systems from the cell to an organismal level . It enta1). The heart should be considered as a multifunctional organ that participates in the homeostatic regulation of the body and is an essential element of a complex and integrated network of organs and systems. Human heart failure from the point of view of systems biology is considered as a syndrome with different causes that involves many clinical appearances or clinical phenotypes. Each clinical phenotype involves many biochemical pathways, pathophysiological changes and physiological regulatory systems. These regulatory systems are important in upgrading or inhibiting the cardiac cavity remodeling process that delineates the syndrome. Two of the most important regulatory systems in heart failure, the natriuretic peptide axis and the renin-angiotensin-aldosterone system (RAAS), are considered examples of critical functional modules . In the aAtrial natriuretic peptide (ANP), brain natriuretic peptide (BNP) and their prohormones (proANP and proBNP) are secreted by cardiomyocytes. Under normal circumstances the ANP and BNP are released after increase of the atrial and myocardial stretch or after excessive sodium intake. In patients with heart failure the natriuretic peptides release is considered the earliest regulatory mechanism that is motivated in contrast to later sympatho-adrenal system (SAS) and RAAS activation. It is accepted that the ANP is produced mainly from atria and BNP from ventricles, and they are differently regulated in the two cavities. Therefore two cardiac endocrine systems are working in parallel, one in the atria producing ANP and the other in the ventricles secreting BNP . In chroThe BNP is considered both as a biomarker of heart failure diagnosis and as a prognostic factor in various cardiovascular diseases. In clinical practice the BNP sensitivity and specificity for increased filling pressure detection are lower than the echocardiographic estimation criteria for filling pressure. 
In a recent article doppler echocardiography methodology increased the accuracy of heart failure diagnosis in the presence of intermediate BNP or amino-terminal proBNP (NT-proBNP) and improved the stratification of risk across all periods of heart failure . NeverthThe natriuretic peptides possess vasodilator, natriuretic and diuretic effects, and an inhibitory action on immunological and inflammatory systems, and on growth factors , 13. TheSystems biology approach to the natriuretic peptide axis is revealed from genomic and proteomic studies which identified many locations that were responsible for heart failure exacerbation. Genomic studies recognized multiple polymorphic structural variants of the BNP gene, which were responsible for the blood levels of BNP and NT-proBNP in normal subjects and in patients with heart failure , 16. Alsa diagnostic and prognostic point of view. Recent proteomic studies define some aspects of the functional importance of the natriuretic peptide axis in comprehending the heart failure syndrome. The BNP-32 is considered to represent the main biologically active form of BNP and predictably is increased in patients with heart failure. However, using a novel method of mass spectrometry, new high molecular weight forms of BNP have recently been discovered that are less biologically active than BNP-32, but with the current assays were erroneously identified as BNP-32 or NT-Pro BNP . The aboThe increased BNP in patients with heart failure is inadequate to delay the disease progression and its beneficial physiological effects are reduced partly because of the up-regulation of the phosphodiesterase type-5 enzyme (PDE5) . The PDEThe levels of natriuretic peptides in patients with chronic heart failure are increased compared with healthy individuals. Excessively high circulating levels of natriuretic peptides are confirmed in patients with chronic heart failure presented with fluid retention and vasoconstriction. The defective biological activity of the natriuretic peptides system suggests a kind of peripheral resistance to the biological effect of natriuretic peptides . This peThe above data support the hypothesis that the natriuretic peptides are elements of an integrated network that includes the cardiac muscular, endocrine, immune, and nervous system.bThe depressed left ventricular function generates a variety of physiological reactions in an endeavor to remedy the diseased myocardium. These counteracting responses that include the increased activity of both the SAS and the RAAS, are considered deleterious for the myocardium, and therefore correctly their association with the cardiovascular and cardiorenal disease became a therapeutic target. The SAS is activated early (but after the activation of BNP) in the course of heart failure due to autonomic imbalance or to loss of the inhibitory effect of the baroreceptor reflexes. The RAAS is activated later in the course of heart failure with the possible mechanisms of the renal hypoperfusion due to low cardiac output and the increased renin release due to sympathetic stimulation of the kidney.1 receptor blockers (ARBs) ( antagonize angiotensin II binding to AT1 receptors). The above therapeutic agents to a degree could prevent or reverse heart failure deterioration and progression. 
These treatments cope with the overexpression of the biologically active neurohormonal molecules that exert deleterious effects on the heart and circulation [The most significant classes of drugs targeting the RAAS are the \u03b2-adrenergic antagonists (\u03b2 blockers) (reduce renin), the angiotensin-converting enzyme inhibitors (ACEI) (reduce levels of angiotensin II) and the ATculation . The metculation . The comSystems biology approach supports the hypothesis that there are two integrated and opposing systems acting not only at the cardiac level but in the whole body. The first system is the RAAS with vasoconstrictive, sodium retaining, thrombophylic, prohypertrophic, and proinflammatory properties, while the second is the natriuretic peptide system that manifests opposing qualities such as natriuretic, vasodilating, anti-inflammatory, anti-thrombotic and anti-hypertrophic. In patients with heart failure the first system controls the hemodynamic homeostasis, initially as a counterbalancing mechanism but later induces a progressive and deleterious worsening of the cardiac function. The second opposing mechanism, the natriuretic peptide system, seems to be inadequate in delaying the heart failure progression.1).The goal of modern clinical cardiology is to obtain a reliable description and understanding of the human physiology in various states and diseases, to incorporate the different physiological and interrelated derangements presented in databases, and finally to use the available clinical variables constructing specific clinical models mechanical stimulus responsible for cardiac remodeling and neurohormonal activation, and also the progressive mechanical remodeling changes responsible for the persistence and progression of clinical heart failure syndrome . The oveLAX) and early diastole (ELAX) which were lower in the patients with diastolic heart failure [M) and early diastolic (EM) velocities in patients with HFrEF, HFpEF and isolated diastolic dysfunction [\u2019) is useful to assess left ventricular filling pressure while a persistent elevation of this ratio may be a prognostic factor of diastolic heart failure in patients with HFpEF independent of left ventricular hypertrophy [\u2019 ratio after optimized medical therapy is predictive of cardiac events [\u2019 greater than 15, ejection fraction less than 50%, and severe functional mitral regurgitation were independent echocardiographic predictors of cardiac events [\u2019<3 cm/s was associated with a significantly excess mortality, and this measurement added incremental prognostic value to E/E\u2019 >15 [In a recent paper, a mathematical model is introduced that predicts the dominant role of remodeling as a compensatory physiological response in chronic heart failure resulting in a normalization of stroke volume . In the failure . Left ve failure . Systolifunction . The Dopertrophy . In patic events . In patic events . In patiE/E\u2019 >15 .In the end-stage of heart failure the persistent mechanical remodeling changes are self-determined probably without neurohormonal dependence operating until the end of the disease. In this stage it is improbable for the neurohormonal activation to have a sustained and effective inotropic performance and therefore the only compensatory mechanism that remains is the left ventricular remodeling. 
The compensatory mechanisms of left ventricular remodeling and neurohormonal activation are not sufficient to counterbalance effectively the initial mechanical left ventricular changes while the progressive mechanical remodeling changes that occur during the end-stage of the left ventricular dysfunction are deleterious and unopposed by any other compensatory mechanisms. In a sense significant left ventricular dysfunction begets progressive ventricular remodeling, that in turn begets progressive left ventricular dysfunction. The end-staged left ventricle represents a terminally remodeled and noncompliant cavity with low contractility and function.The mechanosensation is an old biological process to be found in all forms of life. In mechanotransduction the mechanical effect is closely related and affects the transduction systems of vasopeptides, hormones, sensing receptors, ionic fluxes and other autocrine/paracrine mechanisms.The endothelium of arteries and veins is constantly exposed to hemodynamic forces due to the pulsatile nature of blood pressure and flow. Therefore the endothelium is permanently detecting various biomechanical forces , which are translated into intra- and extracellular signals . The bioThe Kruppel-like factor (KLF2) is a transcriptional inhibitor of endothelial mediated inflammation . The KLFThe biomechanical stress is speculated as an early and most significant stimulus for cardiac hypertrophy and heart failure. The gp130 cytokine receptor plays a significant role in myocyte survival pathway in the transition to heart failure. During aortic pressure overload, in mice with ventricular restricted knockout of the gp139 cytokine receptor, was described a rapid onset of dilated cardiomyopathy and massive myocyte apoptosis in contrast to control mice that showed compensatory hypertrophy . The actin vivo analyses of intracardiac flow forces in zebrafish embryos [It is hypothesized that in the adult the early hemodynamic cavity stresses of left ventricular dysfunction induce ventricular endothelial dysfunction and remodeling. Probably that follows the embryonic model of flow-structure interactions that influence the process of cardiac development . The int embryos . The rec embryos . The ana embryos . The \u03b2 aMyocardial biomarkers like natriuretic peptides are considered useful biochemical substances in the diagnosis of patients with heart failure. The natriuretic peptides ANP and BNP are secreted in response to increasing cardiac wall tension and/or circulating neurohormones. The blood levels of ANP and BNP are increased in patients with left ventricular dysfunction as well in patients with preserved ejection fraction. In patients with systolic heart failure the BNP blood levels are directly related to wall stress, ejection fraction and functional failure classification . In wome1R), as it is well known that angiotensin II is proinflammatory and proatherosclerotic. This was demonstrated in an immunohistochemical analysis in the aortic arch of transgenic mice where a pronounced expression of AT1R was found in the inner atheroprone regions of the aortic arch, characterized by disturbed or oscillatory shear stress, but not in the outer aortic arch exposed to high shear stress [1R protein expression.Shear stress demonstrates an atheroprotective role through downregulation of angiotensin type 1 receptors (ATr stress . In the 2).How can the early mechanical effects be translated into clinical phenotypes? 
The impact of the early mechanical changes on clinical outcome in heart failure patients remains to be established in a future study comparing initial mechanical changes with the earliest neurohormonal activation. It is significant from a clinical perspective the design of a trial with sufficient power to discern meaningful interactions between early mechanical effects and neurohormonal activation in order to improve our understanding of the later clinical outcome give several reasons supporting the wall-stress hypothesis: 1) Pure mechanical overload that is not accompanied by a neurohormonal reaction is a rather rare phenomenon and therefore the coexistence of the two processes obscures the significance of the mechanical initial factor. 2) The transgenic hypothesis is impossible to reproduce the entire cascade that follows the mechanical stress. 3) During cardiogenesis the different transcription factors operate synergistically in a combinatorial way that is not documented in cardiac hypertrophy. They support the hypothesis that cardiac hypertrophy results from an initial mechanical effect and a re-expression of the fetal program, and during clinical conditions in humans the effects of mechanics are modified by senescence, obesity, diabetes and ischemia [The transgenic technology queries the validity of the wall-stress hypothesis that both cardiac hypertrophy and fetal reprogramming were a necessary adaptive response to mechanical load. It is suggested that the progression of heart failure depends on different mechanisms triggered by new molecular rearrangements. The above position is not absolutely valid and it is not necessary to reconsider the wall-stress hypothesis . Swyngheet al give sev2). Phenotype in biology is considered any observable characteristic of an organism that includes morphology, biochemical properties, physiological functions, and behaviors. As phenotypic data from systems biology perspective are examined network functional states such as fluxomic data that give information about the actual flux distributions in a network . The comHuman heart failure is a syndrome exhibiting multiple clinical phenotypes implicating a number of common pathophysiological, biochemical and molecular mechanisms. The clinical phenotypes are marked by multiorgan dysfunction due to the upregulation or downregulation of the above mechanisms. It is important from experimental and clinical perspective to define these phenotypes and to schedule a therapeutic strategy of multifaceted heart failure targeting to a more personalized therapy. The described heart failure models represent clinical phenotypes that can be studied using the current available data from the advanced fields of genomics, proteomics and experimental model systems . RecentlTherefore the neurohormonal dysfunction initially compensates cardiac hemodynamic changes and in a later stage becomes toxic to the myocardium reducing myocardial contractility and increasing the deleterious processes of cardiac remodeling.In conclusion the systems biology analysis of heart failure will increase our understanding of the biological and mechanical basis of the disorder. The progressive nature of the mechanical model of the heart failure from early mechanical changes to left ventricular remodeling and the modular analysis of the neurohormonal homeostatic regulation will produce a comprehensive analysis of the disease process. 
The elucidation of the complex heart failure disease will also provide the material for further development of new therapeutic strategies."} +{"text": "Understanding tumor induced angiogenesis is a challenging problem with important consequences for diagnosis and treatment of cancer. Recently, strong evidences suggest the dual role of endothelial cells on the migrating tips and on the proliferating body of blood vessels, in consonance with further events behind lumen formation and vascular patterning. In this paper we present a multi-scale phase-field model that combines the benefits of continuum physics description and the capability of tracking individual cells. The model allows us to discuss the role of the endothelial cells' chemotactic response and proliferation rate as key factors that tailor the neovascular network. Importantly, we also test the predictions of our theoretical model against relevant experimental approaches in mice that displayed distinctive vascular patterns. The model reproduces the in vivo patterns of newly formed vascular networks, providing quantitative and qualitative results for branch density and vessel diameter on the order of the ones measured experimentally in mouse retinas. Our results highlight the ability of mathematical models to suggest relevant hypotheses with respect to the role of different parameters in this process, hence underlining the necessary collaboration between mathematical modeling, in vivo imaging and molecular biology techniques to improve current diagnostic and therapeutic tools. Sprouting angiogenesis - the process by which new blood vessels grow from existing ones - is a ubiquitous phenomenon in health and disease of higher organisms Recent progresses on high resolution microscopy e.g. the activation and subsequent sprouting of new branches) and on the large scale collective movements of the cells due to endothelial cell proliferation and the tissue properties Mathematical models of tumor induced angiogenesis have been fundamentally of two types: either microscopic descriptions accounting for cell dynamics The process of tumor angiogenesis starts when endothelial cells of existing capillaries acquire the tip cell phenotype by the action of a protein cocktail produced by tumor and related-tissue cells, generally induced by a hypoxic microenvironment. Tip cells lead the growth of new capillaries in conjunction with further endothelial cells which acquire the stalk cell phenotype. The migration of endothelial tip cells is directed towards increasing concentrations of relevant growth factors. These, such as VEGF , have a three-fold role at the cell scale: (i) to trigger the permeability of the capillaries and the subsequent activation of the tip cell phenotype, (ii) to promote migration of tip cells in the direction of its gradient, and (iii) to promote the proliferation and survival of the stalk endothelial cells. The angiogenic role of VEGF is opposed by different anti-angiogenic factors in the tissue.Tip migration is associated with the production of extracellular MMPs that are responsible for the remodeling of nearby ECM (ExtraCellular Matrix), and that affects the affinity of VEGF species towards different extracellular locations. 
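The entry above describes a multi-scale phase-field treatment of sprouting angiogenesis in which an angiogenic factor, standing for the balance of pro- and anti-angiogenic signals, diffuses through the stroma and is consumed near endothelial cells. As a minimal sketch of that one ingredient, the update rule below advances a diffusing factor T that is degraded where an order parameter phi marks capillary tissue; the grid, parameter values and the simple linear consumption term are assumptions for illustration and do not reproduce the authors' full model, which also evolves phi itself, the tip-cell agents and the ECM-bound VEGF isoforms.

```python
import numpy as np

def step_angiogenic_factor(T, phi, D=1.0, alpha=0.5, dx=1.0, dt=0.1):
    """One explicit update of dT/dt = D*lap(T) - alpha*T*phi on a periodic grid.

    T   : 2D array, concentration of the angiogenic factor
    phi : 2D array, order parameter marking capillary/endothelial regions
    A toy diffusion-plus-consumption rule only; parameter values are placeholders.
    """
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
           np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4.0 * T) / dx**2
    return T + dt * (D * lap - alpha * T * phi)

# hypothetical usage: a point-like hypoxic source kept at fixed concentration
T = np.zeros((128, 128))
phi = np.zeros((128, 128))
phi[:, :10] = 1.0             # a parent vessel along one edge of the domain
for _ in range(500):
    T[64, 120] = 1.0          # hypoxic cells keep releasing the factor
    T = step_angiogenic_factor(T, phi)
```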
This has relevant consequences in the bio-availability of VEGF The different isoforms of VEGF are distinguishable by the degree in which they anchor to negativelly-charged molecules in the ECM or the cell surface After an opened path in the ECM is generated by the tip cells, a reorganization of the stalk proliferating cells is required in order to form a new lumen for blood circulation, in precise coordination with pericytes and other stromal components. Further processes such as anastomosis (connection between different branches on the network), the action of pressure forces and the intrinsic mechanical properties of the tissue, contribute to the formation of the new vessel network and are finely tuned to determine vascular patterning In this work the dynamics of the interface between the newly formed capillaries and the stroma is treated with a phase-field model formalism In the model we introduce an angiogenic factor that represents the balance between pro and anti angiogenic signaling proteins We analyze the capillary network morphology for different values of the stalk endothelial cells proliferation rate and of the velocity of the tip endothelial cells in the tissue.We describe the dynamics of an effective factor tion see .Proliferative and non-activated cells are described by an order parameter v see .In order for the endothelial tip cell activation to occur, points with large values for both per se, but the density of stalk cells on the vicinity of the tip cell.To merge tip cell and capillary dynamics, we relate the value of the order parameter inside the tip cell Starting from a single vessel We test our mathematical model against two experimental situations.First, we compare with results obtained by Haigh et al Although the final goal of these authors was to determine the role of VEGF-A during the development of the nervous system, their neat approach allowed the visualization of the vascular network in the developing cortex and retina of embryos and newborn mice. The authors reported that the decrease of VEGF-A is accompanied by a sparser vessel network when compared to the wild type used as a control condition Click here for additional data file.Figure S2Capillary network morphology obtained for different number of angiogenic factor sources. Figures A, B, C, and D have respectively 120. 240, 480 and 960 initial hypoxic cells approximately. At low density of sources the network forms a tree-like structure, with a low number of branches, similar to the network at low values of of see . As the (EPS)Click here for additional data file.Figure S3Capillary network morphology obtained for a large initial circular source of angiogenic factor. Figures A, B, C, and D are snapshots of the vasculature growth. The resulting network is tree-like and very dense when reaches the source. This is a brush-like network as observed in various solid tumor situations (both in silico and in vivo) (EPS)Click here for additional data file.Figure S4Capillary network for deficient Notch signaling . The figures represent the observed capillary network morphology for low and high proliferation rates; respectively t al see .(EPS)Click here for additional data file.Text S1Supporting information with details on the model equations.(PDF)Click here for additional data file.Video S1Formation of the capillary network shown in . 
The tip cell velocity and stalk cells proliferation rate in this simulation are (MOV)Click here for additional data file."} +{"text": "Previous studies have reported an association of altered fibrin clot network architecture with several diseases including sepsis, bleeding or acute thromboembolic disease . We inveRheometry and confocal laser scanning microscopy (CLSM) were used to monitor and image the formation of fibrin clots. Clotting was initiated by the addition of different levels of thrombin to solutions of a fixed concentration of fibrinogen. Each sample was divided into two aliquots; one added to the measuring geometry of an AR-G2 rheometer and one to the microscope slide for the spinning disk CLSM (Olympus IX71).The micrographs of formed clots Figure show marWe demonstrate, for the first time, that the fractal dimen-sion obtained by rheometry is a sensitive measure of visually observed structural differences within the fibrin network. Rheometrical detection of incipient clots formed in whole blood provides the clinician with a powerful tool for the diagnosis of thromboembolic disease."} +{"text": "Insects have evolved obligate, mutualistic interactions with bacteria without further transmission to other eukaryotic organisms. Such long-term obligate partnerships between insects and bacteria have a profound effect on various physiological functions of the host. Here we provide an overview of the effects of endosymbiotic bacteria on the insect immune system as well as on the immune response of insects to pathogenic infections. Potential mechanisms through which endosymbionts can affect the ability of their host to resist an infection are discussed in the light of recent findings. We finally point out unresolved questions for future research and speculate how the current knowledge can be employed to design and implement measures for the effective control of agricultural insect pests and vectors of diseases. Insects comprise about 95% of all known animal species and are considered one of the most successful groups of living organisms on earth. They possess an extremely efficient immune system that allows them to deal with pathogenic infections. The insect immune system consists of a wide variety of defense mechanisms that act individually or in combination to prevent foreign organisms from entering the insect body or to suppress the growth and replication of pathogens once they gain access to host tissues.The first line of defense is represented by the insect's epithelia, which serves as a barrier against biotic and abiotic factors, and produce local antimicrobial peptides (AMP) upon infection or wounding produce essential vitamins that are not present in the vertebrate blood meal and in uninfected control flies on the Drosophila immune response (ortholog of the DrosophilaPGRP-LB) was increased in the bacteriome and this increase coincides with the release of the endosymbiotic bacteria from the bacteriocytes. In turn, high PGRP gene transcription at the nymphal stage of Sitophilus is accompanied by significant up-regulation of the endosymbiont virulence genes and a positive regulator in the immune deficiency (Imd) signaling pathway (Kenny). Similarly, transcription profiles of Wolbachia infected and uninfected Drosophila S2 cells revealed the up-regulation of several genes involved in the Imd, Toll, and c-Jun N-terminal kinase (JNK) pathways in the interaction between the endosymbiont and immune genes in mosquitoes. The authors used cell lines from A. 
gambiae, which is not a natural host for Wolbachia, and A. albopictus, which is naturally infected with the strain wAlbB, and compared the transcriptional induction of immune genes between the two cell lines genes, prophenoloxidase, and NO cascade genes or E. carotovora (Gram-negative), respectively. Surprisingly, infection of wild type flies with the plant pathogenic bacterium Spiroplasma citri failed to activate the immune system, leading to bacterial proliferation in the hemolymph and death of the flies. In addition, wild type and immune mutants infected by S. citri showed no differences in their survival rates following infection with this bacterium. The ability of S. citri to kill flies is probably due to the fact that these bacteria are not detected by the Drosophila immune system.The impact of Wolbachia-infected D. melanogaster and D. simulans flies , with three gram-negative bacterial pathogens did not affect their survival ability compared to Wolbachia-free control flies with two intracellular bacterial pathogens, Salmonella typhimurium and Listeria monocytogenes, and an extracellular pathogen, Providencia rettgeri, showed no differences in pathogen load between the two types of flies and showed that they were sensitive to systemic infection with E. coli bacteria , Flock House Virus (FHV) and Cricket Paralysis Virus (CrPV) , but not to Insect Virus 6 (DNA virus), and reduced viral burden in flies carrying the endosymbionts and the avian malaria parasite to establish infection in the mosquito reactions in D. melanogaster have been introduced into A. aegypti mosquitoes where they have been found to increase the transcriptional levels of melanization genes as well as AMP and Toll related genes compared to control treatments and wPip(Mc), naturally found in Culex pipiens were also shown to confer protection to mosquitoes following infection with Plasmodium relictum parasites compared to uninfected mosquitoes, which may be responsible for the inhibitory effect on the development of B. pahangi parasites and D. simulans (wRi) larvae have no influence on the encapsulation of parasitoid eggs, although there is a minor decrease in parasitoid development in flies infected by the endosymbiont in the parasitoid wasp Leptopilina victoria do not affect encapsulation of its eggs by various Drosophila host species (Gueguen et al., The effect of Wolbachia on fungal infections (Fytrou et al., Wolbachia and Spiroplasma, co-existing in an insect host and efficiency of the immune function. It will also be important to elucidate the precise mechanisms employed by endosymbiotic bacteria to modulate immune signaling in insects. Similar research will undoutedly reveal the relative contribution of endosymbiotic bacteria to the overall host immune response against various classes of pathogenic organisms.Despite impressive advances in the broad field of insect innate immunity, our understanding of the role of endosymbiotic bacteria in the host immune response to pathogenic infections remains incomplete. Previous and recent studies have started to determine the phenotypic response of various insects carrying endosymbionts to infection by bacterial and viral pathogens as well as parasites. These studies have substantially improved our understanding of the complex interactions between insects, their endosymbiotic bacteria and pathogenic organisms in the infection and host immunity processes. 
It will further be of particular interest to test the immune response of insects with or without endosymbionts to infection by entomopathogenic fungi, as there is currently clear conflict within the literature on the effect of Wolbachia endosymbionts in mosquitoes has a direct effect on insect sensitivity to pathogenic infections has attracted the attention of scientists for the development of novel approaches for the control of human diseases (Hancock et al., Wolbachia introduced into A. aegypti resulted in successful invasion of natural populations of mosquitoes (Hoffmann et al., From the practical point of view, the recent discovery that the presence of The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Current Neuropharmacology experts discuss pathophysiology, clinical features and treatment of various neurologic autoimmune disorders. In the last decade we have witnessed remarkable scientific advances which have greatly improved our understanding of pathophysiology of the autoimmune disorders leading to an increasing number of treatment options with many of new therapies already being used in the treatment of neurologic disorders as their role continues to evolve [Autoimmune disorders affect 5-10% of the general population, [o evolve . The firCurrent Neuropharmacology issue for all their time and effort in preparing their articles to create a comprehensive and up-to-date collection of articles on the wide spectrum of autoimmune neurologic disorders.I would like to thank the authors of this"} +{"text": "The understanding of the effective functionality that governs the enzymatic self-organized processes in cellular conditions is a crucial topic in the post-genomic era. In recent studies, Transfer Entropy has been proposed as a rigorous, robust and self-consistent method for the causal quantification of the functional information flow among nonlinear processes. Here, in order to quantify the functional connectivity for the glycolytic enzymes in dissipative conditions we have analyzed different catalytic patterns using the technique of Transfer Entropy. The data were obtained by means of a yeast glycolytic model formed by three delay differential equations where the enzymatic rate equations of the irreversible stages have been explicitly considered. These enzymatic activity functions were previously modeled and tested experimentally by other different groups. The results show the emergence of a new kind of dynamical functional structure, characterized by changing connectivity flows and a metabolic invariant that constrains the activity of the irreversible enzymes. In addition to the classical topological structure characterized by the specific location of enzymes, substrates, products and feedback-regulatory metabolites, an effective functional structure emerges in the modeled glycolytic system, which is dynamical and characterized by notable variations of the functional interactions. The dynamical structure also exhibits a metabolic invariant which constrains the functional attributes of the enzymes. Finally, in accordance with the classical biochemical studies, our numerical analysis reveals in a quantitative manner that the enzyme phosphofructokinase is the key-core of the metabolic system, behaving for all conditions as the main source of the effective causal flows in yeast glycolysis. 
Yeast glycolysis is one of the most studied dissipative pathways of the cell; it was the first metabolic system in which spontaneous oscillations were observed Glycolysis is the central pathway of glucose degradation and it is implied in relevant metabolic processes, such as the maintenance of cellular redox states, the provision of ATP for membrane pumps and protein phosphorylation, biosynthesis, etc; and its activity is linked to a high variety of important cellular processes, e.g., glycolysis has a long history in cancer cell biology Over the last 30 years a large number of different studies focused on different molecular mechanisms allowing for the emergency of self-organized glycolytic patterns The theoretical basis of dissipative self-organization processes was formulated by Ilya Prigogine From a dissipative point of view, the essential enzymatic states are those corresponding to the biochemical irreversible processes, and they are the only metabolic processes which might allow that the enzymatic system to work far from the thermodynamic equilibrium. Once the irreversible enzymatic system operates sufficiently far-from-equilibrium as the nonlinear nature of its kinetics, the steady state may become unstable leading to dynamical behaviours and new instabilities originating the emergence of different biochemical temporal patterns In the yeast glycolysis, the main instability-generating mechanism is based on the self-catalytic regulation of the irreversible enzyme phosphofructokinase, specifically, the positive feed-back exerted by the reaction products, the ADP and fructose-1,6-bisphosphate In this paper, to go a next step further in the understanding of the relationship between the classical topological structure and functionality we have analyzed the effective connectivity of yeast glycolysis, which in inter-enzyme interactions accounts for the influence that the activity of one enzyme has on the future of another For this purpose, we considered a yeast glycolytic model described by a system of three delay-differential equations in which there is an explicit consideration of the rate equations of the three irreversible enzymes hexokinase, phosphofructokinase and pyruvatekinase. These enzymatic activity functions were previously modeled and tested experimentally by other different groups We have obtained time series of enzymatic activity under different sources of the glucose input flux. The data corresponded to a typical quasi-periodic route to chaos which is in agreement with experimental conditions Using the non-linear analysis technique of Transfer Entropy TE has been proposed as a rigorous, robust and self-consistent method for the effective connectivity i.e., causal quantification of the functional information flow among nonlinear processes In The monitoring of the fluorescence of NADH in glycolyzing bakerIn order to simulate these metabolic processes, the glycolytic system is considered under periodic input flux with a sinusoidal source of glucose Under these conditions, a wide range of different types of dynamic patterns can emerge as a function of the control parameter, hereafter the amplitude A of the sinusoidal glucose input flux pattern . An incrs emerge . Above As appear . 
After aappear , , as predTo go a next step further in the understanding of the relationship between the classical topological structure and effective functionality we have analyzed by means of non-linear statistical tools the catalytic patterns belonging to this scenario to chaos, and for each transition represented in the Transfer Entropy (TE) quantifies the reduction in uncertainty that one variable has on its own future when adding another. This measure allows for a calculation of the functional influence in terms of effective connectivity between two variables The values of functional influence are ranging in The glycolytic effective connectivity is illustrated in In all cases analyzed, the values of TE present a maximum statistical significance .Next, we have measured the total information flow, defined as the total outward of Transfer Entropy arriving to one enzyme minus the total inward. Positive values mean that that enzyme is a source of causality flow and negative flows are interpreted as sinks or targets. The results of the total information flows are represented For all conditions the enzyme EThe attributed role to each enzyme, namely ETime correlations allows for quantification about how much two time series are statistically independent. According to that, we have measured the time pairwise correlations in the enzymatic system, and the corresponding results are shown in These values of time correlations were almost constant through the quasi-periodic route to chaos and established that the activities of EThe Mutual Information (MI) quantifies how much the knowledge of one variable reduces the entropy or uncertainty of the another The high values of MI (close to 0.50) proved a high redundancy in information between the pairs of enzymes. In other words, the number of bits of information transferred from one enzyme to another is much larger than the actually needed.The values in the principal diagonal of The values of MI have a maximum statistical significance .Finally, we have computed the Mutual Information between the glucose input fluxes and the activity patterns of the different enzymes. In all cases, the MI was equal to zero, proving that the oscillations of the glucose were statistical independent of the glucose- 6-phosphate, fructose 1-6-biphosphate and pyruvate, products of the main irreversible enzymes of glycolysis.In this paper we have quantified essential aspects of the effective functional connectivity among the main glycolytic enzymes in dissipative conditions.First, we have computed under different source of glucose the causality flows in the metabolic system. This level of the functional influence accounts for the contribution of each enzyme to the generation of the different catalytic behavior and adds a directionality in the influence interactions between enzymes.The results show that the flows of functional connectivity can change significantly during the different metabolic transitions analyzed, exhibiting high values of transfer entropy, and in all considered cases, the enzyme phosphofructokinase corresponds to the Eerge cf. . This fiThe level of influence in terms of causal interactions between the enzymes is not always the same but varies depending on the substrate fluxes and the dynamic characteristics emerging in the system. 
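The transfer-entropy analysis summarized above can be reproduced in outline with a plug-in estimator: coarse-grain each activity trace, count joint occurrences of the future of one enzyme together with the pasts of both, and form the conditional probabilities; the net causality flow of an enzyme is then its total outgoing transfer entropy minus its total incoming transfer entropy. The sketch below is a minimal version assuming a history of one sample and the rounding-to-nearest-integer binning mentioned later in this entry; the enzyme labels and input arrays in the usage comment are hypothetical.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in transfer entropy TE(x -> y) in bits, history length of one sample.

    Series are coarse-grained by rounding to the nearest integer, one simple
    binning choice; longer histories or finer bins need a richer estimator.
    """
    x = np.rint(np.asarray(x)).astype(int)
    y = np.rint(np.asarray(y)).astype(int)
    n = len(y) - 1
    trip = Counter(zip(y[1:], y[:-1], x[:-1]))   # (y_{t+1}, y_t, x_t)
    pair_yx = Counter(zip(y[:-1], x[:-1]))       # (y_t, x_t)
    pair_yy = Counter(zip(y[1:], y[:-1]))        # (y_{t+1}, y_t)
    marg_y = Counter(y[:-1].tolist())            # y_t
    te = 0.0
    for (y1, y0, x0), c in trip.items():
        p_joint = c / n
        p_cond_full = c / pair_yx[(y0, x0)]      # p(y_{t+1} | y_t, x_t)
        p_cond_self = pair_yy[(y1, y0)] / marg_y[y0]  # p(y_{t+1} | y_t)
        te += p_joint * np.log2(p_cond_full / p_cond_self)
    return te

def net_causality_flow(series):
    """Outgoing minus incoming transfer entropy for each named activity series.

    `series` maps an enzyme label to its activity array; positive values mark
    sources of causal flow, negative values mark sinks.
    """
    names = list(series)
    te = {(a, b): transfer_entropy(series[a], series[b])
          for a in names for b in names if a != b}
    return {a: sum(te[a, b] for b in names if b != a)
             - sum(te[b, a] for b in names if b != a)
            for a in names}

# hypothetical usage with three simulated activity traces:
# flows = net_causality_flow({"HK": hk_activity, "PFK": pfk_activity, "PK": pk_activity})
```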
In addition to the glycolytic topological structure characterized by the specific location of enzymes, substrates, products and regulatory metabolites there is an functional structure of information flows which is dynamic and exhibit notable variations of the causal interactions.Another aspect of the glycolitic functionality was observed during the quantification of the Mutual Information, which measures how much the uncertainty about the one enzyme is reduced by knowing the other; we found that the uncertainty for ESecond, the numerical results show that for all analyzed cases the maximum effective connectivity corresponds to the Transfer Entropy from EThird, our analysis allows for a hierarchical classification in terms of what glycolytic enzyme is improving the future prediction of what others, and the results reveals in a quantitative manner that the enzyme EFrom the biochemical point of view the EForth, the dynamics of the glycolytic system changes substantially through the quasi-periodic route to chaos when the amplitude of the input-flux varies. However, the hierarchy obtained by transfer entropy, EWe want to emphasize that Transfer Entropy as a quantitative measure of effective causal connectivity can be a very useful tool in studies of enzymatic processes that operate far from equilibrium conditions. Moreover, many experimental observations have shown that the oscillations in the enzymatic activity seem to represent one of the most striking manifestations of the metabolic dynamic behaviors, of not only qualitative but also quantitative importance in cells is an open system formed by a given set of dissipative enzymatic sets interconnected by substrate fluxes and three classes of regulatory signals: activatory , inhibitory and all-or-nothing type . Certain enzymatic sets may receive an external substrate flux. In the DMN, the emergent output activity for each dissipative enzymatic set can be either oscillatory or steady state with an infinite number of distinct activity regimes. The first model of a Dissipative Metabolic Network was developed in 1999 Transfer Entropy is able to detect the directed exchange of causality flows among the irreversible enzymes which might allow for a rigorous quantification of the effective functional connectivity of many dissipative metabolic processes in both normal and pathological cellular conditions.The TE method applied to our numerical studies of yeast glycolisis shows the emergence of a new kind of dynamical functional structure which is characterized by changing connectivity flows and a metabolic invariant that constrains the activity of the irreversible enzymes.When the metabolite S (glucose) feeds the glycolytic system , it is tThe main instability-generating mechanism in yeast glycolysis is the self-catalytic regulation of the enzyme EIn the determination of the enzymatic kinetics of the enzyme ETo study the kinetics of the dissipative glycolytic system we have considered normalized concentrations; From the dissipative point of view the essential enzymatic stages are those that correspond to the biochemical irreversible processes The initial functions present a simple harmonic oscillation in the following form:The dependent variables http://www.webassign.net/pas/ode/odewb.html.The numerical integration of the system was performed with the package ODE Workbench, which created by Dr. Aguirregabiria is part of the Physics Academic Software. Internally this package uses a Dormand-Prince method of order 5 to integrate differential equations. 
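The activity patterns analysed in this entry come from a three-variable delay-differential model that the authors integrated with the Dormand-Prince order-5 scheme of ODE Workbench. Purely as an illustration of how a delayed system can be advanced in time, the sketch below uses a fixed-step Euler rule with a history buffer for the delayed state; the rate function is a placeholder, since the actual enzymatic rate equations live in the cited model and are not reproduced here, and an adaptive higher-order scheme such as the one the authors used would be preferable in practice.

```python
import numpy as np

def integrate_dde(f, history, tau, dt, n_steps):
    """Fixed-step Euler integration of dx/dt = f(x(t), x(t - tau)).

    `history` is an array of shape (m, dim) sampling the solution on [-tau, 0]
    at spacing dt (so m >= tau/dt + 1).  Minimal illustration only; the study
    itself used a Dormand-Prince order-5 integrator.
    """
    lag = int(round(tau / dt))
    xs = [np.asarray(p, dtype=float) for p in history[-(lag + 1):]]
    for _ in range(n_steps):
        x_now, x_delayed = xs[-1], xs[-1 - lag]
        xs.append(x_now + dt * f(x_now, x_delayed))
    return np.array(xs)

# placeholder rate function; the real enzymatic rate equations are in the cited model
def f_demo(x, x_delayed):
    return -x + 0.5 * x_delayed

# constant history on [-tau, 0], then integrate forward
trajectory = integrate_dde(f_demo, np.ones((51, 1)), tau=5.0, dt=0.1, n_steps=1000)
```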
Further information at This model has been exhaustively analyzed before, revealing a notable richness of emergent temporal structures which included the three main routes to chaos, as well as a multiplicity of stable coexisting states, see for more details TE allows for a quantification of how much the temporal evolution of the activity of one enzyme helps to improve the future prediction of another. The oscillatory patterns of the biochemical metabolites might have information which can be read-out by the TE. Further evidence about oscillatory behaviour in cellular conditions is given in For a convenient derivation, let generally assume that each of the pairs of enzymatic activity is represented by the two time series Rewriting the conditional probabilities as the joint probability divided by its marginal, one obtains an explicit form for the Transfer Entropy:http://www.mathworks.com/matlabcentral/fileexchange/14888.The problem of binning probabilities was solved by rounding at each time the values of the variables to the nearest integer, thus coarse-graining the continuous signal and counting the number of times (frequency) in which a variable is in a certain state. Similar choice were taking by Peng in his Matlab toolbox to compute the Mutual Information, further details in Another important parameter in the calculation of TE is the order We also addressed the statistical significance for TE. This was achieved by comparing the obtained values of TE between two series of enzymatic activity, say X and Y, with the values obtained when considering a random permutation of the future of Y, what we called, the shuffled-future of Y. The values of TE in The formula (6) is fully equivalent to the Mutual Information between directed graphs, the graph of information flows between pairs of enzymes.It is important to remark that the TE from Recently it was proved that the measures of effective connectivity based on Granger Causality MI quantifies how much the knowledge of one variable reduces the entropy or uncertainty of another. Therefore, MI says about how much information the two variables are sharing. Against other measures to compute correlations or statistical dependency, the strongest point of the MI is that it extends functionality to high order statistics For statistical independent Table S1Model parameter values.(PDF)Click here for additional data file.Appendix S1Initial functions domains and phase shift.(PDF)Click here for additional data file.Appendix S2Metabolic oscillatory behavior in cellular conditions.(PDF)Click here for additional data file."} +{"text": "Improvement of utilization of malaria treatment services will depend on provision of treatment services that different population groups of consumers prefer and would want to use. Treatment of malaria in Nigeria is still problematic and this contributes to worsening burden of the disease in the country. Therefore this study explores the socio-economic and geographic differences in consumers' preferences for improved treatment of malaria in Southeast Nigeria and how the results can be used to improve the deployment of malaria treatment services.This study was undertaken in Anambra state, Southeast Nigeria in three rural and three urban areas. A total of 2,250 randomly selected householders were interviewed using a pre tested interviewer administered questionnaire. Preferences were elicited using both a rating scale and ranking of different treatment provision sources by the respondents. 
A socio-economic status (SES) index was used to examine for SES differences, whilst urban-rural comparison was used to examine for geographic differences, in preferences.The most preferred source of provision of malaria treatment services was public hospitals (30.5%), training of mothers (19%) and treatment in Primary healthcare centres (18.1%). Traditional healers (4.8%) and patent medicine dealers (4.2%) were the least preferred strategies for improving malaria treatment. Some of the preferences differed by SES and by a lesser extent, the geographic location of the respondents.Preferences for provision of improved malaria treatment services were influenced by SES and by geographic location. There should be re-invigoration of public facilities for appropriate diagnosis and treatment of malaria, in addition to improving the financial and geographic accessibility of such facilities. Training of mothers should be encouraged but home management will not work if the quality of services of patent medicine dealers and pharmacy shops where drugs for home management are purchased are not improved. Therefore, there is the need for a holistic improvement of malaria treatment services. Malaria is a major cause of mortality and morbidity in Nigeria, and is responsible for 30% of childhood mortality, 11% of maternal mortality and more than 60% of out-patient visits . One of The limited healthcare facilities that exist in rural Nigeria make it difficult to provide the required good quality malaria management services ,5. Poor The preferences of consumers for sources for improving treatment of malaria should be understood and used to improve provision and utilization of appropriate malaria treatment services. Peoples' perception of the ease of accessing the various providers of malaria treatment can potentially determine their health-seeking behavior and by eThere is need to increase the body of knowledge about the link of socio-economic status to consumers' preferences for improved malaria treatment, for proving appropriate and timely treatment of malaria. However, some strategies have been suggested which will be useful for providing timely, appropriate, and potentially equitable management of malaria within communities these include health education to mothers ,14,15; aMany of the recommended interventions for improving appropriate treatment of malaria could be at variance with what the communities really prefer. This will lead to poor utilization of such services thereby negating the original ideas behind their deployment. The ability for successful and sustainable disease control programs depends very much on \"listening to the people\" . TherefoThis study examined the preferences for improved malaria treatment by consumers and disaggregates these preferences by socio-economic status and geographic location of the respondents. The study also compared whether the rating and ranking scales will produce similar results, so as to assure the internal validity of the preferences. The findings are useful for evidence-based policy-making and development of strategies for equitably improving the deployment of demand-responsive malaria treatment services in different geographic areas for different population groups.This study was undertaken in Anambra state; Southeast Nigeria. The state has a high malaria transmission rate all year round and the annual incidence rate is between 10 to 35%. On the basis of discussions with Anambra State Ministry of Health (MOH) officials, 6 sites were chosen for the study. 
These were the three largest urban centres , Nnewi and Onitsha) from each of the three senatorial zones and one rural LGA randomly selected from each senatorial zone .Then, one community from each of the three rural LGAs: Enugwu-Ukwu (Njikoka LGA), Ekwulobia (Aguata LGA) and Okpoko (Ogbaru LGA) was selected using two-stage sampling, by first stratifying the communities according to whether they have a general hospital and then randomly selecting the sites from those that have general hospitalsThe software for population survey in EPI Info 6 was used for sample size calculation. The parameters that were used for sample size calculation were a power of 80%, 95% confidence level and considering 2% as the proportion of people with malaria that used services from the least commonly visited providers and 5 hamlets were randomly selected and the number of households in the selected areas enumerated to produce the household list. In the second stage, households were systematically included at regular intervals down the list, the starting point being chosen at random . InformaA pre-tested interviewer-administered questionnaire was used to obtain information from respondents in the randomly selected households. Local educated residents of each community were recruited and trained as field workers to administer the questionnaire. The respondents were asked to rank and also rate their preferences for improved malaria treatment services. Respondents were given a list of different sources of treatment centers, pharmacy shops, patent medicine dealers, trained mothers, herbalists and community health workers (CHWs). Colorful option cards that depicted the different providers were also shown to the respondents to help them in visualizing them and aid their ranking and rating.The contingent ranking and rating of preferences of consumers for different providers was elicited after the different sources of treatment provision were explained to them. The ranking was done before the rating scale. They were first asked to rank the 3 they most preferred then rate each treatment source from 1 to 10. The ranking allowed respondents to state relative preferences amongst the top 3 and in each rating multiple options could be scored at the same level of preference. The questionnaire was also used to collect data on the general socio-economic and demographic characteristics of the respondents and their households, expenditures on food as well as value of home produced and consumed food, and their asset holdings was used to generate the SES index ,23 that Ethical Clearance: The authors received the approval of the ethics committee of the College of Medicine, University of Nigeria, Enugu Campus before carrying out this study.Table Table Table Table Public hospitals were ranked and rated the most preferred choice for the improvement of malaria treatment services in both the higher socio-economic status (SES) group and rural areas. The fact that majority of the respondents stated that the best way to improve malaria treatment services in their community was to improve the quality in services being rendered by public hospitals, might be because public hospitals are known to have a large array of specialist services. Training of mothers that was also highly preferred will help to improve the treatment of malaria. This is line with the Roll Back Malaria (RBM) target for provision of timely and appropriate treatment of malaria . The useThe SES of the individual influenced their preferences. 
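The SES index referred to above is built from household asset holdings, food expenditure and the value of home-produced food, and respondents are then grouped into quartiles from Q1 (poorest) to Q4 (least poor). The exact construction is only cited in the text, so the sketch below shows one common, assumed approach for indices of this kind: score each household on the first principal component of the standardized indicators and cut the scores at quartiles.

```python
import numpy as np

def ses_quartiles(indicators):
    """Assign households to SES quartiles from an (n_households, n_indicators) matrix.

    Columns would hold asset-ownership dummies and expenditure values; the score
    is the first principal component of the standardized matrix, a common (here
    assumed, not confirmed) construction for such indices.  Returns integers
    1 (poorest) .. 4 (least poor); the sign of the component may need flipping
    so that larger scores correspond to better-off households.
    """
    X = np.asarray(indicators, dtype=float)
    std = X.std(axis=0)
    Z = (X - X.mean(axis=0)) / np.where(std == 0.0, 1.0, std)
    _, _, vt = np.linalg.svd(Z, full_matrices=False)   # first right-singular vector = PC1
    score = Z @ vt[0]
    cuts = np.quantile(score, [0.25, 0.50, 0.75])
    return np.digitize(score, cuts) + 1
```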
The finding that the preference for public hospitals was highest for the least poor SES quartile (Q4) compared to the others may be because the better-off quartiles are usually more educated, and thus have more information about the advantages of having treatment in a public hospital, where qualified personnel are usually present. Herbalists appear to have been viewed as an inferior good, for which preference falls as SES increases, as suggested by their declining preference with increasing SES. The finding that the use of PMDs was ranked higher by the poorer quartiles compared to the least poor quartile for improving malaria treatment is most likely because it is cheaper and easier for the poorer SES groups to visit PMDs. This corroborates the findings of some studies which showed that poorer households were more likely to seek treatment from low-level and informal providers rather than use the hospitals ,24.The geographic differences in consumer preferences for providers, with herbalists ranked higher in rural areas than in urban areas, may be because of greater familiarity with, and higher availability of, herbalists in the rural areas compared to the urban areas. The higher preference for public hospitals in the rural areas compared to the urban areas suggests that those in the rural areas actually need increased availability of public hospitals, which would invariably have more qualified health workers and would most likely provide good-quality malaria treatment services.It was seen that the rating scale and ranking of preferences produced similar findings, which is a sign of convergent validity of the findings . Apart fA limitation of the study was the fact that only two factors, namely socio-economic status and geographic location, were explored, because the study was primarily concerned with issues of socio-economic and geographic equity. However, other factors such as parity, occupation and age may also affect the preferences of consumers for different providers. The exploration of the role of these other factors should be the focus of future studies. Also, the study did not have a qualitative component, which would have helped to generate qualitative data that would either strengthen or refute the quantitative findings. Another possible limitation is that the use of local residents as fieldworkers might bias the results, but regular quality assurance by the investigators ensured that the data collected were of very good quality.All in all, the paper has shown that preferences for provision of improved malaria treatment services were influenced mostly by SES and, to a lesser extent, by the geographic location of the respondents. The reasons for the differences in preferences were not explored in the study, but could be a result of prior knowledge, experiences, costs and availability of the providers. However, it was obvious that people mostly preferred that improved malaria treatment services be delivered through public health facilities such as hospitals and PHC centers. Hence, there should also be re-invigoration of public facilities for appropriate diagnosis and treatment of malaria, in addition to improving the financial and geographic accessibility of such facilities. Nevertheless, there is a role for home management of malaria through the training of mothers. However, home management will not work if the quality of services rendered by the providers from whom drugs for home management are purchased is not improved. 
Therefore, there is the need for the government and development partners to also improve the quality of services of PMDs and pharmacy shops so that there is a holistic improvement of malaria treatment services.The authors declare that they have no competing interests.OO conceived the study. All the authors participated in data collection and analysis. NU wrote the manuscript with input from all the authorsThe pre-publication history for this paper can be accessed here:http://www.biomedcentral.com/1471-2458/10/7/prepubContains a household questionnaire on the preferences of different household for where they sought treatment for malaria treatment. It also contains the socio-demographic detail of each respondent and their socio-economic status based on household owned assets and food expenditure pattern.Click here for file"} +{"text": "The obtained closed form solutions are presented in the form of simple or multiple integrals in terms of Bessel functions and terms with only Bessel functions. The numerical integration is performed and the graphical results are displayed for the involved flow parameters. It is found that the velocity decreases whereas the shear stress increases when the Hartmann number is increased. The solutions corresponding to the Stokes' first problem for hydrodynamic Burgers' fluids are obtained as limiting cases of the present solutions. Similar solutions for Stokes' second problem of hydrodynamic Burgers' fluids and those for Newtonian and Oldroyd-B fluids can also be obtained as limiting cases of these solutions.The present work is concerned with exact solutions of Stokes second problem for magnetohydrodynamics (MHD) flow of a Burgers' fluid. The fluid over a flat plate is assumed to be electrically conducting in the presence of a uniform magnetic field applied in outward transverse direction to the flow. The equations governing the flow are modeled and then solved using the Laplace transform technique. The expressions of velocity field and tangential stress are developed when the relaxation time satisfies the condition \u03b3\u200a=\u200a\u03bb Magnetohydrodynamics is the study of flow of electrically conducting fluids in electric and magnetic fields. This phenomenon is essentially one of the mutual interaction between the fluid velocity and electromagnetic field i.e. the motion of the fluid affects the magnetic field and the magnetic field affects the fluid motion. Basically, magnetohydrodynamics is a research area that involves the study of motion of electrically conducting fluids such as plasma and salt water. MHD flows are found to have influential applications in many natural and man made flows. They are frequently used in industry to heat, pump, stir and levitate liquid metals. Another application for MHD is the magnetohydrodynamic generator in which electrically conducting fluid is used to generate electric power. The flows of an electrically conducting fluid in the presence of a magnetic field have important applications in various areas of technology such as, accelerators centrifugal separation of solid from fluid, purification of crude oils, astrophysical flows, petroleum industry, polymer technology, solar power technology, nuclear engineering applications and other industrial areas The literature on the study of MHD viscous fluid is abundant , the continuity Eq. 
We consider the unsteady incompressible flow of an electrically conducting Burgers' fluid occupying the upper half space of For such type of motions the governing equations are (8) and (9) with the following initial and boundary conditionsMoreover, the natural conditions Introducing the following non-dimensional variablesThe corresponding initial and boundary conditions In order to solve the initial and boundary-value problem Case-I: Solution of the problem forIn order to determine exact solutions for our problem, we substitute In order to find Of course, in view of Eq. Applying the inverse Laplace transform to Eqs. Now using Eqs. Similarly for the sine part of velocity, we get the following expressionIn order to find the dimensionless shear stress, we write For withThe Laplace inverse transforms of Eqs. The convolution product of Eqs. Similarly for sine oscillation we obtainFurthermore, it is noted that the expressions The inverse Laplace transforms of Eqs. Now taking the convolution product of Eqs. In the same way we find thatNow, in order to find the associated expressions for velocity, we directly put andEquivalent expressions for the velocities we can writewhereApplying the inverse Laplace transforms to Eqs.Consequently, introducing Eqs. (63) and (64) into Eq. (62), we obtainFollowing a similar way, we also obtainCase-II: Solution of the problem forLet us now consider the expressions of velocity fields and tangential stresses when Here the second grade equation In order to find the Laplace inverse transform of The Laplace inverse transform of Eq. Moreover, the Laplace inverse transform of Eq. In view of the relations Adopting a similar procedure for the sine oscillation of the boundary, we get an expression similar to Eq.whereThe corresponding expressions for the shear stresses are given byIn this section, for the accuracy of results, we consider a limiting case of our solutions. More exactly, we substitute The objective of the present paper is to study the unsteady MHD flow of a Burgers' fluid over an oscillating plate when the relaxation time satisfies the conditions In this paper, we have studied the MHD flow of Burgers' fluid when the relaxation time satisfies the conditions"} +{"text": "Unfortunately the submitting author omitted the pathologists involved in the case by accident. The following authors have contributed to the scientific content of the published article as follows: Adnan Aali, Faris Kubba, and William Lynn, all processed the original tissue samples, and after much research and collaboration provided us with the overall results. They produced the slides and educated the orthopaedic authors on the subject matter. Andrew Borman provided detailed analysis of the rare eumycetoma and spent copious time examining the tissue material to give us a definite diagnosis. He also oversaw the whole manuscript."} +{"text": "Imaging technology with its advancement in the field of urology is the boon for the patients who require minimally invasive approaches for various kidney disorders. These approaches require a precise knowledge of the normal and variant anatomy of vascular structures at the hilum of the kidney in terms of their pattern of arrangement and division. The present paper describes a bilateral anomalous arrangement of the structures at the renal hilum as well as their peculiar branching pattern which is of clinical and surgical relevance. Multiple branching of the renal vessels was observed in both kidneys due to which the hila were congested. 
The right renal artery immediately after its origin divided into 2 branches. The upper branch represented an aberrant artery whereas the lower branch gave 5 divisions. The left renal artery also divided into 2 branches much before the hilum as anterior and posterior divisions. The anterior branch took an arched course and gave 6 branches. The posterior branch gave 3 terminal branches before entering the renal substance. In addition to anomalous hilar structures, normal architecture of both kidneys was altered and the hilum of the left kidney was found on its anterior surface. Kidneys are a pair of excretory organs situated one on each side of the vertebral column retroperitoneally. Being with a bean shape, it presents thick and rounded superior pole and thin and pointed inferior pole. Renal hilum is deep vertical slit situated in its medial border which lies about 5\u2009cm from the midline opposite the lower border of L1 vertebra. It communicates with the renal sinus within the kidney Various Knowing the anatomy of the ureteropelvic junction of the kidney is essential for understanding urinary tract disorders and various nephron sparing surgical procedures. The present study describes the bilateral anomalous arrangement of the structures at the hilum of kidney which is of clinical and surgical relevance.During dissection of about 60-year-old male cadaver, we observed anomalous positions and branching pattern of the renal vessels causing a congested renal hilum. The variation was bilateral . The hilRenal artery (RA) with its normal origin and course from abdominal aorta divided immediately into 2 branches . The supThe hilum was wide and situated on the anterior surface instead of its normal anatomical situation in the medial border .Left renal artery arose from abdominal aorta, before entering the hilum branched into 2 divisions. Anterior division presented an arched course superficial to the tributaries of renal veins and gave 6 branches. The upper 2 branches of it represented the aberrant arteries and entered the upper pole of the kidney. One of the aberrant arteries before piercing the substance of the kidney gave the right inferior suprarenal artery. The posterior division ran behind the renal pelvis and posterior division of renal vein and gave 3 branches. So altogether, 8 branches pierced the renal hilum and 2 branches pierced the upper pole of the kidney.Anterior and posterior tributaries of renal vein after emerging separately from hilum of the left kidney united to form a single trunk that drained into inferior vena cava. Before the union, the posterior division joined the anterior division in a twisted manner. Anterior division received left testicular vein (LTV). The left suprarenal vein (LSRV) drained into the trunk of the left renal vein. So the arrangement of the structures in the hilum of left kidney from anterior to posterior aspect was anterior division of the renal vein-anterior division of renal artery-renal pelvis-posterior division of renal vein-posterior division of renal artery (A-V-P-V-A).The schematic representation of bilateral renal hilar pattern with distorted shapes of kidneys is shown in Although abnormal shapes, positions, and vascular variations of the kidney have been reported earlier, to our knowledge, there are no reports on bilateral anomalous variations of the renal vessels as presented in this paper. The variations reported here are peculiar and unique. 
Variations in the branching pattern of the renal vessels probably might be the cause for the change in the shape of the kidney from the normal bean shape to the retort flask shape which is seen here. Morishima et al. reported a diamond shaped left kidney situated lower than usual, the hilum of which was widely opened and was facing anteriorly . A discoThe abnormalities in the renal arteries are mainly due to the various developmental positions of the kidney . InsuffiStudy conducted by Kaneko et al. presented 25% of multiple renal arteries which included the polar renal arteries . In the Rouvi\u00e8re et al. observed 29%\u201365% incidences with anomalous course of renal vessels crossing the renal pelvis cause of ureteropelvic obstruction . Obstruc"} +{"text": "Aortic root occupies a central position in the fibrous skeleton with important relations to surrounding structures; therefore it is a challenge for surgeons enlarging the small annulus. We propose a low fidelity simulator to enhance the comprehension of the aortic root and the enlargement procedures.We used self-constructed models to simulate the aortic root. The related structures were constructed. Aortomitral angle of 120 degrees was created in the models. We performed three enlargement procedures 1. Manougian 2. Nicks 3. Nunez. Manougian and Nunez incisions were made through the commissure between the left and the Non coronary cusps (NCC) where Manougian extended through the anterior leaflet of the Mitral Valve (AML) while Nunez stopped proximal to the anterior annulus of the mitral valve. Nick\u2019s incision was carried out through the middle of the NCC and also extending through the AML. Enlargements were carried out with a Dacron patch.Self construction of the Aortic Root and its related structures results in improving of 3D understanding of their relationship. The creation of the aortomitral angle leads to understanding the importance of maintenance of this angle after enlargement. Manougian and Nicks procedures resulted in the opening of the left Atrium (LA) and subsequent repair of the LA roof in addition to closure of the Aortic Annulus. While Nunez was simpler to patch as it did not require additional repairs.Low fidelity simulator is an excellent tool in broadening the knowledge of the familiarizing and performing different types of aortic root enlargement."} +{"text": "The development and application of standardised sets of outcomes to be measured and reported in clinical trials have the potential to increase the efficiency and value of research. One of the most notable of the current outcome sets was developed nearly 20 years ago: the World Health Organisation (WHO) and International League of Associations for Rheumatology (ILAR) core set of outcomes for rheumatoid arthritis clinical trials, originating from the OMERACT (Outcome Measures in Rheumatology) Initiative.The Cochrane Library. Reports of these trials were evaluated to determine whether or not there was a trend in the proportion of studies reporting on the full set of core outcomes over time. Researchers who conducted trials after the publication of the core set were contacted to assess their awareness of it and to collect reasons for non-inclusion of the full core set of outcomes in the study.A review of 350 randomised trials for the treatment of rheumatoid arthritis identified through This review suggests that 60-70% of trialists conducting trials in rheumatoid arthritis are now measuring the rheumatoid arthritis core outcome set. 
90% of trialists that responded said that they would consider using the core outcome set in the design of a new study.This review suggests that a higher percentage of trialists conducting trials in rheumatoid arthritis are now measuring the rheumatoid arthritis core outcome set. Core outcome sets have the potential to improve the evidence base for health care."} +{"text": "Timing of initiating action is often critical for performance of voluntary behaviors. Appropriate times for initiating voluntary actions are considered to be acquired by reinforcement learning. In this learning process, exploration in the time domain is essential. The basal ganglia have been implicated in the initiation of voluntary movements. Recently, we proposed a biologically plausible mechanism for probabilistic timing of action initiation in the basal ganglia, and by computer simulations of the spiking neural network model we demonstrated the probabilistic nature of the action initiation of the model which supports active exploration in the range of several seconds . For fur"} +{"text": "In addition to lacking dystrophin, this mouse also expresses a mutated version of the gene encoding LARGE, another important component of the DGC. The combination of these two mutations provides a model that demonstrates a severe neuromuscular phenotype and accurately mimics the symptoms of human muscular dystrophies. The authors also demonstrate that the model could be used to test the benefit of stem cell therapy to treat DMD and associated disorders. In a second study, Dean Burkin\u2019s group provides insight into the mechanism underlying the beneficial effects of corticosteroid therapy in DMD and reveal potential candidates to target in future therapeutic approaches that might overcome the negative side effects associated with steroid use (1175page ). In their work, they utilise in vitro cultured myogenic cells and also a canine model of DMD that shows strong phenotypic similarities to human DMD patients, demonstrating that the use of multiple complementary approaches and model systems can allow complex disease mechanisms to be elucidated.Two research articles published in the current issue progress our understanding of Duchenne muscular dystrophy (DMD), a debilitating X-linked disorder characterised by progressive muscular weakness and degeneration. The disorder is caused by mutations in the gene encoding dystrophin, an important component of the dystrophin-glycoprotein complex (DGC) that connects intracellular proteins with the extracellular matrix and promotes muscle fibre integrity. In the first of the two studies, Mariz Vainzof and colleagues report a new mouse model for the disease in the form of a"} +{"text": "Cells respond to changes in the internal and external environment by a complex regulatory system whose end-point is the activation of transcription factors controlling the expression of a pool of ad-hoc genes. Recent experiments have shown that certain stimuli may trigger oscillations in the concentration of transcription factors such as NF- Cells are dynamic environments constantly adapting themselves to internal and external stimuli. 
The response to such stimuli is a tightly controlled multi-step process from sensing the stimulus, usually by means of receptors present in the external and internal membrane, transmission of the signal across the cell by a cascade of protein modifications and protein-protein interactions, that activates specific transcription factors which, in turn, regulate the expression of target genes. Fine tuning regulations, e.g. post-translational and post-transcriptional modifications, take place at every step in process providing robustness against noise, specificity to the triggering stimulus and insulation between the different pathways.Recent discoveries have revealed that transcriptional regulation itself is a very complex process and genes are not just activated or deactivated by transcription factors. Rather transcription factors activate a pool of genes network motifs. Network motifs are patterns of connectivity that are present in a much higher frequency than in a network of similar dimensions but whose links between its nodes are generated randomly Some insights have been gained from identifying so called Among the network motifs feed-forward loops have been widely investigated both theoretically and experimentally and many of their properties have been described, such as persistence detection, protecting against transient loss of signals Oscillations have been observed for a long time in the most varied biological systems e.g. cell cycle For transcription factors the functional role of oscillations is not well understood. A number of studies provide supporting evidence that the oscillatory temporal dynamics of nuclear NF-In this work we theoretically and numerically investigate how the transcriptional activity of genes regulated by simple network motifs is affected by oscillations in the concentration of transcription factors. First we study and characterize quantitatively the properties of direct regulation. We then use and extend these results to understand the behavior of indirect two-step regulation and feed-forward loops, driven by oscillating transcription factors with varying period and temporal profile. The specific aim is to analyze how various characteristics of the oscillatory input signal (e.g. frequency and shape) can control differential expression of genes, that is not possible in the case of steady state responses. A better understanding of such mechanisms based on theoretical models can help identifying the functional role of experimentally observed oscillations in the expression of various genes. We focus on the genetic response produced by synthetic oscillatory input signals, where we can directly control the different characteristics of the signal.symmetric case in which In the following we present and analyze differential equation based models that link the temporal dynamics of a transcription factor Analytical solutions for the components of the considered mechanisms have been derived see section We first studied the effects of oscillations on the average expression of a gene same see .The solution e value .Although the specific solution ene see :(3)whereWhen the concentration of the oscillatory transcription factor crosses the threshold of activation back and forth only once in each cycle of oscillation B)C)D)pulse generator promoting the expression of the gene As a representative of the IFFLs we illustrate the behavior of the IFFL-1 with an AND gate. In the IFFL-1 In the presence of an oscillating factor ncreased . 
SimilarThe response to changing the period of the oscillation of A)B)C)D)Oscillations are a widespread phenomenon arising in many biological systems Previous studies have investigated both theoretically and experimentally the properties of regulatory networks in relation to their topology. Alon and coworkers have demonstrated various properties of simple regulatory motifs like negative auto-regulation Under adequate stimulation oscillations in gene expression may involve a large number of transcription factors, propagating across different pathways and occur at different cellular levels In this work we studied the possibility of frequency dependent responses in simple gene regulatory schemes, that could be used in decoding information from time-dependent oscillatory signals, and to generate differential regulation of multiple genes controlled by the temporal dynamics of the same transcription factor.In the case of direct regulation the key factor regulating the gene expression is the fraction of time when the transcription factor concentration is above the activation threshold of a certain gene. As a consequence, modifying the frequency of oscillation cannot modulate the expression of a gene. Varying the amplitude of oscillation though, may cause changes in the duration of the activity of transcription factors and could regulate the average level. Such a mechanism might be ideal to regulate those genes whose average level of expression in cells and tissues should not change when the cellular environment is perturbed by a stimulus that gives rise to oscillations.For the two-step regulation the frequency of oscillation is capable of switching on or off the expression of the target genes. Increasing the frequency of oscillation of the regulating transcription factor causes the intermediate component to oscillate closer to its average value Thus increasing the complexity of the gene regulatory network provides the cell with more refined mechanisms for decoding information from the temporal dynamics, that is not possible in the case of steady-state responses with no temporal dynamics. We have identified distinct types of response behaviors depending on the parameters, for example: on/off switching of the gene expression in a frequency dependent manner, maintenance of a constant average expression, frequency dependent switching of the expression level between two distinct regimes. Moreover we have shown that, as Gene expression mediated by two-step regulation and FFLs could be advantageous in driving cell fate in those situations for which the transcription factor can regulate opposite cellular processes. NF-Future extensions of this work could consider how combining together several of these regulatory mechanisms affects the ability to decode information from the temporal dynamics of transcription factors in transcriptional networks with more complex topology. Another interesting possibility would be to consider gene regulatory motifs controlled by oscillatory input signals that depend on multiple stimuli, to explore how multiple information can be transmitted and recovered from the temporal dynamics of a single transcription factor. The inputs influencing the dynamics of an oscillatory transcription factor typically would modify not just the frequency but also other characteristics of the signals, e.g. average expression rate, amplitude of the oscillations etc. 
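The frequency-dependent contrast between a direct target and a two-step cascade described above can be reproduced with a toy simulation. The sketch below is not the authors' model (their equations, thresholds and parameter values are not reproduced in this text); it is a generic cascade X -> Y -> G2 alongside a direct target G1, driven by a square-wave transcription factor and integrated with a simple Euler scheme. All variable names and numerical values are illustrative assumptions.

# Toy sketch of threshold activation driven by a square-wave transcription factor
# X(t). G1 is a direct target of X; in the two-step cascade X drives a slow
# intermediate Y, and Y drives G2. All rate constants, thresholds and the input
# waveform are illustrative assumptions, not the published model's parameters.
import numpy as np

def simulate(period, duty=0.5, amplitude=2.0, t_end=4000.0, dt=0.01):
    n = int(t_end / dt)
    t = np.arange(n) * dt
    g1 = np.zeros(n); y = np.zeros(n); g2 = np.zeros(n)
    for i in range(1, n):
        x = amplitude if (t[i - 1] % period) < duty * period else 0.0
        act_x = 1.0 if x > 1.0 else 0.0            # X above its activation threshold?
        act_y = 1.0 if y[i - 1] > 14.0 else 0.0    # Y above G2's activation threshold?
        g1[i] = g1[i - 1] + dt * (act_x - 1.0 * g1[i - 1])   # direct target, fast turnover
        y[i]  = y[i - 1]  + dt * (act_x - 0.05 * y[i - 1])   # slow intermediate (tau = 20)
        g2[i] = g2[i - 1] + dt * (act_y - 1.0 * g2[i - 1])   # two-step target
    keep = t > t_end / 2                            # discard the initial transient
    return g1[keep].mean(), g2[keep].mean()

for period in (10.0, 1000.0):                       # fast vs. slow oscillation of X
    mean_g1, mean_g2 = simulate(period)
    print(f"period={period:6.0f}  <G1>={mean_g1:.2f}  <G2>={mean_g2:.2f}")
# Expected pattern with these assumed parameters: <G1> is ~0.5 for both periods,
# because it tracks only the fraction of time X spends above threshold, while <G2>
# is near zero for the fast input and substantially induced for the slow one: when
# X oscillates quickly, the slow intermediate Y hovers near its mean, which here
# lies below G2's activation threshold.

The sketch is intended only to make the qualitative argument concrete; it does not stand in for the analytical solutions or the parameter scans reported by the authors.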
Therefore the frequency dependent responses that we described may be combined with or dominated by other changes occurring simultaneously.While in our models we focussed on the time-averaged response behavior of a stationary oscillating system, in many cases transient signaling and the timing of the gene expression is also important. Relevant information may also be encoded in the temporal profile of transient stimuli, that could lead to selective transient expression of different genes. Simple gene regulatory networks can also play a role in decoding such information as it was shown for example in the context of genes involved in cell cycle regulation Frequency dependent expression of genes regulated by NF-Supporting Information S1This supporting material contains the mathematical derivation of all the formulas presented in the main text. The numerical results obtained simulating Network Motifs with an OR gate stimulated by oscillatory transcription factors are shown and discussed extensively using the same framework introduced in the main text.(PDF)Click here for additional data file."} +{"text": "By pulling together results from five individual studies, we aim to enable the audience to explore how the overall narrative enables additional insight in to our understanding of development, effectiveness, and implementation of online interventions. Methods and results from studies will be explained and contrasted. The theoretical underpinnings of the development of Unitcheck, an intervention initially developed for UK university students, will be explored briefly before outlining results of three randomised control trials. The results of a study employing qualitative think-aloud methodology will then be presented. Evidence will be presented of the ability of Unitcheck to prompt users to actively engage with information presented, relating information to their own experience; resulting in an evaluation of their own drinking behaviour. Evidence will be presented of user's immediate cognitive and emotional reaction when comparing their own drinking with that of others. Next, we discuss how experiences with Unitcheck were applied to inform development of ChangeDrinking, an online screening and brief advice intervention for non-help-seeking individuals who have recently been hospital inpatients. The results of a study employing qualitative think-aloud methodology to understand how those identified as having substance misuse problems engage with ChangeDrinking will then be presented. The presentation will consider how context of implementation shaped development of each intervention tool and subsequently impacted the evaluation process. The synthesis of the results of the think-aloud studies with randomised control trial results enables insight in to potential mechanisms for behaviour change. This paper will consider the implication of results for our understanding of the use of online brief personalized feedback interventions for modifying alcohol use.This presentation outlines the development and evaluation of two online brief personalized feedback interventions for alcohol use ("} +{"text": "There is an exceedingly small group of microorganisms that are considered pathogenic in humans relative to the microbial biota\u2014with literally just a few bacterial and viral agents that have the potential to cause disease when transmitted in aerosol form. This group of pathogens has adapted to circumvent the rigors of airborne transport to enter the respiratory system and induce infection. 
The changes that take place in the microbial composition during this process, including expulsion of bioaerosols from an infected host, is a multifaceted process that ultimately effects the underlying functionality of the organism and the induction of disease in the host target. Similarly, the nature of the aerosol source can be determinative of the size distribution of the pathogenic aerosols and thus affect the pattern of initial deposition in the respiratory system and the tissue/cell types most impacted by the pathogen.in vitro techniques and animal models for the purposes of further defining cellular mechanisms in aerosol-initiated disease pathogenesis. The resulting disease models have shown utility in medical product evaluations targeted to protecting or ameliorating the effects from an infectious mucosal/aerosol challenge. These aerosol disease models have been and continue to be an especially important consideration in biodefense-related research studies.The complexity of the mucosal response to infection within the respiratory system is affected not only by the number of infectious particles deposited, but the relative integrity of the microbial constituents contained in the aerosol particles. Conceptualizing and providing a description of the constitutive process of airborne disease transmission have given rise to recent research efforts using both This special issue brings together 11 articles that are diverse in content but are unified under a central theme of infectious disease aerobiology. This includes focused reviews that survey the current paradigm associated with airborne viral transmission of disease, as well as the benefits of directed alternative (aerosol) treatment of aerosol-acquired disease. The vulnerability of microorganisms in airborne environmental transport is also detailed in 2 of the 11 manuscripts contained in this special issue. Initial host response associated with exposure to airborne pathogens constitutes the majority of the articles included, with special emphasis on experimental infection as well as particle size-dependent disease pathogenesis.The first article Milton, evaluateFrancisella tularensis. Microbial aerosol efficiencies can be negatively affected by prevailing lower relative humidity as well as the selection of growth media (MHb vs. BHI) used in bacterial propagation. Dabisch et al. demonstrated differences in collection efficiency which can affect calculation of the delivered dose. These studies demonstrate the importance of microbiological and aerosol characterization when developing experimental animal models for respiratory infection.The next two articles present experimental data associated with microbial survival as it relates to the generation of laboratory-generated infectious bioaerosols for respiratory infection of animals. The first . The articles in this Research Topic are an excellent cross section of active research and topical reviews which will inform the reader of current technologies, gaps in knowledge, and opportunities for future research."} +{"text": "Total urogenital sinus mobilization has been applied to the surgical correction of virilized females and has mostly replaced older techniques. Concerns have been raised about the effect of this operation on urinary continence. Here we review the literature on this topic since the description of the technique 15\u2009years ago. Technical aspects and correct nomenclature are discussed. 
We emphasize that the term \u201ctotal\u201d refers to an en-bloc dissection and not to the extent of the proximal dissection. No cases of urinary incontinence have been reported following this operation. It is yet too early to evaluate results regarding sexual function but it is likely that the use of a posterior skin flap to augment the introitus will minimize the development of introital stenosis. Persistence of the urogenital sinus (UGS) is a common feature of a variety of congenital anomalies, among them: (1) XX DSD, namely females exposed to androgens in fetal life; (2) as an isolated malformation unrelated to masculinization or rectal malformation and; (3) in persistence of the cloaca. Although there are important anatomical and functional differences among the different types of UGS the surgical techniques used for its correction bear similarities.The goal of the surgical correction of UGS is the creation of separate openings in the vulva for the urethra and the vagina with preservation of the function of both organs. Early surgical techniques involved complete or partial separation of the urethra from the vagina, at times preserving the sinus as the urethra and doing either a vaginal \u201cpull through\u201d or creatIn 1997, Pe\u00f1a published a new technique for repair of the UGS in girls with persistence of the cloaca which avoided the separation of the urethra from the vagina. The technique involved the en-bloc mobilization of both structures to bring them down to the perineum creating separate openings. This maneuver saved time and reduced the number of complications in the eight children described and was called total urogenital mobilization (TUM) . By neceSubsequently, some authors expressed concerns that TUM may compromise urinary continence and bladder function and a prMany reports on the outcome of TUM include patients with a variety of diagnoses which tends to make interpretation of results difficult.Here we review the published results of TUM in patients with XX DSD without associated anorectal malformations.Pubmed search under the headings: \u201cUGS, vaginoplasty, feminizing genitoplasty, and TUM\u201d was conducted. After collecting all abstracts, full length articles of those which included UGS mobilization in DSD patients were reviewed. Results regarding number of patients, diagnosis, urinary continence, and potential vaginal stenosis were recorded.The persistence of the UGS in cases of genetic females exposed to androgens in the first few weeks of embryogenesis is almost universal. In virilized XX individuals the urethra and vagina share a common opening but the urethra and vagina develop normally proximal to the persistent UGS. Therefore, the urethra proximal to the confluence is of normal length . The goaFollowing the description of the technique , Kryger More recently, Palmer et al. reported on the continence status of a population with mixed diagnoses which included CAH, cloaca, and primary UGS . Among tThe results of the literature search are presented in Table We analyzed the results of TUM performed to correct UGS in virilized females. 
Although many series in the literature include cases of CAH, cloaca and primary UGS, and other forms of DSD, it should be recognized that the variable anatomy and possible associated neurological anomalies may have a profound influence in urinary continence independent from the operation performed .The application of TUM to cases of virilized females has largely replaced the use of other procedures that required the separation of the urethra from the vagina. The advantages of this technique are clear; avoiding separation of the urethra from the vagina reduces surgical time and complications and may avoid potential damage to urethral innervation. Many authors mention the length of the UGS as a criterion to select the surgical technique. In fact, the length of the sinus may be irrelevant since much of its course is in the perineum, parallel to the surface. More important to predict the easy or complexity of the surgery is the distance from the confluence of the vagina and urethra to the perineal skin Figure . We routPosteriorly, the dissection should always be carried to the peritoneal reflexion to facilitate the descent of the posterior vaginal wall and the suture of the \u03a9 flap. It is important to keep the posterior dissection close to the vagina in order to avoid disruption of the rectovaginal fascia which seems important for normal rectal function .We operate all those cases perineally in the dorsal lithotomy position. We see no advantage in the prone position which makes the vulvoplasty difficult. Furthermore, in cases of primary XX DSD, the use of the transanorectal approach is overly aggressive, hinders the use of a posterior flap and increases the potential for serious complications , 23.In short, data published since the description of TUM fail to demonstrate any adverse effect of the operation on voiding or continence.Interestingly, Davies et al. investigated 19 adult women with CAH of whom 16 had had corrective surgery in childhood. They found an incidence of urge incontinence of 68% and stress urinary incontinence (SUI) of 47%, significantly greater than in controls . It is nMost authors have combined TUM with a posterior flap of perineal skin to augment the circumference of the suture line and prevent introital stenosis , 25. TheGosalbez et al. proposedPasserini-Glazel also avoided separation of the urethra from the vagina but instead of mobilizing the UGS sinus en-bloc his technique involved anastomosing the inverted distal UGS to the distal end of the vagina left in its original position . Lesma ePodesta and Urcullo reportedIt remains to be determined whether the rate of introital stenosis with TUM and a posterior flap will be lower than that reported Bailez et al. and moreWe recommend the parents of our patients that an examination under anesthesia should be conducted at the onset of puberty to evaluate the vaginal introitus.These girls need follow-up till adulthood to evaluate the adequacy of the vagina.No case of urinary incontinence has been documented after distal or proximal TUM performed as a primary operation. Although some reports suggest a low incidence of vaginal introital stenosis, no long term results with this technique have yet been published. 
Concerns about the use of TUM for the correction of virilized genetic females appear unfounded.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Capacity problems and political pressures have led to a rapid change in the organization of primary care from mono disciplinary small business to complex inter-organizational relationships. It is assumed that inter-organizational collaboration is the driving force to achieve integrated (primary) care. Despite the importance of collaboration and integration of services in primary care, there is no unambiguous definition for both concepts. The purpose of this study is to examine and link the conceptualisation and validation of the terms inter-organizational collaboration and integrated primary care using a theoretical framework.The theoretical framework is based on the complex collaboration process of negotiation among multiple stakeholder groups in primary care.A literature review of health sciences and business databases, and targeted grey literature sources. Based on the literature review we operationalized the constructs of inter-organizational collaboration and integrated primary care in a theoretical framework. The framework is being validated in an explorative study of 80 primary care projects in the Netherlands.Integrated primary care is considered as a multidimensional construct based on a continuum of integration, extending from segregation to integration. The synthesis of the current theories and concepts of inter-organizational collaboration is insufficient to deal with the complexity of collaborative issues in primary care. One coherent and integrated theoretical framework was found that could make the complex collaboration process in primary care transparent. This study presented theoretical framework is a first step to understand the patterns of successful collaboration and integration in primary care services. These patterns can give insights in the organization forms needed to create a good working integrated (primary) care system that fits the local needs of a population. Preliminary data of the patterns of collaboration and integration will be presented."} +{"text": "The Asthma and GERD are common medical affections and recent studies show that often they coexist (2). The GERD paper as leading factor of the asthma is an important subject, In addition, also exists the possibility that the asthma can precipitate the ERGE. 
This investigation aimed to demonstrate the interrelation between gastroesophageal reflux disease and bronchial asthma.Gastroesophageal reflux disease (GERD) is not associated with increased acid secretion, but with failure of the antireflux barrier and basal hypotonia of the lower esophageal sphincter, which can be caused by medicines such as xanthines, Non-Steroidal Anti-Inflammatory Drugs (NSAIDs), alcohol or anticholinergics By means of a descriptive, cross-sectional study of patients with bronchial asthma from the municipality of Regla seen in the allergy clinic from April to June 2011, a survey was administered to all asthmatic patients older than 20 years for clinical confirmation of gastroesophageal reflux disease.The survey demonstrated that 63.8% of the asthmatic patients presented GERD, more frequently in females (64%) and in the 40 to 59 year age group; the most significant digestive symptom was postprandial fullness (94%), and the most significant extra-digestive symptom was cough (81%). Abundant meals were the main risk factor for GERD (69%).Most of the asthmatic patients suffer from reflux, demonstrating the close relationship between these pathologies. There is a lack of awareness that asthma symptoms can be triggered by GERD and that GERD itself is aggravated by xanthines, NSAIDs, and abundant meals."} +{"text": "We address the problem of non-parametric estimation of the recently proposed measures of statistical dispersion of positive continuous random variables. The measures are based on the concepts of differential entropy and Fisher information and describe the \"spread\" or \"variability\" of the random variable from a different point of view than the ubiquitously used concept of standard deviation. The maximum penalized likelihood (MPL) estimation ,2 of theFrequently, the dispersion of measured data needs to be described. Although standard deviation is used ubiquitously for the quantification of variability, such an approach has limitations. For example, highly variable data might not be random at all if they consist only of \"extremely small\" and \"extremely large\" measurements. Although the probability density function, or its estimate, provides a complete view, quantitative methods are needed in order to compare different models or experimental results. In a series of recent studies, some alternative measures of dispersion were proposed. However, the estimation of these coefficients from data is more problematic. We present the obtained estimates of the dispersion coefficients based on the MPL method, which extends our previous study ."} +{"text": "Mitochondria are the bioenergetic centers of eukaryotic cells that produce ATP through oxidative phosphorylation. A byproduct of oxidative phosphorylation is the generation of reactive oxygen species (ROS). 
Cancer cells exhibit high basal levels of oxidative stress due to activation of oncogenes, loss of tumor suppressors, and the effects of the tumor microenvironment Several studies have reported that a high frequency of clonal and, therefore, selected mitochondrial mutations are present in a variety of human tumors PLoS Genetics, Ericson and colleagues Cancer arises from a mutator phenotype and it has been established that the random mutation rate of the nuclear DNA of tumors is quite high Seven decades ago, Warburg discovered that mitochondria in cancer cells metabolize glucose by aerobic glycolysis and suggested that this was a result of impaired mitochondrial function that contributes to tumorigenesis Other explanations for the low frequency of random mitochondrial mutations in colon tumors include highly efficient DNA repair (for excellent review see HIF-1 and MYC, will emerge as new drug targets and molecular markers of prognosis and responses to therapy. In addition, targeting mtDNA repair proteins could serve as a potential alternative approach to kill cancer cells. The work of the Ericson et al. What are the implications of the findings of Ericson et al."} +{"text": "Culex spp. mosquitoes and birds as reservoirs host whereas humans and horses are dead-end hosts. Clinical symptoms of WN human infections range from asymptomatic or mild influenza like disease to severe neurological and meningoencephalitis syndromes.West Nile virus (WNV) is a flavivirus (Flaviviridae family) and its transmission cycle involves West Nile is a neglected emerging disease with major breakthrough in 1999 with the introduction of WN virus (WNV) in New York City and the subsequent spread to whole northern America over the last decade causing massive human and animals infection leading to some fatal cases. In Eastern Europe, circulation of WNV with recurrent emergences impacting human and animal health since 1996 is similar to the situation in the USA. Strikingly, in Africa WN appears to have a minor effect despite regular isolations from mosquitoes and vertebrates hosts. In addition, WNV exhibited a great diversity with eight different lineages among which only one (lineage 1) is found worldwide and 4 are present in Africa.Culex quinquefasciatus showed significant differences between strains of various lineages tested for infections, dissemination and transmission rates. Indeed the different strains can be classified as low, intermediary and high infection profile. Analysis of the transmission patterns with sequences of the strains suggest that glysoylation of the envelope protein of WNV, a key player in the virus entry in the cell, may play an important role. The implications of our findings are discussed in the context of global emergence of WN.In order to understand factors underlying the different patterns of transmission and processes involved in the emergence of WN in the different contexts, genetic diversity, phylodynamics and vectorial competence of WNV have been studied in Africa. Phylogenetic analysis based on partial and complete genome suggests an interconnection of zoonotic amplifications in Africa with emergence in Europe as well as replacement between lineages over time. Vectorial competence of lineages circulating in Africa for a domestic mosquitoes"} +{"text": "Many aspects of a trial may be incompletely reported, including the outcomes collected and the full set of analyses undertaken. 
Selective reporting bias occurs when the inclusion of outcomes or analyses in the report is based on the results. We review and summarise the empirical evidence from studies that have assessed the selective reporting of outcomes and analyses and provide guidance to trialists to help reduce this problem.outcomes and ii) evidence for selective reporting of analyses. An international collaboration of experts will be brought together in July to discuss the available evidence alongside current reporting guidance for trials. Recommendations are being produced with regards to raising the awareness and safeguarding trial publications against selective reporting.Two systematic reviews of studies that have examined randomised trials for i) evidence for publication bias or selective reporting of From twenty studies, of which four were newly identified, the evidence in the first systematic review demonstrated an association between statistically significant results and publication. Statistically significant outcomes had a higher odds of being fully reported compared to non-significant outcomes (range of odds ratios: 2.2 to 4.7). A further seventeen studies consider aspects of selective reporting such as statistical analyses; subgroup analyses and composite outcomes.This work highlights the evidence of selective reporting and demonstrates the importance of pre-specifying outcomes, analyses and reporting strategies during the planning and design of a clinical trial, for the purposes of minimising bias when findings are reported."} +{"text": "The case presented by the authors gives the opportunity to discuss the medico-legal issues related to lack of prevention of falls in elderly hospitalized patients.A 101 year old Caucasian female was admitted to a surgery division for evaluation of abdominal pain of uncertain origin. During hospitalization, after bilateral bed rails were raised, she fell and reported a femoral fracture. Before surgical treatment of the fracture, scheduled for the day after injury, the patient reported a slight reduction in hemoglobin. She received blood transfusion but her general condition suddenly worsened; heart failure was observed and pulseless electrical activity was documented. The patient died 1 day after the fall. Patient relatives requested a judicial evaluation of the case.The case was studied with a methodological approach based on the following steps: 1) examination of clinical records; 2) autopsy; 3) evaluation of clinicians\u2019 behavior, in the light of necroscopic findings and a review of the literature.The case shows that an accurate evaluation of clinical and environmental risk factors should be always performed at the moment of admission also in surgery divisions. A multidisciplinary approach is always recommended also with the involvement of the family members. In some cases, as in this one a fall of the patient is expectable but not always avoidable. Physical restraint use should be avoided when not necessary and used only if there are no practical alternatives. A fall , an evenIt is estimated that about 14% of falls during hospitalization may be classified as accidental, due to environmental factors (e.g. slipping on the wet floor), 8% as physiological and unpredictable, due to sudden mutation of patient physical condition , and 78% of falls as physiological but predictable due to identifiable risk factors [Risk patient identification is the first preventive step that should be considered. 
The systematic assessment of clinical and environmental risk factors must be implemented at admission and during hospitalization especially in case of modification of patient condition.The assessment of the risk of falling may be done through a systemic evaluation on individual and environmental risk factors -7 integrOnce you have identified risk factors it is essential that providers, patients and family/caregivers acquire the awareness of the risk of falling and work together in an integrated and consistent, careful application of multifactorial strategies.Interventions after risk assessment may reduce but not eliminate the risk of falling. A fall of a patient during hospitalization may have not only clinical consequences for the patient but also judicial consequences for the clinician.The case presented by the authors gives the opportunity to discuss the medico-legal issues related to falls in elderly hospitalized patients. A systematic review on the thematic of falls is beyond the purpose of this paper and we refer to the numerous national and inteThe case was studied with a methodological approach based on the following steps: 1) examination of clinical records; 2) autopsy; 3) evaluation of physicians\u2019 behavior, in the light of necroscopic findings and a review of the literature.A 101 years old Caucasian female was admitted to a surgery division for evaluation of abdominal pain of uncertain origin . She wasThe initial cognitive impairment associated to confusional states, the walking difficulties, the lack of personnel or family members capable of monitoring the patient during the night induced clinicians to use physical restraint and to ask nurses a more frequent evaluation of the patient. Bilateral bed rails were raised. During the morning of the next day of hospitalization, the nurse heard a noise coming from the room of the patient finding her on the floor. Immediate examination and instrumental evaluation allowed diagnosing a left femoral fracture and a slight concussion. TC scan excluded skull fractures or intracranial hemorrhages. A surgical treatment intervention was scheduled for the next day but during the afternoon the clinical condition worsened. Laboratory analysis revealed slight low levels of hemoglobin and a worsening of neurological condition. The physician decided to transfuse the patient with two units of packed red blood cells. During the night a deterioration of the overall clinical picture was observed with dyspnea and bradycardia not responding to the treatments. Pulseless electrical activity (PEA) was documented; cardiopulmonary resuscitation measures were performed with no result. She was declared dead in the morning of the day after the fall.External examination showed at palpation signs of fracture of thorax and on the left lower hip. 
Internal examination revealed a picture of cerebral oedema, bilateral rib fractures due to resuscitation, myocardial fibrotic areas, congestion and emphysema of the lungs, polyvisceral congestion and atherosclerosis involving multiple vascular districts.Histological examination confirmed the macroscopic findings and demonstrated the presence of interstitial myocardial fibrosis in the absence of signs of recent myocardial infarction; significant areas of emphysema and congestion of the lung and areas of glomerulosclerosis were also evident.The cause of death was identified as cardio-circulatory arrest with cardiogenic shock and pulseless electrical activity, in the setting of a recent left femoral fracture, chronic ischemic-hypertensive heart disease, lung emphysema, chronic macrocytic anemia and an old right femoral fracture.In Western countries, the aging of the population leads to an increase in the number of elderly patients admitted to hospitals or nursing homes. The growing number of medico-legal cases concerning professional liability may reprThe methodological approach to the evaluation of professional liability cases is based on the definition of the damage to the patient (diagnosis of a disease or identification of the cause of death), on the analysis of the conduct of the clinicians and on verification of the causal relation between the damage to the patient and the possible error of the clinicians ,15.The identification of the cause of death requires a necroscopic examination . AutoptiIn our case the claim of malpractice was promoted by the relatives of the deceased. The prosecution was based on the hypothesis of a lack of fall prevention and of an incorrect use of physical restraint.According to the methodological approach, clinical documentation analysis and autopsy were performed. The cause of death was linked beyond doubt to the fall, which, acting on the patient's baseline health condition, triggered a lethal pathophysiological evolution. The patient's general clinical condition was in precarious balance - she was very old and suffering from multiple diseases - and bleeding from the fracture was considered a sufficient factor to upset this precarious equilibrium, causing heart failure that rapidly resulted in the death of the patient.With reference to the malpractice claim, the conduct of the clinicians was considered correct. The initial cognitive impairment associated with transient confusional states, urinary incontinence and walking difficulties were considered by the clinicians as risk factors for falls. The clinicians' awareness of the risk of falling is demonstrated by their use of physical restraints. The clinicians properly considered the risk factors, providing guidance to the nursing staff on the proper management of the patient's postural changes and on the assistance to be provided during those changes. Greater oversight of the patient, with more frequent rounds of inspection, was established. The use of side rails during the night was considered a necessary measure to limit the patient's movement and nocturnal wandering. This choice was widely discussed during the trial. According to the prosecution, physical restraints such as bilateral bed rails, belts, and fixed tables in a chair should always be considered inadequate or wrong. Physical restraints are quite frequently used in geriatric divisions in Italy despite some evidence for their lack of effectiveness and safety -21. 
In UThe choice of using physical restraint was considered a forced choice in our case due to the high risk of fall and the lack of sufficient personnel to monitor the patient during the day.The case shows that an accurate evaluation of clinical and environmental risk factors should be always performed at the moment of admission of the patient also in a surgery division. A multidisciplinary and multicomponent intervention is always recommended also with the involvement of the family members. In some cases a fall of the patient is expectable but not always avoidable. In certain situations, the lack of adequate number of personnel, or of family members who may look after the patient, does not exclude the risk of fall. Physical restraint use should be avoided when not necessary and used only if there are no practical alternatives.CT performed the autopsy, contributed to the analysis and interpretation of the data, to the discussion of medico-legal issues and to the writing of the paper. FC contributed to the literature review, to the drafting and reviewing of the paper. BM, BA and CM gave their contribution to the analysis of the conduct of clinicians and contributed to the writing of the paper. All the authors read and approved the final manuscript.The authors declare that they have no competing interests."} +{"text": "Children need physical activity and generally do this through the aspect of play. Active play in the form of organized sports can appear to be a concern for parents. Clinicians should have a general physiological background on the effects of exercise on developing epiphyseal growth plates of bone. The purpose of this review is to present an overview of the effects of physical activity on the developing epiphyseal growth plates of children.A National Library of Medicine (Pubmed) search was initiated using the keywords and combinations of keywords \"growth plate\", \"epiphyseal plate\", \"child\", \"exercise\", and \"physical activity.\"Bone is a dynamic tissue with a balance of osteoblast and osteoclast formation. The normal functioning of the epiphyseal growth plate is an important clinical aspect. Much of the physiology of the epiphyseal growth plate in response to exercise includes the important mechanical component. Growth hormone, insulin-like growth factor I, glucocorticoid, thyroid hormone, estrogen, androgen, vitamin D, and leptin are seen as key physiological factors. While there is a need for children to participate in physical activity, clinical consideration needs to be given to how the epiphyseal growth plate functions.Mechanical loading of the bone is important for epiphyseal plate physiology. Exercise has a healthy function on the normal growth of this important biomechanical feature. Clinically, over-exertion in the form of increased load bearing on the epiphyseal growth plate creates an ideal injury. There is a paucity of research on inactivity on the epiphyseal growth plate resulting in stress deprivation. Further research should take into consideration what lack of exercise and lessened mechanical load bearing has on the function of the epiphyseal growth plate.Child; Physical activity; Epiphyseal growth plates; Bone; Exercise; Mechanical loading Bone has been described as a dynamic and highly interactive complex of many cell and tissue types . The epiThe purpose of this review is to present an overview of the effects of physical activity on the developing epiphyseal growth plates of children. 
A discussion of the physiological basis of epiphyseal growth plates will be included. Practitioners with a familiarity of the dynamic changes that can occur with the epiphyseal plate in normal children can ultimately lead to recognition of pathologic states , 8.As noted previously, skeletal growth at the epiphyseal plate is an active and dynamic process . The epiLongitudinal bone growth is primarily achieved through the action of chondrocytes in the proliferative and hypertrophic zones of the growth plate . LongituWithin the epiphyseal growth plate, chondrocyte proliferation, hypertrophy, and cartilage matrix secretion result in chondrogenesis . EndochoThe regulation of linear bone growth in children and adolescents comprises a complex interaction of hormones and growth factors . This prIn particular, for the majority of skeletal elements to develop and grow, the process of endochondral ossification requires a constantly moving interface between cartilage, invading blood vessels, and bone . An adeqThe balance between proliferation and differentiation in bone is considered to be a crucial step. It is a crucial regulatory step controlled by various growth hormones acting in the endocrine pathways . Growth Research with laboratory animals has provided most of the current information regarding estrogen's influence on the growth process of long bones, on the maintenance of cancellous bone mass, and on the architectural and cellular changes in bone . EstrogeAs previously noted, longitudinal growth of the skeleton is a result of endochondral ossification that occurs at the epiphyseal growth plate. Through the sequential process of cell proliferation, extracellular matrix synthesis, cellular hypertrophy, matrix mineralization, vascular invasion, and eventually apoptosis, cartilage continually is being replaced by bone as length increases . ParfittComprehension of the biomechanical aspects of bone allows one to conceptualize the physiological processes associated with exercise and physical activity on the epiphyseal growth plate. The mechanical influence on bone directly applies to the normal physiological functioning of bone. Longitudinal growth is controlled by local mechanical factors in the form of a feedback mechanism which exists to ensure that bone growth proceeds in the direction of the predominant mechanical forces . The indIn addition to the vital function of growth factors, it is known that mechanical forces stimulate the synthesis of extracellular proteins in vitro and in vivo and can affect the tissue's overall structure . AccordiOver the past decade, there has been a surge in the number of sports opportunities available to young athletes . Beyond The effects of exercise on the molecular nature of secreted human growth hormone (GH) or its biological activity are not well understood . Yet it Vascularization of the epiphyseal growth plate region represents a key mechanism for the coupling of two fundamental processes determining the rate of bone growth: chondrogenesis (cartilage production) and osteogenesis (bone formation) . Within A potential problem with physical activity and exercise on the epiphyseal plates is over-activity. Intuitively, it is the extent that over-physical activity may have on the growth plate resulting in injury. A better appreciation of how epiphyseal plate physiology works is seen in the response to trauma. Approximately 25% of adolescents have at least one recreational injury which is mostly minor reflecting only soft tissue trauma and abrasions of the skin . 
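The local mechanical feedback described above (growth slows under sustained compression and accelerates when loading is reduced, within physiological limits) can be caricatured with a one-line linear relation in the spirit of the Hueter-Volkmann principle. The baseline rate and sensitivity constant below are arbitrary assumptions chosen for demonstration, not measured values.

```python
# Illustrative sketch of load-modulated growth-plate elongation, NOT a validated
# physiological model; both parameters are arbitrary assumptions.

def daily_growth(baseline_um_per_day: float, stress_mpa: float,
                 sensitivity_um_per_day_per_mpa: float = 20.0) -> float:
    """Growth slows under sustained compression and speeds up when unloaded."""
    rate = baseline_um_per_day - sensitivity_um_per_day_per_mpa * stress_mpa
    return max(rate, 0.0)  # growth cannot become negative in this toy model

for stress in (-0.1, 0.0, 0.2, 0.5):  # negative = distraction, positive = compression (MPa)
    print(f"stress {stress:+.1f} MPa -> {daily_growth(40.0, stress):.1f} um/day")
```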
HoweverMost injuries in children's sports and activities are minor and self-limiting thus suggestive that children and youth sports are safe . Unlike The genetic potential for bone accumulation can be frustrated by insufficient calcium intake, disruption of the calendar of puberty and inadequate physical activity . While mThe clinical pathophysiology of excessive activity on the epiphyseal growth plates resulting in injury is one of the more prominent methods of understanding the effects of exercise on growth plate physiology. The immature skeleton is different from the adult skeleton with unique vulnerability to acute and chronic injuries at the growth plate . EpiphysGerstenfeld et al summarizThe second and arguably the least discussed aspect concerning the epiphyseal growth plate is the role inactivity may play on the growth plate. Given the current interest in and rising rates of child obesity such interest in the growth plate should be considered. One of the implicated culprits in the child obesity epidemic is the lack of physical activity. It is known that epiphyseal growth plate activity controls longitudinal bone growth and leads ultimately to adult height yet numerous disorders are characterized by retarded growth and reduced final height either have their origin in altered chondrocyte physiology or display pathological growth plate changes secondary to other causes . PhysicaNonetheless, the recent concern over the increasing incidence and prevalence of obesity as seen in children gives rise to concern for the normal growth process. As previously indicated, functioning growth hormone (GH) and insulin-like growth factor (IGF)-I are essential for normal growth . HoweverIt may appear that, in some cases, genetic expression, through favorable conditions, can be maximally achieved throughout the entire period of growth . In thisThe epiphyseal growth plate is a dynamic entity. Growth is dependent not only on intrinsic factors such as hormones and other regulatory factors but on extrinsic factors. These extrinsic factors are based entirely on the biomechanical model. Exercise, a positive aspect for the epiphyseal growth plate needs to be moderated through carefully crafted activities especially during pubertal growth spurts. Obesity, a major problem among today's youth, can be attributed in part to a lack of exercise. Once activity is undertaken the potential for epiphyseal growth plate disturbances from too much activity may be a predisposing factor to growth plate dynamics. The effects of exercise on the epiphyseal growth plate needs further research to comprehend the entirety of this dynamic anatomical and physiological entity. Research in the area of the epiphyseal growth plate in some children who are sedentary needs to be addressed."} +{"text": "According to the geological characteristics of Xinjiang Ili mine in western area of China, a physical model of interstratified strata composed of soft rock and hard coal seam was established. Selecting the tunnel position, deformation modulus, and strength parameters of each layer as influencing factors, the sensitivity coefficient of roadway deformation to each parameter was firstly analyzed based on a Mohr-Columb strain softening model and nonlinear elastic-plastic finite element analysis. Then the effect laws of influencing factors which showed high sensitivity were further discussed. 
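The sensitivity analysis just described can be illustrated with a small numerical sketch. The dimensionless form below (relative change in displacement divided by relative change in the parameter) is one common definition of a sensitivity coefficient; the paper's exact formula is not reproduced in this excerpt, and the displacement and modulus values are hypothetical.

```python
# One common dimensionless form of a parameter sensitivity coefficient,
#   S_k = (|dU| / U0) / (|d_alpha_k| / alpha_k0),
# evaluated from pairs of numerical runs. Treat this as an assumed illustration;
# the values below are made up, not finite element results from the study.

def sensitivity(u_base: float, u_perturbed: float,
                p_base: float, p_perturbed: float) -> float:
    """Relative change in roadway displacement per relative change in the parameter."""
    return abs((u_perturbed - u_base) / u_base) / abs((p_perturbed - p_base) / p_base)

# Example: perturb the coal-seam deformation modulus by +10% and re-run the model.
u0, u1 = 0.120, 0.095          # roof-to-floor convergence (m), hypothetical results
E0, E1 = 1.2e9, 1.32e9         # coal deformation modulus (Pa)
print(f"S_E(coal) = {sensitivity(u0, u1, E0, E1):.2f}")  # larger values = higher sensitivity
```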
Finally, a regression model for the relationship between roadway displacements and multifactors was obtained by equivalent linear regression under multiple factors. The results show that the roadway deformation is highly sensitive to the depth of coal seam under the floor which should be considered in the layout of coal roadway; deformation modulus and strength of coal seam and floor have a great influence on the global stability of tunnel; on the contrary, roadway deformation is not sensitive to the mechanical parameters of soft roof; roadway deformation under random combinations of multi-factors can be deduced by the regression model. These conclusions provide theoretical significance to the arrangement and stability maintenance of coal roadway. Rc < 15\u2009MPa) and weak cementation (argillaceous cementation). Hence, tunnels are mainly arranged in relatively stable coal seam. However, the shape of roadway is often seriously damaged after excavation due to the large deformation of surrounding rock which brings terrible impact on the production safety. Thus, it is urgent to grasp the deformation laws of the tunnel surrounding rock in order to seek reasonable constructing principle and supporting scheme.In recent years, coal industry is tending to transfer to the western area and most of the mining areas are mainly concentrated in Inner Mongolia, Shanxi, Gansu, Ningxia, Xinjiang and so on. The main rock layers of these areas are Jurassic and cretaceous soft rock strata with great thickness due to the special diagenetic environment and depositional history in western China \u20133. It isDeformation of coal roadway is actually determined by the global mechanical behavior of three body model composed of soft rock and coal seam, which shows obvious structure effect due to the different properties of rock layers. Wang and Feng discusseFor a complicated compound structure composed of different rock layers with discrete lithology, it is difficult to establish the relationship between its deformation and influencing factors by an analytical expression. By comparison, the numerical method is more effective. However, large discreteness exists in various mechanical parameters obtained by geological exploration and indoor tests which bring difficulties to the parameters selection for numerical calculation, so the key problem to improve the numerical accuracy is to determine their value scopes and sensitivity according to tunnel deformation. Hou et al., Nour et al., Fenton et al., Gill et al., and Jia et al. \u201319 have In this paper, a compound system model composed of soft rock and coal seam is established firstly according to the geological conditions and construction characteristics in Xinjiang Ili mine, and then sensitivity analysis of mechanical parameters of each rock layer to the stability of coal roadway is carried out in order to find out the high sensitivity factors which have significant influences on roadway displacement. The conclusions obtained may provide some theoretical principles and optional methods for correct selection of simulation parameters, as well as improving the overall stability of coal roadway.The tunnel this paper had chosen is located in Ili forth mine in Xinjiang area. Soft rock strata such as mudstone and sandstone layers with low strength were encountered during the construction of transport tunnel. From the field monitoring data, the rates of roof subsidence and floor heave develop fast after excavation. As shown in q is the self-weight stress of overlying strata. 
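As a quick worked example of the top boundary load, the self-weight stress of the overlying strata is commonly estimated as q = γH. The unit weight and burial depth used below are assumed round numbers for illustration, not site data from the Ili mine.

```python
# The boundary load q applied to the model top is the self-weight stress of the
# overlying strata, commonly estimated as q = gamma * H. Both values are assumptions.

gamma = 25_000.0   # average unit weight of overburden, N/m^3 (about 2.5 t/m^3)
H = 200.0          # thickness of strata above the model top, m (assumed shallow depth)

q = gamma * H      # Pa
print(f"q = {q / 1e6:.1f} MPa")   # -> 5.0 MPa applied as a uniform vertical load
```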
The horizontal displacements of left and right boundary lines were restricted along with a fixed bottom. Due to the shallow depth of roadway, only gravity stress and horizontal stress are considered while ignoring the effect of tectonic stress.The layout of tunnel is usually passing through coal seam due to the characteristics of low stiffness, being easy to deform, and weak stability of weak cementation soft rock strata. For this, a compound physical model composed of soft rock coal seam was built as shown in Ur and Uf, respectively, were selected as the stability evaluation indexes. Let be the displacement of roof to floor U = Ur and Uf. For certain strata with specific thickness and buried depth, the main factors influencing the displacement of monitoring points are \u03b1 = {E, C, \u03c6, h}, namely, the deformation modulus, cohesion force, and friction angle of each rock layer as well as the depth of coal seam on the bottom of tunnel floor stand for undetermined coefficients. The equivalent mechanical parameters have the following forms:Single factor analysis can only determine the influence law of a separate factor on the stability of surrounding rock, so it is difficult to establish the cross-influences of different parameters on the system feature. Thus, it is necessary to establish a multi-factors regression model for the system in order to make clearly the system sensitivity while multiple parameters change at the same time. For this, the system regression model is set as follows based on the influencing laws of each factor obtained in The unit type of each mechanical parameter is in line with Ef and h were changed simultaneously within their scope. It is easy to obtain the convergence displacements under the arbitrary combination of h and Ef. Similarly, the displacement responses can also be quickly established when other parameters change. The regression model provides theoretical basis for determining the optimal parameter combination which can minimize the tunnel displacement.h \u2265 2\u2009m, so it should be fully considered in the arrangement of coal roadway.The tunnel displacement is highly sensitive to the depth of coal seam under soft floor. On the condition of specified thickness of each layer, the deformation of roof and floor tend to be stable when The mechanical parameters of coal seam and soft floor have significant influence on tunnel displacement; the direction of floor displacement will be changed by enhancing the stiffness of coal seam which can effectively prevent and control floor heave as well as decrease the roof subsidence; the damage state of side walls is determined by strength parameters of coal seam; therefore, the overall stability of roadway can be improved by the reinforcement of side walls. The roof subsidence can be obviously reduced by increasing the elastic modulus of floor. The floor displacement can be dramatically decreased by improving the floor strength which is closely related to the shear failure area of soft floor. The tunnel displacement is not sensitive to the mechanical parameters of soft floor.The multifactors regression model laid a theoretical foundation for seeking the best parameter combination and providing reasonable supporting to control the deformation of this kind of roadway."} +{"text": "The histopathological evaluation showed an infiltration of bone with tumor cells. 
Review of the literature revealed no previous cases of skeletal metastasis from gastric cancer to the tibia like this one.We report a 41-year-old man with a rapidly growing, tender lump on the anteromedial surface of the tibia. The patient had a history of gastrectomy and gastrojejunostomy for gastric carcinoma. On admission, a plain X-ray of the lower extremity disclosed slight thinning of the anterior tibial cortex without cortical destruction. The whole-body bone scan with Gastric cancer has not been recognized as a member of the group of common bone invaders consisting of lung, prostate, breast, thyroid and kidney cancers. Moreove A 41-year-old man came to the orthopedic ward with the chief complaint of a tender, painful lump on the anteromedial surface of the proximal third of his tibia, which had been growing over the previous 6 months. The patient was very cachectic and had a history of proximal gastrectomy and gastrojejunostomy performed 16 months earlier for gastric carcinoma, followed by six sessions of chemotherapy and radiotherapy. Pathologic evaluation of the tumor at that operation had reported a moderately differentiated adenocarcinoma with epithelial-to-serosal extension and no lymph node involvement. On physical examination of the extremity, a firm, tender lump measuring 6 \u2217 4\u2009cm was firmly attached to the anteromedial surface of the tibia. On plain X-ray, an interesting feature was the slight thinning of the anterior cortex without cortical destruction, which made the diagnosis of metastasis somewhat unlikely. In whol Puri et al., in their report of three cases of gastric cancer presenting with distant metastasis, describe a case of metastasis to the left forearm in the form of a soft tissue lump and another case of a fibular metastatic lesion. Kammori Even though metastasis to bone is much more common in the axial skeleton and the roots of the extremities, we should still expect to see metastases in unusual sites and from cancers not so notorious for invading bone."} +{"text": "Molecular footprints of phenotypic variation are usually explored by two different approaches in trees. On the one hand, association genetics seeks statistical correlations between SNPs and traits, building on the significant heritability that is usually observed in progeny tests. On the other hand, detection of outlier Fst values of single SNPs has also been implemented to account for the very large differentiation of traits observed in provenance tests. The rationale of both approaches is based on the extensive within- or between-population genetic variability that has been widely recorded in field tests. While the cataloguing of candidate genes has steadily increased in trees, so has the inventory of their diversity in natural or breeding populations. There is now a rapidly growing body of experimental results showing a very large discrepancy between the expectations of both approaches and the SNP variation that is monitored in candidate genes. Most association studies show that single SNPs usually explain less than 5% of the phenotypic variation of the trait, while Fst values are at best comparable to those of neutral markers. In my presentation I will explore the reasons for the decoupling between trait and gene variation, focusing on the multilocus structure of traits, as compared to the monolocus SNP information.
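The "less than 5%" figure quoted above follows almost mechanically from the multilocus argument: if heritable variation is spread over many loci of comparable effect, each SNP can only account for a small slice of it. The arithmetic below is a back-of-the-envelope illustration with assumed heritability and locus numbers, not an analysis of any particular trait.

```python
# If a trait with heritability h2 is controlled by L loci of roughly equal effect,
# each locus is expected to explain only about h2 / L of the phenotypic variance.
# The numbers are assumptions chosen to show why single-SNP R^2 below 5% is expected.

def expected_per_snp_r2(h2: float, n_loci: int) -> float:
    return h2 / n_loci

for n in (10, 50, 200):
    print(f"h2=0.4, {n:>3d} loci -> ~{expected_per_snp_r2(0.4, n):.3%} per SNP")
# e.g. 0.4 / 200 = 0.2% per SNP, far below the 5% ceiling seen in association studies.
```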
Indeed, association studies and Fst outlier detection are essentially based on single locus approaches, while traits are multidimensional structures. There are at least three properties of multilocus structures that will be investigated: cumulativity, interactions and covariation of gene effects. I will show that cumulativity and interactions may explain the discrepancy in the case of association studies, while covariation of gene effects (at the between population level) explain the missing Fst of genes underlying adaptive traits. These conclusions are supported by experimental results, theoretical background and simulation predictions."} +{"text": "Food safety is threatened by numerous contaminants, which can originate from environmental pollution, such as toxic metals and organic halogenated compounds; chemicals used in the production of food, such as pesticides and veterinary drugs; contaminants formed during food production and cooking; contaminants arising from food packaging, or natural toxins in food. Consumers\u2019 perceptions of food-related risks have recently been investigated in the 2010 Eurobarometer. The highest concern was reported for pesticides in fruit, vegetables and cereals, with 72% of the respondents being very or fairly worried. Somewhat fewer people were worried about residues like antibiotics and hormones in meat (70%), pollutants like mercury and dioxins (69%), food poisoning from bacteria (62%) or putting on weight (47%).Adverse effects of environmental contaminants may be displayed as developmental toxicity and endocrine disruption, with fetuses and children being vulnerable target groups. One contaminant, which has attracted much attention, is bisphenol A (BPA) and there is a scientific controversy about the low-dose health risks of BPA. BPA is used in the production of polycarbonate plastics and epoxy resins. Human exposure is mainly from packaged food and beverages. BPA binds to estrogen receptors and also acts through other mechanisms on endocrine function. We have investigated effects of BPA on steroidogenic pathways in the human adrenocortical cell line H295R. Secretion of steroidogenic hormones and intermediates were affected at non-toxic levels of BPA. The effects were mediated by inhibition of CYP17 and CYP21 and expressions of steroidogenic genes were downregulated. This may be an additional mechanism behind the endocrine disruptive effects of BPA.One of the greatest challenges in toxicology today is in predicting the risks associated with chemical mixtures. The exposure to contaminants via the diet occurs as a mixture rather than as individual compounds. Thus, food safety is to a high extent dependant on possibilities to predict risks from mixtures. The two models most frequently used are concentration addition and independent action. The main difference between the two models is the assumption of mode of action of the chemicals in the mixture. In concentration addition it is assumed that the chemicals work through a common mode of action and can be regarded as dilutions of each other. In independent action it is assumed that the chemicals act independently via different mode of actions and the mixture effect is predicted by the probabilities of response of the individual chemicals. We have investigated mixture effects of food-related chemicals on secretion of steroids in the human adrenocortical cell line. The results have been compared to the predicted effects from the two prediction models. 
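The two prediction models named above can be made concrete with a short sketch. Concentration addition treats the components as dilutions of one another (the sum of toxic units c_i/EC_x,i equals one at the predicted mixture effect), while independent action combines the response probabilities of the individual chemicals. The dose-response parameters and mixture concentrations below are invented for illustration and are not data from the H295R experiments.

```python
# Sketch of the two standard mixture-prediction models applied to hypothetical
# Hill-type dose-response curves. All parameter values are assumptions.
import numpy as np

def hill_effect(c, ec50, slope):
    """Fractional effect (0..1) of a single chemical at concentration c."""
    return c**slope / (ec50**slope + c**slope)

ec50s  = np.array([1.0, 5.0, 20.0])   # hypothetical EC50s
slopes = np.array([1.2, 1.0, 2.0])
conc   = np.array([0.3, 1.5, 6.0])    # concentrations of the three chemicals in the mixture

# Independent action: combine the response probabilities of the individual chemicals.
e_ia = 1.0 - np.prod(1.0 - hill_effect(conc, ec50s, slopes))

# Concentration addition: find the effect level x at which the sum of toxic units
# (c_i / EC_x,i) equals 1, scanning effect levels numerically.
def ecx(x, ec50, slope):              # concentration giving fractional effect x
    return ec50 * (x / (1.0 - x))**(1.0 / slope)

levels = np.linspace(1e-4, 1.0 - 1e-4, 10_000)
toxic_units = np.array([np.sum(conc / ecx(x, ec50s, slopes)) for x in levels])
e_ca = levels[np.argmin(np.abs(toxic_units - 1.0))]

print(f"independent action: {e_ia:.2f}, concentration addition: {e_ca:.2f}")
```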
In general the chemicals acted in an additive manner on secretion of hormones, which could be predicted by both models."} +{"text": "Models of competitive template replication, although basic for replicator dynamics and primordial evolution, have not yet taken different sequences explicitly into account, neither have they analyzed the effect of resource partitioning (feeding on different resources) on coexistence. Here we show by analytical and numerical calculations that Gause's principle of competitive exclusion holds for template replicators if resources (nucleotides) affect growth linearly and coexistence is at fixed point attractors. Cases of complementary or homologous pairing between building blocks with parallel or antiparallel strands show no deviation from the rule that the nucleotide compositions of stably coexisting species must be different and there cannot be more coexisting replicator species than nucleotide types. Besides this overlooked mechanism of template coexistence we show also that interesting sequence effects prevail as parts of sequences that are copied earlier affect coexistence more strongly due to the higher concentration of the corresponding replication intermediates. Template and copy always count as one species due their constraint of strict stoichiometric coupling. Stability of fixed-point coexistence tends to decrease with the length of sequences, although this effect is unlikely to be detrimental for sequences below 100 nucleotides. In sum, resource partitioning (niche differentiation) is the default form of competitive coexistence for replicating templates feeding on a cocktail of different nucleotides, as it may have been the case in the RNA world. Our analysis of different pairing and strand orientation schemes is relevant for artificial and potentially astrobiological genetics. The dynamical theory of competing templates has not yet taken the effect of sequences explicitly into account. One might think that complementary sequences have very limited competition only. We show that, despite interesting sequence effects, competing template replicators yield to Gause's principle of competitive exclusion so that the number of stably coexisting template species cannot exceed the number of nucleotide species on which they grow, although one of the findings is that plus and minus strands together count as one species. Thus up to four different templates/ribozymes can constitute the first steps to an early, segmented genome: we suggest that other mechanisms build on this baseline mechanism. Gause (1934) in the Golden Age of theoretical ecology formulated the principle of competitive exclusion, proposing in effect what usually is being referred to as \u201ccomplete competitors cannot coexist\u201d We explicitly take into consideration the concentration of up to four different building blocks and that the dynamics of these abiotic resources are not periodically forced, for example. We neglect replicase enzymes and assume that template and replica separate irreversibly upon completion of elongation. The kinetic effects are simplified to the extent that the elongation rate of template polymerization depends only on the identity of the inserted nucleotide and nothing else. We know that this is a crucial simplification but already with this rule different sequences may assume very different kinetic phenotypes. 
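A deliberately stripped-down sketch of monomer-limited template competition of the kind described here is given below. It collapses the chain of replication intermediates into a single mass-action step per template and uses invented rate constants, so it is a caricature of the authors' model rather than a reimplementation; it is only meant to show how shared nucleotide pools couple the growth of otherwise independent templates.

```python
# Simplified caricature of resource-limited template competition (NOT the authors'
# intermediate-resolved model): each template grows at a rate limited by the monomers
# it needs, monomers flow in at a constant rate, and both templates and monomers decay.
# All rate constants are invented for illustration.
import numpy as np
from scipy.integrate import solve_ivp

comp = np.array([[3, 1],      # template 1 uses 3 A and 1 B per copy
                 [1, 3]])     # template 2 uses 1 A and 3 B per copy
k_rep = np.array([1.0, 1.0])  # replication rate constants
d_tmp = np.array([0.1, 0.1])  # template decay
inflow = np.array([1.0, 1.0]) # monomer influx (A, B)
d_mon = np.array([0.1, 0.1])  # monomer decay

def rhs(t, y):
    x, m = y[:2], y[2:]                            # template and monomer concentrations
    growth = k_rep * x * np.prod(m**comp, axis=1)  # mass-action in the required monomers
    dx = growth - d_tmp * x
    dm = inflow - d_mon * m - comp.T @ growth      # monomers consumed by replication
    return np.concatenate([dx, dm])

sol = solve_ivp(rhs, (0, 500), [0.01, 0.01, 2.0, 2.0], method="LSODA",
                rtol=1e-8, atol=1e-10)
print("final template concentrations:", sol.y[:2, -1])
```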
In agreement with this, we neglect secondary and tertiary structures.raison d'\u00eatre for these assumptions is that we would like to demonstrate the effect of competition for resources of competing template sequences as simply and clearly as possible. We deliberately want to see the dynamics of coexistence under irreversible exponential growth tendency, as a kind of worst case. The effect of the sequence diversity of templates on dynamical coexistence is not trivial. If there are two resources A and B, then it is trivial that sequences of The Some of the effects that we show in this paper are far from trivial. Our calculations show the effect of resource partitioning on template coexistence and shed more light on early molecular evolution, which surely was affected by sequence effects of template replicators.sequences) we formulated the dynamics of polynucleotide replication. Here we only explain the necessary basics of our formalism, for the mathematical model see To understand the mechanism of coexistence of template replicators ; we also introduce parallel strand polarity as opposed to antiparallel polarity (like in case of RNA replication). The difference between complementary and non-complementary pairing and parallel and antiparallel strand polarity is given in The following model of template replication models RNA replication, dealing with 4 monomers, complementary pairing and antiparallel strand polarity. We have investigated the coexistence of M1 With a given set of parameters (elongation constants M2 We have investigated the coexistence for each sampled sequence group for different numbers of complementary pairs (and stability of coexistence and that on four different monomers a maximum of four different sequence pairs could possibly coexist. Thus Gause's principle (against first intuition) limits the number of sequence pairs instead of individual sequences because of the dynamical coupling between the template and its complement . According to our exhaustive numerical results, we have found no case where more than two sequences could coexist, supporting our hypothesis. We have numerically integrated ODE systems until convergence or extinction of one of the sequences. The numerical methods and routines are the same as before. We have analyzed the coexistence of two sequences in two different ways of coexistence of sequences can be deduced right from the order of monomers of the sequences according to a very simple rule . The rulA-s in its head; the latter behaves as having A-s in the head to yield coexistence weighs more concerning the coexistence than the rear (tail). This is because during replication, earlier intermediates are present in larger concentrations than intermediates closer to the final step of the replication the trinculeotide Mechanisms for template coexistence have been in the focus of models of primordial replicator evolution cf. . Here weRecently there has been an upsurge in interest in exo/astrobiology. It is in this context that we have deliberately presented results for homologous pairing also, even with parallel orientation of the strands. Although such configurations are not unheard of even in our world, we wanted to see how such features would in general affect dynamical coexistence of template replicators.We have obtained the fitness landscapes through a distribution of elongation and degradation rates. 
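The composition rule summarized in these results (plus and minus strands count as one species, stably coexisting pairs must differ in combined nucleotide composition, and their number cannot exceed the number of monomer types) can be written as a small bookkeeping helper. This is an illustrative reading of the stated rule, not code from the study.

```python
# Helper sketch of the composition bookkeeping implied above: a template and its
# complement are counted as one species, so what matters is the combined nucleotide
# demand of the pair. Two pairs with identical combined composition cannot stably
# coexist, and the number of coexisting pairs is bounded by the number of monomer types.
from collections import Counter

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def pair_composition(seq: str) -> Counter:
    comp_strand = "".join(COMPLEMENT[b] for b in seq)
    return Counter(seq) + Counter(comp_strand)

def may_coexist(seqs: list[str]) -> bool:
    """Necessary (not sufficient) conditions for stable coexistence of template pairs."""
    comps = [tuple(sorted(pair_composition(s).items())) for s in seqs]
    distinct = len(set(comps)) == len(comps)     # all combined compositions differ
    within_resource_limit = len(seqs) <= 4       # at most one pair per monomer type
    return distinct and within_resource_limit

print(may_coexist(["AAAG", "GGGA"]))   # True: different combined A/U vs G/C demand
print(may_coexist(["AAAG", "CUUU"]))   # False: complements of each other -> same pair
```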
The main reason behind this is tractability: although the 2D structures as phenotypes of RNA molecules can be calculated for most cases, this does not automatically yield phenotypes in terms of replication rates. We are temporarily satisfied with the phenotype richness that our local rules provided see . What isIn each experiment, we integrated the system of Let us introduce two complementary sequences:The extension of the dynamics for more sequence pairs is straightforward. The dynamics of the intermediates is independent for each pair and the dynamics of the monomers provides the coupling between the equations of different pairs of sequences. Because of the cross-coupling of equations, no analytical solution was found . For the numerical integration of the ODE system to find steady-state solutions we have used the CVODE code from the SUNDIALS project of the Lawrence Livermore National Laboratory Uniform degradation rates of sequence intermediates allow for a completely analytic approach. For the positivity test of concentrations, we introduce the following notation for the constants of the power sum of different signs.Let us assume that influx can counter degradation. In this case the condition of coexistence of length . Lower left half: coexistence is marked by green, extinction of the first sequence pair by red and extinction of the second sequence pair by blue. Upper right half: stability of coexistence according to the leading eigenvalue . The upper triangle shows the stability measures of the sequences pairs from the lower one (mirrored and rotated (TIF)Click here for additional data file.Figure S2Coexistence plots of pairs of double-stranded sequences of length (upper panel) and (lower panel) using two monomers in case of uniform degradation and identical elongation rate constants and non-complementary pairing. The green indicates stable coexistence, grey indicates structurally unstable coexistence, i.e. compositional identity , while pink indicates that there is no coexistence possible. Sequences along the axes are arranged first according to Hamming distance and secondly according to lexicographic ordering from more (TIF)Click here for additional data file.Figure S3Correlation plot of the fitness landscape as a function of Hamming distances for sequence pairs of length .(TIF)Click here for additional data file.Table S1Analysis of coexistence of non-complementary pairs of sequences of length according to method M1. Second column shows the number of scanned sequences (and the amount as a fraction of the whole combined sequence space). The third column shows the fraction of coexisting sequences in the scanned domain, i.e. the probability of coexistence of random sequence groups of size (XLS)Click here for additional data file.Table S2Analysis of coexistence of non-complementary pairs of length according to method M2. Second column shows the number of scanned sequences (and the amount as a fraction of the combined sequence space). 
The third column shows the fraction of coexisting sequences averaged over the scanned sequence groups and over 1000 random degradation rate sets for intermediates Click here for additional data file.Table S3Examples of coexistence for .(XLS)Click here for additional data file.Table S4Examples of coexistence for .(XLS)Click here for additional data file.Table S5Examples of coexistence for .(XLS)Click here for additional data file.Text S1Supporting text with sections on 1) Parameters for methods M1, M2, M3 and M4; 2) Analysis of non-complementary pairing with antiparallel polarity; 3) Analysis and analytical results of non-complementary pairing and uniform degradation rates; 4) Proofs; 5) Discussion of the fitness landscape; 6) Examples of coexistence of longer sequences.(PDF)Click here for additional data file."} +{"text": "Among the many achievements of the twentieth century the accelerating development of microprocessors and their capabilities is one of the most impressive accomplishments, influencing virtually every aspect of daily life. The impact of this new technological resource was (and still is) of particular importance for theoretical approaches in science and engineering. Although many theories , an increased awareness of the importance of the solvation is observed in literature in recent years consider alternative approaches in quantum chemistry, as have been summarized in a recent publication asking: Accuracy consideration and general applicability are not the only advantage of quantum mechanical computations over the use of empirical approaches: the majority of force fields can not (or not adequately) describe the formation and cleavage of chemical bonds. However, in recent years so-called reactive force fields are another topic that received increased interest in recent years. NQEs are strongly linked to the computational treatment of hydrogen transfer reactions, which are of critical importance for many chemical disciplines such as catalysis and biochemistry. Due to the low mass of hydrogen atoms, nuclear quantum effects may have a significant impact on the reaction dynamics (Agarwal et al., Each of these grand challenges is strongly dependent on the development of improved computational facilities. Since it has been estimated that the progress in microprocessor technology will continue throughout the next decades, the formulation of improved approaches addressing the grand challenges will constitute a promising and exciting research activity in the future. It can be expected that theoretical approaches in chemistry assume a similar leading role in chemical science, as observed for theoretical methods in physics during the last century."} +{"text": "To present a group of anatomical findings that may have clinical significance.This study is an anatomical case report of combined lumbo-pelvic peripheral nerve and muscular variants.University anatomy laboratory.One cadaveric specimen.During routine cadaveric dissection for a graduate teaching program, unilateral femoral and bilateral sciatic nerve variants were observed in relation to the iliacus and piriformis muscle, respectively. 
Further dissection of both the femoral nerve and accessory slip of iliacus muscle was performed to fully expose their anatomy.Piercing of the femoral nerve by an accessory iliacus muscle combined with wide variations in sciatic nerve and piriformis muscle presentations may have clinical significance.Combined femoral and sciatic nerve variants should be considered when treatment for a lumbar disc herniation is refractory to care despite positive orthopedic testing. The recurrence of leg pain from lumbar disc herniations is a common post treatment clinical finding. Certain muscular and peripheral nerve variants may represent an unrecognized etiology in these cases and may affect the outcome of specific treatments. Recognition of these variations in normal anatomy may be useful to the clinician when treating the patient with refractory leg pain. The femoral nerve, derived from the second to fourth lumbar dorsal divisions, is one of the terminal branches of the lumbar plexus. MultiplThe sciatic nerve, formed from the ventral rami of the fourth lumbar to third sacral spinal nerves, leaves the pelvis passing both anterior and inferior to the piriformis or sometimes through the muscle. A 2010 Piercing of the femoral nerve by an accessory iliacus muscle in combination with bilateral variations in both sciatic nerve and piriformis muscle anatomy exemplifies the wide variability that exists within the lumbar and lumbosacral plexus. The clinical implications of these combined variants are discussed.During routine cadaveric dissection, bilateral sciatic and unilateral femoral nerve variants were detected. The course and muscular relationships of both sciatic nerve variants were studied. The femoral nerve variant was further dissected and was examined to determine its nerve root contributions and its branching pattern. Also, the accessory muscular slip of the iliacus muscle that was piercing the femoral nerve was dissected to determine both its origin and insertion points.On the right side, the sciatic nerve was split into the common fibular and tibial divisions proximal to the piriformis muscle, with the common fibular division passing above and superficial to the piriformis muscle and the tibial division passing inferior and deep to the muscle. On the left side, the sciatic nerve was also divided proximal to the piriformis muscle. However, the piriformis muscle was pierced and subdivided into two discrete bellies by the common fibular division, while the tibial division passed inferior and deep to the most caudal border of the piriformis muscle Figure. The rigIn the left iliac fossa, the femoral nerve emerged both lateral and deep to the psoas major muscle between the psoas major and iliacus muscles covered in iliac fascia. It was then pierced and divided into two separate divisions by an accessory slip of the iliacus muscle. Just proximal to the inguinal ligament, these two separate divisions rejoined and the femoral nerve passed as one under the inguinal ligament and then divided into its usual anterior and posterior branches Figure. The accThe sciatic and femoral nerves represent the two largest peripheral collections of lumbar and sacral nerve roots. There hStraight leg raise and femoral nerve traction tests are commonly performed orthopedic maneuvers done to ascertain the presence of a lumbar disc herniation,17. FemoVariants in lumbar and lumbosacral plexus anatomy should be considered when a symptomatic lumbar disc herniation is refractory to care. 
Recognition of these anatomical variants may lead to earlier intervention of physiologic testing, better treatment outcomes and improved patient satisfaction. Future studies examining the prevalence of these combined variants in the general population would be of interest to clinicians.Written informed consent was obtained from the deceased prior to the gift of body donation. All handling of anatomical specimens was in accordance with the institutions ethical policy for body donation for anatomical study and scientific purposes. A copy of the written consent is available for review by the Editor-in-Chief of this journal.The authors declare that they have no competing interests.PB conceived of the case report, assisted in reviewing the literature and drafting the manuscript. FS provided anatomical artwork, assisted in reviewing the literature and drafting the manuscript. DE assisted in reviewing the literature, drafting the manuscript and provided critical review. All authors read and approved the final manuscript."} +{"text": "The microcirculation is a major topic in current physiology textbooks and is frequently explained with schematics including the precapillary sphincters and metarterioles. We re-evaluated the validity and applicability of the concepts precapillary sphincters and metarterioles by reviewing the historical context in which they were developed in physiology textbooks. The studies by Zweifach up until the 1950s revealed the unique features of the mesenteric microcirculation, illustrated with impressive schematics of the microcirculation with metarterioles and precapillary sphincters. Fulton, Guyton and other authors introduced or mimicked these schematics in their physiology textbooks as representative of the microcirculation in general. However, morphological and physiological studies have revealed that the microcirculation in the other organs and tissues contains no metarterioles or precapillary sphincters. The metarterioles and precapillary sphincters were not universal components of the microcirculation in general, but unique features of the mesenteric microcirculation. The microcirculation is one of the major topics in the study of physiology and histology. In current physiology textbooks, the general architecture of the microcirculation is frequently explained with schematic drawings depicting precapillary sphincters and metarterioles in addition to arterioles, capillaries and venules. Judging from the main text and figures with legends in relatively recent textbooks such as Johnson\u2019s Fig.\u00a01)1) and BoSince the concept of precapillary sphincters and metarterioles is quite popular in physiology textbooks, one might expect that previous physiological investigations had revealed the existence of these structures in various tissues with reasonable certainty. Nevertheless, the existence of precapillary sphincters as universal components of the microcirculation has been questioned by some physiologists. McCuskey pointed In the present article, we re-evaluated the validity and applicability of the concept of precapillary sphincters and metarterioles by reviewing the historical context in which these concepts were developed and accepted in four steps. In the first part, we investigated the historical process of how the schematics of these concepts was developed and incorporated into physiology textbooks. 
In the second part, we evaluated the validity of the concepts by examining the physiological literature on the microcirculation of the mesentery in which the concepts were originally proposed. In the third part, we examined the applicability of the concepts to the microcirculation of organs other than the mesentery. In the fourth part, we verified the structural counterparts of the concepts by examining the histological literature on the vasculature.We examined the available physiology textbooks from Haller\u2019s \u201cPrimae Lineae Physiologiae\u201d of 1747 to the mThe schematics of the microcirculation in the textbooks shown in Table\u00a0Among the six schematics, two figures contained neither precapillary sphincters nor metarterioles, including figure\u00a01 from Zweifach\u2019s 1937 article on the dOne of the six figures showed sketches of observations of the mesentery microcirculation with precapillary sphincters and metarterioles, namely figure\u00a01 in Zweifach\u2019s 1950 article The other three figures represented an idealized microcirculation and contained both precapillary sphincters and metarterioles, including figure\u00a01 of Chambers and Zweifach\u2019s article from 1944 Fig.\u00a06)6), figurThe other schematics of the microcirculation in the physiology textbooks did not use Zweifach\u2019s article as the source for the figures and appeared to be original drawnings. The schematic in Berne and Levy\u2019s physiology in 1967 and its The above survey indicated that the schematics of the microcirculation in the current textbooks of physiology relied on specific articles by Zweifach before 1953. In the following part, we will examine the literature on physiological research on the microcirculation, especially those by Zweifach, to evaluate the extent of applicability of the concepts of precapillary sphincters and metarterioles.A series of investigations by Zweifach utilized the mesentery of a few different animals for visualization of the microcirculation in vivo, beginning with Zweifach\u2019s research in 1937 reportedIn introducing the new concepts of metarterioles and precapillary sphincters, Chambers and Zweifach referredSummarizing the above literature survey, it must be concluded that the precapillary sphincters and metarterioles identified by several studies by Zweifachs provide enough evidence on the mesentery microcirculation, but their findings cannot be accepted as evidence for the microcirculation of the various tissues and organs in general.Zweifach provided four types of schematics including precapillary sphincters and metarterioles in three articles. The first type was in an article on the mammalian omentum and mesentery by Chambers and Zweifach in 1944 presentiHowever, the metarterioles and precapillary sphincters were topics of later physiological research on the microcirculation of various tissues and organs including the skeletal muscles \u201339, skinAfter the pioneering studies of Zweifach and coworkers, the structure and function of the microcirculation were studied in other organs such as the skeletal muscle, skin, heart, liver, kidney, intestine, etc. The microcirculation architecture was revealed to be quite diverse among organs and had specific patterns suitable to the functional demands of the individual organs.In skeletal muscle, the arterioles and venules branch successively to become the terminal segments that supply and drain microvascular units, respectively . The terIn the skin, the microcirculation is organized as two horizontal plexuses . 
One is In the coronary microcirculation, the small penetrating arteries give rise to arterioles at almost right angles, which take either longitudinal or oblique courses to the muscle fibers to give off capillaries around the muscle fibers . BetweenIn the liver, the hepatic arteries (HAs) and portal veins (PVs) divide successively into terminal HAs and PVs in the connective tissue stroma at the periphery of the hepatic lobules. The terminal PVs directly supply the sinusoids, while the terminal HAs drains into the terminal PVs and the proximal part of the sinusoids. The sinusoidal blood is drained via central venules into the hepatic veins . There aIn the kidney, the arteries enter the parenchyme and divide into the arcuate arteries at the corticomedullary boundary from which the interlobular arteries arise toward the cortical surface to branch off successively into the afferent arterioles to supply the glomerular capillaries in the individual glomeruli . The effIn the small intestine, the arterial and venous plexuses are formed in the submucosa . From thHowever, in the mesenteric microcirculation, the metarterioles have been recognized by the other authors as shunting arterioles . The bloThe metarterioles in the mesentery represent thoroughfare channels between the arterioles and venules and therefore can be regarded as a kind of arteriovenous anastomosis. These have been reported in various tissues and organs including the skin, skeleton, muscle, lung, heart, intestinal canal, kidney, brain, eye and ear . HoweverThe structure of the microvasculature has been repeatedly investigated in various tissues by means of three electron microscopy methods. By transmission electron microscopy (TEM), the smooth muscle cells and extracellular matrices of the vascular wall can be well visualized and analyzed, but it is usually difficult to identify the observed location within the three-dimensional branching of the vascular tree on the sectional planes. Thus, identification of precapillary sphincters may only be possible after repeated careful observations of the microvasculature with TEM. By scanning electron microscopy of vascular casts (SEM-vc), the three-dimensional branching pattern of the vasculature can be well visualized and analyzed, but it was practically impossible to identify the cellular composition of the vascular wall. The precapillary sphincter can be inferred by constriction of the vascular casts at the base of the vascular branches with SEM-vc. By scanning electron microscopy after removal of the extracellular matrices (SEM-rem), both the structure of the vascular wall and the three-dimensional branching pattern can be well visualized, but the cellular types can be identified only tentatively on the basis of cellular shape. The precapillary sphincters can be identified convincingly with SEM-rem.In a search of the literature, we found 36 morphological studies on the microvasculature in 16 kinds of tissues employing one of the three morphological methods Table\u00a0. Six of Rhodin made detailed observations of the microvasculature employing TEM . The idePrecapillary sphincters were reported in the microvasculature of the heart with TEM and SEM-vc. Sherf et al. describePrecapillary sphincters were reported in the microvasculature of the brain with SEM-vc by Nakai et al. and CastAs shown above, the morphological evidence did not verify the existence of precapillary sphincters in the microvasculature of the muscular fascia, heart and brain. 
In the other 13 kinds of tissues, the precapillary sphincters were not observed at all with TEM, SEM-vc or SEM-rem. Our own observations of the microvasculature in the lung , kidney The arterioles received an abundance of aminergic (adrenergic) innervation in many organs such as the skeletal muscles \u201380, saliThe studies by Zweifach up until the 1950s revealed the unique features of the mesenteric microcirculation and provided impressive schematics of the microcirculation with metarterioles and precapillary sphincters. Fulton, Guyton and other authors introduced or mimicked these schematics in their physiology textbooks as representative of the microcirculation in general. However, morphological studies have revealed that the microcirculation in other organs and tissues contains no metarterioles or precapillary sphincters, and physiological studies on the microcirculation have used the terms metarterioles and precapillary sphincters differently. This reveals that the metarterioles and precapillary sphincters are not universal components of the microcirculation in general, but unique features of the mesenteric microcirculation. Therefore, explanations and illustrations of the microcirculation with metarterioles and precapillary sphincters can be regarded as inappropriate and misleading in physiology textbooks about the organs and tissues that aim to teach in general and not specific terms."} +{"text": "The hypothalamic\u2013pituitary\u2013adrenal (HPA) axis plays a key role in adaptation to environmental stresses. Parvicellular neurons of the hypothalamic paraventricular nucleus secrete corticotrophin releasing hormone (CRH) and arginine vasopressin (AVP) into pituitary portal system; CRH and AVP stimulate adrenocorticotropic hormone (ACTH) release through specific G-protein-coupled membrane receptors on pituitary corticotrophs, CRHR1 for CRH and V1b for AVP; the adrenal gland cortex secretes glucocorticoids in response to ACTH. The glucocorticoids activate specific receptors in brain and peripheral tissues thereby triggering the necessary metabolic, immune, neuromodulatory, and behavioral changes to resist stress. While importance of CRH, as a key hypothalamic factor of HPA axis regulation in basal and stress conditions in most species, is generally recognized, role of AVP remains to be clarified. This review focuses on the role of AVP in the regulation of stress responsiveness of the HPA axis with emphasis on the effects of aging on vasopressinergic regulation of HPA axis stress responsiveness. Under most of the known stressors, AVP is necessary for acute ACTH secretion but in a context-specific manner. The current data on the AVP role in regulation of HPA responsiveness to chronic stress in adulthood are rather contradictory. The importance of the vasopressinergic regulation of the HPA stress responsiveness is greatest during fetal development, in neonatal period, and in the lactating adult. Aging associated with increased variability in several parameters of HPA function including basal state, responsiveness to stressors, and special testing. Reports on the possible role of the AVP/V1b receptor system in the increase of HPA axis hyperactivity with aging are contradictory and requires further research. Many contradictory results may be due to age and species differences in the HPA function of rodents and primates. The hypothalamic\u2013pituitary\u2013adrenal (HPA) axis is a key adaptive neuroendocrine system. 
Regulation of glucocorticoid secretion through adrenocorticotropic hormone (ACTH) is critical to life and essential to maintain the mammalian response to stressor of the PVN is a key center of the central nervous system, integrating the neuroendocrine effects of stress, and a key part of the HPA axis. On the one hand, the CRH neuron is under the regulatory influence of numerous afferent nerve pathways that carry information about the stressor. In addition, it is regulated by glucocorticoids and it is the central link of the autoregulation mechanism in the HPA axis.The CRH neuron receives projections from ascending catecholaminergic pathways including noradrenergic projections from the A2 noradrenergic cell group within the nucleus of the solitary tract and the locus ceruleus. It also receives input from areas of the limbic system, notably the bed nucleus of the stria terminalis, the hippocampus, and the amygdala and the low-affinity glucocorticoid receptors (GR), which are expressed in the brain and on the corticotrophs of the pituitary and a number of peripheral tissues revealed that age-related dysfunctions of the HPA axis are associated with adaptive behavior of animals receptors the possible increase of the AVP production in the PVN for aged rodents, (b) the possible lack of any age changes in the AVP secretion in basal and stress conditions, and (c) the possible reduced basal AVP production along with the CRH hypersecretion.Only a few studies have addressed age-related changes in vasopressinergic regulation of HPA axis stress reactivity in humans and non-human primates to young and old female rhesus monkeys at different times of day (09:00 and 15:00\u2009h). The response of the pituitary\u2013adrenal axis to the injection of CRH revealed a circadian rhythm with a more pronounced response in the afternoon; this rhythm was not affected by aging The role of CRH and AVP in the stimulation of ACTH secretion varies depending upon the species. For humans, non-human primates, and rats CRH seems to be a more important secretagogue than AVP, in humans, all parvicellular neurons of the PVN, which produce CRH in the basal conditions, produce AVP as well, whereas in rats not more than half produce AVP.(2)AVP is an important regulator of the ACTH response to acute stress. AVP contributes to the acute ACTH secretion to stress in a context-specific manner. AVP is needed for the acute ACTH secretion for most of the known stressors.(3)Current data on the role of AVP in the regulation of HPA axis responsiveness to chronic stress in adulthood are contradictory and require further research.(4)The importance of the vasopressinergic regulation of stress responsiveness of the HPA axis is elevated during the fetal development, in neonatal period, and in the lactating adult.(5)(a)Aging is associated with increased variability in several parameters of HPA axis function including basal state, responsiveness to stressors, and special testing. 
This increased variability in the aged compared to normal adults has been observed in some form in multiple species including rodents, non-human primates, and humans.(b)In most experiments on rodents, hyperactivation of the HPA axis with aging was observed at all levels of the HPA organization under both basal and stress conditions.(c)In healthy aging of non-human primates and humans, HPA hyperactivation is usually associated with an increase in evening cortisol levels, slight changes in the glucocorticoid feedback regulation of the HPA axis, and relative hypercortisolemia due to the decline in secretion of the adrenal antagonists of cortisol \u2013 DHEA and DHEAS.(6)Reports on the possible role of the AVP/V1b receptor system in the increase of HPA axis hyperactivity with aging are contradictory, suggesting (a)a possible increase of AVP production in the PVN in aged rodents and humans, (b)a possible lack of any age-related changes in AVP secretion under basal and stress conditions, and (c)a possible reduction of basal AVP production along with CRH hypersecretion.(7)Age-related changes in the HPA axis response to moderate acute restraint stress were observed in rhesus monkeys, with a much greater increase in ACTH and cortisol secretion in young monkeys when the stress was imposed at 15:00\u2009h. In addition, these age changes were associated with age-related disturbances in vasopressinergic regulation.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Gene regulation underlies fungal physiology and therefore is a major factor in fungal biodiversity. Analysis of genome sequences has revealed a large number of putative transcription factors in most fungal genomes. The presence of fungal orthologs for individual regulators has been analysed and appears to be highly variable with some regulators widely conserved and others showing narrow distribution. Although genome-scale transcription factor surveys have been performed before, no global study into the prevalence of specific regulators across the fungal kingdom has been presented.In this study we have analysed the number of members for 37 regulator classes in 77 ascomycete and 31 basidiomycete fungal genomes and revealed significant differences between ascomycetes and basidiomycetes. In addition, we determined the presence of 64 regulators characterised in ascomycetes across these 108 genomes. This demonstrated that overall the highest presence of orthologs is in the filamentous ascomycetes. A significant number of regulators lacked orthologs in the ascomycete yeasts and the basidiomycetes. Conversely, of seven basidiomycete regulators included in the study, only one had orthologs in ascomycetes.This study demonstrates a significant difference in the regulatory repertoire of ascomycete and basidiomycete fungi, at the level of both regulator class and individual regulator. This suggests that the current regulatory systems of these fungi have been mainly developed after the two phyla diverged. Most regulators detected in both phyla are involved in central functions of fungal physiology and therefore were likely already present in the ancestor of the two phyla. Gene regulation is of major importance for physiology of all organisms, and has been intensively studied in fungi.
It ensures that the required genes are switched on and active under the circumstances in which they are needed, and allows fungi to respond to changing conditions. Thirty-seven classes of regulator proteins have been identified in fungi, such as in Saccharomyces cerevisiae. Additional file: Presence of homologs of the 64 selected regulators in 108 fungal genomes. Presence or absence of homologs of each regulator is tabulated for the 108 fungal genomes analysed. Additional file: Gene IDs of the homologs of the 64 regulators. Gene IDs are displayed as indicated in the genome downloads. In the case of multiple homologs, the gene IDs are in the same cell separated by a comma."} +{"text": "The analyses of genome sequences have led to the proposal that lateral gene transfers (LGTs) among prokaryotes are so widespread that they disguise the interrelationships among these organisms. This has led to questioning of whether the Darwinian model of evolution is applicable to prokaryotic organisms. In this review, we discuss the usefulness of taxon-specific molecular markers such as conserved signature indels (CSIs) and conserved signature proteins (CSPs) for understanding the evolutionary relationships among prokaryotes and to assess the influence of LGTs on prokaryotic evolution. The analyses of genomic sequences have identified large numbers of CSIs and CSPs that are unique properties of different groups of prokaryotes ranging from phylum to genus levels. The species distribution patterns of these molecular signatures strongly support a tree-like vertical inheritance of the genes containing these molecular signatures that is consistent with phylogenetic trees. Recent detailed studies in this regard on the Thermotogae and Archaea, which are reviewed here, have identified large numbers of CSIs and CSPs that are specific for the species from these two taxa and a number of their major clades. The genetic changes responsible for these CSIs (and CSPs) initially likely occurred in the common ancestors of these taxa and were then vertically transferred to various descendants. Although some CSIs and CSPs in unrelated groups of prokaryotes were identified, their small numbers and random occurrence have no apparent influence on the consistent tree-like branching pattern emerging from other markers. These results provide evidence that although LGT is an important evolutionary force, it does not mask the tree-like branching pattern of prokaryotes or understanding of their evolutionary relationships. The identified CSIs and CSPs also provide novel and highly specific means for identification of different groups of microbes and for taxonomical and biochemical studies. The understanding of prokaryotic relationships is one of the most important goals of evolutionary sciences. These relationships have been difficult to understand due to the simplicity and antiquity of prokaryotic organisms and disagreements in viewpoints among evolutionary biologists regarding the importance of different factors when grouping prokaryotes.
Although earlier studies in this regard were based on morphology or physiology Cowan, , the fieThe comparative genomic analyses have revealed that phylogenetic relationships deducted based upon different genes and protein sequences are not congruent and lateral gene transfer (LGT) among different taxa is indicated as the main factor responsible for this lack of concordance , or CSIs, in protein sequences comprises an important category and a number of its sub-phylum level taxa .In addition to the CSIs that are specific for particular prokaryotic taxa, several of the identified CSIs have also proven useful in clarifying the branching order and interrelationships amongst different bacterial phyla Gupta, . One exaThermotoga except T. lettingae within prokaryotes are currently identified solely on the basis of their branching in the 16S rRNA trees. Because the phylogenetic trees are a continuum, based upon them it has proven difficult to clearly define or delimit the boundaries of different taxonomic groups. Additionally, for virtually all of the higher prokaryotic taxa, no molecular, biochemical or physiological characteristics are known that are unique to them. Hence, a very important aspect of microbiology that needs to be understood is that in what respects do species from different main groups of bacteria differ from each other and what, if any, unique molecular, biochemical, structural or physiological characteristics are commonly shared by species from different groups? In this context, the large numbers of CSIs and CSPs for different taxonomic clades of bacteria that are being discovered by comparative genomic analyses provide novel and valuable tools for taxonomic, diagnostic, and biochemical studies Based upon these markers different prokaryotic taxa can now be identified in clear molecular terms rather than only as phylogenetic entities. (2) Based upon them the boundaries of different taxonomic clades can also be more clearly defined. (3) Due to their high degree of specificity and predictive ability, they provide important diagnostic tools for identifying both known and unknown species belonging to these groups of bacteria. (4) The shared presence of these CSIs by unrelated groups of bacteria provides potential means for identifying novel cases of LGTs. (5) Functional studies on these molecular markers should help in the discovery of novel biochemical or physiological properties that are distinctive characteristics of different groups of prokaryotes.Lastly, it should be acknowledged that the number of genes which harbor rare genetic changes such as these CSIs is generally small in comparison to the total number of genes that are present in any genome. However, the genes containing these CSIs are involved in different essential functions and they are often are amongst the most conserved proteins found in various organisms. Although, the criticism could be levied that the inferences based upon small numbers of genes/proteins containing these CSIs are not representative of the entire genomes (Dagan and Martin, The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Chronic pancreatitis (CP) is a progressive inflammatory disease in which the pancreatic secretory parenchyma is destroyed and replaced by fibrosis. 
The presence of intraductal pancreatic stone(s) is important for the diagnosis of CP; however, the precise molecular mechanisms of pancreatic stone formation in CP were left largely unknown. Cystic fibrosis transmembrane conductance regulator (CFTR) is a chloride channel expressed in the apical plasma membrane of pancreatic duct cells and plays a central role in Chronic pancreatitis (CP) is a progressive inflammatory disease of the pancreas, and is characterized by pancreatic exocrine and endocrine dysfunction resulting from tissue damage caused by inflammation. The pancreatic exocrine gland is composed of two types of cells, duct cells and acinar cells. Duct cells secrete fluid and For a diagnosis of CP of the pancreas. Cytoplasmic mislocalization of the CFTR chloride channel results in aberrant The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Drosophila fly. However, not much is known about the mechanisms leading to coherent circadian oscillation in clock neuron networks. In this work we have implemented a model for a network of interacting clock neurons to describe the emergence (or damping) of circadian rhythms in Drosophila fly, in the absence of zeitgebers. Our model consists of an array of pacemakers that interact through the modulation of some parameters by a network feedback. The individual pacemakers are described by a well-known biochemical model for circadian oscillation, to which we have added degradation of PER protein by light and multiplicative noise. The network feedback is the PER protein level averaged over the whole network. In particular, we have investigated the effect of modulation of the parameters associated with (i) the control of net entrance of PER into the nucleus and (ii) the non-photic degradation of PER. Our results indicate that the modulation of PER entrance into the nucleus allows the synchronization of clock neurons, leading to coherent circadian oscillations under constant dark condition. On the other hand, the modulation of non-photic degradation cannot reset the phases of individual clocks subjected to intrinsic biochemical noise.Circadian rhythms in pacemaker cells persist for weeks in constant darkness, while in other types of cells the molecular oscillations that underlie circadian rhythms damp rapidly under the same conditions. Although much progress has been made in understanding the biochemical and cellular basis of circadian rhythms, the mechanisms leading to damped or self-sustained oscillations remain largely unknown. There exist many mathematical models that reproduce the circadian rhythms in the case of a single cell of the Drosophila melanogaster the expression of the circadian clock genes occurs within approximately 150 clock neurons. The molecular mechanisms of these fundamental oscillators consist mainly of two interlocked transcriptional feedback loops involving per, tim, clk, vri and pdp1 genes per expression, and the PER-TIM complex then inhibits the activity of clk. Furthermore, clk is regulated negatively by the VRI protein and positively by the protein PDP1. In the second loop, vri and pdp1 are directly activated by the CLK protein. After its synthesis, the PER protein is phosphorylated at several residues. This leads to a time delay between the rise of mRNAs and that of the PER acting as transcriptional repressor for the clk gene. 
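The delayed negative feedback just described (transcription, delayed accumulation of a phosphorylated repressor, and its return to the nucleus) can be made concrete with a toy model. The following is a minimal Goodwin-type sketch, not the full PER/TIM network used in the study: one variable stands for clock-gene mRNA, one for cytosolic protein, one for the nuclear repressor, and every rate constant is an illustrative assumption (with the steep Hill coefficient this three-variable form needs in order to oscillate).
```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.signal import find_peaks

def goodwin(t, y, n=10.0, b=0.1):
    """Minimal delayed negative-feedback loop: mRNA -> protein -> nuclear
    repressor, which shuts off transcription. All rates are illustrative."""
    m, p, r = y
    dm = 1.0 / (1.0 + r**n) - b * m   # transcription repressed by the nuclear protein
    dp = m - b * p                    # translation / cytosolic protein accumulation
    dr = p - b * r                    # delayed entry of the repressor into the nucleus
    return [dm, dp, dr]

sol = solve_ivp(goodwin, (0.0, 1000.0), [0.1, 0.1, 0.1], max_step=0.1)

mask = sol.t > 500                    # discard the initial transient
t, m = sol.t[mask], sol.y[0][mask]
peaks, _ = find_peaks(m, prominence=0.2 * (m.max() - m.min()))
period = np.diff(t[peaks]).mean()
print(f"sustained oscillation with period ~ {period:.1f} time units (arbitrary units)")
```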
Thus alternating protein production, gene repression, and protein degradation may lead to self-sustained oscillations. The circuit is further complicated by the influence of the kinase doubletime on degradation and transport of the PER protein Most living organisms present rhythmic phenomena whose periods range from few milliseconds to years. Circadian oscillation is an important example of this kind of phenomenon. Several recent observations have suggested that the circadian rhythm at the molecular level results from a gene regulatory network Drosophila, the clock neurons have been divided into two major groups: the lateral neurons (LN) and the dorsal neurons (DN). The lateral neurons are subdivided into three subgroups: the large and small ventrolateral neurons neuropeptide, which is specifically expressed in the ventral lateral neuron group averaging the period and phase obtained for each individual clock neuron over the clock neuron population; (ii) first averaging the The overall degree of synchronization over a specified time interval is analyzed by computing the parameter zeitgeber the clock neurons are synchronized, but in free running (DD condition) there is a loss of synchronization. Nevertheless, the molecular oscillations of each clock neuron are little affected by the absence of the zeitgeber. Now we will test the hypotheses regarding the effects of interneuron interaction on synchronization:Drosophila, we look for a region in the We assume that the input signal zeitgebers.In addition, we have also determined the period and phase of the oscillations of the individual clock neurons by fitting the time course of We have also done numerical simulations for two other values of or, see . For that allow us to explain the observed behavior in In contrast with With our model we have studied the degree of synchronization of oscillators, and the period and phase of the oscillation, at the network level, in the space of parameters for some alternative hypotheses. In particular, we have investigated the effect on the synchronization when (i) Our results indicate that for zeitgebers when the input feedback signal The present network model is able to display circadian oscillation even in the absence of external Our results indicate that mechanisms based on a positive feedback acting over the rate of entrance of the phosphorylated PER into the nucleus could be essential for maintaining the circadian oscillation under free-running condition. This fact suggests a putative way of action for the neuropeptide PDF, which could be acting as an agent that promotes the entrance of the phosphorylated PER into the nucleus."} +{"text": "To investigate the importance of incorporating adaptive designs of randomised phase II trials into the protocol and patient information sheets using NEO-ESCAPE as an illustrative example.NEO-ESCAPE was a randomised two arm phase II study in inoperable ovarian cancer designed as an external pilot to inform a future phase III randomised controlled trial. The primary objective was to assess the feasibility of two new extended chemotherapy regimens with 44 patients required on each arm. If one or both treatment arms proved feasible, the trial would continue to recruit to the feasible treatment arm(s) to improve the estimates of outcomes to be used in the phase III trial. Stopping rules were essential to enable pre-planned decisions on the futility of continuing to the required 44 patients on each arm. 
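Purely as an illustration of how such a pre-planned futility check can be simulated (this is not the trial's actual statistical analysis plan; the feasibility definition, the 30-of-44 threshold and the interim counts below are hypothetical assumptions), a Beta-Binomial predictive-probability calculation might look like the following sketch.
```python
import numpy as np

rng = np.random.default_rng(1)

def predictive_prob_feasible(successes, evaluable, n_final=44, min_successes=30,
                             prior=(1.0, 1.0), n_sims=20_000):
    """Probability that, by the end of recruitment, at least `min_successes`
    of `n_final` patients meet the feasibility definition, given interim data.
    Uses a Beta-Binomial predictive distribution with a flat prior by default."""
    a, b = prior
    post_a = a + successes
    post_b = b + (evaluable - successes)
    remaining = n_final - evaluable
    p_draws = rng.beta(post_a, post_b, size=n_sims)   # posterior draws of the feasibility rate
    future = rng.binomial(remaining, p_draws)         # simulated outcomes of patients still to be assessed
    return np.mean(successes + future >= min_successes)

# Hypothetical interim look: 10 of 21 evaluable patients on one arm met the
# feasibility definition; a low predictive probability would support stopping that arm.
print(predictive_prob_feasible(successes=10, evaluable=21))
```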
Simulations to assess futility were used in the decision making process.At the first pre-planned interim analysis when 56 patients had been recruited (28 on each arm) and 21 patients had finished treatment, it was clear that one arm of the trial would not meet the feasibility criteria on completion of recruitment. On the advice of the Independent Data Monitoring Committee, this arm of the study closed to recruitment but follow-up for these patients continued.Closure of one arm of the study resulted in a temporary halt of the trial while the protocol was amended to a single arm study, the patient information sheet was updated and all the relevant ethics, MHRA and Research and Development approvals were obtained. This temporary two month halt in recruitment to the study carried not only a loss of potential patients during the closed period but also a loss of momentum of centres on re-opening of the study and hence resulted in a delay in completing recruitment.Our experiences with this phase II study emphasise the need for full details of any adaptive designs, including all anticipated scenarios and subsequent actions, to be fully incorporated into the protocol and patient information sheet to avoid the need for a temporary halt of recruitment. A detailed statistical analysis plan is also essential in ensuring optimal decision making within these types of studies. The dilemma remains on how best to achieve a seamless transition in the adaptive element whilst not confusing patients with too much information about each possible future adaptation."} +{"text": "Results of the survey show that the overall attitude of the Malaysian stakeholders towards GM products was cautious. Although they acknowledged the presence of moderate perceived benefits associated with GM products surveyed and were moderately encouraging of them, they were also moderately concerned about the risks and moral aspects of the three GM products as well as moderately accepting the risks. Attitudes towards GM products among the stakeholders were found to vary not according to the type of all GM applications but rather depend on the intricate relationships between the attitudinal factors and the type of gene transfers involved. Analyses of variance showed significant differences in the six dimensions of attitude towards GM products across stakeholders' groups.Public acceptance of genetically modified (GM) foods has to be adequately addressed in order for their potential economic and social benefits to be realized. The objective of this paper is to assess the attitude of the Malaysian public toward GM foods and GM medicine (GM insulin). A survey was carried out using self-constructed multidimensional instrument measuring attitudes towards GM products. The respondents ( There has been significant advancement in modern biotechnology worldwide in the past ten years. Current biotechnology products mostly focus on the commercialization of biopharmaceuticals followedIn Malaysia, biotechnology has been identified as one of the five core technologies that will accelerate the country's transformation into a highly industrialized nation by 2020 . Most ofThe advancement in gene technology for the production of GM crops and biopharmaceuticals has been so rapid in the past fifteen years, making it the object of an intense and divisive debate worldwide. The acceptance of gene technology varies from country to country and across different applications of the technology . 
Past suThe studies of public attitudes towards biotechnology and GM foods have many similarities with risk perception studies. The psychometric approach suggests that the public do not perceive technological risk according to a single dimension related to predicted injuries or fatalities akin to a risk assessor's viewpoint but interpret risk as a multidimensional concept, concerned with broader qualitative attributes , 36. WitThe objective of this paper is to assess and compare the attitudes of the Klang Valley stakeholders towards two GM foods: genetically modified (GM) soybean , GM palm oil , and GM insulin (involving the transfer of human genes into bacteria). GM soybean and GM insulin are already available in the Malaysian market while GM palm oil is a high priority area of research in Malaysia.A survey was carried out between June 2004 and February 2005. The people in the Klang Valley region were chosen as the targeted population as it is the centre of the country's economic and social development as well as the respondents in this region meet the requirement of diverse background stated in the model.N = 1017) were adults (age 18 years old and above) stratified according to various interest or stakeholder groups listed in f = 0.25) at P = 0.05 and u = 12, a sample of 22 subjects per group is required to obtain a power of 0.80 [In this study, the stakeholder-based survey approach recommended by Aerni was adop of 0.80 . So each of 0.80 , for anyThe questionnaires were handed out personally to respondents by biotechnology graduate enumerators who were trained to be neutral on their stance of GM products. Before answering, the respondents were given an introduction to basic concepts and examples of GM foods and GM medicine. They were also exposed to the real scenario of GM products debate on the possible benefits and risks and regulation of GM products, and they were given the chance to enquire further. This approach was suggested by Kelley to assesThe multidimensional instrument measuring attitudes towards GM foods and medicine used in this study was constructed based on earlier research . The insInitially, reliability tests were carried out using SPSS version 12.0 to assess the consistency and unidimensionality of the constructs. Analyses of variance (ANOVAs) were also carried out using the same statistical package. Confirmatory factor analysis was carried out using Analysis of Moment Structure (AMOS) software version 19 with maximum likehood estimation to validate the measures.\u03c72 = 608.2, df = 208, P < 0.001). Chi-square statistical significant test is not very useful in indicating the fit of the model in this study due to the large sample size [A single step SEM analysis as proposed by Hair Jr et al. was carrple size . Other fple size , NFI, CFple size , 50 are Three types of reliabilities measured in this paper are the internal consistency , item reliability, and construct reliability. The Cronbach's alpha coefficients for majority of the constructs were considered good (above 0.70) . The corTwo validity measures were tested in this paper. The convergent validity was assessed by the factor loadings and composite reliability . The staAttitudes towards GM foods and GM insulin were analyzed based on six dimensions: familiarity, perceived benefits, perceived risks, risk acceptance, moral concerns, and encouragement.F = 4.26, P < 0.001), GM palm oil , and GM insulin across stakeholders. 
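The sample-size statement above (a medium effect size f = 0.25 at P = 0.05 with u = 12, giving 22 subjects per group for power 0.80) can be cross-checked directly from the noncentral F distribution. The sketch below is only such a cross-check, not the authors' calculation; the figure of 13 groups is inferred from u = 12 and should be treated as an assumption.
```python
from scipy.stats import f as f_dist, ncf

def anova_power(f_effect, n_per_group, k_groups, alpha=0.05):
    """Power of a one-way fixed-effects ANOVA for Cohen's effect size f."""
    n_total = n_per_group * k_groups
    dfn = k_groups - 1                      # numerator df (u in the text)
    dfd = n_total - k_groups
    nc = (f_effect ** 2) * n_total          # noncentrality parameter
    f_crit = f_dist.ppf(1 - alpha, dfn, dfd)
    return 1 - ncf.cdf(f_crit, dfn, dfd, nc)

# With a medium effect (f = 0.25) and 13 groups this should give power close
# to 0.80 at around n = 22 per group, consistent with the figure quoted above.
for n in range(18, 26):
    print(n, round(anova_power(0.25, n, 13), 3))
```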
The biology students scored the highest weighted average in terms of familiarity with the three GM products and post hoc tests showed that their rating of GM insulin differed significantly from majority of other stakeholders except for the producers , GM palm oil , and GM insulin across stakeholders. Post hoc analyses of the beneficial aspects of the three surveyed GM applications highlighted the significant difference in opinion of the biology students compared to the media and the general public , GM palm oil , and GM insulin . Post hoc test showed the media's rating of GM insulin as significantly differed from the biologists, policy makers, and biology students , GM palm oil , and GM insulin across stakeholders. The media was noticeably the most critical with the lowest rating for GM soybean and GM palm oil. Post hoc tests showed that their acceptance of risk for GM soybean, GM palm oil, and GM insulin were significantly different from the biology students and additionally differed with the Buddhist scholars with respect to the acceptance of risk related to the two GM foods , and GM palm oil , and GM insulin . Post hoc tests confirmed that the Buddhist and Christian scholars significantly perceived the moral aspects of the GM applications surveyed as higher than the majority of stakeholders except for the media and Hindu scholars , GM palm oil , and GM insulin across stakeholders. Post hoc tests showed significant difference in the support of the biology students towards all three GM products compared to the Buddhist and Christian scholars, additionally differed with the Islamic scholars with respect to GM soybean and GM insulin, displayed higher rating than the general public in their opinion of GM palm oil, and showed higher rating of GM insulin compared to the Hindu scholars followed by GM insulin (weighted average 4.44) and GM soybean was the most supported but, on the other hand, the other type of GM food (GM soybean) was the least supported compared to GM insulin . When a Overall, the Malaysian stakeholders were the most supportive of GM palm oil followed by GM insulin and GM soybean . The weighted averages for overall encouragement in this study are slightly higher than the mean score of Malaysians' attitude towards agricultural biotechnology in the ISAAA-UIUC 2003a) report. Across Asia, the support for GM foods and medicines was not very encouraging too. Data on public attitude in other Asian countries is rather limited for comparison and those available used different questionnaires. The urban shoppers in Seoul also perceived the benefits of GM foods as moderate too 03a repor. In a moComparing across stakeholders, the biology students were clearly the most enthusiastic about GM foods and GM insulin surveyed as well as claiming to be the most familiar with those products. This could be because they were still studying and therefore were actively seeking information related to their courses. It is typical of the biology courses in Malaysia to focus on the theory, concepts, and applications/development of GM products rather than on the risk aspects. So it is to be expected that the biology students would be highly enthusiastic about the potentials of GM products. On the other hand, the media subjects were noticeably the most critical of GM foods and GM insulin. This finding again supported the earlier study by ISAAA-UIUC , where tThe NGOs in the Klang Valley region were not strongly against GM foods and GM insulin. 
Their perception of the risks related to the three GM applications surveyed was in the moderate range, acknowledging moderate benefits and moderately encouraging the three GM applications comparable to the majority of the other stakeholders. This is a favourable finding for the country and in contrast with Aerni's study . Aerni reportedSurprisingly and as cause for concern, the biotechnologists and policy makers claimed to have only moderate familiarity with GM foods and GM insulin surveyed and their weighted averages were in the same category with the majority of the other stakeholders with the exception of the biology students. Even more worrying is the ignorance of the policy makers who are responsible for making decisions regarding modern biotechnology issues in Malaysia. They professed to have low familiarity with GM soybean and although their familiarity with GM palm oil and GM insulin were in the moderate category, they were within the lowest three ranking together with the religious scholars groups. The reason behind this could be because biotechnologists tended to focus their activities on research and development of new products while the policy makers are mostly biotechnology related academicians or researchers who also share the same priorities.Attitude of the scientists (biotechnologists and biologists), policy makers, and the producers in the Klang Valley region seemed to be cautious towards GM products. Although their attitudes were inclined towards the positive side compared to the other stakeholders except for the biology students, they also seemed to have some reservations about GM foods and GM insulin. The attitude towards GM products scenario is rather common worldwide. Torres et al. stated tAmong the stakeholders, the Buddhist and Christian scholars were highly concerned about the moral aspects of GM foods and GM insulin surveyed while the Hindu scholars perceived the moral aspects of GM insulin as high. Aerni found thThe politicians and the general public were found to have moderate attitude towards GM foods and GM insulin. The consumers in the Philippines were more positive towards GM crops but it sDespite significant developments in modern biotechnology and GM foods worldwide and in Malaysia, the Klang Valley stakeholders' overall attitude towards GM foods and GM insulin was found to be cautious. Although they acknowledged the presence of moderate perceived benefits associated with GM foods and GM insulin surveyed and were moderately encouraging of them, they were also moderately concerned about the risks and moral aspects of the three products as well as moderately accepting the risks. Results from this study revealed that acceptance of modern biotechnology products by the stakeholders varies not so much according to the type of applications or products but rather on the intricate relationships between all the factors, familiarity, benefit to consumers, and the moral aspects or the type of gene transfers involved. If the biotechnology application offers high and clear benefit to consumers and is of low moral concern, the risk associated with it would be highly compensated (acceptable) and the application would be highly encouraged. 
It is suggested that biotechnologists and industries assess the benefit, risk, and moral aspects of new GM food and GM applications/products to gauge public acceptance of the applications before embarking on R&D and commercialization to avoid the loss of huge amount of financial and labour investments if the products turn out to be unacceptable to consumers. Labelling GM foods and GM products is also recommended to increase consumers' confidence in the products besides the need to make available scientific evidence on the safety of modern biotechnology products by independent researchers.The research findings serve as a useful database for understanding public acceptance of GM foods in a developing country to understand the social construct of public attitude towards GM foods. A more in-depth study needs to be carried out to evaluate the reasons for the low familiarity of the Malaysian public especially among the biotechnologists and policy makers on GM foods and to explore the cautious attitude of the scientists (biotechnologists and biologists), policy makers, and the producers in the Klang Valley region towards GM foods and products. There is also a need to understand the religious perspectives of various religions on the moral aspects of genetic modification as well as looking at the actual reasons behind the critical nature of the people in media."} +{"text": "Successful removal of metalwork requires a skilled surgeon and the correct instruments. We describe a simple method for the removal of an AO (Arbeitsgemeinschaft f\u00fcr Osteosynthesefragen) unreamed tibial nail in the absence of the correct extraction bolt. The threaded rods from the Taylor Spatial Frame fit into the proximal end of the nail perfectly, allowing for the easy extraction of the nail. The addition of a hexagonal post allows t"} +{"text": "We describe the case of a 68-year-old male with autopsy-confirmed sporadic CJD (CJDs) who had undergone 2 colonoscopies prior to this diagnosis.The involved endoscopy centre had multiple colonoscopes and gastroscopes that are cleaned and disinfected in the same automatic washers/disinfectors (AWD). There was no system in place to track the use and disinfection of individual endoscopes.Four questions arise:- Is it necessary to dispose of colonoscopes potentially contaminated by CJDs?- Is it necessary to dispose of the AWD where the endoscopes were washed?- Is it necessary to dispose gastroscopes at risk of contamination during the disinfection process in the AWD?- Is it necessary to inform the patients who were exposed to these endoscopes ?We estimated that this situation occurs approximately 17 times each year in Switzerland. To answer these questions requires data on the presence of CJDs prions in the colon, the risk of contamination of the endoscopes, the risk of prion transmissions to other patients via the endoscopes, and the procedures of cleansing and disinfection. Finally, it is also necessary to take into account psychological, financial and ethical implications for the endoscopy centre and the patients exposed to the potentially contaminated endoscopes.This complex situation highlights the need for guidance recommendations in this area.None declared."} +{"text": "Presently, different studies are conducted related to the topic of biomass potential to generate through anaerobic fermentation process alternative fuels supposed to support the existing fossil fuel resources, which are more and more needed, in quantity, but also in quality of so called green energy. 
The present study focuses on depicting an alternative way of utilizing agricultural biomass residues through anaerobic fermentation in order to obtain biogas with satisfactory characteristics. The research is based on wheat bran and a mix of damaged ground grains as substrates for biogas production.The information and conclusions delivered cover the general characteristics of the biomass used, the process parameters with a direct impact on biogas production, and the daily biogas production for each batch relative to the material used.All conclusions are based on processing of the process monitoring results, with emphasis on the influence of temperature and pH on the daily biogas production of the two batches. The main conclusion underlines the fact that the mixture batch produces a larger quantity of biogas, under approximately the same process conditions and input, than the individually analysed substrates, indicating a higher potential for biogas production than the wheat bran substrate.Adrian Eugen Cioabla, Ioana Ionel, Gabriela-Alina Dumitrel and Francisc Popescu contributed equally to this work. AEC and FP performed the experimental research and elaborated the paper. II coordinated the whole research, including the drawing of the conclusions, and coordinated the final form of the paper. GAD performed the mathematical analysis. All authors read and approved the final manuscript."} +{"text": "Dentigerous cysts, commonly encountered in the practice of dentistry, are benign odontogenic cysts associated with crowns of unerupted and/or impacted permanent teeth. They frequently occur during the second and third decades of life. Treatment modalities range from enucleation to marsupialization, which may be influenced by the age of the patient, severity of impaction, and root form of associated tooth/teeth. The purpose of this report is to describe the successful outcome of conservative surgical management of a large dentigerous cyst associated with an impacted mandibular second premolar in a young patient. Maxillary and mandibular premolars have also been associated with dentigerous cysts.7 Dentigerous cysts have also been reported in association with impacted deciduous teeth.9 Dentigerous cyst or follicular cyst is an odontogenic cyst associated with the crown of an impacted, embedded, unerupted or developing tooth. The cyst which encloses the crown of an unerupted tooth is attached to the cervical region of the tooth. It is the second most common type of odontogenic cysts, accounting for 14% to 24% of all jaw cysts.10 The usual radiographic feature is characterized by a symmetric, well-defined, usually unilocular radiolucent lesion surrounding the crown of an unerupted tooth. Generally there is a distinct, dense periphery of reactive bone (condensing osteitis) with a radiolucent center. These cysts can also manifest as multilocular entities and occasionally may be associated with resorption of the roots of adjacent erupted teeth.13 Clinically, patients with dentigerous cysts are generally asymptomatic. They are often described as an incidental radiographic finding on routine radiographs or when films are obtained to determine why a tooth has failed to erupt or when an acute inflammation or infective exacerbation occurs. Dentigerous cysts are generally treated by surgical means.
The most common surgical modalities used are total enucleation,This case study describes a conservative surgical approach combined with routine orthodontic treatment of a dentigerous cyst associated with a mandibular second premolar in an adolescent.An 11-year-old female was referred by her dentist in Community Medical Center to the Orthodontic Section, Department of Dentistry, Hamad Medical Corporation, Doha, Qatar, with the chief complaint of a swelling on the left side of her lower jaw since three months earlier. The swelling had been growing slowly over the period and was associated with no pain or discharge. The overall general physical health of the patient was good with nonspecific general medical history, without any contraindication to dental treatment.The extraoral examination revealed a symmetrical orthognathic facial profile with no signs of neurological deficit in the lower half of the face. There was no sign of any regional lymphadenopathy. Intraoral examination revealed a mixed stage of dentition, bilateral Class I molar relationships and Class I incisors. A hard, non-tender, non-fluctuant swelling of 2.5 \u00d7 2 cm was evident in the lower left vestibule, extending from the distal surface of the left permanent canine to the distal surface of ipsilateral first permanent molar. The swelling was associated with expansion of buccal and lingual cortical plates and covered by healthy-appearing and freely-moving mucosa. The teeth adjacent to the swelling were quite firm and not associated with any decay. A panoramic radiographic examination (OPG) revealed the presence of all the permanent teeth without any decayed or supernumerary teeth. There was a well-circumscribed unilocular radiolucent lesion in the body of the mandible on the left side, which was associated with the crown of a vertically impacted second premolar. The root of the impacted second bicuspid was developed approximately up to half of its usual length and the apex was quite wide open. The cystic structure appeared to have originated from the second bicuspid with inferior and distal displacement of the same tooth. The corresponding deciduous tooth (second molar) was still present with normal crown and roots.A clinical diagnosis of a dentigerous cyst involving the crown of the impacted left mandibular second bicuspid was made with the differential diagnosis of an inflammatory cyst, a keratocyst and a unilocular ameloblastoma.Aims and objectivesConsidering the age of the patient, her occlusal status, size of the cyst, position, and developmental stage of the root of the involved tooth a conservative treatment modality was decided upon. The main objectives of the treatment were clinical and radiographic elimination of the pathologic entity and to bring the involved permanent tooth into its proper position.Treatment planExtraction of the left mandibular second deciduous molar and decompression of the cyst through the extraction socket.Histopathologic examination of the cystic lining.Trans-lingual arch to hold the permanent first molars bilaterally in their current position and to maintain the space for unerupted left bicuspids.Follow-up of the progress of eruption of the impacted second bicuspid with periodic radiographs.Fitting of a fixed orthodontic appliance for final alignment of teeth in due course if needed.Treatment progressThe left second deciduous molar was extracted under local anesthesia and the socket was used to establish a communication between the cyst cavity and the oral cavity. 
An incisional biopsy was obtained from the cyst wall for histopathologic examination, which confirmed the initial diagnosis of a dentigerous cyst without evidence of any dysplastic changes. A BIPP (bismuth iodoform paraffin paste) gauze pack was inserted into the cyst cavity and secured with a suture. One week after surgery, the pack was removed and repacking was done with another BIPP gauze pack, which was kept in place for another week. Two weeks after surgery, the patient was sent back to an orthodontist for the fabrication of trans-lingual arch and for further follow-up.However, the patient could not return for further follow-up and treatment as she was out of the country. She reported eight months after the initial surgery and at this stage the clinical examination revealed inadequate space for the unerupted second premolar. The panoramic view (OPG) showed a favorable change in the position of the impacted premolar with increasedradiopacity of cystic lesion, suggesting osteogenesis. A fixed14The aim of treatment for dentigerous cysts is complete elimination of pathology and maintenance of dentition with minimal surgical intervention. Recently-defined criteria for selecting the treatment modality refer to the cyst size and location of the cyst, patient age, dentition involved, stage of root development, position of the involved tooth in the jaw and its relation to the adjacent teeth, and involvement of adjacent vital structures.14 This approach is favored in cases involving impaction of a single tooth, such as a wisdom tooth in an adult, which has no function; however, it is not often in the patient\u2019s best interests. In particular, extractions of associated teeth in children may have functional, esthetic, and psychological consequences. Thus a conservative surgical approach was chosen for this patient, which consisted of the removal of one deciduous tooth and an incisional biopsy for histological examination, considered essential to confirm the diagnosis. Many authors have emphasized the importance of maintaining the opening between the cyst and the oral cavity artificially not only to heal the cystic lesion but also to prevent the formation of a fibrous scar which can impair eruption of the involved teeth.10 However, in this case no such attempt was made to maintain the patency of the cyst opening into the oral cavity except for keeping the surgical pack in place for only two weeks. This finding supports the hypothesis that simple decompression of the cyst and further root development would allow for the spontaneous relocation of the associated tooth.Surgical treatment of dentigerous cysts usually includes enucleation of the lesion along with the removal of associated teeth.7 the optimal timing for the initiation of surgical treatment of a cyst-associated tooth is when the tooth has the ability to erupt. Radiographically the roots of these teeth should show at least 2/3 of root formation with an open apex. Although optimal timing correlates with the 2/3 of root development, in this case the radiograph showed approximately 1/2 of root development. 
The decision was made to administer treatment at this stage considering the initial position of the involved tooth in relation to the lower border of the mandible and to allow for further root development by an early release of intra-cystic pressure.According to Miyawaki et al,6 However in this case an early initiation of orthodontic treatment was essential as there was inadequate space for the impacted second premolar due to drifting of adjacent teeth into the extraction site of the second deciduous molar.It has also been well-documented by many authors in different case studies that complex orthodontic treatment can be avoided by maintaining the space in the arch for underlying involved tooth.It took a total of 37 months to finish the treatment, which emphasizes the importance of close supervision by the operators. The result also highlights the potential for healing of the cystic lesion after its decompression, particularly in young children. As revealed by the panoramic view, there was complete radiologic healing of the cystic lesion, including the filling of cystic cavity with normal trabecular bone at the end of the treatment .This case report presents a number of points which are noteworthy:A simple and conservative surgical approach should be preferred for a large dentigerous cyst in an adolescent in the mixed dentition period. This not only preserves the function and esthetic values but also prevents the child from psycho-social trauma due to tooth loss.The capacity to regenerate bone is greater among children than among adults and teeth with open apices have great eruptive potential. Thus, large dentigerous cysts in children can be treated differently and conservative treatment with tooth preservation should always be considered.This case report also provides a commentary on the optimal timing to initiate surgical treatment of a cyst-associated tooth. It supports the concept of initiation of treatment when radiographic analysis of the roots and apex of these teeth shows at least 1/2 - 2/3 of root formation with an open apex.A panoramic (OPG) film is a common and reliable tool for the diagnosis and assessment of progress in healing of a mandibular dentigerous cyst along with the change in the position of the associated tooth.However, this conservative approach does require close cooperation on the part of both the patient and the dental practitioners in order to monitor the healing of the lesion and change in the position of the involved tooth. The result can be the elimination of pathology and maintenance of the dentition."} +{"text": "The study aimed to evaluate the level of burnout among the clinical dental students in two Jordanian universities.A total of 307 students from the two schools were surveyed using Maslach Burnout Inventory survey. Scores for the inventory\u2019s subscales were calculated and the mean values for the students\u2019 groups were computed separately. Kruskal-Wallis and Mann-Whitney tests were carried out and the results were compared at 95% confidence level.The results showed that the dental students in both Jordanian universities suffered high levels of emotional exhaustion and depersonalization compared to reported levels for dental students in other countries. 
The dental students of the University of Jordan demonstrated a significantly higher (p < 0.05) level of emotional exhaustion than their counterparts in the Jordan University of Science and Technology.The findings indicated that dental students in the Jordanian universities presented considerable degrees of burnout manifested by high levels of emotional exhaustion and depersonalization. Studies targeting students health and psychology should be carried out to determine the causes of burnout among dental students. The curricula of the dental schools in the two universities should be accordingly improved to minimize burnout among the students.Burnout; Emotional exhaustion; Depersonalization; Personal accomplishment; Maslach Burnout Inventory Dentistry is a profession demanding physical and mental efforts as well as people contacts.Excessive contact and handling of people can result in a condition known as Burnout. Burnout, therefore, is defined as \u2018a syndrome of emotional exhaustion and cynicism that occurs frequently among individuals who do \u2018people-work\u2019 of some kind\u2019 . BurnoutThe rate of burnout among dentists and its effect on their lives have been previously investigated by many researchers . It has Many researchers studied the existence of stress among dental students ,7-12, a In a study aimed at testing the rate of burnout among German dental students of three universities , it was The perceived stress among dental students had been attributed to factors such as fear of failure , the loaThe aim of this study was to determine the presence and level of burnout among the clinical dental students in the University of Jordan and in the Jordan University of Science and Technology, and also to compare the results with those of other countries.The extent of burnout in 307 clinical dental students of the University of Jordan (UJ) and Jordan University of Science and Technology (JUST) was assessed by employing the Maslach Burnout Inventory - Human Services Survey (MBI-HSS) . The invThe employed inventory is an effective tool of proven reliability and validity in detecting the presence and assessing the degree of burnout in human services workers.The English version of the inventory was used since English is the teaching language in the two Jordanian dental schools.The clinical students (fourth and fifth year) of both dental schools of UJ and JUST were asked to complete the inventory during one lecture at each of the universities during the second semester after the midterm examinations. The instructions were explained thoroughly to the students by the researchers. The response was totally anonymous to ensure confidentiality.The total score for each of the three subscales was calculated for each student by calculating the sum of responses to the items in the subscale. Mean values were then calculated for each student group separately. Shapiro-Wilk test of normality indicated that the data was not normally distributed in many instances. Consequently, non-parametric statistical tests were used in comparing the results of the four student groups. Kruskal-Wallis one-way analysis of variance (ANOVA) based on ranks was used in determining whether any significant differences existed among the student groups. Multiple comparisons among the different groups were performed using Mann-Whitney rank sum test. The various data sets were rigorously treated statistically at 95% level of confidence. 
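A minimal sketch of how the subscale scoring and the non-parametric comparisons described here could be reproduced outside SPSS is given below. The file name, column names and group labels are placeholders, and the item counts simply follow the usual MBI-HSS subscale sizes (9 emotional exhaustion, 5 depersonalization and 8 personal accomplishment items) rather than anything stated in the text.
```python
import pandas as pd
from scipy.stats import kruskal, mannwhitneyu
from itertools import combinations

# Hypothetical layout: one row per student, a 'group' column (UJ4, UJ5, JUST4, JUST5)
# and item columns ee1..ee9, dp1..dp5, pa1..pa8.
df = pd.read_csv("mbi_responses.csv")          # placeholder file name

subscales = {
    "EE": [f"ee{i}" for i in range(1, 10)],    # emotional exhaustion
    "DP": [f"dp{i}" for i in range(1, 6)],     # depersonalization
    "PA": [f"pa{i}" for i in range(1, 9)],     # personal accomplishment
}
for name, items in subscales.items():
    df[name] = df[items].sum(axis=1)           # subscale score = sum of item responses

groups = ["UJ4", "UJ5", "JUST4", "JUST5"]
for name in subscales:
    samples = [df.loc[df["group"] == g, name] for g in groups]
    h, p = kruskal(*samples)                   # omnibus test across the four student groups
    print(f"{name}: Kruskal-Wallis H={h:.2f}, p={p:.4f}")
    if p < 0.05:                               # follow up with pairwise Mann-Whitney tests
        for g1, g2 in combinations(groups, 2):
            u, p_pair = mannwhitneyu(df.loc[df["group"] == g1, name],
                                     df.loc[df["group"] == g2, name])
            print(f"  {g1} vs {g2}: U={u:.1f}, p={p_pair:.4f}")
```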
Statistical analysis was carried out using the Statistical Package for Social Sciences .The responses for the 4th and 5th year students of each university were described in terms of means, standard deviations, variances, minimum and maximum values and Fig.Cut-off values above which a subscale score was deemed high were obtained by correcting the values indicated in the MBI-Scoring Key to compensate for the difference in scoring scale.Female students in UJ had higher degrees of emotional exhaustion (p < 0.05) than their male counterparts while no significant differences were found between males and females of both universities neither in depersonalization nor in personal achievement scores .Pairwise comparisons have shown that the 4th year UJ students exhibited significantly higher degrees of burnout (p < 0.05) in the three subscales than their 5th year counterparts. On the other hand, no significant differences were found between 4th and 5th year students in JUST. Comparisons between universities have shown that UJ students had significantly higher degrees of emotional exhaustion (p < 0.05) than their counterparts in JUST .The results showed that almost all of the students of both the 4th and 5th years in the two dental schools of the two Jordanian universities suffered high degrees of emotional exhaustion. In a previous study it was rComparisons between the two dental faculties of the two Jordanian universities revealed that UJ students had significantly higher scores of emotional exhaustion than JUST students. Although the clinical training scheme was similar in both universities, there were some differences which were possibly in favour of JUST students. The dental faculties of both universities adopted a clinical training system which obliged the students to successfully finish a minimum amount of clinical requirements in order to pass any clinical course. Nevertheless, the minimum clinical requirements of UJ dental faculty were more in terms of quantity and variety of clinical tasks than those of JUST for 4th or 5th year students. This accounted for more physical and mental efforts of the UJ students as opposed to that of their counterparts in JUST. In addition, UJ students had to find their own patients due to the absence of patients records filing system. The dental faculty of JUST, on the other hand, organized the students patient records and distributed them according to the students needs. This had increased the number of patients the UJ students had to interact with, which consequently increased the emotional exhaustion level in those students.Clinical sessions were longer and less frequent in JUST dental faculty. As a result, UJ students were more stressed trying to finish their procedures in time. Not to mention, that more clinical sessions per week meant more patients contacts, which in turn, increased the level of emotional exhaustion.The availability of the staff during the clinical sessions might also have played a role. In JUST, the staff to student ratio was very favourable for both the learning experience and the students mental health. As each clinical instructor handled less number of students, the students received longer time and better attention from the staff. Moreover, JUST provided an Allied Dental Sciences program, whose students assisted the clinical dental students during their sessions by putting four-handed dentistry into practice. The dental faculty at the UJ did not offer any allied dental sciences programs. 
This has obliged the dental students of the UJ to look for and find their own patients and to handle them entirely by themselves, unassisted by ancillary personnel.Depersonalization may be the most critical aspect of burnout in a health care profession like dentistry. Perceiving the patient as an impersonal object rather than a human being might result in detrimental negligence in treatment procedures and disregard of the psychological aspect of treating the patients. Fifth year students had significantly higher scores of depersonalization than 4th year students in both dental schools. No significant differences were found between students of the two schools in the same academic level. These findings might suggest that depersonalization increased with increased patient contact.Longitudinal studies are necessary to confirm such suggestion, but increasing the student as well as the instructors awareness to such problem is imperative to avoid further development of the problem in the future clinical career of the students.High scores of personal achievement mean more involvement with the patients, more satisfaction with the profession, and consequently, lower degrees of burnout. The lower scores of personal achievement in 5th year students, revealed in this study, should be viewed as another sign of increasing burnout which was related to increased patient contact in terms of frequency and duration. This scale of burnout is extremely important and should be closely monitored during the clinical training period of dental students and even after their graduation to ensure that the lack of personal achievement has diminished.The findings of this study indicated that dental students in the Jordanian universities suffered considerable degrees of burnout as manifested by high degrees of emotional exhaustion and depersonalization.Longitudinal studies that include pre-clinical dental students as well as young graduates should be carried out to determine how the patterns of burnout vary during the students academic clinical training period and further in their professional lives.Analytical studies targeting students health and psychology should be carried out regularly to determine the causes and factors related to the high degrees of burnout among dental students.The curricula of the dental schools in the two universities should be accordingly improved to minimize burnout among the students.Dental students as well as instructors should be informed about burnout and its elements, to increase their awareness which can alleviate the problem."} +{"text": "During the past two decades, the identification of new scientific developments to improve outcomes in gastrointestinal disorders has been attractive to many researchers. Pharmaceutical industries are now more motivated to introduce novel therapeutic remedies in the treatment of gastrointestinal disorders. Such disorders have increased at an exponential rate in various patient communities and both natural and synthetic compounds have been investigated for their potential biological activity in the treatment of these gastrointestinal disorders.Several interesting works were assessed in this special issue. Amongst them five articles were chosen based on their critical findings in gastrointestinal disorders.\u03b2-catenin and COX-2 in colon tumors. The Schiff base metal derivatives also may enhance the expression of HSP70 and suppress the expression of BAX proteins in an acute hemorrhagic gastric ulcer model. 
Another animal study assessed the hepatoprotective activity of the ethanolic extract of rhizomes of Z. officinale against thioacetamide-induced hepatotoxicity in rats. The floating dosage form of an anticancer drug was prepared in another study entitled \u201cPreparation and Characterization of a Gastric Floating Dosage Form of Capecitabine.\u201d The work characterized the sustained release tablet in terms of total floating time, dissolution, friability, hardness, drug content, and weight uniformity, compared the prepared formulation with the commercial tablet in terms of drug release, and evaluated the stability of the formulation. The effects of rikkunshito on the decrease in food intake were assessed after induction of stress in mice; rikkunshito ameliorated the decrease in food intake, probably via serotonin 2B receptor antagonism of isoliquiritigenin. In another work, the preventive effect of inositol hexaphosphate extract of rice bran on colon cancer was assessed. The results showed a significant reduction in the expression of \u03b2-catenin and COX-2 in colon tumors. By presenting these articles, we hope that this issue incorporates new scientific evidence and emerging developments as the basis of rational treatment in medicinal practice using novel therapeutics in the treatment of gastrointestinal disorders. Mahmood Ameen Abdulla, Ibrahim Banat, Patrick Naughton"} +{"text": "Chemical and physical processes occurring at the surfaces of the minute grains found in interstellar dust clouds are crucial in the formation of the many different atomic and molecular species found in the interstellar medium. These surface processes include not only relatively simple processes, such as the formation of dihydrogen gas from the atomic hydrogen prevalent in the interstellar medium, but also the assembly of small molecules, such as water and carbon dioxide, and their catalysis into bigger, more complex species. When the dust grains coalesce into planetary accretion discs, these molecules may be retained and become trapped in the planets as they form. Direct measurements, laboratory studies, theoretical investigations and mathematical models all serve to improve our understanding of these complex processes. This Theme Issue is the outcome of a Royal Society International Scientific Seminar on \u2018Surface science in the interstellar medium\u2019, which brought together an interdisciplinary group of scientists to explore how their complementary expertise in chemistry, physics, astronomy, computing and mathematics could be exploited to provide a fuller picture of this fascinating area of research. The papers presented here showcase a wide variety of state-of-the-art techniques used to shed light on the formation of the interstellar dust clouds and the surface-mediated reactions that convert simple atoms into increasingly complex molecules. They illustrate how this combination of expertise can unravel a range of complex physico-chemical processes occurring under the extreme conditions prevalent in the interstellar medium. I would like to dedicate this Theme Issue to the memory of Prof. Michael J. Drake, a friend and colleague who was one of the driving forces behind the organization of the Royal Society meeting, but tragically died shortly afterwards. Mike had a profound interest in surface processes occurring in the interstellar medium.
Having spent half a life-time teaching his students the accepted versions of the origin of our planetary water, which increasingly did not fit the available evidence, Mike suggested an alternative hypothesis, where water was already present at the surfaces of interstellar dust grains when they accreted to form our planet. Although this hypothesis fitted with all available evidence, it would only work if the adsorption of water to the dust grains was sufficiently strong to survive the harsh conditions in the accretion disc. Mike then had the insight to look outside his own field and to computer simulations to test his hypothesis. Simulations of water adsorption at dust grain models have shown that the kind of highly fractal surfaces found on interstellar dust grains are indeed suitable for the strong retention of water under the extreme temperatures and pressure conditions prevalent in the accretion disc during planetary formation. Some of this work is illustrated in this Theme Issue, which I trust is a fitting memorial to Mike's contributions to the field."} +{"text": "The reverse sural artery flap is a generally accepted means of soft tissue reconstruction for defects of the distal third of the legs. The routine sacrifice of the sural nerve, with its consequential temporary loss of sensation on the lateral aspect of the foot, can be of concern to early rehabilitation of some patients. This report describes a case in which the injury involved the upper part of the distal 3rd and the middle 3rd of the tibia. A reverse sural artery flap was raised without transecting the sural nerve to cover the distal part of the defect. The distal part of the exposed bone was covered with the reverse sural artery flap without loss of sensation at any time to the lateral part of the foot. The reverse sural artery flap can be raised to cover the upper portion of the distal leg without severing the sural nerve. Reconstruction of soft tissue defects of the middle third has not been as challenging as that of the lower third of the leg, the heel and the hind foot. While the upper and middle thirds are amenable to many options of fasciocutaneous and muscle flaps, the distal third had mainly the option of a free flap up until 1992, when Masquelet et al described the distally based (reverse) sural artery flap. This case report highlights the use of a nerve sparing reverse sural artery flap as a possible option for covering the upper part of the distal tibia and the mid tibia. Mr. A. J. is a 22-year-old iron welder who presented with a one-hour history of pain, deformity and inability to bear weight on the left lower limb following a motorcycle accident. There was no history of symptoms suggestive of injuries to other parts of the body. He was conscious on presentation with stable vital signs. The essential finding was a wound on the antero-medial aspect of the left leg measuring 23 cm x 12 cm, exposing the middle and part of the distal tibia and fibula. There were fractures of the left tibia and fibula. The posterior tibial and dorsalis pedis arterial pulsations were palpable and the cutaneous sensation over L5 and S1 was preserved. The patient was resuscitated and investigated. Radiogram revealed an oblique fracture of the distal 3rd of the tibia with some bone loss and a segmental fracture of the fibula both at the proximal and the middle 3rd (Figure). The patient had wound debridement with application of an external fixator. He subsequently had a reverse sural artery flap raised from the middle 3
It was then transposed as an island flap to cover the distal part of the exposed middle tibia and the upper part of distal tibia. To enhance mobilization, the point at which the sural nerve pierces the fascia was gently cut about 4cm distally and the peroneal communicating nerve was transected at the lateral border of the flap. Hemisoleus flap was raised to cover the remaining exposed mid tibia bone which held on 21st- 27th May, 2011 at the Vancouver Convention Centre, Vancouver British Columbia Canada.Paper presented at the 16The authors declare that they have no competing interests.EEE conceived the study, participated in the design and coordinated the surgeries and drafting the manuscript. NOC participated in the surgery and drafting of manuscript. OJE, AS and AFO participated in literature search, sequence alignment and drafting of the manuscript. All authors read and approved the final manuscript."} +{"text": "Teaching of microscopy in the pharmacy curricula in Europe has almost vanished, although adulterations and mistakes in identification of herbal substances have caused several problems during the last decade. Thus, profound knowledge in the macroscopic and microscopic authentication of herbal drugs is an indispensable prerequisite in quality control.This book provides an excellent overview over macroscopical and much more detailed over microscopical features of more than 140 herbal substances used in European phytotherapy. Those drugs with existing HMPC-, WHO- and ESCOP-monographs have been prioritized. In each of the presented monographs the precise botanical name of the respective plant (under inclusion of recent systematic findings), a photograph of macroscopic characteristics, a short compilation of the most important compounds and the use of the herbal substance is given. Additionally, six clear coloured charts of the most important microscopical characteristics are shown and described in detail. The drugs are arranged according to the plant part used and presented in alphabetical order based on the German names. Finally, some instructions for investigations under use of colouring reagents for specific compounds, a glossar and charts explaining the structure of several plant organs follow. This book is highly recommended to all interested pharmacists whether in public pharmacies or in quality control of herbal substances in industry as well as especially for students, because it provides a detailed insight into this more and more neglected field. The in-depth microscopical part makes the book a valuable teaching tool and excellently suitable for self-learning."} +{"text": "Alcohol and other drugs are frequently used in combination . Based o\u00ae (cimetidine), and Zantac\u00ae (ranitidine), interact with alcohol metabolism leading to a higher level of blood alcohol concentration (BAC) may help explain why polydrug use is dangerous to fetal development. Pharmacokinetic interaction is the process that occurs when two or more drugs are in the system at the same time. Although the pharmacokinetics of individual drugs may be well characterized, when the drugs are combined, one drug can seriously and unpredictably alter the concentration, bioavailability (the rate of a drug entering the bloodstream), and net effect of the other drug. Alternatively, the combination of drugs can alter the bioavailability of either or both drugs or form a metabolite more toxic than either of the parent compounds. 
For example, several well-known over-the-counter medications, such as aspirin, Tagameton (BAC) . BAC levon (BAC) , 1990, aIn another example of the effects of combined drug use, As a final example, the interaction of alcohol and cocaine has been shown to be more harmful than the use of each drug individually because of the formation of the highly toxic metabolite cocaethylene . CocaethPotential harm to the developing fetus resulting from polydrug use during pregnancy is an important area of drug abuse research. Exploring the effects of each drug alone on the developing fetus does not capture the essence of the clinical condition. Studies on interactive effects of polydrug use fill a void in the scientific literature and highlight the importance of recognizing polydrug use during pregnancy as a significant maternal risk factor for fetal and child development and health. Research on how alcohol interacts with other drugs and how such interactions may adversely affect the developing brain will lead to a better categorization of the known detrimental effects from gestational polydrug use and a more focused understanding of the methods to avert or treat the outcomes."} +{"text": "This work entails scaling a biophysical model of the neocortex using parallel NEURON while ruIndividual simulations consisted of stimulation and completion of a single memory pattern within 1 second of cortical activity. Preliminary results show near linear speedups of the computational part of the simulation, but degradation of file I/O performance as the number of cores increase. Since each core writes out spiking activity after the simulation, the performance decline may be due to the ratio of core to I/O nodes and the large number of output files. With this performance analysis, further work will include measuring and scaling memory storage capacity with the described second variation of the biophysical neocortical model."} +{"text": "Variation is a naturally occurring phenomenon that is observable at all levels of morphology, from anatomical variations of DNA molecules to gross variations between whole organisms. The structure of the otic region is no exception. The present paper documents the broad morphological diversity exhibited by the inner ear region of placental mammals using digital endocasts constructed from high-resolution X-ray computed tomography (CT). Descriptions cover the major placental clades, and linear, angular, and volumetric dimensions are reported.The size of the labyrinth is correlated to the overall body mass of individuals, such that large bodied mammals have absolutely larger labyrinths. The ratio between the average arc radius of curvature of the three semicircular canals and body mass of aquatic species is substantially lower than the ratios of related terrestrial taxa, and the volume percentage of the vestibular apparatus of aquatic mammals tends to be less than that calculated for terrestrial species. Aspects of the bony labyrinth are phylogenetically informative, including vestibular reduction in Cetacea, a tall cochlear spiral in caviomorph rodents, a low position of the plane of the lateral semicircular canal compared to the posterior canal in Cetacea and Carnivora, and a low cochlear aspect ratio in Primatomorpha.The morphological descriptions that are presented add a broad baseline of anatomy of the inner ear across many placental mammal clades, for many of which the structure of the bony labyrinth is largely unknown. 
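Returning to the parallel NEURON scaling summary a few paragraphs above, where compute time scaled almost linearly with core count while per-core spike-file output degraded overall performance, the toy function below separates the two contributions when estimating total speedup. The timing values and the io_scaling exponent are hypothetical and are not measurements from that study.

# Hypothetical decomposition of parallel speedup into compute and I/O parts.
def estimated_speedup(t_compute_1, t_io_1, n_cores, io_scaling=0.25):
    # io_scaling < 1 models I/O that benefits only weakly from more cores,
    # e.g. because every rank writes its own spike file after the run.
    t_n = t_compute_1 / n_cores + t_io_1 / (n_cores ** io_scaling)
    return (t_compute_1 + t_io_1) / t_n

for cores in (1, 16, 256, 4096):
    print(cores, round(estimated_speedup(3600.0, 120.0, cores), 1))

In such a model the I/O term quickly dominates at high core counts, which is consistent with the reported degradation; aggregating spike output across ranks before writing is one common mitigation, although whether that applies to the described setup is not stated.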
The data included here complement the growing body of literature on the physiological and phylogenetic significance of bony labyrinth structures in mammals, and they serve as a source of data for future studies on the evolution and function of the vertebrate ear. The ear region, which functions in hearing via the cochlea as well as balance and equilibrium via the vestibule and semicircular canals, is one of the most intensively studied sensory systems in vertebrate anatomy and physiology. The external morphology of the petrosal bone, which surrounds the delicate structures of the inner ear in all mammals, is a common source of characters used in phylogenetic analyses A link between body mass and dimensions of the inner ear have been hypothesized for several mammal groups, particularly primates The internal cavities within the petrosal comprise the bony labyrinth of the inner ear, including the cochlea anteroventrally and the vestibular apparatus posterodorsally . The dimThe labyrinth of the inner ear is difficult to study because the inner ear structures are completely surrounded by bone, and removal of this bone is necessary in order to observe the inner ear cavities . The strMorphology of the inner ear is phylogenetically informative at both more- and less-inclusive taxonomic levels. For example, the cochlea completes at least one complete 360\u00b0 turn in living therian mammals, but less in monotremes and more basal mammals Given the functional and phylogenetic importance of this region of the skull, it is surprising that broad comparisons of the inner ear of mammals are lacking is described. The opossum is considered in many respects to retain ancestral morphologies for Theria Didelphis commonly is used as a marsupial outgroup in phylogenetic analyses investigating placental relationships The morphological descriptions of the placental mammal bony labyrinth are arranged in a phylogenetic framework. As a point of departure for comparison, the bony labyrinth of a marsupial are arranged taxonomically following the relationships recovered for Mesozoic non-placental eutherians From The descriptions of the bony labyrinths of crown placental mammals begin with Afrotheria, and follow in order with Xenarthra, Laurasiatheria, and Euarchontoglires see . The deswww.digimorph.org). No live animals were used for any part of the research reported here. All specimens used in this study are listed in At least one representative of the major clades of placental mammals recovered by the phylogenetic analyses of Bininda-Emonds and others Whenever possible, isolated petrosal bones were CT scanned to maximize the resolution of the images (CT methods described below). The left petrosal was examined for each taxon, with a few exceptions, for consistency. Although cranial asymmetry is known within the ear region Monodelphis domesticaAll specimens were presumed mature, although no rigorous assessment of maturity was ascertained. Although the external surface of the petrosal changes through accretionary growth, there is evidence that the structures of the inner ear do not change significantly once the walls of the bony labyrinth have ossified Trichechus manatus (MSW 03156), which was scanned at Washington University in Saint Louis, MO. Parameters for CT scanning and post-scanning image processing are provided in Digital images obtained through computed tomography (CT) was employed to observe the internal chambers of the petrosal that constitute the bony labyrinth. 
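The three-dimensional segmentation itself was performed in commercial packages (VGStudio Max and Amira, as detailed in the methods), but the underlying idea can be illustrated with a minimal open-source sketch: threshold the low-attenuation labyrinthine space out of the reconstructed slice stack and keep the largest connected cavity. The file name, grey-value threshold, and voxel dimensions below are placeholders, not values from the scans used in this study.

# Minimal sketch of endocast segmentation from a CT slice stack.
# Assumes the stack is cropped so the labyrinth is the dominant internal cavity.
import numpy as np
from skimage import io
from scipy import ndimage

volume = io.imread("petrosal_stack.tif")        # 3-D array: (slices, rows, cols)
air = volume < 700                               # placeholder grey-value threshold
labels, n_components = ndimage.label(air)
sizes = ndimage.sum(air, labels, index=range(1, n_components + 1))
endocast = labels == (np.argmax(sizes) + 1)      # largest connected low-density region

voxel_mm = (0.05, 0.02, 0.02)                    # placeholder slice thickness and pixel size
endocast_volume_mm3 = endocast.sum() * np.prod(voxel_mm)
print("endocast volume ~ %.2f mm^3" % endocast_volume_mm3)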
The majority of the specimens used for this study were scanned at the University of Texas High-resolution X-ray CT facility (UTCT) in Austin, TX. The only specimen not scanned at UTCT was \u00a9\u00a9The bony labyrinths were digitally segmented from the CT image data into the various partitions of the inner ear in order to calculate partial volumes of the osseous cavities, as well as create a 3-dimensional representation of the bony labyrinth for visualization purposes. Segmentation was performed in the computer software packages VGStudio Max 1.2 The endocasts constructed for this study are oriented with the plane of the lateral semicircular canal parallel to the horizon. Such an orientation was selected because the lateral semicircular canal usually is held horizontal when the animal is in a state of alertness Angular, linear, and volumetric measurements were made in the Amira software The total length of the cochlear canal from base to apex was measured using the SplineProbe tool in the Amira software Angles were taken between the planes of all of the semicircular canals, as well as between the basal turn of the cochlea and the plane of the lateral semicircular canal, when the planes were oriented perpendicular to the field of view The total angular deviation of a semicircular canal from its respective plane is calculated trigonometrically using two linear measurements of the canal of the arcs of the vertical (anterior and posterior) semicircular canals. These ratios have been hypothesized to distinguish aquatic species from their terrestrial ancestors a priori significance level of 5% (P\u200a=\u200a0.05) based on the current sample size for which body masses are known (N\u200a=\u200a28), any coefficient of correlation 0.38 or above is considered significant using a two-tailed probability model, which is most common in statistical analyses 2) were calculated also to determine the strength of recovered correlations. The coefficient of determination reports the percentage of variation in variable Y that can be explained by X, and coefficients of correlation above 0.70 are considered strong in this study .To ascertain whether the dimensions of the inner ear are correlated to overall body size of the animal, specific measurements were plotted over body mass and the coefficient of correlation (\u201cr\u201d) was calculated in Microsft Excel 2008 for Macintosh. At an Bathygenys reevesi and Balaenopteridae), and a body mass was not used for Canis familiaris given the broad range of body masses observed in domestic dogs If the body mass of the specimen examined was not known, an average was calculated from various published sources http://morphobank.org/index.php/Projects/ProjectOverview/project_id/833). The matrices include discrete characters only (within the \u201cMatrices\u201d section of the project page), continuous characters only, and a combined discrete and continuous character matrix (the latter two can be downloaded within the \u201cDocuments\u201d section of the project page).A phylogenetic analysis was not performed for this study owing to the restricted anatomical region in question and the level at which the taxa were sampled. However, several characters that have been used in phylogenetic analyses, and those are described for the taxa below. 
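Because the ratios and correlation criteria just described (an r of at least 0.38 treated as significant for N = 28 at the 5% level, r above 0.70 treated as strong, and R-squared reported as the coefficient of determination) reduce to a few lines of arithmetic, a hedged sketch is given below. The input arrays are invented examples, the arc-radius formula shown is one common convention and is stated here as an assumption, and the original calculations were made in Microsoft Excel rather than Python.

# Illustrative versions of the measurement ratios and correlation statistics.
# All numbers are invented examples, not measurements from the specimens.
import numpy as np
from scipy import stats

def arc_radius(height_mm, width_mm):
    # Mean of half-height and half-width of the canal arc (assumed convention).
    return 0.5 * (height_mm + width_mm) / 2.0

def aspect_ratio(height_mm, width_mm):
    return height_mm / width_mm

def length_to_radius(slender_length_mm, radius_mm):
    return slender_length_mm / radius_mm

body_mass_kg = np.array([0.05, 0.9, 3.2, 48.0, 2100.0])   # invented example values
canal_radius_mm = np.array([0.9, 1.4, 2.0, 3.1, 6.8])      # invented example values

# A log-log transformation is a conventional choice for body-mass scaling;
# whether it was applied in the original analysis is not stated here.
r, p = stats.pearsonr(np.log10(body_mass_kg), np.log10(canal_radius_mm))
print("r = %.2f, R^2 = %.2f, significant at 5%%: %s" % (r, r ** 2, p < 0.05))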
Ancestral states, both continuous and discrete, for the hypothetical common ancestors of the clades pictured in aa, anterior ampulla; ac, anterior semicircular canal; am, ampulla; ant, anterior direction; ar, semicircular canal arc radius of curvature; av, bony channel for vestibular aqueduct; cc, canaliculus cochleae for cochlear aqueduct; cf, foramina within cribriform plate; cl, length of cochlear canal; cn, canal for cranial nerve VIII; co, cochlea; cp, plane of semicircular canal; cr, common crus; cv, canal for cochlear vein; dor, dorsal direction; er, elliptical recess of vestibule; fc, fenestra cochleae; fn, canal for cranial nerve VII; fv, fenestra vestibuli; hf, hiatus Fallopii for exit of greater petrosal nerve; ht, height; iam, internal auditory meatus; in, incus; la, lateral ampulla; lat, lateral direction; lc, lateral semicircular canal; ld, linear deviation of semicircular canal from its plane; ma, malleus; med, medial direction; pa, posterior ampulla; pc, posterior semicircular canal; pf, perilymphatic foramen; pl, primary bony lamina; pos, posterior direction; pr, promontorium; ps, outpocketing for perilymphatic sac; rl, reference line for measuring coiling of cochlea; sa, subarcuate fossa; sc, semicircular canal; scr, secondary common crus; sg, canal for spiral ganglion within primary bony lamina; sl, secondary bony lamina; sr, spherical recess of vestibule; st, stapes; vb, vestibule; ven, ventral direction; vn, canal for vestibular branch of cranial nerve VIII; wt, width.Abbreviations used in figures: The inner ear of mammals consists of a set of interconnected spaces within the petrosal bone known as the bony labyrinth , which cThe bony cochlear canal is divided within the promontorium of the petrosal into the scala tympani that communicates with the fenestra cochleae (which is covered with a secondary tympanic membrane to accommodate pressure changes within the membranous labyrinth in life), and the scala vestibuli that terminates at the fenestra vestibuli (which accommodates the footplate of the stapes). The division is formed by a bony primary spiral lamina that curves along the modiolus on the axial (inner) wall of the cochlea. A secondary bony lamina often mirrors the primary lamina for a short distance on the opposing wall of the cochlea. The two laminae are connected by the basilar membrane (the laminae do not contact each other directly), upon which the spiral organ of hearing sits. The basilar membrane defines the tympanic wall of the membranous cochlear duct . The vestibular membrane crosses the width of the scala vestibuli to complete the cochlear duct at its vestibular edge. A small opening known as the helicotrema is situated at the apex of the cochlea, and it serves as a connection between the scalae tympani and vestibuli. The cochlear duct is filled with endolymph, and the surrounding space, which includes both the scala tymapni and the scala vestibuli, is filled with perilymph. The perilymphatic duct exits the inner ear near the fenestra cochleae through a bony passage known as the canaliculus cochleae.The bony vestibule is divided into the spherical recess inferiorly and the elliptical recess superiorly. The recesses correspond loosely to the saccule and utricle plus semicircular ducts , but the shapes of the membranous sacs are preserved minimally within the bony vestibule. 
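The discrete ancestral-state reconstructions referred to at the opening of this section can be illustrated with a simple parsimony pass over a phylogeny. The sketch below applies Fitch parsimony to a made-up tree and a made-up binary character; it is offered only as an illustration of the general procedure and is not necessarily the optimization method used for the reconstructions reported here.

# Illustrative Fitch-parsimony downpass for a binary labyrinth character
# (e.g. a canal feature scored present/absent). Tree and tip states are invented.
def fitch_downpass(tree, tip_states):
    """tree: nested 2-tuples with tip-name strings at the leaves."""
    state_sets = {}
    def visit(node):
        if isinstance(node, str):                          # leaf
            s = {tip_states[node]}
        else:
            left, right = visit(node[0]), visit(node[1])
            s = (left & right) or (left | right)           # intersect when possible
        state_sets[node] = s
        return s
    visit(tree)
    return state_sets

tree = (("taxon_A", "taxon_B"), (("taxon_C", "taxon_D"), "taxon_E"))
tip_states = {"taxon_A": "present", "taxon_B": "present",
              "taxon_C": "absent", "taxon_D": "present", "taxon_E": "absent"}
print(fitch_downpass(tree, tip_states)[tree])              # state set at the root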
The saccule, utricle, and semicircular ducts are filled with endolymph (exchange occurs between the cochlea and saccule at their junction), and perilymph fills the remainder of the space. Varying amounts of perilymph surround the semicircular ducts in different species Didelphis virginiana, which is used to represent Marsupialia) and extant placentals (such as Homo sapiens) and all of the descendents of that ancestor. As inferred from ancestral state reconstructions based on the data presented in this paper, the bony labyrinth of the hypothetical therian ancestor possessed a secondary common crus formed between the lateral and posterior semicircular canals . As observed in most mammal species, the arc of the anterior semicircular canal is the largest among the three canals. Ancestral state reconstructions based on the specimens examined for this study indicate that the cochlea completes a 685\u00b0 coil (nearly two turns). However, as discussed below in the descriptions of the bony labyrinths of Didelphis virginiana and Kulbeckia kulbecke, the cochlea of the therian ancestor likely completed a single 360\u00b0 turn only The plane of the lateral canal is positioned low with respect to the ampullar entrance of the posterior canal so that the area of the arc of the posterior canal is not divided by the lateral canal in anterior view, as it is in most extant placentals is higher than that calculated for basal taxa along the eutherian lineage . The ancestor of Theria likely possessed a cochlea with a low aspect ratio given the close similarities between basal metatherian and eutherian labyrinths The cochlea contributes 66% of the total inner ear volume. The aspect ratio of the cochlea of the ancestral therian is reconstructed as low, although the aspect ratio in Didelphis virginiana is described for comparison with the inner ear structures of crown placentals and their Mesozoic eutherian relatives. Dimensions of the bony labyrinth as a whole structure of Didelphis are provided in The structure of the inner ear of Didelphis is a common animal in North America, despite it being the only North American marsupial. The body mass of the specimen used (TMM M-2517) is 2.8 kg . The cochlear spiral is high in profile .Compared to the reconstructions for the therian ancestor, the bony labyrinth of wo turns . Lastly,Kulbeckia kulbecke is provided here as a representative of non-placental Mesozoic eutherians , Ukhaatherium nessovi, and Zalambdalestes lechei (the latter two taxa from Mongolia). The relationships depicted for these taxa in Kulbeckia are averages across a sample of four petrosal specimens. The bony labyrinth of Kulbeckia is illustrated in Eutheria is defined as the most recent common ancestor of crown Placentalia and all taxa more closely related to Placentalia than to Marsupialia . A brief overview of the labyrinth of Kulbeckia is slightly less than that measured for Didelphis .The plane of the basal turn of the cochlea is rotated from the plane of the lateral semicircular canal by a lesser degree in idelphis , and theidelphis . The sphidelphis . As was s scr in , and thes scr in . The coms scr in . The bonThe planes of the lateral and posterior semicircular canals almost form a right angle, but the angles that each of these canals form with the anterior canal are acute . The antThe anterior semicircular canal deviates the most from its plane, and the lateral canal is the most planar . 
The antKulbeckia and the other Mesozoic taxa, as well as Didelphis, were used to reconstruct the ancestral states of Eutheria. The bony labyrinth of the ancestor of Eutheria retained the ancestral therian conditions in all respects. The lateral semicircular canal formed a secondary common crus with the posterior canal, the plane of the lateral canal was low compared to the ampullar entrance of the posterior semicircular canal, the arc of the anterior semicircular canal was the largest among the three semicircular canals, and the aspect ratio of the cochlea was low (below 0.55). All ancestors at the nodes leading to crown Placentalia retained the ancestral eutherian states for all discrete characters.The inner ear morphology of Ukhaatherium and Placentalia; 56% for the most recent common ancestor of Zalambdalestidae, which includes Kulbeckia and Zalambdalestes, and Placentalia). The contribution of the cochlea of the ancestral zalambdalestid was 51%.The contribution of the ancestral eutherian cochlea to the total inner ear volume was 64%, which was only slightly less than that reconstructed for Theria (66%), and the percentage decreased through time . As discussed above in the reconstruction of the ancestor of Theria, these values are overestimates, and the ancestral eutherian most likely possessed a cochlea that completed a single turn only plus all of its descendants. Placentalia is divided into the three major lineages Afrotheria, Xenarthra, and Boreoeutheria, which in turn is divided into Laurasiatheria and Euarchontoglires Placentalia includes the most recent common ancestor of extant placental mammals . The cochlea of the ancestor of placental mammals completes 738\u00b0 (over two turns), and the volumetric contribution of the cochlea to the entire labyrinth (58%) is less than that of the ancestral eutherian (64%).Entry of the lateral semicircular canal directly into the vestibule in absence of a secondary common crus is the single unambiguous otic synapomorphy for Placentalia, which is a condition not found outside of the crown (at least within Eutheria) The arc of the anterior semicircular canal is the largest among the three canal arcs, which is retained from the ancestor of Theria. The reconstructed states of both the position of the plane of the lateral semicircular canal compared to the ampullar entrance of the posterior canal and the aspect ratio of the cochlea in profile are equivocal owing to variation in the position of the lateral canal within Afrotheria and variation in the shape of the cochlear spiral in both Afrotheria and Boreoeutheria.Afrotheria is a clade of placentals endemic to Africa that includes the groups Afrosoricida (tenrecs and golden moles), Macroscelidea (elephant shrews), Tubulidentata (aardvark), Hyracoidea (hyraxes), Sirenia (dugongs and manatees), and Proboscidea (elephants). Monophyly of Afrotheria is controversial, primarily because it was not recognized in classical morphological studies of placentals, whether based on strict cladistic methodologies or not Macroscelides proboscideus (Macroscelidea), Orycteropus afer (Tubulidentata), a fossil elephantimorph proboscidean (either Mammut or MammuthusTrichechus manatus (Sirenia), Procavia capensis (Hyracoidea), and the two afrosoricids Chrysochloris sp. (Chrysochloridae) and Hemicentetes semispinosum (Tenrecidae). 
There is a broad range in body mass among these taxa .The bony labyrinth of the ancestor of Afrotheria retained the ancestral morphology of Placentalia in that the lateral semicircular canal entered into the vestibule directly and the arc of the anterior semicircular canal was the greatest among the three canal arcs. The reconstructed ancestral states of the position of the lateral semicircular canal compared with the posterior canal, as well as the aspect ratio of the cochlea, are equivocal based on the afrotherian morphology described here. The states reconstructed and inferred for all of the nodes within Afrotheria are identical to that of the afrotherian ancestor, except the state for the largest semicircular canal arc which is equivocal for the clade consisting of Procavia and Trichechus was the same as that of the afrotherian ancestor (56%), although the contribution of the cochlea of the ancestor of Paenungulata was almost ten percent less (48%), likely on account of the low contribution of the cochlea of the proboscidean (see below). Contributions of 63% and 64% were reconstructed for the ancestors of Afrosoricida and the more inclusive clade that also includes Macroscelidea, respectively. The ancestral afrotherian cochlea coiled 751\u00b0, which was greater than the ancestral placental condition, but less than the values reconstructed for the nodes within Afrotheria .The volumetric contribution of the cochlea to the total labyrinthine volume of the ancestral afrotherian was 56%, which was close to that reconstructed for the ancestor of Placentalia (58%). The ancestral cochlear contribution of the paenungulate clade consisting of Erinaceus (hedgehog) and Sorex (shrew), the results of more recent molecular studies Chrysochloris sp. (Chrysochloridae) and Hemicentetes semispinosum (Tenrecidae) represent the afrosoricids.The group Afrosoricida contains Tenrecidae (tenrecs) and Chrysochloridae (golden moles). Although traditional classifications than in Hemicentetes (50%) as well.The bony labyrinths of ochloris \u20139 and Hecentetes \u201311 diffeochloris . The coc coiling . The proHemicentetes extends from the channel for the vestibular aqueduct to the junction between the elliptical and spherical recesses.No trace of the bony channel for the vestibular aqueduct was observed in the CT slices of centetes . The aquChrysochloris, the largest angle was measured between the posterior and lateral canals, and the smallest was measured between the anterior and lateral canals. The widest angle measured in Hemicentetes is between the anterior and posterior canals, which not only is the largest angle between two semicircular canals in either taxon . The aspect ratio of the posterior canal arc is the lowest among all of the canals between the two species. The ratios between the length of the slender portion of a semicircular canal and the radius of its arc for Chrysochloris are 4.30 for the anterior canal, 3.89 for the lateral canal, and 5.07 for the posterior canal. A similar pattern is observed in Hemicentetes where the posterior semicircular canal has the highest canal length to arc radius ratio (5.41), and the lateral canal has the lowest .The anterior canal is the largest of all semicircular canals in terms of length of the slender portion of the canal and arc radius for both afrosoricid taxa included in the present study . 
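As a quick arithmetic illustration of the canal-length-to-arc-radius ratios quoted above, the snippet below recovers a slender-canal length from one of the reported ratios and an assumed arc radius; the radius is a hypothetical value, since the underlying measurements are reported elsewhere in the paper.

# Hypothetical worked example: length = ratio * arc radius.
ratio = 5.07          # posterior canal ratio reported for Chrysochloris
radius_mm = 1.2       # assumed arc radius, not a measured value
print("slender canal length ~ %.2f mm" % (ratio * radius_mm))  # ~6.08 mm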
Likewisochloris , indicatChrysochloris are less than that observed for the same canals in Hemicentetes, although the lateral semicircular canal of Chrysochloris is the only planar canal between the taxa . The arc of the posterior canal of Chrysochloris is curved , although the ratio is only 0.47 for the lateral semicircular canal.The angular deviation of the anterior and lateral semicircular canals from their planes in the taxa . The leaed pc in , and thees pc in . Both thChrysochloris and Hemicentetes to an extent that it divides the space enclosed by the arc of the posterior semicircular canal into dorsal and ventral sections when the labyrinth is oriented in anterior view . In fact, the labyrinthine index of Hemicentetes is smaller than that calculated for any other mammal in this study .Lastly, the plane of the lateral semicircular canal is high with respect to the posterior canal in both ew lc in . Within Hemicentetes), and the anterior semicircular canal has the greatest radius among the three canals. Although the cochlea of Chrysochloris exhibits a great degree of coiling was examined in the present study . The plane of the basal turn of the cochlea deviates from that of the lateral semicircular canal by a greater degree in Macroscelides than in Hemicentetes, but not as much as in Chrysochloris were present during the Neogene of sub-Saharan Africa pus afer and 15 iOrycteropus afer is significantly greater than any of the other afrotherians described thus far , and the spiral of the cochlea is fairly flat might be a synapomorphy uniting the remaining afrotheres. However, the absence of the secondary common crus is reconstructed as a synapomorphy for all of Placentalia based on the phylogeny used here . For example, the cochlea completes over one and a half turns , and the posterior semicircular canal has the largest radius, rather than the anterior canal.If sed here . Thus, tA close relationship between Hyracoidea (hyraxes) and ungulates, particularly either Perissodactyla or Tethytheria (Sirenia+Proboscidea), is a classical hypothesis. Although some morphological data support a sistergroup relationship between Hyracoidea and Perissodactyla Procavia capensis was constructed at several points along its path. The canaliculus is not a straight tube, but rather hooks laterally. The canaliculus cochleae exits the bony labyrinth from a bulge posteromedial to the fenestra cochleae , as well as the longest of the three canals, and the arc radius of curvature of the posterior semicircular canal arc is greater for the posterior canal than for the others . Rather, there is a closer connection between Sirenia and Proboscidea, which is a relationship that has been recognized for several centuries Monophyly of Tethytheria, which is the clade that includes Sirenia, Proboscidea, as well as the extinct groups Desmostylia, \u201cAnthrocobunidae\u201d, and Embrithopoda Trichechus manatus, represents Sirenia. The most notable feature observable on the digital endocast of Trichechus is the absence of the bony canaliculus cochleae for transmission of the cochlear aqueduct and Macroscelides proboscideus (72%). However, the aspect ratio of the spiral of the cochlea is low in Trichechus . 
A low degree of coiling may be a synapomorphy for Sirenia, given that a similar degree of coiling is observed in the fossil Hydrodamalis gigasTrichechus than in other afrotherians excepting proboscideans or paenungulate (48%), the value is not much different from the volumetric percentages calculated for ichechus relativescelides . The cocscideans . As in pichechus .Trichechus than in other taxa examined, such as Macroscelides and Procavia.The fenestra vestibuli has a low aspect ratio , which sApertures for the posterior ampulla, common crus, and the posterior limb of the lateral semicircular canal are situated at the posterior end of the vestibule, with the common crus as the medial-most opening cr in . The bonTrichechus, similar to the state observed in Macroscelides. However, the lateral canal enters the vestibule lateral and ventral to the posterior ampulla in Trichechus, which is on the opposite side of the posterior ampulla from Macroscelides and other taxa where the opening for the canal is well separated from the ampulla, such as Procavia. Even in Orycteropus, where the lateral and posterior canals fuse to form a secondary common crus, the lateral canal is situated dorsal and slightly medial to the posterior canal. The morphology observed in Trichechus places the plane of the lateral semicircular canal relatively low on the vestibule, and the lateral canal does not extend posterior to the plane of the posterior canal. However, the lateral canal extends further laterally than the arc of the posterior canal.The vestibular aperture of the lateral semicircular canal opens near the base of the posterior ampulla in Trichechus form acute angles with each other. The angle between the planes of the anterior and lateral semicircular canals is the smallest angle measured between any two canals in any afrotherian specimen are the largest among the three canals. However, the length of the slender portion of the lateral semicircular canal is noticeably smaller than both the anterior) and posterior semicircular canals , and a low cochlear spiral. The large radius of the lateral semicircular canal is an autapomorphy for Trichechus compared to all other afrotherians. The bony labyrinth of Trichechus retains the ancestral placental condition of the lateral semicircular canal opening directly into the vestibule in the absence of a secondary common crus.Although the results of several recent molecular analyses do not support the monophyly of Tethytheria, the structure of the inner ear supports a sistergroup relationship between Sirenia and Proboscidea among the paenungulates. Notable labyrinthine features that are shared by Sirenia and Proboscidea within Paenungulata are a low position of the lateral semicircular canal compared to the posterior canal was described elsewhere The bony labyrinth of a specimen of an extinct elephantimorph . The lateral semicircular canal does not extend as far laterally as the posterior canal . The ratios between the length of the slender portion of the canal and arc radius of the anterior canal is 4.93, which is the largest ratio among the three canals, and 4.41 for the posterior canal, which is the smallest value. The ratio for the lateral canal is 4.70. The canals do not deviate from their planes substantially . 
The ancestral paenungulate state for both of those characters is equivocal.The bony labyrinth of the elephantimorph retains the primitive eutherian morphology observed in There are two major groups of xenarthrans, the armadillos and extinct glyptodonts that belong to Cingulata, and the anteaters and sloths, which make up the clade Pilosa Dasypus novemcinctus, which is the only xenarthran found in the United States, represents Xenarthra in this study. Dasypus as a genus is known from the Pliocene to Recent in North, Central, and South America D. novemcinctus itself has the largest biogeographical distribution of any xenarthran species D. novemcinctus was discussed previously Dasypus are provided in The nine-banded armadillo, Macroscelides), although the successive whorls sit upon the basal turn , and the cochlea contributes a larger percentage of the total inner ear volume than the placental ancestor (66% versus 58%).Although the labyrinth of Dasypus, which might support a sistergroup relationship between Xenarthra and Boreoeutheria, but the ancestral state in Afrotheria could not be reconstructed unequivocally. Owing to variation of the aspect ratio of the cochlear spiral within Laurasiatheria and Euarchontoglires, the condition for the ancestor of Boreoeutheria is equivocal between the high and low conditions.The non-afrotherian and non-xenarthran placentals, or Boreoeutheria, are divided into two sister clades, the Euarchontoglires and Laurasiatheria . The latDasypus (816\u00b0), both of which are greater than that reconstructed for Afrotheria (751\u00b0). Such a degree of coiling might support a Xenarthra plus Boreoeutheria pairing. However, the volumetric contribution of the cochlea to the entire labyrinth of Boreoeutheria (55%) nearly is identical that reconstructed for Afrotheria (56%), both of which are less than that calculated for Dasypus (66%).The degree of coiling of the ancestor of Boreoeutheria (815\u00b0) is almost identical to that of Craseonycteris thonglongyai) \u2013 at around 2 g to the largest \u2013 the blue whale \u2013 at around 150000 kg Acinonyx jubatus) or Thomson\u2019s gazelle (Eudorcas thomsoni), while others are adapted for fossorial lifestyles, such as the European mole . Furthermore, volant bats and fully aquatic cetaceans are included within Laurasiatheria. As a whole, the clade Laurasiatheria is composed of Cetartiodactyla , Perissodactyla , Carnivora , Pholidota (represented by Manis tricuspis), Chiroptera , and Eulipotyphla . General dimensions of the bony labyrinths of laurasiatheres are provided in Laurasiatheria encompasses a great diversity of placental mammals in terms of body size, ranging from the smallest extant mammal \u2013 the hog-nosed bat was less than that reconstructed for Boreoeutheria (815\u00b0), but the contribution of the cochlea to the entire labyrinthine volume of Laurasiatheria (55%) was similar to that of the boreoeutherian ancestor (56%).Within Boreoeutheria, Chiroptera is included in a polytomy with Ferae (Carnivora and Pholidota) and a clade comprising Cetartiodactyla and Perissodactyla . The ancEarly systematic analyses of mammals based on morphology group cetartiodactyls and perissodactyls in a group called Ungulata along with Sirenia, Hyracoidea, and Proboscidea (and often Tubulidentata) The only state reconstructed for the ungulate ancestor that differs from that of the ancestor of Boreoeutheria was the aspect ratio of the cochlea, which was low in the ancestor of the Perissodactyla+Cetartiodactyla clade. 
The shape of the boreoeutherian cochlear spiral was reconstructed as equivocal, although the aspect ratio of the cochlea was low in the ancestral therian. The bony labyrinth of the ancestor of the Perissodactyla+Cetartiodactyla clade had a lateral semicircular canal that opened into the vestibule directly , a position of the plane of the lateral canal high compared to the posterior canal (retained from the boreoeutherian ancestor), and an anterior semicircular canal arc as the largest of the three arcs (retained from the therian ancestor). The ancestral coiling of the cochlea of the Perissodactyla+Cetartiodactyla clade was 857\u00b0, which was greater than that reconstructed for the ancestor of the ungulate-feran-chiropteran polytomy (815\u00b0), and the ancestral ungulate cochlea contributed 55% of the total labyrinthine volume, which was a value retained from the boreoeutherian ancestor.The ancestor of Ferae are divided into the three major extant groups, which are Suiformes (pigs and hippos), Tylopoda (camels and llamas), and Ruminantia (deer and cows) Bathygenys reevesi . However, this is the only similarity between the cochleae of the two taxa. The cochlear canal is significantly longer and Sus .Both the radius and diameter of the lumen of the anterior semicircular canal arc is the largest in both thygenys . Howevert in Sus . The ratSus are more planar (fit better onto a single plane) than the canals of Bathygenys. In fact, the anterior semicircular canal is perfectly planar in Sus, whereas the anterior canal deviates from its plane in Bathygenys .The semicircular canals of thygenys . The posys pc in , althougus pc in . Only thThe bony labyrinth of the ancestor of Cetartiodactyla was similar to that reconstructed for the ancestor of the Perissodactyla+Cetartiodactyla clade. The lateral semicircular canal opened into the vestibule directly in absence of a secondary common crus, the arc of the anterior semicircular canal was the largest among the three, the lateral semicircular canal was positioned high compared to the posterior canal, and the aspect ratio of the cochlea was low. The cochlea of Cetartiodactyla coiled to a lesser degree than the Perissodactyla+Cetartiodactyla clade (846\u00b0 versus 857\u00b0), but the cochlear canal contributed a greater percentage to the overall labyrinthine volume (59% versus 55%).Bathygenys is flattened (low aspect ratio), which is the ancestral condition, although the cochlea of Sus has a high profile. Both labyrinths retain the ancestral cetartiodactyl condition of the high position of the lateral semicircular canal as compared to the posterior canal, and a vestibular entrance of the lateral canal, rather than formation of a secondary common crus.The labyrinths of the two terrestrial cetartiodactyls retain the ancestral cetartiodactyl condition of the anterior canal possessing the largest arc radius. The cochlea of Sus and Cetacea, there are no unambiguous inner ear synapomorphies supporting this relationship. Both Sus and Bathygenys share a high position of the lateral semicircular canal that is absent in Cetaceans (discussed in the following section), but this state was ancestral for crown Placentalia as a whole. 
Nonetheless, the most recent common ancestor of Sus and Cetacea possessed a bony labyrinth with the lateral semicircular canal opening directly into the vestibule, the anterior semicircular canal arc with the largest radius among the three canals, a high position of the lateral semicircular canal compared to the posterior canal, and a low aspect ratio of the cochlea in profile. The ancestral cochlear coiling of Sus and Cetacea was 1013\u00b0, which appears to be a factor of the high degree of coiling in Sus given that the cochleae of the cetaceans do not coil nearly as much (see below). However, the ruminants and tylopods studied by Gray Sus and Cetartiodactyla, which likely is an inflated estimation given the disproportionately large cochlea of the cetaceans.Although the cladogram presented in Balaenoptera musculus), and the toothed whales, Odontoceti, which includes porpoises and dolphins such as Tursiops truncatus. The bony labyrinth of the bottlenose dolphin Tursiops is described, along with the labyrinth of a fossil member of Balaenopteridae (Mysticeti). The balaenopterid fossil used (TMM 42958-35) was collected from Pliocene deposits of the Yorktown Formation at the Lee Creek Mine in Aurora County, North Carolina. Morphologically, the isolated petrosal can be identified as Balaenopteridae following the key and descriptions of extant mysticetes by Ekdale and others With the exception of Sirenia (the bony labyrinth of which was described above in the Afrotheria section), cetaceans are the only fully aquatic extant mammals. Two major cetacean clades recognized are the baleen whales, or Mysticeti, which includes the largest living mammal The bony labyrinth of the extinct balaenopterid \u201329 is laTursiops \u201331 both Tursiops . The greTursiops in all dimensions, including volume, cochlear canal length, degree of coiling, and even aspect ratio, although to a lesser extent (Tursiops (94%) than for the balaenopterid (91%), although the value for the balaenopterid is exceptionally high. The significant contribution of the total volume by the cochlea is higher for the two cetacean taxa than any other mammal investigated here, including the afrotherians Chrysochloris, Macroscelides, and Trichechus (each with a cochlear contribution around 71\u201372%). The other extreme is the cochlea of the elephantimorph proboscidean, which only contributes 31% of the total volume of the bony labyrinth (see above).The cochlea of the balaenopterid is larger than r extent . The volTursiops forms a tympanal recess immediately basal to the apical terminus of the secondary bony lamina in the balaenopterid . A similTursiops than in the balaenopterid extends from the canaliculus cochleae for a short distance on the tympanal surface of the cochlea in the balaenopterid. The angle between the planes of the basal turn of the cochlea and lateral semicircular canal for the balaenopterid and Tursiops are similar to the angles measured for the terrestrial cetartiodactyls Bathygenys and Sus . The vestibule of Tursiops is bowed medially . Although the ratio calculated for the posterior canal of Tursiops was larger than that of the balaenopterid, the ratios for the anterior (4.19) and lateral (4.05) semicircular canals were larger than those calculated for Tursiops (3.47 and 3.38 respectively).The aspect ratio of the posterior semicircular canal is greatest in both nopterid . The aspTursiops fit onto single planes . The only canal of Tursiops that deviates from its plane is the lateral semicircular canal . 
A second otic synapomorphy that separates Cetacea from the terrestrial cetartiodactyls is a low position of the plane of the lateral semicircular canal compared to the ampullar entrance of the posterior semicircular canal. The state is derived with respect to the ancestral cetartiodactyl condition, and it is a reversal to the ancestral therian state.Tursiops, and a low aspect ratio for the cochlear spiral in profile. The coiling of the cochlea of the ancestral cetacean (853\u00b0) was retained from the ungulate ancestor (857\u00b0), but the contribution of the ancestral cetacean cochlea to the total labyrinthine volume was greater than that calculated for the Perissodactyla+Cetartiodactyla clade (84% versus 55%). The high contribution of the cochlea to the total volume distinguishes Cetacea from other members of Cetartiodactyla, and likely is greater than that inferred in this study.Additional states reconstructed for the ancestor of Cetacea include the anterior semicircular canal arc as the greatest among the three arcs despite the lateral canal as the largest in Equus caballus, was available for examination. Images of the inner ear and an endocast of the bony labyrinth are presented in The odd-toed ungulates that make up extant Perissodactyla are divided into Equidae (horses), Tapiridae (tapirs), and Rhinocerotidae (rhinoceroses). Monophyly of Perissodactyla is well supported Equus is similar to that of the dolphin Tursiops truncatus , but rather is more in line with the percentage calculated for the terrestrial cetartiodactyl Bathygenys reevesi (54%).The total volume of the inner ear cavities of runcatus . This isody mass . Althoug dolphin . BecauseEquus has a low aspect ratio of . Additionally, the plane of the canal is high relative to other vestibular constituents. The elevated lateral semicircular canal divides the space enclosed by the posterior semicircular canal arc when the endocast of the bony labyrinth is viewed posteriorly extends from the dorsomedial edge of the spherical recess to the vestibular aperture of the bony channel for the vestibular aqueduct, which is situated ventral and medial to the vestibular aperture of the common crus av in . The chaal av in and 3.Equus roughly form a right angle with one another, and the angle between the planes of the posterior and anterior canals only is slightly obtuse .The planes of the posterior and lateral semicircular canals of y obtuse . Both the and Feliformia . Monophyly of Pinnipedia (within Caniformia) has been questioned in the past Canis familiaris and Felis catus), as well as the aquatic Stellar sea lion, Eumetopias jubatus (Pinnipedia). The dog that was used was a particularly small breed (a Chihuahua). Although the cranium varies to extreme degrees among domestic dog breeds, the vast majority of variation is restricted to the craniofacial region rather than the basicranium The carnivorans examined here include two common terrestrial species (Felis (627 slices) and Eumetopias (498 slices) is significantly greater than the number obtained for Canis (92 slices). Because of this, the CT data through the ear region of Canis . 
Similarly, the length of the inner ear cavity of the dog is not much different than that measured for the cat, and the length of the bony labyrinth of Eumetopias is substantially greater than either of the other species, owing to large body size .The basal whorl of each carnivoran cochlea is separated from the apical turns, where the apical turns fit within the arc created by the basal turn when the cochlea is in vestibular view , and 38DCanis is similar to that measured for the cetaceans and Sus and also in non-placental mammals.As in the cat and sea lion, the posterior limb of the lateral semicircular canal does not open into the vestibule in Eumetopias was measured between the anterior and posterior semicircular canals , Eumetopias , and Felis .The posterior semicircular canal of the cat is the longest of all of the canals in this species, and the lateral canal is the shortest. Likewise, the lateral semicircular canal of in Felis . Unlike the dog . Similar species . The aspEumetopias . The anterior canal of Eumetopias does not deviate from its plane substantial , although the lateral and posterior semicircular canals of the sea lion deviate by a substantial amount .The lateral semicircular canal is the least planar of the three canals in metopias . In fact by much . The antEumetopias and Felis. The secondary common crus observed in Canis is an apomorphic reversal to the ancestral therian condition, and it also is observed in Orycteropus among crown placentals. The ancestral coiling of the cochlea of Carnivora is over a quarter of a turn greater than that reconstructed for the ancestor of Ferae (987\u00b0 versus 888\u00b0), and the carnivoran cochlea contributes 5% more to the total labyrinthine volume than that of the feran ancestor (61% versus 56%). The position of the lateral canal is reconstructed as high relative to the vestibule for the ancestral carnivoran, despite the low position in caniforms. In addition, the anterior semicircular canal is the largest in ancestral Carnivora.Two labyrinthine characters are synapomorphies for Carnivora within Ferae. The first is the higher aspect ratio of the carnivoran cochlea in profile that gives the cochlear spiral a \u201csharp-pointed\u201d profile Felis is positioned high, which is derived from the ancestral eutherian condition, but is plesiomorphic for Carnivora as a whole. The low position of the lateral canal is a reversal for Caniformia. The lateral semicircular canal enters the posterior ampulla in the ancestral caniform (even though a secondary common crus is present in Canis), and the arc of the anterior semicircular canal is the largest of the three canal arcs . The ancestral labyrinth of Caniformia possesses a cochlea with a high aspect ratio that coiled 979\u00b0 and contributed 60% of the total labyrinthine volume.A single character from the bony labyrinth, reversal to a low position of the lateral semicircular canal in relation to the ampullar entrance of the posterior canal, is optimized as a synapomorphy for Caniformia. The lateral canal of ManisAlthough extant species of pangolins are known only from Africa and Asia, fossils of Pholidota have been recovered from Tertiary deposits of Europe and North America Manis tricuspis (Bathygenys reevesi (29.8 mm3) and Dasypus novemcinctus and the xenarthran BradypusThe gross volume of the inner ear cavities of the pangolin, ricuspis and 41, mcinctus . Likewisrd turns . 
The apird turns as is obs .The bony labyrinth of There are no unambiguous synapomorphies within the inner ear that support an exclusive Carnivora plus Pholidota clade (Ferae). The ancestor of the clade retained features that were present in the ancestor of Placentalia, including entry of the lateral semicircular canal into the vestibule directly, and an anterior semicircular canal arc that was the largest among the three arcs . The plane of the lateral semicircular canal of the ancestor of Ferae was high compared to the ampullar entrance of the posterior canal, which was the state reconstructed for the ancestor of Boreoeutheria. The state of the aspect ratio of the cochlea was equivocal as reconstructed for the feran ancestor.Pteropus lylei is used as a representative; Rhinolophus ferrumequinum was examined here) than between Rhinolophidae and other echolocating bats, which are represented here by the Nycteridae species Nycteris grandis and the Molossidae species Tadardia brasiliensisRhinolophus is included with Nycteris and Tadarida.Chiroptera (bats) is the only group of truly volant mammals, and with over 1,000 species, it forms the second most speciose group of mammals (second only to rodents) Pteropus includes the bats with the largest body sizes Pteropus lylei is an order of magnitude larger than that of the microchiroperan species examined . The aspect ratio of the lateral semicircular canal arc of Pteropus is the highest among the three canals .The ancestor of Chiroptera retained ancestral placental features, including a lateral semicircular canal that was positioned high compared to the posterior canal and that opened into the vestibule directly, as well as the anterior semicircular canal arc as the largest among the three arcs. The ancestral chiropteran cochlea had a high aspect ratio (a condition shared with Carnivora), coiled 764\u00b0, and contributed 61% to the overall volume of the inner ear cavities . The labyrinth of Nycteris grandis and Felis (68%). The cochlea comprises 73% of the labyrinthine volume in Tadarida, which is similar to the percentage calculated in the afrotherians Chrysochloris (71%), Macroscelides (72%), and Trichechus (71%). The largest volumetric contribution among the bats examined was calculated for Rhinolophus (89%). The only other mammals that have a larger contribution than Rhinolophus are the cetaceans .Among the microchiropteran species examined in the present study, grandis \u201345, Rhinmequinum \u201347, and iliensis \u201349, the abyrinth . LikewisTadarida . The cocRhinolophus ratio calculated for Nycteris is 1.0, indicating a circular fenestra than that of Nycteris, yet the channel for the vestibular aqueduct is observed, perhaps because the slices were of a higher resolution (interpixel spacing of 0.043 mm).The common crura of all three bats are tall and especially slender in us cr in and Nyctis cr in . The bonus av in , but theth av in . The preNycteris , Rhinolophus , and Tadarida .The aspect ratio of the arc of the lateral semicircular canal is the lowest among the three canals for the microchiropterans, particularly for nolophus . Only thTadarida are the most planar among the microchiropterans , but the canal does not deviate substantial in Nycteris .The semicircular canals of opterans , and nonar canal . The degRhinolophus shares a more recent common ancestor with Pteropus than the definitive microchiropterans Nycteris and Tadarida. 
However, the lateral semicircular canals of both Nycteris and Rhinolophus empty into the posterior ampulla, whereas the lateral canals of Pteropus and Tadarida open into the vestibule directly.There are no unambiguous synapomorphies within the bony labyrinth uniting Chiroptera as a whole, nor is there evidence from the inner ear that Nycteris retains the ancestral condition. Because of this, the state in the ancestor of Microchiroptera is equivocal as reconstructed. Tadarida retains the ancestral therian state of a flattened cochlea, whereas the cochleae of all other bats have a high aspect ratio. Nonetheless, the ancestral microchiropteran condition is a cochlea with a high aspect ratio, which is retained from the ancestor to all of Chiroptera. The largest semicircular canal arc is observed in the anterior canal in all of the bats, although this feature is plesiomorphic and shared with most therian taxa. The cochlea of the microchiropteran ancestor coils 820\u00b0 and contributes 68% of the total labyrinthine volume, both of which are greater values than those reconstructed for the ancestor of Chiroptera .A secondary common crus is not observed in any of the bats examined. In this regard, the bony labyrinth of Chiroptera is derived from that of the ancestral eutherian, but retains this morphology from the ancestral placental. Most of the bats are derived from the ancestral eutherian condition in the position of the lateral semicircular canal in relation to the ampullar opening of the posterior canal, although Rhinolophus and Tadarida possessed a cochlea with a high aspect ratio, a lateral semicircular canal positioned high compared to the posterior canal, and an anterior semicircular canal arc that was the largest among the three arcs. All of these states also are present in the ancestor of Chiroptera. Because the lateral semicircular canal opened into the vestibule in Tadarida and into the posterior ampulla in Rhinolophus, the state in the most recent common ancestor of these taxa was reconstructed as equivocal. The cochlea of this ancestor coiled 896\u00b0, and contributed 76% of the entire labyrinthine volume. Although the cochlea contributed a great amount of the labyrinthine volume, it was not as great as that reconstructed for Cetacea (84%).The most recent common ancestor of Solenodon, and the extinct genus NesophontesThe sister taxon to the Perissodactyla+Cetartiodactyla clade, Ferae, and Chiroptera polytomy is Eulipotyphla. The constituents of Eulipotyphla are Erinaceidae (hedgehogs), Soricidae (shrews), Talpidae (moles), Atelerix albiventris , although the vestibular aperture of the canal of Atelerix is further separated from the base of the posterior ampulla than the canal in Sorex. The lateral canal is positioned high relative to the posterior semicircular canal in both the hedgehog and shrew, particularly in Atelerix , which is a greater value than the lateral canal (0.03 mm3). The volumes of all of the canals in Sorex are identical (0.02 mm3). The cross-sectional diameters of the anterior and posterior semicircular canals are the same in Sorex, which is a smaller value than that measured for the lateral canal with respective aspect ratios close to 1.0 (Sorex (5.44), and the ratio of the lateral canal is the smallest in the shrew (3.38). The ratio of the lateral canal in Atelerix is the smallest (4.15), and the ratio is identical for the anterior and posterior canals (4.74).The lateral and posterior semicircular canal arcs of e to 1.0 and 52. 
Sorex deviates the most from its plane , but only the posterior canal of Sorex exhibits substantial deviation .Among the semicircular canals of both eulipotyphlan taxa, the posterior canal ne pc in . The leaar canal , which dChrysochloris and Hemicentetes). Both Sorex and Atelerix are derived from the ancestral eutherian condition in that the lateral semicircular canal enters the vestibule directly rather than forming a secondary common crus with the posterior canal, as well as a high position of the plane of the lateral canal in relation to the ampullar opening of the posterior semicircular canal. Vestibular entry of the lateral canal is inherited from the ancestor of Placentalia. The cochlea of Atelerix is derived from the ancestral eutherian in that the aspect ratio of the spiral is high, whereas the cochlea of Sorex retains the primitive flattened condition reconstructed for the ancestor of Theria.No features of the bony labyrinth support monophyly of Eulipotyphla, nor are there any unambiguous characters that unite the eulipotyphlans with the afrosoricids and is positioned high compared to the ampullar entrance of the posterior canal (retained from boreoeutherian ancestor), and an anterior semicircular canal arc with the largest radius among the three arcs (retained from therian ancestor). The state of the aspect ratio of the cochlea is reconstructed as equivocal for the ancestor of Eulipotyphla. The cochlea of the most recent common ancestor of Euarchontoglires contains the remaining placental mammal clades. Among these are the highly speciose Rodentia, Lagomorpha, Primates, Dermoptera, and Scandentia. Gross dimensions of the bony labyrinths of Euarchontoglires are provided in The states reconstructed for the bony labyrinth of the most recent common ancestor of Euarchontoglires are the same as those for Boreoeutheria. That is, the lateral semicircular canal opens into the vestibule directly in the absence of a secondary common crus, the lateral semicircular canal is positioned high compared to the posterior semicircular canal, and the anterior canal arc is the largest in terms of radius of curvature among the three arcs. The cochlea of the ancestral euarchontoglire coils 957\u00b0, which is over a quarter of a turn greater than that reconstructed for the ancestral boreoeutherian (815\u00b0), and the cochlea of Euarchontoglires contributes 53% of the total inner ear volume . An unequivocal state of the aspect ratio of the cochlea could not be reconstructed from the data provided here.Recognition of a close relationship between rodents and lagomorphs can be traced back to the seminal classification of Linnaeus The most recent common ancestor of Rodentia and Lagomorpha (Glires) retained a lateral semicircular canal that opened into the vestibule directly in absence of a secondary common crus from the most recent common ancestor of Placentalia, a position of the lateral canal high compared to the posterior canal from the ancestor of Boreoeutheria, and the highest arc radius of curvature measured for the anterior semicircular canal arc from the ancestor of Theria. Although the euarchontoglire ancestral state of the aspect ratio of the cochlea was equivocal, the ancestral glire possessed a cochlea with a high aspect ratio, which was shared with Scandentia among the members of Euarchontoglires. 
The cochlea of the ancestral Glires coiled 924\u00b0, and the cochlea contributed 55% of the total labyrinthine volume, which was inherited from the ancestral boreoeutherian.Primates, dermopterans, and scandentians together form the clade Euarchonta Rodents make up the most speciose clade of mammals, contributing over 40% of all named extant mammal species Mus musculus . Similarly, the cochlea of Cavia coils to a much greater degree than any other mammal studied here , and the ratio is the smallest in the lateral canal . The canal length to arc radius ratio of the anterior semicircular canal is 4.79 for Cavia and 4.98 for Mus.Both the largest and smallest semicircular canal arc aspect ratios among rodents were measured for the arcs of Cavia . The larCavia are less planar than the canals of Mus (Mus). The lateral semicircular canal of Mus is the most planar canal in either taxon . The linear deviation to cross-sectional diameter ratio is 0.31 for the anterior semicircular canal in Cavia and 3.10 for Mus.The semicircular canals of s of Mus , especiaCavia and Mus retain the ancestral condition reconstructed for Theria in that the largest semicircular arc radius is observed in the anterior canal. Further, the labyrinth of the ancestor of Rodentia retained the ancestral placental entry of the lateral canal (into the vestibule directly), the ancestral boreoeutherian position of the lateral semicircular canal , and the ancestral glire cochlear aspect ratio (high). The cochlea of the rodent ancestor coiled 1003\u00b0 (close to 1013\u00b0 reconstructed for the most recent common ancestor of Cetacea plus Sus) and contributed 56% of the total labyrinthine volume (close to 55% contribution of the cochlea of Boreoeutheria).The labyrinths of Lagomorphs are classically allied with rodents is a larger species overall than the eastern cottontail (Sylvilagus) Lepus is over twice that measured for Sylvilagus .Two lagomorph species examined here were fornicus \u201359 and Soridanus \u201361. The longest , and thelvilagus , althougLepus is slightly larger than that of Sylvilagus .The fenestrae vestibuli are less elliptical in the lagomorphs than for the rodents . The channel is a delicate passage in Sylvilagus, and it does not end as a flattened fissure as in most other mammals, including Lepus. The channel is longer in Lepus than it is in Sylvilagus, both in terms of absolute length .The bony channel for the vestibular aqueduct exits the inner ear cavities medial to the vestibular aperture of the common crus is greater than either the lateral (0.25 mm3) or posterior canals (0.24 mm3), although the most voluminous canal within the labyrinth of Sylvilagus is the lateral semicircular canal . The cross-sectional diameter of the posterior semicircular canal of Lepus is greater than either the anterior or lateral semicircular canal is greater than that computed for the anterior (4.84) and lateral semicircular canals (4.38). However, the greatest ratio among the canals of Lepus was calculated for the anterior semicircular canal .The aspect ratios of the anterior and posterior canals are greater in in Lepus , but theSylvilagus deviates from its plane to a greater degree than that of Lepus. The posterior canal of Sylvilagus deviates to a substantial degree , as does the posterior canal of Lepus (ratio is 1.09). The lateral semicircular canal of Lepus is the most planar among all of the canals between the two species, but the linear deviation is not substantial for either species . 
The anterior semicircular canal deviates from its average plane by a lesser degree in Lepus, and only the anterior canal of Sylvilagus deviates to a substantial degree .The posterior semicircular canal is the least planar canal in both taxa , where tThere are no unambiguous synapomorphies that support monophyly of Lagomorpha within Glires or Euarchontoglires. The states reconstructed for the ancestor of Lagomorpha are the same as those for both Rodentia and Glires, as the lagomorphs retain the ancestral therian condition of the largest radius of curvature measured for the anterior semicircular canal arc, the placental condition of the direct vestibular entrance of the lateral semicircular canal, the ancestral boreoeutherian condition of the high position of the lateral semicircular canal compared to the ampullar opening of the posterior canal, and the glire condition of the high aspect ratio of the cochlea. The cochlea of the most recent common ancestor of lagomorphs coils 751\u00b0 and contributes 53% to the total volume of the inner ear cavities.Primates consists of two major lineages, Strepsirhini which includes the lemurs and lorises, and Haplorhini which includes monkeys and apes. The haplorhines are divided further into three groups, which are Tarsiidae (tarsiers), Platyrhini (New World monkeys), and Catarhini (Old World monkeys and apes). Monophyly of all of these clades is supported by numerous phylogenetic analyses Macaca mulatta . Only the cochlea of the elephantimorph proboscidean contributes less (31%) to the bony labyrinth among the mammal species discussed so far.The two primate species examined here are the rhesus monkey, mulatta \u201363, and sapiens \u201365. The sapiens . The huml length , but theMacaca completes a greater degree of coiling than the cochlea of Homo , and the apical turns sit on top of one another and 64D.n Macaca . The angr mammal .Cavia porcellus. The vestibule is constricted internal to the fenestra vestibuli, thereby defining the border between the spherical and elliptical recesses. The bony channel for the vestibular aqueduct leaves the inner ear dorsal to the medial edge of the common crus and terminates as a fissure in both species and Homo .The aspect ratios of the arcs of the semicircular canals are similar between the two primate taxa . The higMacaca is less than that measured for the posterior canal, and the posterior semicircular canal is the most planar within the labyrinth of Homo. The deviation of the anterior canal is substantial for both primates , but only the posterior canal of Macaca deviates substantially . The degree of deviation of the lateral semicircular canal is not substantial in either species .The anterior semicircular canal is the least planar canal in each primate . The totHomo was measured for the posterior canal arc. The arc of the posterior semicircular canal of no other euarchontoglire is the largest in terms of radius of curvature, and the only mammals for which the posterior canal arc is the greatest are Manis , Dasypus (the distribution within Xenarthra beyond this taxon is unknown), and Orycteropus and Procavia among afrotherians.There are no unambiguous synapomorphies in the bony labyrinth to support monophyly of Primates, and the clade retains the ancestral primatomorphan morphology of the cochlear spiral in that the cochlea has a low aspect ratio in profile. 
The anterior semicircular canal arc has the largest radius of curvature, which is retained from the ancestor to Theria, although the greatest radius in The ancestral primate retained the ancestral placental condition of the direct vestibular entrance of lateral semicircular canals in the absence of a secondary common crus, and the plane of the lateral canal was high relative to the ampullar entrance of the posterior canal, which was retained from the ancestor of Boreoeutheria, if not earlier . The cochlea of the ancestor of Primates coiled 980\u00b0, and the cochlea contributed 48% of the total labyrinthine volume, which is the same value as that reconstructed for Paenungulata, but slightly less than that for Primatomorpha (50%).Cynocephalus volans and Galeopterus variegatusCynocephalus is used as a representative of Dermoptera. Phylogenetic analyses based on molecular data reconstruct a close relationship between Primates and Dermoptera The colugos are gliding mammals divided into two extant species, Cynocephalus is less than the rabbit Sylvilagus floridanusCynocephalus contributes 48% of the total labyrinthine volume, which is similar to the contribution calculated for Homo sapiens (50%). The cochlear spiral completes nearly two and two thirds whorls is greatest among the canals , which is different than the condition in most of the mammals examined here, where the greatest ratio is observed in the posterior semicircular canal. The anterior canal is the least planar of the three semicircular canals . The aspect ratio of the cochlea is low (retained from Primatomorpha), the lateral semicircular canal is high compared to the ampullar opening of the posterior semicircular canal (retained from Boreoeutheria), the lateral canal opens into the vestibule directly in the absence of a secondary common crus , and the greatest arc radius of curvature was measured for the anterior semicircular canal (retained from Theria). The contribution of the cochlea calculated for Cynocephalus (48%) is retained from the ancestor of Primatomorpha (50%), and the coiling of the cochlea (954\u00b0) is similar to that reconstructed for the ancestor of Euarchontoglires (957\u00b0).The bony labyrinth of Tupaia glis ratio of similar to the rhesus monkey, Macaca mulatta , a condition that is unique to Tupaia among euarchontoglires, but shared by Hemicentetes, Cetacea, Equus, Carnivora (except Canis), and the bats Nycteris and Rhinolophus. The greatest arc radius of curvature was measured for the anterior semicircular canal in Tupaia, which is consistent for most of the therian mammals considered here.The bony labyrinth of Tupaia is derived from the ancestral eutherian condition, which the taxon shares with Glires within Euarchontoglires. The shape of the cochlear spiral may be a synapomorphy supporting a Tupaia plus Glires clade, although the ancestral state of Euarchontoglires is equivocal with respect to this character. The cochlea coils to a greater degree (1125\u00b0) than that reconstructed for the ancestor of Euarchontoglires (957\u00b0), but less than one half turn. The cochlea of Scandentia contributes 55% of the total labyrinthine volume, which is he same percentage calculated for the cochlea of Boreoeutheria.The high aspect ratio of the cochlea of Trichechus manatus and Equus caballus are among the most voluminous, while the inner ears of Mus musculus and Sorex monticolus are the smallest. 
In order to test if there is a correlation between body size and inner ear dimensions, the coefficient of correlation was calculated between specific measurements and body mass , although weak positive correlations are found between degree of coiling and both canal length and aspect ratio of cochlear spiral , as summarized in The degree of coiling exhibited by the cochlea is not correlated with body mass . That is, large animals, such as th Century Observation of variation across the sample of bony labyrinths of placental mammals is a long-recognized phenomenon predating the seminal works of Gustaf Retzius in the late 19Dasypus novemcinctus, where the posterior canal is the most sensitive, or Eumetopias jubatus where the lateral canal is the most sensitive. In the case of Eumetopias, the size of the lateral semicircular canal as the largest might be related to an aquatic lifestyle as discussed previously Trichechus manatus and Tursiops truncatus.General patterns in the bony labyrinth anatomy include the arc radius of curvature of the anterior semicircular canal being the largest among the three canals in the majority of the mammals examined here (24 out of 32 species). This pattern has been observed in most mammal species Monodelphis domestica) The posterior semicircular canal is the least planar of the three canals in more bony labyrinths (15 out of 32 species) than either the anterior (9 out of 32 species) or lateral canals (8 out of 32species), and the lateral canal is the most planar for the majority of taxa (18 out of 32 species). The ratio of the total linear deviation to the cross-sectional diameter of the semicircular canal is used in the present study to describe the degree of planar deviation of a semicircular canal, where a ratio above 1 (linear deviation greater than diameter) is considered substantial. Any physiological importance of planar deviation has yet to be explored in a rigorous sense, and such substantial deviation may not have any basis in function. The ratio is used for descriptive and comparative purposes only. Although the ratio is arbitrary, evidence suggests that, even in species with broad ranges of planarities are less than that observed in terrestrial mammals (range of 28% for Macroscelides to 69% for the elephantimorph). The vestibular contribution of Trichechus (29%) is on the low end of the entire mammal range, but it is still greater than the vestibular apparati of both Macroscelides (28%) and Chrysochloris (29%), which are strictly terrestrial. Furthermore, the vestibular apparatus of Eumetopias contributes 47% of total labyrinthine volume, which is only slightly larger than the mean for terrestrial mammals (44%). This suggests that aquatic behavior alone cannot explain a reduced vestibular system in all aquatic taxa.The low contribution of the vestibule might be related to an aquatic lifestyle nonetheless. As an initial investigation of this hypothesis, the relative contributions of the cochlea and vestibule are compared between terrestrial and aquatic taxa examined in this study. Because bats are the only true volant mammals and their ears likely are specialized for aerial locomotion, Chiroptera was not incorporated into this comparison. The vestibular contribution of t-test assuming unequal variances (determined through an F-test). An a priori significance level of 0.05 was selected Although the ranges of vestibular contribution overlap between the terrestrial and aquatic samples, the means of each group may differ significantly. 
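The two quantitative steps described above, the correlation of labyrinth measurements with body mass and the comparison of vestibular contribution between terrestrial and aquatic taxa, reduce to a Pearson correlation and a two-tailed t-test whose form (pooled versus unequal-variance) is selected by an F-test at the 0.05 level. The sketch below only illustrates that workflow with placeholder numbers: the species values, sample sizes, and variable choices are assumptions, not the study's data, and scipy is used for the test statistics.

```python
# Sketch of the two statistical procedures described above, using made-up
# numbers in place of the study's per-species measurements (hypothetical).
import numpy as np
from scipy import stats

# Hypothetical vestibular contribution to total labyrinthine volume (%) for
# terrestrial and aquatic species; the real values come from the paper's tables.
terrestrial = np.array([44.0, 47.0, 39.0, 52.0, 45.0, 41.0, 48.0, 43.0])
aquatic = np.array([29.0, 31.0, 47.0])

# F-test for equality of variances, used here only to decide which t-test to apply.
f_stat = terrestrial.var(ddof=1) / aquatic.var(ddof=1)
df1, df2 = len(terrestrial) - 1, len(aquatic) - 1
p_f = 2 * min(stats.f.cdf(f_stat, df1, df2), stats.f.sf(f_stat, df1, df2))

# Two-tailed t-test; Welch's version (unequal variances) if the F-test rejects at 0.05.
equal_var = p_f >= 0.05
t_stat, p_t = stats.ttest_ind(terrestrial, aquatic, equal_var=equal_var)
print(f"F = {f_stat:.2f} (p = {p_f:.3f}), t = {t_stat:.2f}, p = {p_t:.3f}")

# Pearson correlation between body mass and an inner-ear measurement
# (e.g. cochlear coiling in degrees); again with placeholder numbers.
body_mass_kg = np.array([0.02, 0.5, 3.5, 60.0, 250.0, 450.0])
coiling_deg = np.array([780.0, 820.0, 950.0, 990.0, 860.0, 900.0])
r, p_r = stats.pearsonr(np.log10(body_mass_kg), coiling_deg)
print(f"r = {r:.2f} (p = {p_r:.3f})")
```

With only a handful of aquatic species, the unequal-variance (Welch) form is the safer default even when the F-test does not reject, which is one reason the small aquatic sample limits the power of the comparison.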
The small number of aquatic species used here limits the effectiveness of statistical analysis. The hypothesis that the mean contribution of the vestibule differs significantly between terrestrial and aquatic mammals was tested, with a two-tailed Trichechus manatus and Eumetopias jubatus appear as though they have been compressed and Phocaena communis, the sirenian Dugong dugon, and the pinniped carnivorans Phoca vitulina, Halichoerus grypus, and Otaria pulsillaAdditional features that may have an importance in determining evolutionary relationships that can be observed within the bony labyrinth include the size of otoliths within the vestibular apparatus, coiling of the cochlea, and shape of the cochlear spiral. The otoliths are tiny in most mammal species, but sizeable otoliths have been observed within the labyrinths of the marsupial Chrysochloris and Atelerix, thereby preserving the membranous labyrinths with the otoliths supposedly intact.There are several reasons for the absence of otoliths on the CT scans. The composition and density of otoliths makes it virtually impossible that they would have been missed in CT data if present. Indeed, CT scans of many non-mammalian vertebrates reveal otoliths Eumetopias, which has a cochlear aspect ratio of 0.68 similar to other carnivorans, and Sus, which has an aspect ratio of 0.71. However, Gray Sus as intermediate between \u201cflattened\u201d and \u201csharp-pointed\u201d, but he described the cochleae of pinnipeds as \u201cflattened\u201d.Two cochlear shapes termed \u201csharp-pointed\u201d and \u201cflattened\u201d are thought to be phylogenetically informative Cavia porcellus has the highest aspect ratio (1.29), and it is the only species in this study in which the height of the cochlear spiral is greater than the width across the basal turn of the cochlea. Similar high-spired cochleae are observed within other caviomorph rodents, including Hydrochoerus capybaraMyocastor coypuDolichotis patagonum , and Chinchilla laniger . A high-spired cochlea is likely a synapomorphy for caviomorph rodents , but other members of their clades possess high degrees of coiling . Phylogenetic patterns extend beyond Mammalia, and additional systematic information can be found in the inner ear labyrinths of squamate reptiles The coiling of the cochlea is phylogenetically informative and can be used to distinguish therian and non-therian mammals Didelphis virginiana) does, in fact, possess a fenestra vestibuli that falls below the 1.8 cut-off (ratio of 1.6). However half of the placentals examined here exhibit the \u2018marsupial condition\u2019 (below 1.8) of Segall Nycteris is 1.0, which is the observed condition among monotremes The stapedial ratio is an index commonly used in phylogenetic analyses to explore the relationships between Mesozoic therians Rhinolophus is similar to that observed in cetaceans, but the vestibular contribution calculated for other bats fall within the range of other mammals. Because a large cochlea is phylogenetically informative for Cetacea, the phenomenon may also be informative within Chiroptera, upon which a denser sampling of taxa might shed light. Furthermore, a low contribution of the cochlea independently unites zalambdalestids, primatomorphs, and eulipotyphlans , and absence of a secondary common crus is a synapomorphy for crown Placentalia . Even so, entry of the lateral canal into the posterior ampulla is reconstructed as a synapomorphy of Cetacea, as well as Carnivora . 
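Two of the shape indices used repeatedly in these comparisons, the aspect ratio of the cochlear spiral (height over width across the basal turn, low in "flattened" and high in "sharp-pointed" cochleae) and the stapedial ratio of the fenestra vestibuli (length over width, with 1.8 as the commonly cited cut-off and 1.0 indicating a circular fenestra), are simple quotients of linear measurements. The helpers below are only a sketch: the function names and the example dimensions are hypothetical, chosen to reproduce ratios quoted in the text (0.68 for Eumetopias, 1.0 for Nycteris).

```python
# Minimal helpers for the two shape indices discussed above; the verbal
# "flattened" versus "sharp-pointed" labels are qualitative in the text and no
# numeric threshold for them is assumed here.
def cochlear_aspect_ratio(spiral_height_mm: float, basal_turn_width_mm: float) -> float:
    """Height of the cochlear spiral divided by the width across its basal turn."""
    return spiral_height_mm / basal_turn_width_mm

def stapedial_ratio(fenestra_length_mm: float, fenestra_width_mm: float) -> float:
    """Length/width of the fenestra vestibuli; 1.0 is circular, 1.8 is the
    commonly cited cut-off below which the 'marsupial condition' is scored."""
    return fenestra_length_mm / fenestra_width_mm

# Hypothetical dimensions chosen to reproduce values quoted in the text.
print(cochlear_aspect_ratio(2.6, 3.8))   # about 0.68, a comparatively low, "flattened" profile
print(stapedial_ratio(1.0, 1.0))         # 1.0, the circular fenestra reported for Nycteris
```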
Although the entry of the posterior limb of the lateral semicircular canal does not express any major pattern with the taxonomic sampling employed by the current study, potential for informativeness at lower levels is apparent. For example, the lateral canal opens into the posterior ampulla in the cetaceans, but it opens into the vestibule in terrestrial artiodactyls , but potentially phylogenetic patterns are observed . In order to fully understand the functional and evolutionary implications within the structure of the bony labyrinth, both physiology and phylogeny must be considered, as these two phenomena are not mutually exclusive. Future detailed studies of the inner ear among closely related species will increase our knowledge of the phylogenetic and functional implications of the inner ear, and foster the application of bony labyrinth morphology to the biological interpretation of fossil vertebrates.Table S1Taxa examined and scanning parameters.a Definitions of parameters are as follows: FR, field of reconstruction refers to the dimensions of an individual CT slice, expressed in millimeters; Pixel, interpixel spacing, or vertical and horizontal dimensions of an individual pixel, expressed in millimeters, and calculated as FR/Size; Size, number of pixels in a CT slice, either 512\u00d7512 or 1024\u00d71024 pixels; Slices, number of CT slices through the ear collected in the coronal slice plane; Space, interslice spacing, or distance between consecutive slices, expressed in millimeters. b Taxonomy and systematic arrangement follows published phylogenies c This specimen was the 156th Mortality South West in 2003, collected by S. Rommel at University of North Carolina Wilmington.(PDF)Click here for additional data file.Table S2Additional information, imagery, and sources of data selected specimens. Further imagery is available at http://morphobank.org/index.php/Projects/ProjectOverview/project_id/833. Institutional abrreviations listed in (PDF)Click here for additional data file.Table S3Ancestral character state reconstructions for ancestral nodes in text . Letters in the first column refer to node labels in text (PDF)Click here for additional data file."} +{"text": "Healthcare associated infections exact a heavy toll in terms of preventable death and disability in patients and are responsible for additional costs of hospitalization in health care facilities in resource-limited countries. The baseline mean attack rates for surgical site infections ranged from 27.3% to 54.2% in the surgical wards of four district hospitals in northern Benin. To this end, a program of prevention and reduction of surgical site infections is implemented to ensure the safety of patient care.Halve the surgical site infections in hospitals targeted by the program.The clinical audit methodology is based on the use of standard criteria for prevention of surgical site infections in order to identify gaps and propose a plan for improvement. In addition samples of pus were analyzed in the lab to confirm the surgical site infection by identifying the causative organisms. 
A study protocol sheet with data-collection monitoring has been developed for the calculation of the incidence of suppuration with confidence intervals. Observance of the protocol for implementing preventive measures throughout the perioperative process improved the following indicators: the rate of hand-washing compliance rose from 25% to 45% in one year; surgical site infections in the hospitals fell from 25% in 2005 to 3% in 2012; and the average length of stay fell from 22 days to 7 days. Integrating the prevention of healthcare-associated infections into hospital management has reduced the risk of infection and improved the quality of patient care. The changes introduced to improve patient safety in these institutions remain a source of motivation for extending the program to other services. None declared"} +{"text": "The definition of Purkinje cell zones by their white matter compartments, their physiological properties, their molecular identity, and the birthdates of their Purkinje cells will be reviewed. The cerebellar Purkinje cell layer is generally described as a homogeneous structure. Its subdivision into discrete longitudinal zones with abrupt changes in their connectivity at their borders was based on the observation of the subdivision of the cerebellar white matter of the cat and the ferret into compartments. In transverse sections a regular pattern of mediolaterally disposed bundles of medium-sized myelinated Purkinje cell axons was observed, separated by darker-staining slits consisting of smaller fibers (Fig.). In the same period, Olav Oscarsson and his collaborators at the Department of Physiology in Lund, Sweden, recorded positive surface climbing fiber potentials and Purkinje cell complex spikes from the anterior lobe of the cat in preparations in which they had transected the spinal cord, except for one of the funiculi. Oscarsson found that climbing fiber potentials on stimulation of peripheral nerves were always located in parasagittal zones. More recently, Voogd et al. and Sugihara have returned to this zonal organization. Purkinje cell zones develop early from a series of superficial, mediolaterally disposed Purkinje cell clusters."} +{"text": "The presentation illustrates the process for the development of the recent NICE Headache Guidelines and the importance of careful topic selection. The methodology, timescale, and the role of the Expert Group, including patient representatives, are described. The Guidelines are intended for the non-specialist in Primary Care, where most patients present and can be safely diagnosed and managed. The Guidelines provide support with the diagnosis of primary headache, including the value of neuroimaging, which evidence suggests should not be used only for reassurance. The importance of excluding secondary causes, in particular medication overuse, which is found among migraine sufferers, is emphasised. Specific advice is given for management of the three most common types of primary headache: migraine, tension-type headache and cluster headache. Changes are recommended to current practice, including prescribing combination therapy in acute migraine and indications for prophylactic topiramate. The management of female migraine sufferers of child-bearing potential also receives attention, and caution is advised in the use of combined hormonal contraception for patients with aura due to increased risk of stroke. 
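Returning to the surgical-site-infection audit a few records above, its "incidence of suppuration with confidence intervals" amounts to a cumulative incidence (cases over patients at risk) with an interval around that proportion. The sketch below uses a 95% Wilson score interval; the counts are placeholders, and the choice of the Wilson interval (rather than, say, the exact binomial interval the program may actually have used) is an assumption.

```python
# Cumulative incidence of surgical-site suppuration with a 95% Wilson score
# interval: a sketch of the kind of calculation the audit protocol describes.
# The counts below are placeholders, not the program's data.
from math import sqrt

def incidence_with_ci(cases: int, patients: int, z: float = 1.96):
    p = cases / patients
    denom = 1 + z**2 / patients
    centre = (p + z**2 / (2 * patients)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / patients + z**2 / (4 * patients**2))
    return p, centre - half, centre + half

p, lo, hi = incidence_with_ci(cases=12, patients=400)
print(f"incidence = {p:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```

The Wilson interval behaves better than the normal-approximation interval when the observed proportion is small, as it is after the reported reduction to 3%.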
There are also recommendations for perimenstrual prophylaxis with frovatriptan or zolmitriptan. Improved recognition of common headache disorders and better targeting of treatments should reduce the burden of headache without requiring substantial extra resources. Where more specialised advice is required, the Guideline recommends referral to a neurologist or a GP with a special interest in headache."} +{"text": "The use of talcum powder is incorrectly part of the traditional care of infants. Its acute aspiration is a very dangerous condition in childhood. Although the use of baby powder has been discouraged by many authors and reports of its accidental inhalation have become increasingly rare, new cases, some of them fatal, are still reported. We report on a patient in whom accidental inhalation of baby powder induced severe respiratory difficulties. We also point out the benefits of surfactant administration. Surfactant contributed to the rapid improvement of the clinical and radiological condition, preventing severe early and late complications and avoiding invasive approaches. The use of talcum powder is incorrectly part of the traditional care of infants. Acute aspiration of baby powder is a very dangerous condition in childhood, and several fatalities have been reported. An 18-month-old female child was admitted for cough and respiratory distress appearing one hour after the inhalation of talc powder, which occurred accidentally during a nappy change. She was afebrile and presented with cough and only mild respiratory distress: tachypnea (40/minute), 98% oxygen saturation, and a heart rate of 145/minute. On examination there was no evidence of cyanosis, the respiratory sounds were normal on both sides, and the other systems were normal. Chest x-ray revealed a consolidation in the central and middle lung zones and basal emphysema on both sides (Figure). Baby powders are usually insoluble in water and may dry out the surface of the mucous membranes of the tracheobronchial tree, thus impairing the function of cilia and the mechanism of pulmonary clearance. Bronchoalveolar lavage is ineffective in washing away the talc powder from the airways, due to its water insolubility. Written informed consent was obtained from the parents of the patient for publication of this case report and any accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal. The authors declare that they have no competing interests. FM drafted the manuscript and participated in the management of the case. MC and PV managed the case and participated in the design of the study. MCM participated in the management of the case and in drafting the manuscript. CLP performed the respiratory physiotherapy and participated in drafting the manuscript. GC coordinated the study and participated in its design. All authors read and approved the final manuscript."} +{"text": "In the Results section of the article, the first paragraph under the heading \"Monitoring of infection status postmortem.\" had an error in its last sentence as a result of issues in the typesetting process. 
The correct sentence is available below. \"With regard to culture of spirochetes, none of the treated animals yielded a positive culture and only one of the untreated animals was culture-positive (lung tissue).\""} +{"text": "In this issue of the journal we present the locally agreed management guidelines for the four most common urologic cancers: renal cell cancer, bladder cancer, prostate cancer and testicular cancer. Medical practitioners in Saudi Arabia come from many cultural backgrounds and are trained in different parts of the world. This usually results in different treatment approaches and management plans for patients with similar problems. Under the direction of the Saudi Oncology Society, a committee of experts from medical oncology, urology and radiation oncology was established. The mission of the genitourinary guidelines committee was to oversee the development of guidelines for the common genitourinary cancers that will improve the practice of medicine for urologic cancers in Saudi Arabia, and to establish minimum recommendations that can be used by health authorities in their decision making on cancer management. This committee is one of several committees established for the development of management guidelines for all cancer sites, of which the first to be published was the lung cancer guidelines. Members of the genitourinary guidelines committee represented major institutions from different parts of the kingdom and are listed in the accompanying table. The committee agreed to use the established bulleted format used by the lung cancer committee. The committee nominated different members to present a draft of the guidelines for each cancer site that takes into account the available evidence for each item. The draft is finalized during one or two sessions. Each recommendation is referenced and its level of evidence is indicated. The final draft was circulated for approval by all members. The committee also agreed to use the evidence level (EL) categories used by the lung cancer committee, which are summarized as follows: EL-1 (high level): well-conducted phase III randomized studies or meta-analysis; EL-2 (intermediate level): good phase II data or phase III trials with limitations; EL-3 (low level): observational/retrospective studies/expert opinion. Finally, committee members agreed that clinical research is an integral part of patient care and is the only way to advance and improve patient care, increase cure rates and decrease adverse effects from medical and surgical therapies."} +{"text": "This study provides an experimental confirmation of the proportionality hypothesis predicted by the theoretical model of emotion. The proportionality relationship between emotional intensity and perceived disparity in sharing in a social interaction has been confirmed using the ultimatum game paradigm (UG). The results confirmed a similar proportionality relationship between the amplitude of the evoked EEG signals and the disparity in offer-ratio. This provided independent confirmation of the emotional model using electrophysiological measurements to detect the neural response to the disparity signal in social sharing. 
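The proportionality claim in the preceding passage, that the amplitude of the evoked EEG response scales with the perceived disparity of the ultimatum-game offer, can be checked with a least-squares fit of a purely proportional (no-intercept) model. The sketch below does this with invented numbers; defining disparity as the deviation of the offer ratio from an equal split, the amplitude values, and the variable names are all assumptions rather than the study's actual operationalisation.

```python
# Testing proportionality between perceived disparity in the UG offer and the
# amplitude of the evoked response: least-squares fit through the origin,
# amplitude = k * disparity. All numbers below are illustrative only.
import numpy as np

# Disparity taken here as deviation of the offer ratio from an equal split,
# e.g. a 10:90 split gives 0.4 (this operationalisation is an assumption).
disparity = np.array([0.0, 0.1, 0.2, 0.3, 0.4])
amplitude_uv = np.array([0.2, 1.1, 2.0, 3.2, 4.1])   # hypothetical ERP amplitudes

# Slope of the proportional model y = k*x (no intercept).
k = float(disparity @ amplitude_uv) / float(disparity @ disparity)
residuals = amplitude_uv - k * disparity

# Coefficient of determination relative to the mean-only model.
r2 = 1 - (residuals**2).sum() / ((amplitude_uv - amplitude_uv.mean())**2).sum()
print(f"k = {k:.2f}, R^2 = {r2:.2f}")
```

A high R^2 for the no-intercept fit is consistent with proportionality; comparing it against a model with a free intercept would be the natural next check.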
This provides further evidence supporting the proportionality hypothesis using electrophysiological measures of neural responses, in addition to the previous psychological study using self-report rating as a measure."} +{"text": "With full genome sequencing of bacterial genomes becoming more affordable, the number of publications, which deploy this elegant and modern technique is rapidly increasing. Full genome sequences of hundreds of bacterial strains have been determined and described. However, on reading some of the literature, sometimes one wonders if the use of this method with enormous resolving power is really \u2018balanced\u2019 or justified by the specific questions that are being addressed. Or \u2013 to put it differently \u2013 what exactly were the specific questions for which sequencing was supposed to provide the answer is sometimes unclear.Staphylococcus aureus (MRSA) clone in two farms located in Denmark to resolve an epidemiological puzzle that involves zoonotic transmission of a methicillin-resistant mecC gene which shared identical ST, Application of WGS and determination of SNPs in the core genome of the isolates clearly show that this was not the case. Phylogenetic analysis showed that the MRSA isolates could be clearly differentiated into two clusters each specific for the particular farm and isolates recovered from the farmer and livestock from the same farm were virtually identical, differing only by 3\u20135 SNPs at most.Was the infection in the farmers preceded by colonization at nasal carriage sites frequently inhabited by MRSA? And did the infection start in the farm animals and passed on to the farmers or the other way around?In Farm A, nasal swabs and blood isolates showed no SNP differences. This is a potentially important observation since genetic difference between colonizing and invading forms of the same clone has been considered in the literature. The data suggest that colonization of the farmer preceded the blood infection. Isolates from the cows and the farmer only differed by five SNPs indicating clearly a transmission; however, the direction of transmission was not immediately obvious.In Farm B, the MRSA isolates from sheep 1 and 2 and the farmer only differed by 3\u20135 SNPs, while the rest of the sheep had multiple (30\u201340) SNPs. The presence of multiple SNPs suggests either a prolonged circulation of the clone among the animals or multiple introductions of the same MRSA clone into the flock. Together the data suggest that the most likely direction of the zoonotic infection on this farm was from sheep to the human.In view of the virtually identical MRSA shared by human and animal hosts, the authors also took a close look at the status of virulence genes and the accessory genome in general in the two kinds of isolates, none of which carries the Panton Valentine toxin PVL. This part of the paper also abounds in extremely interesting observations.This study provides a clear illustration of exactly how and at what point of an epidemiological investigation WGS could increase the resolving power of the study. The authors make the important point that WGS cannot replace other more traditional forms of epidemiological inquiry, but it can greatly and in highly specific ways increase the resolving power of such studies. WGS of isolates from each farm proved beyond doubt the identity and unique nature of the particular MRSA clone recovered from animals and the farmer of the same farm. 
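The farm-specific clustering in the MRSA study above rests on pairwise core-genome SNP distances: isolates from the same farm differ by only 3-5 SNPs, isolates from different farms by far more. The sketch below shows the distance calculation on toy aligned sequences; real pipelines work on reference-mapped consensus genomes and mask recombinant or low-quality sites, none of which is represented here, and the isolate names are hypothetical.

```python
# Pairwise SNP distances between aligned core-genome sequences, the quantity
# behind the 3-5 SNP (within-farm) versus 30-40 SNP separations discussed above.
# The toy "alignments" below are placeholders for real consensus sequences.
from itertools import combinations

isolates = {
    "farmer_A": "ACGTACGTACGT",
    "cow_A1":   "ACGTACGTACGA",
    "farmer_B": "ACGAACGTTCGT",
    "sheep_B1": "ACGAACGTTCGA",
}

def snp_distance(seq1: str, seq2: str) -> int:
    """Number of differing positions in two equal-length aligned sequences."""
    return sum(a != b for a, b in zip(seq1, seq2))

for (n1, s1), (n2, s2) in combinations(isolates.items(), 2):
    print(f"{n1} vs {n2}: {snp_distance(s1, s2)} SNPs")
```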
However, determining the direction of transmission still required a combination of conventional as well as WGS techniques."} +{"text": "Allergic asthma as a complex outcome of pathogenic immunological, cellular and functional modifications of the airways is initiated by allergic reactions as a response mostly to inhalant allergens. Dysregulation of innate and adaptive immune functions contribute to the pathogenesis of the disease. A hallmark is the development of a Th2-driven inflammatory response in the airways. The underlying Th2-skewed balance is the result of a multi-functional process in which genetic predisposition and environmental exposures interact as major players. Maturation of the immune system already starts in utero, the most critical phase in the ontogenetic programming of the offspring. Endogenous as well as exogenous exposures may influence the maturation and differentiation of immune cells of the fetus and may thereby contribute to disorders such as allergies and asthma later on in life. Epigenetic mechanisms are proposed to mediate these effects. A comprehensive overview on the interaction of fetal exposures and the developing immune system will be provided that may contribute to or protect the progeny against the development of asthma. The new and exciting field of epigenetics will be highlighted with respect to T-cell differentiation and early allergic disease development. Furthermore, we emphasize new investigations that aimed to analyze fetal host innate immune responses to environmental microbial microorganisms and their possible future application in asthma protection."} +{"text": "Although there are treatments that can impede progress of the disease, there are no cures or vaccines. The most common route for transmission of the virus is via heterosexual intercourse. The animal models that are currently available do not accurately replicate the physiological environment of vaginal intercourse, so the effects of factors such as seminal fluid composition on HIV infectivity have not been thoroughly examined. Here, Mary Jane Potash and colleagues describe a system for reproducible and efficient sexual transmission of HIV in mice. Using this model, the group show that the rate of viral transmission dramatically declines during estrus in mice, demonstrating that the local environment in the female reproductive tract can influence viral infectivity. The study provides an effective"} +{"text": "A regional-scale sensitivity study has been carried out to investigate the climatic effects of forest cover change in Europe. Applying REMO , the projected temperature and precipitation tendencies have been analysed for summer, based on the results of the A2 IPCC-SRES emission scenario simulation. For the end of the 21st century it has been studied, whether the assumed forest cover increase could reduce the effects of the greenhouse gas concentration change.Based on the simulation results, biogeophysical effects of the hypothetic potential afforestation may lead to cooler and moister conditions during summer in most parts of the temperate zone. The largest relative effects of forest cover increase can be expected in northern Germany, Poland and Ukraine, which is 15\u201320% of the climate change signal for temperature and more than 50% for precipitation. In northern Germany and France, potential afforestation may enhance the effects of emission change, resulting in more severe heavy precipitation events. 
The probability of dry days and warm temperature extremes would decrease.Large contiguous forest blocks can have distinctive biogeophysical effect on the climate on regional and local scale. In certain regions of the temperate zone, climate change signal due to greenhouse gas emission can be reduced by afforestation due to the dominant evaporative cooling effect during summer. Results of this case study with a hypothetical land cover change can contribute to the assessment of the role of forests in adapting to climate change. Thus they can build an important basis of the future forest policy. Climate change and its impacts on different spatial and temporal scales and sectors have been addressed by several international research projects in the last decade-3. All rThe considerable enhancement of inter-annual variability of the European summer climate as well as the changes of the hydrological cycle can lead to higher probability of extremes compared to present-day conditions,6-11. ThClimate change affects the key sectors such as hydrological systems, infrastructure, human health, agriculture and forestry. Changes of the climatic means and extremes already show impacts on land cover that are expected to be more severe under future climate conditions. Drought periods and other extremes are responsible for a significant share of agricultural losses in Europe. Impacts of severe droughts on the composition, structure, and biogeography of forests have been detected worldwide in the recent decades,18. On t2 concentration[2 warming of deforestation can dominate over albedo cooling effect . Several studies have addressed the biogeophysical cooling and moistening effect of tropical forests[Land cover in turn interacts with the atmosphere, thus it has an important role in climate regulation. Vegetation affects the physical characteristics of the land surface , which control the surface energy fluxes and hydrological cycle. Through biogeochemical processes, ecosystems alter the biogeochemical cycles and thereby changing the chemical composition of the atmosphere-27. Depentration,29,30. Pntration. Dependi forests. Whereas forests-34. Clim forests,36 and m forests. In Cana forests,39. Thes forests, as wellEurope is the only continent with a significant increase of forest cover in recent times. In the last two decades the annual area of natural forestation and forest planting amounted to an average of 0.78 million hectares/year. Land us\u2022 the biogeophysical effects of a hypothetic potential afforestation on summertime temperatures and precipitations, for the end of the 21st century and its regional differences within Europe,\u2022 the magnitude of the biogeophysical feedbacks of forest cover increase compared to the projected climate change signal with special focus on the probability and severity of temperature and precipitation extremes.This subsection summarizes the most important aspects that are essential for the appropriate interpretation of the results. The experimental set-up and the method of the analyses are introduced in Sect. 4 more in detail.In order to provide climate change information due to emission change, an emission scenario simulation for the future 2071\u20132090) and a reference simulation for the past (1971\u20131990) has been carried out applying the regional climate model REMO,43. 
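The comparisons set out in this study, scenario minus reference, afforestation minus unchanged forest cover, and the combined change, are differences of 20-year summer means computed per grid cell from the three REMO experiments. The sketch below mirrors that bookkeeping with random placeholder arrays; the array shapes, values, and variable names are assumptions, since the real analysis operates on the model's output fields.

```python
# The "climate change signals" described in the afforestation study are
# differences of 20-year summer (JJA) means between simulations. A minimal
# per-grid-cell sketch with random placeholder fields standing in for REMO output.
import numpy as np

rng = np.random.default_rng(0)
shape = (20, 92, 20, 30)   # years, JJA days, lat, lon: arbitrary toy dimensions

ref_1971_1990    = rng.normal(18.0, 3.0, shape)   # reference run, unchanged forest
a2_2071_2090     = rng.normal(21.0, 3.5, shape)   # A2 scenario, unchanged forest
a2_afforest_2071 = rng.normal(20.3, 3.2, shape)   # A2 scenario + potential afforestation

def summer_mean(sim):
    """20-year JJA mean per grid cell (average over years and days)."""
    return sim.mean(axis=(0, 1))

signal_emission      = summer_mean(a2_2071_2090) - summer_mean(ref_1971_1990)
signal_afforestation = summer_mean(a2_afforest_2071) - summer_mean(a2_2071_2090)
signal_combined      = summer_mean(a2_afforest_2071) - summer_mean(ref_1971_1990)

# Share of the emission-driven signal offset by afforestation, per grid cell.
relative_effect = -signal_afforestation / signal_emission
print(signal_emission.mean(), signal_afforestation.mean(), relative_effect.mean())
```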
Both\u20132090 andFirst, the sign and magnitude of the climate change signals without any land cover change have been investigated comparing the summer temperature means and precipitation sums in the time period 2071\u20132090 to 1971\u20131990. Increase of temperature is projected to occur with precipitation decrease in Southern- and Central-Europe and in the southern part of Scandinavia, whereas Northeast-Europe can be characterized with warmer and wetter conditions Figure\u2009. In agreSecond, climate change signal due to potential afforestation has been determined comparing the simulation results with- and without forest cover increase for 2071\u20132090. The regions have been identified, where the hypothetic forest cover increase shows the largest effects on summer temperature and precipitation Figure\u2009. Land coConsequently, the regions characterized by largest climatic effects of afforestation do not correspond to the areas with the largest signals due to emission change. The magnitude of the climatic effects of both emission change and potential afforestation differs among regions. In most parts of the temperate zone the cooling and moistening effects of afforestation are dominant during summer. These feedbacks can reduce the projected warming and drying especially in the northern part of Central-Europe and Ukraine. Whereas increase of the forest cover can enhance the climate change signal for precipitation in some part of Spain, Belarus and Russia but the magnitude of this impact is relatively small compared to the effect of the emission changes. Thus the analysis of the magnitude of the climatic feedbacks of afforestation relative to the effects of the enhanced greenhouse gas emission can help to determine the regions, where forests can play an important role in altering the climate change signal.The regional characteristics of the effect of the assumed potential afforestation on temperature and precipitation have been analysed for three selected regions would be larger than its decrease due to the enhanced greenhouse gas emission Figure\u2009. Thus thIn the region of Northern France, the precipitation decrease based on the A2 emission scenario is projected to be larger \u221226%; -69\u2009mm). If emission changes occurred together with potential afforestation, the half of the original climate change signal could be relieved of temperature show that distributions of the daily temperature means are shifted towards the warmer direction under future climate conditions Figure\u2009. The PDFIn each of the selected regions the total number of warm extremes are projected to increase significantly at the end of the 21st century and an increase of the summer precipitation sum (up to 50\u201360\u2009mm).\u2022 For precipitation, the climate change mitigating effects of afforestation differs among the selected regions. In the northern part of Germany the increase of forest cover would fully compensate the projected climate change signal. In Northern France the precipitation decrease based on the A2 emission scenario is projected to be larger than in Northern Ukraine. In both regions the half of the climate change signal could still be relieved assuming potential afforestation.\u2022 In each of the selected regions increase of forest cover may contribute to the decrease of the variability of the daily temperature means, thereby to the reduction of the projected climate change signal. 
The strong increase of the number of warm extremes due to emission change can be slightly reduced by the assumed potential afforestation.\u2022 In Northern Germany and France, the forest cover increase would enhance the effects of emission change on extreme precipitation, resulting in more severe heavy precipitation events. The probability of dry days would decrease.The magnitude of the possible climate change reducing effects of a potential afforestation for Europe, on regional scale, for longer future time period have not assessed before. Based on the simulation results it can be concluded that large, contiguous forest blocks can have distinctive biogeophysical effect on the climate on regional and local scale. Our land cover change study confirm that in smaller areas the biogeophysical feedback processes can significantly affect and modify the weather and climate, the temperature and precipitation variability,46. The For the introduced sensitivity study, one regional climate model has been applied driven by one emission scenario. Multimodel ensembles and intercomparison studies are needed for studying the robustness of the results, which is the aim of recent EU-projects . The sp2 concentrations can also lead to the increase of the stomatal resistance thereby to the inhibition of the transpiration, which can amplify the global warming[Our sensitivity study focused on the biogeophysical feedbacks, the biogeochemical interactions, the processes related to the carbon sequestration of forests and soil were not taken into account. In the temperate zone, net climatic effects of forests are determined by various contrasting feedbacks. In case warming,52. TherFrom a practical point of view, results of this case study related to the investigation of the climate sensitivity due to a hypothetic land use change and its regional differences can contribute to the future adaptation strategies in European agriculture and forestry. The understanding of the role of land cover in the climate system becomes even more important. Land cover characteristics due to climatic conditions as well as policy induced land management are region-specific. The sign and magnitude of the climatic effects of afforestation and emission change also shows large spatial differences. Therefore, to obtain regional scale information, similar fine scale case studies are essential to quantify and predict the climatic effectiveness of the different land cover and land use practices.REMO is aThe simulations have been carried out for Europe Figure\u2009, with 0.The following experiments have been performed and analysed Table\u2009:Reference simulation for the past (1971\u20131990) with present (unchanged) forest cover.\u2022 Emission scenario simulation for the future (2071\u20132090) with unchanged forest cover applying the A2 IPCC-SRES emission scenario . This eEmission scenario simulation with potential afforestation for 2071\u20132090. The potential afforestation map have been recalculated and reaggregated for all model grid cells. Figure\u2009The analyses of the simulation results focused on the summer months , because of the high radiation input, intense heat and mass exchange. 
The leaf area index of the deciduous forests reaches its maximum in this period, which has a strong control on the land-atmosphere interactions.The sign and the magnitude of the temperature and precipitation changes have been analysed for the following three cases:\u2022 Climate change due to emission change has been investigated comparing the results of the simulations with unchanged land cover for 2071\u20132090 to 1971\u20131990.\u2022 Climate change due to potential afforestation have been calculated comparing the simulation results with- and without forest cover increase for the future time period (2071\u20132090).\u2022 Climate change due to emission change and potential afforestation has been determined comparing the results of the potential afforestation experiment (2071\u20132090) to the reference study in the past (1971\u20131990) without land cover change.2).A Mann\u2013Whitney U-Test was applThe probability distribution of temperature has been calculated from the daily mean values in the investigated 20-year time periods based on the normal distribution function. The indices of temperature and precipitation extremes in this study were selected from the list of climate change indices recommended by the World Meteorological Organization\u2013Commission for Climatology (WMO\u2013CCL) and the Research Programme on Climate Variability and Predictability (CLIVAR). The seThe authors declare that they have no competing interests.BG carried out the simulations, analyzed and interpreted the results and drafted the manuscript. GK provided the forest cover database and map for the potential afforestation case study. KS and CT provided expertise and guidance during the simulations. AH contributed to the statistical analysis and to the interpretation of the results. DR and SH participated in the design of the study and have been involved in the discussion of the results and the critical revision of the manuscript. DJ coordinated the research, participated in the design of the study and has given final approval of the version to be published. All authors read and approved the final manuscript."} +{"text": "Dorsal dislocation of the intermediate cuneiform and isolated medial cuneiform fracturesare rare injuries. In this report, we present a patient who sustained a dislocation of theintermediate cuneiform and describe predisposing factors and the treatment procedure. Dorsal dislocation of the intermediate cuneiform is a rare injury, and only a few cases have been reported \u20133. The iA 30-year-old woman sustained an injury to her right foot when she was walking in high-heeled shoes and fell down the stairs with her foot in an equinus and inversion position. The patient complained of severe pain and was unable to bear weight in her right foot. The initial clinical examination of her foot revealed swelling and tenderness at the dorsum of the midfoot without an open wound. There was no vascular compromise, and sensation was preserved. Plain radiographs showed dorsal dislocation of the intermediate cuneiform bone and a nondisplaced fracture at the medial cuneiform . A compuThe three cuneiforms are wedge shaped and sit in the middle of the medial column of the foot. They are part of the transverse and medial longitudinal arches of the foot. The intermediate cuneiform is the smallest. Each cuneiform articulates with one third of the distal navicula proximally and its respective metatarsal distally . The staIsolated intermediate cuneiform dislocation was first described by Clark and Quint in 1933 . 
BecauseIn this case, closed reduction under general anesthesia failed, and we had to reduce openly. Five of the 6 reported cases with intermediate cuneiform dislocation were treated by open reduction , 2. Alth"} +{"text": "The sphenoid sinus occupies a central location in transsphenoidal surgery (TSS). It is important to identify relevant anatomical landmarks to enter the sphenoid sinus and sellar region properly. The aim of this study was to identify anatomical landmarks and their value in single-nostril endonasal TSS. A retrospective study was performed to review 148 cases of single-nostril endonasal TSS for pituitary lesions. The structure of the nasal cavities and sphenoid sinus, the position of apertures of the sphenoid sinus and relevant arteries and the morphological characteristics of the anterior wall of the sphenoid sinus and sellar floor were observed and recorded. The important anatomical landmarks included the mucosal aperture of the sphenoid sinus, a blunt longitudinal prominence on the posterior nasal septum, the osseocartilaginous junction of the nasal septum, the \u2018bow sign\u2019 of the anterior wall of the sphenoid sinus, the osseous aperture and its relationship with the nutrient arteries, the bulge of the sellar floor and the carotid protuberance. These landmarks outlined a clear route to the sella turcica with an optimal view and lesser tissue damage. Although morphological variation may exist, the position of these landmarks was generally consistent. Locating the sphenoid sinus aperture is the gold standard to direct the surgical route of TSS. The \u2018bow sign\u2019 and the sellar bulge are critical landmarks for accurate entry into the sphenoid sinus and sella fossa, respectively. The pituitary gland is located below the center of the brain and over the sella on the cerebral surface of the body of the sphenoid. The sphenoid contains two sinuses, which open into the roof of the nasal cavity via the apertures on the posterior wall of the sphenoethmoidal recess directly above the turbinates. Since only thin layers of bone separate the sphenoid sinuses from the nasal cavities below and the sella turcica above, transsphenoidal surgery (TSS) is the first choice option for the removal of pituitary lesions rather than the transcranial approach.The transsphenoidal approach has evolved considerably since it was first successfully performed by Schloffer in 1907 . Since tHowever, the surgical path of TSS is extremely deep and narrow, and the view is usually blocked by crucial neurovascular structures. In addition, the close proximity of the sphenoid sinus to the carotid artery and the optic canal, plus the high levels of variation between the anatomical structures of the sphenoid sinus and sellar floor, make the approach even more difficult, hence the success of the treatment greatly relies on the experience of the surgeon and the familiarity with anatomical landmarks through the surgical route. To date, the methods and techniques of TSS adopted by different surgeons with respect to surgical guidance and important landmarks vary significantly. Our knowledge regarding the anatomical structures relevant to TSS is mainly based on postmortem or imaging studies \u201313. HoweThis study retrospectively reviewed 148 surgical records of single-nostril endonasal TSS for sellar lesions performed in our department in the period between May 2002 and February 2008. The patients included 78 males and 70 females with a mean age of 39.2 years . 
Preoperative magnetic resonance imaging (MRI) was performed for all patients to assess variations of the sphenoid bone, sphenoid sinus and sellar floor. Postoperative histopathological examination confirmed the diagnosis of pituitary lesions. The study was approved by the ethics committee of Fuzhou General Hospital . Written informed consent was obtained from all participants.Patients lay in the supine position with the head extended by 20\u00b0. Surgeons were positioned directly behind the patient\u2019s head. The microscope was orientated perpendicularly to the surface of the surgical floor first and then later adjusted towards the mucosal aperture of the sphenoid sinus. All surgeries were performed via a unilateral endonasal transsphenoidal approach . This roAn endoscope was used in 26 patients to examine and identify the nasal structures and the mucosal aperture of the sphenoid sinus. Under an operating microscope, the sphenoid sinus was approached either by expanding the aperture or by incising the ipsilateral mucoperiosteum at the posterior third of the nasal septum, fracturing the vomer and separating the bilateral mucoperiosteum to finally expose the anterior wall of the sphenoid sinus, followed by an anterior sphenoidotomy. The sphenoid septum was then excised, the orientation of the sellar floor was determined and the bony sellar floor and dura were opened to approach the pituitary gland and lesion. After removing the pituitary lesion, the dural defect of the sellar floor was closed with a small piece of autologous muscle harvested from the thigh and coated with fibrin glue. In a few difficult cases, neuronavigation was employed to guide access to the sella turcica.In another 63 patients, the surgical procedure was similar to method A. However, the mucoperiosteal incision was made on the posterior nasal septum and then the perpendicular plate of the ethmoid bone was fractured and pushed to the opposite side before performing an anterior sphenoidotomy.In the final 59 patients, the mucoperiosteal incision was made at the osseocartilaginous junction of the nasal septum (\u223c3 cm from the naris). The cartilaginous nasal septum was pushed to the opposite side and the perpendicular plate of the ethmoid bone was excised to expose the anterior wall of the sphenoid sinus, followed by an anterior sphenoidotomy. The rest of the procedure was identical to the previous two methods.The structure of the nasal cavity and sphenoid sinus, position of the apertures of the sphenoid sinus and relevant arteries and the morphological characteristics of the anterior wall of the sphenoid sinus and sellar bulge were observed and recorded. Nasal structures and anatomical anomalies that would affect the surgical approach were photographed and recorded.Comparing the three surgical methods, the approach with the mucoperiosteal incision made at the osseocartilaginous junction of the nasal septum provided a greater surgical view compared with the other two methods. The most common pituitary lesion was pituitary macroadenoma, occurring in 70.9% of patients . There wThe most important landmark in the nasal cavity is the mucosal aperture of the sphenoid sinus, which could be observed under the microscope in 79 patients (53.4%) after pushing the middle turbinate laterally . HoweverIn this region, the first landmark noted was the osseocartilaginous junction of the nasal septum, which could be observed after the septal mucoperiosteum was opened 3 cm from the naris . 
As illuThe position, thickness, deviation and degree of development of the sphenoid septum varied greatly, which should be identified with the aid of preoperative MRI. In our study, most patients had only one sphenoid septum but multiple sphenoid septa were observed in a few cases . The impThe outcome of TSS is related to the proper removal of lesions and the protection of normal pituitary gland tissue. As shown in In the present study, we delineated important anatomical landmarks for endonasal TSS, including the mucosal aperture of the sphenoid sinus, a blunt longitudinal prominence on the posterior nasal septum, the osseocartilaginous junction of the nasal septum, the \u2018bow sign\u2019 of the anterior wall of the sphenoid sinus, the osseous aperture and its relationship with nutrient arteries, the bulge of the sellar floor and the carotid protuberance. These landmarks outline a clear route to the sella turcica providing the optimal view and causing less tissue damage. Based on these landmarks, we successfully accessed the sella turcica and dissected pituitary lesions in all patients without any assistance from intraoperative CT scan and fluoroscopic navigation.et al produced a spheno-sellar point and a spheno-nostril line to guide the head positioning for TSS (et al studied dry skulls and found that the location of apertures varied greatly (et al (et al (et al (et al (Several postmortem and imaging studies attempting to illustrate anatomical landmarks for TSS have been conducted previously . Using c for TSS . However greatly . Tatreauy (et al reportedl (et al used CT l (et al . In the l (et al .et al (et al (Over the last 10 years, endoscopic techniques have seen a marked development and led to a trend in transsphenoidal surgical approaches. However, the preference of endoscopic TSS or microscopic TSS depends on the technological refinements and economical restraints. Although an endoscopic approach permits a better view in the sphenoid sinus and the parasellar region , it cannet al reviewedet al . It shoul (et al ,22, it oAlthough the aforementioned anatomical landmarks are useful to guide the surgical procedure, neuronavigation techniques are invaluable in evaluating the variation of sphenoid sinus and the sellar region, especially in those with residual or recurrent masses in the setting of previous TSS, which may inevitably alter the normal anatomical structure of the skull base . In the Since we mainly focused on endonasal TSS, we did not compare our findings with those using another surgical approach. We believe, however, that the anatomical landmarks for endonasal TSS are also applicable to other approaches.Locating the sphenoid sinus aperture is the gold standard to direct the surgical route of TSS. The \u2018bow sign\u2019 and the sellar bulge are critical landmarks for the accurate entry into the sphenoid sinus and sella fossa. These landmarks outline a clear route to the sella turcica with the optimal view causing less tissue damage. The application of these landmarks will aid the reduction of complications and improvement of outcomes of TSS."} +{"text": "Recently, significant advances have been made in understanding the aetiology of adolescent idiopathic scoliosis (AIS) in the area of genetic inheritance. Essential to the success of these studies is the finding of large families with several cases of AIS from which blood samples can be collected and analysed. 
The emirati culture is unique and encourages large families with many children, often several wives with the same husband, and many cases of consanguinity especially among first cousins. This complex family structure might very well be suited to provide more appropriate examples for the study of genetic heritage in AIS than the more typical western family.The genealogical history of 15 extended emirati families was collected for 4 generations which would approximate that required for the study of AIS. The number of children in each family and the incidence of consanguinity among first cousins were calculated. These data were compared with similar, published data for western families.The average number of children per family among the emirati population was 6.29 (often >10), approximately 3x that of the average western family. In each family there were several examples of multiple wives and first cousin marriages. Two families were found in which at least one person had AIS.These results suggest that the unique, well controlled structure of the emirati families lend themselves to being excellent for extensive study of the genetic inheritance aspects of the aetiology of AIS because of the greater familial component."} +{"text": "Sensory innervation to the scrotum arises from the genital branch of the genitofemoral nerve, travelling with the spermatic cord through the Inguinal canal en route to the scrotum. It lies immediately lateral to the spermatic cord as it emerges from the superficial Inguinal ring and Is involved in the efferent arm of the cremasteric reflex. This causes distortion and apparent shrinkage of the scrotal surface area, and ascent of the Ipsilateral testis. Many surgeons use local infiltration rather than regional anaesthesia. The scrotal anatomy makes it favourable for regional local anaesthetic nerve blocks to be used when repairing or Incising the scrotal skin.The spermatic cord Is identified immediately lateral to the pubic tubercle. The area for injection, including the scrotum, Is sterilised. The spermatic cord is then stabilised and medlallsed using the nondominant hand, and 5ml of 1% lldocaine Is injected subdermally, Immediately lateral to the cord, superficial to the bone.The genitofemoral nerve block provides hemiscrotal anaesthesia, allowing painless manipulation and Intervention In an area that is prone to changes In texture and superficial skin anatomy. This method of regional anaesthesia thereby eliminates problems with handling scrotal skin during the time of anaesthetic infiltration, which may occur on stimulation of the cremasteric reflex. This method also minimises the risk of injury to the male genitalia."} +{"text": "SSIs are one of the most frequent nosocomial infections. To monitor and reduce SSI-rates a good surveillance is crucial. For optimal information, surveillance of incidence of SSIs is preferred above surveillance of prevalence of SSIs. Incidence surveillance however is time consuming.To investigate whether the prevalence of SSIs can be used to adequately predict the incidence of SSIs (cumulative incidence surveillance).Data were derived from the Dutch surveillance network for nosocomial infections (PREZIES) from 2007 to 2011. The suitability of the Rhame and Sudderth method to estimate incidence of SSIs from prevalence of SSIs was assessed. Also incidence data were used to simulate prevalence data, and prediction models were developed to predict incidence from prevalence and from other relevant variables. 
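For readers unfamiliar with the Rhame and Sudderth conversion assessed here, a minimal sketch of its usual formulation is given below. The variable names follow the abbreviations used in this abstract (LN and INT) together with the mean length of stay of all patients (LA); the numerical values are purely illustrative assumptions and not PREZIES data.

def rhame_sudderth_incidence(prevalence, la, ln, interval):
    """Estimate cumulative incidence from point prevalence using the
    commonly cited Rhame-Sudderth relation I = P * LA / (LN - INT), where
    LA  = mean length of stay of all patients,
    LN  = mean length of stay of patients with an SSI,
    INT = mean interval between admission and onset of the SSI.
    If postdischarge surveillance makes LN - INT small or negative
    (as reported above), the estimate becomes unstable or negative."""
    return prevalence * la / (ln - interval)

# Illustrative numbers only:
print(rhame_sudderth_incidence(prevalence=0.04, la=7.0, ln=21.0, interval=9.0))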
Several statistical indices were used to evaluate the performances of the models.Use of the Rhame and Sudderth method to estimate incidence resulted in most estimated incidence rates becoming negative values (below zero). Simulating prevalence from incidence data showed large variation in prevalence depending on the day of measurement. The predictive model best predicting incidence, with a proportion explained variance of 0.31, was the model including the mean length of hospitalization of patients with an SSI (LN), the mean interval between admission and onset of the SSI (INT) and hospital (as random effect). Adding prevalence to the prediction model did not improve the model.It proved not reliable to directly convert prevalence into incidence using the Rhame and Sudderth method. The negative estimated incidence values were the result of the postdischarge surveillance mandatory for the SSI-surveillance in the Dutch surveillance network. Also the simulations and the results of the prediction model indicate that with the current data available it is not possible to accurately predict cumulative incidence of SSIs in Dutch hospitals using point prevalence data.None declared."} +{"text": "The third Thematic Series on flow chemistry published in the Beilstein Journal of Organic Chemistry demonstrates the emerging importance of transforming chemical synthesis in the laboratory from a classical batch approach to continuous processes by using micro- and miniaturized flow reactors. In the past two decades this technology has seen a dramatic increase of visibility. Considering an analyses of the accompanied developments in flow synthesis one has to acknowledge that the topic has shifted among different disciplines with a variety of intensities and focus.In the late eighties and nineties, the idea of miniaturising continuous chemical processes was mainly pursued by chemical engineers. They were fully aware of the quest for an intensification of the process and the advantages associated with continuously operated chemical processes. The benefits of miniaturizing flow systems are evident when considering the excellent heat and mass transfer properties of these small technical devices. Chemical engineers developed beautifully designed reactors, mixers and interfaces for the online monitoring of continuous processes. The major inspiration came from the process and development units in the chemical industry, where continuously operated pilot plants already played a key role. Conceptually, highly modular microreaction systems developed by Ehrfeld Mikrosystem BTS and by CPC (Cellular Process Chemistry Systems) are marvelous examples of these engineered driven achievements.In the late nineties, organic chemists from both industry and academia, which included our group, became involved in the use of microreactors and provided a myriad of synthetic examples. The combined work of experts from engineering and chemical synthesis was highly fruitful and is so until today. This combination of expertise has catalyzed the development of microreactor technology in the applied context of synthesis and production.Chemical engineers depend on input from chemists and synthetic chemists will only advance the field of miniaturized flow synthesis if they are aware of the technical and engineering aspects. This includes the quest for developing analytical devices for online monitoring and feedback loops for optimising synthetic protocols.Several of these aspects can be found in this third Thematic Series on flow chemistry. 
I am thankful to all my colleagues who contributed with their excellent research to this issue. The Beilstein Team is acknowledged for the handling of the manuscripts and referee reports in a very pleasant and professional manner.Andreas KirschningHannover, August 2013"} +{"text": "The onset of double diffusive convection in a viscoelastic fluid-saturated porous layer is studied when the fluid and solid phase are not in local thermal equilibrium. The modified Darcy model is used for the momentum equation and a two-field model is used for energy equation each representing the fluid and solid phases separately. The effect of thermal non-equilibrium on the onset of double diffusive convection is discussed. The critical Rayleigh number and the corresponding wave number for the exchange of stability and over-stability are obtained, and the onset criterion for stationary and oscillatory convection is derived analytically and discussed numerically. The problem of double diffusive convection in porous media has attracted considerable interest during the past few decades because of its wide range of applications, including the disposal of the waste material, high quality crystal production, liquid gas storage and others.Early studies on the phenomena of double diffusive convection in porous media are mainly concerned with problem of convective instability in a horizontal layer heated and salted from below. The double-diffusive convection instabilities in a horizontal porous layer was studied primarily by Nield On the other hand, viscoelastic fluid flow in porous media is of interest for many engineering fields. Unfortunately, the convective instability problem for a binary viscoelastic fluid in the porous media has not been given much attention. Wang and Tan In present research, we perform the linear stability of double diffusive convection in a viscoelastic fluid-saturated porous layer, with the assumption that the fluid and solid phases are not in local thermal equilibrium (LTE). The effects of parameters of the system on the onset of convection are discussed analytically and numerically. The critical Rayleigh number, wave number and frequency for exchange of stability are determined.We consider an infinite horizontal porous layer of depth Assuming slow flows in porous media, the momentum balance equation can be linearized asFor general viscoelastic fluids, the constitutive relations between stress tensor We assume that the diffusion of temperature obeys the following equations, which is a non-equilibrium model between the solid and fluid phases, suggested by The onset of double diffusive convection can be studied under the Boussinesq approximation and an assumption that the fluid The basic state is assumed to be quiescent and we superimpose a small perturbation on it. We eliminate the pressure from the momentum transport equation(4) and define stream function Then the following dimensionless variables are defined asHere the symbol \u201cHence the boundary conditions areIn this section, we discuss the linear stability of the system. According to the normal mode analysis, the Eqs.(10)\u2013(13) is solved using the time dependent periodic disturbances in a horizontal plane. 
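The perturbation expressions referred to in the following sentence did not survive extraction; as a generic placeholder, the normal-mode form typically assumed in this kind of linear stability analysis (a standard sketch, not necessarily the exact expression used by the authors) is

\[
(\psi,\;\theta_f,\;\theta_s,\;\phi) \;=\; \bigl(\Psi(z),\,\Theta_f(z),\,\Theta_s(z),\,\Phi(z)\bigr)\, e^{\,i a x + \sigma t},
\qquad \Psi,\Theta_f,\Theta_s,\Phi \;\propto\; \sin(\pi z),
\]

where a is the dimensionless horizontal wave number and \sigma the growth rate; stationary onset corresponds to \sigma = 0 and oscillatory onset to \sigma = i\omega with \omega real.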
We assume that the amplitudes are small enough, so the perturbed quantities can be expressed as followsWhere Since The steady onset corresponds to This result is obtained by Banu and Rees Further Eq.(20) can be written asIn the absence of the solute effect, Eq.(21) reduces toThe value of Rayleigh number Case 1: For very small values of When the value of H is very small, the critical value of the Rayleigh number To minimize We also expand Substituting Eq.(26) into the Eq.(25), and rearranging the terms and then equating the coefficients of same powers of H will allow us to obtain the Substituting these values of Case 2: For very large values of For the very large values of Letting Similarly, we expand Substituting Eq.(30) into the Eq.(29), we getThen, substituting these values of For oscillatory onset Effect of different values of conductivity ratio The variation of conductivity ratio on the critical Rayleigh number for stationary mode with the heat transfer coefficient for different values of conductivity ratio is shown in The effect of relaxation time on the neutral curves is shown in From The stationary Rayleigh number increases with an increase in the value of heat transfer coefficient In The effect of Lewis number The onset of double diffusive convection in a binary Maxwell fluid, which is heated and salted from below, is studied analytically using using a thermal non-equilibrium model. Based on the normal mode technique, the linear stability has been studied analytically. The effects of relaxation time, heat transfer coefficient, normalized porosity parameter and other parameters on the stationary and oscillatory convection are discussed and shown graphically. It is found that the increasing relaxation time increases the elasticity of a viscoelastic fluid thus causing instability. The asymptotic solutions for both small and large values of"} +{"text": "The ultimate goal of the endodontic treatment is to render the root canal system bacteria-free and to prevent the invasion of bacteria and their byproducts from the root canal system into the periradicular tissues.This special issue presents current research addressing the newest approach to the diagnosis and treatment of the endodontic disease.Fast development of new materials and procedures gave rise for better preparation and filling techniques and enabled higher standards and better quality of root canal treatment. Innovative approach to the anatomy of the roots allows for the predictable preparation of the root canal system. Evaluation of the latest root canal filling materials and restoration techniques helps elucidate factors responsible for the long-term survival of the endodontically treated teeth. The latest molecular biology techniques may help clarify the composition of microbiota in infected root canals and help in the development of new direction in nonsurgical root canal treatment.Igor TsesisIgor TsesisSilvio TaschieriSilvio TaschieriIris Slutzky-GoldbergIris Slutzky-Goldberg"} +{"text": "Cetaceans have long been considered capable of limiting diving-induced nitrogen absorption and subsequent decompression sickness through a series of behavioral, anatomical, and physiological adaptations. Recent studies however suggest that in some situations these adaptive mechanisms might be overcome, resulting in lethal and sublethal injuries. Perhaps most relevant to this discussion is the finding of intravascular gas and fat emboli in mass-stranded beaked whales. 
Although the source of the gas emboli has as yet to been ascertained, preliminary findings suggest nitrogen is the primary component. Since nitrogen gas embolus formation in divers is linked to nitrogen saturation, it seems premature to dismiss similar pathogenic mechanisms in breath-hold diving cetaceans. Due to the various anatomical adaptations in cetacean lungs, the pulmonary system is thought of as an unlikely site of significant nitrogen absorption. The accessory sinus system on the ventral head of odontocete cetaceans contains a sizeable volume of air that is exposed to the changing hydrostatic pressures during a dive, and is intimately associated with vasculature potentially capable of absorbing nitrogen through its walls. The source of the fat emboli has also remained elusive. Most mammalian fat deposits are considered poorly vascularized and therefore unlikely sites of intravascular introduction of lipid, although cetacean blubber may not be as poorly vascularized as previously thought. We present new data on the vasculature of air sinuses and acoustic fat bodies in the head of bottlenose dolphins and compare it to published accounts. We show that the mandibular fat bodies and accessory sinus system are associated with extensive venous plexuses and suggest potential physiological and pathological implications. Since the 1980s, numerous beaked whale mass strandings have been temporally and/or spatially associated with deployment of naval mid-frequency active sonar -type sequelae, suggesting that gas bubble formation may be at the root of some of the observed strandings by Houser et al. . Specimens used for describing the vascular anatomy were injected with contrast medium into either the venous system or both the venous and arterial systems, following the procedural methodology outlined by Holliday et al. . Prior t\u00ae software on a Gateway desktop with memory and processor upgrades. Following imaging and post-processing, all specimens were thawed and dissected to validate and/or clarify imaged structures. Although the focus of this study was to elucidate the venous morphology in regions of interest in the head of Tursiops, the authors felt that a comparative examination of deep diving odontocete cetaceans would be valuable given the association of deep divers and sonar-related strandings. The authors therefore opportunistically obtained specimens from pygmy and dwarf sperm whales , sperm whales , and Gervais\u2019 beaked whales (Mesoplodon europaeus). Some of the sperm whale and pygmy sperm whale specimens were of sufficient quality and from young enough animals that could fit into the CT gantry. Prior to vascular dissection, those specimens were imaged according to the aforementioned angiographic protocol and the data obtained was used to guide the dissections. As the focus of this study was on Tursiops, only cursory mention is made with respect to findings from the other species.Prior to injecting the vascular contrast material, 5\u2009mL balloon catheters were placed in the vessels to be injected and inflated until a good seal was formed. In specimens that were to be imaged via computed tomography (CT), the vascular system of interest received a mixture of liquid latex and barium sulfate suspension , while vessels only destined for dissection received pure latex . All specimens were refrigerated for 2\u2009days following injections, to allow the latex cast to cure, and if unpreserved with formalin, where subsequently frozen at \u221220\u00b0C. 
Specimens that were imaged via CT were scanned at the thinnest slice thickness possible based on the specimen length, and whenever possible the volumes were reconstructed to 0.5\u2009mm thickness to allow high resolution imaging of fine caliber vasculature. The resultant DICOM data was post-processed using AmiraWhat is known about the accessory sinus system of the cetacean head varies considerably by species, however delphinid and phocoenid species remain the best described. It is however clear that in all odontocete species there are extensive gas-filled sinuses on the ventral side of the skull. Interestingly, our preliminary research shows that the accessory air sinuses of deep diving odontocetes such as beaked whales, spermwhales, and pygmy and dwarf sperm whales are much larger, relatively, than those of even large delphinid species like pilot whales, and are invested with intricate and seemingly more voluminous venous arrangements. Much of the internal surface of these sinuses is lined with copious masses of convoluted, intercommunicating, and valve-less veins separated from the air spaces by walls so thin that they are translucent Figures and 5.Tursiops, these lobes are expanded and may meet dorsal to the eye as a supraorbital lobe. The presence of an air-filled sinus on the ventral aspect of the supraorbital process of the frontal bone dorsal to the eye necessitates a fenestration to bones of the basicranium. The sinus system exists as a bilaterally paired system of blind structures. These sinuses Figure may exteTursiops has a hamular lobe \u2013 within the hollowed hamulus of the pterygoid bone \u2013 and an anterior lobe. These two lobes are formed by an indentation of the pterygoid sinus, caused by the lateral laminae of the pterygoid bone and palatine bones. This bony structure is lacking in the pygmy sperm whale (Kogia breviceps) and thus, distinction of those lobes is meaningless. At its caudal end, the Pty sinus connects to three smaller sinuses, all of which are relatively close to the region occupied by the ear, the TMJ, and the attachment of the hyoid apparatus via the tympanohyal cartilage: the most lateral of these three sinuses (the middle sinus) is associated with the TMJ; a slightly more caudomedial sinus (posterior sinus) is associated with the tympanohyal joint and is bordered by the paroccipital crest; and a caudomedial sinus (peribullar sinus) that helps separate the tympanic bulla from the adjacent bones.The Pty sinus in Tursiops, have a deep indentation in the Pty sinus. In contrast, deep divers such as Kogia lack or have less distinct anterior and hamular lobes.Despite the variable geometry of the recesses and fossae of the sinus system, the bony associations of the sinuses follow some similar patterns among all odontocetes studied thus far. Delphinid species show marked similarity among the different species, while kogiids, physeteriids, and ziphiids all show similarity between themselves. In delphinids, the lateral wall of the pterygoid sinus is partly encased by the bony lateral laminae of the pterygoid and palatine bones. Conversely, in deep diving species like ziphiids, kogiids, and physeteriids, the lateral wall of the Pty sinus is composed entirely of soft tissue capable of collapsing onto itself. When manually manipulated, the Pty sinus is easily closed under the weight of the surrounding tissues, suggesting that maintaining it in an expanded state may require a degree of pressurization. 
Although the significance of these features is unknown, it is difficult to ignore the common threads among the non-delphinid deep diving cetaceans. Shallow divers such as Tursiops extend much farther into the orbit than those of Kogia; these lobes join distally to form the supraorbital lobe of the Pty sinus, on the ventral aspect of the supraorbital crest of the frontal bone.The sinuses of the accessory sinus system follow the contours of slight depressions , and extra-mandibular fat body (EMFB) Figures and 6\u20138 The melon fat is primarily drained by a multitude of veins that converge with the veins draining the rest of the tissues of the region , the external jugular vein of the bottlenose dolphin is the main drainage route of the pterygoid vasculature although the internal jugular vein can be the primary drainage The lateral branch of the facial vein The middle branch of the facial vein The third and most medial branch of the facial vein The first and most proximal contribution is formed by numerous anastomoses The second and third branches arise in common from a bifurcation of the distal internal jugular vein. The proximal of the two branches curves sharply rostrad to become the ventral petrosal sinus The third branch extends vertically from the bifurcation of the internal jugular vein, to enter the jugular foramen as the terminus of the internal jugular vein On its course to the rostroventral aspect of the brain case, the maxillary vein sends dorsorostrad oriented veins of a plexiform nature Before it breaks up into the countless veins that compose the pterygoid plexus and by Walmsley , ventral to the ventral petrosal sinus. This is an unusual location for the cavernous sinus.The paired cavernous sinuses of domestic mammals form a ring-like venous structure around the pituitary gland . The two lateral components are connected across the midline via structures much like the intercarvernous sinuses of domestic mammals. This venous loop is therefore consistent in location and drainage with the cavernous sinus of terrestrial mammals, and seems contradictory to the statements of Fraser and Purves could be erected by way of the internal carotid\u2026\u201d This appears to conflict with our findings as well as Boenninghaus\u2019 illustration that the corpuscavernosum is venous in nature, connected by ventral tributaries to the pterygoid and maxillary veins , anterior lobe venous plexus (purple), and IMFB plexus (yellow).Click here for additional data file."} +{"text": "The ribosome is a large ribonucleoprotein complex that carries out protein synthesis in all kingdoms of life by translating genetic information encoded in mRNA into the amino acid sequence of a protein. The nascent polypeptides escape the peptidyl transferase center through the ribosomal exit tunnel that spans the entire large subunit. The tunnel is involved in the control of co-translational protein folding processes, the regulation of elongation and the inhibition of the protein synthesis by antibiotics . Since tWe thus set out to analyze global and local flexibility characteristics of the ribosomal exit tunnel by constraint counting on topological network representations of large ribosomal subunits from four different organisms ,3. The a"} +{"text": "Neurotransmitter transporters of the SLC1 and SCL6 family are found on presynaptic neurons and on glia cells. The function of these transporters is the termination of neurotransmission by the rapid removal of the neurotransmitter molecules from the synaptic cleft. 
These transporters couple substrate transport to ion gradients of sodium and chloride. Almost all of the eucaryotic transporters have been described to function as oligomers. However, the forces stabilizing the oligomeric state are not well understood. No crystal structures of eukaryotic transporters are available, but recently crystal structures of bacterial homologs thereof have been solved: GltPh (SLC 1 family) was found as a trimer, LeuT (SLC6 family) was crystallized as a dimer. These homologous crystal structures allow rationalizing on the driving forces that stabilize the eukaryotic counterparts.The crystal structures of LeuT and GltPh were obtained from the Protein Data Bank (PDB). We identified the interfaces between the protomers and analyzed hydrogen bonding, hydrophobic and hydrophilic interactions as well as size and width of the interface area.We investigated the protein-protein interfaces between the transporter protomers and identified the dominant forces that stabilize the oligomer. These consist of hydrophobic interactions between the aliphatic side chains within the interface and of polar interactions by hydrogen bonds between hydroxyl groups.The contributions of different forces to the stability of oligomer assemblies vary between proteins. While the hydrophobic mismatch is a prominent contributor to the stability of the GltPh transporter, it plays a minor role for LeuT, where helix packing and aromatic interactions seem to dominate."} +{"text": "Eucalyptus trees (family Myrtaceae) are well-known for their high foliar content of several classes of secondary metabolites and these have a strong effect on the feeding patterns of several species of marsupials and at least some insects. Best known are the essential oils, which is mostly a mixture of terpenoids, but there are also significant concentrations of flavonoid and formylated phloroglucinol compounds. There is extensive quantitative and qualitative variation within and between species of Myrtaceae in these chemical groups and all appear to be under strong genetic control with heritabilities (2H) between 0.3 and 0.9. As well as being important ecologically, the terpenes in particular are valued as industrial and medicinal products and Australia supports a strong essential oil industry focused on Eucalyptus and Melaleuca foliar oils.Eucalyptus grandis genome provides the opportunity to discover the genetic makeup of the biosynthetic pathways for secondary metabolites. We present data from pathways leading into the biosynthesis of terpenes, flavonoids and lignins. The homology of genes and gene families were investigated and compared to a variety of other species including poplar (Populus trichocarpa), grape (Vitis vinifera) and apple . For example, terpene synthases has 120 members in the genome of Eucalyptus grandis, compared to 44 and 99 in poplar and grape, respectively studies that investigated variability in secondary metabolites and wood properties in eucalypts. This approach allowed the discovery of candidate genes for a large number of QTL.The Eucalyptus globulus, investigating 200 SNPs and roughly 40 traits ranging from terpenoids to terpene-adducts to flavonoids and to tannin-related traits. We discovered several significant trait associations between allelic variants in the chloroplastic MEP pathway and monoterpenes and between the cytosolic MVA pathway and sesquiterpenes, as well as one allelic variant in a prenyl pyrophosphate synthase that associates with the ratio of monoterpenes to sesquiterpenes. 
Loci with significant associations were mapped to the Eucalyptus grandis genome and compared to published QTL datasets that investigated similar traits. These results represent the first species wide analysis of the molecular basis of quantitative variation in secondary metabolites in any tree.Understanding the genetic basis of variations in quantitative traits provides insights into ecosystem function and at the same time may help breeders in the essential oil industry. We have characterized trait associations with polymorphisms from The publicly available genome sequence of Eucalyptus grandis is a great resource that can be applied to a variety of questions including the genetic make-up of gene families for biosynthesis of plant secondary metabolites, genome organization of these genes and evolution of traits such as resistance to herbivores or the ability to re-sprout after fire. Combining studies of association genetics, QTL studies together with the genome sequence helps to shed light on the underlying control mechanisms of phenotypic variation."} +{"text": "Prophylactic treatments of migraine are an important part of the management of the disease. Only one survey concerning this topic was performed in 2000 in France among GPs. This survey was however performed before the French migraine guidelines with a large use of dihydroergotamine at this time.To evaluate the practices of GPS in the North of France concerning the prophylactic treatments of migraine management and to compare them with the French Guidelines.A self-administered questionnaire concerning prophylactic treatments of migraine was mailed in 2011 to 307 GPs in two big cities in the North of France (Lille and Roubaix). We analysed the data and compared them with the French Guidelines.142 GPs answered to the questionnaire (46.2 %). 85% of GPs use prophylactic treatments of migraine when the patient has 4 to 5 migraine attacks per month. The first line treatment are beta-bloquers (BB) (60%). The first objective is to reduce the migraine attacks frequency by 50% at 3 months for 53% of the GPs and the second to increase the quality of life in 45%. 59% of GPs prescribe prophylactic treatment of migraine for a 6-12 months duration.GPS in the North of France take into account the bad quality of life of migrainers to start a prophylactic treatment. They use in majority the recommended prophylactic treatments of migraine and during a correct duration according the French Guidelines.There is a dramatic increase since 2000 of the use of BB in France as first line prophylactic treatment of migraine. French guidelines of migraine seems to be useful for GPs."} +{"text": "Protein-ligand complexes are often consulted for the understanding of binding modes and mechanisms of action as well as the development of novel drugs. Unfortunately the resolution of most x-ray structures is too low to resolve hydrogen atoms. However, hydrogen positions play a major role in the analysis of important interaction types as hydrogen bonding or metal interactions. Therefore, it is important to predict the orientation of hydrogen containing rotatable groups as well as sensible protonation and tautomeric states of both protein and ligand. 
While in most cases these degrees of freedom are still manageable for the protein and therefore incorporated in the models of the most common prediction tools for hydrogen placement -3, the cWe present a new method for the prediction of hydrogen positions in protein-ligand complexes that considers tautomeric variability of the ligand in addition to common degrees of freedom. Beginning with a random tautomeric state, different reasonable tautomers of the ligand are enumerated and their relative stability is estimated on the basis of a heuristic scoring scheme. This tautomerism model is integrated in the hydrogen placement application Protoss .Our approach permits an enhanced automatic prediction of hydrogen positions, especially for ligands that exhibit tautomers with similar stability but different interaction facilities. Furthermore we were able to reproduce the ligand tautomers that were proposed in several studies of tautomerism preferences in protein-ligand complexes."} +{"text": "The fossil record of Caviidae is only abundant and diverse since the late Miocene. Caviids belongs to Cavioidea s.s. The first two phases involve two successive radiations of extinct lineages that occurred during the late Oligocene and the early Miocene. The third phase consists of the diversification of Caviidae. The initial split of caviids is dated as middle Miocene by the fossil record. This date falls within the 95% higher probability distribution estimated by the relaxed Bayesian molecular clock, although the mean age estimate ages are 3.5 to 7 Myr older. The initial split of caviids is followed by an obscure period of poor fossil record (refered here as the Mayoan gap) and then by the appearance of highly differentiated modern lineages of caviids, which evidentially occurred at the late Miocene as indicated by both the fossil record and molecular clock estimates.A phylogenetic analysis combining morphological and molecular data is presented here, evaluating the time of diversification of selected nodes based on the calibration of phylogenetic trees with fossil taxa and the use of relaxed molecular clocks. This analysis reveals three major phases of diversification in the evolutionary history of Cavioidea The integrated approach used here allowed us identifying the agreements and discrepancies of the fossil record and molecular clock estimates on the timing of the major events in cavioid evolution, revealing evolutionary patterns that would not have been possible to gather using only molecular or paleontological data alone. Estimating the timing of evolutionary diversification events is the field of major interaction between paleontology and molecular biology. During the last two decades the alternative evolutionary timescales for different taxonomic groups were the focus of intense debates between paleontologists and molecular biologists Rodents provide an interesting case for analyzing the interaction between fossils and molecules, as this is a diverse group with a relatively complete fossil record. Rodents are the most diverse group of mammals at present, which include more than 2256 species representing 41% of all mammals Within caviomorphs, Cavioidea is crucial for understanding the diversification of South American rodents given that it includes the greatest morphological disparity clustering the extant Caviidae and Hydrochoeridae together with a diverse assemblage of primitive taxa of the extinct family Eocardiidae, given the presence of unique dental and mandibular modifications . 
Recent phylogenetic analyses of this group based on morphological characters s.s. but retrieved a paraphyletic arrangement of \u201ceocardiids\u201d as successive sister taxa of the crown-group comprised of cavies, maras, and capybaras.Different authors, however, have variously interpreted the taxonomic content of Cavioidea. The most inclusive and traditional conception of the group includes four families: Dasyproctidae (agouties), Cuniculidae (pacas), Caviidae (cavies and maras), and Hydrochoeridae (capybaras) s.s. has provided a wealth of new data to understand more thoroughly the relationships of the group, such as the close affinities of the capybara (Hydrochoerus) and cavies that have been corroborated in broad-scale phylogenetic analyses of hystricognath rodents Hydrochoerus within the multiple representatives of cavies and maras used in that study, being the sister group of the Rock cavy, Kerodon. This result was incongruent with traditional morphological classification schemes, but has been subsequently corroborated by phylogenetic studies based on morphological characters Regarding the relationships of the extant lineages, the availability of DNA sequences of extant species of Cavioidea sensu stricto represents the clade formed by Caviidae and its stem group , and Cavioidea as an even more inclusive group that also includes Cuniculidae and Dasyproctidae, the two other lineages leading to the extant pacas and agouties.Therefore the clade Caviidae can be applied to the cluster of the crown-group of three major living lineages: cavies (Caviinae), maras (Dolichotinae), and capybaras (Hydrochoerinae). These three major lineages of extant caviids are well differentiated from a morphological and ecological perspective. Cavies are usually small-bodied taxa that inhabit a variety of environments and they feed on diverse types of plants. Maras, instead, are much larger, adapted to cursorial habits with elongated hind limbs, and exclusively inhabit arid areas with coarse grass or scattered shrubs. Capybaras are not only the largest rodents alive but are also characterized by their highly apomorphic dentition and inhabits densely vegetated areas around freshwater bodies s.s. and its crown-group Caviidae, using the information of the fossil record and molecular clock estimates . The fossil record of Cavioidea s.s. includes a large diversity of extinct species recorded in South America since the late Oligocene (24.5\u201329 Ma). The fossil evidence indicates that Cavioidea s.s. diversified after the arrival of rodents in South America s.s.\u2013previous hypotheses on the group, including the traditional interpretation of gradual evolution, need to be revisited within an explicit phylogenetic context.The major focus of this contribution is the analysis of the timing and diversification patterns in the evolutionary history of Cavioidea s.s. provide an interesting test-case to evaluate the congruence between divergence estimates based on the fossil record and the molecular clock.Furthermore, the availability of molecular data for species of Caviidae also allowed exploring the divergence time of this clade using different approaches to the molecular clock s.s. 
Finally, we compare the timing of these events as inferred from the fossil record evidence and through the use of relaxed molecular clocks s.s.Here we present new phylogenetic results based on a morphological dataset that expands previously published evidence Eocardia spp., Schistomys, Matiamys) and in the alternative positions of the fragmentary crown caviid taxon Allocavia (see Document S1.doc). The reduced strict consensus tree of the MPTs pruning Allocavia resolves the interrelationships of the three major lineages of Caviidae: Caviinae, Dolichotinae, and Hydrochoerinae of the parsimony analysis of the combined dataset differ in the interrelationship of some fossil forms of stem cavioids . All these forms appear later in the fossil record, in early Miocene beds referred to the Colhuehuapian SALMA . This implies a previously undetected radiation of all basal lineages of Cavioidea s.s. that must have occurred at least by the late Oligocene. The optimization of character transformations in these basal nodes of Cavioidea s.s. indicates that major evolutionary novelties must have appeared in this late Oligocene radiation, including changes from a mesodont to a protohypsodont dentition and other dental modifications such as the appearance of cement and enamel discontinuities and the loss of fossettes/ids during ontogeny.The discovery of a basal cavioid radiation in the late Oligocene contrasts with previous interpretations about the early evolution of Cavioidea s.s., so it is possible that the actual diversification of this group predated the Deseadan SALMA (24.5\u201329 Ma). The pre-Deseadan record of fossil rodents in South America however provides relevant information to evaluate this possibility. The youngest pre-Deseadan rodent assemblage is known from La Cantera horizon of Central Patagonia . The La Cantera rodents include plesiomorphic taxa of Cavioidea (Dasyproctidae?) and representatives of Octodontoidea and Chinchilloidea s.s. in all these pre-Deseadan rodent faunas is compatible with the hypothesis of a basal radiation of this group close to or in the late Oligocene.The ghost lineages provide a minimum estimate for the age of these basal nodes of Cavioidea s.s. The most ancient records of euhipsodont cavioids come from the early Miocene Santa Cruz Formation of Central Patagonia and the five oldest euhypsodont species of Cavioidea s.s. . The sudden appearance of multiple euhypsodont lineages at this time suggests the occurrence of an evolutionary radiation that not only includes the five above mentioned taxa but also three ghost lineages leading to slightly younger taxa of the Colloncuran SALMA E. robertoi, and 3) the clade formed by E. robusta and more derived cavioids within this clade results in markedly suboptimal topologies .The euhypsodont condition represents the presence of teeth with continuous growth (lacking a root) and was one of the major evolutionary novelties in the history of Cavioidea oids see . Thus, uoids see have booThe exceptional Santacrucian fossil record and the sudden appearance of a high diversity of euhypsodont cavioids can be interpreted in two alternative ways. On the one hand, the fossil record could be actually capturing the early offshoots of a major radiation, characterized by the acquisition of a key evolutionary innovation (euhypsodonty), which has been interpreted as an adaptation to grazing. 
The acquisition of the euhypsodont condition may have been a critical innovation at this time given the major environmental changes recorded in Patagonia, which includes high volcanic activity related to the uplift of the Andes and b) that the three modern and morphologically distinct lineages of Caviidae were already present, abundant, and diverse about 6.1\u20139.07 Ma the initial split of Caviidae in three major lineages must have occurred at least 11.8\u201313.5 Ma . These estimates, however, were based on different DNA sequences and molecular clock methods so that it is difficult to compare their reliability and determine the causes of their differences.Previous molecular clock estimates on the time of diversification of Caviidae resulted in disparate dates Given the multiple nodes that are paleontologically dated by the phylogenetic study presented here , we testThe results of the analyses involving different number of calibration constraints and their prior probability distributions are summarized below for the time of diversification of Caviidae and the diversification of the major modern lineages of Caviidae see also and 3.P. pridiana; Laventan age; purple column in Owing to the incorporation of fossils in the phylogeny, these are the most tightly constrained analyses of the diversification timing of Caviidae . The estGalea. The age of one of these nodes was constrained by a prior distributions but the ages retrieved for the diversification of Dolichotinae lies outside the 95% HPD when this node is not calibrated is remarkably old, with mean ages of 14.78 and 17.45 Ma and 95% HPD the mean ages of these nodes fall within the Chasicoan SALMA see . However 95% HPD lying ouThese analyses retrieved disparate results on the time of diversification of Caviidae , the mean ages of the Dolichotinae and These results indicate a considerable degree of rate heterogeneity among groups of Cavioidea. The two oldest and most basal calibration constraints lead to inferences of a much slower rate of evolution than when the caviine calibration constraint (node 4) is used, and rate estimates derived from calibrations of node 3 are intermediate between these two extremes. Consequently, estimates on the age of Caviidae and on the radiation of the major modern lineages of caviids based only on the basal calibrations are much older than those retrieved when node 4 (or node 4 and node 3) is used for calibrating the relaxed molecular clock . This sesensu stricto and the crown-group Caviidae. The cumulative number of lineages (counting those leading to both extinct and living taxa) plotted across time reveals the diversification events of this group inferred from fossils and the molecular clock and is recognized mainly by the four ghost lineages of forms that appear later in the fossil record . Only two species are known up to date from this age, which provide the minimum estimate for the age of basal nodes of Cavioidea s.s. Although a more gradual diversification of this group might have occurred before the Deseadan SALMA, the older rodent assemblages at Tinguiririca and La Cantera lack members of this clade. An initial radiation during the late Oligocene fits the available fossil data and the morphological phylogeny.As noted above, the Deseadan radiation 1 in involvesThe Santacrucian radiation 2 in is the eThe diversification of the crown-group Caviidae , as shown by the diversification plot . 
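A cumulative lineage count of the kind shown in the diversification plot discussed above can be assembled directly from first-appearance ages. The sketch below does this for a hypothetical table of taxa and ghost lineages; the ages are invented placeholders, not the dataset of this study.

import numpy as np

# First-appearance ages in Ma for observed taxa plus inferred ghost lineages
# (purely illustrative values):
first_appearances = np.array([26.0, 25.5, 24.5, 17.5, 17.5, 16.5, 15.5, 13.5, 11.8, 9.0, 6.1])

def lineages_through_time(ages_ma, grid=None):
    """Cumulative number of lineages present at each time point, counting a
    lineage from its first appearance (or ghost-lineage origin) onwards."""
    ages_ma = np.asarray(ages_ma)
    if grid is None:
        grid = np.linspace(ages_ma.max(), 0.0, 200)  # from oldest age to the present
    counts = np.array([(ages_ma >= t).sum() for t in grid])
    return grid, counts

times, n_lineages = lineages_through_time(first_appearances)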
Alternatively, the Colloncuran Guiomys unica can be placed within Caviidae with a single extra step (pushing back the time of diversification of Caviidae to the Colloncuran and decreasing the discrepancy with the molecular clock estimates).The uncertainty on the relaxed molecular clock estimates is however relatively large, and the 95% HPD and new remains of incompletely known taxa , as well as new sequences to base molecular clock estimates on a more extensive dataset are all necessary steps to solve this apparent discrepancy .The molecular clock estimates, therefore, indicate there is a high probability that the fossil record is currently missing the first few million years of caviid evolution, although the breadth of the 95% HPD and the possible alternative positions in slightly suboptimal trees of some key fossils indicates the two sources of information are not yielding entirely incompatible estimates. Further studies on some of these fossils or as a crown caviid that bears plesiomorphic dental characters that resemble the condition of basal hydrochoerines but is markedly different from the highly modified condition of more derived hydrochoerines from the late Miocene . The scarce available evidence suggests that the initial diversification of Caviidae can be of substantial duration, and many times they do overlap with the first appearance datum of fossil taxa, as in the case of the initial diversification of Caviidae and the Laventan age. In many instances, therefore, the discrepancies between the molecular clock and the fossil record disappear when then 95% HPD are considered.Microcavia+Cavia). A striking difference exists between the age inferred by the molecular clock when this node is not calibrated and the first appearance of members of this clade in the fossil record. As noted above, the molecular clock estimates are likely too old and suggest the fossil record of caviine rodents would be missing 60% of its evolution. We suggest it is more likely this caviine lineage has a higher evolutionary rate in comparison with other cavioid rodents (at least for these genes). Caviines have a reduced body size (and related life-history traits such as shorter generation times) in comparison with other cavioid rodents , providing another case of correlation between high evolutionary rates and small body size if this hypothesis is correct. More data are needed to test this correlation, as well as to provide reliable molecular clock estimates for Caviinae. New data must include both a more extensive taxon sampling among caviines for these sequences and further studies on fossil caviines to provide alternative calibration points within this clade.One of the interesting outcomes of the 30 different molecular clock analyses conducted here is the identification of the sensitivity of the results to the inclusion or exclusion of node 4 comes from northern South America .Estimating the divergence time of clades based on phylogenetic studies is the area of most intense interaction (and conflict) between paleontology (providing fossils with dates) and molecular systematics (providing molecular clock estimates), as both areas provide information for understanding the tempo and mode of the evolution of a group. Recent efforts and progress have been made to incorporate different kinds of uncertainties to both molecular clock methods s.s. 
These analyses result in a global picture on the evolutionary history of Cavioidea s.s., including the origins and diversification of Caviidae, one of the most remarkably disparate lineages of living rodents. Three major evolutionary phases are recognized in the history of the group. The first two were radiations of basal forms that acquired the dental hallmarks of the groups: the appearance of protohypsodont forms in the Deseadan radiation of Cavioidea s.s. and of euhypsodont cavioids in the Santacrucian radiation, the evidence coming mostly from the Patagonian fossil record. The third phase was the diversification of Caviidae, which seems to have occurred in two temporally discrete episodes, the initial split of the group in three major lineages and the subsequent diversification of its modern clades, which are highly differentiated morphologically and ecologically.Our study integrates morphological and molecular data gathered from extinct and extant taxa into a phylogenetic analysis of Cavioidea A general agreement exists on the divergence dates estimated from molecular data and the fossil record. Molecular clock estimates places the origin of most modern lineages of caviids close to the Chasicoan, which coincides with the earliest appearance in the fossil record of the modern caviid lineages, which are characterized by remarkably distinct body plans, body size ranges, and ecological roles .However, the timing of the initial diversification of Caviidae was detected as the major discrepancy. The initial split of Caviidae is inferred to occur at Laventan times by paleontological evidence or perhaps as much as 7 million years earlier using a relaxed molecular clock. However, the uncertainties of the paleontological and molecular estimates reveal that more data is needed to solve this apparent conflict between the fossil record and the molecular clock.From a paleontological point of view, a more extensive knowledge of pre-Laventan faunas are critical to clarify the time and place of the initial diversification of Caviidae. Although the record of pre-Laventan faunas is geographically extensive, rodent faunas from these ages are all restricted to the southern half of South America , limiting our ability to localize the group's origin.s.s.Finally, increasing the amount of molecular data (taxon and gene sampling) is also needed to achieve a more robust phylogenetic framework for caviid evolution and to generate more robust molecular clock inferences. The prospective integration of new sources of evidence into an integrated approach will be unavoidable steps for understanding the evolutionary history of Cavioidea s.s. that were selected as the ingroup. The dataset include all the known stem-group fossil taxa (\u201ceocardiids\u201d), at least one representative of each extant genus of Caviidae, and nine extinct species of caviids (see Document S2). Outgroup taxa included representative species of Dasyproctidae, Cuniculidae, and Echimyidae, the latter of which was used to root the topologies (see Document S2). The character sampling is based on 69 craneo-mandibular and 27 dental characters (see Document S3).The morphological dataset was expanded from a recent study Ghr), intron 1 of transthyretin (Tth), 12 subunit ribosomal RNA (12s), and cytochrome b (cyb). Sequences of these genes were available for nine extant representatives of Caviidae and the three outgroup taxa (see Document S2 for GenBank accession numbers). 
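Because the fossil taxa contribute no sequence data, building a combined matrix of this kind amounts to concatenating the morphological and molecular partitions and filling the DNA blocks of fossils with missing-data symbols. The sketch below illustrates that bookkeeping with toy taxon rows and invented character strings; the real matrix is the one provided as Dataset S1.

```python
# Toy partitions: morphology scored for all taxa, DNA available only for extant taxa.
# Genus names follow the study, but the character strings here are invented.
morphology = {
    "Cavia":      "011012",
    "Dolichotis": "001201",
    "Eocardia":   "1?1002",   # fossil taxon, no DNA available
}
genes = {
    "Cavia":      "ACGTACGA",
    "Dolichotis": "ACGAACGA",
}

def combine(morphology, genes, dna_length):
    """Concatenate partitions, padding taxa without sequences with '?' (missing)."""
    combined = {}
    for taxon, morpho_row in morphology.items():
        dna_row = genes.get(taxon, "?" * dna_length)
        combined[taxon] = morpho_row + dna_row
    return combined

for taxon, row in combine(morphology, genes, dna_length=8).items():
    print(f"{taxon:<12}{row}")
```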
Sequences of two of the outgroup taxa (Proechimys and Dasyprocta) have been assembled from two different species of each genus (see Document S2). These sequences have been successfully used by previous authors to resolve relationships of caviomorph and/or cavioid rodents The DNA sequences of extant caviid species were gathered from GenBank for two nuclear and two mitochondrial genes: exon 10 of growth hormone receptor (http://datadryad.org/ (doi:10.5061/dryad.v5p71).The phylogenetic dataset consisting of the 96 morphological characters and the 4014 characters of the four genes is available as Dataset S1 and also at DataDryad 12s, cyb, Ghr, and Tth) were compiled from several previous analyses (see Document S2), and were aligned using CLUSTAL X 12s gene, 1140 bp for cyb gene, 1099 bp for Tth gene, and 814 bp for Ghr gene.The DNA sequences of each of the four genes , scoring fossil taxa with missing entries for the DNA partitions. This dataset contained a total of 40 taxa and a total of 4110 characters. An equally weighted parsimony analysis was conducted treating gaps as missing data in TNT 1.1 The combined dataset of the 96 morphological characters was concatenated with the DNA sequences of the four genes were input as separate partitions for the Bayesian analyses, model selection was performed using AIC (Akaike Information Criterion) cyb that used GTR+I+\u0393), a linked clock model , and tree priors assuming a Yule process. Four independent MCMC runs of 10,000,000 generations (sampling every 1000 generations) were run independently for each of the 30 analyses we conducted using different calibration constraints (see below). Results of the four independent MCMC runs were integrated and summarized for checking convergence using the BEAST v. 1.6 package Bayesian analyses were conducted on the molecular data in BEAST v. 1.6 Bayesian approaches to molecular clock estimates, as implemented in BEAST v. 1.6, allows using different prior probability distributions to calibrate selected node ages . As noted by several authors in recent years Prior distributions of the ages of these four nodes were defined based on the available chronostratigraphic information of the fossil record, considering the phylogenetic placement of fossils in the phylogenetic analysis see and the Normal distributions were centered on midpoint age of the period of time to which the fossil-bearing formation has been referred. The standard deviation was set so that the 95% probability distribution reached the upper and lower bound of the age of the lithostratigrapic unit , and including in the 95% prior probability distribution the age of the most recent sediments in which representatives of the node being calibrated are absent, but numerous remains of its stem-group or other caviid lineages are known . This distribution largely ignores the second source of uncertainty and places a strong belief in that the dasyproctid from Tinguiririca is actually very close (in time) to the divergence time of Cavioidea. The second approach uses a gamma distribution, with a hard minimum bound at 31.5 Ma and its long tail extends the 95% probability density back to 45 Ma, representing the upper bound of the available dates of oldest record of caviomorphs from South America Andemys termasiAndemys termasi to calibrate Cavioidea.Finally, given the scarcity of the available material to constrain the age of this node and the lack of an explicit phylogenetic analysis including s.s. plus the lineage of the family Cuniculidae. 
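The two kinds of calibration priors described here, a normal distribution centred on the midpoint of the fossil-bearing unit's age range and a gamma distribution with a hard minimum bound and a long tail, can be written down explicitly. The SciPy sketch below shows one way to parameterize them and check where their 95% limits fall; the hard minimum (31.5 Ma) and soft maximum (45 Ma) follow the Cavioidea calibration quoted in the text, while the normal bounds and the gamma shape are illustrative assumptions rather than the exact values used in BEAST.

```python
from scipy import stats
from scipy.optimize import brentq

# Approach 1: normal prior centred on the midpoint of the unit's age range, with
# the SD chosen so that the central 95% spans the unit's upper and lower bounds.
lower, upper = 31.5, 37.5                 # illustrative age bounds (Ma)
mu = (lower + upper) / 2.0
sd = (upper - lower) / (2 * 1.96)         # 95% of a normal lies within ~1.96 SD
normal_prior = stats.norm(loc=mu, scale=sd)
print("normal 95% interval:", normal_prior.interval(0.95))

# Approach 2: gamma prior with a hard minimum bound (offset) and a long tail whose
# 95% quantile reaches an older soft maximum.
hard_min, soft_max, shape = 31.5, 45.0, 2.0   # shape is an assumed value

def tail_error(scale):
    return stats.gamma(a=shape, scale=scale, loc=hard_min).ppf(0.95) - soft_max

scale = brentq(tail_error, 1e-3, 50.0)        # solve numerically for the scale
gamma_prior = stats.gamma(a=shape, scale=scale, loc=hard_min)
print("gamma 95% quantile:", gamma_prior.ppf(0.95))
```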
The fossil record of Cuniculidae is extremely poor and only starts in the Pleistocene s.s., in contrast, has an extremely rich fossil record s.s. are Asteromys punctus and Chubutomys simpsoni, both known from few specimens found in the late Oligocene beds of Patagonia (Deseadan SALMA) of the Sarmiento Formation s.s. is strongly supported in by the morphological data of the phylogenetic analysis presented here and bootstrap and jackknife frequencies above 96% , ignoring the second source of uncertainty. The second approach uses a gamma distribution, which puts a hard minimum bound at 24.5 Ma and its long tail extends the 95% probability density back in time to the age of the youngest rodent assemblage that lacks forms of Cavioidea Kerodon). The fossil record of Kerodon is only known from scarce material of the late Pleistocene of Brazil Hydrochoerus ; this ignores the phylogenetic uncertainty of the fragmentary remains with possible hydrochoerine affinities in older sediments . The second used a gamma distribution, with a hard minimum bound at 6.1 Ma and a long tail to extend the 95% probability density back to the age of the Laventan SALMA, which is the youngest rodent assemblage that is well sampled and lacking forms that could potentially belong to Node 3 . As in previous cases, we have explored both approaches but consider the second option better represents the uncertainties in the fossil record of early hydrochoerines.Microcavia or Cavia than to Galea. The earliest fossil members of Cavia come from the San Andr\u00e9s Formation . Forcing M. chapalmalensis to be positioned outside the Node 4 requires a minimum of five extra steps demonstrating the strong support for its inclusion within this node. The minimum age of the Chapadmalal Formation has been dated at 3.27 Ma using K-Ar radioisotopes MicrocaviaMicrocavia or have an even more basal position, but more complete remains are needed to place them confidently.analysis and comeDolicavia minuscula was recovered in a basal polytomy within the lineage leading to the genus Microcavia together with Paleocavia impar .Allocavia) as well as numerous forms of Dolichotinae and Hydrochoerinae Microcavia and Cavia (Node 4).The rodent fossil record of the older sediments of the Chasic\u00f3 Formation provides confident information to place a maximal bound for the origin of this node. The Chasicoan fossil record contains remains of caviine-like caviids . The second approach used a gamma distribution, with a hard minimum bound at 4 Ma and a long tail that extended the 95% probability density back to the Chasicoan, which is the youngest well-known rodent assemblage lacking taxa that belong to Node 4 . As in previous cases, we have explored both approaches but believe the second option better represents the uncertainties in the fossil record of early caviines.As with other nodes, we explored two different prior probability distributions for the age of Node 4 based on the position of fossil taxa in the most parsimonious trees of our analysis. The first used a normal distribution whose 95% probability density encompasses the range of radioisotopic ages that bracket the fossiliferous levels of the Monte Hermoso Formation (4\u20135.3 Ma); this ignores the uncertainty associated with the possible presence of Paleocavia; Montehermosan SALMA) is only poorly supported within Node 4, we have also tested alternative calibrations for this node. 
We have conducted exploratory runs of the Bayesian analysis calibrating this node with the age of Microcavia chapalmalensis , which is the only fossil of this clade that is robustly supported by the morphological data of the phylogenetic analysis (see above). The results of this analysis are largely similar in terms of the molecular clock estimates for Caviidae (mean age \u200a=\u200a17.5 Ma) and place the diversification of dolichotines and Galea within the Chasicoan SALMA. Therefore, the estimates of interest for our purposes do not seem to be sensitive to the alternative ages that can potentially be used for calibrating Node 4.Finally, given the oldest member of this clade Click here for additional data file.Document S2List of taxa used for the Phylogenetic Analysis and GenBank accession numbers.(DOC)Click here for additional data file.Document S3List of morphological characters used in the phylogenetic analysis.(DOC)Click here for additional data file.Dataset S1Combined data matrix containing molecular and morphological characters in Nexus format.(TNT)Click here for additional data file.Script S1Script for calibrating phylogenetic trees using the chronostratigraphic information for fossil taxa in TNT .(TXT)Click here for additional data file."} +{"text": "We investigate the influence of network structure on the dynamics of neuronal networks, with a focus on the emergence of synchronous oscillations. Network structure is specified using the framework of second order networks, a network model that captures second order statistics (correlations) among the connections between neurons. We demonstrate that the frequency of a chain motif in the network plays a crucial role in influencing network dynamics, not only modulating the emergence of synchrony but also possibly increasing the range of possible network behaviors."} +{"text": "The authors wish to retract the PLOS ONE article \u201dAntibiotics threaten wildlife: circulating quinolone residues and disease in avian scavengers\u201d.The Ethics Committee of the Spanish Superior Council of Scientific Research (CSIC) has carried out a formal investigation in relation to concerns about potential scientific misconduct by Jes\u00fas A. Lemus, one of the authors of this article. The investigation has questioned the validity of the laboratory analyses conducted by Dr. Lemus; concerns were also raised about the existence of co-author Javier Grande.Given the ephemeral nature of the material used , we are unable to repeat these analyses with the same samples in order to verify the presence of veterinary drugs and pathogens. We sincerely apologize for any inconvenience to the readership of PLOS ONE."} +{"text": "One of the major differences of the DG compared to other brain regions is the finding that the dentate gyrus generates new principal neurons that are continuously integrated into a fully functional neural circuit throughout life. Another distinguishing characteristic of the dentate network is that the majority of principal neurons are held under strong inhibition and rarely fire action potentials. These two findings raise the question why a predominantly silent network would need to continually incorporate more functional units. The sparse nature of the neural code in the DG is thought to be fundamental to dentate network function, yet the relationship between neurogenesis and low activity levels in the network remains largely unknown. 
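The second-order-network abstract above singles out the frequency of chain motifs (i→j→k connections) as a key factor shaping synchrony in the modelled neuronal network. For a directed adjacency matrix the raw count of such two-edge chains can be obtained with a single matrix product, as in the short NumPy sketch below; the random network used here is purely illustrative and is not taken from that study.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 0.05
A = (rng.random((n, n)) < p).astype(int)   # directed Erdos-Renyi adjacency matrix
np.fill_diagonal(A, 0)                     # no self-connections

A2 = A @ A
chains = A2.sum() - np.trace(A2)           # i -> j -> k paths with i != k
expected = p**2 * n * (n - 1) * (n - 2)    # expectation for independent connections
print(f"chain motifs: {chains}, expected under independence: {expected:.0f}")
```

An over- or under-representation of chains relative to the independent-connection expectation is exactly the kind of second order statistic that the framework is designed to capture.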
Clues to the functional role of new neurons come from inquiries at the cellular as well as the behavioral level. Few studies have bridged the gap between these levels of inquiry by considering the role of young neurons within the complex dentate network during distinct stages of memory processing. We will review and discuss from a network perspective, the functional role of immature neurons and how their unique cellular properties can modulate the dentate network in memory guided behaviors.The dentate gyru The dentate gyrus (DG) hippocampal region is one of the most plastic regions in the mammalian brain, exemplified by its ability to generate adult-born principal neurons that integrate into the pre-existing network. Great effort has been made to understand the process and regulation of neurogenesis, and recently several groups have sought to determine the functional role of adult-born neurons in the DG. We postulate that in order to fully understand the functional role of adult-born neurons in the DG it will be necessary to consider the complexity of the local neural-network into which they integrate, focusing on network level mechanisms of DG computations and studying the contribution of adult-born neurons to those computations.We will focus on one aspect of the DG that is critical to several theories of the role of the DG in memory processing: the observation that activity levels in the DG network are sparse. Both the proportion of active neurons and the action potential rates of active neurons are relatively low compared with other brain regions. Intuitively the observation of low activity levels introduces a puzzle; why would such a silent network require the constant addition of adult-born neurons? In other words, what is the relationship between adult-born neurons in the DG and the sparse encoding scheme implemented by the network? We will discuss two possibilities in the context of recent findings in the field, one, that adult-born neurons are themselves the small proportion of active cells in the DG at any given time and are thus \u201ccarrying the message,\u201d or two, that adult-born neurons impose low activity levels in the DG by recruiting local inhibitory networks which act to suppress activity in mature DG granule cells, allowing a few to fire at any given time, thus \u201cdictating the tone.\u201d To gain insight into these possibilities we will review the unique properties of adult-born neurons in the context of the complexity of the greater DG network, focusing on linking proposed behavioral roles for adult-born neurons with long-standing theories of DG network coding. Throughout, we will highlight future experiments that could be done to properly bridge the gap between function and mechanism.Classically, the DG is thought of as the first processing station of the hippocampal formation, comprising the first synapse of the \u201ctri-synaptic pathway.\u201d In simplified circuit diagrams, signals propagate from associative cortices, the lateral and medial entorhinal cortices, to the granule cells of the DG. From there signals are sent to downstream area CA3 and from CA3 to CA1. 
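The sparse-coding puzzle laid out above can be made concrete with a toy rate model: a population of granule-cell-like units driven by random input, with a global feedback inhibitory signal proportional to the population activity. The NumPy sketch below is not a biophysical model of the dentate gyrus; it simply illustrates how recruiting feedback inhibition keeps the fraction of active units low, and reports the Treves–Rolls population-sparseness index as one common (assumed, not necessarily the cited studies') way of quantifying sparseness.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_patterns = 1000, 200
drive = rng.gamma(shape=2.0, scale=1.0, size=(n_patterns, n_cells))  # cortical-like input

def dentate_like_response(drive, inhibition_gain, threshold=1.0):
    """Threshold-linear units with one relaxation step of global feedback inhibition."""
    rates = np.zeros_like(drive)
    for t, x in enumerate(drive):
        r = np.maximum(x - threshold, 0.0)                       # uninhibited response
        r = np.maximum(x - threshold - inhibition_gain * r.mean(), 0.0)
        rates[t] = r
    return rates

def treves_rolls_sparseness(r):
    """~1 = fully distributed activity, near 0 = activity confined to few cells."""
    return (r.mean(axis=1) ** 2 / ((r ** 2).mean(axis=1) + 1e-12)).mean()

for gain in (0.0, 2.0, 5.0):
    r = dentate_like_response(drive, inhibition_gain=gain)
    active_fraction = (r > 0).mean()
    print(f"gain={gain:>3}: active fraction={active_fraction:.2f}, "
          f"sparseness={treves_rolls_sparseness(r):.2f}")
```

Increasing the inhibitory gain drives both the active fraction and the sparseness index down, which is the "dictating the tone" scenario in its simplest possible form.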
In reality, signals do not necessarily propagate in one direction along the tri-synaptic pathway, but instead ping-pong within sub-regions through associative pathways in CA3, however, they still appear structurally immature because they have fewer synaptic vesicles and active zones, and contact fewer CA3 spines compared to mature DGCs (Faulkner et al., The DG recruitment of GABAergic interneurons in the CA3 field may be a crucial component to proper memory encoding. A recent study has demonstrated that, during learning, structural changes of synapses from DGCs to CA3 interneurons determines memory precision (Ruediger et al., An important role for immature DGCs as the active population of the DG network is also suggested by one recent study that has silenced the activity of all DGCs except immature DGCs and shown that dentate dependent memory is facilitated. Nakashiba et al. , using gThe most critical component that determines the sparseness of the DG network is the level of inhibition. DGCs modulate inhibition in the DG network through direct feedback and lateral inhibition (Freund and Buzsaki, in vitro and in awake-behaving animals.Even without the characterization of the functional connectivity with various targets of the local network, many groups have theorized that the role of adult-born neurons is to modulate the neuronal activity of the larger population of mature DGCs (Ming and Song, Neural synchrony is thought to be important for working memory, in which recently acquired information is held \u201con-line\u201d for sustained periods of time (Durstewitz et al., If immature DGCs were recruiting feedback and lateral inhibitory circuits in the DG, we would expect that blocking neurogenesis may release inhibition and result in increased numbers of active DGCs. Burghardt et al. found a Currently in the field of neurogenesis, there is ample data describing the cellular properties of adult-born neurons and recently there has been a fast-paced accumulation of data describing the functional role of adult-born neurons in memory behaviors. The gap in our knowledge is studies that integrate these two lines of study and that illuminate the functional role of adult-born DGCs at the network level. In this review we have focused on two possible roles that active adult-born DGCs might play in DG network computations, and we specifically considered the relationship of neurogenesis to sparse network coding. Immature DGCs could carry the message directly to the downstream CA3 region by being the sparse active members. Alternatively, they could impose the tone in the DG network through interactions with interneurons in the local circuit that mediate the maintenance of the sparse coding scheme and the selection of the appropriate DGCs, immature and mature, to carry the message (Sahay et al., Neurogenesis is an extraordinary plastic phenomenon that offers adaptive advantages to the DG due to the fact that it can be modulated by behavior and experience. Changes to the local dentate network as a result of experience can modulate the generation, survival, rate of maturation, and integration of adult-born DGCs (Piatti et al., The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Cholesterol granulomas are common in the mastoid antrum and air cells of the temporal bone. 
In the paranasal sinuses, especially in the frontal sinus, they have occasionally been mentioned in the literature. The pathogenesis is unknown, but the majority of the authors support the concept of airway obstruction in the cells well pneumatised of temporal bone and paranasal sinuses. The authors report a case of cholesterol granuloma of the frontal sinus treated with radical surgical techniques, and they also recommend an endoscopic approach to frontal sinus to restore or enlarge the nose-frontal canal and promote drainage and ventilation of the frontal sinus. The cholesterol granuloma is a histological entity consisting of granulation tissue in which large quantity of cholesterol crystals provoke foreign body giant cell formation \u20136. The pCholesterol granuloma is common in the mastoid antrum and air cells of the temporal bone . It has We report a case of cholesterol granuloma of the frontal sinus. The tumor is treated with radical surgical techniques, but we also recommend an endoscopic approach to frontal sinus to restore or enlarge the nose-frontal canal and promote drainage and ventilation of the frontal sinus. An 82-year-old male was referred to our attention with a two-year history of gradually enlarging swelling of the left brow. On examination, the swelling appeared soft and floating, 3-4\u2009cm large, painless, and was not associated with diplopia and any ocular symptoms. On additional detailed anamnesis there was history of rhinitis with nasal obstruction and nasal discharge; there was no story of trauma. Nasal endoscopy demonstrated bilateral enlarged inferior turbinates.Computed Tomography (CT scan) of the orbits showed complete opacity of the left frontal sinus and partial opacification of contralateral frontal sinus and maxillary and ethmoidal sinuses. The opacification of the left frontal sinus appeared to be due to dense material without contrast enhancement which extended into the orbit through anterolateral (20 \u00d7 30\u2009mm) and inferior (13\u2009mm) breach of the sinus . ThroughClassically, the cholesterol granuloma is found in the petrous apex and other pneumatised areas of the temporal bone and paranasal sinuses. The pathogenesis is unknown, but many authors suggest that the key factors are prolonged inflammation and obstruction of a bony cavity that is normally aerated. Leon et al. suggested that the increased intrasinus pressure due to drainage obstruction may affect the venous and lymphatic drainage from the sinus cavity and cause the rupture of blood vessels and hemorrhage . In thisCholesterol granuloma of the frontal sinus is uncommon. Our MIDLINE literature search on \u201ccholesterol granuloma of the frontal sinus\u201d has reflected the rarity of this condition. Butler and Grossenbacher and HellThe case reported by us was treated with radical surgical techniques and with an endoscopic approach to frontal sinus to restore or enlarge the nose-frontal canal and promote drainage and ventilation of the frontal sinus. 12\u201318-month followup shows no clinical signs of recurrence ."} +{"text": "The genetic information in eukaryotic cells is organized in a specific structure called chromatin. The basic unit of chromatin is the nucleosome, which consist of four histone proteins and ~147 bp of DNA . Experim"} +{"text": "Metabolomics is a part of systems biology dealing with the determination of qualitative and quantitative profile of low molecular weight compounds (metabolites) present in body fluids and tissues of living organisms. 
Metabolic composition is strongly dependent on the state of homeostasis and any deregulation should affect it. For this reason, there is now increased interest in metabolomics as a potential tool to support cancer research. At the same time the analysis of metabolic pathways involved in the process of carcinogenesis provides the possibility of a more complete understanding of the mechanisms that are critical for tumour biology.In this study, 1H NMR measurements were performed for thyroid tumour tissue and healthy tissue homogenates and analyzed by chemometric manner. Multivariate analysis of the data using the PCA, PLS-DA and OPLS-DA methods allowed a precise separation from normal thyroid tissue of all tumours originating in both benign and malignant lesions. In addition, classification of nodular goiter, follicular adenoma and malignant tumours was possible with comparable efficacy."} +{"text": "The introduction of HAART has considerably improved life expectancy and quality of life of people living with HIV/AIDS (PLWHA).If in the first years HIV infection was considered an absolute contraindication to implantology, it is now possible to employ implants positioning which allows a more complete functional and aesthetic rehabilitation of the oral cavity also in these patients; howewer there is still fear and prejudice in using this technique in PLWHA.We present the experience of Department of Dentistry of Luigi Sacco Hospital of Milan in over twenty years of implant surgery to evaluate the possibility of using implantology techniques in PLWHA without exposing them to greater risks of developing infections both during and after surgical intervention.The study considers a consecutive series of 21 HIV-positive patients and 91 HIV-negative patients as control group , treated in the dental clinic of the Luigi SaccoHospitalfrom 1998 to december 2011.The pre-surgery phase included a collection of anamnesis, clinical examination data, diagnostic radiology evaluation and assessment of blood tests.We used the surgical technique of submerged fixture implants and mobile and fixed prostheses for the final prosthetic rehabilitation.Several failures occurred in both groups and were attributable to local factors related to receiving bone or to exceedingly invasive surgical techniques.We detected no lesions in the oral cavity in these subjects concomitant with the plants loss nor changes in their overall health conditions.Patients enrolled in this study presented both functional and aesthetic dental problems.The comparison between the success/failure rates in the two groups shows that implant surgery can be employed without risk for the patient and with success rate comparable to the general population, nevertheless it is important to assess the level of immune competence of the patient.Finally, the prosthetic rehabilitation of the oral cavity, in addition to the clear local benefit, has an important psychological effect on patients and on your quality of life."} +{"text": "Striatum is the critical structure in goal directed behavior. Striatal processing of the input to striatal structures under the presence of dopamine yields the primary output of the action selection loop in the basal ganglia. This primary output travels through the other structures active for the action selection loop in the basal ganglia and produces the stimulation to the motor circuits of the brain via thalamus. It is not wrong to say that action selection starts and ends in the striatum. 
The recent research shows that both the ventral (nucleus accumbens) and the dorsal (putamen) striatum take part in decision making processes . MidbraiA computational model is developed to demonstrate that through the striato-nigro-striatal pathway, limbic regions have impact on the motor regions of the basal ganglia and also the integration of the outputs of dorsal and ventral striatal structures produces the resulting output of the basal ganglia action selection loop. This model takes into account the physiological properties of neurons in each basal ganglia structure and the effects of the ion channels on the cell membrane to state a more realistic processing unit. Hodgkin-Huxley type conductance based neuron models are used in order to demonstrate the effects of ion channel currents on the functioning of a neuron. Striatal neurons are modeled as two groups which have D1 or D2 type dopamine receptors. Dopamine input from midbrain neurons acts on the ion channels and stimulates the D1 neurons while inhibiting the D2 neurons. Thus a computational model is obtained which can produce different types of action potentials. This computational model was partially presented at and focu"} +{"text": "Smilisca fodiens) across its geographical distribution. We employed Ecological Niche Modeling (ENM) to perform a monthly analysis of spatial variation of suitable climatic conditions , and then evaluated the geographical correspondence of monthly projections with the occurrence data per month. We found that the species activity, based on the species' occurrence data, corresponds with the latitudinal variation of suitable climatic conditions. Due to the behavioral response of this fossorial frog to seasonal climate variation, we suggest that precipitation and temperature have played a major role in the definition of geographical and temporal distribution patterns, as well as in shaping behavioral adaptations to local climatic conditions. This highlights the influence of macroclimate on shaping activity patterns and the important role of fossorials habits to meet the environmental requirements necessary for survival.The importance of climatic conditions in shaping the geographic distribution of amphibian species is mainly associated to their high sensitivity to environmental conditions. How they cope with climate gradients through behavioral adaptations throughout their distribution is an important issue due to the ecological and evolutionary implications for population viability. Given their low dispersal abilities, the response to seasonal climate changes may not be migration, but behavioral and physiological adaptations. Here we tested whether shifts in climatic seasonality can predict the temporal variation of surface activity of the fossorial Lowland Burrowing Treefrog ( Ecological niches can be defined as the set of conditions within which a species can maintain populations without immigrational input But what about organisms with a low dispersal ability like amphibians? Could it be signs of behavioral adaptations due to the effects of seasonal climate within the distributional range of these ectothermic organisms? 
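The basal ganglia model summarized above is built from Hodgkin–Huxley-type conductance-based neurons. Below is a minimal Hodgkin–Huxley neuron with the classic squid-axon parameters, integrated with the forward Euler method; it is a generic sketch of the formalism only, not the striatal D1/D2 model the authors describe.

```python
import numpy as np

# Classic Hodgkin-Huxley parameters (squid axon), not striatal-neuron values.
C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3        # uF/cm^2, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.387              # mV

def a_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def b_m(V): return 4.0 * np.exp(-(V + 65.0) / 18.0)
def a_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def b_h(V): return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def a_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def b_n(V): return 0.125 * np.exp(-(V + 65.0) / 80.0)

dt, t_max, I_ext = 0.01, 200.0, 10.0               # ms, ms, uA/cm^2
V, m, h, n = -65.0, 0.05, 0.6, 0.32
spikes, above = 0, False
for _ in range(int(t_max / dt)):
    I_Na = g_Na * m**3 * h * (V - E_Na)
    I_K = g_K * n**4 * (V - E_K)
    I_L = g_L * (V - E_L)
    V += dt * (I_ext - I_Na - I_K - I_L) / C_m     # forward Euler step
    m += dt * (a_m(V) * (1.0 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1.0 - h) - b_h(V) * h)
    n += dt * (a_n(V) * (1.0 - n) - b_n(V) * n)
    if V > 0.0 and not above:                      # crude upward-crossing spike count
        spikes += 1
    above = V > 0.0

print(f"{spikes} spikes in {t_max:.0f} ms at I_ext = {I_ext} uA/cm^2")
```

A striatal version of this formalism would swap in the appropriate channel set and let dopamine modulate specific conductances differently for D1- and D2-expressing neurons.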
Despite the high sensitivity of amphibians to environmental variables Smilisca fodiens) has the northernmost distribution of the family; its current distribution encompasses a significant climatic gradient, from the desert scrub in south-central Arizona Smilisca fodiens spends a period of the year inside underground burrows, until the climatic conditions trigger a brief and explosive period of surface activity Smilisca fodiens to reach such a high latitudinal range, far above the northern limit of the remaining hylids. We suggest that evolving a fossorial behavior allows this species to inhabit temperate regions while retaining its climatic niche.Among amphibians, the Hylid frog family is widely distributed around the world Smilisca fodiens across its geographical distribution. We model the ecological niche of the species based on the month with the most suitable conditions for the species activity (i.e. July) and project it on the climatic conditions of the remaining months of the activity period (defined by the occurrence data). We evaluate the geographical correspondence of monthly projections with the occurrence data per month and discuss how a behavioral trait associated to fossorial activity can favor the conservation of the climatic requirements of a species with low dispersal ability inhabiting a marked climatic gradient.In this study, using coarse-scale ecological context of species niches S. fodiens is fossorial, we assumed that the records ensured that the collected organisms were found on the surface in suitable environmental conditions. We used records with geographic information (latitude-longitude); and those with no geospatial information were georeferenced using BioGeomancer (http://www.biogeomancer.org) and the Georeferencing Calculator (http://manisnet.org/gci2.html). Each locality record was verified in ArcView 3.2 We compiled locality occurrence records from three sources: biological collections ; published literature http://www.worldclim.org/) and are the result of the interpolation of monthly averages from weather stations throughout the world, from 1950 to 2000 http://eros.usgs.gov/). Resolution of all layers was 30 arc-seconds (\u223c1 km2). The selection of the climatic variables was based on their relevance for amphibians We employed a set of five variables for each of the analyzed months (four climatic and one topographic). The layers of maximum and minimum monthly temperature (Tmin and Tmax), monthly mean temperature (Tmean) and monthly total precipitation (Prec) were obtained from the WorldClim project , assuming that this month meets the climatic conditions that are the most suitable for species activity. Of the 95 occurrence data points used for the model, only four were outside the temporal interval (1950\u20132000) of the climatic layers. Thus, we expect no effects due to the climate variation outside these five-decades that compromise the reliability of the potential niche obtained for July. Finally, the climatic niche was then projected on the climatic conditions of the remaining months of the period of activity: May, June, August, September, October, November and December (months in which we found at least one occurrence data point).n simulations, where the result is an index of how favorable the climatic conditions are to species requirements We employed two automatic learning algorithms: the Genetic Algorithm for Rule-set Prediction (GARP) and the maximum entropy approach (Maxent). 
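The niche modelling itself was performed with GARP and Maxent, as detailed in the following paragraphs. As a rough, library-free illustration of the underlying idea, the sketch below scores grid cells with a Bioclim-style climatic envelope built from the occurrence points (a cell is suitable if every variable falls within the occurrences' 5th–95th percentile range). It is a simplified stand-in, not a re-implementation of either algorithm, and all values are simulated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated climate at occurrence points and at grid cells to be scored.
# Columns: Tmin, Tmax, Tmean, Prec, elevation (the five variables used per month).
occurrences = rng.normal(loc=[12, 34, 24, 80, 300], scale=[2, 2, 2, 25, 150],
                         size=(95, 5))
grid_cells = rng.normal(loc=[10, 33, 22, 60, 400], scale=[5, 4, 4, 60, 400],
                        size=(5000, 5))

# Bioclim-style envelope: suitable if every variable lies within the 5th-95th
# percentile range observed at the occurrence localities.
lo = np.percentile(occurrences, 5, axis=0)
hi = np.percentile(occurrences, 95, axis=0)
suitable = np.all((grid_cells >= lo) & (grid_cells <= hi), axis=1)

print(f"predicted suitable area: {100 * suitable.mean():.1f}% of cells")
```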
For GARP models we used a desktop version which operates under a stochastic process where classifiers compete to select solutions that identify the presences and the pseudoabsences \u22125).Maxent fits a probability distribution for occurrence of the species to the set of pixels across the study region subject to the appropriate constraints. In ecological niche modeling these constraints are the expected value of each feature, which should match its empirical average. We assigned 80% of the occurrence data points to formulate the model parameters. For Maxent models we used the default parameters ratios The distributions obtained by ENM generally over predict because the model does not consider the factors that may have limited biologically, historically and geographically the occupation of such niches kappa2 function in the irr package p-value less than 0.05. The amount of predicted area was estimated based on the thresholds of agreement described above and presented as percentages of number of pixels.The temporal variation of suitable climatic conditions throughout the area of distribution was evaluated based on both the degree of correspondence between the monthly projections and occurrence data, and the estimation of the percentage of predicted area for each month. We determined the amount of monthly occurrence data that coincided with the monthly projections and estimated the kappa coefficient Spatial Analyst extension of ArcView 3.2. Finally, we compared the monthly variation in the precipitation and minimum temperature ranges performing a Mann\u2013Whitney U-test with the wilcox.test function in the stats package To describe the environmental space for species activity, we analyzed the ranges of precipitation and minimum temperature that are suitable for the species activity based on the climatic information of the occurrence data. Because the occurrence data points of July are widely and evenly spread along the distribution area of the species, we used them to demonstrate that the climatic conditions suitable for activity spatially vary throughout the year. Thus we analyzed the climate ranges of July points both for the dry season, when activity is not reported (January to April), and for the season in which the period of activity is favored (May to December), and then were compared with the climate ranges of remaining months of the activity period. For this we used the We found that the geographic distribution of suitable climatic conditions for the activity of the species varies temporally . We obseThe occurrence data for each month of the activity period (i.e. May to December) showed a high geographical correspondence with the variation of suitable conditions throughout the study region did not match that described for the activity period (May to December) . FinallySmilisca fodiens has a range of climatic conditions wider that the species can tolerate, so it can be suggested that the species evolved behavioral We found that the activity of the species outside burrows is predicted by climatic conditions and thatSeveral local-scale studies in fossorial anurans have shown the influence of microclimatic conditions over the explosive surface activity of individuals Smilisca fodiens and the annual latitudinal variation of suitable conditions could reflect a behavioral adaptation in order to retain its physiological tolerance ranges. 
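Two of the statistics described in the methods above, the kappa coefficient of agreement between monthly projections and occurrence data and the Mann–Whitney U-test on monthly climate ranges, were computed by the authors in R (kappa2 in irr and wilcox.test in stats). The sketch below shows equivalent calls in Python with scikit-learn and SciPy on made-up vectors, purely to make the calculations explicit.

```python
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)

# Agreement between a binary monthly projection and binary occurrence presence
# across a set of evaluation cells (both vectors simulated here).
projected = rng.integers(0, 2, size=200)
observed = np.where(rng.random(200) < 0.8, projected, 1 - projected)  # ~80% agreement
print("kappa:", cohen_kappa_score(observed, projected))

# Comparison of monthly precipitation values at occurrence points for two months.
prec_july = rng.gamma(shape=4.0, scale=25.0, size=40)
prec_november = rng.gamma(shape=2.0, scale=15.0, size=25)
u, p = mannwhitneyu(prec_july, prec_november, alternative="two-sided")
print(f"Mann-Whitney U = {u:.1f}, p = {p:.4f}")
```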
For instance, amphibians from xeric environments rely on rainy conditions to avoid desiccation due to the high permeability of the skin The fossorial behavior is a common strategy among temperate anurans Smilisca fodiens. Such changes appear to be a strong selective force in species adaptation, as it has been shown when the response rate of some traits increase its selective values at the optimums of the environmental tolerance range of species, particularly, when there is an increase in the spatial and temporal heterogeneity of environment between generations Local-scale studies of fossorial anurans Smilisca group Smilisca fodiens will face a reduction in its already restricted activity period for feeding and breeding, and that metabolic responses during dormancy in cold periods can be compromised, affecting the survival of the species. In this context, it will be particularly important that future studies address the possible impact of the shifts in climatic patterns over fossorial anurans.Based on the importance of precipitation on the activity of this species, the increase in aridity during the Pleistocene along its geographical range, could have favored its differentiation within the The important role that climatic conditions have on the distribution patterns of anurans is well known, but the role of spatial and temporal climatic variation in the activity of fossorial anuran species is poorly understood. Despite this analysis focusing on a single species, the life history traits and the evolutionary history of the Lowland Burrowing Treefrog allows us to project our findings to other ectothermic organisms with low dispersion ability. We expect that approaches based on the analyses of ecological niches can contribute and enhance the understanding of current patterns and its evolutionary processes. All of these as part of a more general theory of seasonality of the ecological niches.Acknowledgments S1Collections and institutions included in HerpNET and GBIF that provided historical occurrence.(DOCX)Click here for additional data file."} +{"text": "Anatomical comparisons of the ear region of baleen whales (Mysticeti) are provided through detailed osteological descriptions and high-resolution photographs of the petrotympanic complex of all extant species of mysticete cetaceans. Salient morphological features are illustrated and identified, including overall shape of the bulla, size of the conical process of the bulla, morphology of the promontorium, and the size and shape of the anterior process of the petrosal. We place our comparative osteological observations into a phylogenetic context in order to initiate an exploration into petrotympanic evolution within Mysticeti.Balaenoptera musculus; confluence of fenestra cochleae and perilymphatic foramen in Eschrichtius robustus), and several mysticete clades are united by derived characteristics. Balaenids and neobalaenids share derived features of the bulla, such as a rhomboid shape and a reduced anterior lobe (swelling) in ventral aspect, and eschrichtiids share derived morphologies of the petrosal with balaenopterids, including loss of a medial promontory groove and dorsomedial elongation of the promontorium. 
Monophyly of Balaenoidea and Balaenopteroidea was recovered in phylogenetic analyses utilizing data exclusively from the petrotympanic complex.The morphology of the petrotympanic complex is diagnostic for individual species of baleen whale (e.g., sigmoid and conical processes positioned at midline of bulla in This study fills a major gap in our knowledge of the complex structures of the mysticete petrotympanic complex, which is an important anatomical region for the interpretation of the evolutionary history of mammals. In addition, we introduce a novel body of phylogenetically informative characters from the ear region of mysticetes. Our detailed anatomical descriptions, illustrations, and comparisons provide valuable data for current and future studies on the phylogenetic relationships, evolution, and auditory physiology of mysticetes and other cetaceans throughout Earth's history. The mammalian petrotympanic complex houses the organs of hearing and balance and in cetaceans is partially or completely detached from the rest of the basicranium Unlike the odontocete petrotympanic complex, the petrotympanic bones in mysticetes are firmly attached to one another via two bony connections: the anterior and posterior pedicles . DevelopNot surprisingly, the mysticete petrotympanic bones comprise a morphologically and functionally complex structure that preserves a wealth of anatomical information useful in examining patterns of adaptation and macroevolution. As recently discussed by O'Leary Other important works utilizing a comparative anatomical approach to investigate cetacean hearing adaptations and evolution include research on the tympanic bulla of stem cetaceans A cursory review of the literature reveals a somewhat confusing array of anatomical terms for the various salient morphological parts and regions of the mysticete petrotympanic complex. Although several recent reports With over 35 years elapsing since the seminal study by Kasuya All of the specimens that were examined are housed in curated museum collections . The speAs with terrestrial mammals, the mysticete ear is divided into three major anatomical divisions: the outer ear from the external environment to the tympanic membrane, the middle ear from the tympanic membrane to the ventrolateral (tympanic) surface of the petrosal, and the inner ear including the organs of balance and hearing within the petrosal bone itself. Although the present study is focused on the morphology of the middle ear and its structures in mysticetes, we also provide an overview of the outer and inner ears as an introduction to the entire system.Unlike the outer ear of terrestrial mammals, all extant cetaceans lack an external pinna surrounding the external auditory meatus, which connects the tympanic membrane to the surrounding environment. At the proximal end of the mysticete external auditory meatus is a dense waxy plug that nearly fills the entire lumen of the meatus . The proIn vivo this region of the tympanic cavity is occupied by a vascular structure called the corpus cavernosum tympanicum that may expand to fill the bullar cavity during diving Within the middle ear, the bullar portion of the tympanic cavity is formed by the deeply excavated region between the involucrum and lateral wall of the tympanic bulla . 
NumerouCaperea); the stapedial fossa is often large and hemispherical; the malleolar fossa is indistinct in contrast to well formed fossa in odontocetes ; the ossicular chain of stapes, incus, and malleus is constructed as in odontocetes with the malleus fused to the tympanic bulla via the processus gracilis.In mysticetes the promontorium of the petrosal is typically domed and not flattened as in odontocetes; the fenestra cochleae and fenestra vestibuli are well separated from each other as in odontocetes and unlike the condition in terrestrial artiodactyls where the two openings are more closely positioned; the secondary facial foramen and facial sulcus is generalized in its form and position rotates its head at slower velocities than the terrestrial artiodactyl Bos taurusThe anatomy of the mysticete inner ear as currently understood is summarized in several reports The endocranial surface of the mysticete petrotympanic complex is characterized by a number of important features including the rough and jagged surface texture of this region, the closely appressed endolymphatic and perilymphatic foramina, and variable development of the internal auditory meatus , 5. The Ontogeny provides another source of petrotympanic morphologic variation that can only briefly be described here. The tympanic bullae of neonate and yearling individuals typically possess smooth and rounded external surfaces in contrast to the roughened and sharply angled surfaces of adult individuals. This textural difference is especially prominent on the involucral and medial surfaces of the bulla. In the former the involucral surface is smooth and lacks the strong transverse creases of adults, and in the latter the main and involucral ridges are low and poorly defined in comparison to the much more prominent ridges of adults. The petrosals of neonate and yearling individuals typically possess short anterior and posterior processes relative to the adult condition of distinctly longer anterior and posterior processes. In addition, the endocranial surface of the pars cochlearis in neonates and juvenile individuals typically is low and extremely porous rather than extended dorsally into the cranial hiatus. Further, the internal auditory meatus of neonates and juvenile individuals is broadly subdivided into distinct foramina for CN VII and CN VIII by a prominent crista transversa. In adult individuals the internal auditory meatus generally consists of a single common opening, at the bottom of which lies the two foramina separated by a deeply recessed crista transversa .Tursiops truncatus). Although we largely accept their suggestions , and not a specific bone in many studies focused on mammals A structure on the lateral side of the petrosal referred to as the superior process in many cetacean studies is homologous to the tegmen tympani of terrestrial mammals A pair of openings that penetrate the promontorium, the fenestra cochleae and fenestra vestibuli, often are referred to as the fenestra rotunda (round window) and fenestra ovalis respectively based on the shapes of the structures . However, two openings that transmit branches of the trigeminal nerve, namely the foramen rotundum and the foramen ovale, are described in very similar terms. 
The similarity in these names can lead to confusion Lastly, many descriptions of the ear regions of whales describe a structure on the promontorium of the pars cochlearis that is identified in numerous studies as the \u2018caudal tympanic process\u2019 The following morphologic descriptions include combinations of apomorphic and pleisomorphic character states that together provide a useful characterization of the petrotympanic anatomy of individual taxa. For each taxon we describe the tympanic bulla first followed by a description of the petrosal. In addition the descriptions are constructed in a parallel format to facilitate anatomical comparisons. We have also included digitial images of tympanic bullae and petrosals for each taxon, incorporating four standard views for each element with salient features labeled. A taxonomic order overlies the description section and is based on the generally recognized clades within crown Mysticeti .Balaena mysticetus is rhomboid shaped with a distinct anteromedial corner. The medial half of the ventral surface is dorsoventrally compressed to form a longitudinal furrow, a feature also seen in species of Eubalaena. The anterior lobe (new term) of the bulla is distinctly smaller than the posterior lobe, the two regions being separated by a deep, obliquely directed lateral furrow. Again, species of Eubalaena share this feature. The main ridge . The hiatus Fallopii opens through the ventral surface of the promontorium medial to the juncture between the promontorium and the anterior process. The groove for the tensor tympani muscle is deeply recessed along this same juncture. The epitympanic recess is broad and smooth and lacks a clearly defined malleolar fossa. The posterior cochlear crest is short and thin with a pointed tip that only slightly extends over the ovoid, relatively shallow stapedial fossa. The stapedial fossa is separated from the facial nerve sulcus by a thin longitudinal ridge. The composite posterior process is relatively broad and short in comparison to the narrower and more elongated processes of balaenopterids. The facial nerve sulcus in Balaena is distinct and broadly open.In ventral view , the flaEubalaena the suprameatal region is elevated and continuous between the promontorium and tegmen tympani. The dorsolateral surface of the tegmen tympani is broadly rounded.In dorsal view the juncIn posterior view the fenestra cochleae is large and recessed into the promontorium with a narrow dorsomedially directed embayment extending towards the dorsomedial rim of the promontorium . The fenIn dorsomedial view all of tEubalaena species given our limited samples for each taxon and mixed semaphorants. We have described morphologic variation that may be species specific when more specimens are studied.We provide below a composite description of the three E. japonica are generally larger than other species . A relatively deep sulcus for CN VII extends laterally on the posteroventral side of the posterior process. The globular promontorium is relatively longer anteroposteriorly as in Eubalaena and unlike the shorter promontorium in Balaena. However, the transverse width of the promontorium more closely resembles Balaena than Eubalaena. The ventral surface of the promontorium is rounded and convex. As in balaenids there is a distinct, but irregular promontorial groove below the dorsomedial rim of the promontorium. In turn, the dorsomedial rim of the promontorium is marked by distinct columnar bony extensions. 
The tympanic opening for the facial nerve is not in its common and typical location adjacent to the fenestra vestibuli and at the head of the facial nerve sulcus, but instead is positioned well anteriorly near the anterior margin of the pars cochlearis and medial to the juncture between the promontorium and anterior process. The opening varies from an elongate slit with subtle sulci extending both anteriorly and posteriorly to a circular opening with more distinct anterior and posterior sulci . The posterior sulci likely transmitted the hyomandibular branch of CN VII, while the greater petrosal nerve traversed the anterior sulcus. Whether the tympanic opening is homologous with the hiatus Fallopii, or if the hiatus Fallopii is absent all together in Caperea, remains unclear at this time. The origin of the tensor tympani muscle has no bony landmarks . The epitympanic region is occupied by an elliptical fossa that may accommodate the malleus. The posterior cochlear crest is short but thick and forms only the extreme anterior portion of the floor of the large and broadly hemispherical stapedial fossa. The stapedial fossa is separated from the facial nerve sulcus by a low longitudinal ridge.In ventral view , the flaIn dorsal view the parsIn posterior view the feneIn dorsomedial view the fourB. bonaerensis, the tympanic bulla of B. acutorostrata is small external mastoid apex. The body of the posterior process is compressed anteroposteriorly and expanded dorsoventrally. A relatively shallow sulcus for CN VII extends laterally on the ventral side of the posterior process.In ventral view , the flachlearis . The venIn dorsal view the parsA much broader anterointernal sulcus is present on the anteromedial edge of the anterior process and extends to the apex of the process. When the petrosal is in place within the cranial hiatus, this sulcus forms a portion of the canal for transmission of the mandibular branch of CN V as it passes over the pterygoid fossa and descends towards the dentary.In posterior view the feneIn dorsomedial view the inteB. bonaerensis is morphologically very similar to that of B. acutorostrata, although slightly larger flange posterior to the fenestra cochleae. The dorsolateral surface of the posterior cochlear crest is thin and concave and forms the floor of the relatively large and hemispherical stapedial fossa. The composite posterior process is very long, especially in adults. The body of the posterior process is compressed anteroposteriorly and expanded dorsoventrally. A relatively deep sulcus for CN VII extends laterally onto the ventral side of the posterior process and in some specimens is nearly completely enclosed by a bony roof.In ventral view the flanchlearis . The proB. acutorostrata is not obvious on specimens of B. musculus. The anterointernal sulcus is absent from the anteromedial edge of the anterior process.In dorsal view the parsIn posterior view the feneen echelon arrangement separated by a thin, mediolaterally oriented bony septum. In this view the sharply truncated dorsomedial rim of the promontorium is clearly evident with a rim of cancellous bone marking the area where the dorsomedial bony extension of the pars cochlearis has been broken away.In dorsomedial view the inteB. omurai is intermediate in size the anterior process is broadly attached to the anterior margin of the promontorium with a convex dorsomedial rim. 
The medial margin of the anterior process is relatively smooth, but in some specimens possesses short, irregular, medially directed bony extensions at the point of contact between the promontorium and anterior process. This is similar to the condition in some specimens of B. musculus, although in the latter taxon the bony extensions generally are more elaborate. The anteroposterior length of the anterior process is greater than that of the pars cochlearis flange posterior to the fenestra cochleae. The dorsolateral surface of the posterior cochlear crest is thin and concave and forms the floor of the relatively large stapedial fossa. Unlike the condition in B. musculus the stapedial fossa is not excavated into the posterior cochlear crest and is separated from the secondary facial sulcus. The composite posterior process is very long, especially in adults. The body of the posterior process is compressed anteroposteriorly and expanded dorsoventrally. A relatively deep sulcus for CN VII extends laterally onto the ventral side of the posterior process and in some specimens is nearly completely enclosed ventrally by a bony floor.In ventral view the flanchlearis . The proIn dorsal view the parsIn posterior view the feneen echelon arrangement separated by a thin, mediolaterally oriented bony septum. In this view the sharply truncated dorsomedial rim of the promontorium is clearly evident with a rim of cancellous bone marking the area where the dorsomedial bony extension of the pars cochlearis has been broken away.In dorsomedial view the inteM. novaeangliae is intermediate in size and distinctly inflated as reflected in both the broadly convex ventral surface and the enlarged tympanic cavity but less so in others . Irregular bony projections occur on the anteromedial portion of the anterior process in more immature specimens that possess a slight anteromedial embayment between the anterior process and promontorium. A broad, anteroposteriorly oriented sulcus occurs in some specimens on the ventral surface of the anterior process between the groove for the tensor tympanii and the anterior bullar pedicle. The anteroposterior length of the anterior process is greater than that of the pars cochlearis as its sister taxon , but balaenopteroid relationships were unresolved. However, a monophyletic Balaenopteroidea was recovered as a polytomy . These results support the Balaenoidea plus Balaenopteroidea hypothesis C. marginata plus Balaenopteroidea hypothesis . We also provide a brief discussion concerning the lack of resolution within Balaenopteridae.Mammalodon and Eomysticetus and three unambiguous synapomorphies from the petrotympanic complex. Those characters include elongation of the main ridge to the anterior end of the bulla share the rhomboid shape of the tympanic bulla and orientation of the posterior process at a right angle to the long axis of the pars cochlearis . Neither of these character states were observed in any other extant mysticete that we examined for the currenty comparative study. The short anterior lobe and the weakly developed conical process of the tympanic bulla that characterize balaenids and neobalaenids likely are other synapomorphies for Balaenoidea, although we were unable to accurately measure these features for the outgroup taxa in order to polarize the two characters.Although most morphological studies recover balaenoid monophyly teroidea . BalaenoC. marginata with Balaenopteroidea was supported by six unambiguous synapomorphies. 
Of those six characters, one was taken from the ear region , and we included that character in our analysis (character 27 of our study) as well. However, we differ in our interpretation of pars cochlearis elongation in C. marginata (scored as absent here but as present in the previous study). The transverse elongation of the pars cochlearis that characterizes the petrosal of balaenopteroids is associated with a medial flaring of the promontorium is much more globular, and almost boxy (C. marginata (B. musculus (as opposed to the flattened promontorium of B. bonaerensis). It is unlikely that rescoring the character for C. marginata would overturn the Marx's hypothesis A monophyletic Balaenoidea is consistent with most previous morphological studies. In the morphological analysis disputing balaenoid monophyly proposed by Marx um e.g., . In the ost boxy and 9. Iarginata appears Monophyly of Balaenidae has been well supported by both molecular and morphological data en echelon arrangement of the perilymphatic and endolymphatic openings , and an acute angle between the posterior process and the flange of the ventrolateral tuberosity . It should be noted that several of these characters, such as shape of the apex of the anterior process and attachment of the process to the promontorium, vary within the genus Balaenoptera is less controversial than Balaenoidea, but the results of at least one morphological study enoptera .E. robustus and M. novaeangliae. In trees 1 and 2, these taxa form a clade nested among other balaenopteroid species . In contrast, M. novaeangliae was placed as the sister taxon to all species of Balaenoptera with E. robustus excluded from Balaenopteridae all together in tree 3 . If character 45 is excluded from the phylogenetic analyses, both M. novaeangliae and E. robustus fall outside a clade composed of all species of Balaenoptera with Balaenopteroidea, or place eschrichtiids (E. robustus) with Balaenoidea.It is clear from our analysis that the petrotympanic complex exhibits a phylogenetic signal, even if only at a gross level. The morphology of the auditory region alone can support monophyly of the two traditional lineages of Mysticeti, which are Balaenoidea and Balaenopteroidea, and which is a result that follows traditional interpretations of mysticete relationships while competing with more recent studies that either place neobalaenids Mammalodon colliveri (toothed mysticetes) and Eomysticetus whitmorei (edentulous mysticete) were chosen as outgroup taxa. Bootstrap analyses included 1000 pseudoreplicates.We identified 48 morphological characters within the petrotympanic complex and codeFigure S1Three most parsimonious trees recovered from phylogenetic analysis of 48 petrotympanic characters. 
Major difference between topologies lies with Eschrichtius robusts and Megaptera novaeangliae (in bold).(TIFF)Click here for additional data file.Figure S2Six most parsimonious trees recovered from phylogenetic analysis of petrotympanic characters excluding #45 .(TIFF)Click here for additional data file.Table S1List of extant mysticetes studied .(PDF)Click here for additional data file.Table S2Measurements (mm) of tympani bulla of balaenid and neobalaenid species.(PDF)Click here for additional data file.Table S3Measurements (mm) of tympanic bulla of balaenopterid and eschrichtiid species.(PDF)Click here for additional data file.Table S4Tympanic bulla anterior lobe measurements (mm) among mysticetes.(PDF)Click here for additional data file.Table S5Tympanic bulla conical process measurements (mm) among mysticetes.(PDF)Click here for additional data file.Table S6Petrosal measurements (mm) among mysticetes and reated taxa.(PDF)Click here for additional data file.Table S7Data matrix of petrotrympanic characters scored for extant mysticetes used in phylogeneti analyses. Matrix can be downloaded from project page associated with this manuscript at MorphoBank (www.morphobank.org).(PDF)Click here for additional data file.Text S1Dichotomous key for identifying extant species of mysticetes using the petrotympanic complex.(PDF)Click here for additional data file.Text S2Phylogenetic characters derived from the petrotympanic complex of extant mysticetes. Further information can be found on the project page associated with this study at MorphoBank (www.morphobank.org).(PDF)Click here for additional data file."} +{"text": "Among a general lack of awareness of scoliosis, the incidence of Adolescent Idiopathic Scoliosis (AIS) in the United Arab Emirates is not known although efforts are currently being made to remedy this. The Emirati people have a unique and well-structured culture with strong family relationships and might be considered a closed group. Consequently, there are some factors within this group that suggest that the incidence of AIS might be higher than in other parts of the world. In this study, the incidence of ligamentous laxity among young Emirati women was measured because this has been associated many times previously with AIS and casual observation has suggested that ligamentous laxity is prevalent among many young Emirati women.The degree of extension of the middle metacarpo-proximal-phalanx joint and thumb abduction were measured on both hands using standard, previously reported techniques of 100 randomly-selected, young Emirati women. The results were compared with similar published data from the United Kingdom.The results showed that there was a higher incidence of ligamentous laxity among the emirati population in both the degree of extension of the middle metacarpo-proximal-phalanx joint as well as thumb abduction when compared to the values from the UK.These results suggest that the incidence of AIS in the UAE (when determined) might be expected to be greater than in other parts of the world due to a higher incidence of ligamentous laxity among the susceptible population."} +{"text": "In particular, we compared the electric potential profile of the membranes of spinal ganglion and neuroblastoma cells, during the resting and action potential (AP) states. The spinal ganglion neurons represent healthy cells, while neuroblastomas denote tumorous neurons.Electrical signals underlie the propagation of information in the nervous system. 
It is known that neuronal cells can generate electric potentials by diffusing ions across the neuronal membrane. We have previously studied the effects of electric charges fixed onto the inner surface of the membrane, on the potential of the membrane surfaces of healthy and cancerous neuronal cells ,2. BasedTo analyze the electric potential profile of neuronal membranes, we numerically solved the Poisson-Boltzmann equation ,3. The mz axis from the extracellular region to the surface of the glycocalyx. The decay of the potential is more expressive for the neuroblastoma than for the ganglion neuron. An interesting observation is that the electric potential continues to decrease across the glycocalyx region of the spinal ganglion neuron. This however does not occur for the neuroblastoma cells, whose potential does not change in this region of the membrane.For the resting and AP states of spinal ganglion neurons and neuroblastoma cells, simulation results indicate that the electric potential significantly decreases along the Because there is no electric charge within the lypidic bilayer, our results demonstrate linear variations of the potential across the bilayer of neuronal membranes. Furthermore, the intracellular potential of both spinal ganglion neurons and neuroblastoma cells exponentially increases from the inner membrane surface to the bulk cytoplasmic region during the resting state. However, during the AP state, the electric potential remains unchanged in the cytoplasm.Our simulation results match those obtained for the membrane of the squid axon , whose m"} +{"text": "The propyhlaxy programme for the families with high hereditary risk of malignant cancers organized and guided by the Ministry of Health of Poland. Our data are based on the activity of the ZOZ \u201cSALVE\u201d, one of three units operating in the Lodz district in Poland. Over 600 questionnaires produced by patients with problems of in-family cases of cancer (including ovary cancers and/or breast cancer) were collected between mid-2010 to the end of the 2011. Four cases of male breast cancer were recorded and screened across the clinically recorded family data for these patients. It appeared that these cases could not be fully explained in accordance with the current concept of the family cases of cancers. These discrepances could be either related to the faulty selection criteria or to the highly differentiated male population suffering the breast cancer. The diagnostic potential within the \u201cModule 1: early detection of malignant cancers in hereditary high-risk families with ovary cancer and breast cancer\u201c is discussed and evaluated in practical terms including regulatory/ legislation aspects. Supported by the Ministry of Health of Poland."} +{"text": "We describe and expand on a novel computational approach to reduce detailed models of central pattern generation to equationless return mapping for the phase lags between the constituting bursting interneurons .Such mappings are then studied geometrically as the model parameters, including coupling properties of inhibitory and excitatory synapses, or external inputs are varied. Bifurcations of the fixed points and invariant circles of the mappings corresponding to various types of rhythmic activity are examined. These changes uncover possible biophysical mechanisms for control and modulation of motor-pattern generation. 
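For the membrane-potential study above, the qualitative behaviour of the potential profile can be illustrated with a minimal one-dimensional sketch: the linearized (Debye-Hückel) form of the Poisson-Boltzmann equation solved by finite differences between a charged surface and the bulk electrolyte. This is only a sketch under simplifying assumptions, not the authors' model (which treats fixed surface charges, the glycocalyx region and the resting versus action-potential states); the domain length, ionic strength and surface potential below are illustrative values.

```python
import numpy as np

# Minimal 1D linearized Poisson-Boltzmann (Debye-Hueckel) sketch.
# All parameter values are illustrative assumptions, not taken from the study.
eps0 = 8.854e-12           # vacuum permittivity, F/m
eps_r = 80.0               # relative permittivity of the aqueous phase
kT = 4.11e-21              # thermal energy at ~298 K, J
e = 1.602e-19              # elementary charge, C
n0 = 150.0 * 6.022e23      # bulk ion density for a 150 mM 1:1 salt, 1/m^3

kappa = np.sqrt(2.0 * n0 * e**2 / (eps0 * eps_r * kT))   # inverse Debye length, 1/m

L, N = 10e-9, 401                       # domain from the surface (z = 0) into the bulk
z = np.linspace(0.0, L, N)
h = z[1] - z[0]
phi = np.zeros(N)
phi[0] = -0.050                         # assumed surface potential, V
phi[-1] = 0.0                           # bulk potential taken as reference

# Finite-difference system for phi'' = kappa^2 * phi on the interior nodes.
n = N - 2
A = (np.diag(-(2.0 + (kappa * h) ** 2) * np.ones(n))
     + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1))
b = np.zeros(n)
b[0] -= phi[0]
b[-1] -= phi[-1]
phi[1:-1] = np.linalg.solve(A, b)

print("Debye length ~ %.2f nm" % (1e9 / kappa))
print("potential at one Debye length: %.1f mV (analytical ~ %.1f mV)"
      % (1e3 * np.interp(1.0 / kappa, z, phi), 1e3 * phi[0] * np.exp(-1)))
```

The exponential decay of the potential away from the charged surface, set by the Debye length, is the qualitative trend reported for the extracellular side of both cell types.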
Our analysis does not require knowledge of the equations that model the system, and so provides a powerful new approach to studying detailed models, applicable to a variety of biological phenomena beyond motor control.Motifs of three coupled cells are a common network configuration including models of biological central pattern generators. We demonstrate our technique on a motif of three reciprocally coupled, inhibitory and excitatory, cells that is able to produce multiple patterns of bursting rhythms. In particular, we examine the qualitative geometric structure of two-dimensional maps for phase lag between the cells. This reveals the organizing centers of emergent polyrhythmic patterns and their bifurcations, as the asymmetry of the synaptic coupling is varied. The presence of multistability and the types of attractors in the network are shown to be determined by the duty cycle of bursting, as well as coupling interactions."} +{"text": "The infection times of individuals in online information spread such as the inter-arrival time of Twitter messages or the propagation time of news stories on a social media site can be explained through a convolution of lognormally distributed observation and reaction times of the individual participants. Experimental measurements support the lognormal shape of the individual contributing processes, and have resemblance to previously reported lognormal distributions of human behavior and contagious processes. The analysis of human social dynamics stemming from the emergent effects of individual human interactions has recently created a spur of research activity. Encompassing a wide area ranging from the propagation of opinions, epidemic spreading of information and innovation across groups of individuals The analysis of the observed temporal distributions of online human activity data such as the inter-arrival and forwarding times of email Lognormal distributions There are a few general processes that lead to a lognormal random variable. A lognormal random variable is defined In this paper, we study the inter-arrival time of retweets in Twitter and the spreading times on Digg. Both times reflect human interaction through communication technology. We show that this relatively new type of human interaction is lognormally distributed.In order to spread information, such as forwarding a message or news item on a social media platform, three consecutive processes take place as shown in In the following discussion, time measurements from the microblog service Twitter and the (former) social news aggregator Digg.com are used to explore the spread of information on online social media. For the case of Twitter we measure Measurements of the network time In the following, we make two basic hypotheses:We assume that the random variable We make the hypothesis based on previous findings , and as the observation time (colored orange in the insets). The measured and fitted observation time parameters The analysis of the experimental data shows that both lower and upper bound of observation and reaction time are following a lognormal distribution, thereby confirming our initial hypotheses for the spreading time The analysis of spreading times was based on two datasets, a collection of initial original posts and their corresponding retweets on the microblog Twitter, and five year trace of stories and votes submitted to the social media aggregator Digg.com. 
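A minimal sketch of the two-stage delay model described in the information-spread study above: the total spreading time of a message is the sum of a lognormally distributed observation time and a lognormally distributed reaction time. The lognormal parameters used here are illustrative assumptions, not values fitted to the Twitter or Digg measurements.

```python
import numpy as np

# Two-stage spreading-time model: observe (lognormal) then react (lognormal);
# the total delay is their sum.  Parameters are illustrative assumptions.
rng = np.random.default_rng(0)
n = 100_000

mu_obs, sigma_obs = 6.0, 1.5      # assumed observation-time parameters (log-seconds)
mu_rea, sigma_rea = 4.0, 1.0      # assumed reaction-time parameters (log-seconds)

t_observe = rng.lognormal(mean=mu_obs, sigma=sigma_obs, size=n)
t_react = rng.lognormal(mean=mu_rea, sigma=sigma_rea, size=n)
t_spread = t_observe + t_react    # delay until the message is forwarded

# Fit a single lognormal to the total delay and compare medians, as a quick
# check of how close the summed delay stays to a lognormal shape.
log_t = np.log(t_spread)
mu_fit, sigma_fit = log_t.mean(), log_t.std()
print("fitted mu = %.2f, sigma = %.2f" % (mu_fit, sigma_fit))
print("empirical median %.0f s vs lognormal-fit median %.0f s"
      % (np.median(t_spread), np.exp(mu_fit)))
```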
Both datasets were collected from the providers' webpages and API interfaces and contain only publicly available information. The former dataset of 20.5 million tweets was constructed in a two step process: Starting from an initial 1% random sample of all tweets published within Twitter, we automatically retrieved for each of these initial tweets all further subsequent reappearances of these messages thereby generating forests of individual message spreads The analysis and the experimental data point to lognormal distributions as an explanation of the temporal distributions observed in epidemic spreading online. This finding is not per se surprising as lognormal distributions have for a long time been observed in and connected to human behavior, which is also driving the spread of information and innovation online.The analogue to the reaction time Similar lognormal patterns have been discovered across domains for the equivalent of the observation time This general notion of the lognormally distributed duration of human activities and associated spreading tasks are also repeatedly found across other domains, for example in the duration of strikes The lognormal distribution regularly appears in the broader context of universal human activities and behavior. When tracing the mobility of cell phone handsets, Barcelo and Jordan Similarly, a variety of such lognormally distributed properties of populations have been discovered, such as in the distribution of incomes or marriage ages Based on a huge number of measurements in Twitter and Digg, we found that human related activities such as the observation time"} +{"text": "In adult mammals, under physiological conditions, neurogenesis, the process of generating new functional neurons from precursor cells, occurs mainly in two brain areas: the subgranular zone in the dentate gyrus of the hippocampus, and the subventricular zone (SVZ) lining the walls of the brain lateral ventricles. Taking into account the location of the SVZ and the cytoarchitecture of this periventricular neural progenitor cell niche, namely the fact that the slow dividing primary progenitor cells (type B cells) of the SVZ extend an apical primary cilium toward the brain ventricular space which is filled with cerebrospinal fluid (CSF), it becomes likely that the composition of the CSF can modulate both self-renewal, proliferation and differentiation of SVZ neural stem cells. The major site of CSF synthesis is the choroid plexus (CP); quite surprisingly, however, it is still largely unknown the contribution of molecules specifically secreted by the adult CP as modulators of the SVZ adult neurogenesis. This is even more relevant in light of recent evidence showing the ability of the CP to adapt its transcriptome and secretome to various physiologic and pathologic stimuli. By giving particular emphasizes to growth factors and axonal guidance molecules we will illustrate how CP-born molecules might play an important role in the SVZ niche cell population dynamics. The adult subventricular zone (SVZ) neural stem cell niche, also designated as subependymal zone to distinguish from the embryonic SVZ, is the major source of novel neurons in the adult brain an apical surface composed of a large number of microvilli of variable length that extensively increases the contact area with the CSF; (2) a smother basolateral membrane ; and (3) lateral membranes, the surface contact area between adjacent epithelial cells. 
At the most apical portion of the lateral membranes the existence of tight junctions limits the paracellular passage of blood derived cells and proteins , is well described to promote growth and neuronal survival in the mouse developing cortex , adult SVZ derived cells form neurospheres that display multipotent and self-renewal properties and its apical interface (the CNS side). On the other hand, any response the CP mounts to external stimuli will ultimately reflect in its secretome, and hence in the CSF that surrounds the brain parenchyma. In fact, since the CP epithelial cells are equipped with transporters for several proteins and metabolites, pathological damage to the CP itself will alter CSF composition and ventricular volume, as in the case of hydrocephalus , or might function as potential rescue mechanisms for brain parenchyma lesions , will be a key issue in adult neural stem cell research in the future. In fact, modulating the CP-CSF nexus in pathologies of the central nervous system could become an important aspect in the usage of endogenous/exogenous neural progenitor cells for stem cell based therapies in the brain.Under basal normal physiological conditions the CP displays the ability to express several genes encoding for proteins known to promote proliferation, differentiation, and survival of neural progenitor cells. These proteins are secreted toward the CSF, which is as a route for the delivery of CP-born proteins/molecules to the SVZ. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Multiple biomarkers have been proposed for identifying patients at risk of developing the syndrome of acute kidney injury (AKI) . These bWe measured serum and urinary NGAL and urinary hepcidin in patients admitted to the ICU of a tertiary referral hospital with SIRS and either oliguria or a 25 \u03bcmol/l serum creatinine increase within 48 hours of admission. We used point-of-care creatinine measurements to identify the maximum RIFLE category of AKI within the first 5 days of enrolment. We corrected both urinary biomarkers for urinary creatinine. We calculated the reciprocal of hepcidin measurement and noted if serum NGAL was greater than the upper limit of normal (149 ng/ml). We derived the area under the curve (AUC) for the receiver operating characteristic curve (ROC) for all biomarkers.Between 31 August 2010 and 17 November 2010, we enrolled 92 patients; 17 of these patients had APACHE II diagnoses of sepsis. In patients with a diagnosis of sepsis, the predictive ability of all of the biomarkers measured was worse than in those without (Table Although the sample size is limited, there is a marked difference in the predictive ability of the measured biomarkers to predict AKI between septic and nonseptic patients. All patients admitted met the criteria for a diagnosis of SIRS, suggesting that inflammation and sepsis contribute to the development of AKI via different pathways. The ability of these biomarkers to predict AKI in patients with a diagnosis of sepsis in our cohort is limited. Further investigation is needed into whether the combination of specific biomarker patterns and clinical features can better identify patients at risk, particularly in the setting of sepsis. 
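For the biomarker study above, the derivation of an AUC can be illustrated with a minimal sketch that scores a single marker against a binary AKI outcome using the rank-based (Mann-Whitney) formulation of the area under the ROC curve. The marker values and outcome labels below are synthetic placeholders, not data from the enrolled cohort, and the NGAL-like distribution is assumed only for the example.

```python
import numpy as np

# Rank-based (Mann-Whitney) AUC for one biomarker against a binary outcome.
# All values are synthetic placeholders, not the study cohort.
rng = np.random.default_rng(1)
aki = rng.integers(0, 2, size=92)                        # 0 = no AKI, 1 = AKI
ngal = rng.lognormal(mean=4.5 + 0.6 * aki, sigma=0.5)    # assumed serum NGAL, ng/ml

def auc(marker, outcome):
    """Probability that a random positive case scores higher than a random negative."""
    pos, neg = marker[outcome == 1], marker[outcome == 0]
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

print("serum NGAL AUC: %.2f" % auc(ngal, aki))
# Urinary markers corrected for urinary creatinine, or the reciprocal of
# hepcidin, would be scored the same way after the corresponding transformation.
```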
In addition, further work examining the relationship between the various biomarkers and the aetiology and natural history of AKI is required."} +{"text": "The goal of humankind is to alleviate the suffering from disease and quest is to have good and robust health always. The gain of knowledge through research is one of the primary steps towards the same. With the newly emerging infectious diseases, increase in the antimicrobial resistance and also the increasing incidences of noncommunicable diseases have thrown at us new challenges to develop better diagnostics, drugs, medical devices, and vaccines.The present century is the era of biotechnology and information technology. Biotechnology has become the amalgamation of several technologies ranging from the field of biomedical research and synthetic biology and further transcending to engineering, nanotechnology, and bioinformatics. Biomedicine have gone a step further converging biology, chemistry, and physics which have led to increasing the understanding of the pathophysiological processes by deciphering the molecular interaction that plays a significant role in the cellular mechanism and thus devising new strategies to produce new diagnostics and therapies.Health innovation process needs a boost to stem the declining productivity and high turnover rates of drugs with escalating cost. So, the promotion of newer technologies and innovation is the need of the hour. It is imperative to have health innovation reach the masses and add a value to the public health system of nations across the globe. With adequate impetus we are ushering in today the era of knowledge driven, evidence-based innovation in fields of biotechnology and biomedicine, leading to development of platform technologies which can have global implications.This special issue in the journal of Biomed Research is a step towards the path which brings us to the forefront of new innovations and technologies that transcend all aspects of translational research and have high significance in the public health arena. Hence the editors have selected the cutting edge research in the field of medical biology which has bearing in the future of science.The paper by Y. Qin et al., which have found that DNA vaccine encoding BCR/ABL-hil2 enhances the in vivo humoral and cellular responses in BABL/c mice, hence presenting a new targeted immunotherapy approach which holds promising finding for patients with chronic myeloid leukemia (CML). The paper postulates these findings and if the research is translated to effective DNA vaccine in humans for CML patients, it will help in the treatment of residual disease after the treatment with chemotherapy or targeted therapy.Persea americana var. drymifolia). This paper reports the antimicrobial activity of defensin from avocado against Escherichia coli and Staphylococcus aureus. The alternative method of pathogen control paves the way for development of new treatment regimes against infectious pathogens.New research in antimicrobial therapy has become a very essential tool to fight infection in context of growing antibiotic resistance. The paper by J. J. G.-R. Rodr\u00edguez et al., has given alternate strategy with cloned antimicrobial peptide PaDef homologous to defensins from Mexican avocado ( Curcuma longa L. (from the family Zingiberaceae) on fighting oxidative stress in liver and brain in mice fed with excessive alcohol was studied in the paper by C. W. Pyun et al. 
The finding showed that curcumin increases brain and hepatic phosphatidylcholine hydroperoxide levels in mice after consumption of excessive alcohol, hence proving the effectiveness of consumption of daily curcumin intake in protecting the liver and brain against alcohol induced oxidative stress.The effect of curcumin fromThe G. C. Fontes et al., paper on characterization of alginate and OSA starch bead for the use of controlled release carrier for penicillin is an important finding to alleviate the discomfort of patients undergoing conventional administration of the vital drug. This paper also has long implications on designing public health intervention for delivery of essential drugs.The paper by M. P. L. Cunha et al., is the analysis of the vaccine adverse event reporting in the state of Rondonia, Brazil during the first ten years (1998\u20132008) after introduction of vaccines for BCG , DTwP/Hib , DTP , MMR and yellow fever (YF) is a major impact paper on public health. This study is of paramount importance which can help the regulators and clinicians alike from all over the world in gauzing the effectiveness and adverse reaction that can be anticipated from the regular vaccines that are a part of immunization programs of many countries around the world.The simple model to analyze assessment of the antitoxin antibodies in the paper by A. Skvortsov and P. Gray also is an essential scientific finding which was once validated experimentally will be a very useful tool to assess in vitro the potential of protective antibodies for further evaluation in vivo.The compilation of the articles in the special issue is very good read of high impact research and important scientific findings would have robust impact on strategizing innovative solutions of public health interventions globally; hence, all editors have selected the most promising research for publication.Nirmal K. GangulySimon CroftLalji SinghSubrata SinhaTanjore Balganesh"} +{"text": "The collective behavior of a neuronal population can often be illuminated by representing neurons as simple phase oscillators, where the response of each neuron to its synaptic input is given by an infinitesimal phase response curve (iPRC). This approach can, for example, predict whether neuronal activity will tend to synchronize across the population depending on the nature of the synaptic coupling and the shape of the iPRCs-3. Despi"} +{"text": "The overall network is obtained by joining all the neighborhoods. RegnANN makes no assumptions about the nature of the relationships between the variables, potentially capturing high-order and non linear dependencies between expression patterns. The evaluation focuses on synthetic data mimicking plausible submodules of larger networks and on biological data consisting of submodules of Escherichia coli. We consider Barabasi and Erd\u00f6s-R\u00e9nyi topologies together with two methods for data generation. We verify the effect of factors such as network size and amount of data to the accuracy of the inference algorithm. The accuracy scores obtained with RegnANN is methodically compared with the performance of three reference algorithms: ARACNE, CLR and KELLER. Our evaluation indicates that RegnANN compares favorably with the inference methods tested. 
The robustness of RegnANN, its ability to discover second order correlations and the agreement between results obtained with this new methods on both synthetic and biological data are promising and they stimulate its application to a wider range of problems.RegnANN is a novel method for reverse engineering gene networks based on an ensemble of multilayer perceptrons. The algorithm builds a regressor for each gene in the network, estimating its The task of gene regulatory network (GRN) inference is a daunting task not only in terms of devising an effective algorithm, but also in terms of quantitatively interpreting the obtained results co-expression networks. On the basis that correlation coefficients fail to capture more complex statistical dependencies among expression patterns (e.g. non-linearity), more recently general methods based on measures of dependency such as mutual information have been proposed Early network inference models were based on the analysis of the correlation coefficients Escherichia coli), controlled by hundreds of regulators (e.g. approximately Escherichia coli), are recorded for a limited amount of experimental conditions - about 450 for the last publicly available Escherichia coli gene expression data set. Thus, considering also possible combinatorial regulations and feedback loops, the number of possible solutions of the inference problem becomes prohibitively large compared to the available experimental measurements at hand.One of the aspects that makes network inference a daunting task is its intrinsic underdetermination neighborhood of each gene independently and then it joins these neighborhoods to form the overall network. RegnANN performance is compared with those of top-scoring methods such as KELLER In this work we introduce a novel inference method called Reverse Engineering Gene Networks with Artificial Neural Networks (RegnANN). This inference algorithm, trained using steady state data as provided by microarray data, builds a multi-variable regressor (one to many) for each gene in the network. The algorithm is based on an ensemble of Multilayer Perceptrons (MLPs) trained using steady state data. RegnANN estimates the Escherichia coli.In evaluating the performance of the four different network inference methods, first we settle in a controlled situation with synthetic data and then we focus on a biological setup by analyzing transcriptional subnetworks of The general performance of the network inference task is evaluated in terms of Matthews Correlation Coefficient - MCC The experimental evaluation firstly verifies RegnANN ability of inferring direct and indirect interactions among genes and possible cooperative interaction between putative regulators on a set of toy experiments. Considering only the underlying topology (e.g.: undirected unweighted graph), we then focus on synthetic data mimicking plausible submodules of larger networks generated according to both Barabasi Escherichia coli including a number of genes ranging from We finally demonstrate our approach on a biological data set consisting of a selected number of subnetworks of In order to present coherently the results obtained on synthetic data, we start firstly with an evaluation of RegnANN ability of inferring direct and indirect interactions among genes and possible cooperative interaction between putative regulators (e.g.: transcription factors). 
This is done on four toy examples considering interaction among four genes.The second phase of our analysis focuses on the effect of varying the amount of available data in the task of network inference while considering a fixed threshold for the binarization of the inferred adjacency matrix. The performance in terms of MCC obtained with RegnANN is systematically compared with the ones obtained by KELLER. The accuracy of each inference method is firstly evaluated on synthetic data by varying the topology of the network, the amount of data available and the method adopted to synthesize the data. Once the topology of the network is (randomly) selected, the desired amount of data is synthesized according to the generation method of choice, the network inference methods are applied and the MCC score calculated. We consider discrete symmetric adjacency matrices for fair comparison between the two methods as KELLER does not infer coupling direction, nor the strength of the interaction. To account for intrinsic instability of each inference method, data generation and network inference are repeated The third phase of the experimental evaluation on synthetic data compares the performance of RegnANN, ARACNE and CLR in terms of AUC (the area under the curve). The curve is constructed by varying the value of the binarization threshold between Finally, we compare the results obtained on a selection of Escherichia coli gene subnetworks for the four inference algorithms. We will first start considering a fixed threshold for the binarization of the inferred adjacency matrix. Secondly, we briefly analyze the problem of optimal threshold selection for the binarization of the inferred adjacency matrix in the hypothesis of the presence of gold standard data, which is not available in most realistic biological applications. As for the two phases before, we consider discrete symmetric adjacency matrices.In this section we present four different toy experiments to illustrate how RegnANN is capable of inferring direct and indirect interactions among genes and cooperative interaction between putative regulators (e.g.: transcription factors).Let us consider four different genes The correlation matrix shown in Let us consider four different genes seed expressions with values uniformly distributed in We synthesize expression profiles very similarly to the SLC method: we start considering a set of The correlation matrix shown in Let us consider again the four different genes seed expressions with values uniformly distributed in for each gene As in the previous case, we generate the gene expression profiles considering a set of The correlation matrix in As a final example, let us consider the four different genes seed expressions with values uniformly distributed in We generate the gene expression profiles considering a set of The correlation matrix in Strictly considering the adjacency matrix in In order to eliminate the ambiguity in the determination of the direction of the interaction among genes, in the following we will consider symmetric adjacency matrices as input for data generation and as the output of the network inference task. As a second possible solution, in the case of RegnANN we could have selected the direction of interaction by identifying the highest correlation value. On the other hand, this choice would have resulted in an unfair comparison with (i.e.) KELLER: the latter discards any information about the direction of the interaction. 
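The four-gene toy experiments can be illustrated with a minimal sketch of an SLC-style generation run: one gene provides a uniformly distributed seed expression, two genes are noisy linear functions of it, a fourth combines those two, and a Pearson correlation matrix is computed from the resulting profiles. The particular functional relations, noise level and number of profiles below are illustrative assumptions rather than the exact settings used in the study.

```python
import numpy as np

# SLC-style toy data for four genes: A is a uniform "seed" expression,
# B and C are noisy functions of A, and D depends on both B and C.
# Relations and noise level are illustrative assumptions.
rng = np.random.default_rng(42)
n_profiles = 200

a = rng.uniform(-1.0, 1.0, size=n_profiles)                 # seed expression for gene A
b = 0.8 * a + rng.normal(0.0, 0.1, n_profiles)              # A -> B (direct interaction)
c = -0.7 * a + rng.normal(0.0, 0.1, n_profiles)             # A -> C (direct interaction)
d = 0.5 * b + 0.5 * c + rng.normal(0.0, 0.1, n_profiles)    # B, C -> D (cooperative)

expr = np.vstack([a, b, c, d])                              # genes x profiles matrix
corr = np.corrcoef(expr)                                    # Pearson correlation matrix

genes = ["A", "B", "C", "D"]
print("      " + "  ".join("%5s" % g for g in genes))
for g, row in zip(genes, corr):
    print("%5s " % g + "  ".join("%5.2f" % v for v in row))
# B and C are only indirectly coupled (through A), yet their pairwise
# correlation is far from zero -- the kind of ambiguity the toy examples expose.
```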
However, it is important to consider that an exhaustive analysis of the direction of the coupling would require a dedicated procedure to account for the variability of the regression.In this section we analyze the performance of RegnANN in inferring network topology by applying a fixed discretization threshold of data ratio: the ratio of the number of expression profiles to the number of nodes. We rescale linearly the synthetic gene expression values in In the following we show results obtained by varying the mean degree of the nodes between In the case of Erd\u00f6s-R\u00e9nyi topology with mean degree equal to In the case of Barabasi topology, the accuracy of both methods drops quickly to a value of It is interesting to note that for both topologies, KELLER tends to outperform RegnANN for mean degree values bigger and equal to In microarray experiments, the analysis of the raw data is often hampered by a number of technical and statistical problems. The possible remedies usually lie in appropriate preprocessing steps, proper normalization of the data and application of statistical testing procedures in the derivation of differentially expressed genes In this section we compare the performance of ARACNE, CLR and RegnANN varying the threshold applied to discretize the inferred adjacency matrix in terms of the the AUC (area under the curve) value.The figure indicates that the mean performance or it should be derived from a gold standard data, which is not available in most realistic biological applications. An obvious solution to this problem is to adopt a training/validation schema: ground-truth data is used to infer the optimal threshold value while external data is used to verify the reconstruction accuracy. Here, for each module in Escherichia coli submodules as in The table indicates that the optimal threshold value depends on the algorithm adopted and the submodule considered.Escherichia coli submodule. Although outside the scope of this work, this preliminary evaluation indicates that in the case of biological data, learning the optimal threshold value via standard machine leaning methods is not straightforward: presence of noise in the data and the high complexity of the domain often cause selection bias. This is the key point that lead us focus on estimating the structures of the interaction between genes rather than the detailed strength of these interactions.In this work we presented a novel method for network inference based on an ensemble of multi-layer perceptrons configured as multi-variable regressor (RegnANN). We compared its performance to the performance of three different network inference algorithms on the task of reverse engineering the gene network topology, in terms of the associated MCC score. The proposed method makes no assumptions about the nature of the relationships between the variables, capturing high-order dependencies between expression patterns and the direction of the interaction, as shown on selected synthetic toy examples. Our extensive evaluation indicates that the newly introduced RegnANN shows accuracy and stability scores that compare very favorably with all the other inference methods tested, often outperforming the reference algorithm in the case of fixed binarization threshold. On the other hand, considering all the possible thresholds for the binarization of the inferred adjacenci matrix (the AUC score) the differences among the tested methods tend to become irrelevant. 
Our evaluation on synthetic data demonstrates that various factors influence the performance of the inference algorithms adopted: the topology of network, its size and its complexity, the amount of data available, the normalization procedure adopted. Generally, these are only a few of the factors that may influence the outcome of a network inference algorithm; they may not be limited to the relative small set of parameters explored here.Escherichia coli, although the very same expression values are used to infer the different gene sub-networks. Our experiments indicate great variability of the scores of the reference inference algorithms across the different Escherichia coli sub-modules. On the other hand, RegnANN scores are more homogeneous, decreasing as the density of the module decreases.Results on the biological data confirm that the correctness of the inferred network depends on the topological properties of the modules: very different accuracy results are obtained on the different submodules of Finally, we tested the possibility of applying standard machine leaning methods to learn the optimal binarization threshold value. Our preliminary evaluation indicates that this is not a straightforward task: presence of noise in the data and the high complexity of the biological domain often cause selection bias.The robustness of RegnANN performance recorded across the board and the agreement between results obtained with this new methods on both synthetic and biological data are promising and they stimulate its application to a wider range of problems.neighborhood of each gene independently and then joining these neighborhoods to form the overall network, thus reducing the problem to a set of identical atomic optimizations.To infer gene regulatory networks we adopt an ensemble of feed-forward multilayer perceptrons. Each member of the ensemble is essentially a multi-variable regressor (one to many) trained using an input expression matrix to learn the relationships (correlations) among a target gene and all the other genes. Formally, let us consider the multilayer perceptron as in n matrix . By cyclWe build The algorithm of choice for training each multi-layer perceptron is the back-propagation algorithm Although back-propagation is essentially a heuristic optimization method and alternatives such as Bayesian neural network learning The learning parameters we use to train each multi-layer perceptron are as follows: learning rate equal to Once the ensemble is trained, the topology of the gene regulatory network is obtained by applying a second procedure. Considering each gene in the network separately, we pass a value of http://sourceforge.net/projects/regnann/files/. The source code is distributed according to the GPLv3 license (open-source).To improve the general efficiency of the algorithm and thus to allow a systematic comparison of its performance with the other gene network reverse engineering methods tested, we implemented the ANN based regression system using the GPGPU programming paradigm. 
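A minimal sketch of the regressor-ensemble idea described above: one small multilayer perceptron per gene, trained with a gradient-descent/back-propagation solver as a one-to-many regressor from that gene's expression to the expression of all remaining genes. The learning rate, momentum, network size and interaction-scoring step below are stand-in assumptions (the scoring simply probes each trained regressor with a high and a low input and thresholds the response difference, a rough approximation of the second procedure mentioned above), and scikit-learn's MLPRegressor is used in place of the authors' GPGPU implementation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# One MLP regressor per gene (one-to-many), then a crude probe of each trained
# regressor to score candidate interactions.  Settings are illustrative.
rng = np.random.default_rng(0)
n_genes, n_profiles = 6, 300
expr = rng.uniform(0.0, 1.0, size=(n_profiles, n_genes))               # toy expression matrix
expr[:, 1] = 0.9 * expr[:, 0] + 0.1 * rng.uniform(size=n_profiles)     # gene 0 -> gene 1
expr[:, 2] = 0.9 * expr[:, 1] + 0.1 * rng.uniform(size=n_profiles)     # gene 1 -> gene 2

scores = np.zeros((n_genes, n_genes))
for g in range(n_genes):
    others = [j for j in range(n_genes) if j != g]
    mlp = MLPRegressor(hidden_layer_sizes=(8,), solver="sgd",
                       learning_rate_init=0.01, momentum=0.9,
                       max_iter=2000, random_state=0)
    mlp.fit(expr[:, [g]], expr[:, others])                 # one-to-many regression
    hi = mlp.predict(np.array([[1.0]]))[0]                 # response to a high input
    lo = mlp.predict(np.array([[0.0]]))[0]                 # response to a low input
    scores[g, others] = np.abs(hi - lo)

adjacency = (np.maximum(scores, scores.T) > 0.5).astype(int)  # symmetric, illustrative threshold
print(adjacency)
```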
A reference implementation of the RegnANN algorithm is available at As reference methods we select three alternative algorithms widely used in literature: ARACNE, CLR and KELLER.KELLER is a kernel-reweighted logistic regression method With respect to other inference methodologies, KELLER adopt a fixed threshold to discretize the inferred adjacency matrix while it performs an optimization of the regularization weight lambda controlling the sparsity of the solution by maximizing a Bayesian Information Criterion (BIC). The authors apply a grid search on a selection of possible parameter values. In our work, we adopt the very same procedure: we use the same fixed discretization threshold for the binarization of the adjacency matrix, while we select the optimal solution by maximizing the BIC via a grid search for the optimal value of lambda as performance metric.CLR is an extension of the relevance networks class of algorithms true positives, while FN indicates the fraction of false negatives.When the performance of a network inference method is evaluated, it is common practice to adopt two metrics: precision and recall. Recall indicates the fraction of true interactions correctly inferred by the algorithm, and it is estimated according to the following equation:false positives.On the other hand, precision measures the fraction of true interactions among all inferred ones, and it is computed as:In this work we adopt the Matthews Correlation Coefficients - MCC The MCC is in essence a correlation coefficient between the observed and predicted binary classifications: it returns a value between The Matthews Correlation Coefficient is calculated as:We benchmark the reverse engineering algorithms here considered using both synthetic and biological data.igraph extension package to the GNU R project for Statistical Computing The synthetic data sets are obtained starting from an adjacency matrix describing the desired topology of the network. Here we consider two different network topologies: Barabasi-Albert Once the topology of the network is (randomly) generated, the output profiles of each node are generated according to two different approaches: the first one considers only linear correlation among selected genes (SLC), the second one is based on a gene network/expression simulator recently proposed to assess reverse engineering algorithms GES . In ordeSimple Linear Correlation (SLC): similarly to the simulation of gene expression data presented in the supplementary material of seed expressions : this second methodology is based on a gene network simulator recently proposed to assess reverse engineering algorithms Escherichia coli starting from a set of steady state gene expression data. The data are obtained from different sources and they consist of three different elements, namely the whole Escherichia coli transcriptional network, the set of the transcriptional subnetworks and the gene expression profiles to infer the subnetworks from. The Escherichia coli transcriptional network is extracted from the RegulonDB (http://regulondb.ccg.unam.mx/) database, version Escherichia coli Affymetrix Antisense2 microarray expression profiles for The task for the biological experiments is the inference of a few transcriptional subnetworks of the model organism A number of sources of noise can be introduced into the microarray measurements, e.g. during the stage of hybridization, digitization and normalization. 
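The recall, precision and MCC used to score the inferred networks follow the standard confusion-matrix definitions, which is what the surrounding text describes; a minimal sketch applying them to a true versus an inferred undirected adjacency matrix is given below, scoring only the upper triangle so that each candidate edge is counted once.

```python
import numpy as np

# Standard confusion-matrix scores (recall, precision, MCC) for an inferred
# undirected network against a reference network.
def edge_vector(adj):
    iu = np.triu_indices_from(adj, k=1)        # each undirected edge counted once
    return adj[iu].astype(bool)

def network_scores(true_adj, inferred_adj):
    t, p = edge_vector(true_adj), edge_vector(inferred_adj)
    tp = np.sum(t & p)
    tn = np.sum(~t & ~p)
    fp = np.sum(~t & p)
    fn = np.sum(t & ~p)
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return recall, precision, mcc

true_adj = np.array([[0, 1, 0, 0],
                     [1, 0, 1, 0],
                     [0, 1, 0, 1],
                     [0, 0, 1, 0]])
inferred = np.array([[0, 1, 1, 0],
                     [1, 0, 1, 0],
                     [1, 1, 0, 0],
                     [0, 0, 0, 0]])
print("recall = %.2f, precision = %.2f, MCC = %.2f" % network_scores(true_adj, inferred))
```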
Therefore, it is often preferred to consider only the qualitative level of gene expression rather than its actual value In this work we compute the discrete value of the expression for each of the Generally, when a scaling method is applied to the data, it is assumed that different sets of intensities differ by a constant global factor In this work we test two different data rescaling methods:linear rescaling: each gene expression column-vector is linearly rescaled between statistical normalization: each gene expression column-vector is rescaled such that its mean value is equal to We consider gene expression matrices of dimension In this section we show how the tuning parameters of RegnANN impact its performance on a selected testbed: Barabasi networks with 100 nodes and SLC data generation. Accuracy scores (MCC) are calculated as mean of The two set of figures indicate that the values for the learning parameters adopted in the evaluation of the performance of RegnANN (momentum\u200a=\u200aeps) on a sample testbed. The reference implementation of ARACNE provided by In this section we explore the influence of the tolerance parameter (We adopt the eps parameter, e.g.: considering network size eps value eps value eps value eps value eps parameter value to eps value eps value eps value eps value The figure indicates that for Barabasi networks no statistically relevant difference in the performance of ARACNE is recoded varying the The figure indicates that the value"} +{"text": "Cancer is a disease of aging.With the aging of the population the management of cancer in the older person with chemotherapy is beoming increasingly common. This treatment may be safe and effective if some appropriate measures are taken, including, an assessment of the physiologic age of each patient, modification of doses according to the renal function, use of meyelopoietic growth factors prophylactically in presence of moderately toxic chemotherapy, and provision of an adequate caregiver. Cure, prolongation of survival, and symptom palliation are universal goals of medical treatment. Prolongation of active life expectancy should be added to the treatment goal of the older aged person. The management of cancer in the older aged person is an increasingly common problem. Cancer is a disease of aging.Is the person life expectancy going to be shortened by cancer?Is the patient\u2019s life expectancy long enough that he or she will experience the complications of cancer?Is the patient able to tolerate antineoplastic treatment?What are the long term side effects of cancer treatment in the older person?Does the patient have adequate social support to undergo cancer treatment?The management of cancer in the older-aged person include some questions that are specfic of aging:We will address these questions using cytotoxic chemotherapy as a model. After an overview of aging and its assessment we will explore the pharmacologic changes of aging and the provisions to ameliorate the complications of chemotherapy.Aging implies a decline in life expectancy and stress-coping ability, increased prevalence of comorbidity, increased risk of functional dependence and of the need of social support.Aging has been defined as loss of entropy and fractality10Homeostasis is the ability of a system to restore basic conditions after stress imposed by environmental interactions. 
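The two per-gene rescaling schemes can be sketched as follows; the exact target ranges are not restated above, so the common choices are assumed here, namely linear rescaling of each gene (column) to the unit interval and statistical normalization of each gene to zero mean and unit standard deviation.

```python
import numpy as np

# Per-gene rescaling of a profiles x genes expression matrix.  The target
# ranges (unit interval; zero mean, unit standard deviation) are assumptions.
def linear_rescale(expr):
    lo, hi = expr.min(axis=0), expr.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)          # guard against constant genes
    return (expr - lo) / span

def statistical_normalize(expr):
    sd = expr.std(axis=0)
    sd = np.where(sd > 0, sd, 1.0)
    return (expr - expr.mean(axis=0)) / sd

rng = np.random.default_rng(3)
expr = rng.lognormal(mean=2.0, sigma=1.0, size=(50, 10))    # profiles x genes
print(linear_rescale(expr).min(), linear_rescale(expr).max())        # -> 0.0 1.0
print(statistical_normalize(expr).mean(axis=0).round(6)[:3])         # -> ~0 per gene
```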
One may observe the dysregulation of a number of physiologic parameters, including blood pressure, insulin sensitivity, circulating levels of corticosteroid and cathecolamines. The so called \u201callostatic load\u201d assesses the dysregulation of 12 different parameters and may estimate the physiologic age. Its clinical value so faris unestablished.Inflammatory markers and the length of leukocyte telomeres are promising laboratory tests, but at present have limited clinical use.Table 1.17The best validated instrument for the assessment of chronologic age is the Comprehensive Geriatric Assessment\u201d (CGA), whose elements are illustrated in Dependence in one or more ADLs and the presence of one or more geriatric syndromes imply a marginal functional reserve associated with very limited tolerance of stress. These individuals can only survive thanks to a full time home caregiver, or to admission to an adult living facility. Dependence in one or more IADLs is associated increased risk of mortality,In addition to reduced life-expectancy and increased risk of treatment complicationsSome elements of the CGA may be utilized in models that predict the mortality and the risk of chemotherapy-related toxicity in older individuals.34Any discussion of the assessment of physiologic age should include frailty. This is constructed as a condition of critically reduced physiologic reserve so that a minimal stress may cause loss of independence and start a chain of events that lead to the patient \u2018s death.3Involuntary weight loss of 10 lbs or more over a 6 months period;Decreased grip strength;Difficulty in starting movements;Reduced walk speed;Exhaustion.The clinical definition of frailty was first provided by the Cardiovascular Health Study (CHS).The three groups of individuals were classified as: non-frail or fit ; pre-frail , and frail .Another index of frailty validated both in older women and older men has been provided by the Study of Osteoporotic Fracture (SOF)The interactions of aging and frailty should be clarified. In particular we should ask: is cancer a cause of frailty? Is chemotherapy a cause of frailty? Is frailty associated with increased risk of therapeutic complications?Table 2).40Aging is associated with pharmacokinetic and pharmacodinamic changes that may enhance the toxicity of these agents and in interrupting treatment when the neuropathy may interfere with an individual\u2019s activity.An important and unanswered question is whether age is a risk factor for more frequent and more severe manifestations of \u201cchemobrain\u201d that is a cognitive dysfunction caused by chemotherapy.Age is a risk factor for delayed complications of chemotherapy. Myelodysplasia and acute myeloid leukemia may develop in 1\u20132% of patients 65 and over who had received anthracycline-containing treatment in the previous 10 years. The risk of these complications is enhanced by myelopoietic growth factors.49Other potential long term complications of cancer chemotherapy include dementia, functional dependence and frailty.Cure, prolongation of survival, and symptom palliation are universal goals of medical treatment. Prolongation of active life expectancy should be added to the treatment goal of the older aged person.The construct of active life expectancyWith the aging of the population the management of cancer in the older person with chemotherapy is beoming increasingly common. 
This treatment may be safe and effective if some appropriate measures are taken, including, an assessment of the physiologic age of each patient, modification of doses according to the renal function, use of meyelopoietic growth factors prophylactically in presence of moderately toxic chemotherapy, and provision of an adequate caregiver. The goals of treatment should include prolongation of active life-expectancy."} +{"text": "Perinatal infections include pregnancy associated infections, fetus infection during labor, and other types of infections, including the early postnatal and late postnatal ones that appeared in the range of one month after birth. They represent a constant threat to the health of the conceptus, fetus and newborn. Therefore their early diagnosis is fundamental in terms of establishing a fair treatment of maternal infection for prophylaxis of product conception and fetal illness and treatment of neonatal infections.Starting from current practice realities, the authors show the need to include infectious disease specialist consultants in all periods: in antepartum, during the first trimester and occasionally in the remainder of the pregnancy and postpartum. In this way it is possible to avoid the excess diagnosis of potentially teratogenic acute infections included in the acronym TORCH during pregnancy."} +{"text": "Seismic design loads for tunnels are characterized in terms of the deformations imposed on the structure by surrounding ground. The free-field ground deformations due to a seismic event are estimated, and the tunnel is designed to accommodate these deformations. Vertically propagating shear waves are the predominant form of earthquake loading that causes the ovaling deformations of circular tunnels to develop, resulting in a distortion of the cross sectional shape of the tunnel lining. In this paper, seismic behavior of circular tunnels has been investigated due to propagation of shear waves in the vertical direction using quasi-static analytical approaches as well as numerical methods. Analytical approaches are based on the closed-form solutions which compute the forces in the lining due to equivalent static ovaling deformations, while the numerical method carries out dynamic, nonlinear soil-structure interaction analysis. Based on comparisons made, the accuracy and reliability of the analytical solutions are evaluated and discussed. The results show that the axial forces determined using the analytical approaches are in acceptable agreement with numerical analysis results, while the computed bending moments are less comparable and show significant discrepancies. The differences between the analytical approaches are also investigated and addressed. Underground structures do not fall in resonance with the ground, however, response in accordance with the response of the surrounding ground. Therefore, seismic design loads for underground structures are generally characterized in terms of the deformations imposed on the structure by the surrounding ground. The free-field ground deformations due to a seismic event are estimated, and the tunnel is designed to accommodate these deformations. Underground structures are assumed to experience three primary modes of deformation during seismic shaking :axial coAxial deformations in tunnels are generated by the components of the seismic waves that produce motions parallel to the axis of the tunnel and cause alternating compression and tension. 
Bending deformations are caused by the components of seismic waves producing particle motions perpendicular to the longitudinal axis. Design considerations for axial and bending deformations are generally in the direction along the tunnel axis . Ovalingfree-field deformation approach;soil-structure interaction approach.Studies have suggested that, while ovaling may be caused by waves propagating horizontally or obliquely, vertically propagating shear waves are the predominant form of earthquake loading that causes these types of deformations . There aIn this paper, seismic behavior of circular tunnels has been investigated due to propagation of shear waves in the vertical direction using quasi-static analytical approaches as well as numerical methods. This issue has been investigated in the past by several researchers using numerical and analytical methods, which has led to solutions and results. In particular, analytical solutions are still very popular, as confirmed by the growing body of the literature devoted to this topic: while early studies referred to simplified geometries and constitutive assumptions , recently proposed closed-form solutions deal, as an example, with piecewise tunnel lining connected by joints , nonunifThe surrounding soil of the tunnels constructed in the urban areas, usually consisting of the alluvium and low strength soil, experiences large deformation during ground motions. In this case the soil is voided elastic state and undergoes large plastic deformation. One of the best methods for modeling large deformations is finite difference method. The use of structural elements available in FLAC allows the dynamic soil-structure interaction analysis to be performed. In this research the nonlinear finite difference method has been used, which allows the dependence of damping and stiffness of the material to the strain level to be considered and taken into account. Also, the shallow circular tunnels under the propagation of shear waves have been analyzed to compare the analytical solutions with the numerical finite difference method. The geotechnical properties of the materials used in the analyses are obtained from the geotechnical investigations performed for the second line of Tabriz urban subway project (Iran). Also the input variables such as the maximum shear strain at tunnel's level, the shear modulus proportional to the level of the strain, and the maximum particle velocity of the mass required in the analyses with the analytical methods are determined using two-dimensional free-field numerical analysis.In order to make use of actual data, the cross sections and their material properties chosen for the analyses are obtained from the geotechnical reports prepared for Tabriz subway project. The properties have been used from the geotechnical reports including the reports of geophysics vibration tests to determine the elastic properties for dynamic analysis and the goal of these tests is measuring the velocity of volume and shear waves in cross sections to determine the dynamic properties of the soil. The data and specifications of Sections 1 and 2 used in the analyses are shown in Also shown in determination of the backbone (skeleton) curve;application of a set of rules relevant to behavior of loading and unloading and reduction of stiffness along with other required parameters.The numerical analyses were performed with the code FLAC which alCs is the minimum velocity of the shear wave and \u0394l is the largest dimension of element. 
For each cross section the maximum frequency is determined and, by use of the code filtration function, the higher frequencies are deleted from the ground motion data.By defining the hysteretic model for dynamic analysis, the modulus reduction curve of the material is determined as a skeleton curve for conducting nonlinear analysis. For dynamic analysis, the elements dimensions are limited by the criterion of wave transmission. The dimension of the largest element and the minimum velocity of the shear wave are used to determine the maximum value of the frequency as follows:\u03c1 is the density, \u03c3s is the applied shear stress, Cs is propagation velocity of the shear wave, and vs is shear particle velocity.The lateral boundaries of the model must take into account the free-field conditions. In order to minimize the reflection of waves from bottom boundary of the model, the boundary is considered as quite boundary, and consequently the seismic input should be defined as a stress wave. By integrating the acceleration time history, the velocity time history is determined and the input wave is subsequently changed into the stress shear wave using the following relationship:The acceleration of the top and bottom joints is also compared with the maximum input acceleration, and if needed the applied coefficient can be corrected in such a way that the modified acceleration approximates the exact value. The hysteretic damping is used to attenuate the energy of the numerical modeling. The modulus reduction curves proposed by Sun et al. for fineMs is secant modulus, \u03b3 is the shear strain, and L1 and L2 are bounds of the logarithmic strain (points at which gradients are zero). Figures in which Vertically propagating shear waves are the predominant form of earthquake loading that causes the ovaling deformations of circular tunnels to develop, resulting in a distortion of the cross sectional shape of the tunnel lining. The most important and useful analytical methods for calculating these deformations are Wang and Penzien methods , 19 and Wang and Penzien solutions are developed for both full-slip and no-slip condition between the tunnel and lining. As the structural elements are directly joint to the surrounding soil, the no-slip condition is used in this paper. The input parameters in Wang and Penzien methods are cover's geometric and elastic properties, elastic properties of soil mass, and the maximum shear strain caused by the ground motions. The basic point in Wang and Penzien methods is determining the shear modulus. The shear modulus of soil materials is dependent on the level of shear strain and on higher shear strain levels; the maximum shear modulus determined by the geophysical tests will be decreased. The results obtained by use of the maximum shear modulus are therefore overestimated, and consequently the compatible shear modulus should be used. The shear modulus and shear strain parameters are determined by numerical modeling of the free-field condition. Figures Bobet presenteBobet presented the solution by assuming the no-slip condition between the tunnel and the lining at dry and saturated conditions. In this study, both conditions have been considered and investigated. The maximum particle velocity at the tunnel axis is obtained by the free-field numerical modeling. In numerical models, the tunnel cover has been shown with 72 structural elements which are rigidly connected to each other. 
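The two ground-motion inputs described above can be written compactly. The sketch below, in Python, uses the wave-transmission (lambda/10) criterion, f_max = Cs_min / (10 * dl_max), and the quiet-boundary conversion sigma_s(t) = 2 * rho * Cs * v_s(t); both relationships follow common FLAC practice and are stated here as assumptions rather than as formulas quoted from this study, and the numbers in the example are illustrative, not Tabriz data.

import numpy as np

# Minimal sketch of the dynamic-input calculations described above (assumed forms).
def max_transmissible_frequency(cs_min, dl_max, ratio=10.0):
    """Highest frequency (Hz) a mesh whose largest zone is dl_max (m) can propagate
    for the lowest shear-wave velocity cs_min (m/s), using the lambda/10 rule."""
    return cs_min / (ratio * dl_max)

def quiet_boundary_shear_stress(accel, dt, rho, cs):
    """Integrate an acceleration history (m/s^2) into velocity and convert it to the
    shear-stress history (Pa) applied at an absorbing (quiet) base: 2 * rho * cs * v."""
    velocity = np.cumsum(accel) * dt  # simple rectangular integration
    return 2.0 * rho * cs * velocity

f_max = max_transmissible_frequency(cs_min=200.0, dl_max=1.0)      # 20 Hz
accel = np.sin(2.0 * np.pi * 2.0 * np.arange(0.0, 10.0, 0.01))     # toy record
stress = quiet_boundary_shear_stress(accel, dt=0.01, rho=1900.0, cs=200.0)
print(f_max, stress.max())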
The number of the elements is governed by the size of the surrounding zones, and therefore, in order to increase the accuracy, one structural element is created in each surrounding zone of the excavated tunnel. Dynamic loading is used in the models which are in statically stable conditions. By considering excavating conditions, the numerical models are first reached to the static equilibrium and then the dynamic analysis can be started. The numerical model of cross section 1 has been shown in In this research, 8 structural elements have been chosen for each cross section in order to record the force and moment time histories during dynamic loading. The results of the numerical analyses are obtained by determining the moment and force time histories during model seismic motions along with their maximum values. In the stable condition, before applying dynamic input, values of these parameters detracted from the maximum values during the dynamic wave passage, in order to determine the net maximum values. Owing to the generation of numerous time history curves and for the sake of space saving, only two history curves for each of axial and shear forces as well as bending moments related to two points of the above-mentioned eight structural element points at various positions are provided. Figures In this section, forces imposed on tunnel lining after seismic loading are investigated. Also, results of the finite difference numerical method are compared with the results obtained by Wang-Penzien and Bobet analytical methods in the no-slip condition. One maximum value for axial force is determined by Wang and Penzien methods. Also for seismic ground motions induced by shear waves, one maximum value is obtained for axial force by Bobet method. As shown in Figures The bending moment of analytical and numerical methods is compared in Figures As shown in Figures Underground structures using numerical methods are necessary because of many complicated conditions such as heterogeneous layers, irregular geometry of tunnels, ground water, and soil-structure interactions. The results of numerical methods should be compared with analytical methods. The maximum values of axial forces obtained by numerical analyses are more than the other three analytical methods because of reduction coefficient of shear modulus. The results of analytical methods seemed to be overestimated, since the maximum values of shear modulus have been used.The result obtained by Bobet method in dry ground is the closest one to the results of numerical analysis in both cross sections. Also the axial forces computed by Bobet method in dry and saturated ground are approximately the same. By assuming saturated condition in Bobet method, the axial forces induced by shear waves are a little less than the axial force obtained in the dry ground. The axial forces computed by Wang method are close to the results of Bobet method. The solutions of Penzien result iIn the case of bending moment, the highest value was obtained by the numerical modeling. The results of four analytical methods are very close to each other, and the results of Penzien method are compatible with the others. 
However, the differences with the results of numerical modeling are noticeable and Tables"} +{"text": "Introduction: We have reported that a developed lower-positioned transverse ligament between the superior-medial orbital rim and the lateral orbital rim on the lateral horn in the lower orbital fat space antagonizes eyelid opening and folding in certain Japanese to produce narrow eye, no visible superior palpebral crease, and full eyelid. In this study, we confirmed relationship between development of the lower-positioned transverse ligament and presence of the superior palpebral crease. Methods: We evaluated whether (1) digital immobilization of eyebrow movement during eyelid opening and (2) a developed lower-positioned transverse ligament could classify Japanese subjects as being with or without visible superior palpebral crease. Results: Digital immobilization of eyebrow movement restricted eyelid opening in all subjects without visible superior palpebral crease but did not restrict in any subject with visible superior palpebral crease. Macroscopic and microscopic evidence revealed that the lower-positioned transverse ligament behind the lower orbital septum in subjects without visible superior palpebral crease was significantly more developed than that in subjects with visible superior palpebral crease. Conclusions: Since a developed lower-positioned transverse ligament antagonizes opening and folding of the anterior lamella of the upper eyelid in subjects without visible superior palpebral crease, these individuals open their eyelids by lifting the eyebrow with the anterior lamella and the lower-positioned transverse ligament owing to increased tonic contraction of the frontalis muscle, in addition to the retractile force of the levator aponeurotic expansions. In subjects with visible superior palpebral crease, the undeveloped lower-positioned transverse ligament does not antagonize opening and folding of the anterior lamella, and so they open their eyelids by folding the anterior lamella on the superior palpebral crease via the retractile force of the levator aponeurotic expansions. Since the absence of visible SPC indicates the anterior lamella of the upper eyelid to be unfoldable and require lifting of the eyebrow for maintenance of an adequate visual field, we considered this to be the key distinguishing trait of descendants of the Yayoi migrants in this study. In contrast, we presumed the presence of visible natural SPC to specifically indicate the anterior lamella of the upper eyelid as being foldable on the SPC without lifting the eyebrow for maintenance of an adequate visual field in Japanese stemming from Jomon native ancestry.Digital immobilization of eyebrow movement during eyelid opening from closed eyelid to primary gaze counteracts the contraction of the frontalis muscle to lift the eyebrow with the anterior lamella of the upper eyelid and is commonly used to exclude involvement of eyebrow lifting when impaired levator function is evaluated in congenital blepharoptosis c. This tTo verify our hypotheses, we evaluated whether (1) digital immobilization of eyebrow movement during eyelid opening and (2) a developed LTL that restricts eyelid opening and folding could classify Japanese subjects as being with or without visible SPC.We enrolled 66 Japanese subjects . 
Reasons for the surgery consisted of 10 blepharoplasties, 18 bilateral acquired blepharoptoses, and 5 unilateral congenital blepharoptoses in 33 subjects without visible SPC, and 8 blepharoplasties and 25 bilateral acquired blepharoptoses in 33 subjects with visible natural SPC. The nonptotic eyelids in 5 subjects with unilateral blepharoptosis were evaluated.We first evaluated whether digital immobilization of eyebrow movement by pressing on the anterior surface of the supraorbital margin, during movement from loosely closed eyelids following tight eyelid closure with relaxing the frontalis muscle2 square scale and the size of retractors or forceps was significantly larger than that in the visible SPC group (0.88 \u00b1 0.45 mm) ( 0.0136) .Histological examination confirmed our macroscopic findings, whereby collagen fibers of the LTL in subjects without visible SPC a appeareMacro- and microscopic evidence obtained in the current study demonstrated that whereas subjects without visible SPC had a developed LTL behind the lower orbital septum in terms of not only the width of the lowest LTL and the number of LTLs a, those On the basis of our findings, it appears that variations in the LTL may determine the features of Yayoi migrants or the Jomon natives in the Japanese. A developed LTL between the superior-medial orbital rim and the lateral orbital rim on the lateral horn behind the lower orbital septum not only restricted eyelid opening and folding but also kept the orbital fat in a lower position. Subsequently, narrow eye, no visible SPC, and fullness of the upper eyelid ensued as specific features of the Yayoi migrants. They appeared to open the eyelid by not only the eyelid retraction but also the upward movement of the lateral canthus. On the contrary, an undeveloped LTL did not restrict eyelid opening and folding, resulting in the wide eye and visible SPC that are distinctive of the Jomon natives. Undeveloped LTL might allow the orbital fat to sink into the upper orbit like the Occidental eyelid . In the In subjects without visible SPC, because the developed LTL antagonizes opening and folding of the anterior lamella of the upper eyelid, these people open their eyelids not only by the retractile force of the levator aponeurotic expansions but also by lifting the eyebrow with the anterior lamella and developed LTL owing to increased tonic contraction of the frontalis muscle. Since their eyebrows had been lifted in normal primary gaze, the digital immobilization test in this study was performed during movement from loosely closed eyelids following tight eyelid closure with relaxing the frontalis muscle to eyelid opening for primary gaze. In subjects with visible SPC, because the undeveloped LTL did not antagonize opening and folding of the anterior lamella, they were able to open their eyelids by folding the anterior lamella on the SPC via the simple retractile force of the levator aponeurotic expansions.Our subjects who had no visible SPC and developed LTL may correspond to the Yayoi migrants with eyelid structures for cold tolerance. Because they always lift the eyebrows in primary gaze, their supraorbital margin may be high-positioned and round as a result of the lifting force by tonic contraction of the frontalis muscle, which mechanically presses on the supraorbital margin a. In conThe development of LTL restricts eyelid opening and folding and distinguishes Japanese as being with or without visible SPC, or rather being of Jomon native or Yayoi migrant ancestry. 
To compensate for the restriction of eyelid opening and folding in Japanese without SPC, tonic reflex contraction of the frontalis muscle to persistently lift the eyebrow with the anterior lamella of the upper eyelid and thick LTL serves as another eyelid-opening mechanism. From a surgical viewpoint, both the excision of LTLs behind the orbital septum and the creation of the functional SPC, on which the anterior lamella is folded, may reduce both tonic reflex contraction of the frontalis muscle and the eyebrow height. Further studies are needed on the differences in supraorbital margin shape between Japanese subjects without and with visible SPC, as well as on the relationship between LTL development and expansion of the levator aponeurosis to the pretarsal skin, both of which contribute to the formation of SPC."} +{"text": "Significant differences were observed between the glottic areas of the excised larynges in the initial state and following modified frontolateral partial laryngectomy with the cartilage closed. However, no significant differences were observed between the glottic areas of the excised larynx in the initial state and following modified frontolateral partial laryngectomy with the cartilage open. The glottic area of the larynges in vivo in the initial state and following right chordectomy via laryngofissure were not observed to be significantly different. Furthermore, no significant differences were observed between the glottic areas of the larynges in vivo in the initial state and following modified frontolateral partial laryngectomy without tracheotomy. In conclusion, modified frontolateral partial laryngectomy without tracheotomy is a feasible and efficacious means of eradicating early and selected invasive carcinomas of the larynx, which is supported by animal experiments.The aim of this study was to validate the feasibility of modified frontolateral partial laryngectomy without tracheotomy using animal experiments. The glottic area before and after surgery of 6 excised canine larynges and 10 canine larynges Male and female mongrel dogs of various ages, weighing 9\u201315 kg were used for the experiment. A total of 16 dogs were divided into 2 groups denoted The dogs were sacrificed and their larynges were excised for anatomical study and placed in appropriate individual plastic vessels. The supraglottic tissue was removed to expose the true vocal folds. Stepwise procedures were performed and, after each step, the vocal folds were photographed from a superior perspective using a grid placed over the vocal folds to measure the glottal area.Photographic images of the glottis were captured prior to performing any procedures on the larynx. The thyroid laminae were then incised vertically 2\u20133 mm posterior to the anterior commissure and removed. In all larynges, \u223c20% of the laryngeal cavity was excised. The incisal margins of the vocal folds were sutured to the ipsilateral thyroid perichondrium and the glottis was then photographed. Finally, the incisal edges of the thyroid laminae were sutured together to reconstruct the anterior commissure. The glottis was then photographed for the third time.The laryngeal cavity was isolated and adequately exposed. The epiglottis was captured and pulled forward and upward after the mucosa of the epiglottic vallecula was cut open. The mucosa of the lateral pharyngeal walls was cut to pull the whole larynx out. 
Photographic images of the glottis were then captured at the maximum phase to measure the maximum phase area of the untreated glottis. Right cordectomy was performed via laryngofissure and the incisal margins of the thyroid cartilage were sutured together; the glottis was then photographed again at maximum phase. Subsequently, modified frontolateral partial laryngectomy was performed according to the procedure described previously ,6 and thex vivo group were processed using AutoCAD2004 image software and the following data were obtained: i) area of the untreated glottis of the ex vivo larynx; ii) area of the glottis of the ex vivo larynx with the incisal margins of the thyroid cartilage sutured following frontolateral partial laryngectomy; and iii) area of the glottis of the ex vivo larynx following frontolateral partial laryngectomy with the incisal margins of the anterior commissure sutured to the homolateral perichondrium of the thyroid cartilage.Images of the in vivo group were processed using AutoCAD2004 image software and the following data were obtained: i) the maximum phase area of the untreated glottis of the in vivo larynx; ii) the maximum phase area of the glottis of the in vivo larynx following right cordectomy via laryngofissure; and iii) the maximum phase area of the glottis of the in vivo larynx following modified frontolateral partial laryngectomy.Images of the ex vivo experiments were compared by Student\u2019s t-test, since the data were normally distributed. The data of the in vivo group was compared using ANOVA and Newman Keuls test.The data of ex vivo glottises were stationary and the areas were relatively constant without the effect of respiration and thus simple to measure. The distance and angle of the scales may greatly affect measurements of the glottic area. Therefore, in the present study the scales were placed strictly on the same plane as the vocal cords and photographed at a distance of 30 cm to provide accurate measurements.The The processing of all images was completed using AutoCAD2004 image software. An image of the primitive glottis is shown in ex vivo group are shown in ex vivo group, attempts were made to confirm whether changing the shape of the glottis (with the front part resected) to a trapezoid by suturing the sternohyoid muscle to the laryngeal lumen, significantly decreases the area of the glottis. The statistical data above show that area of the glottis was enlarged in certain cases by changing its shape to a trapezoid.The experimental results of the A glottis following modified frontolateral partial laryngectomy is shown in in vivo group are shown in The experimental results of the Conventionally, laryngofissure and cordectomy have been the primary means of eradicating early and selected invasive glottic squamous cell carcinomas . A numbeet al(et al(Tracheotomy is routinely performed in patients undergoing partial laryngectomy due to the high risk of postoperative dyspnea originating from a narrowed laryngeal lumen or laryngeal edema. Postoperative care for the tracheotomy is burdensome for the patient and provider and is associated with prolonged hospitalization and significant morbidity. Even temporary tracheotomy is associated with increased complication rates, suggesting that prophylactic tracheotomy at the time of surgery is less than ideal. 
Although Muscatello et al and Wolfet al have rep al(et al reported al(et al.The feasibility and effectiveness of the modified frontolateral partial laryngectomy are functions of the anatomical association between the laryngeal lumen and its surrounding structures. The interior of the larynx in cross-section is triangular due to the contour of the thyroid cartilage. Geometrically, the area of a triangle is less than that of a trapezoid or rectangle of equivalent base and height. In the present approach, we aimed to transform the natural triangular contour into a trapezoid to achieve a greater cross-sectional area. To achieve this, the sternohyoid muscle was sutured to the laryngeal lumen and the thyroid lamina was vertically incised, resulting in abduction of the anterior part of the thyroid cartilage. In addition, the muscular fascia was retroflexed and sutured to the contra-lateral side, bringing the larynx into the desired trapezoidal or rectangular conformation. To expand the breadth of the laryngeal cavity, the sternohyoid fascia was reverted, covering the anterior larynx and completing the ladder-shaped lumen. Although the anteroposterior diameter of the neolarynx was decreased, the cross-sectional area was sufficiently enlarged to allow normal respiration, even in the absence of an endotracheal tube.In the present study, on the basis of clinical study, animal model establishment and computer technology, it was demonstrated that expanding the anterior end of the laryngeal cavity and changing the shape from the original triangle into a trapezoid with equal bottom length and height was able to considerably increase the effective respiratory area. This result demonstrates the theoretical basis of modified frontolateral partial laryngectomy and validates its efficacy and feasibility.In conclusion, these animal experiments demonstrated the feasibility of modified frontolateral partial laryngectomy without tracheotomy. The present data indicate that it is a safe and reliable method for excising the anterior 20% of the vocal cord and thyroid cartilage without the necessity of tracheotomy. This procedure, therefore, represents a new, less invasive technique for the treatment of glottic squamous cell carcinoma."} +{"text": "Thyone, and the horseshoe crab Limulus. Any realistic future theories for filopodium stability are likely to rely on an accurate treatment of such steric effects, as analysed in this work.Filopodia are long, thin protrusions formed when bundles of fibers grow outwardly from a cell surface while remaining closed in a membrane tube. We study the subtle issue of the mechanical stability of such filopodia and how this depends on the deformation of the membrane that arises when the fiber bundle adopts a helical configuration. We calculate the ground state conformation of such filopodia, taking into account the steric interaction between the membrane and the enclosed semiflexible fiber bundle. For typical filopodia we find that a minimum number of fibers is required for filopodium stability. Our calculation elucidates how experimentally observed filopodia can obviate the classical Euler buckling condition and remain stable up to several tens of ThyoneFilopodia are formed by the growth of bundles of biological fibers outwards from a biological cell surface that remain enclosed in a membrane tube. 
They are implicated in many processes vital to life, including sensing and motility In this work, we investigate the stability of filopodia, which involves the subtle interplay between a fluid membrane tube, and an enclosed semiflexible fiber bundle. The simplest physical picture of filopodia is one in which the membrane tube produces a longitudinal force and a transverse force on the enclosed fiber bundle. The longitudinal membrane force acts to try and shorten the end-to-end distance of the fiber bundle, while the transverse force is required to maintain fiber bundle enclosure. The energetics required to investigate the stability of filopodia thus necessitates us to consider the elasticity of both the membrane tube as well as the fiber bundle, subject to the constraint that the polymer bundle must remain enclosed by the membrane tube. The energetic ground state conformations of filopodia thus necessitate a careful theoretical treatment of both elastic and steric considerations. For example, one might ask if a filopodium ever buckles, or perhaps more intriguingly does the region of filopodium buckling exist in some small corner of a complicated energetic phase diagram, well outside the range of physiologically relevant parameters?A naive Euler buckling type estimate for the stability of filopodia that remained perfectly cylindrical, despite simulation snapshot evidence to the contrary In Typical experimental parameter values for biological membranes range from The ground-state configuration of a filopodium is determined by finding the minimum of the total energy per unit length Shown in We have calculated theoretically the ground state configurations of filopodia, and found \u2018islands of stability\u2019 for typical filopodia within physiologically relevant parameters. Our calculation elucidates how experimentally observed filopodia can obviate the classical Euler buckling condition and remain stable up to several tens of The work presented here differs from that presented in ThyoneLimulusExperimental observation of the results obtained in this work for the helical-like deformations of enclosing membrane tubes in filopodia would presumably be difficult. However, such helical membrane conformations are qualitatively supported by the snapshot pictures of simulation work carried out in We adopt a ground state approximation in which thermal fluctuations are assumed to be small. Since the amplitude of these fluctuations is small at the high tensions of interest to us here, perhaps a few nm or less without an enclosed stiff polymer has also recently been considered in Analogous steric constraints to those considered here are likely to be of relevance in other similar and important biological contexts, such as the packaging of semiflexible DNA in viral capsids, for example In order to describe the filament bundle, inside filopodia, we study the semi-flexible polymer Hamiltonian Any realistic deformation of the polymer must be able to pack a given contour length We have chosen to parameterise the polymer in terms of the In this way we can easily translate between the arc-length The polymer part In order to describe deformations of our membrane tube, we use:We parameterise our membrane given by The membrane contribution By inspection of Eqs. 
(2) and (6), we can see that the steric condition we need to apply to the membrane in order to guarantee polymer enclosure is given by: In order to find the ground-state configuration of our filopodium, we need to find the conformation which minimises the total energy. Putting the result of Eq. (10) into By inspection of the Fourier coefficients Utilising the inextensibility conditions outlined above, we can easily re-write"} +{"text": "Acute aortic dissection is one of the most dreaded clinical conditions during pregnancy. The limited experience reported in the literature does not allow the determination of guidelines for the surgical management of aortic dissection in these cases. In this case presentation we successfully treated a 27-year-old woman with Marfan syndrome in the 36th week of pregnancy with acute type A aortic dissection, who underwent aortic repair with the fetus remaining in situ. After a review of the data reported in the literature, we present this case of acute aortic dissection in a pregnant woman with Marfan syndrome and discuss some new perspectives on surgical management and maternal-fetal outcome, considering the peculiarities of this disease's manifestation. Surgery for acute aortic dissection during pregnancy has been described by other investigators and, in most cases, the fetal outcome was relatively poor. The review of the data suggests that, in cases of fetal maturity, Cesarean section should be performed before or in combination with aortic repair. However, the appropriate surgical management with an immature fetus in utero remains unclear. Cardiopulmonary bypass (CPB) with the fetus in utero may itself represent a risk factor, as studies have already demonstrated that hypothermia contributes to a worse prognosis. We present the case of a patient with Marfan syndrome in the 36th week of pregnancy who had acute type A aortic dissection and underwent operative repair with a modified Bentall-de Bono technique with the fetus remaining in situ; during CPB, high flow, high pressure, and mild hypothermia were used. 
The maternal-fetal outcome was excellent and an elective Cesarean delivery of the fetus could be done one week after the aortic repair, when the clinical condition of the mother was completely stabilized and the fetus presented better maturity. Despite the controversies over the surgical management of aortic dissection during pregnancy because of the lack of data, it seems to be possible, as established in this case presentation and with the development of CPB and surgical techniques, to perform the aortic repair with the fetus remaining in situ."} +{"text": "Maintenance of testicular temperature below body temperature is essential for the process of spermatogenesis. This process of thermoregulation is mainly achieved by the testicular veins through the pampiniform venous plexus of the testis by absorbing the heat conveyed by the testicular arteries. However, this mechanism of thermoregulation may be hampered if an abnormal communication exists between the testicular vessels. We report herewith a rare case of arteriovenous communication between the testicular artery and testicular vein on the left side. The calibre of the communicating vessel was almost similar to that of the left testicular artery. Such an abnormal communication may obstruct the flow of blood in the vein by causing impairment in the perfusion pressure, with the eventual high risk of varicocele. Testicular arteries are long, slender vessels originating from the abdominal aorta slightly below the origin of the renal arteries. Testicular arteries reach the deep inguinal ring and pass through the spermatic cord in the inguinal canal and finally enter the scrotum to supply the testis . VenulesMaintaining the suitable temperature for the crucial stages of spermatogenesis is a priority of the body. The specialised venous network in the form of the pampiniform plexus of the testicular veins allows countercurrent heat exchange with the testicular artery and maintains the thermoregulation. Occasionally there may be parallel collaterals to the gonadal veins at different locations of their extent . Very raDuring routine cadaveric dissection for undergraduate medical students, we observed a prominent communicating vascular channel between the testicular artery and testicular vein on the left side of a male adult cadaver aged about 65 years. The calibre of the communicating vessel was almost similar to that of the testicular artery. The communicating channel was found to be connecting the vessels between 1.5 and 2 inches from the commencement of the testicular artery and the termination of the testicular vein . The skiVariation in the testicular artery and vein is not uncommon. Studies show that testicular artery variations are more common on the right side as compared to the left side . CicekciThe present case is the second extremely rare report of arteriovenous communication between the left testicular artery and the vein. Only one case report, by Nayak et al., has been found in the literature regarding such a variation . There iEtiological factors of varicocele and male infertility are numerous, and in rare cases it is difficult for the clinician to pinpoint the exact cause of the disease. Testicular arteriovenous communication can be one of the etiological causes of the abovementioned diseases. Even though cases of testicular arteriovenous communication are rare, cases of male infertility due to oligospermia and varicocele should be investigated with the help of Doppler ultrasound and arteriogram to rule out any vascular anomaly such as described in the present case. 
During the procedure for varicocele embolisation to treat varicocele on left side, the catheter is passed through the renal vein and then into testicular vein. The catheter or probe during such procedure may accidently pass in the testicular arteriovenous communication reaching the testicular artery. Thus during left varicocele embolisation, presence of testicular arteriovenous communication should be noted.Arteriovenous communication of testicular vessels should be kept in mind by the clinicians and surgeons treating the varicocele and male infertility."} +{"text": "Assessment of the lateral wall thickness of the maxillary sinus is very important in decision making for many surgical interventions. The association between the thickness of the lateral wall of the maxillary sinus and the dental status is not well identified.To compare the thickness of the lateral wall of the maxillary sinus in individuals with and without teeth to determine if extraction of the teeth can lead to a significant reduction in the thickness of the maxillary sinus lateral wall or not.In a retrospective study on fifty patients with an edentulous space, the thickness of the lateral wall of the maxillary sinus,one centimeter above the sinus floor in the second premolar (P2), first molar (M1) and second molar (M2) areas was determined by cone beam computed tomography scans(CBCTs) and a digital ruler in Romexis F software (Planmeca Romexis 2.4.2.R) and it was compared with values measured in fifty dentated individuals. Three way analysis of variance was applied for comparison after confirmation of the normal distribution of data.The mean of the wall thickness in each of these points was lower in patients with edentulous spaces; however it was not significant. There was no association between gender and the thickness of the lateral wall of the maxillary sinus, but location was associated with different thicknesses.The differences in the thickness based on the location and dental status necessitates assessment of the wall thickness of the maxillary sinus in addition to the current evaluation of bone thickness between the sinus floor and the edentulous crest before maxillary sinus surgery. Assessment of the thickness of the lateral wall of the maxillary sinus is very important in decision making for many surgical interventions such as Caldwell-Luc surgery, Lefort I osteotomy, open sinus lift, facial and jaw bone fracture fixation and mini-screw insertion in orthodontics as well as the diagnosis of chronic sinusitis -5. It isWe designed a study to compare the thickness of the lateral wall of the maxillary sinus in individuals with and without teeth to determine if extraction of the teeth can lead to a significant reduction in the thickness of the maxillary sinus lateral wall or not. The number of years that had passed after teeth extraction was not considered in this study.In this retrospective study, we assessed the thickness of the lateral wall of the maxillary sinus one centimeter above the sinus floor by cone beam computed tomography scan (CBCT) in fifty patients with edentulous spaces who were candidates for dental implant placement (the edentulous space group) and we compared the results with CBCTs of fifty maxillofacial trauma patients who had no maxillary bone and teeth problem (the dentate group). This area was chosen because the majority of surgical procedures that need bone removal to get access inside the maxillary sinus or osteotomy cuts and osteosynthesis devices are all involved with this location. 
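A minimal sketch of the kind of site-wise comparison applied to such measurements is given below in Python: pairwise paired t-tests between the second premolar, first molar, and second molar sites with a Bonferroni correction. The thickness values are hypothetical placeholders, and the paired t-test stands in for the analysis of variance used in the study; none of the numbers are taken from the reported data.

from itertools import combinations
from scipy import stats

# Hypothetical lateral-wall thickness measurements (mm) for the same subjects at
# the three sites; placeholder values only, not data from this study.
thickness = {
    "P2": [1.4, 1.1, 1.3, 1.6, 1.2, 1.5],
    "M1": [1.8, 1.5, 1.7, 1.9, 1.6, 1.8],
    "M2": [1.0, 0.9, 1.1, 1.2, 1.0, 1.1],
}

pairs = list(combinations(thickness, 2))
alpha_corrected = 0.05 / len(pairs)  # Bonferroni correction for the 3 comparisons

for site_a, site_b in pairs:
    t_stat, p_value = stats.ttest_rel(thickness[site_a], thickness[site_b])
    verdict = "significant" if p_value < alpha_corrected else "not significant"
    print(f"{site_a} vs {site_b}: t = {t_stat:.2f}, p = {p_value:.4f} ({verdict})")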
The main inclusion criteria were the tooth absence and presence in the posterior maxilla in the coronal axis of CBCTs in the first and second groups, respectively. In addition, as the maxillary sinus fully develops in 15 year olds and age Multivariate analysis was used for comparison of lateral wall thickness of the maxillary sinus as the effects of gender and dental status were considered in the analysis. It showed that both of them and their interaction had no significant effect on the wall thickness (P>0.05). For comparison between the three bony locations , repeated measure ANOVA was used. There were significant differences between these three locations. Bonefferoni correction showed significant differences between all three pairwise locations (P<0.05). The thickest bone was thefirst molar region followed by the second premolar and finally the second molar . While the anatomy of the maxillary sinus septa is well-identified -10, therThe thickness of the maxillary sinushas previously showed to have association with the difficulty of sinus surgeries.The differences inthe thickness based on the location and dental status necessitates CBCT assessment of the wall thickness of the maxillary sinus in addition to the current evaluation of the bone thickness between the sinus floor and the edentulous crest or dental roots before sinus surgeries."} +{"text": "Mobile elements including retrotransposons are implicated in the organization of the epigenetic landscape, the progression of tumorigenesis, and the enhancement of genetic diversity. Despite the importance of repetitive and transposable elements these sequences are traditionally ignored in high-throughput sequencing analysis due to the technical difficulty of uniquely mapping reads from repeat DNA sequences. Here we report a new computational method for the analysis of repetitive elements from high-throughput sequencing datasets that accounts for all mapping reads. In our approach, we examine reads that map uniquely and to multiple locations of the genome using two separate strategies to determine a complete estimate of enrichment for repetitive elements. Included in our computational method is an output defined by reads per kilobase of repeat element per million mapped reads (similar to RPKM definition for the exon model) . The callt mouse . We comp [et al. , and dem"} +{"text": "Schwannomas are rarely seen on the sciatic nerve and can cause sciatica. In this case report we aimed to present an unusual location of schwannoma along sciatic nerve that causes sciatica. A 60-years-old-man was admitted to us with complaints of pain on his thigh and paresthesia on his foot. Radiography of the patient revealed a solitary lesion on the sciatic nerve. The lesion was excised and the symptoms resolved after surgery. Schwannomas are derived from Schwann cells of neuroectoderm. They serve for the formation of myelin sheaths of nerves that insulate nerve and facilitate the transmission of an impulse . It is aSciatica is defined as pain along the course of the sciatic nerve and its branches. Characteristically the patients report gluteal pain radiating down the posterior thigh and leg with paresthesia in the calf and foot along the route of the sciatic nerve . In this report we aimed to present a patient with the symptoms of sciatica for five years due to unrecognized eight-centimetre schwannoma of sciatic nerve at the sciatic notch of pelvis. 
A 60-years-old patient with long standing symptoms of pain and paresthesia was referred to algology and physical therapy departments for finding out the possible etiology as his symptoms of sciatica increased. His past medical history revealed that he had been treated with the diagnosis of lumbosacral degenerative pathology for a long period of time although he had irrelevant lumbosacral magnetic resonance imaging (MRI). He had been further investigated with electromyography (EMG) that revealed decrease peroneal and tibial motor and sensory nerve conduction velocity. Sural and superficial peroneal sensory action potentials were decreased. Pelvic MRI was taken for the possible lesion compressing the sciatic nerve which displayed the noncontrast enhancing mass on the sciatic nerve at the sciaitc notch section .Surgery was planned with the possible diagnosis of schwannoma or neurofibroma. An incision was made through the route of sciatic nerve and the nerve was explored till the sciatic notch beginning proximally. The soft tissue mass on the sciatic nerve was seen and removed from the nerve sheath . The patSciatica is most commonly caused by herniated disc compressing the nerve roots or lumbosacral degenerative pathology, although very infrequent entrapment of sciatic nerve along its course within the pelvis or the lower extremity due to heterotopic ossification, misplaced intramuscular injections, myofascial bands in the thigh, myositis ossificans of biceps muscle, posttraumatic or anticoagulant-induced hematomas, compartment syndrome, and bone and soft tissue tumors can cause sciatica \u20135. Schwannomas are common, slow-growing benign tumors of the sheath of peripheral nerves arising from the schwann cells and involvement of sciatic nerve is very rare. Most are solitary lesions and multiplicity or malignant transformation occurs very rarely , 7. Alth Pain not responding to rest or activity, sensory and motor dysfunction along the nerve distribution are the most common manifestations. Surgical intervention should be the treatment of choice in order to prevent neurological deficits and exclude the possibility of malignancy. Sciatic nerve schwannomas should be kept in mind as a causative factor of sciatica and magnetic resonance imaging of the sciatic nerve in case of suspicion especially in patients with irrelevant lumbosacral MRI is important for diagnosis."} +{"text": "Discrete structural rearrangements emerge as a series of abrupt discontinuities in stress-strain curves. We obtain the theoretical dependence of the yield stress on system size and geometry and elucidate the statistical properties of plastic deformation at such scales. Our results show that the absence of dislocation storage leads to crucial effects on the statistics of plastic events, ultimately affecting the universal scaling behavior observed at larger scales.Nanoindentation techniques recently developed to measure the mechanical response of crystals under external loading conditions reveal new phenomena upon decreasing sample size below the microscale. At small length scales, material resistance to irreversible deformation depends on sample morphology. Here we study the mechanisms of yield and plastic flow in inherently Over the past years, experimental investigations have gathered increasing evidence that plastic deformation of crystalline materials proceeds through intermittent bursts of activity small systems. 
We propose a simple geometry, which can be reproduced in experiments on two-dimensional micrometer colloidal crystals, and provide robust input for mechanical testing of crystalline thin films below the micrometer scale. Our aim is to show that at very small scales the irreversible deformation of materials proceeds in a novel way, deviating from the allegedly universal behavior observed in larger systems. To this end, we investigate the dependence of the yield stress on system size and geometry and the statistical properties of plastic deformation and energy dissipation in uniaxially compressed two-dimensional small crystals, by means of atomistic simulations and analytical modeling.While several nanoindentation techniques have been developed to measure material resistance to irreversible deformation and plastic flow, the experimental observation of plasticity at the nanoscale still represents an enormous challenge We first consider the compression of perfect crystals of various sizes and aspect ratios. Crystals are simulated as two-dimensional aggregates of short-range interacting monodisperse particles in their lowest energy configuration, that is a triangular lattice in the free see . Uniaxials as in . Particlwhere anes see . In dispThe coupled Eqs. (1) for In both protocols, the response is initially elastic. In a perfect crystal, the elastic limit is reached as soon as the motion of a pair of opposite sign edge dislocations is activated, as in By moving, dislocations allow the system to slip plastically and emit/dissipate part of the stored elastic energy. The value of both the yield stress dislocation starvation mechanism. According to the literature, in larger systems the dependence of the yield stress on the system size follows a power law The connection between the yield stress and the boundary effects can be easily visualized in our simple model system. Up to values of the stress very close to where Equation (3) establishes a connection between the yield stress and dislocation strain distribution. It also bears implicitly the information about the dependence of the yield stress on the system size and geometry.The essence of our problem then lies in the stress distribution that accounts for By means of elasticity theory, we can demonstrate that the stress fields produced inside the sample by an edge dislocation close to a rigid boundary are long ranged and decay as with boundary conditions such that the full where From the above result, the elastic strain tensor In the case of our simulations, however, Eq. (6) may seem of limited help in calculating the strain energy We can conclude that the configuration in As soon as the yield point is reached, the response of the system to further loading differentiates depending on the deformation protocol. Under conditions of displacement control, stress-strain curves are characterized by serrated yielding, while they assume a staircase shape under conditions of stress control. We emulate realistic realizations of compressed samples by introducing randomness at free boundaries as follows. The initial state of each realization is obtained from the perfect crystal by extracting a random number of particles from one free surface and relocating them at random positions on the opposing surface. In this way both the number of particles and the linear size of all simulated specimens are kept constant, while the morphologies of their free surfaces are allowed to vary stochastically. Strain plateaus ases see . 
Remarkaases see reveals Due to the limited system size, moving dislocations easily leave the sample through free boundaries. Pioneering studies have shown that in sub-micrometer Ni samples, pure mechanical loading can induce dislocation depletion within the sample Plastic event sizes are commonly quantified in experiments by looking at the amount of energy Here the dissipated energy Let us first consider the case of force control. If focusing on a single platen displacement event, we have Under displacement control conditions, instead, the statistical analysis of plastic flow and dissipated energy can be performed by looking at the distribution of stress drops . In Fig.Compared to numerical studies of size effects in dislocation dynamics at the microscale As for the origin of the anomalous avalanche exponents, we should remark that the novel behavior is related to the inability of the system to store large numbers of dislocations. In the absence of collective behavior, plastic flow departs from the traditional picture of cooperative dislocation organization. In fact our simulations show a behavior which approximately recalls a sequence of load-unload events, much in the spirit of stick-slip dynamics or fracture/failure mechanics. We notice that an energy release exponent very close to universal mean-field behavior observed at larger scales. Our results are thus a significant example of source-limited deformationIn conclusion, we have shown that the onset of plasticity at small scales is mediated by few dislocations. The number and arrangement of nucleated dislocations must account for the distribution of stress stored inside the crystal during the elastic-loading regime, allowing one to estimate the dependence of the yield stress on sample size and geometry. Our results confirm that both size and shape are crucial factors in determining the strength of materials at these scales. We find that plastic flow occurs in an intermittent manner reminiscent of irreversible deformation at larger length scales. Plastic avalanches of broadly distributed sizes are still observed, however, the absence of dislocation storage has important effects on the scaling characteristics of viscoplastic dynamics, which ultimately violate the Video S1The first appearance of a dislocation pair signals the onset of yield for the perfect system. As soon as each dislocation reaches the opposite rigid boundary, plastic activity stops and the the first event is over. The animation consists of 7 snapshots of the dynamics. Top: dislocations are represented as pairs of 5- and 7-coordinated particles, in blue and red respectively. Bottom: velocity field corresponding to the dislocation configuration above. The modulus of the velocity vector is represented. Lower velocities are in red, higher in violet, according to the color scheme of the visible spectrum.(MOV)Click here for additional data file.Video S2Time evolution of the velocity field at the time-steps shown in Video 1. Moving dislocations trigger particle motion and thus elastic energy dissipation.(MOV)Click here for additional data file.Video S3Dislocation dynamics in the flow regime. Plastic events correspond to the activation of few dislocations at a time. Dislocation storage is not observed for such small systems. The animation consists of 20 snapshots of the dynamics. Conventions are as in Video 1.(MOV)Click here for additional data file."} +{"text": "After publication of this work , we noteThe authors declare that they have no competing interests. 
GF declares no competing interests.RP did the primary data analysis and prepare the initial and subsequent versions of the manuscript; SA supervised the data analysis and preparation of the results table and commented on versions of the manuscripts. GF supervised the data analysis and statistical evaluation; KM conceived the study, reviewed the data and draft versions of the manuscript."} +{"text": "In the Funding Statement, an organization providing funding to the last author (HDO) is incorrectly omitted. The Funding Statement should read:\"This work was supported by a grant by the Bundesministerium f\u00fcr Bildung und Forschung and the LOEWE excellence initiative of the State of Hesse within the framework of the Cluster for Integrative Fungal Research (IPF) to HDO. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\""} +{"text": "Although the maximum human life span of 122 years is well established, the genetic and biochemical changes that influence the ability to reach old age in good physical and mental health are not very well understood. Both multiple environmental (~70%) and genetic (~30%) factors seem to play a role in attaining longevity [Clues to the role of FOXO3A in controlling longevity may be available through the comparative study of organisms which show no sign of aging. One of the very few examples of animals which appear to be truly immortal is the freshwater polyp Hydra . Tissue With technological advances including the development of genomic resources and noveIn the new study , a literThe findings have captured the imagination of the popular press, and raised the skeptic's eyebrows. What lessons can actually be learned from the Hydra study? What does this mean for understanding human longevity? First, the Hydra results have moved the longevity-enabling FOXO3A gene from reported association to possible functions, corroborating and extending beyond previous observations in C. elegans and Drosophila. Second, the link between FoxO and components of the innate immune system is of pa"} +{"text": "The gonadotropin-releasing hormone (GnRH) system is well known as the main regulator of reproductive physiology in vertebrates. It is also part of a network of brain structures and pathways that integrate information from the internal and external milieu and coordinate the adaptive behavioral and physiological responses to social and reproductive survival needs. In this paper we review the state of knowledge of the GnRH system in relation to the behavior, external, and internal factors that control reproduction in one of the oldest lineage of vertebrates, the lampreys. Neuro-endocrine integration is defined as the process whereby a neuro-endocrine system converts signals received from the external and internal media into adaptive physiological and/or behavioral changes . ReproduGeotria and Mordacia) can be found in the southern hemisphere as well. Lampreys (Petromyzontidae) and hagfishes (Mixini) are the only members of the monophyletic vertebrate group of Cyclostomes and the only agnathans surviving in the actual fauna. The overall body morphology of lampreys seems to have been surprisingly well conserved over the 400\u2013500 millions years of evolution as suggested by paleontological evidence factors known to influence lamprey reproductive behavior and physiology.Petromyzon marinus) and the Pacific lamprey (Entosphenus tridentatus). 
Sea lamprey is represented by two varieties: the landlocked lampreys in the Great Lakes and the anadromous variety in the Atlantic Ocean seaboard. Due to the extraordinary measures that were taken for the control of the lamprey population in the Great Lakes larSexual maturation in lampreys is initiated during the parasitic phase of its life cycle. Final maturation takes place during upstream migration. Three main stages were identified for the upstream migration of reproductively mature lamprey: (i) initial migration from the ocean or lake to coastal rivers usually during the winter that precedes spawning, (ii) pre-spawning animals swim upstream the river, and (iii) spawning stage when sexual organs are fully matured and which includes specific reproductive behaviors like nest building, nest fanning, inter-male aggression, and the spawning act itself.Among the environmental factors that influence the upstream migration and their behavior during the final spawning phase the most important are photoperiod and temperature. Hydrological factors may also impact migration , 13 althP. marinus. The relative genetic homogeneity of the anadromous Atlantic sea lamprey on the North American east coast also suggests that mature animals do not return to their home streams for spawning and landlocked sea lamprey (up to 10%) . This wenervus terminalis is present in a majority of species with the exception of some fish. The type 2 GnRH (or \u201cchicken\u201d GnRH) shows the widest pattern of distribution, an identical GnRH type 2 was found in all major Gnathostome lineages. Type 3 is characteristic to teleosts . Most sprminalis .Therefore, in a very simplified GnRH type on biological function mapping scheme GnRH 1 is the hypophysiotropic form, GnRH 2 is considered mostly a neuromodulatory peptide with role in behavior while the teleost GnRH 3 could play various roles as mentioned, many times being found in the telencephalic area where some evidence suggests that it plays an olfactory modulatory role. The GnRH type 2 peptides are very well conserved in all vertebrates species with the exception of some of the mammalian groups where this peptide is absent, as it is the case of rodents and some primates. Behavioral control function hypothesized to be played by midbrain GnRH 2 seems to be supplanted by other systems in these species. Interestingly, in monkeys as well as in humans both GnRH type 1 and GnRH type 2 are present and show hypothalamic distribution albeit in distinct neuronal populations. This and the evidence collected mainly in monkeys on their hypophysiotropic role and differential estrogen neuronal sensitivity suggests that both isoforms are components of the HPG axis in these species with functions tuned to respond to different physiological contexts . The mai2 respectively) are common with the Gnathostome GnRH. Tyr3 and Leu5 in lGnRH-I, Asp6 in lGnRH-III, and Glu6 in lGnRH-I as well as Phe or Lys in position 8 are characteristic to lamprey sequences in lamprey brains is much more restricted. Lamprey GnRH immunoreactive nerve fibers originate from cells in the arc-shaped hypothalamic/preoptic areas and end at the neurohypophysis forming the preoptico-hypophyseal GnRH tract , 26. Thenscripts . Presenchemistry . This ishemistry , 29.Gonadotropin-releasing hormone receptors are G-protein coupled receptors acting primarily through interaction with Gq/11 G-proteins followed by activation of the IP3 second messenger signal transduction pathway . 
ActivatIn vitro autoradiography followed by Scatchard analysis of binding of a mammalian GnRH analog to lamprey pituitary identified two classes of high affinity binding of the ligand. These binding sites were located in the proximal pars distalis and in a lesser extend in the rostral pars distalis of the pituitary. The labeled analog was displaced by the lamprey GnRH peptides I and III which suggested that the detected high affinity binding was due to the presence of at least two types of GnRH receptors expressed in the pituitary tissue of lamprey and a steroid receptor orthologs were demonstrated as well. Regulatory changes in the expression of these genes during spawning suggest that these steroid hormones might play a role in gonadal maturation and reproductive function in this animal. However, the existence of a central neuro-endocrine control mechanism in Cephalochordates is still in the stage of unconfirmed hypothesis as no GnRH or other gonadal steroidogenic factor originating in their central nervous system was found . Treatmein vitro release of estradiol from isolated lamprey gonads which demonstrates a direct action of GnRH on the reproductive organs in vitro. This effect strongly increases in co-culture systems where the gonads are exposed to GnRH treatment in the presence of pituitary tissue which convincingly supports the hypothesis of a hypophyseal gonadotropin-like factor with a GnRH-dependent release and having a steroidogenic effect on the gonads negative feedback \u2013 neuro-endocrine homeostatic mechanism of control of gonadal steroidogenesis, (2) positive feedback \u2013 has a physiologic role during the LH surge and ovulation in the estrous cycle of mammals, and (3) behavioral neuromodulators upon binding to a multitude of steroid sensitive brain areas. In female vertebrates gonadal steroid hormones regulate activity on the HPG axis via both positive and negative central feed-back mechanisms, these mechanisms being the most prominent and most studied during the estrous cycle of mammals .The neuromodulatory role of steroids depends on the spatial and temporal pattern of expression of different types of steroid receptors in the brain tissue. First evidence of steroid feed-back action in lamprey was provided by estrogen brain tissue autoradiography experiments . Early ein situ hybridization by Sower and Baron The first type of interaction can be described as a pheromone priming effect on GnRH secretion. Recent evidence suggests that chemosensory input induce upregulation of GnRH expression in the lamprey brain followed by the downstream activation of HPG pathway and increase in plasma steroid level . The pat(2)nervus terminalis and olfactory sensory epithelium: the neurons of the nervus terminalis ganglia were shown to express GnRH and send projections peripherally to the olfactory epithelium neurons that express GnRH receptors effect of GnRH on olfactory sensory pathways. In most vertebrates, this role is studied in connection with the GnRH system attached to eceptors . Centriftructure . In the hemistry . 
More rehemistry .In Gnathostomes there are two sides of the relationships between olfactory input and GnRH system, related to the hypophysiotropic versus non-hypophysiotropic (neuromodulatory) roles of GnRH: (1) in the first case the olfactory input with reproductive relevance modulates the secretion of the GnRH from the preoptic area of the hypothalamus and consequently the activity on the HPG axis and (2) non-hypophysiotropic GnRH regulates the receptivity to olfactory stimuli via centrifugal projections to the peripheral olfactory structures:In conclusion, the influence of pheromonal signals on lamprey behavior has been well established at different stages of its reproductive cycle and they also seem to induce priming effects on GnRH. The neural pathway that mediate these actions are largely unknown in both of cases. Moreover, no evidence of a non-hypophysiotropic, neuromodulatory role of lamprey GnRH on olfaction has been found to date.The key players in photoperiod neuro-endocrine regulation in vertebrates are the pineal gland and its hormone, melatonin. In adult mammals the pineal does not directly respond to light stimuli, this information if received at the central level from the retina via a specialized retinofugal pathways, the retinohypothalamic tract [see for example Ref. ]. This cIn lamprey both retinofugal projections to hypothalamus and optic tectum as well as retinopetal FMRF-amide immunoreactive projections connecting the visual centers in the brain with the eye have been described . SimilarLamprey behavior during the last, reproductive phase of their life cycle is sensitive to the dark/light cycle. Observations of animals during their migration and spawning suggest they swim upstream mostly at night while spawning takes place during the day . As descin vitro cultured lamprey pineal glands under different light and temperature regimes has shown that under an alternating light:dark regime at 20\u00b0C the melatonin secretion was completely inhibited during the light cycle and peaked in the dark and internal hormonal stimuli.Measurement of the melatonin release from the dark , 87. The125I]-melatonin was detected in the lamprey optic tectum and in the parvocellular and magnocellular cells of the preoptic nucleus (Photoperiod affects the neuro-endocrine reproductive axis at different levels [reviewed for example in Ref. ]. Intera nucleus . Co-loca nucleus . GnRH re nucleus . HoweverIn summary, light and temperature are important environmental factors that directly impact the behavior of the animals during their reproductive phase, possibly through mechanisms that involve the pineal and its melatonin secretion. The effect of these factors are sensitive to the combined influence of gonadal maturation stage and the presence of olfactory signals. As in many other cases in lamprey, there is only a limited understanding of how these interrelations are determined at the central level possibly through interplay between the melatonin and steroid sensitive brain pathways.Our goal was not to give an exhaustive presentation of the research in lamprey reproduction as there is not enough room for that in the space of a short review. Nor are we attempting to provide a comprehensive model for the control of reproduction in lamprey as this would be condemned to be highly speculative at this time given the rather fragmentary information available on molecular mechanisms that drive reproduction in this animal. 
Instead we adopted here a perspective on lamprey reproduction as an integrated system that in its most basic organization is an input/output system reflective to a certain extent of the primordial neuro-endocrine system that governed reproduction at the beginning of the vertebrate radiation. We briefly surveyed some of the most important channels through which the organism collects information about the internal and external media. This information is processed to generate an output represented by a sequence of stereotyped behaviors and stages of maturation of reproductive organs. The data accumulated to date suggests that lamprey share many of the neuro-endocrine characteristics of Gnathostomes like the existence of a HPG axis, the role of the sex steroid hormones, and the sensitivity of reproduction to light and temperature. These processes are coordinated at the central level by mechanisms that in lamprey are likely genetically encoded and inscribed into the morphological and chemical makeup of the brain during the embryonic and metamorphic development. Aside from the central role of the GnRH however, much less is understood from reproductive neurobiology of lamprey and of agnathans in general. There is an important body of information available on chemical neuroanatomy of lamprey brain, collected more recently through the efforts of research groups like Pombal in Spain or GrillThe authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Evidence accumulating the past decades indicates that human populations are exposed to environmental chemicals interfering with endocrine systems. There is growing concern of the potential adverse impact of exposure to such endocrine disrupting chemicals (EDCs) on human health, based on observations in wildlife, animal model systems and reports of yet unexplained increased incidences of hormone related disorders in humans. Exposure in fetal and neonatal age may be associated with developmental and reproductive disturbances due to interference with the programming of normal hormone signaling and metabolic pathways. Boys are conceived to be more vulnerable than girls due to their more complicated and androgen driven prenatal sex differentiation.Male prenatal sex differentiation is crucially dependent on the functioning of fetal Leydig cells. Exposure of these cells to EDCs at critical time windows of development may result in undermasculinization and disorder(s) of sex development (DSD), such as hypospadias, cryptorchidism and ambiguous genitalia. Furthermore, a novel trend of earlier start of puberty has been found during the past few decades, at least in some regions and mostly affecting girls. Among plausible causes behind this phenomenon it has been suggested that exposure to EDCs affecting the programming or triggering of the pubertal clock could result in premature activation of gonadotropin secretion. Another hypothesis is that gonadotropin independent effects of EDCs are associated with an earlier start of puberty.Proof of principle of such potential roles of EDCs has been obtained from a multitude of studies in experimental animals and gained some support by case studies, exposure data and epidemiological investigations in humans. 
This presentation will review recent data and ongoing studies on the mechanism(s) of action of EDCs, including effects of mixtures, on the hypothalamus- pituitary-gonadal axis in experimental animals and humans. The impact of genetic susceptibility on the degree of disrupting activity caused by certain EDCs will be discussed on the basis of ongoing animal studies."} +{"text": "Rhythmic activity in the brain has been known since Berger's discovery of the alpha rhythm in the 1920's. Numerous mechanisms have been proposed for various rhythms but in the past half-century no consensus has been reached on the mechanism of any major rhythm. The recent development of high-throughput imaging methods enable us for the first time to rigorously and quantitatively test ideas about the dynamics of brain rhythms.The aim of this project is to characterize the contributions of intrinsic dynamics of brain regions and network connections in generating the global dynamics of cortical activity using a mouse model.2 of mouse cortex by voltage-sensitive dyes, in both anesthetized and awake animals. We have instantiated current ideas about delta rhythm in models for the dynamics of such activity and we measure the fit of these models quantitatively in predicting this data, thus shedding light on the relative contributions of these processes in the delta rhythm. We include in the model intrinsic regional oscillations, thalamic input, and communication between cortical regions. We specify the form of the model and then estimate the parameters by fitting the dynamical behavior to high-resolution time series of cortical activity in mouse cortex. We try to estimate the relative contributions of each of the major components of the model to the fluctuations.We have generated high-resolution data on neural activity over 40 mmWe show that a potassium-current mechanism for intrinsic oscillations does in fact fit the data well, and we estimate some of the effective connectivity between different cortical regions under anesthesia."} +{"text": "Insects are able to find their mates from the sparse pheromone patches transported by turbulent air streams. This source-finding ability is remarkable because the pheromone patches don't point towards the source and because the temporal and concentration statistics encountered during the search are highly irregular. The knowledge of the coding dynamics at the single ORN level is presently incomplete and it seems mandatory to decipher these dynamics before further investigating the integration of the signal at higher levels of the olfactory system.A receptor that would attempt to infer the distance of a source from the detection dynamics of the pheromone patches would obviously benefit from the complete evolution of the pheromone concentration with time. Yet, it is possible to build a Bayesian model based on the known dynamics of the plume propagation and to iAgrotis ipsilon, stimulated by puffs of pheromone (cis-7-dodecenyl acetate) of variable duration delivered at constant concentration. The spiking responses of ORNs to single puffs varying over four decades of durations (1ms-10s) were recorded and decoded. In the absence of precise knowledge on the decoding mechanism inside the antennal lobe and in higher neural structures of the insect brain, 3 decoding schemes were utilized, based on logistic regression, support vector machine and signal dependent Poisson firing model designed to include bursting dynamics. 
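The abstract names three decoders (logistic regression, a support vector machine, and a signal-dependent Poisson firing model), but neither the recordings nor the feature construction are reproduced here. The Python sketch below only illustrates how binned spike counts could be passed to the first two of those decoders to classify puff duration; the synthetic response model, the number of bins, and all parameter values are illustrative assumptions, not the authors' data or pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)

# Synthetic stand-in for ORN responses: Poisson spike counts in consecutive time bins,
# with longer puffs producing longer-lasting firing (purely illustrative statistics).
durations = np.array([0.001, 0.01, 0.1, 1.0, 10.0])   # puff durations in seconds (four decades)
n_trials, n_bins = 60, 20
X, y = [], []
for label, d in enumerate(durations):
    for _ in range(n_trials):
        rate = 5.0 + 40.0 * np.exp(-np.arange(n_bins) / (2.0 + 10.0 * d))
        X.append(rng.poisson(rate))
        y.append(label)
X, y = np.array(X), np.array(y)

# Two of the decoders mentioned in the abstract, applied to binned spike counts.
for name, clf in [("logistic regression", LogisticRegression(max_iter=2000)),
                  ("linear SVM", SVC(kernel="linear"))]:
    acc = cross_val_score(make_pipeline(StandardScaler(), clf), X, y, cv=5).mean()
    print(f"{name}: {acc:.2f} decoding accuracy")
```

A signal-dependent Poisson (or bursting) decoder would instead fit a firing-rate model per duration class and classify by likelihood; that variant is omitted here for brevity.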
With this protocol the relative selectivity of ORNs coding with the variation of puff duration and thus the relative information transfer could be analyzed. Then, in order to investigate the possible coding dependency of the currently detected pheromone patch on previous patch detections, double puffs coding were investigated, similarly over four decades of durations. The evolution of the ORN coding showed that the detection of previous puffs influenced the coding of the current puff and hence that instantaneous decoding could bring information on the previous puff duration and timing. Furthermore, we quantified the relative sensitivity of the instantaneous coding with respect to the duration and spacing of the previous puff. Interestingly, we also show that the \"random spiking\" dynamics bears a non-negligible amount of information on the past patch dynamics detection.As a simplified framework, we investigated the dynamical olfactory coding of ORNs of a moth, The coding and decoding procedures were then evaluated on signals mimicking the temporal dynamics of patches in a turbulent stream. We found that training based on one patch and two patches allowed efficient and reliable decoding of patch dynamics in turbulent air streams. Finally, we quantified the amount of information encoded by ORNs about the duration, timing, and most importantly waiting time between consecutive pheromone puffs."} +{"text": "Ungulates like sheep and goats have, like many other mammalian species, two complementary olfactory systems. The relative role played by these two systems has long been of interest regarding the sensory control of social behavior. The study of ungulate social behavior could represent a complimentary alternative to rodent studies because they live in a more natural environment and their social behaviors depend heavily on olfaction. In addition, the relative size of the main olfactory bulb (MOB) [in comparison to the accessory olfactory bulb (AOB)] is more developed than in many other lissencephalic species like rodents. In this review, we present data showing a clear involvement of the main olfactory system in two well-characterized social situations under olfactory control in ungulates, namely maternal behavior and offspring recognition at birth and the reactivation of the gonadotropic axis of females exposed to males during the anestrous season. In conclusion, we discuss the apparent discrepancy between the absence of evidence for a role of the vomeronasal system in ungulate social behavior and the existence of a developed accessory olfactory system in these species. The sense of smell is of primary importance for social recognition among mammals and is mediated by the main and the accessory olfactory systems. These olfactory systems differ both in their organization and in their function. The main olfactory system is involved in the processing of volatile odors detected at the level of the main olfactory epithelium in the nasal cavity. Sensory neurons send axons to glomerular cell layer of the main olfactory bulb (MOB) where they synapse with dendrites of mitral and tufted cells. 
The olfactory information is then conveyed to several primary olfactory structures including the anterior olfactory nucleus, the olfactory tubercle, the piriform cortex, the posterolateral cortical amygdala, or the entorhinal cortex and we then discuss the general involvement of both systems in the regulation of ungulates social behavior and in comparison to rodents.In sheep, the establishment of maternal behavior is under the major influence of amniotic fluids which cover the lamb at birth. An important shift toward amniotic fluid is observed across pregnancy and parturition: while repelled by amniotic fluid throughout pregnancy, ewes become highly attracted to amniotic fluid around parturition and the excitatory glutamate within the MOB. Once ewes establish a selective bond with their lambs after parturition, the odors of familiar lambs, but not those of unfamiliar ones, increase the release of both transmitters. Infusion of the GABAa receptor antagonist bicuculline in the MOB prevents lamb recognition once it has been formed. Therefore, it is hypothesized that the general increase of GABA refines the olfactory signal by inhibiting MOB mitral cells with the exception of those processing the odor of the familiar lamb.A dramatic increase of noradrenaline release also occurs during the learning of lamb odor without being in contact with a lamb has been performed. It showed an increase in Fos expression that was mainly restricted to the main olfactory processing regions, i.e., the MOB, the piriform cortex, the frontal medial cortex and the orbitofrontal cortex in females exposed to lambs activation around parturition Keller, and thatIn ungulates, the introduction of a male among seasonally anoestrous females results in activation of LH secretion (short-term response) leading later to ovulation and sexual receptivity (long-term response; Delgadillo et al., The chemosignal responsible for the reactivation of the female gonadotropic axis seems to be a mixture of various compounds which have only been partially identified. It has been shown that the biological activity of the chemosignal requires the simultaneous presence of compound retained in both acid and neutral fractions (Cohen-Tannoudji et al., The role of the main and the accessory olfactory systems have been evaluated through lesioning or inactivating different regions of the main or the accessory olfactory pathways. Destruction of the main olfactory epithelium through intranasal administration of zinc sulfate or inactivation of the cortical nucleus of the amygdala by infusion of the anaesthetic lidocaine completely blocks the neuroendocrine response to ram odor (Gelez and Fabre-Nys, Correlational studies using Fos immunocytochemistry to reveal central activations triggered by male odors support the view that the main olfactory system primarily conveys the male odor. 
In sheep, when comparing groups exposed to a male, male fleece, female fleece or no odor, the male or its odor significantly increases Fos expression in the main olfactory system, especially the MOB and the cortical nucleus of the amygdala (Gelez and Fabre-Nys, In goats, neuroendocrine activations by male odor involve the kisspeptin system in the arcuate nucleus of the hypothalamus (Hamada et al., In summary, it has been shown that in sheep, contrary to many other species, the main olfactory system is primarily involved in the processing of the olfactory signal emanating from the male and that mediates a physiological response, while the accessory olfactory system seems to be less engaged. This is in contrast to rodents, where pheromonal cues are usually processed by the vomeronasal system (Keller et al., The data presented in the context of maternal behavior as well as those related to the male effect clearly support a key role for the main olfactory system in ungulate social behavior. Evidence showing an involvement of the vomeronasal system is scarce and the only experiment claiming a role of the VNO in sheep offspring recognition has raised methodological concerns.However, ungulates are one of the animal taxa where a specific olfactory behavior that is thought to be dependent upon the vomeronasal system, namely flehmen response, is widely reported (Melese-d'Hospital and Hart, At the neuroanatomical level, the VNO, the AOB, and the \u201cvomeronasal\u201d amygdala have been identified and are quite well developed in many wild and farm ungulate species (Kratzing, As in other species, in sheep the chemosignals are thought to contact vomeronasal receptors in the VNO epithelium through a pumping mechanism (Meredith et al., The downstream organization of the vomeronasal system also seems to be perfectly functional, even if some slight differences with rodents can be noticed. First, the respective zone to zone projection from the apical and basal sensory epithelium of the VNO to the anterior and posterior part of the AOB, typical in rodents, is not present in adult sheep (Salazar et al., Finally, at the level of the central projections of the vomeronasal system, it seems that the vomeronasal amygdala of the sheep is as extensive as that of rodents (Jansen et al., In conclusion, the current knowledge on the regulation of social behavior by the accessory olfactory system leads to an apparent contradiction. Indeed, neuroanatomical characterizations suggest that despite a reduced relative size, the vomeronasal system seems to be perfectly functional. By contrast, the behavioral evidence regarding its function in social olfaction is scarce, therefore advocating for further investigations in this area. Particularly, the use of other behavioral situations than the ones explored so far could lead to a re-evaluation of the role of the vomeronasal olfaction in the control of social behavior in ungulates. For example, it is likely that vomeronasal olfaction could play a more developed role in wild ungulates such as antelopes or moose than in domesticated species (Deutsch and Nefdt, The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "The Gambia Hepatitis Intervention Study (GHIS) consisted in the progressive introduction of HBV plasma-derived vaccine in different zones of this African country during the period 1986-1990. 
The study was launched and coordinated by IARC and is one of the most effective examples of an intervention project that both substantially contributed to our knowledge and to the health of local populations. Similar intervention studies have been carried out in South-East Asia. The studies indicate that the natural history of HBV infection differs in different populations , having a direct relevance for the implementation of HBV vaccination programmes in various parts of the world. Recent estimations provided by GloboCa 2008 indicateAll the children vaccinated within the Gambia Hepatitis Intervention Study (GHIS) were registered, resulting in two cohorts of approximately 60,000 children, one of which received only the routine EPI vaccination and the other the HBV vaccine in addition. To determine the response to HBV vaccine and the persistence of vaccine-induced immunity, some 1000 children were recruited consecutively and these have been followed annually to assess their HBV serological status. A cross-sectional survey was carried out at the ages of 4 and 9 years of a similar number of unvaccinated children to determine the vaccine efficacy against HBV carriage status and infection.The vaccine efficacy shows an 84% protection against infection and 94% against HBV chronic carriage at 9 years of age , clearly show that exposure to aflatoxins affects the entire population and that exposure to both HBV infection and aflatoxins results in a prominent increase risk of developing HCC.In The Gambia the high incidence of HCC is also associated with exposure to the carcinogen aflatoxin Brasiticum. ComprehIn summary, these studies show that HBV vaccination programmes have been successfully implemented in various parts of the world. In addition, molecular epidemiological studies in The Gambia and in South-East Asia have clearly shown the relevance of other major risk factors, namely aflatoxin exposure , in the The author declare that has no competing financial or non-financial interest."} +{"text": "The TB screening algorithm used in the study to identify individuals with symptoms indicative of active tuberculosis is actually based on reporting ANY of three primary symptoms, not ALL three primary symptoms, thus the number of individuals with symptoms indicative of active tuberculosis should have been 79%, not 23%. This finding appears once in the Abstract, once in the Results, and once in the Discussion."} +{"text": "Hip arthroscopies are often used in the treatment of intra-articular hip injuries. Patient-reported outcomes (PRO) are an important parameter in evaluating treatment. It is unclear which PRO questionnaires are specifically available for hip arthroscopy patients. The aim of this systematic review was to investigate which PRO questionnaires are valid and reliable in the evaluation of patients undergoing hip arthroscopy.A search was conducted in Pubmed, Medline, CINAHL, the Cochrane Library, Pedro, EMBASE and Web of Science from 1931 to October 2010. Studies assessing the quality of PRO questionnaires in the evaluation of patients undergoing hip arthroscopy were included. The quality of the questionnaires was evaluated by the psychometric properties of the outcome measures. The quality of the articles investigating the questionnaires was assessed by the COSMIN list.Five articles identified three questionnaires; the Modified Harris Hip Score (MHHS), the Nonarthritic Hip Score (NAHS) and the Hip Outcome Score (HOS). 
The NAHS scored best on the content validity, whereas the HOS scored best on agreement, internal consistency, reliability and responsiveness. The quality of the articles describing the HOS scored highest. The NAHS is the best quality questionnaire. The articles describing the HOS are the best quality articles.This systematic review shows that there is no conclusive evidence for the use of a single patient-reported outcome questionnaire in the evaluation of patients undergoing hip arthroscopy. Based on available psychometric evidence we recommend using a combination of the NAHS and the HOS for patients undergoing hip arthroscopy. Hip arthroscopy is a relatively new procedure in the management of hip disorders ,2. It haThe number of hip arthroscopies is rising because of improvements in surgical technique and a better understanding of the pathology associated with the hip joint . TherefoA number of PRO questionnaires have been developed for individuals with hip pathology, especially osteoarthritis -14. The The aim of this systematic review was to investigate which PRO questionnaires are valid and reliable in the evaluation of patients undergoing hip arthroscopy.A systematic review was performed 1) to identify all PRO questionnaires used in the evaluation of patients undergoing hip arthroscopy 2) to evaluate the quality of these questionnaires based on their psychometric evidence 3) to determine the methodological quality of the studies into the psychometric evidence of these questionnaires.A health-related PRO questionnaire is a measurement of any aspect of a patient's health status that is directly assessed by the patient, thus without interpretation of the patient's responses by a physician or anyone else .Psychometric properties are part of psychometrics, which is the discipline concerned with the construction and validation of measurement instruments, such as questionnaires and tests . The psyA computerized literature search was performed using Pubmed, Medline, CINAHL (via EBSCO), the Cochrane Library, Pedro, EMBASE (via OVID) and Web of Science to identify relevant articles published between January 1931 and 1 October 2010. The search was conducted by two reviewers (NM and MT). The following terms were used:Hip AND arthroscopyHip AND arthroscopy AND questionnaires OR outcome assessment OR self assessment OR outcomeHip AND rehabilitation OR treatment AND questionnaires OR outcome assessment OR self assessment OR outcomeTerms were searched as key words or 'free-text' terms in all databases except for Pubmed in which they were searched as MESH terms. The reference lists of the retrieved articles were searched for more relevant studies. The search was completed with a separate search for the identified questionnaires as well as for authors of these questionnaires.The two reviewers (NM and MT) independently assessed all collected publications on title and abstract for possible inclusion in the study. All selected publications were retrieved in full and in- and exclusion criteria were applied by the two reviewers. Inclusion criteria are presented in Table Two assessment procedures were used to assess the quality of the identified questionnaires and the methodological quality of the articles describing the questionnaires.Terwee et al. developeNo overall score is calculated, but a conclusion is drawn based on the information of the properties combined with the aim of the questionnaire . 
This crThe methodological quality of the studies into the psychometric evidence of these questionnaires was determined by the Consensus-based Standards for the selection of health Measurement Instruments list (COSMIN) . This liThe total search identified 3025 articles. A total of 2971 articles were excluded based on title and abstract which left 54 articles that were read in full text. Of these 54 articles 49 were excluded based on the remaining exclusion criteria which left five articles to be included in the study with a total of 830 subjects. None of the articles used the same group of subjects for their data collection. An overview of the descriptive data of the articles is shown in Table Any disagreement between the two reviewers (NM and MT) was resolved by consensus.The psychometric properties per questionnaire are shown in Table The MHHS scored high on construct validity because it correlated well with the domains bodily pain and physical functions of the Short Form-36 (SF-36) . Some inThe quality of the articles investigating the psychometric evidence of the PRO questionnaires were rated by the COSMIN checklist and presented in Table This systematic review included five articles on hip arthroscopy using three different questionnaires . The MHHS is a modification of the Harris Hip Score which is an observer-administrated score . Potter The overall quality of the articles investigating the measurement properties as rated by the COSMIN list was fair to good. Remarkably, in most cases the generalisability per box was better than the quality of the assessed properties per article. Only one article scored excellent on hypothesis testing of the correlation between the MHHS and SF-36 . FurtherThe NAHS has been developed for a young population with orthopedic, non arthritic hip pain and not specifically for patients undergoing hip arthroscopy, like the HOS . TherefoThe COSMIN list we used in this review was recently developed. At present no other checklists for the assessment of articles on the methodological quality of questionnaires are available ,16,19. TThis systematic review shows that there is no conclusive evidence for the use of a single patient-reported outcome questionnaire in the evaluation of patients undergoing hip arthroscopy. A limitation of this study is the small number of studies that could be included. Based on available psychometric evidence we recommend using a combination of the NAHS and the HOS for patients undergoing hip arthroscopy. In order to provide a conclusive recommendation more studies on the validity and reliability of these questionnaires are warranted.The authors declare that they have no competing interests.MT participated in the design of the study, carried out the literature search, selection and evaluation of articles and writing of the review. RC participated in the design of the study and revising the manuscript. NM carried out the literature search, selection and evaluation of articles. EV participated in the design of the study and revising the manuscript. All authors read and approved the final manuscript.The pre-publication history for this paper can be accessed here:http://www.biomedcentral.com/1471-2474/12/117/prepubDefinitions and scoring criteria of the psychometric properties. Definitions and scoring criteria of the psychometric properties developed by Terwee et al. Note: Important for other authors in order to get a clear image of the research performed. 
Not important enough to be placed in manuscript.

Quality of the questionnaires based on psychometric properties, displayed by article. Note: Important for other authors in order to get a clear image of the research performed. Not important enough to be placed in manuscript."} +{"text": "Glomerular diseases including diabetic nephropathy are the leading cause of ESRD worldwide. Proteinuria, the hallmark of renal damage in glomerular diseases, is dependent on two main factors: the alteration of the glomerular filtration barrier and its three layers (endothelial cells, basement membrane, and podocytes), and the impairment of protein reabsorption by proximal tubular epithelial cells. In recent years there has been a great increase in knowledge of the molecular structure of the glomerular filtration barrier and of the molecular mechanisms involved in tubular reabsorption of proteins. In this special issue the structure of the glomerular filter and the importance of glomerular cross-talk in maintaining the integrity of the glomerular filtration barrier are reviewed by M. Menon et al. A. Tojo and S. Kinugasa reviewed glomerular permeability to albumin in normal and diseased states and presented their views on a possible mechanism of selective proteinuria in the nephrotic syndrome of minimal change disease. A. Zhang and S. Huang summarized several molecular defects responsible for dysfunction of the glomerular filtration barrier. J. E. Toblli et al. overviewed the alterations of glomerular endothelial cells, basement membrane and podocytes, the possible relationship between glomerular proteinuria and tubulointerstitial damage, and described older and more recent approaches to reduce proteinuria. The review of J. R. Machado et al. summarizes the most important molecules involved in the pathogenesis of nephrotic syndrome. Galactose-deficient IgA1 is the hallmark of IgA nephropathy; in a cohort of 40 pediatric patients with biopsy-proven IgAN, a research group from the Le Bonheur Children's Hospital found no association between albuminuria and galactose deficiency, a finding that may question the pathologic role of galactose deficiency in IgAN. The possible therapeutic efficacy of inhibition of the mammalian target of rapamycin (mTOR) in primary mesangioproliferative glomerulonephritis was discussed by H. Trimarchi et al., who suggested prospective clinical trials. B. Zhang and W. Shi reviewed the therapeutic effects of cyclosporine A (CsA) in glomerulonephritis and evaluated the data in support of a nonimmunologic antiproteinuric effect of CsA that depends on direct stabilization of the podocyte cytoskeleton. The review by A. Cohen-Bucay and G. Viswanathan discusses nine additional urine biomarkers that may offer better prediction of the course of diabetic kidney disease than urinary albumin. The authors call for further longitudinal studies to validate the clinical value of these biomarkers and to overcome the limitations of albuminuria. The great increase in knowledge of the molecular biology of the glomerular filtration barrier and of tubular protein reabsorption has not yet been matched by improvements in outcome prediction or by therapeutic advances.
There is a need for further studies for better understanding the pathogenesis of proteinuria and the clinical value of the different urine biomarkers in diagnosis and management of patients with chronic glomerular diseases.Finally special thanks to the authors and the reviewers for their efforts to provide to the nephrology community a concise up-to-date knowledge on the pathogenesis and management of proteinuric kidney disease.Claudio BazziOmran BakoushLoreto Gesualdo"} +{"text": "Environmental Policy and Household Behaviour is a collection of studies on household behavior in environment-related areas originated from a multidisciplinary research program carried out by four Swedish universities between 2003 and 2008 under the financial support of the Swedish Research Council for Environment, Agricultural Sciences and Spatial Planning, and the Swedish Environmental Protection Agency.The book\u2019s contributions can be best appreciated within the context of recent shifts in environmental policy. Over the past decade, much of the focus of environmental policy has moved away from industrial production as the main source of pollution and from the implementation of cleaner processes toward an integrated approach that considers the life-cycle environmental impacts of goods and services involving all actors along the product chain, including consumers. Environmental policy-making bodies have also recognized the importance of a better understanding of these decision-making processes and households within a broader social science framework.For environmental policy, then, the main contribution lies in the focus on incorporating consumption-based measures into environmental policy and management and promoting more sustainable patterns of consumption a better understanding of the role and drives of consumption. For household behavior, the main contribution lies in the focus on capturing a wider range of policy influences through the inclusion of noneconomic explanations of behavior\u2014from those grounded on noncognitive foundations of everyday life to those embedded in the collective, cultural, and social view of both choice and action.Overall, the book is a valuable source of information for both researchers and policy makers interested in or working on environmental issues that arise in household decisions, although it may not readily appeal to researchers engaged in theoretical or methodologically sophisticated empirical analyses. It can also serve as a supplementary text in an upper-level undergraduate course in environmental policy, particularly in interdisciplinary programs such as environmental studies.s that policies should facilitate comparison across the effects of different environmental activities. The book addresses the question of policy legitimacy, which requires that policies reflect society\u2019s core values and attitudes, and how financial rewards and punishments may be inconsistent (and thus viewed as lacking legitimacy) with a moral willingness to do the right thing.Through its comprehensive approach to household behavior, the book provides insight into the complexity of designing environmental policies to achieve environmentally sustainable lifestyles and offers guidelines as to how such a complexity can be simplified. 
It begins with a discussion of the potential tension between individual freedom of choice and environmental obligations and how such a conflict is avoided through market-based policy instruments that place environmental responsibility on individuals as consumers in the marketplace. It explores how households handle the complexities and challenges of making environmentally informed decisions through compensatory measures and simplifications. To help households, the authors suggestThe book provides a thorough review of four types of determinants of behavior within a broad framework of analysis. The discussion of the various types of determinants of behavior is most enlightening in bringing together insights about human behavior from many academic disciplines and fields and also encompassing linkages between different types of factors, particularly between attitudinal factors and contextual factors and habits. Hence, inner motivation to \u201cact green\u201d is not likely to result in green behavior if the perceived cost of sustainable behavior is high and/or habits of nonsustainable behavior are strong.In a series of case studies, the book analyzes the interplay of the four types of casual factors in specific environment-related household consumption areas: waste management, eco-consumption, and transportation. The three areas are cleverly selected to stress how policy challenges differ across areas as the relevance of the different casual factors and strength of interaction between them vary across different household activities. The specific reference to Sweden and the Swedish experience is most telling in illuminating another layer of complexity in the pursuit of sustainable consumption patterns that arises when nonenvironmental policies and other public initiatives are at odds with promoting environmental sustainable household behavior.Despite the relevance of combining elements about human nature and behavior from various social sciences, the book tends to overly discount, at least implicitly, the importance of economic motives and instruments. One of the three key implications summarized in the concluding chapter does speak of the need for policy packages as a potentially effective way to respond to morality-based motivations for green behavior while providing economic incentives. Nevertheless, the theoretical part of the book tends to emphasize the political, social, and psychological dimensions of household decision making, pointing to an excessive reliance on economic instruments; and the empirical part tends to overlook the impact of market-based policies even though such policies have received much attention, especially in the transport area, for their efficiency and their financial transparency.Environmental Policy and Household Behaviour makes a significant contribution to our understanding of the incentives and constraints households face when formulating judgments and making decisions. This understanding is a key requirement for the design of policies that are both effective and acceptable. The book does not offer specific policy prescriptions but suggests a framework that allows for the inclusion of a wider range of motives for environmentally responsible behavior and gives the opportunity of examining and rethinking the interaction between different types of motivation.In sum,"} +{"text": "Pyramidal neurons of the neocortex display a wide range of synchronous EEG rhythms, which arise from electric activity along the apical dendrites of neocortical pyramidal neurons. 
Here we present a theoretical description of oscillation frequency profiles along apical dendrites which exhibit resonance frequencies in the range of 10 to 100 Hz. The apical dendrite is modeled as a leaky coaxial cable coated with a dielectric, in which a series of compartments act as coupled electric circuits that gradually narrow the resonance profile. The tuning of the peak frequency is assumed to be controlled by the average amplitude of voltage-gated outward currents, which in turn are regulated by the subthreshold noise in the thousands of synaptic spines that are continuously bombarded by local circuits. The results of simulations confirmed the ability of the model both to tune the peak frequency in the 10–100 Hz range and to gradually narrow the resonance profile. Considerable additional narrowing of the resonance profile is provided by repeated looping through the apical dendrite via the corticothalamocortical circuit, which reduced the width of each resonance curve to approximately 1 Hz. Synaptic noise in the neural circuit is discussed in relation to the ways it can influence the narrowing process. The pyramidal neuron, with its relatively long dendrite that extends upward from the top of the soma, is the main excitatory neuron and the most numerous type of neuron found in the neocortex. The term resonance denotes the ability of a system to oscillate most strongly at a particular frequency. Electrical brain rhythms, particularly the synchronized oscillations generated by pyramidal neurons, are measured as electroencephalograms (EEGs) and are believed to be related to a wide variety of cognitive functions. Resonance activity apparently can arise from noise in neural networks in a variety of ways, for example through stochastic resonance, in which a moderate level of synaptic noise enhances subthreshold signals. Apparently, transient alterations in synaptic function on the dendrites of the waking animal are continually being initiated by local circuit background activity on dendritic spines, which produces subthreshold membrane activity. These alterations are produced by locally generated action potentials and by backpropagating action potentials. The purpose of the present paper is to increase our understanding of the mechanisms by which the dendritic membrane can become tuned to a specific narrow band of frequencies by describing a model of the transfer of electrical energy along the apical dendrite of a neocortical pyramidal neuron. As current pulses move down the apical dendrite, they are influenced by the conductive and capacitive elements of the anisotropic membrane.
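The model's own transfer-impedance expression is not reproduced in this text, so the short Python sketch below uses a generic, inductance-free band-pass stand-in (a first-order high-pass RC stage followed by a first-order low-pass RC stage) to illustrate the qualitative claims made above: a single lumped compartment can peak within the 10–100 Hz range, and raising the level of the outward (leak) conductance shifts the peak upward. The capacitance values and conductance levels are placeholders chosen so that the peaks land near 20, 40, 60, and 80 Hz; they are not parameters taken from the paper.

```python
import numpy as np

def bandpass_gain(f, tau_hp, tau_lp):
    """|H(f)| of a first-order high-pass RC stage followed by a first-order
    low-pass RC stage -- a generic, inductance-free band-pass response."""
    w = 2.0 * np.pi * f
    hp = (1j * w * tau_hp) / (1.0 + 1j * w * tau_hp)
    lp = 1.0 / (1.0 + 1j * w * tau_lp)
    return np.abs(hp * lp)

f = np.linspace(1.0, 200.0, 2000)         # frequency grid, Hz
c_m = 1.0e-6                               # "membrane" capacitance, F (placeholder value)
c_s = 2.5e-6                               # "surface" capacitance, F (placeholder value)

# Raising the outward (leak) conductance shortens both time constants and
# moves the resonance peak to higher frequencies.
for g_out in (2.0e-4, 4.0e-4, 6.0e-4, 8.0e-4):   # conductance in siemens (placeholder values)
    gain = bandpass_gain(f, tau_hp=c_s / g_out, tau_lp=c_m / g_out)
    print(f"g_out = {g_out:.0e} S -> peak near {f[np.argmax(gain)]:.0f} Hz")
```

The point of the stand-in is only that a peaked response can arise from capacitive and conductive elements alone, without an inductance, which is the claim the compartment model makes.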
The progressive narrowing of resonance in the present model of the neocortical apical dendrite is based on the surface impedance theory formulated by Delogne The vertical shaft of the typical human neocortical apical dendrite is modeled here as a series of 6 compartments, as shown in The input from the thalamic principal neuron to the first compartment is located at the distal region of the Layer 5 apical dendrite for all neocortical minicolumns except for minicolumns located in the primary sensory areas, where the input from the thalamic principal neuron is mainly in the mid region of the Layer 5 pyramidal neuron, but with some inputs at the distal region The instantaneous movement of charges is represented in Each circuit compartment contains an active voltage-gated outward conductance, The present formulation may be considered a variation of traditional cable theory The approach taken here modifies the circuit of the traditional cable model by replacing the inductance (L) element with a capacitive element that is directed along the outer surface, longitudinal to current, so that the circuit now contains two kinds of capacitive elements: the non-linear surface capacitance, rane see . Energy Delogne's theory The apical dendrite model we have proposed has a single peak value in the band-pass characteristic not because of any energy exchanges back and forth between inductance L and capacitance C as in electric circuit theory. A resonance or peak amplitude in the transfer impedance of the neuron is created by the way the electric energy divides between the surface and membrane capacitive and conductive elements as a function of frequency. At low frequencies, capacitance tends to block current flow and at high frequencies capacitance tends to act as a short.The horizontal capacitance For the single compartment, the derived steady-state complex transfer impedance is: where andThe parameters, Equation (1) contains expressions for two different capacitances: The dynamic characteristics of this network model can be specified by either its voltage transfer function, The sheath geometry for the classical leaky cable involves a cylindrical outer surface defined by a two-layer construction: a conducting layer on one side and a dielectric layer on the other side. Holes (channels) exist through the surface allowing current flow from inside the cable to outside or in the opposite direction. The surface impedance representation of the current and voltage boundary condition for a leaky cable is assumed to be approximately equivalent to the dendrite outer surface impedance boundary in the range of neuronal resonant frequencies of our paper by symmetry. The effect of attenuation or resistivity (conductivity) in Equation (7), is given by the propagation constant or wave number for the classical neuron cable model The surface transfer admittance derived from the Delogne leaky cable theory where Using the definition of the classical cable propagation constant, and the well-known formulas for the intracellular impedance, andandwhere The electrical theory of leaky cables applied here to the dendritic segment provides a way to consider the effect of coupling between longitudinal currents and radial membrane currents (across the membrane). 
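The propagation constant invoked above for the classical neuron cable model did not survive this extraction. For reference, the standard frequency-domain form for a passive cable, which is presumably what Equation (7) refers to, is:

```latex
% Standard propagation constant of a passive cable in the frequency domain
% (presumably the form referred to as Equation (7); the original equation is missing).
% r_i: axial resistance per unit length, g_m: membrane conductance per unit length,
% c_m: membrane capacitance per unit length, r_m = 1/g_m.
\gamma(\omega) = \sqrt{\, r_i \left( g_m + i\omega c_m \right)}
              = \frac{\sqrt{1 + i\omega \tau_m}}{\lambda},
\qquad \lambda = \sqrt{\frac{r_m}{r_i}}, \qquad \tau_m = r_m c_m .
```

At ω = 0 this reduces to 1/λ, the inverse of the familiar space constant, and both attenuation and phase shift grow with frequency.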
If we consider the charge per unit length on each of two concentric cylindrical coaxial capacitor surfaces, the per unit length capacitance of a coaxial cable is given by Equation (11) The many radial leak paths of the membrane containing high conductivity fluid greatly increase the membrane conductivity over that of the axon membrane. Although many kinds of conductances exist in the dendritic membrane tric see . The eletric see . Axon meTo observe the operation of the one-compartment circuit over the 10\u2013100 Hz range where most of the EEG measurements in the literature are located, we estimated the transfer impedances that showed peak values at 20, 40, 60, and 80 Hz. Using the circuit parameters assumed for the present model, we obtained maximum impedance amplitude values for these 4 frequency locations of 2.5369, .46009, .13223, and .04952 The four transfer impedance amplitude curves shown in where To illustrate the amplitude calculation of the impedance values shown in 0 Hz see .The corresponding phase angles calculated from Equation (1) are shown in When we insert into Equation (13) the same vely see .The impedance amplitude curves, shown in The impedance phase curves, shown in in vitro experiment with rats The general characteristics of the impedance amplitude and phase curves of the present model, shown in The amplitude curves shown in To achieve a narrow, peaked resonance curve, we extend the present single compartment model to a 6-compartment model, with the added feature that the pulse train is recycled repeatedly through the 6 compartments by way of the thalamus within the corticothalamic circuit. Both the single compartment and the looping multiple-compartment models assume that the average level of the radial conductance, Although transmembrane ion channels are distributed non-uniformly over a segment of dendritic membrane The magnitude of the impedance transfer functions of the cascade-loop model, plotted as percent of maximum voltage for 20, 40, 60, and 80 Hz peak frequencies, are shown in According to the model, the peak frequency of the impedance transfer function is shifted upward or downward by varying the level of the total output conductance, which in turn is influenced by the tonic voltage level maintained inside the dendrite by the subthreshold synaptic activity at the hundreds of local spines. The area of the resonance curve around the peak is narrowed by successive electrical operations in the compartments through which the pulse travels as it makes its way toward the soma. As more compartments are traversed the resonance peak becomes more sharply defined.Two aspects of resonance tuning are treated in this paper: (a) the selection of the peak frequency of the resonance curve, and (b) the sharpening of the resonance curve around the peak frequency.(a) Selective activity is indicated by the finding that the impedance transfer function exhibits a peak, which suggests that only one compartment of the apical dendrite model is needed to exhibit some degree of selective filtering over the total range of frequencies of an injected input. Although the peak in the single-compartment case is weakly defined here, the peak becomes more noticeably narrowed when the compartment operations are repeated through more compartments. Thus when the current containing a range of frequencies is passed to the next compartment, the voltage of some frequencies will be larger than that of other frequencies. 
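Equation (11), cited at the start of this passage for the per-unit-length capacitance of a coaxial geometry, is also missing from the extraction. The textbook result it presumably states is:

```latex
% Per-unit-length capacitance of a coaxial capacitor (presumably Equation (11)).
% a: inner-conductor radius, b: outer-sheath radius,
% \varepsilon: permittivity of the dielectric (here, the membrane) between them.
C' = \frac{2\pi\varepsilon}{\ln(b/a)} \qquad \left[\mathrm{F\,m^{-1}}\right]
```

For a thin dielectric shell, b/a is close to 1, the logarithm is small, and C' is correspondingly large.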
As mentioned in the previous section, the physical basis hypothesized for the selection of the (peak) resonance frequency is the outward conductance of potassium ions. A particular value of outward conductance is produced by a corresponding level of voltage on the inside surface of the membrane. This level of internal voltage is maintained by the ongoing subthreshold synaptic activity in local spines, which in turn is produced by pulses from axons of the local cortical circuitry. The local circuitry of the cortex is assumed to be noisy, owing to its complex circuitry and the firing characteristics of the constituent neurons. The source or sources that regulate the level of this background noise apparently are not yet known in detail. However, it would seem that the noise level for a specific minicolumn (or for a column cluster of minicolumns) should differ from the noise level of minicolumns serving other functions. For example, the ambient background noise level for minicolumns involved in processing the color red should exhibit a different noise level than the minicolumns involved in processing the square shape of an object. In this way the corticocortical circuits that connect Layer 2/3 pyramidal neurons of a group of minicolumns can function somewhat independently of the noise level of minicolumn local circuits, once the local noise level selects the resonance frequency of the participating pyramidal neurons. The amplitude of the noise of a particular minicolumn will, of course, exhibit a variance, and the size of this variance is expected to affect the sharpness of the resonance function.(b) Sharpening activity, shown in In the search for experimental evidence supporting the existence and function of surface capacitance, While it is recognized that noise is added to the resonance processing within the pyramidal neuron itself, we assume here that the level of that noise is small relative to the level of noise added to the resonance processing by synapses. The issue of synaptic noise in the present model can be separated into the noise produced in two different places in the cascade-loop model of resonance tuning: the ambient noise in the thousands of spines along the apical dendrite shaft which produce subthreshold activation of neighboring conductance channels , and the noise at the synapses in the corticothalamic circuit which enables output current pulses from the dendritic compartments to loop back to the apical dendrite. Many hundreds of spines dot the apical dendrite , as well as the synapses of the principal cell axon on the apical dendrite of the cortical pyramidal cell. Additional synapses to be considered contact the corticothalamic loop from other sources. They include the important synapses by which ascending afferent fibers contact the thalamus en route to the primary sensory cortical areas. For the higher cortical areas, both ascending fibers from lower areas and descending fibers from frontal cortical areas make synaptic contacts with the thalamus see . SynaptiIf the traditional view of noise is applied to the workings of the present model, then all of the synapses under consideration, especially the synapses in the corticothalamic circuit, produce a significant level of noise that tends to flatten the shape of the resonance curve. While every cycle of the corticothalamic circuit adds noise to flatten the shape of the profile of the resonance curve, the next cascade through the apical dendrite compartments narrows the shape of the profile of the resonance curve. 
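To make the narrowing-by-cascading argument concrete, the Python sketch below raises a single generic band-pass gain (an RC stand-in of the kind used earlier, not the paper's actual compartment circuit) to increasing powers, mimicking repeated passes through the six compartments and the corticothalamic loop, and reports the full width at half maximum of the resulting resonance. Noise injected on each loop is ignored, so this shows only the idealized narrowing limit; the quality factor and peak frequency are arbitrary placeholders.

```python
import numpy as np

def single_stage_gain(f, f_peak=40.0, q=0.9):
    """Normalized gain of one generic RC band-pass stage peaking at f_peak (Hz).
    Stand-in for a single dendritic compartment; parameters are illustrative."""
    x = f / f_peak
    hp = 1j * x * q / (1.0 + 1j * x * q)      # first-order high-pass section
    lp = 1.0 / (1.0 + 1j * x / q)             # first-order low-pass section
    g = np.abs(hp * lp)
    return g / g.max()

def fwhm(f, gain):
    """Full width at half maximum of a single-peaked gain curve."""
    above = f[gain >= 0.5 * gain.max()]
    return above.max() - above.min()

f = np.linspace(1.0, 200.0, 20000)
g1 = single_stage_gain(f)
for n_passes in (1, 6, 12, 24):               # e.g. 6 compartments x repeated thalamic loops
    print(f"{n_passes:>2} passes: FWHM ~ {fwhm(f, g1 ** n_passes):6.2f} Hz")
```

Each additional pass multiplies the gains, so off-peak frequencies are suppressed geometrically while the peak is renormalized, which is the sense in which more compartments or more loops sharpen the resonance.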
If the total noise in the corticothalamic loop is sufficiently large relative to the number of dendritic compartments, then the resonance curve cannot be narrowed to the maximum asymptotes shown in The purpose of the single compartment model simulation is to demonstrate resonance in the apical dendrite by generating four resonance curves, corresponding to peak resonant frequencies of 20, 40, 60, and 80 Hz. To simulate the operations of a segment of the Layer 5 apical dendrite, we employ here the assumed equivalent circuit in a single compartment see . The choTo calculate the transfer impedance, ment see , the expWe now consider the way that the multiple compartment model could tune an apical dendrite to narrow its resonance curve around a particular peak frequency. For the model, we select a segment of the apical dendrite 1200 The decrease in The excitatory input conductance to the nth compartment (At the end of the first compartment, which states that the transfer impedance of that compartment circuit, so that, in general,.The cascading operations across the A cascade through the 6 compartments produces an array of new impedances across the frequency range. These impedance values are sent back to the first compartment via the corticothalamic loop see , and pro"} +{"text": "The traditional surgical treatment for abdominal aortic aneurysm (AAA) is now well codified in vascular surgery with a perioperative mortality rate that has gradually declining in recent years. The introduction of endovascular techniques has led the indications for surgery to be reviewed. The results of surgery in patients over eighty are fairly well defined: it is encoded that age affects only a small part of the immediate results in terms of overall mortality, while it is a significant factor in increasing rates of perioperative major morbidity, in particular cardiac and respiratory diseases. The purpose of this article is to assess the octogenarian patients who are not candidates for treatment with endoprosthesis.The characteristics of the proximal neck, distal and size influence the feasibility of enablers and results of endovascular treatment. All patients who do not respect these features are handled by us with traditional surgery. The high prevalence of ischemic heart disease in patients with AAA is a major cause of morbidity and mortality in the surgical treatment of AAA .Literature data show that if coronary angiography is performed routinely in patients with AAA awaiting surgery, the frequency of hemodynamically significant stenosis varies from 46 to 75%. For the reasons that all patients, twice in our experience can not be treated by endovascular undergo coronary angiography before surgery for AAA.In our series, since 2002, we have operated on 560 patients of which 282 are of AAA ultra octogenarians.Of these, 224 have been processed with endoprosthesis and 58 to surgical repair of aortic graft. 100% of those operated with traditional surgery had a short proximal neck, in 70% with the involvement of a renal artery aneurysm. 42% of all patients had a coronary heart disease that was treated preoperatively with coronary stents. Among the octogenarians the operative mortality of patients undergoing surgical repair for AAA was 4%.The advent of endoprotesis has certainly improved the survival rate and morbidity of elderly patients with risk of AAA rupture. 
A selection of the patients, careful study of cardiac risk and treatment of coronary artery disease and carotid artery before surgery are prerequisites to reduce perioperative complications."} +{"text": "Attention Deficit/Hyperactivity Disorder (ADHD) is considered as a model of neuro-developmental cognitive function. ADHD research previously studied mainly males. A major biological distinction between the genders is the presence of a menstrual cycle, which is associated with variations in sex steroid hormone levels. There is a growing body of literature showing that sex hormones have the ability to regulate intracellular signaling systems that are thought to be abnormal in ADHD. Thus, it is conceivable to believe that this functional interaction between sex hormones and molecules involved with synaptic plasticity and neurotransmitter systems may be associated with some of the clinical characteristics of women with ADHD. In spite of the impact of sex hormones on major neurotransmitter systems of the brain in a variety of clinical settings, the menstrual cycle is usually entered to statistical analyses as a nuisance or controlled for by only testing male samples. Evaluation of brain structure, function and chemistry over the course of the menstrual cycle as well as across the lifespan of women is critical to understanding sex differences in both normal and aberrant mental function and behavior. The studies of ADHD in females suggest confusing and non-consistent conclusions. None of these studies examined the possible relationship between phase of the menstrual cycle, sex hormones levels and ADHD symptoms. The menstrual cycle should therefore be taken into consideration in future studies in the neurocognitive field since it offers a unique opportunity to understand whether and how subtle fluctuations of sex hormones and specific combinations of sex hormones influence neuronal circuits implicated in the cognitive regulation of emotional processing. The investigation of biological models involving the role of estrogen, progesterone, and other sex steroids has the potential to generate new and improved diagnostic and treatment strategies that could change the course of cognitive-behavioral disorders such as ADHD. Behavioral, biochemical, and physiological data in animals demonstrate that gonadal steroid hormones estrogen, progesterone and testosterone have an effect on behavior and modulate neuronal activity experience regular menstrual cycle from menarche to menopause, whereas the rest have irregular cycles , (unipolar) postpartum depression, perimenopausal depression, and bipolar disorder studies have investigated the impact of fluctuating sex hormone levels during the menstrual cycle on the interplay between emotion and cognition in healthy regularly cycling women. This lack of knowledge is remarkable, considering the evidence for major emotional disorders occurring specifically during normal hormonal swings in the lifespan of women. A recent review of the literature by As with many neurodevelopmental disorders, the prevalence of attention deficit/hyperactivity disorder (ADHD) differs in males and females , and external hormonal use . 
In spite of the impact of sex hormones on major neurotransmitter systems of the brain in a variety of clinical settings, the menstrual cycle is usually entered to statistical analyses as a nuisance regressor (Lonsdorf et al., We suggest that the menstrual cycle should be considered as a modulating factor when examining cognitive response to emotional information in women. Furthermore, with the introduction of sensitive tests to measure cognitive performance and imaging techniques to visualize brain morphology and study its neurochemistry, it is now becoming possible to carefully analyze cognitive performance in women by their menstrual cycle phase, or current hormonal status. This may lead to better understanding of the sex hormone impact on women\u2019s brain in health as well as in ADHD and may resolve the inconsistency of the findings in women with ADHD."} +{"text": "Fillet flaps are commonly used to cover skin defects after trauma or tumors, especially in the extremities -5.We report four patients operated in the last three years, for lesions of the ear, invading the post auricular skin and infiltrating into the scalp.On examination these ulcerating lesions had the clinical appearance of deeply infiltrating basal cell carcinoma. Surgery was performed under general anaesthesia.After resection was performed with 1 cm skin margins, down and including the temporal fascia, the clear anterior skin surface of the upper pole of the ear was freed by its internal cartilaginous skeleton. The obtained fillet flap of the homolateral upper ear pole was raised and reflected posteriorly to cover the wound completely or partially. When needed, a split skin graft was added to the reconstruction.Histology gave the confirmation of the diagnosis and indications on the radicality in the operated patients. The cosmetic result was satisfactory. Follow up at a mean 6 months showed no recurrence.Several possibilities for the coverage of scalp defects have been reported, includiWe describe the use of fillet of pinna flap to cover periauricular scalp defects. We are not aware of any report of this type of flap to cover defects following tumour resection.The main advantages of the fillet of pinna flap are its durable coverage, colour match of the skin and absence of a donor site defect. In our patients, flaps were composed of healthy tissue which otherwise would have been discarded in the resection. In our opinion, fillet of pinna flaps can be used to cover the periauricular scalp defects when the auricular skin is clear and not been invaded by the lesion."} +{"text": "The structure of the hydrogen bond network is a key element for understanding water's thermodynamic and kinetic anomalies. While ambient water is strongly believed to be a uniform, continuous hydrogen-bonded liquid, there is growing consensus that supercooled water is better described in terms of distinct domains with either a low-density ice-like structure or a high-density disordered one. We evidenced two distinct rotational mobilities of probe molecules in interstitial supercooled water of polycrystalline ice . Here we show that, by increasing the confinement of interstitial water, the mobility of probe molecules, surprisingly, increases. We argue that loose confinement allows the presence of ice-like regions in supercooled water, whereas a tighter confinement yields the suppression of this ordered fraction and leads to higher fluidity. 
Compelling evidence of the presence of ice-like regions is provided by the probe orientational entropy barrier which is set, through hydrogen bonding, by the configuration of the surrounding water molecules and yields a direct measure of the configurational entropy of the same. We find that, under loose confinement of supercooled water, the entropy barrier surmounted by the slower probe fraction exceeds that of equilibrium water by the melting entropy of ice, whereas no increase of the barrier is observed under stronger confinement. The lower limit of metastability of supercooled water is discussed. Several water anomalies with deep implications in biology, atmospheric phenomena, geology, and food technology are rooted in the supercooled liquid state The different viewpoints on supercooled water can be partitioned into two broad classes: mixture/interstitial models and distorted hydrogen bond or \u201ccontinuum\u201d models The above discussion pointed out that regions of ice-like supercooled water are expected by mixture models of water In parallel with several numerical studies, e.g. At ambient pressure the supercooled regime of water ranges between the glass transition temperature In polycrystalline ice liquid water is localized where three grain meet in channels, or veins, that generally extend along the whole length of the grain edge. Four veins meet in a node (pocket) at a four-grain intersection, thereby forming a sponge-like, interconnected network of veins known as the vein system. The network was evidenced by experiments Dimensional arguments lead to the conclusion that the volume fraction (f) of water with respect to ice in the vein system has the expression The experiments show that the size of the ice grains decreases by increasing the cooling rate From the above discussion it follows that the liquid fraction in polycrystalline ice close to One may resort to the different character of the vein size (controlled by the thermodynamics) and the grain size to control the degree of confinement of the liquid fraction in ice/water mixtures. Consider two ice-water mixtures with different polycrystallinity and equal temperature It is worth noting that increasing the confinement of water close to a hydrophilic surface like ice is equivalent to a pressure (or density) increase In a previous paper we reported evidence of two distinct rotational mobilities of probe molecules (spin probes) in interstitial supercooled water of polycrystalline ice The major conclusions of the study, which is presented and discussed below, are:the S fraction of the spin probes is embedded in regions of QRW water with ice-like structure ,the ice-like environment is suppressed in the liquid fraction of SC ice/water mixtures .We studied the rotational motion of the polar nitroxide molecule TEMPOL (spin probe) in the interstitial liquid water of polycrystalline ice by using the Electron Spin Resonance (ESR) spectroscopy This finding is consistent with the stronger water confinement in the SC sample than in the QRW sample \u2013 where the lineshape collapse was never observed \u2013 combined with the shrinkage of the reservoirs where TEMPOL is trapped when departing from the melting point faster the reorientation, the narrower the line. 
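The entropy-barrier argument made earlier in this passage, that the slower probe fraction faces an extra orientational entropy barrier equal to the melting entropy of ice, can be given a rough numerical feel. The sketch below assumes an Eyring-like picture in which, at a fixed enthalpy barrier, the rotational correlation time scales as exp(dS_barrier/R); mapping the "entropy barrier" of the text onto this factor is an interpretive assumption, whereas the fusion data for ice are standard values.

```python
import math

# Standard thermodynamic data for the melting of ice.
dH_fus = 6010.0      # J/mol, enthalpy of fusion
T_m = 273.15         # K, melting temperature
R = 8.314            # J/(mol K)

dS_m = dH_fus / T_m  # melting entropy of ice, about 22 J/(mol K)

# Eyring-like picture at a fixed enthalpy barrier: tau ~ exp(dS_barrier / R),
# so an extra entropic barrier equal to dS_m multiplies the rotational
# correlation time of the probe by exp(dS_m / R).
slowdown = math.exp(dS_m / R)

print(f"melting entropy of ice: {dS_m:.1f} J/(mol K)")
print(f"slow/fast correlation-time ratio expected from the extra barrier: {slowdown:.0f}x")
```

A separation of roughly an order of magnitude between the two correlation times is the kind of difference that can appear as two distinguishable components in the ESR lineshape.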
Inspection of faster in SC water below higher temperature Apart from the previous case, the ESR lineshapes in one adjustable parameter describes its reorientation in a given environment no signature of ice melting has been detected in QRW water while crossing To gain more quantitative insight, we fitted the ESR lineshape of TEMPOL by using the numerical methods detailed elsewhere faster than in QRW water (as hinted by is same activated process as that of the equilibrium region (is driven by the not show the signature of the fragile-to-strong dynamic crossover (FSC) temperature at does Since TEMPOL links up with the HB network of water average environment, denoted by FS were prepared by direct exposition to liquid helium and slow (S) fractions of TEMPOL, with weights"} +{"text": "Distal humerus fractures may range from a simple extra-articular fracture pattern to a complex pattern, involving extensive destruction of the articular surface with comminution and bone loss in the metaphyseal-diaphyseal junction of the bone.The management decision in the younger patient requires complete anatomical reconstruction and early active range of motion. Different approaches to the distal humerus will be discussed, as will be different fixation methods. I will also present relevant clinical cases and outline the known outcomes of these fractures.In the elderly with osteoporotic fractures especially those with preexisting arthritis, there is an emerging trend for primary total elbow replacement. The outcomes of such replacement have been shown to be comparable to those of total elbow replacement in arthritis. However, there have been a few complications reported in the literature and this method is best reserved for complex comminuted distal humerus fractures in patients with significant osteoporosis and those above the age of 70.I will also discuss complex fracture dislocations of the elbow joint in terms of the management decisions and outcomes of these difficult problems."} +{"text": "Microstructural disturbances develop in the wall of vessels in patients with mild traumatic brain injuries (TBI).The purpose of our research was to detect the regularities of changes of hemodynamics and reactivity of cerebral circulation in patients with consequences of mild traumatic brain injuries by using the method of Doppler ultrasonography with hyper-and hypocapnic tests.We examined 62 patients with consequences of mild TBI. The control group consisted of 20 healthy people. Doppler ultrasonography with hyper-and hypocapnic tests were carried out according to the standard method.Conducting Doppler ultrasonography with hyper-and hypocapnic tests in the patients with consequences of TBI we detected that coefficient of reactivity was 1.11\u00b10.07. The insignificant asymmetry of bloodflow which increased after hyper- and hypocapnia was observed in the patients (82%). An early development of atherosclerotic lesion of cerebral vessels in the form of hemodynamic insignificant atherosclerotic plaques of the walls of cerebral vessels was observed (73%). We detected paradoxical reaction of cerebral vessels in 40 patients when hyper-and hypocapnic tests were being held.The data obtained confirm the changes of reactivity of cerebral vessels towards the decrease of linear velocity of bloodflow in the patients with consequences of mild TBA. 
The detected paradoxical reaction of vessels which was observed when we used hyper-and hypocapnic tests, is the consequence of the decrease of elasticity of vessels walls as a result of microstructural disturbances of vessels, an early development of atherosclerotic changes in vessels and the manifestation of a long-term angiospasm.No conflict of interest."} +{"text": "Plaster casts of individual patients are important for orthodontic specialists during the treatment process and their analysis is still a standard diagnostical tool. But the growing capabilities of information technology enable their replacement by digital models obtained by complex scanning systems.This paper presents the possibility of using a digital camera as a simple instrument to obtain the set of digital images for analysis and evaluation of the treatment using appropriate mathematical tools of image processing. The methods studied in this paper include the segmentation of overlapping dental bodies and the use of different illumination sources to increase the reliability of the separation process. The circular Hough transform, region growing with multiple seed points, and the convex hull detection method are applied to the segmentation of orthodontic plaster cast images to identify dental arch objects and their sizes.The proposed algorithm presents the methodology of improving the accuracy of segmentation of dental arch components using combined illumination sources. Dental arch parameters and distances between the canines and premolars for different segmentation methods were used as a measure to compare the results obtained. A new method of segmentation of overlapping dental arch components using digital records of illuminated plaster casts provides information with the precision required for orthodontic treatment. The distance between corresponding teeth was evaluated with a mean error of 1.38% and the Dice similarity coefficient of the evaluated dental bodies boundaries reached 0.9436 with a false positive rate In the fields of orthodontics and dentofacial orthopaedics, the optimal timing with regard to the patient\u2019s age and skeletal maturity is just as important as identification of the most appropriate treatment process. Depending on his or her actual age, it is critical to identify the growth periods that provide an opportunity to correct the existing dentofacial irregularities while minimizing the potential risks of the orthopedics intervention using dental arch analysis. Multidisciplinary dental care and therAlthough dental casts have been used for diagnosis and treatment planning , 3 in vaThe aim of this paper is to analyse digitized dental plaster casts \u20139 by a cFor more than a century, traditional film radiographs were used at most dental clinics before digital dental radiography was firstly introduced in the lThe latest developments in computer technology enable creating electronic tools that can benefit many areas of medicine, surgery and dentistry \u201315. ImagDevices for the digital imaging of dental casts include 3D scanners, 3D printers, and digital cameras in general. For the purpose of this paper, the 2D digital images of a standard plaster cast were acquired by a digital camera using different directions of illumination sources to obtain images with different shadow sizes related to the shape of the plaster cast.The experimental environment included the camera placed at a fixed position above the observed orthodontic plaster cast. 
The source of illumination was installed at the side of the plaster cast to obtain an image with a shadow and reflection to improve the location of the boundary of the object. Instead of using a single image, a series of images was acquired and combined for the following segmentation process. The location of the illumination source was defined by its azimuth image quality. The noise of the image negatively affects the quality of the image, changing the true grey-level values of each its pixel. Such noise can be caused by a number of factors, including image acquisition conditions, illumination level, positioning of illumination sources, and scene environment.The initial analysis of the image noise components allows designing the appropriate filter to reduce the noise and to keep the desired information. Noise components of the digital camera (using CCD or CMOS sensors) can be classified into two main categories: the fixed pattern noise caused by sensor non-uniformities and temporal noise. Temporal noise is a non-ideality noise in an image sensor which varies randomly over time. In fact, this type of noise varies from frame to frame and is independent across pixels. The sources of noise related to the camera include photon shot noise, dark current, readout noise, reset noise, and quantization noise.The Wiener filter was applied as a type of low-pass filter that adaThe Wiener filter output a with the median value of all The median filter was used in image processing as a robust filter , 19 thatThe determination of the curvature and location of circular objects in an (orthodontic) image are important tasks in machiThe circular Hough transform used in the present paper obtains satisfactory results in the detection of circle patterns within an image in noisyThe segmentation \u201326 of anThe proposed algorithm is based upon the region growing method using multiple seed points for segmentation of orthodontic images based on partitioning of an image into regions \u201330 usingQ connected sub-regions Pixels that have similar properties form a region and are grouped together. The purpose of image segmentation is to diThe region growing method is initiated with the appropriate selection of a set of seed points. When there exists a priori information about the image properties, such starting points can be defined directly. Otherwise, selected properties should be evaluated for each pixel and after the initial clustering process, seeds can be defined in the centroids of the obtained clusters. The growing starts from the initial seed points and using predefined criteria makes it possible to group pixels with similar properties into larger regions.The iteration process of the region-growing method can be stopped in case all pixels are distributed into regions according to the predefined criteria but some additional conditions can be added, such as region sizes or their shapes. According to the threshold values selected and the sensitivity, the extracted region may grow over the actual region boundary. The suitable selection of seed points, stopping rules, thresholding, and sensitivity are veryT components in two-dimensional space. The associated morphology algorithm can then be used to define the convex hull by kth convex hull component for The region growing method applied to one object results in several sub-areas. 
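To make the region-growing step concrete, the following self-contained sketch grows labelled regions from several seed points on a small synthetic grey-level image, using a simple running-mean intensity criterion and 4-connectivity. It is a minimal illustration of the technique named above, not the implementation used in the study: the seed positions, threshold, toy image, and noise level are all assumptions.

```python
import numpy as np
from collections import deque

def region_growing(image, seeds, threshold=10.0):
    """Grow one labelled region per seed.

    A pixel joins a region when its grey level differs from the region's
    running mean by less than `threshold` (4-connected neighbourhood).
    """
    labels = np.zeros(image.shape, dtype=int)
    for label, seed in enumerate(seeds, start=1):
        queue = deque([seed])
        labels[seed] = label
        region_sum, region_n = float(image[seed]), 1
        while queue:
            r, c = queue.popleft()
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                rr, cc = r + dr, c + dc
                if not (0 <= rr < image.shape[0] and 0 <= cc < image.shape[1]):
                    continue
                if labels[rr, cc] != 0:
                    continue
                if abs(float(image[rr, cc]) - region_sum / region_n) < threshold:
                    labels[rr, cc] = label
                    region_sum += float(image[rr, cc])
                    region_n += 1
                    queue.append((rr, cc))
    return labels

# Toy "plaster cast" image: two bright blobs (teeth) on a dark background.
img = np.zeros((60, 60))
img[10:30, 10:30] = 200.0      # object 1
img[25:50, 35:55] = 180.0      # object 2
img += np.random.default_rng(1).normal(0.0, 2.0, img.shape)  # mild sensor noise

seeds = [(20, 20), (40, 45)]   # one seed point per expected object
labelled = region_growing(img, seeds, threshold=25.0)
print("pixels per label:", np.bincount(labelled.ravel()))
```

On a real plaster-cast image the seed points would come from the circular Hough transform detections described above, with one or more seeds per detected tooth candidate; the sub-areas produced by the growing step are then merged as described next.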
Their merging can be done using computational geometry and detection of a convex hull.

The separation of two connected neighbouring regions whose common boundary was removed during data processing is an important issue in image analysis and machine vision applications. Identifying the common boundary between two regions or two overlapping objects is usually challenging, as one segment is incorrectly detected by the segmentation process. Several studies and algorithms have been proposed for this task; the procedure used here involves the following steps: (1) application of a number of morphological operators, such as dilation performed for boundary extraction, filling of holes to remove unwanted regions in the binary image, and shrinking to reduce the objects on the boundary to a single point (dilation aims to expand objects in a binary image); (2) boundary tracing of the two connected neighbouring regions (in the binary image the foreground pixels are labelled by 'one' and the background pixels by 'zero') and smoothing of the traced boundary by a moving average filter; (3) calculation of the second derivative at each point on the smoothed boundary of the two connected neighbouring regions; (4) determination of specific zones that contain the intersection points of the two connected regions, based on the second-derivative values situated inside the object; and (5) evaluation of the absolute extreme of each of the two zones obtained in the previous step, which marks the position of the intersection points of the two connected regions. In this paper, we propose identifying the common boundaries of two connected neighbouring regions of orthodontic images in this way.

The newly proposed method of dental arch image processing, based on the separate methods described above, consists of the following steps: (1) image acquisition with the proposed illumination strategy and fusion of the image matrices obtained; (2) Wiener and median filtering of the image data to reduce their noise components; (3) use of the circular Hough transform to apply local segmentation for individual teeth; (4) application of the region growing method with multiple seed points to find boundaries of the individual sub-images provided by the circular Hough transform; (5) merging of corresponding sub-areas using computational geometry and convex hull regions, to separate overlapping objects as well; and (6) evaluation of dental arch parameters and measures using the centers of mass of the individual objects detected by the previous segmentation process. The measures obtained are used for evaluation of the effect of invasive or non-invasive treatment in stomatology. The proposed segmentation process enables semiautomatic evaluation of the mass centers of individual objects and more efficient analysis of the locations of individual teeth.

Results of the proposed segmentation process were further analysed by a confusion analysis, which counts true positive (TP) and false negative (FN) pixels in the positive set, and true negative (TN) and false positive (FP) pixels in the negative set (outside the positive set). The numbers of pixels belonging to these regions define the following measures: sensitivity, as the true-positive rate of correct positive classification in the positive set; specificity, as the true-negative rate of correct negative classification in the negative set; the probabilities of false classification in the positive and negative sets; accuracy, as the measure of correct classification; and the Jaccard similarity index and Dice coefficient.
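The overlap and confusion measures just defined can be computed directly from a segmented mask and a reference mask. The sketch below shows the standard pixel-wise definitions on two toy binary masks; the masks and the function name are illustrative, not the study's data.

```python
import numpy as np

def confusion_measures(segmented, reference):
    """Pixel-wise confusion and overlap measures for two boolean masks."""
    seg = segmented.astype(bool)
    ref = reference.astype(bool)
    tp = np.sum(seg & ref)
    fp = np.sum(seg & ~ref)
    fn = np.sum(~seg & ref)
    tn = np.sum(~seg & ~ref)
    return {
        "sensitivity (TPR)": tp / (tp + fn),
        "specificity (TNR)": tn / (tn + fp),
        "false positive rate": fp / (fp + tn),
        "false negative rate": fn / (fn + tp),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "Jaccard index": tp / (tp + fp + fn),
        "Dice coefficient": 2 * tp / (2 * tp + fp + fn),
    }

# Toy example: a reference tooth region and a slightly shifted segmentation of it.
ref = np.zeros((50, 50), dtype=bool)
seg = np.zeros((50, 50), dtype=bool)
ref[10:30, 10:30] = True
seg[12:32, 11:31] = True

for name, value in confusion_measures(seg, ref).items():
    print(f"{name:>20s}: {value:.4f}")
```

In practice the reference mask would be a manually delineated tooth boundary and the segmented mask the output of the pipeline described above.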
Image pThe numerical results presented in Table\u00a0Similarity measures evaluated for the whole dental arch include the Jaccard index The comparison of selected measures obtained by manual and digital measurements is summarized in Table\u00a0This paper presented an innovative approach to the segmentation of orthodontic plaster cast images. The proposed method is based on processing the image constructed from separate images acquired with different illumination sources reflecting different edges of the object. The combined image with its increased contrast and enhanced object boundaries is then used for the detection of separate object.The results of segmentation of a digital image of the orthodontic plaster cast by the method proposed in this paper show that the convex hull followed by the separation of two connected objects form effective complementary techniques to improve the segmentation by the region growing method.The illumination from different sides highlights shadows of the object, converting each region into several sub-regions: hence, region growing, based on the application of multiple seed points, is a suitable tool to extract individual bodies. However, the method (1)\u00a0does not produce satisfactory results in the common boundary of two regions that have similar properties and (2)\u00a0the identified sub-regions related to the same region are not always recognized as one region.The final evaluation of the segmentation process points to the efficiency of the proposed method with a Dice similarity coefficient of 0.9436 and a mean error of real and estimated distances between corresponding teeth of 1.38%.Further studies will be devoted to further more sophisticated methods based upon three-dimensional convex hulls, used for the separation of individual bodies, as well as to a more detailed analysis of the shapes of the separate dental arch components."} +{"text": "Spinal cord injury and repair is a dynamic field of research. The development of reliable animal models of traumatic spinal cord injury has been invaluable in providing a wealth of information regarding the pathological consequences and recovery potential of this condition. A number of injury models have been instrumental in the elaboration and the validation of therapeutic interventions aimed at reversing this once thought permanent condition. In general, the study of spinal cord injury and repair is made difficult by both its anatomical complexity and the complexity of the behavior it mediates. In this perspective paper, we suggest a new model for spinal cord investigation that simplifies problems related to both the functional and anatomical complexity of the spinal cord. We begin by reviewing and contrasting some of the most common animal models used for investigating spinal cord dysfunction. We then consider two widely used models of spinal deficit-recovery, one involving the corticospinal tracts (CTS) and the other the rubrospinal tract (RST). We argue that the simplicity of the function of the RST makes it a useful model for studying the cord and its functional repair. We also reflect on two obstacles that have hindered progress in the pre-clinical field, delaying translation to the clinical setup. The first is recovery of function without reconnection of the transected descending fibers and the second is the use of behavioral paradigms that are not under the control of the descending fiber pathway under scrutiny. The most commonly used injury paradigms for spinal cord injury are contusions and transections. 
Contusions are produced by controlled blunt force directed to a portion of the cord, whereas transections consist of selective cuts to all or a portion of the cord. An advantage of contusion methods is that they produce histologically graded and consistent trauma , damage at least two major descending fiber tract systems: the corticospinal tract (CST) and the rubrospinal tract (RST) CST transection permanently abolishes the down conditioning of the H-reflex in the rat, whereas the ablation of the lateral column has no effect on this type of operant conditioning and locomotor capacity. We have demonstrated that lesions that affect the integrity of the RST or of its cells of origin in the red nucleus do not produce deficits in whole-limb movement to RM.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "In this paper, a stochastic finite element method (SFEM) is employed to investigate the probability of failure of cementitious buried sewer pipes subjected to combined effect of corrosion and stresses. A non-linear time-dependant model is used to determine the extent of concrete corrosion. Using the SFEM, the effects of different random variables, including loads, pipe material, and corrosion on the remaining safe life of the cementitious sewer pipes are explored. A numerical example is presented to demonstrate the merit of the proposed SFEM in evaluating the effects of the contributing parameters upon the probability of failure of cementitious sewer pipes. The developed SFEM offers many advantages over traditional probabilistic techniques since it does not use any empirical equations in order to determine failure of pipes. The results of the SFEM can help the concerning industry to better plan their resources by providing accurate prediction for the remaining safe life of cementitious sewer pipes. Underground sewer pipes are required to resist and operate safely under various external loads, as well as severe environmental conditions. The degradation of sewer pipes over time in combination with the effects of overlaying soil and surface traffic loads can sometimes cause catastrophic failures in sewer pipes. Despite the ever increasing spending on prevention and mitigation, there are thousands of collapse reports per year in sewer pipe networks throughout the world. The consequences of collapses of sewers are socially, economically, and environmentally devastating, causing, e.g., enormous disruption of daily life, massive costs, widespread pollution, and so on. It is known that sewer collapses are predominantly caused by deterioration of the pipes. For cementitious sewer pipes, corrosion is the main cause of deterioration. Corrosion can cause reduction in structural strength of the pipeline, leading to pipe collapse. Severe localised corrosion can cause pitting; also resulting in the failure of sewer pipes ,2,3,4. Tversus their service life. Ahammed and Melchers )fP) is 10% . The service life of each FE model can be evaluated using the results provided by the SFEM then the service life of the sewer pipe is reduced from 60 years to zero years for the acceptable probability of failure of 10%. 
A similar trend can also be seen for the other presumed acceptable values of the probability of failure. After choosing the type of each parameter (deterministic or stochastic) and creating the FE models, the time-dependent SFEM analyses were performed for different values of the scaled parameters in order to study their effect on the probability of failure of concrete sewer pipes. The effect of the buried depth of the sewer pipe on the probability of failure was examined together with the effects of the other deterministic (i.e., h* and t*) and stochastic (F*) parameters on the probability of failure of sewer pipes. In order to investigate the relative contribution and importance of each stochastic (random) variable in the probability of failure of concrete sewer pipes, a sensitivity analysis was carried out. The results of the sensitivity analyses for two examples with normalised soil cover (h*) after 100 years of service life show that the corrosion coefficients (i.e., α, λ) together contribute nearly half (48%) of the probability of failure of sewer pipes. This shows the significant influence of corrosion in general, and of the corrosion coefficients in particular, on the prediction of the probability of failure. In general, the results of the sensitivity analysis can be used to efficiently improve the design towards a higher reliability for sewer pipe projects, or to enhance the management, rehabilitation, and spending on existing pipelines.

In the present study a stochastic finite element method was utilised to predict the probability of failure of concrete sewer pipes subjected to the combined effects of corrosion and stresses. Uncertainties involved in pipe material, traffic load, and corrosion were considered in developing the stochastic finite element model. A nonlinear time-dependent model was chosen to predict the corrosion in concrete sewer pipes, and the model parameters were chosen based on a set of existing data in the literature. A normalised numerical example was employed to investigate the effect of both deterministic and probabilistic parameters on the probability of failure of sewer pipes. Two mechanisms of failure (i.e., corrosion and stress failure) were adopted to define the limit state functions. The results of the numerical simulations revealed a nonlinear relationship between most of the parameters and the probability of failure of sewer pipes. In addition, the results of the sensitivity analyses showed the significant contribution of the corrosion parameters. Study of the available literature clearly demonstrated that there is a serious lack of monitoring data for underground sewer pipes. The monitoring field data compiled by the water industry are neither readily available nor comprehensive enough to be utilised for appropriate analytical purposes. Access to such data is of paramount importance to check the validity of the model developed in this study, and the authors are currently working with the industries concerned in order to obtain or compile the much-needed data. The results of the SFEM can be used to improve the performance and planning of existing sewer systems, by providing better predictions for the probability of failure of sewer pipes compared to the existing approaches. The SFEM can bring together the effects of the contributing parameters in the probability of failure of the system being studied in a numerical framework with high precision. Using the SFEM, it is possible to study the effect of each parameter on the failure of the system and their interaction with each other.
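The kind of sampling-based, time-dependent reliability estimate discussed above can be sketched with a few lines of Monte Carlo simulation. The sketch below is not the SFEM of this study: it assumes a power-law corroded depth d(t) = α·t^λ (consistent with the two corrosion coefficients named above, but not necessarily the model used), simple stress and strength random variables with made-up distributions, and a single limit state, purely to illustrate how a probability of failure grows with time and how a service life follows from an acceptable Pf.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 200_000                                   # Monte Carlo samples

# Illustrative random variables (all distributions and moments are assumptions).
f_t   = rng.normal(4.0, 0.5, N)               # tensile strength of concrete, MPa
sigma = rng.lognormal(np.log(1.5), 0.25, N)   # stress from soil cover + traffic load, MPa
t0    = rng.normal(50.0, 2.0, N)              # initial wall thickness, mm
alpha = rng.lognormal(np.log(1.0), 0.3, N)    # corrosion coefficient alpha, mm / year**lambda
lam   = rng.normal(0.7, 0.05, N)              # corrosion exponent lambda

def probability_of_failure(t_years):
    """P[g < 0] with g = strength - stress amplified by the corroded loss of section."""
    depth = np.clip(alpha * t_years ** lam, 0.0, t0)     # corroded depth d(t) = alpha * t**lambda
    g = f_t - sigma * t0 / np.maximum(t0 - depth, 1e-6)  # thinner wall -> higher acting stress
    return np.mean(g < 0.0)

for t in (0, 25, 50, 75, 100):
    print(f"t = {t:3d} years: Pf = {probability_of_failure(t):.3f}")
# The service life for an acceptable Pf (e.g. 10%) is the first t at which Pf exceeds it.
```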
The SFEM also provides a time-dependant reliability analysis for predicting the remaining safe life of sewers, which provides a means to better manage the existing sewers and plan resources during their whole life of service."} +{"text": "In predisposed subjects with migraine or tension-type headache increasing intake of acute medications is associated with a progressive clinical worsening. When the days of symptomatic drug use reach a given threshold and the headache has become chronic for at least three months, a causal relationship is deemed to exist between the medication overuse and the clinical worsening, and the headache is termed medication-overuse headache (MOH). MOH affects nearly 2% of the general population. It represents a highly disabling condition that impacts considerably on the quality of life of sufferers and on the society in general because of high levels of disability and use of healthcare resources.MOH is treatable: withdrawal from overused drugs leads to a clinically significant improvement in the majority of patients. Even more so, when it is performed within an integrated approach aimed at targeting frequent associated features/conditions and in the most appropriate clinical settings . Some Authors also report improvement with prophylactic medication alone.Increasing amount of evidence suggests that the clinical process leading to MOH is paralleled by the establishment of chronic sensitization, which is partly reversed by withdrawal of overused drugs. In addition, several experimental reports confirm that frequent use of analgesics or triptans facilitates nociception, probably via the overexpression of CGRP and neuronal NOS in the trigeminal ganglion.Clinical practice and analysis of the literature suggest that the beneficial effect of drug withdrawal tends to be more marked and abrupt as compared to the reported slower reduction of headache days when prophylactic medication alone is used. Direct comparison trials are however lacking.A major issue to be addressed is the high risk of relapse into overuse and pain chronicity following improvement. A good wealth of literature findings has allowed precise quantification of the rate of relapse into overuse and pain chronicity following drug withdrawal , while data are missing as regards relapse rates following prophylactic treatment alone for MOH.Patient's education regarding the need to stop overuse;Withdrawal from overused drugs;Management of comorbid conditions;Optimization of symptomatic medications;Personalization of prophylactic medication.Taking all these observations into consideration, and awaiting for the necessary evidence from comparative trials, presently the optimal approach to MOH seems to be provided by a multistep process that possibly includes all of the following procedures:"} +{"text": "A substantial proportion had primary physical illness, while 18%, n = 8606, had primary neurologic disorders. The commonest physical comorbidity was hypertension (4%) and diabetes (2%). A significant proportion of the populace with mental disorders appeared not to be accessing mental health care services, even when it is available. Meaningful efforts to improve access to mental health services in the northeast region of Nigeria will require successful integration of mental health into primary and general medical services.Mental and neurological disorders are common in the primary health care settings. The organization of mental health services focuses on a vertical approach. 
The northeast as other low income regions has weak mental health services with potentially huge mental health burden. The manner of presentations and utilization of these services by the population may assist in determining treatment gap. We investigated the pattern and geographical distribution of presentations with mental disorders and explored the linkages with primary care in northeastern Nigeria over the last decade. A retrospective review of hospital-based records of all the available mental health service units in the region was conducted over a decade spanning between January 2001 and December 2011. A total of 47, 664 patients attended available mental health facilities within the past decade in the northeast. Overwhelming majority (83%, Mental, neurological, and substance use (MNS) disorders account for an estimated 14% of the global burden of disease . These dEvidence suggests that the burden of mental and neurological disorders predominates in the primary health care settings . Mental However, across the majority of low and middle income countries, a low level of skilled mental health professionals is the norm \u20137. This Available mental health care services are limited to a single tertiary mental health facility and psychiatric units in 5 general hospital settings in the entire northeastern region of Nigeria, comprising 6 states and serving a population of nearly 20 million people. Additionally, these mental health facilities also provide services to substantial parts of the neighbouring countries of Chad, Cameroon, and Niger republics.The life time prevalence of mental illness among Nigerians is 12.1% and the 12-month prevalence is 5.2%; and while 20% of these are severe enough to warrant hospital admission, only 8% of serious and debilitating mental illnesses are actually treated . In absoWhile the resources for mental health care services are clearly limited in northeast Nigeria, the burden of mental disorders is high and is comparable to what is observed in most low and middle countries . ParadoxAlthough the weakness of the mental health system of the region is evident and the needs for mental and neurological care services are high it is not known to what extent the available services are currently being utilized. It is also not known to what extent the needs for services remain unmet; the extent of communal utilization of the available mental health services is not known. This information is useful to provide useful insights and a basis for future regional mental health policy formulation, planning, or the implementation of national policies at the local regional level. It will also assist in achieving a nuanced understanding of the mental health gap of the region.This study therefore aimed at investigating the pattern and geographical distribution of presentation with mental disorders, along with an exploration of the linkages with primary care in northeastern Nigeria over the last decade.A retrospective review of hospital-based records of all the available mental health service units in the region was conducted over a decade spanning between January 2001 and December 2011.Study Setting. We studied the mental health services of the northeastern region of Nigeria comprising 6 states of Adamawa, Bauchi, Borno, Gombe, Taraba, and Yobe were handled by the region's tertiary mental health facility. A substantial proportion had primary physical illness, while 18%, n = 8606, had primary neurologic disorders. 
The commonest physical comorbidity was hypertension (4%) and diabetes (2%).The sociodemographics of the population of patients utilizing the mental health services in the northeast in the period under review in The overwhelming proportions of the patients were managed in Maiduguri, which is the solitary tertiary mental health facility in the region. The specialist hospital in Taraba saw the least number of patients over the last decade. See Primary neurological disorders were the most frequently seen conditions, followed by schizophrenia spectrum and depressive disorders, respectively. No personality disorders were diagnosed within this period. See Most of the disorders were handled by the tertiary mental health facility. Relatively few were seen at the mental health facilities in the general hospitals even though they are closer to the community.Primary physical health problems had a disproportionate representation in the out-patient clinic attendance, with 30% of the patients attending the out-patient clinics of the mental health facilities presenting with primarily physical health problems.Linkage of Mental Health with the Primary Health Care System. The linkage with the regional primary health care system was poor. Only one of the secondary mental health facilities had regular linkage networks and a referral system, with the tertiary mental health centre located in Maiduguri. There was no interaction or linkages between the secondary care and the primary care level.The total number of patients that utilized the facilities in the ten-year period falls short of the potential number of people in need of admission for a year only. This may be due to different factors at play preventing people from accessing the services being provided. These factors include lack of awareness , 15, stiAlthough psychosis constitutes low prevalence within the community , schizopOur study reveals the relatively higher burden on the tertiary health facility even though it is located in the regional capital which is geographically far from the majority of the region's population and a lower rate of utilization of the psychiatric units within general hospitals, which should be closer to the populace. This may be related to the poorly skilled manpower in these facilities or a lack of awareness that such services are available in the general hospitals.Headaches and the epilepsies (g40\u2013g47) were seen in unusually large numbers. This may have been due to a massive public education campaign about the availability of services for this group of disorder through the electronic media over the last decade particularly the Federal Neuropsychiatry Hospital in Maiduguri, Borno state.It is also evident that very little has changed in terms of the policy recommendation of strengthening and integrating mental health into primary care in Nigeria, even though the country's mental health policy was passed over two decades ago (1991) as evidenced by the low levels of utilization of the psychiatric units located in the psychiatric units located outside the regional capital. It will appear that the pragmatic way forward to increase access to mental health care services in underserved populations such as the northeastern region of Nigeria will have to be based on the fulcrum of effective integration of mental health care services into primary and general medical care services. 
This will however require adequate training in mental health, as well as ongoing supportive supervision from the few available mental health professionals from the tertiary facility.Overall there is poor utilization of in-patient services by the population majority of whom were treated at the tertiary health centre. A significant proportion of the populace with mental disorders appeared not to be accessing mental health care services, even when it is available. Meaningful efforts to improve access to mental health services in the northeast region of Nigeria will require successful integration of mental health into primary and general medical services. Community awareness and mobilization to engage with and to utilize these services where available will also be pertinent."} +{"text": "Currently there is great interest in identifying critical residues in proteins, to improve our understanding and allow for the engineering of protein families. Diverse approaches combine sequence information, structural data, dynamics analysis and functional description to determine the importance of amino acids with regards to protein function. In this work, we propose a hybrid approach for the identification of critical residues in proteins, combining the use of evolutionary information (co-evolution), cross-correlation of atomic fluctuations derived from Anisotropic Normal Mode Analysis simulations (ANMA) aBy combining the information of the covariance matrix derived from Statistical Coupling Analysis (SCA) and the The hybrid approach ANMA.SCA opens a wide range of possibilities in the study of functional motion within protein families. By means of detecting networks of critical sites and their topology it is able to reveal the hidden aspects of protein dynamics."} +{"text": "To the editor: As members of the French Ministry of Health Working Group on autochthonous urinary schistosomiasis, we read with interest the 2 recently published articles regarding schistosomiasis screening of travelers to Corsica, France . In the study based on travelers from Italy (In the study by authors from the GeoSentinel Surveillance Network (Altogether, these 2 studies identified only 1 patient with parasitological evidence of infection that was attributable to the already known 2013 focus in Cavu River. Therefore, these articles do not provide evidence of transmission of schistosomiasis in Corsica after 2013 or outside the Cavu River."} +{"text": "Heart rate variability has been recognized as a parameter that partly describes autonomic nervous system modulation of cardiovascular system.Analysis of heart rate variability has been proposed as clinically important as predictive and monitoring factor in subjects with different cardiac disease conditions and in subjects who suffer from diabetes mellitus.Despite numerous clinical and experimental trials on the topic of heart rate variability in the setting of intensive care medicine there is a lack of appropriate technological facilites for routine monitoring and analysis of this phenomenon in everyday clinical practice.Assessment of the parameters of heart rate variability in critically ill patients with different disease condition.Recordings of forty-two consecutive patients with different disease conditions admitted to intensive care were included in the observational trial. Electrocardiogram was recorded in the periods after admission and primary stabilization of the general condition, and afterwards during the period of stay in intensive care every day (first week of stay). 
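The short-term analysis referred to in the next sentences, linear time-domain indices together with Poincaré-plot descriptors, can be reproduced from an RR-interval series with a few lines of code. The sketch below uses a synthetic five-minute RR series and a conventional parameter set (SDNN, RMSSD, pNN50, SD1, SD2); it is illustrative only and is neither the software packages nor the patient data used in the study.

```python
import numpy as np

def hrv_time_domain(rr_ms):
    """Standard short-term time-domain HRV indices from RR intervals in milliseconds."""
    rr = np.asarray(rr_ms, dtype=float)
    diff = np.diff(rr)
    return {
        "mean RR (ms)": rr.mean(),
        "SDNN (ms)": rr.std(ddof=1),                 # overall variability
        "RMSSD (ms)": np.sqrt(np.mean(diff ** 2)),   # beat-to-beat variability
        "pNN50 (%)": 100.0 * np.mean(np.abs(diff) > 50.0),
    }

def poincare_descriptors(rr_ms):
    """SD1 and SD2 of the Poincare plot (RR[n] against RR[n+1])."""
    rr = np.asarray(rr_ms, dtype=float)
    x, y = rr[:-1], rr[1:]
    sd1 = np.std((y - x) / np.sqrt(2.0), ddof=1)     # width across the identity line
    sd2 = np.std((y + x) / np.sqrt(2.0), ddof=1)     # length along the identity line
    return {"SD1 (ms)": sd1, "SD2 (ms)": sd2, "SD1/SD2": sd1 / sd2}

# Synthetic 5-minute RR series (illustrative only): mean 800 ms, a slow
# respiratory-like oscillation near 0.1 Hz, and beat-to-beat noise.
rng = np.random.default_rng(7)
beats = np.arange(375)
rr = 800.0 + 30.0 * np.sin(2.0 * np.pi * 0.1 * beats * 0.8) + rng.normal(0.0, 15.0, beats.size)

for name, value in {**hrv_time_domain(rr), **poincare_descriptors(rr)}.items():
    print(f"{name:>12s}: {value:8.2f}")
```

Frequency-domain indices (for example LF and HF power) would be obtained from the same series after resampling and spectral estimation, which is omitted here for brevity.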
After recording, analysis of short-term segments of electrocardiograms was performed by means of software packages for heart rate vaiability analysis . Linear parameters in time domain and parameters in frequency domain and parameters of Poincar\u00e9 plot were analysed.Statistical analysis of the data obtained was performed by software package IBM SPSS version 20. Diversity of the clinical conditions of the patients and different medications that were used on regular bases limited statistical analysis to only descriptive statistics. Results of assessment of different parameters of heart rate variability in the setting of ICU, in small group of subjects with different comorbid states are presented in this paper.Subjects in critical state for different disease conditions or after trauma had different patterns of alterations of heart rate variability. In this small group of patients, despite considerable variations in reduction of different parameters of heart rate variability, alterations of heart rate variability were the most pronounced in subjects who had also had developed acute coronary syndrome before admission to the hospital."} +{"text": "Mus musculus), simian immunodeficiency virus (SIV) in rhesus macaques (Macaca mulatta) and human immunodeficiency virus (HIV) in humans.In this Opinion piece, I argue that the dynamics of viruses and the cellular immune response depend on the body size of the host. I use allometric scaling theory to interpret observed quantitative differences in the infection dynamics of lymphocytic choriomeningitis virus (LCMV) in mice responses in the two host species. Asquith and McLean during HIV and SIV infection can be used as a quantity for disease progression. The wide variation of this rate in HIV-infected humans, ranging from years to decades (Fraser et al., + T cell responses are slower in larger animals. Whether this influences the ability of the cellular immune responses to eradicate viruses during the acute phase of an infection remains unclear. Others have argued that the response rate of immune systems does not change systematically with body size. Instead, the sub-modular architecture of the immune system, where the number and size of lymph nodes increase sublinearly with body size, could balance the tradeoff between the local detection of pathogens and the global host response (Banerjee and Moses, In summary, there are indications that viral replication and the proliferation of CD8Discrepancies between experimental findings of the mice and human immune system have been described and illustrate that using mice as preclinical models for the study of human diseases can be challenging (Mestas and Hughes, The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "The temporal organization of sleep is regulated by an interaction between the circadian clock and homeostatic processes. Light indirectly modulates sleep through its ability to phase shift and entrain the circadian clock. Light can also exert a direct, circadian-independent effect on sleep. For example, acute exposure to light promotes sleep in nocturnal animals and wake in diurnal animals. The mechanisms whereby light directly influences sleep and arousal are not well understood. In this review, we discuss the direct effect of light on sleep at the level of the retina and hypothalamus in rodents. 
We review murine data from recent publications showing the roles of rod-, cone- and melanopsin-based photoreception on the initiation and maintenance of light-induced sleep. We also present hypotheses about hypothalamic mechanisms that have been advanced to explain the acute control of sleep by light. Specifically, we review recent studies assessing the roles of the ventrolateral preoptic area (VLPO) and the suprachiasmatic nucleus (SCN). We also discuss how light might differentially promote sleep and arousal in nocturnal and diurnal animals respectively. Lastly, we suggest new avenues for research on this topic which is still in its early stages. The use of light for image-forming vision is critical in sighted animals as it is used to both detect and distinguish objects in the surrounding environment. In addition to its role in image-formation, light also exerts direct effects on physiology and behavior. These non-image forming processes include synchronization (entrainment) of circadian rhythms has been shown to increase locomotor activity during the dark period and melanopsin in mice has been shown to cause a switch to diurnal activity patterns in 80% of the animals (Doyle et al., The flip-flop switch model has been used extensively to highlight the control of behavioral state by the sleep and wake promoting systems. Briefly, the model proposes a mutual inhibition between the sleep and wake promoting systems which allow both rapid and stable transitions (Saper et al., The suggestion that light initiates a sequence of activation and/or inhibition across a range of regions during the induction of sleep has also been suggested to exist for the suppression of pineal function, phase shifting, corticosterone release, and core temperature decrease Morin, . The proIn this review, we discussed our current understanding of the neural circuits involved in the control of murine sleep by light at the level of the retina and the downstream brain areas. At the level of the retina, the rod, cone and melanopsin photoreceptors work together to drive a continuous light signal to the sleep promoting system in the brain Figures and 2. HThe authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "In addition, hb5 mutants exhibited aerial rosettes at the base of the lateral inflorescence branches instead of growing cauline leaves as in wild-type plants .The overall growth behavior of the mutants did not differ significantly from the controls, except the bolting and flowering time were altered in three of the mutant lines. Hypolignified The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Due to the intricate structure of porous rocks, relationships between porosity or saturation and petrophysical transport properties classically used for reservoir evaluation and recovery strategies are either very complex or nonexistent. Thus, the pore network model extracted from the natural porous media is emphasized as a breakthrough to predict the fluid transport properties in the complex micro pore structure. This paper presents a modified method of extracting the equivalent pore network model from the three-dimensional micro computed tomography images based on the maximum ball algorithm. 
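The geometric core of the maximum-ball idea mentioned in the preceding sentence can be illustrated on a toy segmented volume: for every void voxel, the distance to the nearest solid voxel is the radius of the largest inscribed ball centred there, and local maxima of that radius field are natural pore-centre candidates. The sketch below shows only this geometric step on a synthetic random volume; it is not the paper's extraction algorithm or its improved pore and throat partition, and the volume, smoothing, and filter size are assumptions.

```python
import numpy as np
from scipy import ndimage

# Toy 3D segmented volume: True = solid grain; the complement is the pore (void) space.
rng = np.random.default_rng(3)
solid = rng.random((64, 64, 64)) < 0.65
solid = ndimage.binary_closing(solid, iterations=2)   # make grains and pores blob-like
void = ~solid

print(f"porosity of the toy volume: {void.mean():.3f}")

# Distance from each void voxel to the nearest solid voxel = radius of the largest
# ball centred at that voxel that still fits entirely inside the pore space.
radius = ndimage.distance_transform_edt(void)
print(f"largest inscribed ball radius: {radius.max():.2f} voxels")

# Local maxima of the radius field are natural candidates for pore centres;
# the narrower connecting regions between them play the role of throats.
pore_centres = (radius == ndimage.maximum_filter(radius, size=3)) & void
print(f"candidate pore centres: {int(pore_centres.sum())}")
```

A full extraction would additionally assign every void voxel to a pore or a throat and record the connecting network, which is the partition step discussed next.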
The partitioning of pores and throats is improved to avoid excessive memory usage when extracting the equivalent pore network model. The porosity calculated from the extracted pore network model agrees well with that of the original sandstone sample. Instead of Poiseuille\u2019s law used in the original work, the Lattice-Boltzmann method is employed to simulate single- and two-phase flow in the extracted pore network. Good agreement is obtained between the simulated and experimental relative permeability-saturation curves. Accurate acquisition of the micro structure and flow properties of porous media is of great importance in petroleum engineering, biomedical science, micro-electronics, and composites. In this work, the network is extracted from micro-CT images based on the maximum ball algorithm, and the flow is simulated with a three-dimensional nineteen-velocity lattice model. In recent years, the Lattice-Boltzmann method (LBM) has been developed into an alternative and promising numerical scheme for the simulation of fluid flows and physics modeling in fluids. The scheme is particularly effective in fluid flow applications involving interfacial dynamics and complex boundaries. Different wettability conditions are assigned to study the effects on oil recovery. This paper presents a modified method of extracting an equivalent pore network model from 3D micro-CT images based on the maximum ball algorithm. The improved partitioning methods avoid excessive memory usage when extracting the equivalent pore network model. Two types of sandstone micro-images are used to display the pore network models and to simulate single- and two-phase flow. The porosity and calculated permeability of the pore network model agree well with experimental benchmark data from the original sandstone sample. Using the extracted pore network model and two-phase flow codes based on the Lattice-Boltzmann method, simulations of the water-flooding mechanism are conducted to obtain the effects of wettability on oil recovery in the porous sandstone. The visualized water-flooding process and the relative permeability-saturation curves are obtained. Moreover, it is found that optimal oil recovery would be realized in mixed-wettability reservoirs. Both simulation results agree with the experimental benchmark data, which verifies the feasibility of our pore network model and the simulation codes. However, the assumptions used in the pore network extraction algorithm, which lead to idealized shapes, radii, and connectivity of pores and throats, result in a larger porosity and permeability of the reconstructed model compared with the original sample. Further research on reproducing the real pore structure and shape is worth pursuing to enable wider application in other fields."} +{"text": "Trauma is a well recognised risk factor for the future development of osteoarthritis (OA). The Armed Services Trauma and Rehabilitation outcome study (the ADVANCE study) is a 20-year cohort study comparing medical and psychosocial outcomes of military personnel both exposed and not exposed to significant trauma. The Bio-Mil-OA study, a sub-study of the ADVANCE study, is an ideal opportunity to investigate the predictive value of biomarkers in joint pain and osteoarthritis. The ADVANCE study is a 20-year cohort study of 600 combat casualties and 600 matched non-exposed participants investigating the predictive value of biomarkers and trauma on the long-term development of pain and OA in the hip and knee.
Validated serum biomarkers for OA, knee and hip radiographs, and patient-reported outcomes for pain and function will be taken at baseline and at 5 years. The value of the biomarkers in predicting OA development and progression and joint pain in those exposed to different levels of trauma will be investigated using quantitative immunoassays for catabolic markers of cartilage matrix degradation."} +{"text": "The introduction of tooth-colored resin composite had a significant impact in pushing dentistry into the esthetic arena, based on two major developments: adhesion and light-cured materials. Different light-curing devices were introduced based on their mode of action and wavelength properties. But probably one of the major technological breakthroughs in clinical dentistry is the integration of laser therapy as a therapeutic option into the treatment plan for clinical improvement. Different laser devices with different wavelengths and biological interactions have been introduced. Some papers on clinical studies encompass the effect of laser on the periodontium, the effect of mouthwash on biofilm control, and the use of an analgesic combination in the management of chronic temporomandibular disorders. We hope that the content of this special issue provides valuable insights on the interaction of oral tissues with light and matter to clinicians and researchers. Samir Nammour, Umberto Romeo, Carlos de Paula Eduardo, Toni Zeinoun"} +{"text": "Lung cancer is the leading cause of cancer-related mortality in Canada and around the world. We have reached a plateau. Current recommendations for first-line treatment of advanced NSCLC use both histologic and molecular diagnostics in designing the course of treatment. Recent advances in understanding signaling pathways for malignant cells, interconnections in those pathways, and the importance of various receptors have supported the development of targeted treatments. These treatments are aimed at specific alterations in the malignant cells. Various NSCLC subtypes are associated with potentially targetable biomarkers such as mutations of the epidermal growth factor receptor (EGFR) and KRAS. Knowledge about the advantages of treatments with targeted agents in advanced NSCLC is rapidly growing, but the hope is to eventually apply this knowledge to earlier stages of NSCLC and thus to increase the cure rate of these patients. Combining various targeted agents or sequencing them properly will be of the utmost importance in the new era of personalized targeted therapy. Contributors in this issue of Frontiers in Thoracic Oncology describe the importance of teamwork from diagnosis to treatment. Our review will cover the topic starting with the interventional procedures and moving on to treatment. We hope to provide a complete review of present and future approaches to personalized medicine in advanced NSCLC, reflecting the present views and practices in Canada. The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Purkinje cells (PC) of the cerebellar cortex have two distinct firing signatures: simple and complex spikes. The simple spikes are due to the intrinsic mechanisms of the cell and synaptic inputs coming through the parallel fibers and molecular layer interneurons. In this study we analyze how PC synchrony in the context of \"pauses\" in simple spikes affects different coding mechanisms of the neurons of the cerebellar nuclei.
To this end, we use a computational model of a cerebellar nuclear neuron and syntWe find that both the amount and type of pause synchrony is encoded as rate increases in the firing of the nuclear cell. Synchronized pauses briefly release the nuclear neuron from inhibition giving rise to well-timed spikes. Further, pauses synchronized with pause ending spikes caused greater firing rate modulation in the nuclear cell while pause beginning type synchrony enhanced the degree of timelocking. We also analyze the effect of pause length and spike jitter on the timelocking phenomenon of the nuclear neuron. We argue that these results lead to better understanding of how PC pause synchrony is processed in its target nuclear neuron."} +{"text": "Metapopulation processes are important determinants of epidemiological and evolutionary dynamics in host-pathogen systems, and are therefore central to explaining observed patterns of disease or genetic diversity. In particular, the spatial scale of interactions between pathogens and their hosts is of primary importance because migration rates of one species can affect both spatial and temporal heterogeneity of selection on the other. In this study we developed a stochastic and discrete time simulation model to specifically examine the joint effects of host and pathogen dispersal on the evolution of pathogen specialisation in a spatially explicit metapopulation. We consider a plant-pathogen system in which the host metapopulation is composed of two plant genotypes. The pathogen is dispersed by air-borne spores on the host metapopulation. The pathogen population is characterised by a single life-history trait under selection, the infection efficacy. We found that restricted host dispersal can lead to high amount of pathogen diversity and that the extent of pathogen specialisation varied according to the spatial scale of host-pathogen dispersal. We also discuss the role of population asynchrony in determining pathogen evolutionary outcomes. Relatively little is known about actual patterns of generalisation and specialisation in natural plant-pathogen systems largely because many fewer studies have focused on the pathogenicity structure of pathogen populations than on the resistance structure of their host populations. The spatial scale of interactions between a pathogen and its host is seen to be of prime importance in determining evolutionary trajectories of host-pathogen metapopulations because migration rates of one of the species affect the spatial and temporal heterogeneity of selection on the other. Here we develop a simulation model to specifically examine the joint effects of host and pathogen dispersal on the evolution of pathogen specialisation in a spatially explicit metapopulation. The present approach gives insights into the role of host and pathogen dispersal in driving pathogen diversity and adaptation and encourages further characterisation of the pathogenicity structure of crop and natural pathogen populations. In spatially structured populations, habitat spatial heterogeneity plays a crucial role in determining the potential for species and genotypes to coexist Some of the first models investigating the role of dispersal on host-pathogen coevolving patterns assumed a qualitative type of interaction (single locus population genetics model). The work of Gandon and colleagues et al. et al. et al.i.e. when hosts undergo disruptive selection and branch into two coexisting types. 
They found that branching was possible in a spatial model but requires higher virulence and stronger trade-offs than in a non-spatial model . However, they did not characterise the coexisting genotypes. In contrast, in a similar model but focusing on pathogen evolution, Kamo et al.et al.Host-pathogen interactions are however not limited to qualitative relationships but are also largely determined by quantitative traits et al.Another crucial question that arises in coevolving systems is how the geographical structure of coevolution may shape spatial patterns of variation in the coevolving species Here we develop a simulation model to specifically examine the joint effects of host and pathogen dispersal on the evolution of pathogen specialisation in a spatially explicit metapopulation. We consider a plant-pathogen system in which the host metapopulation is composed of two plant genotypes and the pathogen is dispersed by air-borne spores. We assumed that the pathogen population is characterised by a single life-history trait under selection, the infection efficacy of the pathogen on the host genotypes. We did not consider environmental spatial heterogeneity. In particular we addressed the following questions: How do the scale of dispersal and the strength of evolutionary trade-offs affect the potential for multiple pathogen genotypes to coexist? Does the level of pathogen specialisation depend on host and pathogen dispersal scales? Is there spatial heterogeneity in patterns of diversity? We first present the model and the simulation experiment. Then we study the extent of synchrony among local populations, the effect of dispersal on pathogen diversity and level of specialisation. We also analyse the sensitivity of our results to the pathogen life-history traits, to the shape of the dispersal function and to the metapopulation structure. Finally we discuss our results with an emphasis on the role of population asynchrony in determining evolutionary outcomes.e.g. seeds and spores). The model describes a polycyclic disease caused by a foliar pathogen dispersed by air-borne spores (e.g. rust fungus). The model is stochastic and time is considered as discrete. In population We consider a metapopulation model in which plant and pathogen populations are inter-connected via dispersal of propagules becomes infected with a probability Seeds and spores that fail to establish are removed (no seed or spore bank) which implicitly imposes a cost of dispersal. The different steps and their chronology are detailed in Appendix S1 in The host and pathogen metapopulation consists of a network of The probability that a spore (or respectively a seed) disperses from population The host population is composed of two resistant genotypes, The pathogen population is initially composed of one generalist genotype defined by a fixed infection efficacy on each host genotype. Specialist genotypes emerge through the balance of mutation and selection. They are defined according to the gain, in percentage of the infection efficacy of the generalist genotype, that they experience on the host to which they are adapted (their susceptible host) and the cost that they suffer on the other host (their resistant host). The gain of the generalist is by definition 0. We assume a trade-off between gain and cost such that the i.e. those with small gains or losses in Pathogen genotypes are classified according to their gain in infection efficacy on their susceptible host. 
Genotype 1 corresponds to the full specialist of host Ten mean dispersal distances (in proportion of the region size) were considered for both host and pathogen by varying We fixed the infection efficacy of the generalist to 0.2. In addition to the generalist we defined 10 specialists on each host by increasing The probability that a spore was of the same genotype as its parental lesion was set as Sensitivity to the other parameters . Thus, boom and bust dynamics (time periods characterised by sustained increase in one of the host genotype followed by its sharp and rapid contraction in favour of the second host genotype) dominated the system . Thus, spatial coexistence was also observed but the The mechanisms explaining the pattern of pathogen diversity when vs.When the trade-off shape was linear was enhanced because of a reduce amplitude in oscillations and thus a reduce probability of fixation of one of the two full specialists genetic cluster(s) with low genotypic variability (efficacy range). Here also among-population asynchrony was sufficient to allow spatial coexistence. As the scale of dispersal increased, the system became increasingly dominated by boom-and-bust dynamics. Local populations became increasingly synchronised and spatial coexistence was only transitory. For large pathogen and host dispersal scales, severe oscillations appeared leading to frequent local extinction-recolonisation of hosts. When moderately specialised genetic clusters were selected, these oscillations also resulted in the global extinction of one of the hosts and the subsequent full specialisation of the pathogen population on the remaining host. In this case the metapopulation acted as a single population where host and pathogen frequencies experienced growing oscillations, resulting in a single genotype of each species being fixed.et al.The effect of host and pathogen dispersal on the maintenance of diversity was specifically studied by Thrall and Burdon The competitive exclusion principle suggests that in an environment composed of two resources, competition can either lead to selection for a generalist or for two specialists. We found here that up to four pathogen genetic clusters can coexist in a two-resource (host) environment. Using a one-patch model for studying adaptive evolution, Abrams A particular feature of the current study was to characterise the level of specialisation, as measured by et al.The impact of gene flow on levels of pathogen specialisation has only rarely been discussed in the literature. Most studies deal with spatially unstructured populations et al.A contrasting pattern of specialisation was observed when pathogen dispersal was fixed to be spatially local and host dispersal scale was varied. Under these conditions, the pathogen generally dispersed less than the host and it was expected that this differential would decrease the level of pathogen specialisation. However, contrary to this prediction, increasing host mean dispersal distance initially resulted in greater pathogen specialisation, which reached its maximum for intermediate host mean dispersal distances . For lare.g.et al.Puccinia triticina) on wheat (Triticum aestivum) at the scale of France and found coexistence among qualitative specialists (very restricted host range), quantitative specialists (large host range but transmitted efficiently only by a few of them) and generalists . 
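To make the dispersal and gain-cost structure of the metapopulation model described above more concrete, the sketch below implements one generation of stochastic spore dispersal along a transect of patches together with a trade-off linking a specialist's gain on its susceptible host to its cost on the resistant host. The exponential dispersal kernel, the power-law trade-off, and all parameter values are placeholders chosen for illustration; they are not taken from the paper, which defines its own kernel and trade-off shapes.

```python
import numpy as np

rng = np.random.default_rng(1)

def dispersal_probs(distances, mean_distance):
    """Assumed exponential kernel: normalised probability of dispersing to each patch."""
    w = np.exp(-distances / mean_distance)
    return w / w.sum()

def infection_efficacy(gain, base=0.2, trade_off_shape=2.0):
    """A specialist gains `gain` on its susceptible host and pays a cost on the other host;
    here the cost is assumed to grow with the gain as a power law."""
    cost = gain ** trade_off_shape
    return base * (1 + gain), base * (1 - cost)

# One generation of dispersal for 1000 spores released from the patch at position 0
patch_positions = np.arange(10)
probs = dispersal_probs(np.abs(patch_positions - 0), mean_distance=2.0)
arrivals = rng.multinomial(1000, probs)

# Stochastic infection of the susceptible host genotype in one destination patch
eff_susceptible, eff_resistant = infection_efficacy(gain=0.5)
new_lesions = rng.binomial(arrivals[3], eff_susceptible)
print(arrivals, eff_susceptible, eff_resistant, new_lesions)
```

Varying the assumed `mean_distance` and `trade_off_shape` parameters is the code-level analogue of the simulation experiments described above, in which the scales of host and pathogen dispersal and the concavity of the trade-off jointly determine whether generalists or specialists are favoured.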
This high diversity of pathogenicity patterns is consistent with the high diversity found here when the host disperses essentially locally. In addition, we found, in the present study, that more generalist, and thus less damaging, pathogen genotypes were favoured when the host population fluctuates in time because, under these conditions, the pathogen population is forced to track host oscillations. Such fluctuations in variety frequencies could be impose in agricultural landscapes to prevent the evolution of pathogen populations toward highly specialised and adapted morphs. Moreover, local spatial aggregates of crops or varieties that behave similarly with regard to a given disease should be avoided in order to prevent the local emergence of specialised (and damaging) pathogen genotypes.In our study, although the host population was diversified, we assumed that the properties of the two host genotypes did not change through time and thus that the pathogen evolves faster than the host. This is particularly the case for crops, for which the same host genotypes are used for several years and that integrate relatively low genetic diversity for disease resistance. In agricultural landscapes host dispersal and aggregation is restricted to spatial and temporal patterns of cropping. Although there have been a few attempts to produce advice on optimal crop spatial organisation for restricting pathogen evolution e.g., this quAnother example is provided by invasive plants which are likely to exhibit low diversity with respect to disease resistance Relatively little is known about actual patterns of generalisation and specialisation in natural systems largely because many fewer studies have focused on the pathogenicity structure of pathogen populations Text S1Model description and supplementary figures.(DOCX)Click here for additional data file."} +{"text": "Solanum lycopersicum) evolved from a wild ancestor (S. pimpinellifolium) bearing small and round edible fruit. Molecular genetic studies led to the identification of two genes selected for fruit weight: FW2.2 encoding a member of the Cell Number Regulator family; and FW3.2 encoding a P450 enzyme and the ortholog of KLUH. Four genes were identified that were selected for fruit shape: SUN encoding a member of the IQD family of calmodulin-binding proteins leading to fruit elongation; OVATE encoding a member of the OVATE family proteins involved in transcriptional repression leading to fruit elongation; LC encoding most likely the ortholog of WUSCHEL controlling meristem size and locule number; FAS encoding a member in the YABBY family controlling locule number leading to flat or oxheart shape. For this article, we will provide an overview of the putative function of the known genes, when during floral and fruit development they are hypothesized to act and their potential importance in regulating morphological diversity in other fruit and vegetable crops.Domestication of fruit and vegetables resulted in a huge diversity of shapes and sizes of the produce. Selections that took place over thousands of years of alleles that increased fruit weight and altered shape for specific culinary uses provide a wealth of resources to study the molecular bases of this diversity. Tomato ( Angiosperm plants vary tremendously in morphological traits related to their reproduction. The floral appearance is driven by evolutionary aspects of the pollination syndrome whereas distinct dispersal modes drive the evolution of phenotypes associated with the fruit. 
In natural settings, the main functions of the fruit are to protect the developing seeds and to act as a dispersal agent. The onset of the change to an agricultural lifestyle, approximately 10,000 years ago, provided strong selection pressures on the fruit of incipient vegetable and fruit crops. The selections made by early farmers offer a great opportunity to identify the molecular basis of a range of phenotypic traits, especially those related to fruit morphology and flavor. For example, selections against bitter taste resulted in palatable eggplant and cucumber in promoters and coding regions underlie the phenotypic diversity of the tomato fruit.Even though the fruit is a terminal structure that forms relatively late in the plant's lifecycle, the formation of this organ and the parameters that determine its final dimensions are rooted much earlier in the plant's lifespan. Therefore, it is important to view tomato fruit development in the context of overall plant development starting after germination. Plant growth in tomato and other Solanaceous plants is characterized by a sympodial shoot architecture where after formation of 8\u201310 leaves, the shoot apical meristem (SAM) terminates into the inflorescence meristem (IM), and growth continues from lateral meristems called sympodial meristems (SYM). Meanwhile, the IM terminates into the floral meristem (FM) generating the flower controls the number of carpel primordia and a mutation results in a fruit with more than the typical two to three locules and a WD40 motif containing protein (Solyc02g083940). Further association mapping led to the identification of two single nucleotide polymorphisms located 1080 bp downstream of the putative tomato ortholog of WUS leads to increases in locule number with more pronounced effects on locule number than lc is considered to underlie fas screens using Arabidopsis KNOX and BELL transcription factors as bait led to the identification of OFP members, lending support for the notion that OVATE interacts with patterning genes that impact fruit shape at the early stages of gynoecium development . However, the most dramatic effect of SUN on shape is manifested after anthesis, during phase 5, which is the cell division stage of fruit development , Cucumis melo (melon) and Vitis vinifera (grape), the putative ortholog of KLUH and members of the same CYP78A class were associated with larger fruit, suggesting a possible role of this small and largely unknown cytochrome P450 family in parallel domestication processes in fruit and vegetable crops (Chakrabarti et al., CNR/FW2.2, members of the family regulate plant growth and biomass as well as ear length and kernel number per row in maize (Guo et al., FW2.2/CNR-like genes to control weight in a range of crop species (Paran and Van Der Knaap, The domestication of fruit and vegetable crops was likely driven by selections for increases in fruit weight and shape in many incipient crop species. Thus, the question arises whether any of the tomato genes or members of their families are associated with fruit weight and shape in other species. 
Of the fruit weight genes, other members of the CYP78A class to which WUSCHEL-like gene in soybean showed an enlarged gynoecium (Wong et al., WUS could impact the size of fruits and vegetables in other crops.Of the fruit elongation genes, down regulation of a member of the OFP family in pepper led to a longer shaped fruit (Tsaballa et al., ovate (Rodriguez et al., lc and fas mutants both provide evidence for the existence of other genes that interact with these major regulators of fruit shape and size. In addition, the identification of additional fruit weight QTLs (Huang and Van Der Knaap, Recent discoveries have started to shed light on the regulation of fruit shape and weight, and the molecular mechanisms underlying this diversity found in cultivated germplasms. However, these six genes are unlikely to represent the entire repertoire of genes acted on by domestication and diversification. The identification of suppressors of Advancing the research into the function of fruit morphology proteins is going to lead to fundamental insights into plant developmental processes. Especially processes that regulate cell proliferation and enlargement patterns, as well as its rate and duration are of particular importance since they pertain to growth of all plant organs and eventually yield. In all, the discoveries made using tomato fruit morphology as a model will undoubtedly support fundamental and applied research that is applicable to many other plant systems.All authors contributed critically to the writing and editing of the manuscript, agree to be accountable for the data presented and approve the version of the manuscript. Esther van der Knaap wrote the manuscript and constructed Figure The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Migraine with (MA) and without aura (MO) is a common brain disorder that affects 15% of the general population. Genetic studies on twins have shown that MA and MO heritability spans between 50% and 60% and pharIn recent years the technologies for studying nucleic acids have literally exploded, opening to new possibilities for study of genetics and epigenetics of MA and MO.One of the most significant results is the sharp cost decrease for the whole genome DNA sequencing, since the psychological threshold of 1000$ for a 30X genome is about to be achieved. This cost reduction is fostering a wealth of large sequencing campaigns that will allow overcoming all the limitations due to the poor knowledge of human genetic variability that has plagued the ability of identifying the genetic basis of all sporadic diseases including MA and MO.The reduction of nucleic acids sequencing costs and the availability of cost effective microarray solutions for the analysis of DNA methylation has favored the implementation of epigenomic studies, in particular DNA methylation microarray has been thoroughly used providing new insight regarding the variability and the role of such epigenetic agent. DNA methylation, miRNA and histone modifications have proven to be a potential source of powerful and robust biomarkers.Taken together both the new genetic and epigenetic omic approaches have the potential to provide new molecular insight in the etiology of MA and MO. 
Moreover from such approaches we expect to obtain tools to improve migraine diagnosis, patient stratification, and therapy.None."} +{"text": "The activation of the classical renin\u2013angiotensin aldosterone system (RAAS) is known to be involved in the regulation of blood volume and blood pressure and plays an important role in cardiovascular pathology including hypertension and heart failure. Evidence is now available that independently of the classical RAAS, several RAAS components are expressed in cells from different organs including the heart and kidney and are able to change important physiological properties like cell communication, heart excitability, and activation of ionic channels and cell volume when applied locally to the cells or systeAlthough studies performed on transgenic animals generated controversial results, evidence is available that the overexpression of some components of RAAS like Ang II on cardiac muscle, elicit ventricular hypertrophy independently of changes in arterial blood pressure . FurtherThe harmful effects of Ang II on cardiovascular and renal systems inducing remodeling, seems, in part, related to increase in oxidative stress. The discovery of angiotensin converting enzyme 2 (ACE2) and the In this Research Topic, the pathophysiological role of local RAAS in different tissues and organs are reviewed by different authors, each one expert in their respective fields \u201318. We hThe author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "T0 and T30min SDF allowed identification of convoluted tubules with lumen and peritubular microvessels without alterations. The architectural feature of the kidney was of convoluted tubules surrounded by peritubular microvessels forming a homogeneous aspect that merges the sequence of a microvessel followed by a convoluted tubule to the fullest extent. From T1h, the outlining of tubules became blurred by their enlargement and the onset of the compression of tubular lumen and peritubular microvessels, suggesting an obstructive phenomenon by cellular edema. This focal occurrence in T1h became increasingly widespread at T6 changing the homogeneous organization of the cortex architecture.These findings suggested that the process involved in the genesis of renal failure in sepsis could be due to the cyclical repetition of the event: peritubular microcirculatory dysfunction-cytopathic hypoxia of the tubular wall epithelia-edema of the tubular cells-compression of both peritubular microvessel and tubular lumen-exacerbation of microvessel and nephron dysfunction. This hypothesis raised on visual analysis of SDF images could be confirmed by histology which showed a progressive swelling of the epithelial cells of convoluted tubules and reduction of tubular lumen and peritubular vessels with the progression of sepsis. 
In addition, epithelial cells showed membrane injury, pyknosis and necrosis.The genesis of acute renal failure in severe sepsis appears to depend on the repetitive cycle of peritubular microcirculatory dysfunction and subsequent tubular injury that exacerbates the progression of the renal injury, thus suggesting the conjoined participation of microvessels and their adjacent cells in the genesis of the solid organ dysfunction."} +{"text": "Nonvisualization of the internal carotid artery (ICA) on cross-sectional imaging studies can be due to congenital (dysgenesis of the ICA) or acquired (complete occlusion of ICA) causes. We report two cases, one with absent carotid canal on bone window setting of computed tomography (CT) suggestive of congenital cause and the other with normal carotid canal, suggesting acquired cause. Development of aortic arches with six pathways of collateral circulation in brain is also discussed. Nonvisualization of the internal carotid artery (ICA) on cross-sectional imaging studies can be due to dysgenesis of the ICA or due to complete occlusion. In both the cases the clinical possibility ranges from that of an asymptomatic patient to one having transient ischemic attack (TIA) and fatal stroke .The aorta develops at around 21st day of embryonic life. Primitive aorta consists of ventral and dorsal segments that are continuous through the first aortic arch. The two ventral aortae fuse to form the aortic sac. The dorsal aortae fuse to form the midline descending aorta. Six paired aortic arches develop between the ventral and dorsal aortae .(i) First Arch. It contributes to the formation of the maxillary and external carotid arteries.(ii) Second Arch. It contributes to the formation of the stapedial arteries.(iii) Third Arch (Also Known as Carotid Arch). Proximal segments of the third pair form the common carotid arteries. The distal portions contribute to the formation of the internal carotid arteries along with segments of the dorsal aortae. The ECA arises as a sprout from the CCA and also receives contribution from the first and second aortic arches.(iv) Fourth Arch. The left fourth arch forms the aortic arch. The proximal right subclavian artery is formed from the right fourth arch whereas the distal right subclavian artery is derived from a portion of the right dorsal aorta and the right seventh intersegmental artery.(v) Fifth Arch. It forms the rudimentary vessels that regress early.(vi) Sixth Arch. The left sixth arch contributes to the formation of the main and left pulmonary arteries and ductus arteriosus. The right sixth arch contributes to the formation of the right pulmonary artery.Initially, the aortic arches are connected to the dorsal aorta. As development progresses, the connection of the first and second arches to the dorsal aorta regresses and they contribute to the formation of the ECA. Persistence of the connection with the dorsal aorta may present as transcranial ECA-ICA anastomosis. Through this anastomosis the internal maxillary artery and middle meningeal arteries can supply the distal ICA in cases of hypoplasia of the ICA (known as rete mirabile in the region of the cavernous sinus) .Two longitudinal vascular plexuses dorsal to the third and fourth arches form the basilar artery during the 5th week of intrauterine development. Multiple primitive vessels connect the developing basilar artery and the ICA. 
All of these vessels involute except for the most cranial one, which persists as the posterior communicating artery .We hereby present two cases, which presented to our hospital with symptoms of TIA and, on evaluation with computed tomography angiography (CTA) of carotid vessels, diagnosis of nonvisualization of the ICA was made.A 64-year-old male, hypertensive for 20 years (on medication), presented with transient right-sided weakness and numbness. The patient underwent CTA imaging of the carotid vessels and circle of Willis, which showed absent ICA on left side and collateral flow to the left hemisphere through the circle of Willis. Absence of the left carotid canal was also discovered at bone window setting of computed tomography (CT), which confirmed the congenital nature of the nonvisualization of left ICA. Maximum intensity projection (MIP) reconstruction revealed that the left middle cerebral artery was fed through a dilated left anterior cerebral artery supplied by the anterior communicating artery .There was no associated vascular malformation or any transcranial ECA-ICA anastomosis or any embryonic persistent artery. The patient's symptoms resolved spontaneously and were attributed to either transient ischemic attacks or migraine headaches. No thromboembolic source was identified.A 59-year-old male, hypertensive for 5 years (on medication), presented with transient left-sided weakness and numbness. The patient underwent CTA imaging of the carotid vessels and circle of Willis, which showed nonvisualization of ICA on right side and collateral flow to the right hemisphere through the circle of Willis. Right carotid canal was normal at CTA, which confirmed the acquired nature of the nonvisualization of right ICA. A diagnosis of complete occlusion of the right ICA along its whole course was made .Our first case was diagnosed as agenesis of the left ICA with absent left carotid canal assessed on bone window setting of CT. Dysgenesis of the ICA includes agenesis, aplasia, and hypoplasia. Complete failure of development of the ICA leads to agenesis whereas hypoplasia refers to a very small caliber ICA after the development started and the term aplasia is used when only vestiges of the ICA are present . DysgeneAgenesis of the ICA occurs before 24\u2009mm stage of the embryonic growth .Lie reported the first case of agenesis of the ICA and defined agenesis as the total absence of the entire length of the artery. According to Lie, there are six pathways (types A to F) of collateral circulation associated with agenesis of the ICA. In type A, there is unilateral absence of the ICA with collateral circulation to the ipsilateral anterior cerebral artery and middle cerebral artery through anterior communicating artery and hypertrophic posterior communicating artery, respectively. Unilateral absence of ICA with collateral flow to the ipsilateral anterior cerebral artery and middle cerebral artery across a patent anterior communicating artery comes under type B as in our cases. In type C, bilateral ICA agenesis is associated with patent anastomoses between carotid and vertebra-basilar system. Unilateral agenesis of the cervical portions of the ICA with collateral from an intercavernous communication from the cavernous segment of contralateral ICA comes under type D. In types E and F, there is bilateral ICA hypoplasia with bilateral posterior communicating arteries supplying the middle cerebral arteries in type E and the hypoplastic ICA getting flow from bilateral rete mirabile in type F. 
Retia mirabilia are transcranial anastomoses between the branches of ICA and external carotid artery system .Congenital absence of ICA is often associated with intracerebral aneurysm formation. The carotid canals in petrous bone form secondary to the presence of the embryonic ICA. Absence or hypoplasia of embryonic ICA leads to hypoplasia of the carotid canal. Absence of carotid canal on a computed tomography scan should suggest a congenital ICA abnormality and suggest an extensive search for associated intracranial vascular malformations .In patients with agenesis of the ICA, cross-sectional imaging techniques are currently the modality of choice . Our secDysgenesis of the ICA should be differentiated from complete occlusion especially when unilateral. Complete occlusion of the ICA is more likely due to severe atherosclerosis, chronic dissection, or fibromuscular dysplasias .In patients with occlusion of the ICA, postocclusive diminished arterial pressure causes collaterals to develop via the circle of Willis which is important to prevent stroke. The anterior communicating artery and the posterior communicating artery are the collateral channels through which the circle of Willis can supply blood flow to the affected side of the brain. When collateral compensation mechanisms fall short, low-flow infarcts in border zone areas of the brain may develop .Cote et al. evaluated forty-seven patients with ICA occlusion who were asymptomatic or had only mild neurological deficit and prospectively followed them up for an average of 34.4 months. During that period of time, they found that 51% of patients experienced TIAs in the territory of the occluded artery and 23.5% of patients suffered a cerebral infarction .Agenesis of ICA is mostly asymptomatic, being identified only incidentally. The finding of absent carotid canal on routine CT should suggest the diagnosis. It is important in the management of cerebrovascular accidents as the single ICA supplies both the cerebral hemispheres. ICA dysgenesis has to be distinguished from acquired stenosis as the management of the two conditions is different."} +{"text": "It is involved in a wide range of cellular processes that require independent regulation. However, our understanding of how this single second messenger achieves specific modulation of the signaling pathways involved remains incomplete. The subcellular compartmentalization and temporal regulation of cAMP signals have recently been identified as important coding strategies leading to specificity. Dynamic interactions of this cyclic nucleotide with other second messenger including calcium and cGMP are critically involved in the regulation of spatiotemporal control of cAMP. Recent technical improvements of fluorescent sensors facilitate cAMP monitoring, whereas optogenetic tools permit spatial and temporal control of cAMP manipulations, all of which enabled the direct investigation of spatiotemporal characteristics of cAMP modulation in developing neurons. Focusing on neuronal polarization, neurotransmitter specification, axon guidance, and refinement of neuronal connectivity, we summarize herein the recent advances in understanding the features of cAMP signals and their dynamic interactions with calcium and cGMP involved in shaping the nervous system. The development of nervous system connectivity is a multistage process that requires neuron specification and polarization, axon guidance and targeting, as well as refinement of synaptic connections. 
All stages require second messenger cascades involving cAMP. This cyclic nucleotide is required for the control of transcription . The diversity of regulation and localization of ACs (10 isoforms) and PDEs (more than 40 isoforms) offers a wide range of combination to shape specific signals in response to distinct stimuli are critical for this process. AKAP isoforms are targeted to distinct subcellular compartments and modulate the spatial spread of cAMP, binding at least some isoform of PDEs and ACs and choline acetyltransferase transporters (ChAT). In contrast, overexpression of the voltage gated sodium channel Nav1.2 increases calcium spike frequency and the number of the inhibitory GABAergic and glycinergic spinal neurons at the expense of VGluT and ChAT-expressing neurons, demonstrating that pattern calcium activity affects neuronal differentiation but not transmembrane ACs (AC1 to 9) is required for the intrinsic ability of axons to grow are of special importance to gain new insight into the spatiotemporal control of cAMP signals. Pursuing the analysis of these coding strategies will provide a better understanding of the regulatory second messenger networks shaping neuronal connectivity.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Journal of Experimental Botany Special Issue on Fruit Development and Ripening . At that time the biosynthesis and mode of action of ethylene in fruit ripening had already been established, and advances in genetics were revealing links between genes and phenotypes, especially noteworthy was the map-based cloning of genes underlying tomato non-ripening loci such as ripening inhibitor (rin). Since then there have been substantial advances in our understanding of ripening in tomato and many other dry and fleshy fruits. This has been accelerated by the delivery of genome sequences for a wide range of plants including fleshy fruit bearing species and the development of systems biology approaches to understanding regulatory networks.It has been 12 years since publication of the last The first paper in the 2002 issue was a seminal work by Sandy Knapp focused on fruit diversity in the Solanaceae and highlighting the phylogenetic relationships between dry and fleshy fruit forms . The curFRUITFULL (FUL) and SHATTERPROOF (SHP), and set these events in the context of a phylogenetic framework. Cristina Ferr\u00e1ndiz and Chlo\u00e9 Fourquin from Instituto de Biolog\u00eda Molecular y Celular de Plantas in Valencia, Spain, then review the role of FUL and SHP in Arabidopsis and provide a comprehensive discussion of their role in many other species including Brassicas, legumes and also in fleshy fruits (FUL and SHP expression, and their review concludes that there are conserved roles of FUL and SHP in late fruit development both in fleshy and dry fruits. These ideas were hinted at in the 2002 Special Issue and strong supporting evidence is now presented consistent with dehiscence and ripening sharing a common origin and being parallel, rather than completely different processes. 
Mar\u00eda Dolores G\u00f3mez and colleagues, also from Valencia, explore and develop for us further ideas about the similarities and differences between the mechanistic basis of \u2018ripening\u2019 and \u2018over-ripening\u2019 in dry and fleshy fruits including comparing the transcriptomes of senescent and ripening Arabidopsis siliques and tomato berries (In the first paper in this special issue, Sofia Kourmpetli and Sin\u00e9ad Drea from the University of Leicester in the UK review the regulatory networks involved in the development and maturation of two, at first sight, very different dry fruits, the poppy capsule and the cereal grain. They highlight the importance of MADS-box genes including y fruits . Dehisce berries . et al. describe the process in the non-climacteric grape berry including an in-depth discussion of the key processes associated with grape maturation such as flavonoid biosynthesis and volatile development. The authors provide an excellent summary of the profound effect of the environment on grape berry ripening, and a review of the role of various hormones in the ripening process in this species where ethylene seems to have an effect even on the ripening of this non-climacteric fruit. The role of hormones other than ethylene in modulating fleshy fruit ripening has received relatively limited attention, although the effects of auxin as a ripening inhibitor are perhaps best known. The reviews by Molecular networks controlling the ripening of fleshy fruits are also the focus of reviews by scientists from a range of European and South American laboratories (et al. from the Max-Planck-Institute of Molecular Plant Physiology in Germany and Jos\u00e9 L. Rambla and colleagues from Valencia and the Netherlands (The importance of the plastid in ripening fruits is the subject of the review by Maria Florencia Cocaliadis and her colleagues from Valencia (The final three review papers focus on genes regulating the genetic basis of fruit morphology in horticultural crops and the genetics, form and function of fruit cuticles. Antonio J. Monforte and colleagues from Spain and the USA describe the identification of genes controlling size and shape of tomato using QTL map-based cloning and how this information can be used to understand the basis of these traits in fruits such as melon and thiset al. report on the development of a two-step in vitro culture system for grape which couples the use of fruiting-cuttings with organ in vitro culture, while Fu and colleagues explore the roles of the phytoene synthase gene family in loquat. The new volume reflects the advances in technology since 2002 and the reader can now survey the discoveries concerning the molecular networks in a range of dry and fleshy fruits. The last papers in this special issue describe original research from"} +{"text": "In our experiments, we removed a major source of activation of somatosensory cortex in mature monkeys by unilaterally sectioning the sensory afferents in the dorsal columns of the spinal cord at a high cervical level. At this level, the ascending branches of tactile afferents from the hand are cut, while other branches of these afferents remain intact to terminate on neurons in the dorsal horn of the spinal cord. Immediately after such a lesion, the monkeys seem relatively unimpaired in locomotion and often use the forelimb, but further inspection reveals that they prefer to use the unaffected hand in reaching for food. 
In addition, systematic testing indicates that they make more errors in retrieving pieces of food, and start using visual inspection of the rotated hand to confirm the success of the grasping of the food. Such difficulties are not surprising as a complete dorsal column lesion totally deactivates the contralateral hand representation in primary somatosensory cortex (area 3b). However, hand use rapidly improves over the first post-lesion weeks, and much of the hand representational territory in contralateral area 3b is reactivated by inputs from the hand in roughly a normal somatotopic pattern. Quantitative measures of single neuron response properties reveal that reactivated neurons respond to tactile stimulation on the hand with high firing rates and only slightly longer latencies. We conclude that preserved dorsal column afferents after nearly complete lesions contribute to the reactivation of cortex and the recovery of the behavior, but second-order sensory pathways in the spinal cord may also play an important role. Our microelectrode recordings indicate that these preserved first-order, and second-order pathways are initially weak and largely ineffective in activating cortex, but they are potentiated during the recovery process. Therapies that would promote this potentiation could usefully enhance recovery after spinal cord injury. The last 30 years of intensive research has led to a greatly improved understanding of how the somatosensory system of mature primates and other mammals responds to sensory loss, as reviewed by others One possibility is that different lesions produce different results. For performance with the hand, the dorsal column lesion should be at the C4\u2013C5 level or higher to remove all afferents from the hand, the lower lesions at C5\u2013C6 of the cervical spinal cord spare some afferents from digit 1 and 2, and still lower lesions would spare inputs from most of the hand. Even with C4\u2013C5 lesions or higher, various afferents in the dorsal columns could be spared. (2) Compensations likely occur. As described below for our study of the effects of dorsal column lesions on behavior, vision appears to supplement tactile feedback in food retrieval tasks after dorsal column lesions. Other compensations for sensory impairments are likely to be rapidly acquired as well. (3) Plasticity of the somatosensory system, even in mature primates appears to be a major source of recovery after dorsal column lesions. Recoveries of activation within the system and use of the hand may result from the potentiation of the activating effects of preserved dorsal columns afferents, even when they are very few. This possibility is well demonstrated by the effects of cutting most of the dorsal sensory roots of nerves of the forearm in monkeys, while leaving a few inputs from that hand that initially fail to activate cortex. However, several months later cortex responded to cutaneous stimulation of the largely deafferented digits. Figure The results of our recent study of hand use in squirrel monkeys after a unilateral dorsal column lesion provide evidence that the sensory loss produced by lesions confined to the dorsal columns do produce impairments in a food retrieval task (Qi et al., The reactivation of deprived hand cortex weeks to months after high cervical dorsal column lesion depends on the potentiation of preserved sensory pathways through axon growth and the formation of new synapses. 
We have previously suggested that dorsal column lesions are often incomplete, and that preserved but subthreshold dorsal column inputs to the cuneate nucleus gain strength by forming more synapses on more neurons as the result of reduced competition for synaptic space (Rasmusson and Northgrave, The most likely alternative pathway is from neurons in the dorsal horn of the spinal cord that project to the dorsal column nuclei (Perl et al., In order to be effective, the projections to the cuneate nucleus by second-order dorsal horn neurons must survive the dorsal column lesion. Some axons from second-order neurons might enter the dorsal columns below the lesion and be lost. Others might enter above the lesion, or travel in some other pathway, such as the lateral funiculus, which is dominated by proprioceptive afferents (Rustioni et al., Given the evidence that the reactivation of the cortex by inputs from the hand is important for the recovery of manual dexterity, what can be done to promote these reactivations? Recently, we have investigated two interventions for promoting reactivations. First, we found promising evidence that behavioral training after sensory loss improved the recovery of hand use in a retrieval task and correlated with cortical reorganization [Qi et al., All authors contributed to the writing of the manuscript. The findings reviewed are those of the authors, as well as those of other investigators.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Genome sequences are making available an unprecedented amount of genetic information that has the potential to reliably elucidate many aspects of physiology, biochemistry, and evolutionary relationships of different organisms. For efficient analyses of the vast amount of genomic sequence data, new reductive approaches are needed which can identify reliable genetic characteristics that are specific for either particular species or related groups of organisms. These characteristics provide novel means for distinguishing different organisms and for understanding their evolutionary history as well as novel tools for phenotypic, behavioral, and biochemical studies. The papers collected in this research topic discuss the application of comparative genomic techniques in a wide range of modalities covering the evolution, classification, identification, and characterization of bacteria and viruses.Koton et al. utilize genomic sequence data to elucidate the evolutionary interrelationships of Vibrio vulnificus strains and to both trace the origin and identify unique genomic characteristics of V. vulnificus biotype 3. The comparative genomic analyses completed by Koton et al. are a model of how to utilize the comparative analysis of core, accessory, and unique genomic elements in order to understand the interrelationships of various strains of a single bacterial species. Bacterial classification at the genus and species level, however, is highly structured and often based on the characterization of a limited number of strains (often a single isolate) focussed on the 16S rRNA gene and a number of phenotypic and chemotaxonomic properties . This group of researchers has previously applied similar techniques to divide the genus Borrelia into two genera, Borrelia and Borreliella (Adeolu and Gupta, Thermotoga into two genera, Thermotoga and Pseudothermotoga (Bhandari and Gupta, Yang et al. 
who utilize the comparative analysis of Microcystis aeruginosa genomes to identify the CRISPR-Cas systems of different Microcystis aeruginosa subgroups, giving novel insights into their resistance to bacteriophages and their ability to modulate genomic stability and plasticity. Liu et al. utilize a number of biochemical assays to characterize the function of a laterally transferred gene cluster containing the tetR gene, identified via comparative genomic techniques. They show that the tetR gene and TetR protein contribute to cell survival under oxidative stress and are able to identify a number of regulatory roles played by the TetR protein. Taken together, these two studies exemplify the utility of studying and characterizing the genome derived characteristics that are beginning to be used to classify bacterial groups.The evolutionary relationships of bacterial organisms form the underlying basis of their classification. Chan et al. use genome sequence data to reveal the evolutionary relationships between the members of N4-like phage genus and to define the core genome of all members of the N4-like phage genus and the core genome of the N4-like Roseobacter phages, providing novel insight into the traits which characterize the genus and their evolutionary relationships. Mohiuddin and Schellhorn utilize environmental sampling and comparative genomic analyses of environmental DNA and DNA extracted from virus-like particles to examine the abundance and distribution of different classes of viruses in freshwater environments. These studies both represent useful applications of genomic sequence data and genome derived characteristics to identify and characterize viral groups and populations that are currently underexplored.Viruses and virus-like particles represent the most abundant sources of DNA in the environment (Weinbauer, The papers presented in this research topic exemplify the broad range of applications of comparative genomic analysis and genome derived characteristics in biology and related sciences. The use of genome sequence data to answer novel questions or challenge previously held assumptions is revolutionizing all fields of biology and novel methods and applications of comparative genomics are becoming an indispensable part of modern biological research.RG was the Editor for this Research Topic. I have critically read all of the papers submitted under this research topic and have written/finalized the submitted Editorial.The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Cancer is a multi-genic, complex biological phenomenon and its growth is affected by various categories of factors including the genetics of the host, the accumulating genetic alteration within cancer cells and environmental modifiers.Therefore, to identify the determinants of immune-mediated tumor rejection during immunotherapy, a systematic and comprehensive monitoring approach needs to be applied that covers the various categories at a time relevant to the therapeutic intervention. 
In addition, a temporal dimension needs to be added to evaluate in real time the changes induced by treatment and their relationship with clinical outcome.In this presentation, we will review the salient concepts that should guide the future monitoring of clinical trials taking advantage of novel and comprehensive technological advances presenting examples of integrated approaches for the assessment of patients' response to adoptive T cell therapy.Our experience highlights the need to apply multi-parametric approaches for the understanding of the mechanism(s) leading to cancer rejection by the immune system in humans."} +{"text": "In the last decades there have been significant advances in our understanding of the cellular and subcellular mechanisms underlying changes in synaptic connectivity that subserve memory formation. The so called Theory of Synaptic Plasticity and Memory has gathered a wealth of experimental support from different areas of neuroscience to become the main phenomenological description of memory at the behavioural level. This special issue of neural plasticity compiles some of the most recent advances in our understanding of the mechanisms underlying formation and persistence of different types of memories from invertebrates to humans. Contributions from different laboratories around the world pinpoint hot topics in this area of memory research, highlighting growing avenues for future research.The experience of a salient event can lead to the formation and storage of a long-term memory that can sculpt and alter future behaviour up to a lifetime of an individual. This unique and highly adaptive behavioural capacity relies on specific changes occurring within the brain. Specific signalling pathways and patterns of gene expression are required in neuronal and nonneuronal cells for the stabilization and long-term persistence of synaptic changes that underlie memory. Depending on the retrieval conditions, these fully consolidated memories can undergo reconsolidation or extinction that will maintain or inhibit the expression of the original memory, respectively. These opposing memory processes recruit distinctive subcellular events in order to restabilize the original memory or to form a new inhibitory memory trace. The formation of associative memories and their maintenance are evolutionary conserved phenomena present from the simplest to the most complex animals. The use of a multidisciplinary approach, comprising behavioural, physiological, and molecular analysis, in combination with a variety of wild and laboratory animals, from invertebrates to humans, brings light into the intricate mechanisms of memory. The three research papers and five review articles included here were revised by at least two international experts and their comments helped in making each piece an even more compelling article. Drosophila to show that muscarinic-type acetylcholine receptors contribute to the generation of olfactory aversive memory. Besides the obvious anatomical differences between vertebrate and invertebrate nervous systems, this article further supports the evolutionary conserved role of key contributors to memory consolidation. T. P. Todd and D. J. Bucci show that retrosplenial cortex (RSC) is specifically involved in forming associations among the neutral stimuli that are present in the environment. 
Furthermore, they discuss evidence that posits RSC as a site in which multiple cues are linked together in the service of memory formation and persistence after training.This Issue includes two articles addressing novel mechanisms in memory consolidation. B. Silva et al. used larvae of the fruitflyA comprehensive review article by D. Moncada et al. serves both as an introduction and a thorough revision of the existing literature regarding the experimental findings supporting the behavioural tagging process in rodents and humans. This working hypothesis links the concept of synaptic tagging proposed by Morris and Frey in the late 90s with more recent evidence of a significant promoting effect of a novel behavioural experience in the formation of new and independent associative memories. Moreover, M. Tomaiuolo et al. establish novel links between the synaptic tagging hypothesis and memory persistence, showing that a dopamine- and Arc-dependent maintenance tagging process may operate in the hippocampus late after acquisition for the persistence of long-lasting memories. Ending the persistence mechanisms section, J. B. Hales et al. investigated the effect of the zeta-inhibitory peptide (ZIP) in the persistence of recognition memory in rats. This article shows that recent, but not remote, object recognition memories can be disrupted by ZIP infusion into the hippocampus and suggests a dynamic role of hippocampal LTP-dependent mechanisms supporting strong recognition memories shortly after training.J.-P. Morin et al. discuss at length the role of the protein Arc as one of the main molecular substrates of memory. They go over its characteristics and regulation and the reasons why this molecule could be an essential part of the memory engram. They propose that Arc possesses particular characteristics like its persistent expression after learning its pre- and posttranslational regulation and its interactions with molecules at the synapse that make it an ideal candidate to mediate plasticity in the cells activated by a given learning experience.Adult neurogenesis in the dentate gyrus of the hippocampus has gained increasing interest as a potential plasticity mechanism for learning and memory at the cell and system level of analysis. The article by S. Yau et al. addresses the role of adult hippocampal neurogenesis in learning and memory focusing on novel findings that indicate a function for this process in two features of memory. One of these features is \u201cpattern separation\u201d which refers to the computational process involved in separating the representations of similar learning experiences. The second is the far less studied process of memory forgetting, which will certainly be one of the new most interesting fields in memory research. The authors incorporate this new information and relate it to treatments such as environmental enrichment and voluntary exercise, which are known to increase neurogenesis.The issue presents an article dedicated to analyse the implications of memory studies for the development of novel therapeutical tools for the treatment of psychiatric disorders in humans. In particular, C. K\u00f6hler et al. propose that the manipulation of the reconsolidation of autobiographical memories might represent a novel therapeutic opportunity for depression treatment. 
The authors suggest that disruption of memory reconsolidation could serve as a novel approach for the modification of dysfunctional autobiographical memories associated with major depressive disorder.We are very pleased to introduce this special issue that covers a variety of features of memory at different levels of analysis. The persistent nature of maladaptive memory components is a common characteristic in several psychiatric disorders including posttraumatic stress disorder (PTSD), specific phobias, and drug addiction. We believe that understanding the key molecular mechanisms underlying the formation, persistence maintenance, and forgetting of different forms of memories will prove to be invaluable at both the foundational and translational levels, helping the design and development of new therapeutical approaches."} +{"text": "The translocon at the outer envelope membrane of chloroplasts (TOC) initiates the import of thousands of nuclear encoded preproteins required for chloroplast biogenesis and function. The multimeric TOC complex contains two GTP-regulated receptors, Toc34 and Toc159, which recognize the transit peptides of preproteins and initiate protein import through a \u03b2\u2013barrel membrane channel, Toc75. Different isoforms of Toc34 and Toc159 assemble with Toc75 to form structurally and functionally diverse translocons, and the composition and levels of TOC translocons is required for the import of specific subsets of coordinately expressed proteins during plant growth and development. Consequently, the proper assembly of the TOC complexes is key to ensuring organelle homeostasis. This review will focus on our current knowledge of the targeting and assembly of TOC components to form functional translocons at the outer membrane. Our analyses reveal that the targeting of TOC components involves elements common to the targeting of other outer membrane proteins, but also include unique features that appear to have evolved to specifically facilitate assembly of the import apparatus. The plastids constitute a diverse array of organelles, which play central roles in plant growth, development, and defense by providing a remarkable range of metabolic and physiological capabilities in different cell and tissue types , recognizes the majority of plastid-destined proteins at the organelle surface and appears to be unique among chloroplast outer envelope membrane proteins (OEPs) in being targeted to the membrane al Table , with its N-terminus (including the G-domain) exposed to the cytosol and a relatively short C-terminal sequence (CTS) oriented toward the intermembrane space Table . Organelin vitro targeting experiments with isolated organelles suggest that selectivity can occur at the surface of the organelle, independent of cytosolic targeting factors in Arabidopsis , further highlighting the importance of regulating protein import at the level of the TOC translocon (Ling et al., The translocons mediating the import of nuclear encoded preproteins play central roles in the biogenesis and functional differentiation of plastids. Major attention has been devoted to uncovering the mechanism of preprotein recognition and membrane translocation at TOC and TIC, and it is increasingly clear that the assembly and regulation of these complexes play an important role in organelle function and homeostasis. 
To date, studies on individual TOC components suggest that their targeting involves elements in common with other outer membrane proteins, but also reveal features that are unique to TOC biogenesis. Detailed studies on the characteristics and relationships of targeting pathways for the TOC proteins, the identification of the components of each pathway, and the definition of the roles of known components are needed to provide a complete picture of the mechanism and regulation of TOC assembly. A more complete picture of targeting and assembly, in conjunction with information on the structures and interactions of core TOC components, will undoubtedly shed light on key regulatory or quality-control checkpoints in the assembly and dynamics of this unique macromolecular assembly. Finally, studies integrating TOC assembly with the newly discovered mechanism of TOC control by regulated proteolysis provide an opportunity to understand how the levels and diversity of the translocons are controlled, and thereby contribute to the plasticity of organelle function in response to developmental and physiological events in the cell. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Nasal reconstruction presents a challenge for plastic surgeons due to the prominent location and complex structure of the nose. When reconstructing full-thickness nasal defects, options to replace the inner lining, cartilaginous framework and outer skin must be considered. The aesthetic units of the nose must also be replaced in full rather than simply filling holes. This report presents a case of a 62-year-old male whose full-thickness alar defect, due to resection of a basal cell carcinoma (BCC), was reconstructed using a free chondro-cutaneous helical rim flap based on a retrograde-flow superficial temporal artery pedicle. The margins of the existing defect were excised by a further 5 mm as suggested by histology. A mould of the defect was made using bone cement and used to plan the harvest site of the flap from the left helical rim. The flap was raised based on a retrograde superficial temporal artery and consisted of cartilage from the helical rim, skin from the anterior and posterior surfaces of the helix and a small segment of post-auricular skin. A 10 cm inter-positional pedicle was harvested from the descending lateral circumflex femoral artery and vein from the right antero-lateral thigh (ALT). This inter-positional ALT pedicle was tunnelled under the cheek to reach the facial artery and vein in the submandibular region as the vessels in the naso-labial fold were not suitable for anastomosis. The vascularised free chondrocutaneous flap allows reconstruction of the three layers of the nasal ala: the inner lining, cartilaginous framework and outer skin, in one procedure. Due to similar sun exposure, the auricular skin also provides a good match to nasal skin in terms of colour and texture. The natural curvature of the helical rim at the root is similar to that of the nasal ala. The resulting ear defect will be reconstructed at a later stage using a cartilaginous graft from the costal margin and skin grafting. As 30% of head and neck BCCs are nasal, consideration of reconstructive options following resection is an important part of patient management.
The vascularised helical rim flap allows reconstruction of all 3 layers of the nasal ala with a good aesthetic result due to adequate matching of contour, colour and texture. The location of any scarring is inconspicuous in comparison to a local forehead flap that could also be used to reconstruct such a defect."} +{"text": "Cancer is a leading cause of death worldwide and it is caused by the interaction of genomic, environmental, and lifestyle factors. Although chemotherapy is one way of treating cancers, it also damages healthy cells and may cause severe side effects. Therefore, it is beneficial in drug delivery in the human body to increase the proportion of the drugs at the target site while limiting its exposure at the rest of body through Magnetic Drug Targeting (MDT). Superparamagnetic iron oxide nanoparticles (SPIONs) are derived from polyol methods and coated with oleic acid and can be used as magnetic drug carrier particles (MDCPs) in an MDT system. Here, we develop a mathematical model for studying the interactions between the MDCPs enriched with three different diameters of SPIONs in the MDT system with an implanted magnetizable stent using different magnetic field strengths and blood velocities. Our computational analysis allows for the optimal design of the SPIONs enriched MDCPs to be used in clinical applications. Cancer is a leading cause of death worldwide. Its cause is multifactorial and is linked to the interaction of genomic, environmental, and lifestyle factors . Cancer MDT refers to the attachment of therapeutics to magnetizable particles to concentrate them at the desired locations by applying magnetic fields . It inclThere has been a growing interest in the scientific and clinical application of MDCPs as MDT vehicles for the development of efficient treatment strategies. A nanoparticle-based cancer drug has been developed and the phase 1 clinical study of cancer patients providing positive clinical evidence for the progress of nanoparticle application is reported . Superpa3O4 nanoparticles with respect to particle diameters has been previously investigated [3O4 nanoparticles with different diameters at 300\u2009K is shown in The fraction of the constituting atoms on the surface of the nanoparticles varies with the decrease in the size of the particles (<100\u2009nm) and this leads to significant changes in the magnetic structure and properties of the nanophase materials. The variation of the magnetic behaviour of well-dispersed monodisperse Festigated . It has Here, we propose SPIONs as carriers in SA-MDT system using a magnetizable stent as an implant and focus on the theoretical modelling of the interaction between the MDCPs enriched with three different sizes of SPIONs see . The quaN MDCPs under the influence of (i) Stokes drag, (ii) hydrodynamic interaction forces, and (iii) magnetic forces that account for the mutual magnetic dipole-dipole interactions and calculated (iv) the velocity of each MDCP and MDCPn and (v) the system performance in terms of collection efficiency (CE) ignoring the effect of inertia and gravity Stokes drag, \u2009\u2009\u03b7blood is the viscosity of the blood, Rpn is the radius of MDCPn, and n, respectively. In the model, where escribed .(ii)n due to the movement of other MDCPs in the blood flow. 
By considering N MDCPs, the force acting on MDCPn due to the presence of the other (N \u2212 1) MDCPs is given byHydrodynamic interaction force, \u2009i and \u03beni is the modification due to the hydrodynamic interaction.where (iii)n, N MDCPs, each MDCP is taken as spherical and is having a homogeneous magnetic flux throughout the entire volume. n can be written asMagnetic forces acting on MDCP\u2009n and n.where (iv)n, The velocity of MDCP(v)n are obtained from evaluating the streamline functions. ConsiderThe system performance of the model is calculated in terms of collection efficiency (CE) and the trajectories of MDCP\u2009Rvessel is the radius of the vessel and y1 and y2 are defined by the location of the streamline at the entrance of control volume (CV) of the last MDCPs captured by the stent wires.where The forces acting on a given particle labeled\u2009\u2009In the current study, we present the simulation results of the behaviour of MDCPs enriched with three different sizes of SPIONs in SA-MDT system. We examine the effects of interactions on the CE of the system in terms of the changes in blood velocity and applied magnetic field strength .N (N = 100) MDCPs together with the blood flow velocity. Magnetic and hydrodynamic forces acting on MDCPs as well as blood velocity were calculated using the finite volume library OpenFOAM [We calculate the forces due to the magnetic dipole-dipole and hydrodynamic interactions on OpenFOAM . We creaOpenFOAM . In our OpenFOAM .3O4 nanoparticles) [In order to describe the effect of different SPION diameter on the content of MDCPs, rticles) is preseWe have presented SA-MTD model incorporating the agglomeration of particles known to occur in real biological systems and studied the effect of SPION diameter used in the MDCPs using different magnetic field strengths and blood velocities. We calculated both the dipole-dipole and hydrodynamic interactions for 100 particles and the resulting collection efficiencies derived from the mathematical model are in closer agreement with our latest experimental results .We envisage that new insights obtained from the results of our analysis may be used in prediction of efficacy of targeted drug delivery for designing effective nanotherapeutic tools that can translate into the clinic. The CE of the system is increased with the higher magnetic field strength and decreased with the higher blood velocities as expected. Moreover, the modelling of different sizes of SPIONs in a SA-MDT system presented in this work represents a useful analytical tool for the prediction of the efficacy of targeted drug delivery. Our simulations indicate that size of the SPIONs in MDCPs together with saturation magnetization of the SPIONs has considerable effect on collection efficiency of the SA-MDT system. The response of SA-MDT is mainly dominated by the size of SPIONs and the saturation magnetization value of SPIONs, and these parameters can be calibrated based on the clinical applications of SA-MDT system using the results of our simulation. 
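The force expressions and the collection-efficiency definition in the passage above were garbled during text extraction. As a hedged reconstruction, the standard forms used in stent-assisted magnetic drug targeting models are sketched below; the symbols follow the surrounding definitions, but these are generic expressions, not necessarily the authors' exact equations, and the CE normalization in particular is an assumption.

```latex
% Hedged reconstruction using standard forms; not necessarily the authors' exact equations.
% The pairwise hydrodynamic correction (item ii) is omitted here for brevity.
\begin{align}
  \mathbf{F}^{S}_{n} &= 6\pi\,\eta_{\mathrm{blood}}\,R_{p_n}
      \left(\mathbf{v}_{\mathrm{blood}} - \mathbf{v}_{p_n}\right)
      && \text{(i) Stokes drag}\\
  \mathbf{F}^{M}_{n} &= \nabla\!\left(\mathbf{m}_{n}\cdot\mathbf{B}\right),\qquad
      \mathbf{B} = \mathbf{B}_{0} + \sum_{i\neq n}\mathbf{B}_{\mathrm{dip},i}
      && \text{(iii) magnetic force incl.\ dipole--dipole terms}\\
  \mathbf{v}_{p_n} &= \mathbf{v}_{\mathrm{blood}}
      + \frac{\mathbf{F}^{M}_{n} + \mathbf{F}^{H}_{n}}{6\pi\,\eta_{\mathrm{blood}}\,R_{p_n}}
      && \text{(iv) overdamped particle velocity}\\
  \mathrm{CE} &= \frac{y_{1} + y_{2}}{2\,R_{\mathrm{vessel}}}
      && \text{(v) collection efficiency (assumed normalization)}
\end{align}
```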
Improvement of the fundamental models in MDT systems may allow for the development of more complex models that include systems-level interactions. The presented mathematical model for the movement of the MDCPs in the blood can be integrated with genome-scale metabolic models (GEMs) for healthy cells/tissues –33, canc"} +{"text": "While it has been used clinically in Europe since the early 2000s, molecular allergology remains relatively unknown to a number of clinicians involved in the field of allergy, and this at a time when the understanding of allergy at the level of proteins allows for novel diagnostic and therapeutic avenues. Molecular allergy offers clinicians a better understanding of the allergy dynamics of certain patients, particularly in some situations of food allergy and in multi-allergy contexts. An introduction to the science of molecular allergology, based on 6 concepts, has been developed. A concise yet clinically effective \"Molecular allergology summary 2013\" card has been devised so as to facilitate the allergy physician's first steps in the clinical use of molecular allergology. Notions of molecular similarities and the relevance of biologic testing results will be reviewed. This will assist initial assessments of specific clinical contexts where molecular allergology will prove particularly useful. A coherent appreciation of molecular families, their major component markers and their clinical character adds a whole new dimension to the evaluation and management of patients with various inhalant, nut and other food allergies. The oral allergy syndrome, from the perspective of the molecular families at play, can be better appreciated for the myriad of syndromes that they really are, each with its own prognostic and therapeutic implications. The pertinence of molecular allergy testing vis-à-vis multi-sensitized respiratory allergy patients, milk and egg allergy management and other more specific clinical conditions can be addressed. Future applications of molecular allergy will likely change considerably some of our therapeutic approaches. Molecular allergy can enhance our care of allergy patients today and opens new doors on allergy therapeutics. It also brings new parameters and business-model issues in allergy care in North America. Beyond the helpfulness of component-resolved diagnosis in the practice of allergy medicine, the leadership of clinical allergists is needed in defining its judicious application to the care of our allergic patients."} +{"text": "Childhood-onset SLE is a complex multiorgan disease. In order to judge the need for medical interventions and the patient benefits from them, measurement of disease activity and damage is key. Furthermore, important improvement and deterioration of disease need to be ascertained. Such measurements are the basis for clinical trials aimed at identifying improved medications and are needed to judge the benefits of medical interventions in general. An overview will be provided of current surrogate and biological markers of global and disease-specific disease activity and damage. Focus will be placed on the relevance for children with SLE and current research activities, particularly NPSLE and lupus nephritis. Upon completion of the presentation the audience will have a firm understanding of the suitability of individual measures of disease activity and their differences. Measures include the various versions of the SLEDAI and BILAG, as well as SLE flare tools. 
The SLICC Damage Index and its pediatric version will be discussed in addition to previously validated measures of clinically relevant improvment and disease flare. Additionally, biomarkers of lupus nephritis as they are suited to help diagnose kidney disease and anticipate changes in the activity and chronicity of kidney lesions will be available. Some novel imaging biomarkers of neuropsychiatric damage will be reviewed.None declared."} +{"text": "Musculoskeletal tuberculosis manifest commonly in the spine, hip and knee and rarely in the hand and wrist. Certain factors influence the functional outcome after treatment of tuberculosis of the upper extremity namely location , tissue involvement, delay prior to consult, care by a multispecialty team and early surgery.Appropriate drug therapy is the mainstay in the management of musculoskeletal tuberculosis. The role of medical management of tuberculosis of the upper extremity appear to have a predominant role compared to surgery with the latter reserved for drainage of abscesses, synovectomy and carpal tunnel release.The functional outcome with medical treatment alone in the series of Kotwal and Khan (2010) was good using the modified Green and O'Brien scoring system. Benkeddache (1982) reported favorable response to drug treatment with resolution of bone and soft tissue lesions. There is a scarcity of reports in this area prompting the author to undertake this study in 2013-2014 on 25 Filipino subjects.The functional outcome of patients with tuberculosis of the upper extremity aged 17 to 87 years who received appropriate medical and surgical treatment for tuberculosis of the upper extremity was determined using the DASH scoring system (Filipino version) . The majority of the patient population had surgical treatment which included wrist fusion, finger joint fusion, joint synovectomy, tendon synovectomy, arthrotomy, arthroscopic debridement and ray amputation.The mean DASH score of patients with tuberculosis of the upper extremity was 28. This score allowed patients to do ADLs without much difficulty. The type of tissue involvement and the location involved did not appear to be good predictors of functional outcome after medical and surgical treatment of tuberculosis of the upper extremity."} +{"text": "In the past few decades there has been a surge in initiatives to catalogue the diversity of the human microbiome, which have been rapidly followed by research documenting the state of the microbiome in various human physiological states and geographical populations. The evidence for a causal implication of the microbiome in disease is compelling; our so-called second genome could underlie a variety of complex diseases including immunological, neurological and cardiovascular conditions, as well as cancer. It is therefore not surprising that the deep scrutiny of the normal microbiome and its intersection with disease have drawn major scientific and public interest.Genome Medicine presents a collection of articles describing novel insights into the functional characterization of the human microbiome and the shifts in microbiome composition in disease states or recovery from disease. The issue also details the latest advances in classifying individuals according to their disease risk using computational approaches. 
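The risk-classification approaches mentioned here lend themselves to a brief illustration. The sketch below trains a random-forest classifier on a synthetic table of per-sample microbial relative abundances and reports a cross-validated AUC; the cohort size, taxon count and labels are placeholders, and this is not the pipeline of any of the studies discussed in this issue.

```python
# Illustrative sketch (not the published pipeline): random-forest classification of
# disease risk from a hypothetical per-sample table of microbial relative abundances.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_samples, n_taxa = 120, 300                      # hypothetical cohort size and taxon count
X = rng.gamma(shape=0.5, scale=1.0, size=(n_samples, n_taxa))
X = X / X.sum(axis=1, keepdims=True)              # convert counts to relative abundances
y = (X[:, 0] > np.median(X[:, 0])).astype(int)    # placeholder labels tied to one taxon

clf = RandomForestClassifier(n_estimators=500, random_state=0)
# out-of-fold predicted probabilities give an honest estimate of screening performance
proba = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
print("cross-validated AUC:", round(roc_auc_score(y, proba), 3))
```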
For example, Patrick Schloss and colleagues demonstrate that a microbiota-based random forest model complements existing screening methods to detect colonic lesions; using this classification method, early stage colorectal cancer may be detected in patients in a non-invasive manner [In this special issue, e manner .We also present the first study to develop a risk classification index for bacteremia from analysis of the fecal microbiomes of patients with non-Hodgkin lymphoma . Dan KniThese studies pave the way for translational research to exploit the therapeutic potential of modulating the microbiome. A review by Gautam Dantas and colleagues highlights the adverse effects of broad spectrum antibiotics on the dynamics between healthy and dysbiotic states; the authors expand upon the current landscape of therapeutic approaches for precision modulation of the microbiome and the advances made so far in this area . In anotThis special issue aims to provide a current view of the state of the field, including often forgotten components of the microbiome such as the virome . BuildinWe would like to express our deepest gratitude to our Guest Editor, Curtis Huttenhower, for his invaluable advice and guidance in shaping this special issue. We are also extremely grateful to the many reviewers who provided feedback and advice."} +{"text": "The human interaction required for manual motion correction/contouring of cardiac perfusion series remains a significant obstacle to quantitative perfusion gaining a wider acceptance in clinical practice. The use of image registration for motion correction in perfusion data offers a considerable time saving. Numerous registration methods have been proposed, with evaluation limited to the image registration accuracy. However, the important clinical question is how do these methods affect diagnosis? The aim of this study is to evaluate perfusion series registration in terms of its affect on the diagnostic accuracy of myocardial ischaemia.This was a retrospective sub-study using data from the CE-MARC trial . A 50-patient sample was selected such that the distribution of risk factors and disease status within the sample was representative of the full CE-MARC cohort. Image registration was performed with the mutual information image similarity metric; all images in the basal location of the series were registered with translation displacement to the maximum contrast image of the series at the basal slice. The recovered correction transforms were propagated to the medial and apical slices of the series . The AUCs for manual motion correction and automatic motion correction were 0.93 and 0.92 respectively (Figure We have shown that automated motion correction provides diagnostic accuracy equivalent to the common protocol of manual motion correction. Automated motion correction offers a significant time reduction in the human interaction required for delineation of contours for quantitative perfusion analysis, and therefore opens the way for a more widespread use of this technique in research and clinical practice.This work was funded by the Top Achiever Doctoral scholarship awarded by the Tertiary Education Commission of New Zealand."} +{"text": "The molecular machinery underlying memory consolidation at the level of synaptic connections is believed to employ a complex network of highly diverse biochemical processes that operate on a wide range of different timescales. 
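As a rough illustration of the class of multi-timescale synaptic models described in this abstract, the sketch below couples a chain of variables with geometrically spaced time constants through bidirectional nearest-neighbour interactions; the chain length, parameter values and leak term are assumptions made for illustration, not the authors' model.

```python
# Rough sketch (assumed parameters, not the authors' model): a chain of coupled synaptic
# variables u_1..u_K with geometrically increasing time constants. u_1 receives the
# plasticity event and is read out as the efficacy; bidirectional nearest-neighbour
# coupling slowly passes the trace to slower variables, so it decays far more slowly
# than a single exponential.
import numpy as np

K, steps, dt = 8, 20000, 0.1
tau = 2.0 ** np.arange(1, K + 1)   # time constants 2, 4, 8, ... (assumption)
u = np.zeros(K)
trace = []

for t in range(steps):
    du = np.zeros(K)
    if t == 0:
        du[0] += 1.0               # a single potentiation event at t = 0
    du[:-1] += u[1:] - u[:-1]      # coupling to the next (slower) variable
    du[1:] += u[:-1] - u[1:]       # coupling to the previous (faster) variable
    du[-1] -= u[-1]                # leak at the slowest variable (boundary condition)
    u += dt * du / tau
    trace.append(u[0])

print(trace[10], trace[1000], trace[19999])   # slow, roughly power-law-like decay
```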
An appropriate theoretical framework could help us identify their computational roles and understand how these intricate networks of interactions support synaptic memory formation and maintenance.Here we construct a broad class of synaptic models that can efficiently harness biological complexity to store and preserve a huge number of memories, vastly outperforming other synaptic models of memory. The number of storable memories grows almost linearly with the number of synapses, which constitutes a substantial improvement over the square root scaling of previous models ,2, especThis is achieved by combining together multiple dynamical processes that operate on different timescales, to ensure the memory strength decays as slowly as the inverse square root of the age of the corresponding synaptic modification. Memories are initially stored in fast variables and then progressively transferred to slower ones. Importantly, in our case the interactions between fast and slow variables are bidirectional, in contrast to the unidirectional cascades of previous models.The proposed models are robust to perturbations of parameters and can capture several properties of biological memories, which include delayed expression of synaptic potentiation and depression, synaptic metaplasticity, and spacing effects. We discuss predictions for the autocorrelation function of the synaptic efficacy that can be tested in plasticity experiments involving long sequences of synaptic modifications."} +{"text": "Following almost 30 years of relative silence, chikungunya fever reemerged in Kenya in 2004. It subsequently spread to the islands of the Indian Ocean, reaching Southeast Asia in 2006. The virus was first detected in Cambodia in 2011 and a large outbreak occurred in the village of Trapeang Roka Kampong Speu Province in March 2012, in which 44% of the villagers had a recent infection biologically confirmed. The epidemic curve was constructed from the number of biologically-confirmed CHIKV cases per day determined from the date of fever onset, which was self-reported during a data collection campaign conducted in the village after the outbreak. All individuals participating in the campaign had infections confirmed by laboratory analysis, allowing for the identification of asymptomatic cases and those with an unreported date of fever onset. We develop a stochastic model explicitly including such cases, all of whom do not appear on the epidemic curve. We estimate the basic reproduction number of the outbreak to be 6.46 . We show that this estimate is particularly sensitive to changes in the biting rate and mosquito longevity. Our model also indicates that the infection was more widespread within the population on the reported epidemic start date. We show that the exclusion of asymptomatic cases and cases with undocumented onset dates can lead to an underestimation of the reproduction number which, in turn, could negatively impact control strategies implemented by public health authorities. We highlight the need for properly documenting newly emerging pathogens in immunologically naive populations and the importance of identifying the route of disease introduction. Aedes albopictus) in many of these regions. This study describes a mathematical model for a chikungunya outbreak in the rural Cambodian village of Trapeang Roka, where a chikungunya epidemic was recorded and documented in March 2012. 
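The reported sensitivity of the estimate to the biting rate and mosquito longevity is easiest to see in the classical Ross–Macdonald expression for the basic reproduction number of a mosquito-borne infection, reproduced below as a standard reference form; the study's own R0 expression may differ in detail.

```latex
% Standard Ross--Macdonald form (reference expression; the study's own R_0 may differ).
% a: biting rate, b, c: per-bite transmission probabilities (vector-to-human, human-to-vector),
% m: mosquitoes per human, \mu: mosquito mortality rate, \tau: extrinsic incubation period,
% r: human recovery rate.
\[
  R_{0} = \frac{m\,a^{2}\,b\,c\,e^{-\mu\tau}}{r\,\mu}
\]
% The quadratic dependence on the biting rate a and the factor e^{-\mu\tau}/\mu explain
% why estimates are especially sensitive to biting rate and mosquito longevity (1/\mu).
```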
The outbreak data is unique, in that all infections were confirmed by laboratory analysis, enabling the identification of asymptomatic individuals, in addition to individuals who failed to report details of their infection. A stochastic model, partitioning the infectious population into three distinct classes, is implemented using Gillespie's algorithm. We show that the incorporation of both biologically-confirmed symptomatic cases undocumented by date of fever onset and asymptomatic cases yields a higher estimate of the reproduction number. Our results highlight how reproduction numbers could be underestimated by limiting analysis to the epidemic curve. Carefully documenting cases and performing laboratory testing in cluster regions, such as the village considered here, could provide a more comprehensive insight into the true infection dynamics.During the recent resurgence of chikungunya, the scale of imported cases into previously unaffected countries has caused great concern due to the presence of a competent vector ( Aedes aegyptiAedes albopictusAedes albopictus population, first detected in 1990 Aedes albopictus vector is present Chikungunya virus (CHIKV) belongs to the genus Alphavirus. It is a mosquito-borne pathogen transmitted by the Aedes mosquitoes. In humans, the virus causes an acute illness with symptoms including fever, headaches, rash and arthralgia Unlike dengue fever, which has been extensively modelled, chikungunya has only started to receive attention since its reemergence in 2005. Mathematical models have been developed to describe detailed mosquito dynamics and the host-vector interactions Aedes albopictus) in many of these regions albopictus population during the 2007 outbreak. The urgent need to establish adequate monitoring and mosquito control programs in vulnerable countries is particularly highlighted by the recent outbreak in Singapore, in which 1059 cases were recorded in 2013 The scale of imported cases into previously unaffected countries The model results highlight the importance of accurate epidemiological data collection to identify the route of disease importation and the ability of poor human recall to impact the epidemic curve by excluding cases with an undocumented date of onset. The consequences of excluding these cases was demonstrated by considering a simplified SEIR model which yielded biologically unrealistic mosquito population estimates and produced a simulated epidemic that continued for long after the real epidemic had finished. Furthermore, many individuals interviewed recalled a date of symptom onset but infection was not confirmed by IgM. This indicates that biological confirmation is crucial to avoid errors introduced due to the presence of other diseases , the impact of memory bias and the unintentional effects of people aiming to please the interviewers by confirming a non-existent infection. The results demonstrate that the identified index case was not alone in the population at the epidemic onset and the infection was already present in both the human and mosquito populations. 
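For readers unfamiliar with the simulation approach, the sketch below shows a minimal Gillespie (stochastic simulation algorithm) implementation of a host–vector SEIR/SEI model. The compartments and rate values are illustrative placeholders rather than the fitted parameters of this study, the three infectious human classes are collapsed into one, and mosquito demography is omitted.

```python
# Minimal Gillespie sketch for a host-vector SEIR/SEI model (placeholder rates).
import numpy as np

rng = np.random.default_rng(1)
state = {"Sh": 500, "Eh": 0, "Ih": 1, "Rh": 0, "Sv": 2000, "Ev": 0, "Iv": 0}
p = {"a": 0.5, "b_hv": 0.4, "b_vh": 0.4,   # biting rate, per-bite transmission probabilities
     "sigma_h": 1 / 3, "gamma_h": 1 / 5,   # human incubation and recovery rates (per day)
     "sigma_v": 1 / 2}                     # mosquito extrinsic incubation rate (per day)

def rates(s):
    n_h = s["Sh"] + s["Eh"] + s["Ih"] + s["Rh"]
    return {
        ("Sh", "Eh"): p["a"] * p["b_vh"] * s["Iv"] * s["Sh"] / n_h,  # infectious bite on a human
        ("Eh", "Ih"): p["sigma_h"] * s["Eh"],
        ("Ih", "Rh"): p["gamma_h"] * s["Ih"],
        ("Sv", "Ev"): p["a"] * p["b_hv"] * s["Ih"] * s["Sv"] / n_h,  # mosquito bites an infectious human
        ("Ev", "Iv"): p["sigma_v"] * s["Ev"],
    }

t, t_end = 0.0, 120.0
while t < t_end:
    r = rates(state)
    total = sum(r.values())
    if total == 0:                                   # epidemic has died out
        break
    t += rng.exponential(1.0 / total)                # waiting time to the next event
    probs = np.array(list(r.values())) / total
    src, dst = list(r)[rng.choice(len(r), p=probs)]  # pick an event proportionally to its rate
    state[src] -= 1
    state[dst] += 1

print(state)
```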
The infected humans could have imported the infection from nearby regions, however, it is more probable that the infection was imported by an infected visitor or a resident who was either asymptomatic or undocumented by date of symptom onset, who would not have been recorded on the epidemic curve but nonetheless seeded the epidemic within the village's mosquito population.The estimate obtained for Aedes albopictus mosquito was responsible for the first chikungunya outbreak in a temperate region recorded in the Emilia Romagna region of Italy. This outbreak highlighted the need to understand the dynamics of disease introduction into new regions and the impact of biological, human and climatic factors on invasion dynamics. The model presented here provides an estimate for Accurate estimates of"} +{"text": "Corporate identity clothes (CI-clothes) are today more and more important for companies and their identity. One important aspect is the appearance of the clothes over the long lifetime. After several washing and wearing cycles the appearance of such clothes should only show low differences between new and worn clothes. To minimize the signs of use many CI-clothes have a water and soil repellent finishing avoiding contamination. This finishing shows often a negative influence on the comfort of the clothing. The aim of the German research project 16365N was to optimize water and soil repellent finishing in combination with high industrial wash ability and high comfort.In the project, typical fabrics of CI- clothes were finished with different auxiliaries for water and soil repellency as one and two side application. The different auxiliaries characterise the plurability of commercial available products (e.g. fluorocarbon dispersions (C6 and C8), fluorocarbon free dispersions, sol-gel based dispersions). The thermophysiological and skin sensorial comfort of the finished fabrics was determined by measurements with established devices: the Hohenstein skin model, a sweating guarded-hot plate and the five Hohenstein methods for the measurement of the skin comfort. For evaluation the thermophysiological comfort vote and the skin sensorial comfort vote was determined. At the end the total wear comfort was calculated.The thermophysiological parameters of the finished fabrics of the fabric in new state and after finishing with different auxiliaries with variation of the finishing parameters and also the skin sensorial parameters were collected. The fabrics in origin state show in every case better thermophysiological properties than the finished fabrics. For all fabrics the total wear comfort of the finished products is lower than of the fabric in new state. The amount of the finishing detergents influenced the properties. In most cases high amounts of the finishing detergents degrade thermophysiological and skin sensorial properties. But with low amounts of the finishing detergents the water and soil repellency is not given.The application of the water and soil repellent finishing on two sides leads to better thermophysiological comfort vote but also to a lower skin sensorial comfort vote. The one side application has the advantage of a better skin sensorial comfort but the thermophysiological comfort is lower caused by additional used thickener of the application. 
The influence of different water and soil repellent finishing to the total wear comfort are very similar independent on the chemistry of the finishing.The results of the research project show that the physiological comfort of CI-clothes can be influenced by the used water and soil repellent finishing. The application methods (one side coating or two side application via the foulard-process) have an important influence to the thermophysiological and skin sensorial comfort. The comfort vote system which is developed for all day clothes can be also used for CI-clothes but the additional functionalization lead to lower votes. At the moment all finishing auxiliaries for water and soil repellency degrade the thermophysiological and skin sensorial comfort. New developments of auxiliaries are necessary to improve also the wear comfort. In cases of CI-clothes the question to the needs of water and soil repellent finishing must be discussed to improve very easily the wear comfort of such clothes. In the case of PPE water and soil repellent finishing is often necessary and essential for the protection of the worker."} +{"text": "Odontomas are a common type of odontogenic tumor, usually asymptomatic and mostly detected on routine radiographic examination. An 11-years-old male child with the chief complaint of mobility of deciduous dentition in the upper front region was diagnosed with an odontome with an impacted central incisor, missing lateral incisor and retained deciduous incisors following radiographic analysis. Histopathology revealed a compound odontoma following a conservative enucleation. Odontomas associated with primary dentition, impacted teeth and erupting into oral cavity have been described, but the association with a missing lateral incisor makes this an interesting case report.How to cite this article: Nammalwar RB, Moses J. A Rare Association of Compound Odontome with Missing Lateral Incisor. Int J Clin Pediatr Dent 2014;7(1):50-53. The cells of the tissues in odontomas are normal but lack organization due to disordered expression and localization of the extra-cellular matrix molecules in the dental mesenchyme.3Odontomas are a common type of odontogenic tumors and are considered to be developmental anomalies (hamartomas) rather than true neoplasm. Fully developed odontomas consist mainly of enamel and dentin and variable amounts of cementum and pulp tissue.4 Hitchin suggested that odontomas are inherited through a mutant gene or interference, possibly postnatal, with genetic control of tooth development. In humans, there is a tendency for the lamina between the tooth germs to disintegrate into clumps of cells. The persistence of a portion of lamina may be an important factor in the etiology of complex or compound odontomas and either of these may occur instead of a tooth.5Experimental production of odontoma in rats as a result of trauma was studied by Levy BA suggestive of trauma as an etiological factor.67The World Health Organization histological typing of odontogenic tumors classifes odontoma under benign tumors containing odontogenic epithelium with odonto-genic ectomesenchyme, with or without dental hard tissue formation. 
Ameloblastic fbro-odontoma, compound and complex odontoma are entities under this category.8The anterior maxilla holds a somewhat stronger tendency for being the predilection site for compound odontoma than the posterior mandible does as the predilection site for the complex odontoma.9 The radiographic features of the odontoma show two regions, the well defined periphery which may be smooth or irregular, mostly with a hyper-ostotic or cortical border and a soft tissue capsule adjacent to the cortical border. The internal structure is largely opaque with compound odontoma showing a number of teeth like structures. The degree of opacity is equivalent or exceeds the adjacent structures.10They are mostly asymptomatic, but a study of 60 cases in Department of Oral Pathology, Dankook University Hospital between 1991 and 2006 revealed delayed eruption of either the deciduous or permanent tooth; intra- or extraoral swelling; and the reporting of pain. Eighteen cases had no subjective symptoms and of the 60 cases, 55% were female.An 11-years-old male child presented to the department of pedodontics with the chief complaint of mobile deciduous teeth in the right maxillary anterior region for the past 4 days with pain, with the history of dental trauma. Clinical examination of the maxillary anterior region revealed retained missing right maxillary permanent central and lateral incisor with retained deciduous incisors . PanoramHistopathology of the soft tissue section showed delicate cellular fbrous connective tissue, with dense focal collection of chronic infammatory cells like lymphocytes and plasma cells in few areas. Few islands of odontogenic epithelial cells were seen. The mass of hard tissue was confirmed to be a compound odontoma with the decalcified section showing Odontoma is a condition in dental medicine that mostly proceeds unrecognized until the occurrence of clinical symptoms such as delayed eruption, or is incidentally detected on routine X-ray examination. The exact cause is not known, however, previous dental trauma and infection have been postulated as the potential factors in the development of odontogenic tumor as described in the literature below.67 could possibly be associated with this case. Trauma in the form of dental mutilation as a result of traditional tooth extractions a practice of the people of Africa have reported to cause malformation and dilacerations of permanent teeth.11Intrusion of the permanent dentition as a result of trauma leading to malformation in the form of hyperplasia of the permanent lateral incisor have been reported. The odontoma was reported to be located very deep for surgical removal to be carried out.12 The effects of odontoma on the primary dentition range from noneruption to impactions as in this case of an unerupted primary canine that was managed surgically with the unerupted tooth retained to allow its eruption.1314The case described in this study was diagnosed initially as odontoma as the radiographic examination showed calci-fication similar to that of teeth. Histological examination of the lesion after enucleation revealed a compound odontoma and the surrounding mass of tissue could possibly be from the connective tissue surrounding the odontoma. The etio\u00adlogy of trauma being a causative factor as suggested by Levy BA15 The treatment of choice for these impacted teeth associated to odontomas appears to be removal of the lesion with preservation of the impacted tooth. 
The latter in turn require clinical and radiological follow-up for at least one year. If no changes in the position of the tooth are seen during this period, fenestration followed by orthodontic traction is indicated. Extraction advised when the tooth is ectopic or heterotopic, with morphological alterations, or presence of cystic lesions.16 Orthodontic therapy for alignment of the impacted central incisor has been suggested.Our case history had a positive note on a traumatic epi\u00adsode and considering the study of the literature available, further research into how, for example any histological or chemical as a result of trauma should be considered. The presence of the missing permanent lateral incisor is of course a unique entity and it may be theoretically considered that trauma can be an etiologic factor. Spontaneous eruption of an impacted tooth after removal of a supernumerary tooth or odontoma depends on several factors, such as distance of the apex of the impacted tooth relative to its midline, time of surgery relative to the expected eruption time of the impacted teeth and loss of space, estimated position, depth of impaction, angle of impaction relative to the midline, time of surgery relative to the expected eruption time of the impacted teeth.The detection of odontoma is more likely an accidental radio\u00adlogical finding, hence the need for routine radiographic analysis should be emphasized. Early diagnosis of odon-tomas in primary dentition is essential in order to prevent later complications, such as impaction or failure of eruption of teeth."} +{"text": "The aim of our study is to demonstrate the surgical management of myringosclerosis over a perforated whole tympanic membrane using simple underlay myringoplasty. Simple underlay myringoplasty with fibrin glue was performed in 11 ears with myringosclerosis over the entire tympanic membrane. The patients were one male and ten females and their mean age was 61.8 years . Surgical success was defined as an intact tympanic membrane 12 months after surgery. Closure of the perforation was successful in 10 (91%) of the 11 patients. Failure of the graft occurred in one patient who then underwent a revision procedure using her stored fascia in the outpatient clinic with a successful outcome. The overall success rate was 100%. Although this study included a small number of cases, removal of myringosclerosis at the edge of a perforation is a beneficial technique for simple underlay myringoplasty in terms of the success rate and postoperative hearing threshold, especially when myringosclerosis extends over the entire tympanic membrane. Myringosclerosis is a pathologic condition affecting the tympanic membrane. It appears as whitish, sclerotic plaques in certain areas of the tympanic membrane. Histologically, there is an increase in collagen fibers as well as hyaline degeneration within the lamina propria \u20133. The pIn Japan, simple underlay myringoplasty with fibrin glue is a well-established procedure because of its high success rate and low risk of sensorineural hearing loss \u201311. The We present our operative procedure of simple underlay myringoplasty with myringosclerosis over theClosure of the perforation was successful in 10 (91%) of the 11 patients. Failure of the graft occurred in one patient who underwent a revision procedure using her stored fascia in the outpatient clinic with a successful outcome. The overall success rate was 100%. 
The air conduction hearing level postoperatively was significantly improved as compared with the preoperative level. All patients gave their written informed consent, and the study was approved by the ethics committee of Juntendo University Faculty of Medicine. Myringosclerosis is the term used to describe hyalinization and calcification of the connective tissue layer in the lamina propria of the tympanic membrane. The exact etiology and pathogenesis of myringosclerosis are not known. Myringosclerosis often occurs in patients who have undergone the insertion of tympanostomy tubes. An increased oxygen concentration in the atmosphere of the ear is associated with the increased development of myringosclerosis in a traumatized tympanic membrane. A previous study showed no significant difference in the success rate of myringoplasty between the myringosclerotic and the nonmyringosclerotic groups. In conclusion, despite the small number of cases and the lack of a control group, removal of myringosclerosis at the edge of a perforation is expected to improve graft success and yield a better-looking graft for simple underlay myringoplasty, especially when myringosclerosis extends over the entire tympanic membrane."} +{"text": "Current advances in molecular biology, together with the development of Biobanks as stable sources of biologic material, are enhancing the possibility of detecting genetic factors involved in the molecular pathogenic mechanisms of migraine, a complex neurological disorder classified as the seventh most disabling disease worldwide. To date, the migraine section of the Interinstitutional Multidisciplinary Biobank (BioBIM) of IRCCS San Raffaele Pisana has recruited 863 migraine patients and 400 healthy individuals as controls. Each biological sample has been associated with extremely detailed socio-demographic and clinical features of the donor. Our Biobank dedicated to migraine has proven to be a valuable resource to conduct molecular studies on this disease, allowing the identification of a new potential biomarker for detection of asymptomatic individuals at increased risk for migraine development, in addition to providing the basis for the design of more tailored and effective therapies. None."} +{"text": "The group of filamentous fungi contains important species used in industrial biotechnology for acid, antibiotics and enzyme production. Their unique lifestyle turns these organisms into a valuable genetic reservoir of new natural products and biomass-degrading enzymes that has not been used to full capacity. One of the major bottlenecks in the development of new strains into viable industrial hosts is the alteration of the metabolism towards optimal production. Genome-scale models promise a reduction in the time needed for metabolic engineering by predicting the most potent targets in silico before testing them in vivo. The increasing availability of high-quality models and molecular biological tools for manipulating filamentous fungi renders the model-guided engineering of these fungal factories possible with comprehensive metabolic networks. A typical fungal model contains on average 1138 unique metabolic reactions and 1050 ORFs, making them a vast knowledge base of fungal metabolism. In the present review we focus on the current state as well as potential future applications of genome-scale models in filamentous fungi. Aspergillus niger as the major source of citric acid production.
Furthermore, reflecting the saprobic lifestyle of many filamentous fungi, they harbor a great variety of biomass-degrading enzymes natively produced in high amounts. Additionally, the large diversity of bioactive compounds produced by filamentous fungi is just being recognized as a valuable reservoir of promising new natural compounds. Exploration of the biosynthetic capabilities of these organisms has been facilitated by the availability of genome sequences, thereby enabling the discovery of secondary metabolite clusters being inactive under standard laboratory conditions.Filamentous fungi have been used for decades in industrial biotechnology exploiting their ability to utilize various sources of nutrients and tolerating adverse growth conditions. For example, tolerance of low pH and the endogenous property of producing citric acid in high amounts have led to the establishment of A key requirement for the transition of a new compound into a viable commercial product is the availability of a host producing the compound in sufficiently high amounts. As the organisms are, in general, not evolutionarily optimized to produce a single compound in optimal amounts, process optimizations as well as genetic modifications have to be performed. The rational approach of modifying the metabolism of an organism in order to improve product output constitutes the field of metabolic engineering. Due to the lack of information in the pre-genomic era, the scope of metabolic engineering has been limited to individual pathways not considering inherent interdependencies in the metabolic network. The availability of genome sequence information provides the opportunity to expand the scope of metabolic engineering to the whole metabolism transforming the field towards systems biotechnology. As the process of metabolic engineering represents a bottleneck in the development of many cell factories, accurate predictions by metabolic models could help to reduce the time and costs involved by guiding the efforts towards the most promising set of modifications.Haemophilus influenza 20\u00a0years ago in filamentous fungi started considerably later with the first genome being published in 2003 for These reconstructions differ considerably in their content of legacy information included, reflecting different strategies of model establishment. The process of manual reconstruction tends to be laborious, as a maximum amount of information is considered leading to the generation of a structured knowledge base. The complementary approach aims at establishing genome-scale reconstructions (semi-)automatically based on sequence comparisons and gene assignments, enabling the prediction of genome-scale networks for less covered species. While these models generally move towards a larger number of genes and reactions included as models are progressively improved, a qualitative comparison with respect to the numbers of reactions and genes included in the resulting models of these different strategies is not directly possible. The majority of automatically generated models contain dead-ends and/or unconnected reactions that have been removed from manually created and curated models, resulting in higher number of genes included. The results from both of these approaches are fragmentary summaries of the metabolic capacities of the organisms requiring additional curation in the form of gap-filling in order to make computational analysis of the networks feasible. 
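The constraint-based analysis typically carried out on such reconstructions can be summarized by the standard flux balance analysis (FBA) linear programme below; this is the generic formulation, not the specific setup of any fungal model discussed in the review.

```latex
% Generic flux balance analysis (FBA) formulation; S is the stoichiometric matrix
% (metabolites x reactions), v the flux vector, c the objective weights
% (e.g. biomass or product formation). Not specific to any one fungal model.
\begin{align}
  \max_{v}\quad & c^{\top} v\\
  \text{s.t.}\quad & S\,v = 0 && \text{(steady-state mass balance)}\\
                   & v_{\min} \le v \le v_{\max} && \text{(capacity and reversibility bounds)}
\end{align}
```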
The mathematical background of the underlying modeling approach has been reviewed elsewhere. Sequence information has been used to assign genes to metabolic pathways in different strains, depending on the level of characterization of the individual organisms. A key aspect of metabolism in eukaryotes is the separation of metabolic reactions in different compartments. The correct assignment of metabolic enzymes to these compartments requires detailed information not readily available for a number of components in filamentous fungi metabolism. Even though this imprecision can limit the accuracy of predictions by these models, they still provide a valuable tool for data interpretation, besides being a collection of information that is easily accessible. The unique feature of genome-scale network reconstructions is the integration of different types of information from various sources, including databases such as KEGG and Swissprot as well as primary literature data. This ordered collection of metabolic reactions is curated with genomic information and literature citations, which results in a structured knowledgebase for the specific organism. While organism-specific databases pursue a similar goal, the strength of genome-scale reconstructions is the easy accessibility for computational analysis. Genome-scale metabolic modeling and tools enabling multiple genetic modifications in the same host are well established in E. coli and S. cerevisiae, but such tools have only recently become available in filamentous fungi (Oakley et al. ). Although genome-scale models have been developed for various filamentous fungi (see Table\u00a0), their predictive power has not yet matched that achieved in E. coli, where accuracies of >90 and 80\u00a0% have been achieved for single- and double gene knockouts respectively (Monk and Palsson ). Additional factors potentially hampering the final development of these models into predictive tools for genetic modifications are the low percentage of experimentally verified gene assignments, unknown subcellular localization of the different pathways, as well as insufficient data for model validation. Due to advancements in the field of high throughput omics-technologies, this necessary data for modeling is becoming more and more available, facilitating the prospective use of genome-scale models as well as the model building process. The extension of the knowledge available for incorporation will furthermore improve the prediction accuracy of existing models, rendering more advanced applications possible. This improvement of prediction accuracy over time has already been demonstrated for individual genome-scale models. Continuous extension of the models towards more detailed representations while improving their predictive power demonstrates the iterative nature of model development. The availability of predictive models for important organisms used in industrial biotechnology will provide the opportunity to choose the best performing host platform for a new biotechnological process by simulation, as optimal production of an individual compound requires a very specific metabolic performance. This comparability of genome-scale reconstructions between different species has, however, been severely hampered by the non-standardized modeling practice of the different research groups in the past. With the development of modeling standards (Le Nov\u00e8re et al. ), this situation is beginning to improve. During the past years, life sciences and biotechnology have begun the transformation from a data-poor to a data-rich discipline, introducing the need for new tools of analysis and interpretation of the generated data.
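To illustrate how such a curated reconstruction can be queried computationally, the following is a minimal sketch using flux balance analysis with the open-source COBRApy package; the SBML file name and the gene identifier are placeholders rather than references to any specific published model.

import cobra
from cobra.flux_analysis import single_gene_deletion

# Load a curated genome-scale reconstruction exported in SBML format
# (placeholder file name).
model = cobra.io.read_sbml_model("fungal_model.xml")

# Predict the maximal growth rate on the medium defined in the model by
# optimizing its biomass objective.
wild_type = model.optimize()
print("Predicted wild-type growth rate:", wild_type.objective_value)

# Simulate a single-gene knockout inside a context manager so the change
# is reverted afterwards; the gene identifier is hypothetical and must
# match an identifier present in the reconstruction.
with model:
    model.genes.get_by_id("example_gene").knock_out()
    knockout = model.optimize()
    print("Predicted knockout growth rate:", knockout.objective_value)

# Screen all single-gene deletions to rank candidate engineering targets.
deletions = single_gene_deletion(model)
print(deletions.sort_values("growth").head())

Predictions of this kind are only as reliable as the gene assignments and compartment annotations of the underlying reconstruction, which is why the curation issues discussed above matter for model-guided engineering.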
Genome-scale metabolic networks have been proven useful in contextualizing and analyzing the data acquired by the different omics-techniques. At the same time, the increase in availability and quality of omics data pave the way for the generation of more detailed and context-specific metabolic networks based on the presence of individual metabolic activities. This usage of omics data as a surrogate for the customization of genome-scale models will increase in importance as the sensitivity of the omics technologies is improving. The combination of multi-omics measurements with metabolic modeling provides a powerful tool for the detection and elucidation of unknown relations and interactions. Improvements in the molecular toolbox for genetic engineering will allow for a faster realization of model predictions in the future thereby increasing the speed of model improvement. Taken together the recent developments in the metabolic modeling as well as experimental techniques for generating relevant data are promising for an increased use of model-driven decision making in future."} +{"text": "A review of research undertaken to evaluate the biomechanical stability and biotribological behaviour of osteochondral grafts in the knee joint and a brief discussion of areas requiring further improvement in future studies are presented. The review takes into consideration osteochondral autografts, allografts, tissue engineered constructs and synthetic and biological scaffolds. Patients suffering with osteoarthritis of the knee account for just over half (4.71 million) of all individuals living with osteoarthritis in the United Kingdom. The prevalence of knee osteoarthritis and the associated socioeconomic pressures it presents are set to increase in the future; accounting for predicted increases in population obesity, growth and ageing, the incidence of osteoarthritis of the knee in the UK population is estimated to have nearly doubled by 2035 and chronic mechanical overload due to factors such as severe joint misalignments and the removal of meniscal tissue.A wide variety of surgical methods for the treatment of osteochondral defects are currently available ; these r12 Cell-based approaches to the treatment of osteochondral defects, such as ACI and matrix-assisted ACI utilising scaffolds have also demonstrated a number of inherent limitations on clinical follow-up. These limitations include (1) fibrocartilage formation, (2) incomplete defect filling and (3) limited integration with surrounding tissues.13The clinical application of osteochondral grafts in the knee currently involves the implantation of single or multiple (mosaicplasty) autologous or allogeneic grafts. The aim of osteochondral graft implantation is to achieve a congruent articular surface resembling that of the native joint in order to restore the biomechanics and biotribology of the joint. The current clinical use of osteochondral autografts and allografts is limited by a number of factors, including (1) tissue availability, (2) donor site morbidity, (3) disparity in congruency between graft and host tissues and (4) lack of integration between graft and host articular cartilage.14 By engineering an osteochondral construct, constructs may be developed with biological, structural, biomechanical and tribological properties that closely mimic those of natural cartilage and bone, which are essential for long-term performance and durability in the natural knee joint. 
To date, no tissue engineered osteochondral construct has yet regenerated functional tissues that possess the properties of native cartilage and bone.Tissue engineering of osteochondral constructs has the potential to overcome the limitations of existing therapies and provide surgical solutions with improved long-term outcomes. The design of tissue engineered constructs is often based on a combination of three fundamental elements, namely, scaffolds, cells and bioactive molecules, with the aim of producing functional tissues in vitro or in vivo.A number of approaches have been adopted in the research and development of a potential regenerative solution for osteochondral replacement; these include synthetic and natural scaffolds pre-seeded with cells in vitro or as intelligent scaffolds capable of in vivo regeneration utilising the body\u2019s own endogenous cells. Scaffolds may be monophasic, biphasic, triphasic or multiphasic in structure, consisting of one or more layers or scaffold materials with difThe predicted future population trends regarding ageing, obesity and osteoarthritis and the limitations in current therapies for the treatment of osteochondral defects indicate a clear requirement for the development of effective early stage interventions to repair or regenerate osteochondral defects in the knee. Regenerative solutions for osteochondral defect repair have the potential to delay or halt further degenerative changes and may ultimately negate the requirement for total joint replacements in the long term. This review aims to present the research undertaken to assess and evaluate the biomechanics and biotribology of osteochondral autografts, allografts, tissue engineered constructs and scaffolds for the repair or regeneration of osteochondral defects in the knee.Two major challenges exist for the successful application of osteochondral grafts and novel regenerative solutions, the first being the restoration of biomechanical and biotribological function in order to establish the correct environment for tissue repair and regeneration, which is the primary focus of this review. The second challenge is the stratification of the population and the development of segmented product interventions designed with appropriate levels of precision, which can be delivered to reliably restore function and performance. This is addressed as a future challenge in the discussion.The aim of osteochondral grafts is to restore the congruent articulating surfaces of the joint, restoring normal joint biomechanics and biotribology (the biphasic load carriage and lubrication). Achieving and maintaining the congruent articular surfaces, along with the integrated support from the underlying bone are paramount to the long-term success of osteochondral graft procedures and the prevention of further progressive degenerative changes in the joint. Graft stability in the initial period following implantation is dependent on the resistance to motion arising from the graft\u2013host interference fit and where present, support from the underlying trabecular bone structure. The graft\u2013host interference fit (press-fit) is a direct product of the material properties and geometries of the graft and the host implantation site. 
Grafts that protrude above or subside below congruency level following implantation may induce inferior biomechanical and tribological conditions in the joint, potentially resulting in the onset of degenerative changes.19 These studies have evaluated graft stability by measuring the compressive push in forces required to displace grafts a set depth below congruency level with the surrounding host cartilage.Biomechanical studies that have evaluated the effects of graft and defect geometry have shown that the primary stability of osteochondral autografts or allografts in the initial post implantation period is greater when the graft and defect length are equal , grafts with larger diameters resist greater push in forces; similarly, graft stability also increases with increasing graft length with unbottomed grafts. Conversely, shorter bottomed grafts have been shown to provide greater resistance to push in forces than longer bottomed grafts.18The stability of bottomed grafts is great20 concluded that shorter grafts and those of smaller diameter resisted significantly lower pull out loads; these results are in agreement with those obtained by Kock et al.18 and Kord\u00e1s et al.19 for unbottomed grafts during push in tests. In vivo tests in both human and animal models have also demonstrated that the lack of basal support in unbottomed grafts is likely to predispose them to a tendency to subside below congruency level.22Investigations that have assessed graft pull out strength have measured the resistance to graft movement due to the graft\u2013host interference forces. The study conducted by Duchow et al.23 showed that the implantation of congruent osteochondral grafts into the femoral condyle resulted in altered stress and strain distributions in the opposing cartilage surface when compared to an intact joint. A discontinuous contact stress profile over the graft\u2013host interface was present when the grafts were inserted congruent to the native cartilage surface. The differences in the contact stress profiles are attributable to the discontinuous cartilage surface and may negatively affect the development of repair tissue in the graft\u2013host boundary space. The finite element simulations also highlighted that there was an abnormal local tensile stress present in the opposing articulating surface of the tibial plateau when grafts were inserted either proud or countersunk to the host cartilage layer; such abnormal stresses may compromise the integrity of the opposing cartilage surface resulting in degenerative changes.Finite element simulations conducted by Wu et al.25 Kock et al. showed that the implantation of osteochondral grafts in a mosaicplasty procedure reduced the contact stress present around the edge of osteochondral defects. Furthermore, the study indicated that similar to untreated osteochondral defects, unbottomed grafts that had subsided below congruency also had increased rim stresses compared to implanted grafts that had remained congruent.22 Gratz et al.26 reported increased axial, lateral and shear strains in cartilage adjacent to defects and slightly elevated shear strains in the opposing cartilage surfaces. Grafts experiencing subsidence, similar to untreated osteochondral defects, are likely to result in altered stress\u2013strain distributions in the surrounding and opposing articular cartilage surfaces. 
Elevated stress\u2013strain levels and abnormal distributions may place the articulating cartilage surfaces of the knee joint at risk of damage through biomechanical and mechanobiological mechanisms; therefore, it is important that new graft designs have adequate material properties to provide suitable resistance to motion and loading.The effects of osteochondral defects on the contact stress of the surrounding articular cartilage have been studied experimentally. Elevated contact stresses have been shown to occur in the rim of osteochondral defects with peak stresses and increased contact stress gradients also occurring in the cartilage surrounding the defect.28 have demonstrated significant increases in contact pressure when grafts are inserted proud . Follow-up arthroscopies showed that protruding plugs displayed fibrillation at the graft edges and degenerative changes in the opposing tibial surfaces including cartilage softening and fibrillation. Studies evaluating the effects of altered joint biomechanics due to protruding osteochondral grafts indicate a clear relationship between altered contact mechanics and subsequent damage of the articulating cartilage surfaces; this is likely due to resultant changes in the local biotribology of the joint.Nakagawa et al.3033 Fibrocartilage was also shown to have grown above the subchondral bone level in the void between the periphery of the graft and the host tissue. However, fibrocartilage in-growth was inconsistent with some specimens displaying clefts in the repair tissue from the cartilage surface down to the subchondral bone level at the graft\u2013host cartilage interface.3034Several animal studies examining the outcomes of single osteochondral graft transfer and mosaicplasty procedures have indicated integration of the subchondral and underlying trabecular bone with the surrounding host bone at 3\u20136\u2009months postoperatively.34 The studies by Huang et al.34 and Nosewicz et al.21 indicated that when grafts subside less than 1\u2009mm below congruency, cartilage thickening may occur, compensating for the difference between the surface profiles of the graft and host cartilage. Subsidence of grafts below 1\u2009mm has been shown to induce significant fibrocartilaginous overgrowth; despite this, the surface profiles of the graft and host often still remain incongruent.34 These results correlate with the clinical observations of Nakagawa et al.,29 where all grafts that had subsided greater than 1\u2009mm below congruency displayed fibrocartilage overgrowth. The presence of fibrocartilage on the articulating joint surfaces is undesirable as fibrocartilage is known to be biomechanically and histologically inferior to articular cartilage; this may result in the onset of degenerative changes in the cartilage of the graft and surrounding host tissue.3537Unbottomed grafts, due to a lack of support from underlying bone, rely predominantly on the interference fit to resist subsidence below congruency in the post-operative period until integration with underlying bone occurs. A number of studies performed in ovine models to investigate the effects of osteochondral graft alignment and subsidence have shown that when grafts subside to expose the subchondral host bone of the defect walls, the articular cartilage surface of the graft is susceptible to fibrous tissue overgrowth.38 reviewed studies between 2009 and 2012 that were concerned with the evaluation of osteochondral repair by constructs implanted into animal models. 
This review indicated that only 15% of the studies reviewed reported mechanical evaluation of explants following in vivo implantation. Lopa and Madry39 in their review of preclinical studies applying biphasic osteochondral scaffolds also reported limited numbers of studies reporting biomechanical analysis of explants to assess osteochondral repair. Where mechanical testing is reported, this tends to be limited to basic indentation, stiffness and stress relaxation tests of the explanted grafts without consideration of the effects of the grafts on the whole joint system.The evaluation of the biomechanics of osteochondral autografts, allografts and tissue engineered constructs is limited in the published literature. Studies designed to assess the in vivo and in vitro development of tissue engineered osteochondral constructs have focused predominantly on morphological and histological scoring and assessment as opposed to mechanical and tribological functionality. Jeon et al.40 The cartilage layer, integrated to the underlying bone, is primarily responsible for the biotribological function of osteochondral grafts in the natural joint environment and the subsequent maintenance or disruption of the biotribology in the surrounding and opposing cartilage surfaces.Osteochondral grafts consist of an articular cartilage component and an underlying supporting bone component that also serves to anchor and constrain the articulating hyaline cartilage interface of the graft and establish the essential thin layer constrained contact mechanics in the articular cartilage layer. Cartilage is a biphasic tissue with a complex zonal structure and composition; the structure and composition of cartilage endow the tissue with exceptional functional properties allowing low friction movement under high load bearing conditions. The complex organisation of collagen type II fibres and hydrophilic proteoglycans in a dense cross-linked network results in the retention of interstitial fluid within the tissue, under the severe loading conditions in the natural knee. Load is initially carried by the fluid phase in cartilage tissue, facilitated through an increase in internal fluid pressure; this results in a very low coefficient of friction.The main aim of osteochondral grafts is to reconstruct the natural articulating surface and biphasic biotribology of the joint and restore low friction articulation, in order to resist degeneration and wear. For osteochondral grafts to be successful, they must possess adequate tribological and mechanical properties to withstand the complex loading environment within the joint. The structure, composition and subsequent material properties of osteochondral grafts (the integrated structure to replace the bone and the cartilage) must first be sufficient in the short term for the graft to support the growth and integration of repair tissue under complex loading in the joint. Second, the biomechanical and biotribological properties of the graft and repair tissues should not compromise the integrity of the surrounding and opposing cartilage surfaces. 
Osteochondral grafts aim to repair the underlying supporting bone structure and restore a near frictionless articulating surface; therefore, the biotribological performance of these grafts in the natural joint is a key factor in determining their success.4145 The use of these test methods has also been extended to study the tribology of cartilage scaffold biomaterials and tissue engineered cartilage constructs.4651 Pin-on-plate test methods involve the translation and/or rotation of a pin against a larger counterface in the presence of a lubricant. Commonly used counterface materials are natural cartilage\u2013bone plate specimens or materials such as stainless steel and glass. Small-scale in vitro pin-on-plate test methods allow for the control of experimental variables such as normal load, sliding distance and velocity, contact pressure and tissue loading and unloading intervals which dictate the outcomes under investigation such as friction and wear.52 These simple small-scale in vitro tests can be utilised to assess the tribological performance of newly developed biomaterials and engineered tissues at an early stage. The knowledge gained from pin-on-plate test methods may also be used to better understand the tribological behaviour of complex whole joint simulation models that are capable of reproducing the natural physiological conditions experienced in synovial joints such as the knee.52Biotribological evaluation is used to simultaneously study the friction, lubrication and wear of materials under compressive loading and sliding shear stress; the biotribological properties of a graft may provide a better indication of functionality compared to the uniaxial biomechanical properties alone. Pin-on-plate tribological test methods have commonly been used to study the tribology of cartilage.23 When implanting osteochondral grafts, it is particularly difficult to achieve a perfectly congruent articulating surface; therefore, it is likely that even with grafts inserted to flush level, there may be subsequent increased wear in the joint due to an increased coefficient of friction arising from a discontinuous articulating surface (edge effects).Analysis of osteochondral graft insertion into the knee joint using finite element simulations has indicated that the implantation of grafts results in altered stress and strain distributions in the cartilage surfaces, with discontinuous stress profiles noted in the region of the graft and host tissue interface.53 studied the effects of osteochondral allograft implantation on the coefficient of friction of cartilage in a caprine knee model in vitro. The study highlighted a significant increase in coefficient of friction when no graft was inserted into the defect and when grafts were inserted flush, protruding and recessed in respect to the host cartilage surface compared to the intact joint. Following osteochondral graft implantation, cartilage damage was only observed on the edge of the graft and defect hole. A lack of integration between the graft and surrounding host cartilage on the articulating surfaces may result in increased friction and wear due to the effects of the tibia moving across the edge between the graft and host tissue. 
This may induce increased levels of friction and wear at the graft edges, on the host tissue adjacent to the graft and on the opposing articulating surfaces.Bobrowitsch et al.4657 Previous in vitro experiments evaluating the friction and wear response of tissue engineered cartilage in pin-on-plate experimental set-ups have reported a higher equilibrium coefficient of friction compared to native cartilage.4757 Several of these studies have shown a time-dependent frictional response of the tissue engineered cartilage constructs49 similar to that of native cartilage. The presence of a time-dependent frictional response is indicative of the presence of some biphasic behaviour and interstitial fluid pressurisation within the tissue engineered cartilage constructs. To date, however, tissue engineered cartilage constructs have yet to demonstrate the tribological function of natural cartilage.A limited number of studies have investigated the biotribology of tissue engineered cartilage and scaffold materials designed for the regeneration of articular cartilage.56 Whitney et al.46 compared the frictional response of scaffold-free tissue engineered cartilage constructs to bovine cartilage specimens in a pin-on-plate test configuration. The tissue engineered cartilage exhibited a time-dependent frictional response similar to that of native cartilage. The measured frictional force was initially low and then increased over time before appearing to approach equilibrium; however, at later time points, the frictional coefficient of friction of the tissue engineered specimen was seen to decrease. The average reported mean frictional shear stress was not significantly different between the two groups; despite this, all of the tissue engineered cartilage samples clearly showed evidence of surface peeling with 90% of samples delaminating before equilibrium was reached. Delamination of tissue engineered cartilage in the superficial zone has also been reported previously by Plainfosse et al.40 The tissue engineered cartilage constructs studied by Whitney et al.46 were found to have a significantly lower glycosaminoglycan and collagen content compared to native tissues; similarly, the shear and aggregate modulus of the tissue engineered specimens were approximately 60% that of their natural counterparts. Lima et al.49 also reported similarly low levels of collagen in the tissue engineered cartilage samples, therefore limiting the constructs ability for internal fluid pressurisation and the maintenance of a low coefficient of friction. Plainfosse et al.40 reported that levels of matrix components, particularly collagen type II, are commonly lower in tissue engineered cartilage compared to mature natural articular cartilage and may be considered structurally immature; the tissue immaturity is reflected in the low aggregate modulus often obtained during compression testing of such constructs. Investigations by both Plainfosse et al.40 and Whitney et al.46 reported an increasing coefficient of friction with time during testing followed by a notable decrease at later time points. 
The decrease in coefficient of friction may have arisen from the generation of wear debris and the potential accumulation of wear particles on the counterface plates, acting to reduce the surface roughness and therefore resistance to motion.58Studies have shown that that the composition and structure of engineered tissues and scaffolds for osteochondral repair play a key role in dictating their frictional response and susceptibility to damage during biotribological testing.47 also reported a time-dependent frictional response for cartilage that had been engineered utilising a fibrin scaffold. The equilibrium coefficient of friction reached was higher when compared with native cartilage; however, this was reported to decrease when the engineered cartilage was cultured for longer time periods. Increased culture time was associated with an increase in deposition of surface layer extracellular matrix and an increase in proteoglycan content allowing for improved retention of proteoglycans and interstitial fluid.Morita et al.56 showed that the level of friction of acellular poly(\u03b5-caprolactone) scaffolds under shear was predominantly dependent on surface morphology and fibre orientation, which in turn determined the onset and degree of damage sustained. For acellular scaffolds with aligned fibre orientations, it was shown that alignment of the scaffold fibres in the direction of shear, as is present in native cartilage collagen fibres, was preferential in order to increase the resistance to tension and damage due to shear. The fibre orientation in the cellular scaffolds did not appear to have a significant effect on their friction and wear characteristics due to a masking effect by the deposited extracellular matrix. These tissue engineered scaffolds did not exhibit the time-dependent frictional response typically seen in native cartilage; furthermore, the cellular scaffolds also demonstrated a higher equilibrium coefficient of friction compared to the acellular scaffolds. These factors were attributed to limited culture time, a lack of extracellular matrix deposition and a subsequently limited fluid load support. The reduced load bearing capacity of the tissue was thought to result in the formation of surface damage and wear debris leading to an increase in the surface roughness and coefficient of friction.Acellular and cellular cartilage scaffolds have been shown to display differing levels of resistance to friction and wear during shear testing. Accardi et al.56 indicate that in order to improve the frictional properties, it would be beneficial to more closely replicate the tissue structure and composition of extracellular matrix components such as collagen and proteoglycans; this may subsequently allow for improved biphasic behaviour and a low coefficient of friction.40 The complex fibre structure and orientation in natural cartilage with an orientated surface layer of fibres, which carries tensile stresses, have been shown to be necessary to sustain the hydrostatic stress field and fluid pressurisation in cartilage.Studies investigating the frictional response of tissue engineered cartilage replacements60 These whole joint simulation models can be utilised to study the friction and wear properties of potential osteochondral repair solutions under a variety of dynamic loading profiles simulating those experienced in the natural joint in vivo; furthermore, such models will allow for the effects of the osteochondral repair solution on the whole joint system to be evaluated. 
As highlighted by Jeon et al.,38 appropriate mechanical and tribological assessment of explants from in vivo animal test models in addition to purely histological methods should also be carried out in order to provide valuable information for the development of successful osteochondral repair solutions.Biotribological assessment of osteochondral repair solutions in the literature is predominantly limited to small-scale pin-on-plate methodologies and basic whole joint torsion models. In order to progress the development of novel osteochondral repair solutions, such as tissue engineered constructs and their successful delivery to the clinic, effective preclinical evaluation is required in order to assess their efficacy in the short and long terms. The development of in vitro whole joint simulation models capable of reproducing the physiological and anatomical conditions of the natural joint may prove to be an invaluable preclinical testing tool.The restoration of a low friction congruent articulating surface and the stability of osteochondral repair solutions are key factors in avoiding the introduction of abnormal stress and strain distributions in the surrounding and opposing cartilage surfaces. Research has indicated a clear link between protruding grafts and cartilage wear and degeneration; similarly, recessed grafts have been shown to induce similar stress and strain changes in the adjacent and opposing articulating surfaces as untreated osteochondral defects. The subsidence of osteochondral grafts below flush level following implantation has also been shown to induce significant fibrocartilaginous overgrowth of the graft surface.Studies assessing the tribological performance of tissue engineered cartilage constructs have highlighted that the structure and composition of repair tissues play a key role in dictating their frictional response and susceptibility to damage. Although some tissue engineered osteochondral constructs have shown a time-dependent frictional response, they have yet to demonstrate the true biphasic behaviour of native cartilage. There is a clear interdependency between the biomechanical and biotribological properties of osteochondral grafts and their functional performance in the natural joint environment. The biomechanical, biotribological and structural properties of osteochondral grafts ultimately determine their ability to withstand wear and the local biotribology within the joint, therefore preventing or promoting further degenerative changes in the surrounding and opposing articular surfaces.At present, there is a distinct lack of mechanical and tribological assessment of potential osteochondral repair solutions in order to evaluate their functional performance and efficacy in the natural knee joint. In order to efficiently develop successful osteochondral repair solutions, in vitro and in vivo evaluations should not be purely limited to the assessment of gross morphology, structure and composition. Preclinical evaluations should also assess the mechanical and tribological performance, as functionality is key to producing osteochondral constructs capable of withstanding the complex loading environment of the knee joint while supporting the growth of repair tissue.In addition to the standard indentation and compression tests generally used to assess material properties, it would be useful to assess the stability of osteochondral repair solutions in the knee joint using push in and push out tests. 
These methods allow for an assessment of stability and resistance to motion, and they have previously been utilised in published research studies but have been limited to the testing of osteochondral autografts and allografts. Biotribological evaluation of osteochondral repair solutions can provide a better understanding of functional performance than uniaxial biomechanical testing alone. Small-scale biotribological pin-on-plate tests can provide key information regarding the ability of osteochondral repair solutions to restore a biphasic, low friction articulation with negligible wear. Robust preclinical assessment of osteochondral repair solutions may be achieved through the use of whole joint simulators capable of reproducing the natural physiological and anatomical conditions within the knee joint. Whole joint simulation models should allow for the biotribological assessment of repair solutions under dynamic loading profiles and the evaluation of the resulting friction, lubrication and wear in the wider joint. Future studies evaluating performance should include appropriate control groups and comparisons to existing osteochondral repair therapies; where appropriate, these may include experimental groups such as cartilage defects, osteochondral allografts or autografts, disease models and commercially available osteochondral scaffolds.Robust evaluation of osteochondral repair solutions through both in vitro and in vivo preclinical testing will aid the efficient development of current and future osteochondral repair solutions. A more systematic approach to the assessment of osteochondral repair solutions will allow for easier comparison of functional performance between different regenerative osteochondral repair strategies and to current repair strategies used in the clinic.The predicted future population trends indicate a clear requirement for the development of effective therapies for the repair or regeneration of osteochondral defects in the knee. A wide variety of strategies to produce potential regenerative osteochondral repair solutions is currently been researched; however, to date, there has been limited evaluation of the biomechanical and biotribological properties of potential osteochondral repair solutions and their effects within the natural joint environment. The structure and composition of osteochondral repair solutions have been shown experimentally to have a direct impact on the functional performance; therefore, therapies which more closely mimic the structure and composition of natural cartilage and bone tissue are likely to have improved functional properties. In addition to these improvements, the development of an effective, functional osteochondral repair solution for successful delivery to the clinic requires the implementation of robust in vitro preclinical evaluation strategies simulating the in vivo physiological conditions of the natural joint."} +{"text": "We read with interest the manuscript entitled\"Serum and peritoneal fluid levels of vascular endothelialgrowth factor in women with endometriosis\"recently published in the International Journalof Fertility and Sterility . The autin vitro is the maincontributor to the level of serum VEGF and maynot reflect the level of circulating VEGF producedby peripheral tissues (such as endometrioticlesions) . In the lesions) , small dlesions) .Furthermore, when serum is used for themeasurement of VEGF, the methodology ofcollection and processing of samples should bestandardised and declared. 
In fact, spinning thesamples for different times and with differentforces may influence the levels of VEGF .FurtherGiven these considerations, we believe thatthe authors\u2019 conclusion that \"circulating\" VEGFis similar in blood of women with and withoutendometriosis is not supported by the presenteddata."} +{"text": "Sound information is encoded as a series of spikes of the auditory nerve fibers (ANFs), and then transmitted to the brainstem auditory nuclei. Features such as timing and level are extracted from ANFs activity and further processed as the interaural time difference (ITD) and the interaural level difference (ILD), respectively. These two interaural difference cues are used for sound source localization by behaving animals. Both cues depend on the head size of animals and are extremely small, requiring specialized neural properties in order to process these cues with precision. Moreover, the sound level and timing cues are not processed independently from one another. Neurons in the nucleus angularis (NA) are specialized for coding sound level information in birds and the ILD is processed in the posterior part of the dorsal lateral lemniscus nucleus (LLDp). Processing of ILD is affected by the phase difference of binaural sound. Temporal features of sound are encoded in the pathway starting in nucleus magnocellularis (NM), and ITD is processed in the nucleus laminaris (NL). In this pathway a variety of specializations are found in synapse morphology, neuronal excitability, distribution of ion channels and receptors along the tonotopic axis, which reduces spike timing fluctuation in the ANFs-NM synapse, and imparts precise and stable ITD processing to the NL. Moreover, the contrast of ITD processing in NL is enhanced over a wide range of sound level through the activity of GABAergic inhibitory systems from both the superior olivary nucleus (SON) and local inhibitory neurons that follow monosynaptic to NM activity. The auditory nervous system is highly sensitive to changes in acoustic signals both in the frequency and the level . IPD modulates the activity of NA neurons in the chick through the acoustic interaction across the interaural canal that connects the middle ear cavities of two sides neuron but not in low CF neurons. Accordingly the EPSCs recorded in the high-middle CF NM neurons are large and generated in all-or-none manner with a small number of amplitude steps when the intensity of electrical stimulation applied to the ANFs bundle is changed, while the EPSCs recorded in the low CF neurons are small and the size gradually increases depended on the intensity of electrical stimulus , the site of action potential initiation, is extended longer in the axon of low CF NM neurons than the high-middle CF NM neurons. Clustering of a large number of Na+ channels at the AIS would allow sufficient current to generate action potentials even under a certain level of inactivation channels have a reversal potential around \u221230 mV, are activated by membrane hyperpolarization, and the voltage-sensitivity is modulated by cyclic nucleotides. Channel gating is shifted in the positive direction when the cytosolic concentration of cyclic nucleotides is high, and the sensitivity to cyclic nucleotide is greater in HCN2 than in HCN1 channel subtype Pape, . 
In the The fast time course of EPSPs is critical to enhance the coincidence detection; however the sharpness of coincidence detection depends also on the size of EPSPs (Kuba et al., \u2212, GABA was depolarizing in brainstem auditory neurons (Hyson et al., B receptors and mGluRs in NL of embryonic age (E19-E21, Tang et al., B and mGluRs are cooperative and may improve the coincidence detection in NL neurons.Because of the relatively high intracellular concentration of Clin vivo recordings from the barn owl (Pena et al., Firing rates of ITD processing neurons alternates periodically as ITD changes during a tonal stimulation, and the period of the ITD tuning curve was determined by the CF of the neuron (Goldberg and Brown, in vivo, ITD tuning was found dependent both on the sound frequency and the sound pressure level (Nishino et al., By recording single unit activity in NL A receptor (Goldstein et al., We further found a phasic IPSC in the low CF NL neurons in slice preparations, which followed the ipsilateral NM inputs with a short time delay and small timing jitter; thus the phasic IPSC likely follows monosynaptically the ipsilateral NM activity (Yamada et al., Interaural difference cues are small, particularly for animals endowed with small heads. This review has focused on works conducted on the chick, which provide profound insights into the mechanisms that contribute to the accuracy of ITD processing. Further, these studies reveal how ITD tuning is maintained over a wide range of sound pressure level in birds. The morphological specializations complement the roles of ionic channels in the ITD tuning. The distribution of ionic channels and receptors including the inhibitory synapses in the NL is critically arranged to optimize the ITD processing, and in turn, sound source localization. Moreover, timing and level cues of sounds are used cooperatively in both mammals and birds to improve the processing of small interaural difference cues.The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "As is apparent from reading the first line of nearly any research or review article on speech, the task of perceiving speech sounds is complex and the ease with which humans acquire, produce and perceive these sounds is remarkable. Despite the growing appreciation for the complexity of the perception of music, speech perception remains the most amazing and poorly understood auditory accomplishments of humans. Over the years, there has been considerable debate on whether this achievement is the result of general perceptual/cognitive mechanisms or \u201cspecial\u201d processes dedicated to the mapping of speech acoustics to linguistic representations from non-auditory sources can affect phonetic perception. The famed McGurk effect call into question the strict dichotomy of speech and general auditory processing .There are many examples of sound perception being influenced by non-auditory information. Detection of low-intensity sounds is enhanced when paired with a task-irrelevant light stimulus isolated from the full act of communication is unnatural even when bringing to bear information from other sense modalities. 
The small and context-specific sensorimotor and multisensory effects we can uncover in this artificial task (Hickok et al., The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "The recent outbreak of Ebola Virus Disease (EVD) in West Africa has ravaged many lives. Effective containment of this outbreak relies on prompt and effective coordination and communication across various interventions; early detection and response being critical to successful control. The use of information and communications technology (ICT) in active surveillance has proved to be effective but its use in Ebola outbreak response has been limited. Due to the need for timeliness in reporting and communication for early discovery of new EVD cases and promptness in response; it became imperative to empower the response team members with technologies and solutions which would enable smooth and rapid data flow. The Open Data Kit and Form Hub technology were used in combination with the Dashboard technology and ArcGIS mapping for follow up of contacts, identification of cases, case investigation and management and also for strategic planning during the response. A remarkable improvement was recorded in the reporting of daily follow-up of contacts after the deployment of the integrated real time technology. The turnaround time between identification of symptomatic contacts and evacuation to the isolation facility and also for receipt of laboratory results was reduced and informed decisions could be taken by all concerned. Accountability in contact tracing was ensured by the use of a GPS enabled device. The use of innovative technologies in the response of the EVD outbreak in Nigeria contributed significantly to the prompt control of the outbreak and containment of the disease by providing a valuable platform for early warning and guiding early actions. The recent outbreak of Ebola Virus Disease (EVD) in West Africa has been declared by the World Health Organization (WHO) as the world's deadliest international health emergency in the modern time. It has bSlow pace of manual entry of large volume of data from multiple sources using the Epi-Info database of viral hemorrhagic fevers developed by the US Centers for Disease Control on a single computer system, resulting in backlog of data.Delay in receiving reports from the Contact Tracers when they identified onset of symptoms in contacts and or new contacts, causing delay in evacuation of new suspects and registration of new contacts.High turn-around time for receipt of laboratory results leading to delays in confirmation of status of suspect cases on admission for prompt actions by the case management team.The peculiar traffic situation in Lagos and Port Harcourt which led to a delay in collation of reports from different team members.Delayed communication across response teams causing delays in taking immediate follow up actions and strategic decisions.In the recent times, IT has penetrated the domain of public health in many countries and the potential for harnessing information technology for surveillance is evident . The useDue to the need for timeliness in reporting and communication for early discovery of new EVD cases and promptness in response; it became imperative to empower the response team members with technologies and solutions which would enable smooth and rapid data flow. 
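As a rough illustration of the kind of automated daily check described in the methods below, the following minimal sketch scans a day's finalized follow-up submissions and flags symptomatic contacts for dashboard highlighting and evacuation; the field names, file name and fever threshold are hypothetical assumptions and not the actual fields or cut-offs used by the response team.

import json
from datetime import date

FEVER_THRESHOLD_C = 38.0  # assumed cut-off; the actual protocol may have differed

def flag_symptomatic(submissions):
    """Return the contact IDs that should be flagged for immediate follow-up."""
    flagged = []
    for record in submissions:
        temperature = float(record.get("temperature_c", 0))
        has_symptoms = record.get("symptoms_present") == "yes"
        if temperature >= FEVER_THRESHOLD_C or has_symptoms:
            flagged.append(record["contact_id"])
    return flagged

if __name__ == "__main__":
    # Submissions would normally be pulled from the form server; here they
    # are read from a local JSON export for illustration only.
    with open("followup_%s.json" % date.today().isoformat()) as handle:
        todays_submissions = json.load(handle)
    for contact_id in flag_symptomatic(todays_submissions):
        print("ALERT: contact", contact_id, "reported symptoms today")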
The Strategy group of the Ebola Emergency Operations Committee had a discussion with the IT team and took a decision on deploying IT methods to overcome some of these challenges. Previous experience gained in using the ODK by members of the IT team during Polio surveys and enumeration of hard to reach areas in the country served as a leverage in deploying these techniques. Both ODK and Formhub are open-source and free which need just smartphones and cloud servers to digitalize data collection. This required the setting up of a real time system to ensure that the process is automated from data capture to data retrieval. The EEOC already had a server, funds from the EEOC bought the smartphones and tablets needed, the IT team had the knowledge and expertise and the technology was open source and free. This paper describes the innovative use of public health informatics tools to improve response to an Ebola virus disease outbreak in Nigeria.www.opendatakit.org) and Form Hub Technology (www.formhub.org). Supporting technologies were the Dashboard technology and ArcGIS mapping. The contact listing form, contact follow up form, laboratory investigation request form and case investigation forms were created using the extensible markup language (xml) and eHealth Ebola Sense android app developed for 21 day follow up .Two android base applications were the major technologies used\u2014The Open Data Kit (ODK) (These forms were adapted from the WHO guidelines for Ebola Virus Disease Outbreak Response. The forms were uploaded on the Form Hub server and downloaded to android devices (smartphones or tablets) using the ODK collect application . SmartphThe initial contact listing was done using the paper forms but subsequently as new contacts were identified, the listing was done using both the paper and the ODK forms. Each day as contacts were being monitored for symptoms of EVD, the temperature reading and presence or absence of clinical signs or symptoms of EVD were recorded using the smartphones as well as the GPS coordinates of the place of visit. Once the information was entered in the ODK form on the phone, the finalized form was sent automatically to the Form Hub server that housed information on all contacts in a database.http://www.thesitewizard.com/general/set-cron-job.shtml) which generates a red highlight on the contact\u2019s information on the dashboard Large touch screen or QWERTY Keys.High speed InternetGood Mobile data plans.eHealth Sense mobile applicationODK Collect SoftwareViewing device such as television for viewing real time data and maps1Open Data Kit (ODK) Collect Application. The ODK Collect application was used in creating respective data collection forms (interfaces) which enable users enter data or edit saved data on their phones/tablets. The entered data are then submitted on finalization. The geo-coordinates were automatically embedded in the data during submission.Seven set of forms were created for the ODK collect. These were the New Contact Form, Contact Follow-Up Form, Laboratory Request/Result Form, Case Investigation Form, Social Mobilization Form, Point of Entry Screening Form and Health Facility Survey Form.2The Form Hub Technology. The hub technology forms were built based on the hard paper forms being used to collect data from the field. When new cases are identified, the details are entered into a new case form on the mobile phone by the contact tracer which is sent directly to the form hub server \u201c3Dashboard Technology. 
The Dashboard comprises of a segmented output, which focuses on providing evidence-based information to support management in taking prompt decisions. An internet-capable television is used to display the status of contacts followed-up daily, the interviewers and their performance in terms of proportion of contacts followed up \u201c4ArcGIS Mapping. Locations of identified contacts and cases were mapped using the coordinates recorded on the ODK forms and integrated to show the distribution and clustering of contacts and cases within the cities using ArcGIS software. This informed areas to be targeted for social mobilization, sensitization and awareness creation by the social mobilization teams.Improvement was recorded in the reporting of daily follow-up of contacts after the deployment of the integrated real time technology. The electronic data collection and alert system reduced the turnaround time between identification of symptomatic contacts and evacuation to the isolation facility from between 3 to 6 hours to an hour. This aided prompt evacuation of symptomatic contacts to reduce or stop further transmission of the virus in the community.Also, the turnaround time for laboratory results was considerably reduced. This aided the immediate discharge of suspected cases with negative laboratory results from the suspect ward as well as prompt movement of suspected cases with positive laboratory results to the confirmed ward and commencement of supportive treatment.In addition, the Incident Manager and strategy group had timely and complete information required to make informed decisions on all issues. Deployment of the integrated technology also provided other benefits such as simultaneous multiple data entry by different persons into a single database, thereby eliminating the dependency on a single data entry clerk. It introduced accountability in contact tracing exercise thereby ensuring that all contacts were seen physically in their homes because of the use of a GPS enabled device. It also facilitated daily collation of reports from different team members for daily evening review meetings and daily situation report.The deployment of the technology was not without some challenges and these were the initial costs of setting up the required technology , the need for trained personnel many had participated in polio surveys, some had used the ODK before and most knew how to use smartphones and tablets) and the need to have highly effective internet connections to run the applications. Also the integration of Epi-info VHF with form hub technology to align same data structure across the system posed some challenges.The use of innovative technologies in the response of the EVD outbreak in Nigeria contributed significantly to the prompt control of the outbreak and containment of the disease by guiding the prompt evacuation of symptomatic contacts from the community to the isolation facility thereby reducing contact with others and reducing the risk of transmission of the virus."} +{"text": "Discordant course of the disease in monozygotic (MZ) twins is known to be characteristic for familial amyloidotic polyneuropathy (FAP). Existing cases of FAP in MZ twins refer to amyloidosis due to mutant transthyretin (TTR) Val30Met gene. We present a case of a pair of MZ twins associated with a substitution of tyrosine to cysteine at position 114 in the TTR gene in a Russian kindered. Until now FAP due to mutant TTR Cys 114 has only been described in one Japanese and in one Dutch family. 
However, in none of them were MZ twins present. Complete laboratory and instrumental investigation of both of the Russian MZ twins was performed. A detailed life history was evaluated for each of the brothers and compared with other known cases of FAP in MZ twins. One of the twins had a prominent clinical picture of FAP and visceral amyloidosis, starting around the age of 45 years. In the meantime, the other brother was still clinically healthy at the age of 50. DNA analysis confirmed an identical mutation of the TTR gene in both brothers. Amyloid depositions were found to be similar in the intestines, but not in other locations. Both patients lived in the same district and had a similar educational background. However, the patient with the prominent clinical picture of FAP had experienced a vaccination aggravated by side effects, as well as appendicitis aggravated by severe peritonitis, in his twenties. A characteristic feature of FAP in known pairs of MZ twins is the discordance in the disease course, with a prominent manifestation in one of the twins and delayed disease onset and/or only slight presentation in the other. Genetic and non-genetic factors, or their combination, have been supposed to contribute. Non-genetic mechanisms of the phenotypic variability of FAP could consist of influences on mutant gene expression during the twinning process or over the course of life. In the Russian pair of MZ twins, different life-course events could have determined the clinical presentation of the disease."} +{"text": "To illustrate the anatomy of the renal vasculature and its variants on cross-sectional imaging. To highlight the benefits of obtaining images in both the arterial and venous phases in the staging and follow-up of renal cancer. It is common practice to perform dual phase computed tomography (CT) in the preliminary staging and subsequent follow-up of renal cancer patients in some institutions across the United Kingdom. We provide the best examples from our institution (2010-2013) with illustrations and the clinical relevance for the conditions stated below. We discuss the normal anatomy and variants of the renal artery including early division of the artery, accessory artery and double renal artery. In addition, usual and uncommon sites of hypervascular metastasis in primary renal cancer patients will be illustrated. We will highlight the normal anatomy and variants of the renal vein and associated tumour infiltration in unexpected veins and solid organ metastasis. The renal vasculature is frequently visualised on imaging but often overlooked. This exhibit will provide radiology trainees an insight into the anatomical variants and their relevance in the management of primary renal cancer. It reminds them of the common and uncommon metastases and tumour infiltration seen in renal cancer, thus affecting the outcome."} +{"text": "N = 38 healthy participants scanned with the time-of-flight (TOF) magnetic resonance technique, and normalized with procedures analogous to those commonly used in functional neuroimaging studies. Spatial colocalization of brain parenchyma and vessels is shown to affect specific structures such as the anteromedial face of the temporal lobe, the cortex surrounding the Sylvian fissure (Sy), the anterior cingular cortex, and the ventral striatum.
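As a minimal sketch of how such a voxel-wise vascular frequency map can be assembled, assuming per-subject binary vessel masks that have already been segmented and spatially normalized to MNI space (the file naming is a hypothetical placeholder):

import glob
import nibabel as nib
import numpy as np

# Collect the per-subject binary vessel masks, all resampled to the same
# MNI-space grid (placeholder file pattern).
mask_files = sorted(glob.glob("vessel_mask_sub*.nii.gz"))
reference = nib.load(mask_files[0])
summed = np.zeros(reference.shape)
for path in mask_files:
    vessels = nib.load(path).get_fdata() > 0  # any non-zero voxel counts as vessel
    summed += vessels

# Fraction of subjects (0-1) in which each voxel is labelled as vessel;
# multiplying by 100 gives percentage values such as those quoted in the text.
frequency = summed / len(mask_files)
nib.save(nib.Nifti1Image(frequency, reference.affine), "vascular_frequency_map.nii.gz")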
The vascular frequency maps presented here provide objective information about the vascularization of the brain, and may assist in the interpretation of data in studies where vessels are a potential confound.While widely distributed, the vascular supply of the brain is particularly prominent in certain anatomical structures because of the high vessel density or their large size. A digital atlas of middle to large vessels in Montreal Neurological Institute (MNI) coordinates is here presented, obtained from a sample of Digital brain atlases are an increasingly important instrument in neuroimaging research, as they provide a spatial framework for the selection, interpretation, and generalization of complex experimental data studies, heart pulsation is known to affect the signal near large arterial vessels , which results from the anastomosis of the two cerebral arteries. These vessels were localized consistently in TOF images, with voxels assigned to the vascular structure reaching frequency values of over 80% in correspondence to these vessels. Another important artery giving rise to high localization frequencies was the anterior cerebral artery and its prolongation, the pericallosal artery (PC) Figure . MNI cooThe anterior cerebral and PC arteries are commonly present as a pair of left and right vessels, running side by side deep in the medial longitudinal fissure . This partial overlap may arise from individual differences in the size and shape of these limbic structures, which displace the vessels running close to their surface was particularly conspicuous in the ventral striatum . The whole space separating the insular cortex from the frontal, parietal, and temporal operculi was characterized by fairly high vascular frequency Figure . The corThe dense web of vascular structures running along the medial wall is displayed in Figure Also the isocortex surrounding the anterior cingulus was bordered by a dense, poorly structured vascular web Figure , top rowWhile the anteroventral part of the brain is characterized by the presence of large arteries, the main vascular structures in the posterior part of the brain are sinuses in the dura mater and tentorial tissue. The sagittal and transversal sinuses were by far the most prominent collectors of venous blood, followed by the straight sinus and qualitative. The quantitative differences staked out regions where the presence of large vessels was prominent. The qualitative differences were due to different relationships between vessel course and anatomical structures, which may be ordered along a range according to the extent of intermingling.At one extreme of this range, large vessels were located in separate spaces well outside brain parenchyma, but individual variation in their course brought about a partial overlap between vessels and registered brain structures. This kind of overlap was exemplified by the relationship between the amygdala and the terminal part of the internal carotis and the CM. Even if among the least variable of all anatomical structures, the amygdala has a size that varies by a factor of two across healthy individuals .The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "It's difficult to talk about nurse's autonomy in ventilation assistance, because is a medical prescription. 
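Returning to the vascular frequency maps described above, the following is a minimal sketch of how such a voxel-wise map could be assembled, assuming each participant already has a binary vessel segmentation normalized to a common MNI-like grid. The array shapes and the synthetic masks are placeholders, not the study's actual data or pipeline.

```python
# Synthetic stand-in for the frequency-map computation: the percentage of
# subjects in which each voxel is labelled as vascular, on a shared grid.
import numpy as np

rng = np.random.default_rng(42)
n_subjects, grid = 38, (91, 109, 91)        # MNI-like 2 mm grid, for illustration

# Accumulate per-subject binary vessel masks (True = voxel assigned to a vessel).
# In a real pipeline each mask would come from a normalized TOF segmentation.
count = np.zeros(grid, dtype=np.int16)
for _ in range(n_subjects):
    mask = rng.random(grid) > 0.98          # placeholder segmentation
    count += mask

frequency_pct = count / n_subjects * 100.0

# Voxels consistently localized across subjects, e.g. the >80% values
# reported for the large arteries.
consistent = np.argwhere(frequency_pct > 80.0)
print(frequency_pct.shape, consistent.shape)
```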
However in accordance with the Code of ethics, the professional profile and law 42/99, nurses have decision and operative autonomy in nursing care, which is achieved through specific autonomous and complementary interventions of intellectual, technical-scientific, managerial, relational and educational nature -3. The tMonitoring peripheral oxygen saturation is more suitable method in the ventilated preterm (<27 weeks) because transcutaneous oxygenation monitoring is not of routine use for lack of an adequate correlation with the blood gas and for highly sensitive skin.Many studies suggest the prevention of lung damage and retinopathy of preterm concerning a prolonged hyperoxia by setting alarm limits in the event of administration of an oxygen concentration higher than 21%. Numerous clinical conditions, including the need for mechanical ventilation, can affect and change the brain oxygenation. The near-infrared ray spectrophotometry (NIRS) is a technique that allows non-invasive monitoring of oxygenation and cerebral hemodynamics. It provides a single quantitative parameter rSO2 as an index of tissue oxygenation .Compared to the intubated newborn there is not a unique method and standardized anchorage of the endotracheal tube. The quality of the attachment can vary greatly depending on the choice of the tape and according to the method of taping adopted.No recommendation exists in the literature about the method of taping: the most widely used systems include the use of one or two strips cut to Y or H. The adhesive tape, indisputably considered the system capable of guaranteeing the best results in terms of sealing, when applied with a taping system and encoded together with a hydrocolloid protective largely reduces the risk of injury-8.The aspiration of the airways in infants should be based on careful clinical assessment and not on a routine basis. It recommended to avoid suctioning the nose but use saline drops instead, then suction the oropharynx .Care posture is crucial because it promotes the stabilization of the neonatal functions and prevents bad posture .It is recommended the use of ventilation systems with manual pressure control and delivered volumes in order to safeguard the delicate lung tissue."} +{"text": "However, little information has been put forth on the correlations between coma and cerebral imaging methods. The purpose of the article is to compile the available information derived from various imaging methods and placing it in a context of clinical knowledge of coma and related states. The definition of coma and the cerebral structures responsible for consciousness are described; the mechanisms of cerebral lesions leading to impaired consciousness and coma are explained. Cerebral imaging methods provide a large array of information on the structural changes of brain tissue in the various diseases leading to coma. Circumscript lesions produce space-occupying masses that displace the brain, ultimately leading to various types of herniation. Generalized disease of the brain usually leads to diffuse brain swelling which also can cause herniation. Epileptic states, however, rarely are detectable by imaging methods and mandate EEG examinations. Another important aspect of imaging in coma is the increasing use of functional imaging methods, which can detect the function of loss of function in various areas of the brain and render information on the extent and severity of brain damage as well as on the prognosis of disease. 
The MRI methods of Coma is one of the most common clinical signs in critical care medicine and is observed in all areas of critical care. Unfortunately, little interest seems to be directed towards this ubiquitous clinical sign, and few intensivists take the time to closely observe and document phenomena of impaired consciousness and come in their patients. Usually, only global scores of impaired consciousness are documented. This is somewhat astonishing considering the fact that the functional impairment of brain function in any disease course probably has the greatest impact on the long-term outcome of the individual patient.The rapid development of cerebral imaging methods in medicine in general and intensive care medicine in particular has vastly improved our understanding of cerebral lesions and their development in the disease course. On the other hand, only little attention has been directed toward the information on consciousness and come that can be derived from imaging methods.While cerebral imaging is able to detect and assess the localization and size of space-occupying lesions, it can render only little information on the functional impairment of the brain. These functional deficits are usually better studied by neurophysiological methods, such as EEG and evoked potentials. These methods are far better suited to make statements as to the functional disturbance and its prognosis.The authors have attempted to compile an overview of the contribution of brain imaging methods to the knowledge of the phenomenon \u201ccoma\u201d. This collection of data can only shed a spotlight on various areas of scientific investigation and cannot provide an exhaustive lexical database on brain imaging and coma. Keeping these limitations in mind, the collection may increase the interest in the contribution of cerebral imaging methods in the assessment of coma.Coma is defined as a severe disturbance of consciousness, which precludes awakening, and the directed movement of extremities. The comatose person demonstrates closed eyes and shows no purposeful reaction to noxious stimuli. The quantitative reduction of wakefulness, or better of arousal function, is the main feature of this condition. Many other qualitative disturbances of consciousness may occur before the manifestation of the state of coma. If both phenomena appear together in a fluctuating or alternating manner, the diagnosis of an organic psychosis of the delirium type should be considered. The broad spectrum of etiologies overlaps and is not specific for either qualitative or quantitative disturbances.In older age, typical etiological factors include metabolic disturbances such as hypoglycemia or hyperglycemia, electrolyte imbalance as well as single and multiple organ failure including hepatic or renal failure and finally focal and global cerebral perfusion deficits especially in stroke or anoxia following resuscitation . Whenever this system is functionally impaired bilaterally, one must anticipate disturbances of consciousness ultimately attaining the degree of coma. The ARAS connects the thalamic and subthalamic nuclei with the reticular intermediary grey substance of the spinal cord. 
Here, the etiology and exact localization of the functional neuronal disturbance in the ARAS are not especially important: reversible metabolic CNS disease in the context of metabolic derangement is just as well possible as structural lesions along the thalamic loop structures.However, infratentorial space-occupying lesions with consecutive brain stem compression cause disturbances of consciousness through secondary functional disturbances within the ARAS . Even very small lesions, strategically placed bilaterally in the ARAS, can produce coma, e.g. in cases of small bilateral infarctions in the thalamo-mesencephalic border region or in the thalamic nuclei Figure\u00a0. As a ruUnilateral hemispheric lesions do not produce coma as long as they do not compromise the midline structures or lead to a secondary impairment of the brainstem structures. Both features can be caused by progressive focal cerebral edema and spreading of a lesion to the contralateral side Figure\u00a0. This prSecondary brain lesions are caused by displacement of brain tissue: The space- occupying lesions lead to lateral displacement and downward directed herniation toward the brainstem , and the immediately life-threatening lower herniation of the cerebellar tonsils which often occurs in the preterminal phase of disease. The latter compromises the lower brain stem nuclei, which are essential for the regulation of respiratory function and blood pressure regulation.Another important event immediately preceding death in coma is the development of venous congestive swelling and hemorrhages (Duret\u2019s hemorrhages) due to impairment of drainage though swelling of the brainstem tissue. The degree of displacement of the midline has been correlated in vivo with the depth of coma.In the detection of primary cerebral lesions, the terms of focal and lateralizing signs have been introduced. Usually, ipsilateral pupillary enlargement occurs in uncal herniation, due to the compression of the ipsilateral oculomotor nerve. The rare appearance of contralateral mydriasis is explained by the fact that some space occupying lesions develop less downward, but rather predominantly lateral displacement, resulting in a tipping of the neuraxis. Instead of an obstruction of the ipsilateral parts of the perimesencephalic cisterns, the contralateral fluid compartments are compressed which can be followed in imaging tests. Thus, \u201cfalse localizing signs\u201d such as a dilated contralateral pupil or spastic signs ipsilateral to the primary cerebral lesion can be well explained and corroborated by neuroradiological findings. In neuropathological terms, this corresponds to the pressure of the brainstem on the tentorium (Kernohan\u2019s notch), and clinically, this is probably closely correlated to the appearance of extensor spasms and posturing.The initial assessment of the clinical presentation is dominated by the neurological and general clinical examination. Aside from determining the vital signs , the presence of neck stiffness and focal neurological deficits must be recorded. The Tables\u00a0Psychogenic disturbances of consciousness are usually dissociative in nature and only rarely factitious disease. They can mimic any coma-like situation and often also present with focal neurological signs such as speech arrest or sensory-motor hemisyndromes. 
The verification of the psychogenic nature of these signs is very difficult to achieve, and often evolves over time, after all neurophysiological tests have failed to detect any functional disturbance. Imaging tests invariably demonstrate normal findings or may detect older premorbid conditions that are irrelevant to the presented condition of disturbed consciousness. These patients often demonstrate marked fluctuations of the neurological findings, especially dependent on the attention of the examiner, and often one finds a psychiatric history or a special emotional burden. Reactivation of a traumatic experience in the context of a posttraumatic stress disorder (PTSD) is a common cause for these dissociative states.Even if neurological focal signs are completely absent, multilocular cerebral lesions remain a possible differential diagnosis and must be sought using imaging methods Figure\u00a0.Figure 3Furthermore, disorders of consciousness are frequently seen as secondary epileptic complications such as generalized epileptic seizures after stroke. The recovery phase is usually associated with recovery from impaired consciousness during the next few hours, however, in elderly and multimorbid patients, protracted recovery can occur over several days . As far as danger to life is concerned, the generalized and persistent epileptic functional disturbances are most important. These functional disturbances are accompanied by neuronal necrosis and later by life-threatening brain swelling in combination with cerebral lactacidosis, which cannot be measured by clinical methods. Even regularly reoccurring singular epileptic seizures can produce relevant temporal lobe lesions , which are occasionally detectable by imaging methods Duncan .Although the metabolic exhaustion can be detected in the form of tissue diffusion abnormalities (cytotoxic edema) in severe cases, imaging methods at present usually do not provide relevant information for the diagnosis of epileptic functional disturbances or the minimally conscious state (MCS), functional imaging methods have provided a host of new insights into these syndromes.18\u2009F-2-Fluoro-2-deoxy-D-glucose (FDG), as well as SPECT (single photon emission tomography) has been used for these purposes. The methods provide information on regional and global function of the brain. In coma and related conditions, they usually show severe global decrease of brain activity.Imaging methods have been used for over 20\u00a0years to detect the extent of cerebral damage in severe diseases of the brain. The method of Positron-Emission-Tomography (PET), most frequently using the tracer On examination of patients in PVS using FDG PET, which measures the cerebral glucose metabolism, the global cerebral glucose metabolism was reduced by approximately 50% has been implemented to overcome this dilemma. The goal was to measure a change in brain activity in response to stimuli even if a motor response of the patient was not possible. FMRI is a method to examine activated brain areas with high spatial resolution. The method is used for a number of different motor, sensory, and neuropsychological functions. The activation of a certain brain area, which is activated during a specific task, leads to a change in blood oxygenation. This, on the other hand, leads to a change in the magnetic properties of the hemoglobin molecules, which can be detected by MRI. The signal, which is dependent on the oxygen level of the blood, is called the BOLD (blood oxygen level dependent) signal. 
In this context, the examination is based on the observation that pure thought (imagination) of motor functions leads to activation of certain cerebral areas. The increased oxygen consumption leads to a change of the BOLD signal, which is detected by MRI.The regions activated by thought tasks could be compared between healthy subjects and patients suffering from PVS or MCS. Owen et al. . The most important methods are native x-ray diagnostics, angiography, computed tomography (CT), and magnetic resonance imaging (MRI).The Ethics Committee of the University of Magdeburg approved the anonymous evaluation of data of comatose patients after head injury for scientific purposes without consent of the patients or their legal guardians (decision 131 of 1997).The indications and relevance of these four examination methods will be discussed in the order of their historical development.Today, the skull x-ray only plays a historical role in the treatment of TBI patients. In patients with impaired consciousness and a skull fracture, the probability of an intracranial hemorrhage amounts to 1 in 4 .To date, the CT scan is the method of choice in TBI and cases of unconsciousness since it can detect density changes between brain tissue and acute hemorrhages with high resolution implicate that imaging methods can be utilized to define criteria for treatment success or treatment failure which can be detected in an early stage and can be used for confirmation and documentation of clinical findings. The findings and their technical implementation are focused on better understanding of brain reactions to the primary disease process and its dynamics during treatment.One of the goals of imaging methods is to give a basic view for the detection of loss of function, the documentation of irreversible loss of function as well as the irrefutable documentation of the time point of the total and irreversible loss of brain function which is needed in the determination of brain death to document the cause of death and to reliably assess the absence of therapeutic options.Examinations pertaining to morphological finding of the spinal cord in the phase of spinalisation, i.e. after declaration of brain death have not been published so far. Neuropathological findings in brain death have been referred by Wijdicks and Pfeifer whereas the 2 phase CT demonstrates the loss of perfusion of the MCA and the perfusion loss of the large cerebral veins seems to increase the specificity of prognostication up to 97% , which is performed using this diagnosis. There is a new discussion on the problem of decision making on the treatment in patients suffering from a terminal coma or imminent brain death. Criteria of the FOUR score include a catastrophic brain lesion, which has to be documented by adequate imaging techniques, e.g., multimodal MRT imaging (Nichol et al. The compiled data of imaging methods in the context of coma is provided to give an overview of the current knowledge concerning the contribution of brain imaging methods in coma and related states. The focus of the article is not to provide an exhaustive review of all of the various facets of brain imaging in single disease entities, but rather to give information on the approaches to the differentiation of comatose states by imaging methods. Therefore, no attempt was made to describe the vast information gathered in specific disease entities such as stroke or brain tumors, much less in traumatic brain injury. 
The approach to the clinical sign \u201ccoma\u201d is a clinically relevant task, which requires several different approaches. The first and most important approach is the clinical examination, which can differentiate between supratentorial, infratentorial and diffuse brain damage. With this information, it becomes possible to narrow down the differential diagnosis to certain etiologies and brain regions in which the primary cause of disease should be sought. The contribution of imaging methods to this differential diagnostic approach is compiled. The next step is the assessment of disease prognosis. This question can again be based on the clinical findings, and imaging methods can used in certain areas to enhance the clinically assessed prognostic data. Here, certain neurophysiological methods render the most relevant prognostic statements. Imaging methods are especially valuable in the prognostic assessment of PVS and MCS, an area where neurophysiological methods provide little information. The proper selection of examination methods for diagnostic and prognostic purposes in coma and related states requires broad knowledge of the strengths and weaknesses of the various methods. Since the various examination methods are performed by a number of different medical subspecialties, the neurologist caring for comatose patients is in need of knowledge as to what examinations to have performed and for which purpose. As in many circumstances in critical care medicine, the unreflected application of a host of possible diagnostic tests does lead to a better understanding of the disease process per se. The correct selection of technical examinations, depending on the nature of the clinical problem on hand, renders highly sensitive and reliable information in many specific clinical situations. This is especially true in the diagnosis of the terminal phase of comatose patients where multimodal MRI methods may increase the prognostic accuracy earlier in the clinical course."} +{"text": "While historically acetylcholine (ACh) holds a central place in the discovery of chemical transmission in the nervous system, progress in our knowledge of the mechanistic underpinnings of cholinergic transmission in the central nervous system (CNS) has lagged behind its sister transmitters. For example, unlike what we know about glutamatergic transmission, neither the prevalence of fast cholinergic transmission or postsynaptic specializations at cholinergic synapses, is well understood. Every level of inquiry into the cholinergic system reveals a bewildering complexity in signaling and transduction mechanisms. Receptor localizations within and among neurons, along with transmitter source and access to receptors, provide for a system that is capable of neuromodulation at multiple time scales that allow for short- and long-term regulation of circuit output in the brain. The compilation of reviews and primary papers in this research topic provides a sampling of findings at different levels of integration that highlight both the current status of our understanding of CNS cholinergic mechanisms as well as reveals the gaps in our knowledge that need to be filled.How ACh mediates signaling in the brain is still an unresolved issue at the level of synaptic physiology. Cholinergic innervation in the brain arises from two main loci- the basal forebrain and the pendunculo-pontine area of the hindbrain. In both instances a relatively small cluster of neurons send diffuse projections into various target areas. 
The diffuse nature of the projections, as well as the non-planar relationship between the cholinergic neurons and their targets precludes traditional slice electrophysiology approaches to examining signaling. Recent advances in optogenetic techniques offer a potential solution to this problem and some of the advances made thus far are reviewed in this research topic but send dense projections throughout the area. Recent findings that are summarized here and the metabotropic muscarinic acetylcholine receptors (mAChRs), their expression in multiple neuronal types within a region, and varying locations within a neuron all orchestrate a complex symphony of neuromodulation. A review of receptor localization and function within hippocampal CA 1 interneurons (McQuiston, These local control mechanisms can regulate circuit outputs in various regions of the brain, potentially mediating the attention and learning-specific behaviors ascribed to ACh-driven modulation. In the olfactory system, nAChR activation can filter odor signals such that weak inputs are eliminated while strong ones are allowed through, thus providing a mechanism for altering the gain function of a circuit (D'souza and Vijayaraghavan, Cholinergic receptors are a potential gold mine as targets for therapies and pharmaceutical companies have not been diffident about exploring various receptor ligands as potential therapies. Emerging studies implicate cholinergic receptors in various addictive mechanisms: the direct interaction between cocaine and nAChRs reported in this research topic (Acevedo-Rodriguez et al., This is prime time for the development of clinical therapies that target the cholinergic system for a host of brain disorders. At the same time, as the collection of studies in this research topic illustrates, much remains to be understood regarding the physiology of cholinergic transmission and modulation. There is a risk of pharmacological advances outpacing our knowledge of cholinergic signaling mechanisms in the brain. History tells us that such a disconnect between therapeutic advances and our knowledge of physiology can often lead to unintended complications from novel therapies, a classic example being the aggressive marketing of heroin as a cough remedy at the turn of the twentieth century. There needs to be significant investment in examining the basic biology of cholinergic modulation of brain circuits in order to develop more rational and safe therapeutic targets that the cholinergic system offers.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "This article presents authors\u2019 methods of digital reconstruction using composite tissue transfer. The authors present their approach to achieve the goal of restoring full cosmetic appearance of the reconstructed toes while preserving the function and cosmetic appearance of the donor foot. The reconstructive procedures for each degree of digit defect are discussed in detail, and pitfalls and technical tips are given. The article is a summary of the authors\u2019 experience in reconstruction of 646 digits since 1998 and the challenges we faced in the complex microsurgical reconstruction necessary in pursuing the goal of restoring the cosmetic appearance of reconstructed digits and donor feet.1. We present our methods of digital reconstruction using composite tissue transfer.2. 
Our major aims of surgery are to improve the cosmetic appearance of reconstructed hands and donor feet.3. We describe a number of methods using modified or novel designs for composite tissue transfer from the foot to improve cosmetic appearance of the reconstructed digits and to improve both the cosmetic appearance and function of donor feet.4. The article provides technical details of the methods we used to move closer to an ideal reconstruction of digits and minimizing donor foot morbidity."} +{"text": "Background. Descriptive evaluation of nerve variations plays a pivotal role in the usefulness of clinical or surgical practice, as an anatomical variation often sets a risk of nerve palsy syndrome. Ulnar nerve (UN) is one amongst the major nerves involved in neuropathy. In the present anatomical study, variations related to ulnar nerve have been identified and its potential clinical implications discussed. Materials and Method. We examined 50 upper limb dissected specimens for possible ulnar nerve variations. Careful observation for any aberrant formation and/or communication in relation to UN has been carried out. Results. Four out of 50 limbs (8%) presented with variations related to ulnar nerve. Amongst them, in two cases abnormal communication with neighboring nerve was identified and variation in the formation of UN was noted in remaining two limbs. Conclusion. An unusual relation of UN with its neighboring nerves, thus muscles, and its aberrant formation might jeopardize the normal sensori-motor behavior. Knowledge about anatomical variations of the UN is therefore important for the clinicians in understanding the severity of ulnar nerve neuropathy related complications. Ulnar nerve (UN) is the major branch of the medial cord of brachial plexus, given off in the axilla. It consists of fibers of ventral rami of C8 and T1 spinal nerves. But it often receives additional contribution of C7 fibers through median nerve via its lateral root [An anomalous pattern could be appreciated in the division of trunks and formation of the cords of brachial plexus. However, no such aberrations usually could be noticed in the subsequent arrangement of their terminal branches . AnomaloUN variations are consistently located in the origin or course of the distal branches. But, in the available literature describing these variations, it has been noted that much variations in communications between neighboring nerves do exist either in the forearm (most commonly) or in the hand. The present anatomical study aims to identify the variations related to ulnar nerve and to discuss the potential importance of such variations in clinical practice.The present study was conducted in the Department of Anatomy, Melaka Manipal Medical College , Manipal, India. It involved examination of 50 upper limbs of formalin embalmed human cadavers aged between 45 and 60 years. Of the 50 upper limbs 24 were right sided and 26 were left sided. The axillary region of all the limbs was exposed carefully after clearing the entire fascia and to look for the anatomy of ulnar nerve formation from medial cord of brachial plexus. The dissection was further continued towards the anterior compartment of the arm in order to probe any communications of ulnar nerve with neighboring peripheral nerves at high humeral level. Meticulous observation of variant forms and/or abnormal communication if any was made. 
In the case of persistence of variant communication between neighboring peripheral nerves, the length of the communicating channel has been measured and documented. Relevant photographs of representative specimens were taken and produced herewith.We observed a total of 4 ulnar nerve variations out of 50 upper limbs. It accounted for 8% of incidence cases. All four variations of ulnar nerve were observed in right upper limbs. Aberrant formation of the ulnar nerve with a remarkable contribution from the lateral cord of brachial plexus was seenIn the remaining two cases, ulnar nerve had abnormal communications with the neighboring nerves, radial nerve , and medIn case of the communication from medial cutaneous nerve of forearm, the nerve gave off a communicating branch on its medial aspect . The comAn effective brachial plexus blockade can be achieved, given that there is possession of thorough knowledge of all possible anatomical variations that can be appreciated in the form of either abnormal origin of its branches or the variant communication between its branches in addition to normal anatomy.Several studies have reported communications between terminal branches of brachial plexus in the forearm as well as in hand. Different terminology has been allotted to the existence of communication between median and ulnar nerve based on the location of their existence as Riche- Cannieu anastomosis in the palm, Martin-Gruber anastomosis and Marinacci communication in the forearm, and Berrettini anastomosis manifested by the communication between digital branches in the palm. Descriptive features of these communications have been discussed by Dogan et al. . HoweverVariant formation of the UN is not common since there are few reports about it existence. Sachdeva and Singla reported a rare origin of UN from median nerve . In theiIn the present study, we came across with the rare aberrant formation of UN by the remarkable contribution from lateral cord in two of 50 upper limbs. Variations in the origin, course, and distribution of nerves are prone to iatrogenic injuries and entrapment neuropathies . TherefoUnusual communications between the branches of brachial plexus are often seen in medial and lateral cords . AberranVery rarely might a communication be observed between ulnar nerve and medial cutaneous nerve of forearm. Medial cutaneous nerve of forearm is found to communicate with medial cutaneous nerve of arm and radiDiverse opinions of ontogeny of these communications have been put forward by researchers. Probably the simplest cause can be attributed to altered coordination between the mesenchyme responsible for limb muscle and its spinal nerve component which might have led to the formation of aberrant communicating branches .Investigations of peripheral neuropathies are based upon patterns of functional deficits and diagnostic testing. Therefore, an anatomical variation can often lead to confounding patterns of physical and diagnostic findings. According to Ajayi et al., anatomical variant communication between branches of the brachial plexus could obscure the management of complex regional pain syndrome .Ulnar neuropathies are the most frequent causes of nerve injuries as reported by Kroll et al. and they account for a majority with a prevalence of 33%, which is followed by 23% of incidence cases by brachial plexus injuries . 
AnatomiTherefore, knowledge on the variant pattern of peripheral nerves is imperative not only for the surgeons, but also for the radiologists during image technology and MRI interpretations and for the anesthesiologists before administering anesthetic agents thus in diagnostic approaches . Damage Awareness of an anatomical variation of UN both in its formation and in abnormal communication at the high humeral level is essential because of the frequency of surgeries performed in these regions for various reasons as well as in diagnostic approaches of management of ulnar neuropathy. Knowledge about anatomical variations of the peripheral nerves is therefore important for the clinicians in understanding the severity of neuropathy related complications."} +{"text": "Healthcare in India comes mostly as an out of the pocket payment as health insurance is virtually absent for majority of the population. Determination of the typical hospital cost and length of stay (LOS) are of decisive importance for patients, healthcare providers, and payers who need to make evidence based rational decisions about patient care and the allocation of resources.A cross sectional pilot study was performed and the data for LOS and hospital cost was collected from the medicine ICU and the accounts section for 38 patients with resistant bacterial infections who were admitted to Intensive care unit. The data was analyzed for average LOS and hospital cost for infections. The data was analyzed using univariate analysis.Almost 29% of the patients with resistant infections were between 18-40 years of age. The mean cost of hospitalization was found to be INR76082.79 and only 34% of the patients were having any insurance coverage. The median length of stay in the hospital ICU was 8.5 days. 37% of the patients were on ventilator during the hospitalization and 25% of the total patients died.Resistant bacterial infections may lead to a high LOS in the ICU furthermore adding to the economic burden of the patient, healthcare provider and the payer. These results can be used to study the treatment alternatives in case of resistant infections and making them cost effective at the same time stressing upon the need of health insurance among the population."} +{"text": "Spontaneous ruptures of the quadriceps tendon are infrequent injuries, it is seen primarily in patients with predisposing diseases such as gout, rheumatoid arthritis and chronic renal failure. A 32-year-old man had a history of end stage renal disease and received regular hemodialysis treatment for more than 5 years. He was admitted in our service for total functional impotence of the right lower limb with knee pain after a common fall two months ago. The radiogram showed a \u2018'patella baja\u201d with suprapatellar calcifications. The ultrasound and MRI showed an aspect of rupture of the quadriceps tendon in its proximal end with retraction of 3 cm. Quadriceps tendon repair was performed with a lengthening plasty, and the result was satisfactory after a serial rehabilitation program. The diagnosis of quadriceps tendon ruptures needs more attention in patients with predisposing diseases. They should not be unknown because the treatment of neglected lesions is more difficult. We insist on the early surgical repair associated with early rehabilitation that can guarantee recovery of good active extension. 
Ruptures of the extensor mechanism of the knee joint are defined by the existence of a solution of continuity on the chain bone muscle tendon which provides the extension of the leg on the thighs: patellar tendon, quadriceps tendon and muscle, anterior tibial tuberosity and patella. Spontaneous ruptures of the quadriceps tendon are infrequent injuries; it is seen primarily in patients with predisposing disease such as gout, rheumatoid arthritis and chronic renal failure.Several factors probably combine to weaken the tendon, including a loss of local vascular supply, repeated microtrauma and osteodystrophy secondary to hyperparathyroidism Through one case of spontaneous rupture of the quadriceps tendon occurred in a patient with chronic renal failure we stressed the difficulty of therapeutic management in this patient and the benefit of early rehabilitation.Our patient, a 32 year old man, employee, without any sport practice, had a chronic renal failure, with hemodialysis dependence for 5 years, was admitted in our service for total functional impotence of the right lower limb with knee pain after a common fall two months ago.Clinical examination revealed an inability to walk, a loss of active extension of the knee joint and palpable tendinous suprapatellar defect . RadiogrMRI confirmed the diagnosis of rupture of the quadriceps tendons and eliminates injuries of the cruciate ligament, the menisci, collateral ligaments and patellar tendon . SurgicaPost-operatively, the knee was immobilized with a removable knee splint for a period of six weeks. A week after, the patient was then put on program of rehabilitation based primarily on isometric contractions of the quadriceps and passive mobilization of the knee One month later, an active mobilization was initiated. The recovery of walking was allowed after 6 weeks with protection of external support (crutches). The patient was clinically evaluated each week for a month and every month.After three years of patient follow-up, the results were graded good, based on the range of motion of the knee, the strength of the quadriceps muscle and the ability of the patient to walk with one crutch. The joint assessment found a patella mobile with active extension and flexion.Quadriceps tendon ruptures are uncommon injuries. They most often occur on road accident in the population under 40 years . The spoChronic renal failure can cause several complications related to the process of dialysis as amyloidosis in which there is abnormal production of beta 2 microglobulin. This molecule normally metabolized by the kidney is accumulated in chronic hemodialysis patients with rates above 30 to 40 times normal, it tends to accumulate in specific structures such as bone and tendon reducing its elasticity and predisposes to ruptures after minimal efforts [Besides amyloidosis, secondary hyperparathyroidism in chronic renal failure leads to dystrophic calcification and resorption of subperiostal bone causing fragility of the bone tendon junction \u20137. 
This The injury mechanism is essentially indirect muscle contraction during an extension movement of the thigh on the leg or during a forced flexion above 45\u00b0 where the balance of forces between the quadriceps tendon and patellar tendon is reversed The diagnosis of tendon rupture is easy in the acute phase with the existence of a suprapatellar depression and pain on palpation a patella in the low position relative to the opposite side and inability of active extension of the knee.In the neglected forms clinical diagnosis is more difficult, the defect may not be evident because of scar-tissue formation, and the signs are present at lower levels which give more importance to the complementary examinations.The lateral view radiograph of the knee shows a patella baja and suprapatellar calcifications Ultrasonography of the soft tissues shows in case of total rupture, a complete interruption of tendon fibers separated by a hypoechogenous track (hematoma), but it remains operator-dependent examination .In case of partial rupture it reveals a partial rupture of the tendon in the transverse plane or dissection of the fibers in the longitudinal plane.Finally MRI shows high signal on T2 signing hemorrhage or edema. She finds its indications in the neglected forms if the clinical examination and ultrasonography cannot conclude. The treatment of a complete rupture of the quadriceps tendon is surgical but it is not always easy. The techniques are quite diverse because of the variety of lesions encountered, recent or old, and evolution of the conception of surgical techniques.According to various authors, the repair of the neglected forms after the sixth week remains difficult because of the tendinous retraction requiring a lengthening plasty. In these cases it is recommended to use Scuderi repair and Codivilla V/Y plasty and whenThe postoperative immobilization pedal for about 6 weeks, time of healing, is recommended this immobilisation must be made with 15\u00b0 on flexion to avoid the creation or perpetuation of a patella baja.Rehabilitation is conducted by early passive mobilization of the knee on the first day after surgery, taking into consideration the stability and solidity of surgical repair. This work will be replaced by an active rehabilitation witch it is essentially based on the stretch of the quadriceps, the gradual increase in quadriceps muscle strength and the change in execution speed of movement with the aim of this work is to reinforce the tendon.The spontaneous quadriceps tendon ruptures are uncommon. They should not be unknown because the treatment of a neglected lesion is more difficult. The large number of plasties described in the literature shows the absence of codification of this surgery. We insist on the early surgical repair associated with early rehabilitation that can guarantee recovery of good active extension."} +{"text": "Iatrogenic injury during the posterior approach to the humerus during operative fixation is not an uncommon occurrence. A comprehensive understanding of the normal anatomy and its variants is of paramount importance in order to avoid such injury. Typically, the inferior lateral cutaneous branch of the radial nerve originates towards the distal end of the humerus at the inferior portion of the spiral groove. Here, we report an important variant of this nerve, which originated significantly more proximal than expected, further emphasizing the importance of identification, dissection and protection of the radial nerve and its major branches. 
Understanding normal anatomy and being aware of potential variants is of paramount importance during the operative fixation of fractures. Especially when regarding the radial nerve, injury can occur despite a comprehensive understanding and meticulous dissection . While tPatient is a 27 year-old right hand dominant female who presented initially following an assault where a car door was closed on her right arm. She was evaluated and treated at a local emergency department with a coaptation splint. She had sustained a closed injury and remained neurovascularly intact. Initial management consisted of conversion over to a fracture brace and close follow-up. However, six weeks from injury, the patient endorsed continued pain. Examination revealed tenderness and movement through the fracture site; radiographs revealed unacceptable angulation and minimal callus formation Figure\u00a0. With peThe patient was brought to the operating room and placed in the lateral decubitus position. Stress fluoroscopy exhibited gross motion at the fracture site Figure\u00a0. Via theIatrogenic nerve injuries are among the devastating complications in the treatment of humerus fractures. Recent studies showed the rate of iatrogenic nerve injury in operatively treated supracondylar humerus fractures is 3% to 4% . The iatThe inferior lateral cutaneous nerve of the arm is the branch of radial nerve that provides sensory and vasomotor innervations to the lower, lateral aspect of the arm. Understanding the variants of sensory nerve not only helps in the identification of radial nerve, but also reduces the chance of iatrogenic injury. In this case, the inferior lateral cutaneous nerve of the arm branched off at the level high above the spiral groove with a long arm branching down laterally into the subcutaneous tissue and skin. Hannouche et al, in a cadaveric study, noted the same takeoff origin of the inferior lateral cutaneous branch in all 18 specimens, which was at the inferior end of the spiral groove . In our Despite a seemingly reliable anatomic understanding the radial nerve and its branches, we report here an important variant of the inferior lateral cutaneous branch of the radial nerve. Typically, its origin is in the distal third of the humerus at the inferior end of the spiral groove, however, we report an abnormally high branching variant well above even the proximal extent of the spiral groove. We recommend using the confluence of the triceps lateral head, long head and aponeurosis to identInformed consent was obtained prior to the completion of this manuscript."} +{"text": "Hyperexcitability in the neural networks is one of the hallmark features of the epileptic brain and can manifest itself as recurrent short or long duration discharges . In rodeIn this study, we developed a branched computational model of CA3 region of hippocampus, consisting of a network of an astrocyte and a pyramidal cell with a feedback inhibitory interneuron. As both potassium and calcium ions have been shown to potentially affect neuronal hyperexcitability, the astrocytic model has both mechanisms - the clearance of potassium through potassium channels, and the influence of astrocyte in the synapse.Preliminary results show that in hyperexcitable systems with fully working potassium AHP channels depolarization discharges cease after less than a second , classifying them as short duration. With partial dendritic AHP channel blockade the duration of discharges increases, transitioning into long duration discharges . 
In the extreme case where all of the dendritic AHP channels are blocked, there is no cessation of the seizure-like state . We hypothesize that in hyperexcitable systems the postsynaptic AHP (and to a lesser extent the Nap) channels work in concert with glial cells to control the duration of depolarization discharges."} +{"text": "Inclination of the subtalar joint (STJ) in the transverse and sagittal planes may be highly associated with ankle sprain mechanisms. However, the validity and reliability of measuring inclination of the STJ axis of rotation (AoR) is not well established.The purposes of this study were to: 1) examine the validity of a custom made instrument (locator) to measure the STJ AoR on the basis of the STJ inclination measured by X-ray, 2) to measure the intra-tester reliability of the locator.Cross sectional study.Biomechanics laboratory.Twenty nine healthy male and Nine health female subjects were recruited for this study.No Intervention.Variables that were measured in this study were as follows: 1) Inclination of STJ AoR in the sagittal plane measured by radiographic images of the foot in the sagittal plane. In order to collect radiological images of the foot, subjects stood with a tandem position and the STJ was placed in neutral position. Sagittal plane inclination of the STJ AoR were further analyzed using ViewRex per McClay\u2018s method . Once thThe locator may be used in the clinical setting since validity verified by correlation was high and the intra-test correlation coefficient was large indicating consistent measurements. Along with the locator measurement, it is suggested that further study including motion analysis may provide more information regarding the relationship between inclination of STJ AoR and movement at the STJ."} +{"text": "The Titanosauria were much diversified during the Late Cretaceous, but paleobiological information concerning these sauropods continues to be scarce and no studies have been conducted utilizing modern methods of community analysis to infer possible structural patterns of extinct assemblages. The present study sought to estimate species richness and to investigate the existence of structures in assemblages of the South American Titanosauria during the Late Cretaceous. Estimates of species richness were made utilizing a nonparametric estimator and null models of species co-occurrences and overlapping body sizes were applied to determine the occurrence of structuring in this assemblages. The high estimate of species richness (n\u200a=\u200a57) may have been influenced by ecological processes associated with extinction events of sauropod groups and with the structures of the habitats that provided abundant support to the maintenance of large numbers of species. The pseudocommunity analysis did not differ from that expected by chance, indicating the lack of structure in these assemblages. It is possible that these processes originated from phylogenetic inertia, associated with the occurrence of stabilized selection. Additionally, stochastic extinction events and historical factors may also have influenced the formation of the titanosaurian assemblages, in detriment to ecological factors during the Late Cretaceous. However, diagenetic and biostratinomic processes, influenced by the nature of the sedimentary paleoenvironment, could have rendered a random arrangement that would make assemblage structure undetectable. 
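To make the AHP-blockade result of the CA3 network model described earlier in this section concrete, here is a deliberately simplified, hypothetical illustration: a single leaky integrate-and-fire neuron with a spike-triggered adaptation current standing in for the AHP conductance. It is not the authors' branched pyramidal-cell/interneuron/astrocyte model; it only shows, qualitatively, that weakening the adaptation increment prolongs the evoked discharge, while removing it altogether lets the discharge run for the whole stimulation window.

```python
# Toy adaptive integrate-and-fire neuron: a spike-triggered adaptation current
# stands in for the AHP conductance. ahp_scale = 1.0 is "fully working",
# 0.5 is partial blockade, 0.0 is complete blockade. All parameters are
# invented for illustration only.
import numpy as np

def discharge_duration(ahp_scale, t_max=2.0, dt=1e-4, slow_isi=0.05):
    """Seconds from the first spike until firing slows below ~20 Hz
    (inter-spike interval > slow_isi), under a constant depolarizing drive."""
    tau_m, R = 0.02, 100e6                        # membrane time constant (s), resistance (ohm)
    E_L, V_th, V_reset = -65e-3, -50e-3, -65e-3   # volts
    tau_w, b, I = 1.0, 0.02e-9 * ahp_scale, 0.3e-9  # adaptation tau (s), increment (A), drive (A)
    V, w, spikes = E_L, 0.0, []
    for step in range(int(t_max / dt)):
        V += dt / tau_m * (-(V - E_L) + R * (I - w))
        w += dt / tau_w * (-w)
        if V >= V_th:                             # spike: reset and strengthen adaptation
            V = V_reset
            w += b
            spikes.append(step * dt)
    if not spikes:
        return 0.0
    for s1, s2 in zip(spikes, spikes[1:]):
        if s2 - s1 > slow_isi:
            return s1 - spikes[0]
    return spikes[-1] - spikes[0]

for label, scale in [("no blockade", 1.0), ("50% blockade", 0.5), ("full blockade", 0.0)]:
    print(f"{label}: discharge lasts ~{discharge_duration(scale):.2f} s")
```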
Sauropods constituted a group of Saurischian dinosaurs that were highly diversified, attaining large dimensions and wide geographic distributions (e.g., Isisaurus and Diamantinasaurus). Titanosauria constituted the most diverse sauropod lineage, represented by more than 30 known genera, widely distributed on nearly all continental landmasses during the Late Cretaceous. Studies of Late Cretaceous South American Titanosauria have dealt predominantly with taxonomic and biochronological aspects, paleogeographic distributions, strategies of locomotion and behavior, reproductive and developmental biology, appendicular morphology, cranial morphology and phylogenetic systematics. The structure of a vertebrate assemblage can be defined as nonrandom patterns in resource utilization among individuals that coexist in time and space. Within this context, studies have shown that co-occurrence patterns are commonly attributed to competitive interactions or to environmental filters, and these patterns can be generated by historical factors, habitat associations, and/or species dispersal limits. Considering the hypothesis that species co-occurrence in time and space will be determined and structured by their interspecific relationships, the present work sought to estimate the species richness of the South American Titanosauria during the Campanian and Maastrichtian ages (83.5\u201365 Mya) and to investigate the occurrence of structuring in this assemblage with respect to species co-occurrences and overlapping body sizes. The list of sauropods presented in this work was prepared from the data available in the Paleobiology Database. The nonparametric estimator Chao1 was utilized to estimate sauropod species richness based on abundance data: Chao1 = Sobs + F1^2/(2F2), where Sobs is the number of species recorded in the assemblage sampled, F1 is the number of species represented by only one individual (\u201csingletons\u201d), and F2 is the number of species represented by two individuals (\u201cdoubletons\u201d). The EcoSim Module of co-occurrence was utilized to test the occurrence of nonrandom patterns of co-occurrence of the Titanosauria species recorded in the stratigraphic formations corresponding to the Lower Campanian to the Upper Maastrichtian in South America, using the C-score index. C-scores measure the mean numbers of units in a single block (checkerboard units \u2013 CU) for all pairs of species: CU = (ri - S)(rj - S), where ri and rj correspond to the totals in a row, and S is the number of sites occupied by both species. The utilization of fixed row and column totals and column restrictions generates null matrices with the same number of occurrences of sites per species and the same number of species per stratigraphic formation as observed in the original data. The sequential swap algorithm repeatedly rearranges the original matrix, changing the sub-matrices in a way that preserves the row and column totals, and is not strongly prone to type I or type II errors. In a structured assemblage, the mean numbers of units in a single block should be significantly higher than the score expected by chance, according to a null model. The body size (log10 transformed) for each species was derived from data available in the literature. The EcoSim Module of Size Overlap was utilized to determine the presence of nonrandom patterns of body size overlapping among species. Segment length was calculated by the ordination of size estimates of the different species. These values represent the differences in body size between two consecutive species.
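The two indices defined in the methods above can be sketched directly from their formulas. The following is an illustrative implementation, not the EcoSim software; the specimen counts and the species-by-formation incidence matrix are invented examples.

```python
# Illustrative implementations of the two indices; invented example data,
# not the EcoSim software or the study's matrix.
from itertools import combinations
import numpy as np

def chao1(abundances):
    """Chao1 = Sobs + F1^2 / (2 * F2); F1 = singletons, F2 = doubletons."""
    a = np.asarray(abundances)
    s_obs = int(np.sum(a > 0))
    f1, f2 = int(np.sum(a == 1)), int(np.sum(a == 2))
    if f2 == 0:
        return s_obs + f1 * (f1 - 1) / 2.0   # bias-corrected form when no doubletons
    return s_obs + f1 ** 2 / (2.0 * f2)

def c_score(presence):
    """Mean checkerboard units CU = (ri - S)(rj - S) over all species pairs,
    where ri, rj are row totals and S is the number of shared formations."""
    m = np.asarray(presence, dtype=bool)
    units = []
    for i, j in combinations(range(m.shape[0]), 2):
        ri, rj = m[i].sum(), m[j].sum()
        shared = np.logical_and(m[i], m[j]).sum()
        units.append((ri - shared) * (rj - shared))
    return float(np.mean(units))

# Hypothetical specimen counts per species, and a small species (rows) by
# stratigraphic formation (columns) presence/absence matrix.
specimen_counts = [1, 1, 1, 2, 2, 3, 5, 1]
incidence = [[1, 0, 1, 0],
             [0, 1, 0, 0],
             [1, 1, 0, 1],
             [0, 0, 1, 1]]
print("Chao1 estimate:", chao1(specimen_counts))
print("Observed C-score:", c_score(incidence))
```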
Utilizing the variance in segment length as the size overlap metric, the overall tendency for the observations can be measured. A structured assemblage would have an observed variance significantly smaller than that seen in random assemblages (pseudocommunities). When the minimum segment length values (in meters) are selected, the smallest segment of the assemblage can be calculated by measuring the difference between the closest pair of species. This measure determines whether a minimum space between species is necessary for their coexistence in an assemblage. Thus, in a structured assemblage, the minimum segment length should be significantly greater than that expected by chance A total of 23 species were recorded in fourteen fossiliferous strata ranging from the Lower Campanian to the Upper Maastrichtian in different localities in South America . In relaC-score index was 1.96, which did not differ significantly from the mean expected by chance , whereas Puertasaurus reuili was estimated to be 30 m long The species themselves showed great variability with respect to their estimated sizes. The fossil record points to the South American continent as having had a diverse assemblage of Titanosauria, and estimators of species richness indicate an even greater species richness during the Late Cretaceous (Early Campanian-Late Maastrichtian). This high species richness was possibly influenced by the availability and occupation of ecological niches left by the diplodocoids sauropods after their extinction in the Late Coniacian, resulting in a rich diversity of forms and sizes within the clade Titanosauria The species richness could also have been related to the structural complexity of the habitats occupied by titanosaurian assemblages during the Late Cretaceous. Dinosaurs that coexisted during the Late Jurassic exhibit close associations with the characteristics of environments in which they lived, indicating the occurrence of structural patterns in those assemblages The results obtained by co-occurrence analyses of species richness demonstrated that the observed numbers of checkerboard units did not differ from random. This pattern is consistent with the hypothesis that the local coexistence of Titanosauria species during the Campanian and Maastrichtian in South America was not structured by ecological factors existing during the Late Cretaceous, such as resource limitations in the environment, interspecific competition, or predator-prey relationships.The coexistence of species in an assemblage can be limited by ecological interactions known to be negative, such as interspecific competition for spatial and trophic niches, the occurrence of aggressiveness between individuals of the assemblage, species that develop specific preferences for certain types of habitats on a wide geographic scale, or predator-prey relationships Random patterns in coexistence of Titanosauria species during the Late Cretaceous could have originated not through competitive interactions between species but through the influence of species showing specificity for a particular habitat type on a reduced spatial scale, distributions restricted to a particular period, endemism, or low abundances of some species Another important issue regarding the lack of structure in South American titanosaurian assemblages is the fact that species records in any particular formation can be influenced by factors such as taphonomic processes, the types of sedimentary paleoenvironment, and sampling efforts in fossil collecting. 
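A hedged sketch of the size-overlap null-model comparison summarized above: the observed variance (and minimum) of log10 body-size segment lengths is compared against pseudocommunities drawn at random within the observed size range. The body-length values and the uniform-draw convention are assumptions for illustration, not the published estimates or the exact EcoSim randomization.

```python
# Observed variance and minimum of log10 body-size segment lengths, compared
# against pseudocommunities drawn uniformly within the observed size range.
# Body lengths are invented; the randomization is a simplified stand-in for
# the EcoSim Size Overlap module.
import numpy as np

def segment_stats(lengths_m):
    ordered = np.sort(np.log10(lengths_m))
    segments = np.diff(ordered)              # gaps between consecutive species
    return segments.var(), segments.min()

def null_segment_variances(lengths_m, n_iter=5000, seed=1):
    log_sizes = np.log10(lengths_m)
    lo, hi, n = log_sizes.min(), log_sizes.max(), len(lengths_m)
    rng = np.random.default_rng(seed)
    out = np.empty(n_iter)
    for k in range(n_iter):
        # fix the two extreme species and draw the interior ones at random
        interior = rng.uniform(lo, hi, n - 2)
        pseudo = np.sort(np.concatenate(([lo], interior, [hi])))
        out[k] = np.diff(pseudo).var()
    return out

body_lengths_m = np.array([7.0, 9.5, 12.0, 13.0, 18.0, 26.0, 30.0])  # hypothetical
obs_var, obs_min = segment_stats(body_lengths_m)
null_var = null_segment_variances(body_lengths_m)

# A structured assemblage would show an observed variance smaller than expected:
p_small = np.mean(null_var <= obs_var)
print(f"observed variance {obs_var:.4f}, minimum segment {obs_min:.4f}, P(null <= obs) = {p_small:.3f}")
```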
This latter aspect will be influenced by the numbers of paleontological explorations in each stratigraphic formation and by the environmental conditions at the sites where the fossiliferous strata are found. Paleoecological studies should emphasize the importance of taphonomic processes for the different types of sedimentary paleoenvironments, since these factors can influence the fossil records of one or more species in stratigraphic formations. Diagenetic and biostratinomic processes, influenced by the nature of the sedimentary paleoenvironment at the site where the animal died (which can hinder fossilization) and by the transport of carcasses to different assemblages, will determine the number of specimens preserved in place and, consequently, the estimated species richness. The analysis of size overlap in this study indicated a lack of structure in the Titanosauria assemblage, suggesting that the sizes of these dinosaurs were not a determinant factor for species coexistence in time and space. The random patterns attributed to body size overlap among vertebrates in general may be due to local extinctions, non-limited food availability, or reduced population sizes. Ecological differences between sauropod lineages could also have been associated with certain morphological attributes, such as body size and differences in dentition, shape of the necks and cranial morphology. Competitive interactions for food resources and habitat utilization in bird assemblages can become reduced through morphometric variations related to the size and shape of the beak, the length of the metatarsus, or body size. Another important aspect that should be taken into consideration concerns possible historical effects on South American titanosaurian assemblages that occurred during the Late Cretaceous. The preference for, and utilization of, particular resources by Titanosauria lineages could have been strongly influenced by the evolutionary histories of the different clades. Phylogenetic effects include important processes that will determine the ecologies of large numbers of species (as opposed to putative interactions between members of the assemblage in terms of the utilization of available resources), reflecting the evolutionary histories of different lineages that diverged over millions of years. It is possible to conclude that the species richness of Titanosauria during the Late Cretaceous in South America was influenced by various ecological processes associated with the extinction events of various sauropod groups during this period and by habitat structures that provided support for the maintenance of high species diversity in the assemblage. The observed patterns of co-occurrence and size overlap suggest the existence of random processes and a lack of structuring in this assemblage. It is likely that these processes originated from phylogenetic inertia, associated with the occurrence of stabilizing selection, and that extinction events and historical factors had more important roles in the formation of titanosaurian assemblages during the Late Cretaceous than did strictly ecological factors. Nonetheless, diagenetic and biostratinomic processes can cause random species distribution patterns, making any structuring of these assemblages undetectable. Table S1. Species of Titanosauria recorded in the stratigraphic formations of the Late Cretaceous in South America.
The number of recorded fossils (n) and the associated information were obtained from the matrix of data available in the Paleobiology Database."}
+{"text": "The authors retract this publication due to concerns about the cell lines employed in the study. The study reports experiments looking at pathways involved in anoikis resistance in human adenoid cystic carcinoma; the experiments involved the use of the cell lines ACCM and ACC2. After the publication of the article, a reader raised concerns that these two cell lines may not originate from adenoid cystic carcinoma. The authors have completed a short tandem repeat analysis on the cell lines and this has revealed cross-contamination with other human cells, which compromises the relevance of the work to human adenoid cystic carcinoma, and thus the conclusions of the study. In the light of this, the authors retract this publication."}
+{"text": "Toxicity caused by chemical mixtures has emerged as a significant challenge for toxicologists and risk assessors. Information on individual chemicals' modes of action is an important part of the hazard identification step. In this study, an automatic text mining-based tool was employed as a method to identify the carcinogenic modes of action of pesticides frequently found in fruit on the Swedish market. The currently available scientific literature on the 26 most common pesticides found in apples and oranges was evaluated. The literature was classified according to a taxonomy that specifies the main type of scientific evidence used for determining carcinogenic properties of chemicals. The publication profiles of many pesticides were similar, containing evidence for both genotoxic and non-genotoxic modes of action, including effects such as oxidative stress, chromosomal changes and cell proliferation. We also found that 18 of the 26 pesticides studied here had previously caused tumors in at least one animal species, findings which support the mode of action data. This study shows how a text-mining tool could be used to identify carcinogenic modes of action for a group of chemicals in large quantities of text. This strategy could support the risk assessment process of chemical mixtures. Chemical risk assessment of mixtures is an important but challenging task for toxicologists. Unlimited variations of mixtures in our environment and knowledge gaps about toxic effects caused by chemical mixtures are examples of factors that make this process complex. Mixture effects can be described as caused either by additivity or by interactions (such as synergistic or antagonistic effects) of the individual compounds. It has been reported that the percentage of samples of fruits, vegetables and cereals with multiple residues increased by 11 percent from 1997 to 2007 (from 15 to 26 percent). In 2008, residues of two or more pesticides were found in 27 percent of the samples analyzed. The same proportion, 27 percent of samples containing multiple residues, was found in 2010. One sample of grapes was found to contain as many as 26 different pesticide residues. The literature on the pesticides was gathered via a search (in August 2013) using the names of the pesticides. A computational tool, CRAB, was used to analyze the abstracts. The tool classifies PubMed abstracts automatically according to a MOA taxonomy. The MOA taxonomy captures the current understanding of processes leading to carcinogenesis and is based on two main categories: genotoxic and non-genotoxic MOA.
The taxonomy is further structured into sub-categories according to a classification of Hattis et al. We investigated the literature on human and animal studies for each of the pesticides. Data from an analysis of apple and orange samples carried out in 2009–2010 was kindly provided by the Swedish NFA. The data show that the majority of the tested apples and oranges contained several pesticide residues: 78 percent of the apple samples contained more than one pesticide, and for orange samples this number was as high as 96 percent. The results thus show that two or more pesticide residues were frequently detected in apple and orange samples. For example, one apple sample contained residues of seven pesticides, and in one orange sample 10 different pesticide residues were detected. The distribution of abstracts over the MOA taxonomy was analyzed in detail. The distribution of classified abstracts for the individual pesticides in apples and oranges is shown for 11 selected common MOA categories (Figures). We compared the MOA analysis by the CRAB tool with carcinogenic evidence and classifications for each pesticide (Table). When all information on cancer classifications and published studies of the 26 pesticides was summed, it showed that the majority of the chemicals have evidence or classifications that suggest carcinogenic potential. The animal tumor data retrieved from published literature through PubMed show evidence of carcinogenicity reported previously for 18 of 26 substances. Information on all pesticides permitted in food products intended for human consumption is published by EFSA. The CRAB tool has many advantages over manual literature analysis when large quantities of data need to be examined. The tool provides a rapid view of published literature and can point to carcinogenic MOA that groups of chemicals can have in common. We have previously conducted case studies to demonstrate how the text-mining tool can be used to support cancer risk assessment. For example, literature profiles of well-known carcinogens were compared with the known properties of each chemical, and the classification results correlated well with what was previously known about these substances (Korhonen et al.). Hazard identification and risk assessment of mixtures is a complex and challenging process. The study described here provides an example of how a text mining-based tool could support the analysis of large amounts of textual information to detect trends and patterns in data. A MOA analysis can identify common links between different chemicals which could serve as a basis for hypothesis generation and direct further research and risk assessment of chemical mixtures. All authors were involved in the design of the study. Ilona Silins conducted the text mining experiments, analysis and the manual literature review. All authors contributed to the writing of the manuscript. All authors approved the final manuscript. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."}
+{"text": "Spinal infections are an uncommon but significant clinical scenario that often requires aggressive medical therapy, sometimes leading to surgery. Siam et al. report on this topic. The authors stress the role of the superinfection of the local haematoma after surgery, and are convinced that the haematogenous route represents the main mechanism of infection of the adjacent disc.
However, they also suggest that a direct infection of the adjacent disc space may occur by direct contamination during surgery because of the violation of the disc space by drilling or because of screw malpositioning .No role has been suggested by the authors for the use of titanium-based instrumentation, and in particular for the use of cages in these patients, even though their role in this surgery is still a major issue among surgeons. The use of metal implants in the infected spine has been avoided in the past because of the known adherence of bacteria to metal surfaces. However, experimental studies showed a variable bacterial adherence to different metal surfaces depending on biofilm formation and species. Titanium implants have been used in the setting of spinal infections, because these showed a reduced bacterial biofilm adherence; moreover, similar results and fusion rates were observed when titanium implants or bone grafts were used in surgery for spondylodiscitis .Of the eleven patients with positive microbiological findings described in the study, eight had a recurrence of the same microorganism with multiple antimicrobial drug resistance, and three had a superadded infection with another organism. The high recurrence of the same microorganism poses a question about the influence of the general health status of patients on the genesis of this disease . MoreoveTwelve out of the 23 patients in the study had no germs retrieved after cultural sample harvest, and the diagnosis of infection was made by clinical and radiological examinations. In orthopaedic surgery, a similar scenario is often observed in the case of suspected joint arthroplasty infections, and general guidelines have been implemented into clinical practice to improve the chances of isolating the responsible microorganisms ; however"} +{"text": "The functions of the medial geniculate body (MGB) in normal hearing still remain somewhat enigmatic, in part due to the relatively unexplored properties of the non-lemniscal MGB nuclei. Indeed, the canonical view of the thalamus as a simple relay for transmitting ascending information to the cortex belies a role in higher-order forebrain processes. However, recent anatomical and physiological findings now suggest important information and affective processing roles for the non-primary auditory thalamic nuclei. The non-lemniscal nuclei send and receive feedforward and feedback projections among a wide constellation of midbrain, cortical, and limbic-related sites, which support potential conduits for auditory information flow to higher auditory cortical areas, mediators for transitioning among arousal states, and synchronizers of activity across expansive cortical territories. Considered here is a perspective on the putative and unresolved functional roles of the non-lemniscal nuclei of the MGB. 
The medial geniculate body (MGB) is the main thalamic nucleus associated with audition, receiving direct synaptic inputs from the inferior colliculus (IC). The lemniscal ventral division receives topographically organized projections from the central nucleus of the IC and projects to tonotopically-organized areas of the auditory cortex. A corticothalamocortical route can link cortical areas such as AI and AII, via layer 5 of AI to MGBd and then to layer 4 of AII. Connections within groups of cortical areas, such as AI and PAF, also tend to be greater than inter-group connections, although the magnitude of these inter-group connections seems greater in primates compared with cats (Hackett et al.). However, rather than forming the basis of a strict prediction, one might better approach these conjectures as a framework for deciphering future physiological investigations to consider both the corticocortical and corticothalamocortical routes as potential neural substrates in auditory forebrain operations. The question then of the utility of these two pathways in auditory forebrain operations might be better construed as one of degree, rather than that of hegemony. A caveat to this notion of the non-lemniscal MGB nuclei as conduits for information flow to higher auditory cortical areas is the medial division of the MGB. Unlike the nuclei of the dorsal division, the medial division does not appear to be a major nuclear target of the giant, driver-like corticothalamic projections that establish the first leg of the corticothalamocortical pathway (Figure). Instead, the prevailing notion for the medial division of the MGB considers it to be part of the matrix system of thalamic nuclei proposed by Jones. In this regard, the connections of the medial division of the MGB with the limbic-related nuclei in the amygdala position it uniquely to alter auditory forebrain networks in affective and emotional responses to aversive stimuli (Figure). Overall though, it is clear that the operations of the non-lemniscal medial and dorsal division nuclei of the MGB extend and enhance the operations of the auditory thalamus beyond that of a simple relay for acoustic information entering the auditory cortical network. The ultimate challenge for future investigations will be to specifically parse their interrelated roles in global auditory forebrain processes and the emergent construction of holistic auditory percepts. The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."}
+{"text": "The Hodgkin and Huxley (1952) model of action potential (AP) generation accounts for many properties of APs observed experimentally and has been successfully used in modeling neurons of different types. In this model, however, the spike onset is much shallower than in experimental recordings from the soma, suggesting different activation properties of sodium channels in the real tissue. To explain the origin of the observed sharpness (kink) in the spike onset, three hypotheses have been proposed: 1. Cooperative hypothesis: sodium channels cooperate in the axon initial segment, which makes their collective activation curve much sharper than in the Hodgkin and Huxley (1952) model. 2. Active backpropagation hypothesis. 3. Compartmentalization hypothesis. To find out what is truly happening in the cell during the action potential, we investigated the active backpropagation and compartmentalization hypotheses by means of computational modeling and theoretical analysis.
In order to differentiate the hypotheses, we varied systematically the morphology of the neuron and the distribution of the ionic channels along the cell, and tested how they contribute to the appearance of the kink. We show that the kink at spike onset is primarily due to compartmentalization rather than to active backpropagation."}
+{"text": "Objective: Due to the development of medical education in the past decade, the role of teachers has changed and requires higher didactic competence. Student evaluation of teaching alone does not lead to considerable improvement of teaching quality. We present the concept of “Integrative Teaching Consultation”, which comprises both the teacher’s own reflection and objectives to improve their teaching as well as data from student ratings. Methods: Teachers, in collaboration with a teaching consultant, reflect on their teaching ability and set themselves improvement goals. Then the consultant himself observes a teaching session and subsequently analyses the respective student evaluation in order to give meaningful feedback to the teacher. Results: The combination of student feedback with professional consultation elements can initiate and maintain improvements in teaching. Conclusion: Teaching consultation complements existing faculty development programs and increases the benefit of student evaluations. As an essential part of faculty development programs, workshops and seminars have been designed to improve teacher effectiveness. While the literature describes successful programs, their long-term effect and sustainability remain unclear. The “Integrative Teaching Consultation” (ITC) is a team of pedagogues and psychologists headed by a medical doctor. The aim of the ITC is to combine SETs with counseling and professional feedback in order to enhance the ability of teachers to translate the SET feedback into concrete measures of teaching behavior. Evidence on teaching and learning and principles of effective consultative feedback are used during consultation. This includes active involvement of teachers in the counseling process, the use of teacher self-ratings and feedback from teaching observations. At the Medical Faculty of Heidelberg all departments receive additional feedback from the ITC on the basis of SETs. Successful lecturers receive positive feedback on their key factors of success. Lecturers with less positive evaluation results receive didactical advice to improve their teaching, as well as an offer for teaching consultation. The goal of the ITC is to support the teachers individually in order to improve their teaching and to strengthen their ability of self-assessment. The course concerned consisted of lectures, which were complemented by practical training sessions and tutorials. One of the lecturers sought the support of the ITC in order to adapt and improve his lectures and tutorials. On the basis of his reflections, the feedback and the SETs, he implemented concrete changes in his lectures and tutorials: learning objectives were clearly defined and communicated to the students in order to enhance the transparency of content and success criteria. Application exercises related to practice were integrated into the seminars to stress the importance of certain learning objectives and to uncover possible queries of the learners during the lecture. At the same time, the content of the lessons was reduced.
Tutorials were adapted with the help of assistant tutors who had received good evaluations in the past. The entire counseling process including teaching observation, pre- and post-processing extended over a period of 18 months.To analyze the effectiveness of the consultation process, the free texts of the students relating to the didactic lecture and the tutorials were analyzed using the method of Mayring . The texThe results (see table 1 If SETs alone do not lead to considerable improvement of teaching quality, it should not be the centerpiece of quality assurance. Combined with feedback interventions, the effectiveness of SETs can be improved. Our findings suggest that consultation on the basis of student ratings is an effective method for enhancing the teaching effectiveness of university teachers.This is based on the prerequisite that a long-term support of lecturers and disciplines is possible. The close connection between faculty consultant, faculty development and the SETs seems to be important. The concept of ITC may also be applied to other faculties. For the improvement of teaching quality, a teaching consultation should be integrated into the quality management of medical universities. Future research should attempt to clarify, what changes can be expected and which factor of consultation is notably relevant for lecturers and the curriculum. The authors declare that they have no competing interests."} +{"text": "In the last decades, the focus of vascular research has been on acute vascular events. The acute coronary syndrome with its dramatic acute clinical manifestations was in the centre of this whirlwind. The focus was on atherosclerotic plaque located in the intima, stenosis, rupture, acute occlusion and the very successful therapeutic interventions. This has eclipsed the other manifestations of vascular disease: stiffening of the vessel wall and arteriosclerosis. This stiffening leads to diminished windkessel function of the elastic vessels, water hammer damage to sensitive tissues and dysfunction of sensors located in the vessel wall. The effects of this stiffening are in general not acute but manifest themselves in the long and probably very long term.Many slowly evolving vascular manifestations of disease such as dementia, lacunar cerebral infarctions, chronic limb ischaemia and chronic heart failure could be caused by these mechanisms. The stiffening of the vessel wall is due to a mixture of intima proliferation, fibrosing and calcification, which is mainly located in the media layer of the vessel wall.Although these two processes of atherosclerosis and arteriosclerosis seem to be separate, they can probably occur sequentially or simultaneously and it is feasible that these processes interact with each other.The arteriosclerotic process has been mainly investigated with the pulse wave velocity (PWV) technique in hypertensive populations . This ulIn this issue, Kroner et al. use suchKroner et al. looked at the stiffness of the carotid artery from the level of the origin of the common carotid artery up to the level of the petrous channel and found that the thickness of the artery in this pathway nicely correlates with white matter lesions. Yet, there is excellent literature showing that the really important part of the internal carotid artery from the point of view of pulse wave modulation is the carotid syphon, located just beyond the petrous channel running in the cavernous sinus . 
And lastly, why is this information important when the general belief is that we cannot change the process of stiffening? It has indeed long been thought that stiffening of the vessel wall with fibrosis and calcification is an unavoidable, inert end stage of ageing and certain arterial diseases. However, it is increasingly becoming clear that the calcification component of stiffening is an active metabolic process very much resembling bone formation. The process of bone formation can be halted or even reversed, and experimental medication to reverse fibrosis, in the form of angiotensin type 2 receptor agonists, is becoming available too. With this therapeutic prospect, and to protect both grey and white matter, more knowledge about arteriosclerosis is needed. That knowledge should encompass the morphology, histology and function of the vessel wall. None. None declared."}
+{"text": "Micro(mi)RNAs are small non-coding RNAs that play critical roles in physiological networks by regulating genetic programs. They are conserved from worms to mammals and function as negative regulators of protein-encoding gene expression. Research on the role of miRNAs in pathophysiological conditions has been very active for the past 10 years, and several studies have shown that miRNAs play a key role in the regulation of immunological functions and the prevention of autoimmunity. I will discuss the involvement of miRNAs in the regulation of innate and adaptive immune functions and in the development of autoimmune disease. Focusing on the role of a few miRNAs, I will emphasize the intertwined relationships between tissue homeostasis and immunity, and how studying miRNAs in autoimmunity and immune-mediated inflammatory disorders will shed light on pathological processes and help identify novel drug candidates and biomarkers. None declared."}
+{"text": "Ayurveda has a personalized approach to dietetics. The requirement of food intake is assessed on the basis of constitution in healthy individuals. In an ideal condition, the instinct of the subject is the indicator of the body's needs to compensate for a deficient dosha. Assessment of the quality of food is made subjectively through six tastes, attributes, potency and metabolized taste. Ayurveda gives more importance to regulating the digestive power than to correcting the caloric content of food. According to changes in space and time, dosha predominance varies; this warrants consideration in choosing the right diet. Ayurveda emphasizes the need for a healthy mind for the proper nourishment of the body. Various processing methods alter the attributes of food; hence the assessment of food quality is to be made at the time of dining. Ayurveda warns against the complications of food-food interactions due to the wrong combination of otherwise good materials. Unhealthy diet practice is considered one of the important causes of disease. Hence the correction of diet is itself considered a treatment."}
+{"text": "The identification and annotation of protein-protein interactions (PPIs) is of great importance in systems biology. Big data produced from experimental or computational approaches allow not only the construction of large protein interaction maps but also expand our knowledge of how proteins build up molecular complexes to perform sophisticated tasks inside a cell. However, if we want to accurately understand the functionality of these complexes, we need to go beyond the simple identification of PPIs.
We need to know when and where an interaction happens in the cell and also understand the flow of information through a protein interaction network.Another perspective of the research on PPI networks is the study of their relation to disease. In disease conditions, mutations that alter the secondary structure of one protein might perturb its PPIs, as well. Thereafter many things can go wrong via cascading effects, caused by the inter-relatedness of the mutated protein to other proteins through the PPI network. Such perturbations could block the formation of a protein complex or lead to the formation of new protein complexes and the activation of abnormal signaling pathways. These events could alter the cellular transcriptome profile and further contribute to disease pathogenesis. That is why the maintenance of the proper structure and functionality of a PPI network is crucial for cellular homeostasis. Its disruption can cause complex effects and understanding them requires advanced methods for analysis.The aim of this Research Topic is to present novel findings and recent achievements in the field of PPI networks. Thematically, it is divided into two parts. First, we present methods for the identification and quantification of PPIs; second, we describe computational approaches to annotate interactomes and extract information related to disease prediction or disease progression.Suter et al. describe the application of next generation sequencing (NGS) for the characterization of binary PPIs. Authors present an accurate method to analyze yeast two-hybrid data by NGS and also interpret interaction data via quantitative statistics. They also discuss how this methodology can be used to discover differential PPIs allowing the identification of disease mechanisms .The first four articles deal with the identification and quantification of PPIs. In the first work, Yang et al. present methods that can determine the relative abundance of purified proteins in a sample enabling the identification of transient PPIs in different conditions. Additionally, when combined with proximity tagging methods, MS may illuminate spatial or temporal PPIs, especially those of signaling pathways whose perturbation may underlie human diseases . Meyer and Selbach indicate how MS can be used to identify dynamic changes in the interactome. Stable isotope labeling in aminoacids and affinity purification-MS can shed light on the dynamic behavior of proteins even at different stages of an experiment following perturbation. Authors also describe how MS may identify the stoichiometry of proteins in complexes. These methodologies can be employed to study the dynamic changes of PPIs under normal and disease conditions (Meyer and Selbach).The next two review articles describe mass spectrometry (MS) based approaches. Buntru et al. review novel cell-based assays for the detection of PPIs and discuss their strengths and weaknesses. Compared to traditional genetic or biochemical methods, these techniques provide quantitative information of PPIs even in the context of living cells. This information can be used to prioritize a large number of PPIs, allowing researchers to better describe the biological systems and improve our understanding of disease processes .In the next article, Alanis-Lobato describes computational mining tools to improve the reliability of protein networks and predict new interactions based on the topological characteristics of their components. 
He also provides examples on how the integration of clinical data can highlight disease modules in these networks or indicate similarities between diseases (Alanis-Lobato).The second part of the Research Topic is comprised of seven papers dealing with the annotation of protein interaction networks. Pelassa and Fiumara study the functional role of homopolymeric amino acid repeats (AARs) in proteins and their PPIs. AARs are considered to mediate PPIs and in some cases correlate with human diseases, such as polyglutamine expansions involved in Huntington's disease. The authors describe a computational screening of the human interactome and show that AAR-containing network components have a high degree of connectivity. They also indicate an overlap between AARs and interaction domains suggesting that AARs play an important role in shaping protein interaction networks (Pelassa and Fiumara).Lecca and Re present WG-Cluster, a novel algorithm for the detection of modular structures in protein networks. This tool combines network node and edge weight information of connected proteins improving the biological interpretability of a PPI. The authors also apply their technique in biological datasets from patients with colorectal cancer and indicate differentially active cellular processes in normal vs. tumor conditions (Lecca and Re).Chen et al.).In the next article, Chen and colleagues use the dynamical network biomarkers method to detect early disease signals in a breast cancer cell model. The authors pinpoint critical network changes and highlight a number of pathways associated with the pre-transition from the normal state to a cancer cell progression stage. They also suggest the use of these signals as targets for disease intervention .Theofilatos et al. argue about the challenges of computational analysis of PPI data and present future goals such as biomarker discovery or identification of pathogenic PPIs and their drug targeting. Authors also support that the integration of environmental or clinical data in protein networks will allow their in-depth study and the construction of personalized interactomes .In the last article of this topic, In the past decade, network biology focused on the representation of the binary interaction of proteins. Today, the field of PPI research capitalizes and hops above the establishment of such previous work and resources, identifies existing limitations, and proposes further avenues of investigation, as reflected in this Frontiers Research Topic. A tight connection between experimental and computational efforts is a hallmark of the articles that we present here, which set the tone that PPI research will follow in the next years. If anything remains unchanged, this is our awareness of the fact that diseases are often caused by the malfunction of large protein complexes. This holds as the main motivation of research in the field, which screams for more complete and reliable interactomes, ultimately crucial in order to identify relevant pathogenic mechanisms and design therapeutic intervention strategies.All authors listed, have made substantial, direct and intellectual contribution to the work, and approved it for publication.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Nucleosomes are the basic structural units of chromatin. 
Most of the yeast genome is organized in a pattern of positioned nucleosomes that is stably maintained under a wide range of physiological conditions. In this work, we have searched for sequence determinants associated with positioned nucleosomes in four species of fission and budding yeasts. We show that mononucleosomal DNA follows a highly structured base composition pattern, which differs among species despite the high degree of histone conservation. These nucleosomal signatures are present in transcribed and non-transcribed regions across the genome. In the case of open reading frames, they correctly predict the relative distribution of codons on mononucleosomal DNA, and they also determine a periodicity in the average distribution of amino acids along the proteins. These results establish a direct and species-specific connection between the position of each codon around the histone octamer and protein composition."} +{"text": "During brain development the neural stem cells are regulated by both intrinsic and extrinsic sources. One site of origin of extrinsic regulation is the developing choroid plexuses, primely situated inside the cerebral ventricles. The choroid plexuses are very active in terms of both secretion and barrier function as soon as they appear during development and control the production and contents of cerebrospinal fluid (CSF). This suggests that regulated secretion of signaling molecules from the choroid plexuses into CSF can regulate neural stem cell behavior (as they are in direct contact with CSF) and thereby neurogenesis and brain development. Here, choroid plexus development, particularly with regards to molecular regulation and specification, is reviewed. This is followed by a review and discussion of the role of the developing choroid plexuses in brain development. In particular, recent evidence suggests a region-specific reciprocal regulation between choroid plexuses and the neural stem cells. This is accomplished by site-specific secretion of signaling molecules from the different choroid plexuses into CSF, as well as brain region specific competence of the neural stem cells to respond to the signaling molecules present in CSF. In conclusion, although in its infancy, the field of choroid plexus regulation of neurogenesis has already and will likely continue to shed new light on our understanding of the control and fine-tuning of overall brain development. Neurogenesis, both in the adult and during development, occurs in a specialized environment established by both the neurogenic and non-neurogenic cells within the neurogenic area, and from cells and compartments in direct contact with the niche lining the ventricle . The processes of the NSCs span the entire thickness of the cortical wall and their nuclei move up and down in the ventricular zone (VZ) during the cell cycle but they stay in contact with the ventricle via the apical process and the cilium responsive to elements present in CSF through-out the cell cycle. The NSCs divide and give rise to the next cells in the differentiation cascade, the transient amplifying cells, TAPs (yellow cells) or neurons (green cells). The TAPs are a heterogeneous population and vary in their morphology, location and abundance are derived from plasma and are transferred across the choroid plexus epithelium by a subset of choroid plexus epithelial cells in a highly specific and regulated manner and alterations in developmental neurogenesis choroid plexus Figure . 
The choIn the adult, the choroid plexuses and CSF have several know functions such as: (i) protecting and regulating the internal environment of the brain via the blood-CSF barrier; (ii) secretion and modulation of CSF through the activity of the choroid plexus epithelial cells; and (iii) waste and metabolite removal, via the \u201cCSF sink,\u201d through the continuous production and then removal of CSF into peripheral circulation. During development two out of these three functions are already performed by the choroid plexuses; protection of the brain via the blood-CSF barrier and regulation of CSF composition via specific and regulated transfer and secretion (see more below). However, due to the lack of CSF reabsorption during embryogenesis, the removal of waste and metabolites does not occur . Ttr is known as the choroid plexus marker and is expressed in the mouse lateral ventricular choroid plexus from E11 choroid plexus has also been described the choroid plexus was reduced in size but Cajal-Retzius cells were increased in number , resulting in a transient doubling of the size of the lateral ventricles free passage of water and solutes between the cells has to be inhibited via the presence of tight-junctions , (ii) the capacity of water transport across the cells , and (iii) the presence of transporters and exchangers that can create an ionic or osmotic gradient that drives water across the epithelium. These criteria are of course all met in the adult choroid plexus via the presence of the blood-CSF barrier, the presence of the water channel AQP1 and the creation of ion gradients between blood and CSF via a multitude of ion transporters and exchangers creates a pressure inside the cerebral ventricles. The requirement of this pressure for normal brain development was shown by a study in chick embryos (Stage18), where the intraventricular pressure was removed via insertion of hollow tubes into the ventricular system or via changes in the secretion of molecules produced by the choroid plexuses themselves Figure . In the in vitro experiments. Here it was shown that isolated cortical cells can be maintained on embryonic CSF alone .The choroid plexuses appear whilst neurogenesis is occurring through-out the nervous system. This gives the potential for the choroid plexuses (via their modulation of CSF) to alter the behavior of the neural stem cells along the neuroaxis Figure , Box 1. With the advancement of region and time specific deletion techniques new strategies to isolate choroid plexuses involvement in neurogenesis are available. Along these lines, some recent work has demonstrated, via choroid plexus specific deletions, that molecules expressed in the choroid plexus have the capability to influence brain development at other sites. This was first achieved by deletion of Shh from the hindbrain choroid plexus (using the Wnt1-Cre), resulting in a more than 50% decrease in the proliferation of the neural stem cells in the nearby cerebellum and impaired GABAergic progenitor expansion it not only significantly altered the size of the choroid plexus but also the relative expression of many secreted signaling molecules . We suggest that select tissues are prepared for certain signaling molecules, by the expression of not only receptors but also by the expression of modulators and inhibitors of the different signaling pathways. 
Thus, neurogenesis can be seen as a reciprocal process between the choroid plexus and the neural stem cells mediated via the CSF Figure .In conclusion, there is substantial evidence that there is indeed a role played by the choroid plexus-CSF-signaling axis in the modulation of developmental neurogenesis. However, much more remains to be discovered. The specific mechanisms by which the choroid plexuses modulate brain development both in terms of the choroid plexuses as whole organs as well as specific aspects of their functions needs to be investigated. Further detailed studies of choroid plexus impact on different brain regions and developmental time-points would increase our understanding of the whole process of brain development. Understanding the role of the choroid plexus-CSF signaling axis and its impact on brain development can also lead to novel routes and mechanisms for treatment approaches for a multitude of developmental disorders and in the long-run also for application to adult neurodegeneration and brain injuries.The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "The hypothalamus is the higher neuroendocrine center of the brain and therefore possesses numerous intrinsic axonal connections and is connected by afferent and efferent fiber systems with other brain structures. These projection systems have been described in detail in the adult but data on their early development is sparse. Here I review studies of the time schedule and features of the development of the major hypothalamic axonal systems. In general, anterograde tracing experiments have been used to analyze short distance projections from the arcuate and anteroventral periventricular nuclei (Pe), while hypothalamic projections to the posterior and intermediate pituitary lobes (IL) and median eminence, mammillary body tracts and reciprocal septohypothalamic connections have been described with retrograde tracing. The available data demonstrate that hypothalamic connections develop with a high degree of spatial and temporal specificity, innervating each target with a unique developmental schedule which in many cases can be correlated with the functional maturity of the projection system. The hypothalamus is an important structure of the brain positioned as a higher part of the vegetative nervous system and a part of the limbic system. It is involved in realization of numerous neuroendocrine, endocrine, somatomotor and behavioral functions which help an organism to survive and adopt to the environment Swanson, . The bacCarbocianine dye tracing was introduced for studies of the brain connections since first works of Honig and Hume , 1989 anThe small dimensions of the hypothalamic nuclei at early developmental stages require very precise application of the marker, sometimes even single crystals of appropriate size were used sections are the most appropriate for the analysis of DiI tracing results using conventional or confocal fluorescence microscopy. Frozen sections can not be recommended for the DiI tracing because of the fast lateral diffusion of the marker from the axon in any type of the mounting medium. 
This can be overcome only by mounting and drying cryocut sections without coverslipping of the thalamus, the parvicellular division of the hypothalamic PV, the subparaventricular zone, the dorsomedial nucleus of the hypothalamus, the ventral lateral septum, the intergeniculate leaflet, the bed nucleus of the stria terminalis) on the first postnatal day (P1) with significant increase on P2\u2013P3 and P10 in male rats but not in females although projections from the bed nucleus of stria terminalis (BSTp) to the preoptic region were revealed earlier on P4\u2013P7 both in males and females transported by their axons to the posterior pituitary and arcuate hypothalamic nuclei in adult mammals . It is formed by collaterals of the MTeg as has been shown in adult rats with double tracer injections Mammillary body projections to the tegmentum Projections of parvicellular hypothalamic neurons to the median eminenceProjections of magnocellular hypothalamic neurons (SO and PV) to the PL and median eminenceLateral septal projections to the preoptic areaProjections starting late prenatally and differentiating postnatallyProjections of accessory retrochiasmatic nucleus to the PLMammillary body projections to the anterior thalamic nuclei Projections from the tegmental nuclei to the mammillary body (mammillary peduncle)Projections of the accessory hypothalamic nuclei to the posterior pituitaryProjections of the parvicellular hypothalamic neurons to the ILProjections of anteroventral Pe to GNRH neurons around OVLTProjections developing postnatallyArcuate nucleus projections to the hypothalamic nuclei Arcuate nucleus projections to the BSTp and lateral septal nucleusProjections of anteroventral Pe to the BSTp, lateral septal nucleus and parvicellular PVSuprachiasmatic nucleus projections to the hypothalamic nuclei, lateral geniculate nuclei, BSTp and lateral septal nucleusGnRH neurons projections to the mediobasal hypothalamusProjections of the BSTp to anteroventral Pe and medial preoptic nucleusRetinal projections to the suprachiasmatic nucleusIt can be useful to distinguish three groups of hypothalamic projection systems according to the period of their formationThe author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Dear EditorThe recent paper on the methadone- and buprenorphine- related testicular toxicity adds a dThe study is also consistent with the known interdependence of the germinal centres of the testis with the surrounding Sertoli cells, and the interdependence of the cellular elements with the basement membrane and vasculature. Therefore this highly provocative careful histolopathological study invites further careful mechanistic investigations. One also notes many reports of buprenorphine abuse. A comment from the authors on the very differing doses used in their rats from those usually employed clinically, would have been of interest."} +{"text": "The initial discovery of resistin and resistin-like molecules (RELMs) in rodents suggested a role for these adipocytokines in molecular linkage of obesity, Type 2 Diabetes mellitus and metabolic syndrome. Since then, it became apparent that the story of resistin and RELMs was very much of mice and men. The putative role of this adipokine family evolved from that of a conveyor of insulin resistance in rodents to instigator of inflammatory processes in humans. 
Structural dissimilarity, variance in distribution profiles and a lack of corroborating evidence for functional similarities separate the biological functions of resistin in humans from that of rodents. Although present in gross visceral fat deposits in humans, resistin is a component of inflammation, being released from infiltrating white blood cells of the sub-clinical chronic low grade inflammatory response accompanying obesity, rather than from the adipocyte itself. This led researchers to further explore the functions of the resistin family of proteins in inflammatory-related conditions such as atherosclerosis, as well as in cancers such as endometrial and gastric cancers. Although elevated levels of resistin have been found in these conditions, whether it is causative or as a result of these conditions still remains to be determined. Obesity is increasing worldwide at such an alarming rate that is has been classified as an epidemic . With thVisceral fat accumulation, or white adipose tissue, has been implicated as important risk factors not only for the development of type 2 Diabetes mellitus , but alsHere, we review the dissimilarity between rodent and human forms of resistin, and demonstrate how the function of resistin differs in rodent and human counterparts. We present the differences between both human and rodent resistin. We summarize the current knowledge of the signaling of resistin in humans, as well as the current hypotheses of the potential role of resistin in humans.White adipose tissue, one of the two types of adipose tissue found in mammals may represent the largest endocrine tissue of humans . ClassicA high percentage of genes expressed within visceral adipose tissue, about 30\u00a0%, are attributed to secretory proteins . The secIn lean individuals, white adipose tissue (WAT) storage of triglycerides is systematically regulated, controlling the release of anti-inflammatory cytokines such as Adiponectin , TransfoThe shift in cellular composition surrounding WAT sees a shift in the balance of anti-inflammatory macrophages (M2 phenotype) to pro-inflammatory macrophages (M1 phenotype) ).The story of the resistin family of adipokines is very much of mice and men. Vast differences exist between these adipokine families across species in relation to existence, expression and tissue specificity. The lack of homology between human and rodent families of resistin adds to the intrigue of this family of cytokines.The physiological role of resistin and RELM\u03b2 in the pathogenesis of human disease remains to be determined, and leaves several questions unanswered. What is known is that elevated levels of both resistin and RELM\u03b2 are found in certain inflammatory-based disease states. Whether elevation of these adipokines is a cause or a consequence of the disease still remains to be determined. What causes its elevation if determined to be causative of an inflammatory condition? What is the effect of their elevation if found to be consequential to an inflammatory condition?The determination of a signalling cascade for both resistin and RELM\u03b2 should shed some light on the understanding of the role of these adipokines in human disease. 
Determination of the mechanisms of control of expression of these adipokines as well as determination of the functional receptor and effects on target cells would add invaluable insight into the biological role of these adipokines, in normal and pathological states."} +{"text": "Aim The aim of this study was to identify the communication methods and production techniques used by dentists and dental technicians for the fabrication of fixed prostheses within the UK from the dental technicians' perspective. This second paper reports on the production techniques utilised.Materials and methods Seven hundred and eighty-two online questionnaires were distributed to the Dental Laboratories Association membership and included a broad range of topics, such as demographics, impression disinfection and suitability, and various production techniques. Settings were managed in order to ensure anonymity of respondents. Statistical analysis was undertaken to test the influence of various demographic variables such as the source of information, the location, and the size of the dental laboratory.Results The number of completed responses totalled 248 (32% response rate). Ninety percent of the respondents were based in England and the majority of dental laboratories were categorised as small sized (working with up to 25 dentists). Concerns were raised regarding inadequate disinfection protocols between dentists and dental laboratories and the poor quality of master impressions. Full arch plastic trays were the most popular impression tray used by dentists in the fabrication of crowns (61%) and bridgework (68%). The majority (89%) of jaw registration records were considered inaccurate. Forty-four percent of dental laboratories preferred using semi-adjustable articulators. Axial and occlusal under-preparation of abutment teeth was reported as an issue in about 25% of cases. Base metal alloy was the most (52%) commonly used alloy material. Metal-ceramic crowns were the most popular choice for anterior (69%) and posterior (70%) cases. The various factors considered did not have any statistically significant effect on the answers provided. The only notable exception was the fact that more methods of communicating the size and shape of crowns were utilised for large laboratories.Conclusion This study suggests that there are continuing issues in the production techniques utilised between dentists and dental laboratories. Principles of dental team workingProsthodontics is a discipline that requires a synergy between the dentist and dental technician, in order to fabricate intraoral prostheses with acceptable fit, function and aesthetics.12However, a number of studies6789101112678910111213The first five years: a framework for undergraduate dental education.17Undergraduate training should theoretically prepare dentists with the required knowledge to provide fixed prostheses in a safe and predictable manner. However, a number of studies121The purpose of this cross-sectional study was to identify the communication methods and production techniques used by dentists and dental technicians for the fabrication of fixed prostheses within the UK from the dental technicians' perspective. Part one of this cross-sectional survey reported on the communication issues between dentists and dental laboratories.The details regarding materials and methods have been published in the first paper.The Dental Laboratories Association was approached and approved the use of their database of e-mail contacts (782 addresses). 
A web-based survey tool, Opinio , was utilised for the administration of the survey and assimilation of data. Settings were managed in order to ensure anonymity of respondents.The data collected was presented as descriptive statistics and analysed using Fisher's exact test, the Mann-Whitney test or the Spearman's rank correlation . P-values of less than 0.025 were regarded as statistically significant. A significance level of 2.5% was chosen rather than the conventional 5% to avoid spuriously significant results arising from multiple testing.The null hypothesis was that factors such as the source of information used to answer the questionnaire, the location, and size of the dental laboratory, did not influence the communication methods and production techniques.The number of responses totalled 248, which yielded a 32% response rate. Sixty-eight respondents answered only some of the questions. The results presented in this paper pertain to the subchapters of general information, disinfection and suitability of impressions and production techniques. The subchapters and questions along with the results in parentheses are depicted in The results of the general information subchapter have been published in part one,The results of this study showed that a significant number of respondents (52%) considered that less than half of the impressions received from the dentist were clearly labelled as having been disinfected. Sixty-five percent of dental laboratories indicated that they routinely disinfected the impressions received from the dentist before pouring them up.The most popular impression tray used in the fabrication of crowns and bridgework was the full arch plastic tray, which was used in 61% and 69% of cases respectively. Custom made trays were only used in 10% of cases and quadrant plastic trays were the least popular. A significant number of respondents (17%) considered that the majority of final master impressions received were of poor quality and inadequate to use for a varied number of reasons including air voids, defects at the preparation margin and deformation of the impression material.The aforementioned results, pertaining to the disinfection and suitability of impressions, were not influenced by the size of the laboratory or the source of information with the exception of the responses about the inadequacy of the master impressions (p = 0.03), which suggested that the proportion of inadequate impressions was greater in the records group than the memory group.Regarding the adequacy of tooth preparations, the results of this study showed that, on average, 18% of respondents considered that they routinely received tooth preparations where there had been inadequate bucco-lingual tooth reduction. The analysis showed that the percentage was statistically (p = 0.01) higher (28%) in the records group compared to the memory group. With respect to occlusal tooth reduction, 26% indicated that they frequently encountered tooth preparations with insufficient reduction.The semi-adjustable articulator was the most frequently used (44%) in the fabrication of fixed prostheses followed by the simple hinge type (28%). This survey indicated that only 11% of the dental laboratories perceived that the majority of inter-occlusal records were accurate.The majority of respondents (68%) reported that they rarely received any particular guide, such as a diagnostic wax-up, or impressions from provisional restorations, in order to communicate the shape and size of the definitive restoration. 
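As an illustration of the group comparisons reported above, the following is a hypothetical sketch (with invented counts, not the study's raw data) of how a Fisher's exact test and a Mann-Whitney test might be evaluated against the 2.5% significance level described in the methods.

# Hypothetical example of the statistical comparisons described above (Python).
from scipy.stats import fisher_exact, mannwhitneyu

ALPHA = 0.025  # significance level used in the study

# 2x2 table: rows = inadequate vs adequate master impressions,
# columns = "records" group vs "memory" group (counts are invented).
table = [[28, 15],
         [72, 105]]
odds_ratio, p = fisher_exact(table)
print(f"Fisher's exact test: OR = {odds_ratio:.2f}, p = {p:.3f}, significant = {p < ALPHA}")

# Ordinal responses (e.g., rating categories) from small vs large laboratories
# could be compared with a Mann-Whitney test (again, invented data).
small_labs = [1, 2, 2, 3, 1, 2, 3]
large_labs = [2, 3, 3, 4, 2, 3, 4]
u_stat, p2 = mannwhitneyu(small_labs, large_labs, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat}, p = {p2:.3f}, significant = {p2 < ALPHA}")

Note that, as in the study, significance is judged here against 0.025 rather than the conventional 0.05 to reduce spuriously significant findings arising from multiple testing.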
Written instructions were the most widely used means of communicating the size and shape of crowns, and were often supplemented with photographs, drawings, or the use of diagnostic wax-ups. The study also showed that dental technicians often had to decide on the type of material and the surface on which to use the material, as it had not been accurately prescribed by the dentist. Almost a quarter (24%) of dental technicians routinely had to choose both the materials to be used for the fixed prostheses and the particular surfaces that needed to be covered with an aesthetic veneering material. For the fabrication of fixed prostheses, base metal alloys (52%) were the most commonly used, with high gold content alloys only used in 8% of cases. The most commonly requested combination of materials for the construction of both anterior and posterior crowns was metal-ceramic (69% and 71% respectively). All-ceramic crowns accounted for 29% of anterior cases and only 8% of posterior crowns. Metal-only posterior crowns were used in only 19% of situations. No statistically significant effects were observed for the aforementioned results. This cross-sectional survey was undertaken to identify the communication and production techniques used by dentists and dental technicians for the fabrication of fixed prostheses within the UK from the dental technicians' perspective. The current publication reports on the production techniques used. The response rate of 32% was similar to previously published surveys. Personal bias may have affected the accuracy of the results, as the majority of the information used to answer the survey questions was sourced from memory. Dental technicians could have exaggerated the extent of poor impression disinfection and suitability of the impressions, as well as potential issues in production techniques. Nevertheless, the statistical analysis showed that the source of the information did not play a significant role. The results of this study showed that a significant number of dental laboratories were receiving impressions from dentists that were not clearly labelled as having been disinfected. It was also shown that 65% of laboratories would routinely disinfect the impressions on their arrival. It seems that there is a lack of agreement between dentists and laboratories regarding decontamination and disinfection of dental impressions, even though clear guidelines have been made available via the British Dental Association. Full-arch plastic trays were the most frequently used impression trays for the fabrication of crowns and bridgework, and this confirms previous findings. A concerning finding of this survey was that a high proportion of final master impressions were considered inadequate for use by the dental technician. Most troublesome was the fact that the majority of the inadequate impressions presented with a combination of problems. Similar results regarding the frequency and aetiology of inadequate definitive impressions have been reported in previous studies. The lack of sufficient tooth preparation presents the dental technician with the difficult task of fabricating a crown or bridge with adequate form and aesthetics. To date there has been no research data on the use of articulators and occlusal records within commercial dental laboratories in the UK. This particular survey showed that the semi-adjustable articulator was favoured in the fabrication of fixed crown and bridgework, being used 44% of the time.
This type of articulator is also preferred in dental schools in the UK. The results of this study also showed that, in the majority of cases (68%), no guides were provided by the dentists for the fabrication of definitive prostheses. When guides were provided, they usually took the form of written instructions or photographs. Guides such as the diagnostic wax-up, a copy of the provisional crowns, and occlusal aids such as a custom incisal guide table should be provided by dentists. This survey concurs with previous ones. The increasing cost of gold was reflected in the popular use (52%) of base metal alloys for crown and bridgework. This has also been a trend in other countries. Within the limitations of this UK-based study, the following conclusions could be drawn: there is still an apparent lack of protocol in the disinfection of impressions between dentists and laboratories, thus creating a potential health risk; plastic full arch trays were the dentists' preferred choice of impression tray for recording master impressions; dentists frequently sent master impressions to the laboratory that were not appropriate for the fabrication of fixed prostheses; greater use of diagnostic wax-ups and reduction guides should be made by dentists to ensure adequate tooth reduction; the dental technicians in the main did not trust the accuracy of the occlusal relationship records provided; dentists frequently failed to prescribe the material to be used or the design of the prosthesis, incorrectly leaving the decision to the dental technician; and metal-ceramic crowns were still the most popular choice for both anterior and posterior units."} +{"text": "Luteinizing hormone-releasing hormone (LHRH) neurons and fibers are located in the anteroventral hypothalamus, specifically in the preoptic medial area and the organum vasculosum of the lamina terminalis. Most luteinizing hormone-releasing hormone neurons project to the median eminence, where the hormone is secreted into the pituitary portal system in order to control the release of gonadotropins. The aim of this study is to provide, using immunohistochemistry on female rat brains, a new description of the localization of luteinizing hormone-releasing hormone fibers and neurons in the anterior hypothalamus. The greatest amount of the LHRH immunoreactive material was found in the organum vasculosum of the lamina terminalis, which is located around the anterior region of the third ventricle. The intensity of the reaction of LHRH immunoreactive material decreases from cephalic to caudal localization; therefore, the greatest immunoreaction is in the organum vasculosum of the lamina terminalis, followed by the dorsomedial preoptic area, the ventromedial preoptic area, and finally the ventrolateral medial preoptic area, and in fibers surrounding the suprachiasmatic nucleus and the subependymal layer on the floor of the third ventricle, where the least amount of immunoreactive material is found. Luteinizing hormone-releasing hormone (LHRH) is a gonadotropin-releasing hormone (GnRH) that acts on the pituitary to release follicle stimulating hormone (FSH) and luteinizing hormone (LH), which in turn act on the gonads. The preoptic area (PA) is part of the anterior hypothalamus and is confined to the anteroventral region of the third ventricle (AV3V); the PA is divided into the medial preoptic area (MPA) and the lateral preoptic area (LPA).
The MPA makes its morphological appearance at eight weeks of gestation in humans and is located in the periventricular regions of the anterior hypothalamus covering the organum vasculosum of the lamina terminalis (OVLT) , 10. TheThe OVLT , 19 beloBrains from five female Wistar rats from Charles River Laboratories Espa\u00f1a S.A. of 6 months of age were used. Rats were kept under lighting conditions of 12:12, and food and water were provided ad libitum. Rats were sacrificed at diestrus stage , and bef The sections of five coronal cephalocaudal anatomical levels of the anterior hypothalamus were simThe immunohistochemistry slides were converted to digital images by using an LEICA DMRB photomicroscope with an LEICA DC 300 F camera (Germany). Image analysis was completed by ImageJ . The \u201cmean gray value\u201d was measured from the selected areas for all stained tissue. This value gives the average stain intensity in grayscale units for all threshold pixels. The immunohistochemistry statistical study was conducted using the IBM SPSS statistic 19 software (one-way ANOVA).Many LHRH fibers and neurons were found in different parts of the preoptic hypothalamus; the neurons presented a monopolar or bipolar morphology; the dendrites were uncomplicated, without branches and roofed with spines Figures . The distribution of LHRH cells and fibers is described with the use of the Paxinos and Watson atlas of the rat , the Paxa till the level d, where the smallest amount of immunoreactive material was found was found in the OVLT at the level as found . The findings here agree with morphological studies which report that the GnRH neurons have long and simple uni- or bipolar dendrites typically covered in spines , 32. The In view of the prevailing significance of the GnRH neurons, it is necessary to know their anatomical localization to fully understand their structure and function in general, specifically in the sexual dimorphism. At the same time, it is unclear whether the medial preoptic area is a diencephalic or a telencephalic structure because the MPA is located in the anterobasal forebrain or prosencephalon during the early stages of its development. But in the following stages, when the diencephalon and telencephalon develop and differentiate from the prosencephalon, the preoptic area is anatomically localized in the anterior hypothalamus , 11, altc and d , and therefore, this is the terminology used in this study. Positive LHRH cells and fibers located in different parts of MPA and in OVLT such as dorsomedial preoptic area (DMPA), Figures Figures , would cIn view of that described above, one could conclude that the exact denomination of the LHRH localization is peculiar; however, one could say that the greatest amount of LHRH immunoreactive material (LHRH-ir) was found in the OVLT and decreased cephalocaudally in the preoptic hypothalamus. 
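The grayscale quantification and one-way ANOVA described in the methods above can be sketched outside ImageJ and SPSS. The following minimal Python example uses hypothetical file names and region coordinates, and assumes Pillow, NumPy and SciPy are available; it measures a mean gray value for a selected area in each section and compares anatomical levels with a one-way ANOVA, as a sketch of the approach rather than the study's actual pipeline.

import numpy as np
from PIL import Image
from scipy import stats

def mean_gray_value(path, box):
    """Average grayscale intensity of a rectangular region (left, upper, right, lower)."""
    img = Image.open(path).convert("L")            # 8-bit grayscale, comparable to ImageJ's mean gray value
    region = np.asarray(img.crop(box), dtype=float)
    return region.mean()

# Hypothetical sections from three anatomical levels, five rats each (file names are placeholders).
levels = {
    "OVLT": [mean_gray_value(f"ovlt_rat{i}.tif", (100, 100, 300, 300)) for i in range(1, 6)],
    "DMPA": [mean_gray_value(f"dmpa_rat{i}.tif", (100, 100, 300, 300)) for i in range(1, 6)],
    "VMPA": [mean_gray_value(f"vmpa_rat{i}.tif", (100, 100, 300, 300)) for i in range(1, 6)],
}

# One-way ANOVA across anatomical levels (the comparison reported from SPSS in the study).
f_stat, p_value = stats.f_oneway(*levels.values())
print(f_stat, p_value)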
LHRH immunoreactive material was found in the following parts: dorsomedial preoptic area, ventromedial preoptic area, and ventrolateral medial preoptic area, and in fibers surrounding the suprachiasmatic nucleus and subependymal layer on the floor of the third ventricle where the least amount of LHRH fibers and neurons was found."} +{"text": "Chronic migraine represents the most disabling condition among headaches, in particular when migraine is associated with drug abuse.Patients with chronic migraine (CM) coming to our centres are difficult to treat, both because of their refractory to antimigraine prophylactic treatment and for the combination of several comorbidities, that often need a multidisciplinary approach that leads to a multi-prescription of drugs.\u00ae) is an important therapeutic option both for its efficacy in the long term, and for the safety profile, due to the lack of clinically significant side effects.The treatment with OnabotulinumtoxinA (BotoxIn our Headache Centre we performed a retrospective study including a sample of 67 patients with a diagnosis of CM associated with drug abuse according to the ICHD-III (beta) classification. The patients were treated with OnabotulinumtoxinA according to the paradigm of the PREEMPT study (155 U to 31 injection sites) [The purpose of our study was to evaluate the duration of the Botox's efficacy in terms of headache days (HD), analgesic consumption (AC) and to assess the patients\u2019 quality of life by some self-administered scales and pain scale (VAS) .We recorded medical charts for 67 patients. However, we report the data concerning the results of only 57 patients since they represent the ones who were injected regularly every 3 months without interruption, some of them being injected up to cycle 7. Ten patients discontinued for regulatory reasons.Positive trend of the effectiveness of the treatment appears to be significant in all parameters evaluated as shown in the table This retrospective study confirms the safety and tolerability profile of repeated treatment with OnabotulinumtoxinA and shows a good consistency of the therapeutic effect over one year of treatment. The trend of the clinical parameters suggests other studies to further investigate the long-term efficacy of the treatment, as recently suggested by Pascual .Moreover, it is important to outline that in our sample we did not register any clinically relevant side effect, besides slight pain in the site of injection, and two cases of transient hypotension during the injection protocol, spontaneously reversed.Written informed consent to publish was obtained from the patient(s)."} +{"text": "We propose some objective points which could enhance the internal validity of the study ( We have read the article entitled \u201cPrevalence of marijuana use among university students in Bolivia, Colombia, Ecuador and Peru\u201d with great interest . In factThere have been previous studies concerning the evolution of legal and illegal drug use in the United States ,3 and oti.e., those who did not participate). 
We would also recommend presenting weighted and unweighted estimates. Therefore, we suggest that the authors present the participation proportion of each of the surveys to assess the internal validity of the study, and thus evaluate the potential selection bias in relation to the population not involved in the study. We suggest reporting the participation proportion for each survey and presenting weighted and unweighted estimates."} +{"text": "In the recently published review article by Jagmag et al., some current models of Parkinson's disease (PD) were discussed. The pathological hallmark of PD involves the progressive loss of neurons in the substantia nigra pars compacta (SNpc). The reserpine model was one of the first models used to investigate the pathophysiology of PD and to demonstrate the therapeutic efficacy of L-DOPA, which remains the gold-standard treatment for PD (Carlsson et al.). Our research group has proposed that repeated administration of low doses of reserpine can mimic the progressive nature of PD (Santos et al.). All authors participated in the preparation and discussion of the commentary. Designed and organized the illustration: AG. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "In the Results section of the Abstract, the last sentence contains a typo. The sentence should read as follows: In contrast, the risk of typhoid fever did not vary geographically or with elevation among individuals more than ten years of age."} +{"text": "Vocal cord paralysis (VCP) can be caused by any process that interferes with the normal function of the vagal nerves or recurrent laryngeal nerves. It may be a first sign of extensive and severe pathology. Radiologists must therefore be able to recognise the imaging findings of VCP and know the course of the vagal and recurrent laryngeal nerves. This review focuses on the anatomy and imaging evaluation of these nerves and thereby the possible sites for pathology causing VCP. The imaging characteristics and imaging mimics of VCP are discussed and cases from daily practice illustrating causes of VCP are presented. \u2022 Vocal cord paralysis may be the first presentation of severe pathology. \u2022 Radiologists must be aware of imaging characteristics and mimics of vocal cord paralysis. \u2022 Lesions along the\u00a0vagal nerves and recurrent laryngeal nerves can cause vocal cord paralysis. The vocal cords play a crucial role in phonation. The muscles that are responsible for vocal cord movement are mainly innervated by the recurrent laryngeal nerves. The recurrent laryngeal nerves are branches of the vagal nerves. Vocal cord paralysis (VCP) can therefore be caused by any lesion along the course of the vagal nerves above the branching of the recurrent laryngeal nerves or of the recurrent laryngeal nerves themselves. An offending lesion located in the brainstem or the skull base usually results in multiple cranial nerve deficits because at this level the vagal nerve is intimately related to other cranial nerves. Pathology involving the recurrent laryngeal nerves and/or the extracranial vagal nerves frequently results in isolated laryngeal symptoms. VCP most frequently affects one side but can be bilateral. Due to the long anatomical course of the vagal and recurrent laryngeal nerves, there are many disease processes that can cause VCP. Surgery, malignancy, trauma, infection and inflammation can all result in VCP.
A review of more than 800 patients showed that iatrogenic injury by mediastinal and neck surgery is the most important cause of VCP . Around Clinically, vocal cord function can be assessed by laryngoscopy, during which a stroboscopic light can confirm the absence of movement of the affected side. Symptoms of VCP include: hoarseness, vocal fatigue, loss of vocal pitch, shortness of breath and aspiration . HoweverThis review focuses on the anatomy and imaging evaluation of the vagal and recurrent laryngeal nerves and thereby the possible sites for pathology causing VCP. The imaging characteristics and imaging mimics of VCP are discussed and cases from daily practice illustrating various causes of VCP are presented.The vocal cords are located in a subsite of the larynx, called the glottis. The glottis includes the true vocal cords, the anterior commissure and the posterior commissure. From medial to lateral the vocal cords consist of the mucosal surface, the vocal ligaments and the intrinsic laryngeal muscles . The anterior commissure is the midline area where the cords meet anteriorly and where they are attached to the thyroid cartilage. The posterior commissure is the mucosal surface anterior to the cricoid cartilage in between the arytenoid cartilages. Posteriorly, the vocal cords are attached to the arytenoid cartilages and laterally to the inside surface of the thyroid lamina. The medial margins are free to permit the opening and closing of the airway. During quiet respiration the cords are in a relaxed, abducted state Fig.\u00a0. Breath-The vagal nerve exits bilaterally from the medulla oblongata just lateral to the oliva through the olivary sulcus. There are three nuclei within the medulla that receive and transmit information from and to the vagal nerve. The nucleus ambiguus gives rise to motor fibres to the larynx and is located just dorsally to the inferior olive, lateral and ventral to the lower part of the fourth ventricle Fig.\u00a0. After eBilateral VCP is indicative of a central cause in the medulla oblongata . Acute oTo depict pathology in the course of the extracranial vagal nerves and the recurrent laryngeal nerves, we prefer contrast-enhanced computed tomography (CT) from the midbrain to the aortic arch including the AP window Fig.\u00a0 15]. Wi. Wi15]. The most specific findings of unilateral VCP are , 16: (1)18F-fluorodeoxyglucose positron emission tomography (FDG-PET) exams of patients with VCP. In such cases, the PET may show increased metabolism in the unaffected vocal cord due to compensatory hypertrophy the imaging characteristics and mimics of VCP, (2) the expected course of the vagal nerves and recurrent laryngeal nerves, and (3) the different types of pathology that can occur along their course."} +{"text": "Throughout the last decade, our understanding of the mechanisms involved in gene regulation has increased enormously, no more so than the role of non-coding RNAs (ncRNAs) in the regulation of gene expression and how their levels change following exercise and under pathological conditions (Aoi and Sakuma, The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "The rapid development of interventional procedures for the treatment of arrhythmias in humans, especially the use of catheter ablation techniques, has renewed interest in cardiac anatomy. 
Although the substrates of atrial fibrillation (AF), its initiation and maintenance, remain to be fully elucidated, catheter ablation in the left atrium (LA) has become a common therapeutic option for patients with this arrhythmia. Using ablation catheters, various isolation lines and focal targets are created, the majority of which are based on gross anatomical, electroanatomical, and myoarchitectual patterns of the left atrial wall. Our aim was therefore to review the gross morphological and architectural features of the LA and their relations to extracardiac structures. The latter have also become relevant because extracardiac complications of AF ablation can occur, due to injuries to the phrenic and vagal plexus nerves, adjacent coronary arteries, or the esophageal wall causing devastating consequences. There continues to be a lack of understanding of the pathogenesis of AF. Current evidence suggests that the pathogenesis of AF is multifactorial, because this arrhythmia may not only accompany a variety of pathological conditions, but also occur in a heart with no known structural abnormality, a condition known as \u201clone AF\u201d . Recent From a gross anatomical viewpoint, the LA has four components : (1) a vAn anatomic septum in a heart is like a wall that separates adjacent chambers so that its removal would enable us to enter from one chamber to the other without exiting the heart. Thus, the true IAS wall is confined to the flap valve of the oval fossa. The flap valve is hinged from the muscular rim of IAS that, deriving from the septum secundum, is seen from the right atrial aspect of the interatrial wall . At its The major part of the endocardial LA including the septal wall and interatrial groove component is relatively smooth. The left aspect of the interatrial groove, apart from a small crescent-like edge , is almoThe walls of LA are nonuniform in thickness and in gDetailed dissections of the subendocardial and subepicardial myofibers along the entire thickness of the LA walls have shown a complex architecture of overlapping bands of aligned myocardial bundles , 13 Fig. The terAlthough there are some individual variations, our epicardial dissections of the LA have shown a distinctive pattern of arrangement of the myocardial fibers . On the The epicardial fibers of the superior wall are composed of longitudinal or oblique fibers, (named by Papez as the \u201cseptopulmonary bundle\u201d in 1920) Figures that ariOn the subendocardial aspect of LA, most specimens showed a common pattern of general architecture. The dominant fibers in the anterior wall were those originating from a bundle described by Papez as the septoatrial bundle . The fibAtrial fibrillation is the most common sustained cardiac arrhythmia and is characterized by uncoordinated contraction of the atrium. It is still unclear whether the initiation and maintenance of human AF depends on automatic focal or reentrant mechanisms. Recent reports have shown the contribution of different atrial regions on the fibrillatory process and to the maintenance of AF, emphasizing the role of structural discontinuities and heterogeneous fiber orientation favoring anatomic reentry or anchoring rotors , 16. TheAlthough different mechanisms of AF exist, it is well established that the myocardial sleeves of the PVs, especially the superior veins, are crucial sources of triggers that initiate AF . CardiacNormal PVs anatomy consists of two right-sided and two left-sided PVs with separate ostia Figures and 3. 
HThe presence of atrial myocardial tissue extending over the wall of the PVs has been confirmed both macroscopically and histologically by many investigators \u201331. In mSeveral studies have been designed to investigate and to compare the pathology of the PVs in patients with and without AF , 31. AltThe left lateral ridge (LLR) between the orifices of the left PVs and the mouth of the LAA is the most relevant structural prominence of the endocardial LA Figures, and 8. Within the LLR, the oblique vein of Marshall runs, together with abundant autonomic nerve bundles and a smLinear ablation connecting the inferior margin of the ostium of the left inferior PV and the mitral annulus, particularly when complete linear block is achieved, appears to increase the success rate of catheter ablation in patients with persistent or long-standing/permanent atrial fibrillation and prevent macroreentry around the mitral annulus or the left PVs . AlthougAdditional linear ablation lesions are created to improve the outcomes of PV isolation during AF ablation. In their study Cho et al. using muThe LAA appears to be responsible for triggering AF in 27% of patients presenting for repeat procedures of catheter ablation . The LAAIn autopsy specimens and imaging studies, the LAA ostium is usually elliptical or round and in elliptical-shaped variant and its long axis is obliquely orientated relative to the mitral annulus , 53. TheIn some specimens (28%), muscular trabeculations can be found extending inferiorly from the appendage to the vestibule of the mitral valve . These eDue to the close proximity of the esophagus to the posterior wall of the LA, ablation procedures involving this region of the LA may cause esophageal damage and result in the formation of an atrial esophageal fistula Figure . In an aThermal injury may involve the periesophageal vagal nerves, resulting in acute pyloric spasm and gastric hypomotility . The vagThe phrenic nerves lie along the lateral mediastinum and run from the thoracic inlet to the diaphragm . PhrenicThe intrinsic cardiac nervous system (ICNS) is a crucial regulator of heart rate, atrial and ventricular refractoriness, contractility, and coronary blood flow. Morphologically, the ICNS forms a neural ganglionated plexus that may be subdivided into epicardial and myocardial subplexuses. Several studies indicate that the ICNS is a complex of distinct subplexuses and that the cardiac ganglia are mainly distributed on (1) the superior surface of the right atrium, (2) the superior surface of the left atrium, (3) the posterior surface of the right atrium, (4) the posterior medial surface of the left atrium , and (5) the inferior and lateral aspect of the posterior left atrium and left PVs , 64. PosCatheter ablation has been shown to be an increasingly important therapeutic option for patients with paroxysmal, persistent, and chronic AF. Ablation techniques have evolved from rather limited initial approaches to quite extensive atrial interventions. The LA has a distinctive atrial appendage and an atrial body that comprises component parts that blend into one another. The patterns of general myocardial arrangement in the left atrial wall and the presence of interatrial muscle bundles may provide some anatomic background to atrial and interatrial conduction. 
Understanding the structure of the component parts and their relationship to one another and to other cardiac structures is relevant to interventional procedures inside and outside of the LA."} +{"text": "Conducting clinical trials in paediatric rheumatology has been difficult in the past mainly because of the lack of funding for academic studies and the lack of interest by pharmaceutical companies for the small and non-rewarding paediatric market. The situation changed dramatically few years ago with the introduction of the Best Pharmaceuticals for Children Act in USA and of a specific legislation for paediatric medicines development (Paediatric Regulation) in the European Union (EU).http://www.prcsg.org), covering North America, and the Paediatric Rheumatology International Trials Organisation (PRINTO at http://www.printo.it), covering more than 50 countries worldwide; the availability of validated measures to evaluate response to therapy, now called JIA American College of Rheumatology (ACR) criteria, accepted by both the Food and Drug Administration (FDA) and the European Medicines Agency (EMA); last but not least the advent of the biologic therapies which have revolutionized juvenile idiopathic arthritis (JIA) treatment.The main reasons for success are: the availability of two large international non-for-profit networks working in close collaboration, such as the Pediatric Rheumatology Collaborative Study Group (PRCSG at Some problems however remain still to be solved: There is a need to harmonise all the regulatory aspects related to drugs that are used in the treatment of paediatric rheumatic diseases and in particular in JIA; the issue of me too drugs; the issue of proper pK studies; the ethics of drugs provision and of trial implementation; the implementation of proper pharmacovigilance systems.This presentation will review the reasons for success and the problems that still remains to be solved for conducting trials in JIA.None declared."} +{"text": "In the developing central nervous system, most neurogenesis occurs in the ventricular and subventricular proliferative zones. In the adult telencephalon, neurogenesis contracts to the subependyma zone and the dentate gyrus (subgranular zone) of the hippocampus. These restricted niches containing progenitor cells which divide to produce neurons or glia, depending on the intrinsic and environmental cues. Neurogenic niches are characterized by a comparatively high vascular density and, in many cases, interaction with the cerebrospinal fluid (CSF). Both the vasculature and the CSF represent a source of signaling molecules, which can be relatively rapidly modulated by external factors and circulated through the central nervous system. As the brain develops, there is vascular remodeling and a compartmentalization and dynamic modification of the ventricular surface which may be responsible for the change in the proliferative properties. This review will explore the relationship between progenitor cells and the developing vascular and ventricular space. In particular the signaling systems employed to control proliferation, and the consequence of abnormal vascular or ventricular development on growth of the telencephalon. It will also discuss the potential significance of the barriers at the vascular and ventricular junctions in the influence of the proliferative niches. 
It is essential that an organ and its blood supply should develop together in synchrony to allow optimal conditions for the different stages of growth, differentiation and changing functional requirements. This co-dependence is particularly evident in the brain, where the vascular network originates from the peri-neural vascular plexus in early embryonic development and undergoes multiple stages of remodeling to meet the changing needs of the complex developing environment . Both the VZ in the developing brain and the SEZ in the adult have close association with the ventricular surface and the CSF. Therefore, in addition to proliferative support from the cerebrovasculature, trophic regulation/modulation is thought to come from the CSF is a major signaling molecule that regulates both vascular and neural development and may be important in the establishment of the neurovascular niche. Shh has a clear role in regulating neural development, and in the dorsal cortex conditional knockout of Shh or its receptor Smoothened (Smo) results in reduced cell division, and a smaller cortex (Komada et al., \u03b2-catenin and the Wnt signaling pathway also affect vessel ingrowth from the peri-neural plexus and the structure of the developing vessels. Endothelial specific modifications of \u03b2-catenin expression, Wnt7 knockout or delivery of the soluble frizzled-8 receptor to bind extracellular Wnt ligand all cause a disruption in the vascular ingrowth into the parenchyma and malformation of the vessel beds, characterized by enlarged vascular space and thickening of the vascular wall with multiple layers of endothelial cells (Stenman et al., Contact-dependent signaling is another local environmental regulator of growth. McCarty et al. have shoin utero injection of VEGF into the dorsal cortex, causes a change in distribution of the Tbr2 positive SVZ progenitor cells in conjunction with the new vascular network, and a disruption of radial fibers and axonal ingrowth into the surrounding tissue (Javaherian and Kriegstein, in vitro. These studies confirmed that BDNF produced by the endothelial cells was in fact secreted to affect the neurogenic precursors. Further co-culture studies confirmed that secretions from endothelial cells increased proliferation of precursors and ultimately facilitated the production of a larger number of neurons, of all neuronal classes (Shen et al., In turn, vascular development can control proliferation and brain development. For example, ectopic vascular development, induced by There are certainly many studies showing both secreted protein from endothelial cells, and progenitor-endothelial cell contact is important for maintain proliferation in the SEZ. Transplant studies of neural precursor cells into the SEZ have shown that the chemokine cxcl12, which is produced by endothelial cells, is important for the localization of these cells to the vasculature. Cxcl12 appears to act through the activation of integrin that allows cell binding to the endothelial cell wall, and support migration of differentiating cells out of the SEZ (Kokovay et al., Vascular production of VEGF is likely to be due to HIF activation in response to the low oxygen environment of the developing brain. 
HIF knockout in neural crest cells leads to reduced vascular density in the developing brain, and gross abnormalities in cellular migration (Tomita et al., Recent work has provided evidence in support of a further neurogenic niche in the adult brain, that of perivascular stem cells that do not proliferate in control conditions, but are upregulated following injury, such as ischemia (Ohira et al., In addition to secreted signaling to neural precursors, the endothelial cells may also support the progenitor population in a contact-dependent manner (contact between type B and endothelial cells in the SEZ is shown in Figure While the ventricular zone in the developing brain is characterized by specialized junctions between the neuroepithelial cells, there are no such junctions between the ependymal cells that make the border with the CSF in the adult SEZ. The absence of these junctions between the ependymal cells appears to be very important for the structure of the SEZ, as progenitor cells are intercalated between the ependymal cells (Shen et al., Both the VZ in the developing brain and the SEZ in the adult brain have clear connections with the CSF (see Figures The possibility that systemic signaling may directly alter the neurogenic niche has been explored by a number of authors, and it is clear that signaling through inflammatory pathways can cause short- and long-term changes in proliferation in the brain (Stolp et al., Both the CSF and the vasculature of the brain provide regulatory niches for neurogenesis in the developing and adult brain. Variation in angiogenesis and CSF production clearly affect proliferation in the neurogenic niches as a result of cross-talk between these regions, which appears to be dependent both on secreted tropic factors and contact-dependent signaling pathways. The brain barriers, which regulate the internal environment and contribute to the contact between cells in the neurogenic niches are part of this complex system that can be altered both in systemic and central injury. The localization of neurogenic niches makes them sensitive to circulating soluble factors. Recent work from Villeda et al. highlighThe authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "She was put under symptomatic treatment after discontinuation of the offending molecule with a good mucocutaneous improvement. After healing of lesions of the oral mucosa in December 2011, the examination of the oral cavity has found a nodular median lesion of the dorsal surface of the tongue, in favor of a granulation tissue at the histology with spontaneous regression of the rest of lesion. The current decline is two years and two months without local recurrence.A 55 years old woman, was hospitalized in November 2011 for a Stevens-Johnson syndrome with severe mucosal impairment appeared three weeks after taking allopurinol (Zyloric"} +{"text": "Cancer is a class of diseases characterized by uncontrolled cell growth and has the ability to spread or metastasize throughout the body. In recent years, remarkable progress has been made toward the understanding of proposed hallmarks of cancer development, care, and treatment modalities. Radiation therapy or radiotherapy is an important and integral component of cancer management, mostly conferring a survival benefit. Radiation therapy destroys cancer by depositing high-energy radiation on the cancer tissues. 
Over the years, radiation therapy has been driven by constant technological advances and approximately 50% of all patients with localized malignant tumors are treated with radiation at some point in the course of their disease. In radiation oncology, research and development in the last three decades has led to considerable improvement in our understanding of the differential responses of normal and cancer cells. The biological effectiveness of radiation depends on the linear energy transfer (LET), total dose, number of fractions and radiosensitivity of the targeted cells or tissues. Radiation can either directly or indirectly damages the genome of the cell. This has been challenged in recent years by a newly identified phenomenon known as radiation induced bystander effect (RIBE). In RIBE, the non-irradiated cells adjacent to or located far from the irradiated cells/tissues demonstrate similar responses to that of the directly irradiated cells. Understanding the cancer cell responses during the fractions or after the course of irradiation will lead to improvements in therapeutic efficacy and potentially, benefitting a significant proportion of cancer patients. In this review, the clinical implications of radiation induced direct and bystander effects on the cancer cell are discussed. Cancer is a complex disease, which grow locally and also possesses the capacity to metastasize to different organs in the body. Cancer continues to be a major disease and the numbers of cancer cases are projected to be more than double worldwide in the next 20\u201340 years and surpass heart disease as the leading cause of death , total dose, fractionation rate and radiosensitivity of the targeted cells or tissues Hall, . Low LETThe overall outcome of radiation treatment is cell or tissue damage; if it is not repairable eventually kill the cells. Effectiveness of radiation therapy that have been developed over years showed an increase in the number of cancer survivors, but preventing or reducing late effects are a significant public health issue. Furthermore, increase in the number of cancer survivors has stimulated interest in the quality of life of cancer survivors. The situation is important among non-elderly adults. In particular, children are inherently more radiosensitive and have more remaining years of life during which radiation induced late effect in normal cells could manifest in their hyperproliferation and ribonucleotide reductase (RNR) and further prolongation of radiation-induced gamma H2AX foci formation activity and epigenetic changes were reported in the RIBE (Camphausen et al., RIBE is also reported using mouse model, the bystander responses of internal tumor cells or tissues were also confirmed Though tremendous progress has been made toward understanding the hallmarks of cancer, cancer is responsible for one in eight deaths worldwide (Garcia et al., The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "The goal of radiation therapy is to deliver a lethal dose of radiation to diseased tissue while minimising dose to surrounding healthy structures. Prior to the actual treatment delivery, treatment planning is the most critical part of a patient's radiation treatment management. The most crucial step is the accurate localisation and delineation of the target volume. 
Advances in tumour localisation and treatment delivery capabilities are limited by the inability to deliver treatment with complete precision to the localised tumour on a day-to-day basis over an entire course of radiation treatment. Most solid tumours are soft tissue masses, so the lack of inherent soft tissue contrast within images of intrathoracic regions can result in reduced visualisation and distinction of tumour boundaries from surrounding structures such as blood vessels, fatty tissues and lymph nodes.There are many sources of uncertainty in radiation therapy which impact on treatment accuracy and patient outcome. Geometric uncertainties cause deviations between the intended dose and the actual dose received by the tumour volume. These uncertainties consist of both external and internal factors. The external factors relate to the external patient set up displacements and the internal influences are due to organ motion and respiration.5The implementation of IGRT in radiation treatment has reduced the impact of organ motions and set up errors. Can the increased precision mitigate the issues associated with target volume delineation? I do not think so because IGRT is only as precise as the accuracy of the delineated target volume. The precision of IGRT and the steep dose gradient of the intensity modulated radiation therapy (IMRT) technique made accurate target volume delineation ever more important. The recent article by Liang et\u00a0al. investigating the effect of IGRT on the margin between the clinical target volume (CTV) and planning target volume (PTV) in lung cancer found that the application of IGRT reduced the geometric uncertainties, but was unable to completely mitigate the errors.7Current target volume delineation protocol is based on the guideline of International Commission on Radiation Units and Measurement (ICRU) recommendations for consistent definition of target volumes, but individualised based on tumour location, size, proximity to dose-limiting structure and the probability of high-grade treatment-related toxicities occurrence.9Target volume delineation and margin determination is even more challenging in lung when compared to other anatomical regions. The issue with overlapping anatomical structures that appears with similar densities on imaging scans making it difficult to distinguish, making target volume delineation highly imprecise and margin with high degree of variation. Liang et\u00a0al. found the largest margin values were attributed to individual radiation oncologist's (RO) ability to identify and delineate the nodal invasion.10Uncertainties can be minimised by having concise contouring procedures and protocols, use of multimodality imaging techniques, training and multidisciplinary consultations either within the department or between institutions. A study by Senan et\u00a0al. demonstrated the existence of statistically significant inter-observer variability with standardised contouring protocols and patients. The ratio of greatest delineated target volume over the smallest delineated target volume for gross tumour volume (GTV) of a T1N0 lung tumour was 1.6. Similarly, the ratio for PTV of a T2N2 lung tumour lesion was 2.Errors in target volume localisation and delineation occur early in the planning process and only once. This systematic error can produce the biggest deviation in the entire radiation therapy process and potentially can alter the outcome of the treatment. 
Once defined, this error is constant throughout the entire radiotherapy planning process and cannot be corrected unless a new target volume is defined and delineated and the patient treatment is re-planned.12Based on the evidence presented, the question should be asked whether it is prudent to reduce the tumour volume margins.The contouring of a target volume is influenced to a largest extent by the observer's subjective interpretation of what he or she sees on the images\u201d.To be able to answer that question, we must first examine the factors attributing to target volume delineation variability. The ability to accurately localise and delineate the target volume is based on the availability and the quality of imaging data and the clinical experience and expertise of the RO. With the advancement of imaging technology and technique, the visibility of tumour has increased which in turn increases the ability for ROs to delineate the borders of the malignancy as stated by Weiss and Hess. \u201cAs far as the personal attribute is concerned, the training received, years of experience and the availability of instructions on contouring all have significant impact on tumour delineation especially of the extent of microscopic involvement. The implementation of multiple imaging modalities in the management of cancer has improved the visualisation of the tumour, but this has also created further problems for ROs as many have limited expertise, in particular the interpretation of PET images. The American Society for Therapeutic Radiology and Oncology (ASTRO) has recognised the problem by developing standardised delineating protocols for common cancers. Another professional institution recommended the development of close links between radiologists, nuclear medicine physicians and ROs to optimise the interpretation of radiological images to reduce the delineating variability and improve accuracy.14Although target volume delineation can be significantly variable between different observers, I believe we are heading in the right direction in utilising the technology to be less subjective and less observer dependent in target volume delineation. This combined with precision in the treatment delivery and increased understanding of morphology and molecular profile of the tumour growth and pattern of spread has further improved the accuracy of target volume delineation leading to reduction in margins. Also with increased education and further training, the development of the standardised protocols and multidisciplinary collaboration has further reduced the degree of variability in target volume delineation."} +{"text": "The events that have led to the development of cytogenetics as a specialty within the life sciences are described, with special attention to the early history of human cytogenetics. Improvements in the resolution of chromosome analysis has followed closely the introduction of innovative technology. The review provides a brief account of the structure of somatic and meiotic chromosomes, stressing the high conservation of structure in plants and animals, with emphasis on aspects that require further research. The future of molecular cytogenetics is likely to depend on a better knowledge of chromosome structure and function. 
Variations in morphology within species, and to a greater extent between species, led Linnaeus and other taxonomists to classify all organisms in terms of genealogies with species, families and orders depending on their similarities, starting with individuals capable of reproduction that defined a species. The stage was set for ideas about the transmutability of species, the heritability of physical traits and Darwin\u2019s theory of the origin of species [Our specialty was pioneered by scientists who developed the compound microscope to study the cellular organisation of the living world. While comparative anatomists had known for centuries that all animals share physical features that suggest a common structure among creatures both living and revealed in fossils, the cytologists of the 19th Century by the behavior of chromosomes in germ cells [The mechanisms of transmission of both discontinuous and continuous characteristics across the generations were unknown before Mendel\u2019s laws were explained at the turn of the 20rm cells . Stains rm cells -6. Cyrilrm cells . These sIn 1944 it was realized that genetic transformation in bacteria was due to DNA and not protein and that DNA was the molecule responsible for heredity in genes and chromosomes . The molSince the genetic code was deciphered much has been learnt about the chromosome structure shared by all organisms from yeast to human. Much more remains to be discovered. One of the purposes of this review is to encourage research into chromosome structure as this could help advance molecular cytogenetics. The following is a brief summary of the author\u2019s view of current knowledge, emphasizing areas that need further study.We now recognize that, following DNA replication, the metaphase chromosome consists of two chromatids held together by a centromere and by cohesin. Each chromatid is a single molecule of DNA attached to protein matrix fibres that forms its scaffold or axial filament . Over 20During gametogenesis parental homologous chromosomes, each consisting of two chromatids, pair together during the long prophase of the first meiotic division and form chromosomal bivalents. Here again, the two DNA molecules of each parental chromosome are attached to protein matrix fibres that now form the axial filaments of the two lateral elements of the synaptinemal complex and these show that G- and Q-bands are associated with A-T rich regions and repetitive DNA, while the regions between bands are associated with G-C rich coding regions including genes. The A-T rich regions also correlate with the chromomere patterns observed in pachytene bivalents . Thus thth century genetic studies were mostly confined to plant and animal species rather than to humans. It was more productive to make crosses in fruit flies and mice because of the larger number of progeny that could be observed over several generations. However, a number of human pedigrees were collected and characterised [th century gave counts of 16\u201338 with most in favour of 24 [Throughout the first half of the 20cterised and inbocterised . But thecterised ; this leur of 24 -37. Moreur of 24 . Over thur of 24 until Tjur of 24 . The corur of 24 -44, all ur of 24 in humanThe first discovery of a human chromosome aberration was made by Marthe Gautier and colleagues from Paris in May 1958. They found an extra small chromosome in fibroblast cultures from several children with Down syndrome . 
This waChromosome analysis became much easier later in 1960 when lymphocyte cultures were introduced, made from small samples of peripheral blood stimulated by phytohaemaglutinin . Air-driIn previous sections of this article emphasis has been made of several technical milestones in the progress of human cytogenetics. Cell cultures, colchicine and hypotonic treatment led to the correction of the human chromosome number, and lymphocyte cultures to the widespread use of diagnostic cytogenetics and the discovery of many chromosomal syndromes. While the techniques used in the 1960s were sufficient to demonstrate the sex chromosome and autosomal aneuploidies, the Philadelphia chromosome and some of the gross structural aberrations, such as translocation Down syndrome , there wVicia faba by the group of Caspersson and Zech with quinacrine that intercalated into DNA producing dark and light Q-bands visible by UV microscopy along each chromosome [in situ to the centromeres of denatured mouse chromosomes [The introduction of chromosome banding in 1969\u201370 has been one of the most important innovations in cytogenetics. The discovery was first made in romosome . Meanwhiomosomes . The radomosomes . Variousomosomes identifiomosomes ,75. Manyomosomes and bandin situ hybridization for mapping DNA sequences to chromosomes. The method was used in 1972 to map the ribosomal genes to the short arms of the human acrocentrics [in situ hybridization (FISH) has become the standard method for gene mapping [As mentioned above, Pardue and Gall were the first to use centrics but was centrics -80. Beca mapping ,82. The mapping . Further mapping . DNA proThe above account so far has discussed cytogenetic techniques that are based on conventional or electron microscopy. Flow cytometry is another valuable approach that examines chromosomes in fluid suspension. The dual laser fluorescence-activated cell sorter (FACS) was designed for the analysis of cells stained by immunofluorescence, but has been adapted for measuring and sorting chromosomes on the basis of their size and base-pair ratio -87. The Chromosome painting has played an important role in basic research on gene interactions, including regulation, in the interchromatin compartment between chromosome territories in the interphase nucleus . SpecifiCross-species reciprocal chromosome painting has been most productive in phylogenetic studies in determining the relationships between species and in predicting ancestral karyotypes . The metIt has been shown that chromosome sorting by FACS can be one of the most accurate methods for determining a species genome size and for estimating GC content of individual chromosomes. The method involves sorting a suspension of chromosomes from the test species in a mixture containing a suspension of chromosomes from the control species, such as human, containing several non-heteromorphic chromosomes whose DNA content has been accurately determined. The size in megabases of each chromosome in the test species is calculated in relation to the control chromosomes. The sum of the individual measurements equals the genome size of the species, and this estimate correlates well with genome sizes determined by sequencing, at least for species in which the draft DNA sequence is believed to be complete. 
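The genome-size calculation described above reduces to simple proportionality between dye fluorescence and DNA content. The short Python sketch below uses made-up fluorescence values and a single reference human chromosome of known size; all numbers are illustrative assumptions, not data from the studies cited.

# Hypothetical flow-sorting data: total dye fluorescence per sorted chromosome
# peak of the test species, plus one human reference chromosome of known DNA content.
reference_size_mb = 159.0          # illustrative reference chromosome size in megabases
reference_fluorescence = 1000.0    # fluorescence of the reference peak

test_peak_fluorescence = [820.0, 640.0, 510.0, 395.0, 240.0]   # one value per test chromosome

# Size of each test chromosome relative to the reference, then summed to give the genome size.
test_sizes_mb = [reference_size_mb * f / reference_fluorescence for f in test_peak_fluorescence]
genome_size_mb = sum(test_sizes_mb)

print([round(s, 1) for s in test_sizes_mb])
print("estimated genome size (Mb):", round(genome_size_mb, 1))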
The results have been used to correct many errors in the genome size database in which genome sizes have been estimated by less precise methods .Several other molecular methods have been introduced to identify chromosome deletions and duplications at high resolution. Comparative genome hybridization (CGH) depends on the comparison of the patients genomic DNA with that of a normal control . In esseThe history of human cytogenetics has been punctuated by the introduction of new technology which on each occasion has led to the discovery of an increasing number of smaller chromosome aberrations associated with disease. Modern molecular methods are capable now of identifying chromosome aberrations at the level of the DNA sequence. One of the problems of this refinement is the difficulty in distinguishing between pathological events and normal copy number variation (CNV), but this"} +{"text": "In general, observations made from studies carried out before the advent of next generation sequencing technologies have been supported by recent work. Moreover, the challenge remains to definitively link the structure and function of hydrocarbon-degrading microbial groups to improve predictive models of biodegradation.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "The ultimate goal of functional brain imaging is to estimate the neural signals that flow through the brain, mediating behavior, and conscious experience during the spectrum of activities controlled by the nervous system. Although, various brain imaging techniques are in routine use, determining the underlying neural activity remains a challenge , but the ability to localize these signals with any degree of accuracy remains remarkably elusive as the complexity of brain activation for even the simplest of tasks tends to confound attempts to resolve the local neural components contributing to the recorded scalp responses. To improve our estimates of neural signals using non-invasive brain imaging techniques, this Frontiers Research Topic invited empirical and theoretical contributions focusing on the explicit relationship of brain imaging signals to causative neural activity.The submitted contributions responded to the challenge of neural signal estimation in a variety of ways including: advanced analyses of the neural implications of magnetoencephalographic (MEG) and electroencephalographic (EEG) signals, derivations of the pathway for BOLD signal generation from the underlying neural activation signals through animal recording, human BOLD modeling studies, detailed assessment of local BOLD response components and resting-state activation, and interpretation of the new field of functional diffusion tensor imaging in terms of neural activation.Cicmil et al. highlighted the limits on localizing MEG signal sources by testing the ability of several reconstruction approaches to localize the source of retinotopic MEG signals in the human brain and found that none of the approaches for assessing angular position were suitable for resolving annular stimuli spanning different retinal eccentricities (unless restricted in angular position). A second contribution to such electrical signal analysis is the time-frequency approach to the source localization and functional connectivity from simultaneous MEG/EEG signals proposed by Zerouali et al. 
Although, this analysis specifically targeted sleep spindles, the work has broader implications for the functional integration of MEG and EEG signals and their source localization within the brain. This analysis revealed that functional connectivity across the cortex evolved during the spindles from short-range intra-hemispheric connections to longer range inter-hemispheric connections, suggesting an integrative role for these dynamic features of neural activity.Although the EEG and MEG commonly used to measure human neural activity have high temporal resolution, spatial localization of the signal source is difficult to achieve. Martin reviews the need for accurate neurovascular models of the coupling between neural activity and the local BOLD signal from animal studies. Animal studies have the striking advantage of allowing a wide variety of technical approaches to the analysis of neurovascular coupling. Martin evaluates 16 of these, from single-neuron electrophysiology to tissue oxygen voltammetry, considering both their advantages and limitations and highlighting the key areas in which our understanding of fMRI signals has been improved through the use of animal models. Howarth takes up the issue of whether cortical astrocytes , and calcium transients within them, are involved in the vascular response to neuronal activity based on the recent debate regarding whether evoked glial calcium signals occur quickly enough to account for the dynamics of neurovascular coupling. Indeed, the exact mechanisms by which astrocytes respond to changes in neuronal activity and trigger the intracellular events regulating the resulting vascular response underlying the fMRI BOLD signal remain unclear. To take an analytic approach to this question, Tyler et al. evaluate four models for the neurovascular coupling between local field potentials recorded in cortex and BOLD signals recorded simultaneously in an adjacent location, for a range of stimulus durations. The results imply that the BOLD response is most closely coupled with metabolic demand derived from the neuronal input waveform, suggesting that the astrocytic signaling is responsive to the neurotransmitter metabolism of the dendritic arborization rather than to the neuron's spiking activity.Several contributions focused on estimating the properties of the underlying neural sources that generate BOLD fMRI signals. Buxton et al. assess the coupling ratio of blood flow and oxygen metabolism to different kinds of neural activation, finding that blood flow variations are more closely coupled with stimulus-driven variations than with endogenous variations in neural activity . Variations in oxygen metabolism, on the other hand, are more closely coupled with endogenous neural variations. The authors suggest that these differences in coupling ratio reflect differential proportions of excitatory and inhibitory contributions of the neural signal to cortical BOLD signals, and hence provide a new window into the assessment of neural activity. A related topic is addressed by Chen, who uses stimulus-driven manipulations of activation and suppression to assess the excitatory and inhibitory contributions to the evoked BOLD signal. The stimuli were designed to have invariant local effects, but differential long-range interactions were found according to configural relationships of local orientations, which should produce no differences in BOLD signal in the absence of neural interactions. 
One component of the BOLD suppression was dependent on the orientation-specific inhibitory effect of the long-range interactions, while a second appeared to be a general negative BOLD response to adjacent contrast stimulation independent of the stimulus configuration. Thus, BOLD response properties can be used to identify targeted aspects of the underlying neural organization.Further studies focus on contributions to the positive and negative components of the neurovascular relationships. Gonzalez-Castillo et al. take the novel approach of analyzing the time-course of resting-state BOLD signals across the cortex to assess the stability of neural connectivity. The most stable connections were between homologous (symmetric) interhemispheric local regions, with stability persisting for several minutes. The more variable connections were found to correspond primarily to occipito-frontal connections across the traditional resting-state networks, which can be interpreted as corresponding to transient visual imagery. Gravel et al. take resting-state analysis a step further to develop the concept of local cortical connective fields. These are neural organizations analogous to neuronal receptive fields, but defined in terms of connectivity among cortical regions, rather than connectivity of the neuron to a sensory surface. In combination with the population receptive mapping developed by this group for the analysis of the visual cortex, resting-state BOLD connectivity can be interpreted in visual space. This approach allows visuotopic maps to be reconstructed using resting state data recorded in the visual cortex, enabling these authors to show that the local resting-state connectivity from visual area V1 to both V2 and V3 was invariant with eccentricity with a scale of ~2 mm, substantially smaller than the population receptive fields for visual input in these cortical areas. This work suggests that it is possible to obtain some neural properties from resting-state fMRI data.Three papers focus on advanced methods of decomposing the neural connectivity and reorganization in the brain from the distribution of BOLD signals. Yang et al. extend the analysis of BOLD activation maps. Learning may generate not only changes in the strength of activation in predefined regions of interest, but also changes in the spatial distribution of the activation across the cortex. To address this issue, the authors measure the changes in spatial distribution of activation following a simple motor learning task. Dimension reduction via singular-value decomposition was able to capture aspects of the neural reorganization produced by this form of motor learning. These findings validate the capability of computational modeling to determine properties of neural connectivity and reorganization from BOLD signal analysis.Concentrating on the example of motor learning, functional form of Diffusion Tensor Imaging (DTI). DTI is a well-established technique for assessing the anatomical organization of the fiber pathways in vivo from the local anisotropy of the diffusion directions of water molecules within brain tissue. Functional DTI, on the other hand, assesses changes in this kind of anisotropy as a result of some functional manipulation of the state of the brain. Autio and Roberts raise concerns about contamination of this form of functional analysis by leakage of BOLD signal activation from adjacent gray matter into the voxels designated as fiber pathways. 
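Since the functional DTI debate summarized here revolves around changes in fractional anisotropy along fiber tracts, a short sketch of the standard FA computation from diffusion-tensor eigenvalues may help; this is the generic textbook formula, not the preprocessing pipeline of the studies being discussed, and the example tensor values are hypothetical.

import numpy as np

def fractional_anisotropy(eigenvalues):
    # Standard FA formula from the three eigenvalues of a diffusion tensor.
    l1, l2, l3 = eigenvalues
    mean_d = (l1 + l2 + l3) / 3.0
    num = np.sqrt((l1 - mean_d) ** 2 + (l2 - mean_d) ** 2 + (l3 - mean_d) ** 2)
    den = np.sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2)
    return np.sqrt(1.5) * num / den

# Hypothetical diffusion tensor for one white-matter voxel (units of 1e-3 mm^2/s).
D = np.array([[1.7, 0.0, 0.0],
              [0.0, 0.3, 0.0],
              [0.0, 0.0, 0.2]])
evals = np.linalg.eigvalsh(D)
print("FA:", round(float(fractional_anisotropy(evals)), 3))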
Mandl et al., whose previous paper on functional changes in fractional anisotropy in the optic radiations during visual stimulation was the subject of the Autio and Roberts critique, argue that such partial voluming would only occur at the ends of fiber tracts where they meet with the cortical regions that they are connecting, whereas the reported changes in fractional anisotropy occurred throughout the tracts.The final two papers are concerned with a new In summary, functional imaging techniques are increasingly used to infer neural activity within the human brain. This special issue improves our ability to estimate these neural signals non-invasively and points us in the direction of the remaining issues that must be addressed before we can fully understand functional imaging signals.All authors listed, have made substantial, direct, and intellectual contribution to the work, and approved it for publication.CH was a Vice Chancellor's Advanced Fellow at the University of Sheffield and currently holds a Sir Henry Dale Fellowship jointly funded by the Wellcome Trust and the Royal Society (grant number:105586/Z/14/Z).The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Food and the eating thereof is a universal part of the human condition, and life is obviously impossible without adequate nutrition. The fact that the composition of diets varies so dramatically across the globe and that human populations survive and thrive fairly successfully despite seemingly very different nutrient intakes provides food for thought! Either the exact composition of the diet is irrelevant or else humans are biologically remarkably adaptable to great variation in the composition of their diet. If either of the statements in the previous sentence is correct then those individuals in what I refer to as the \u2018diet industry\u2019, an industry with enormous media appeal and reward, should feel seriously threatened.In light of the dramatic advances in the pharmacological and interventional management of patients with cardiovascular diseases, it is unfortunate that apparently conflicting advice on such a simple matter as diet is offered to patients and those at risk of disease. Such conflicting advice surely confuses rather than educates those at whom it is aimed. Lifestyle and diet are the topics reviewed by Opie (page 298) in an in-depth and scholarly review, which addresses in a balanced manner many of the current controversies. The accompanying editorial from Raal (page 302) is further valuable commentary.Awad and colleagues (page 269) report on the high prevalence rates of hypertension in the Gambia and Sierra Leone, as has been previously reported from other parts of Africa. The South African hypertension practice guideline prepared by the Hypertension Guideline Working Group of the Southern African Hypertension Society is published on page 288. It is comprehensive and includes information on lifestyle modification and education, in addition to detailed advice on pharmacotherapy. An accompanying comment (page 296) addresses the value and importance of such guidelines in clinical practice. 
It would be interesting to hear comment from colleagues in other parts of Africa as to the applicability and relevance of these guidelines to practice in their own countries.Otaigbe and colleagues (page 265) report on the prevalence of congenital heart disease, detected by echocardiography, among children referred to two specialist paediatric cardiology clinics in the Niger Delta region of Nigeria. Such information adds to our increasing knowledge of patterns of cardiovascular disease in Africa. It must be borne in mind however that this is not a population-based study, referral bias remains possible, and the authors\u2019 attribution of the high prevalence to environmental pollution is speculative.Despite the success of percutaneous interventions, coronary artery bypass grafting is still a very common operation and the impact of interventions and risk factors for complications continues to be investigated. Cingoz and colleagues (page 279) examine the impact of co-morbidity on bleeding after the operation, while Yildiz and co-workers (page 259) examine the value of patient-directed education on patient anxiety after surgery. The importance of the anxiety experienced by patients and families of hospital survivors of major cardiac surgery is often underestimated by healthcare professionals."} +{"text": "Intraspecific acoustic communication requires filtering processes and feature detectors in the auditory pathway of the receiver for the recognition of species-specific signals. Insects like acoustically communicating crickets allow describing and analysing the mechanisms underlying auditory processing at the behavioral and neural level. Female crickets approach male calling song, their phonotactic behavior is tuned to the characteristic features of the song, such as the carrier frequency and the temporal pattern of sound pulses. Data from behavioral experiments and from neural recordings at different stages of processing in the auditory pathway lead to a concept of serially arranged filtering mechanisms. These encompass a filter for the carrier frequency at the level of the hearing organ, and the pulse duration through phasic onset responses of afferents and reciprocal inhibition of thoracic interneurons. Further, processing by a delay line and coincidence detector circuit in the brain leads to feature detecting neurons that specifically respond to the species-specific pulse rate, and match the characteristics of the phonotactic response. This same circuit may also control the response to the species-specific chirp pattern. Based on these serial filters and the feature detecting mechanism, female phonotactic behavior is shaped and tuned to the characteristic properties of male calling song. In many species of insects, intraspecific signaling systems have evolved to allow mate attraction over long distances, including systems based on sex pheromones in moths and butterflies , descending (DN1), ascending , and T-shaped (TN1) auditory interneurons responds to high frequency signals. AN1 is not tuned to the temporal pattern of the calling song and the Isaac Newton Trust .The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Synchronous bursting plays an integral role in a variety of applications, from generating respiratory rhythms and inducing hormonal releases to conveying information about a stimulus. 
Since the network structure influences the dynamical behavior of a network, we aim to identify network structures that promote synchronous bursting.By generating the network topology using specific probability distributions, Zhao et al. demonstrWe construct the graph theoretical measure by restricting the numbers of incoming and outgoing connections of a neuron to take one of finitely many possible values. Fixing certain constraints, such as the number of neurons in our network and the expected number of connections, we construct a probability distribution for our network. Through averaging, we construct difference equations from the probability distribution to track the expected number of active neurons and inactive neurons in a given time step. In the special case of spiking, the difference equations force a neuron to fire if one of its neighbors fires. By considering the maximum number of active neurons within a single time step, we formulate a graph theoretical measure for synchronous spiking. Using an analogous system of difference equations, we can construct a graph theoretical measure for synchronous bursting as well. Consequently, by calculating the second order network statistics from the probability distribution, and identifying all candidate probability distributions that satisfy the constrains, we can analyze how second order network statistics from any probability distribution promote synchronous spiking and bursting.In our simulations, we find that increasing the covariance of the in-degree and out-degree monotonically increases the predicted occurrence of synchronous spiking under the graph theoretic measure. We also note that for a wide range of parameters, the number of neurons in the network and the constraints on the governing probability distribution of our network have minimal qualitative impact on the relationship between second order network statistics and predicted likelihood of synchronous spiking. Furthermore, preliminary results regarding the effect of second order network statistics on predicted synchronous bursting suggest a more intricate relationship than in the case of synchronous spiking. Based on the consistency of the measure in the case of synchronous spiking with the existing literature and due to its ability to incorporate relatively abstract properties of the network, we conjecture that the measure will provide new insight regarding the impact of the network topology on continuous time-scale models of synchronous bursting and other complex behaviors."} +{"text": "There is an error in the fifth sentence of the last paragraph of the introduction. The correct sentence is: Recently, analysis of skin from space-flown mice showed that prolonged exposure to space conditions might induce skin atrophy and dysregulate the hair follicle cycle.There are errors in the last two sentences of the first paragraph of the discussion. The correct sentence is: Neutelings et al. investigated skin from space-flown mice and suggested that prolonged exposure to space environment might induce skin atrophy and deregulate hair follicle cycle, although they reported that the number of hair follicle in anagen increased in the space environment. Considering the FGF18 function on hair cycle, our results are not inconsistent with their findings."} +{"text": "Aneurysm of left ventricle (ALV) is formed in each 5th patient with acute myocardial infarction in the presence of complete occlusion of the coronary artery (CA) by atherosclerotic plaque. 
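Referring back to the difference-equation description of synchronous spiking given in the network-bursting abstract above, the following mean-field Python sketch illustrates the general idea under deliberately simplified assumptions (identical in-degree for every neuron, and a quiescent neuron firing whenever at least one in-neighbor fired in the previous step); it is not the authors' published system of equations, and all parameter values are hypothetical.

# Mean-field sketch under illustrative assumptions (not the published model):
# track the expected fraction of active neurons when a neuron fires in the
# next time step iff at least one of its in-neighbors fired.

N = 1000                # neurons in the network (assumed)
in_degree = 4           # identical in-degree for every neuron (simplification)
steps = 10
active_fraction = 0.01  # initial seed of active neurons

history = [active_fraction]
for _ in range(steps):
    # Probability that at least one of the in_degree inputs was active.
    p_fire = 1.0 - (1.0 - active_fraction) ** in_degree
    active_fraction = p_fire
    history.append(active_fraction)

# The maximum expected activity in a single step serves as the synchrony measure.
peak = max(history)
print("expected active fraction per step:", [round(f, 3) for f in history])
print("peak expected synchrony:", round(peak, 3), "(about", int(peak * N), "of", N, "neurons)")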
But we noticed the formation of ALV in the absence of atherosclerotic stenosis, for patients with myocardial \"bridges\" (MB). The essence of this anomaly is the presence of systolic compression of the tunneled segment of the artery, which in itself raises doubts about its clinical significance. Due to the attitude of the medical community toward the MB, as a result of its ambiguous nature, and given the favorable long term trend, the MB is regarded as a variant of the norm. At the same time, increasing reports of cases of sudden death and myocardial infarction associated with the presence of MB demonstrates the relevance of this anomaly.To show possibility of formation of postinfarction aneurism of left ventricle (LV) in the absence of atherosclerotic plaques in CA.12 patients in average age 35+/-5 years with transmural MI in anamnesis underwent standard examination and surgical treatment.All patients had ECG-signs of aneurysm of antero-septal and apical area of LV, which was confirmed by ECHO study, where we notice reduction of ejection fraction less 45% (from 35 till 45%). On the coronary angiography we found myocardial \"bridge\" (MB) over middle portion of LAD with systolic compression from 30% to 100% and aneurism of the apex of LV. We performed CABG with resection of an aneurism of LV with thrombectomy (in 7 cases) on-pump with good remote results after procedure.Transient systolic compression of the LAD by MB can lead to myocardial infarction with the formation of ALV even in the absence of atherosclerotic lesions of CA."} +{"text": "Introduction: Since its inception, reduction mammaplasty has matured considerably. Primary evolution in clinical research and practice has focused on preserving tissue viability. Surgery involves preserving not only tissue viability but also function and sensation. The nipple serves as the sensate unit of the breast and is a valuable part of women's psychological and sexual health, making preservation of nipple sensation of utmost important. Studies regarding primary innervation to the nipple are few and often contradictory. We propose an unsafe zone in which dissection during reduction mammoplasty ought to be avoided to preserve nipple sensation. Methods: Circumareolar dissection of 22 cadaveric breasts was performed. Primary nerve branches to the nipple-areola complex were identified and dissected to their origin. Results: Three to 5 branches of the fourth intercostal nerve primarily innervated the nipple on 18 of 22 breast dissections. Two breasts received innervation from the third intercostal nerve and 2 from the fifth intercostal nerve. In half of the specimens, accessory innervation from the third and fifth intercostal nerves provided medial branches to the nipple. Conclusions: The fourth intercostal nerve provides the major innervation to the nipple-areola complex. Avoiding dissection in inferolateral quadrant \u201cunsafe zone\u201d of the breast during reduction mammaplasty and other breast surgical procedures can reliably spare nipple sensation and maximize patient outcomes. Primary evolution in clinical research and practice has focused on developing techniques to preserve tissue viability and breast parenchyma, skin, and nipple tissue. Previously, women with macromastia were more concerned with breast size and shape over mammary sensation. Presumably, the improved aesthetic outcome resulted in an enhanced body image and helped patients feel more sensual. 
However, surgery today involves preserving not only tissue viability but also function in terms of sensation. The nipple serves as a sensate unit in erectile function and plays a large part in the physical intimacy of women. Nipple sensation has shown to be a valuable part of women's psychological and sexual health. While preservation of nipple sensation is of utmost importance, the literature regarding primary innervation of the nipple is scant and contradictory.1Eleven dissections were performed on 22 cadaver breasts at the University of Louisville Fresh Tissue Lab. Four cadavers (8 breasts) had macromastia as determined by the investigator's judgment. Circumareolar subcutaneous dissection was performed to identify the nerves from the chest wall to the nipple using 2.5\u00d7 loupe magnification. Once the trajectory of the nerves to the nipple was identified, the nerves were dissected back to their origin of penetration of the chest fascia.Anatomical results identified 3 to 5 branches of the fourth intercostal nerve to primarily innervate the nipple on 18 of 22 breast dissections. Two breasts received innervation from the third intercostal nerve and 2 from the fifth intercostal nerve. In half of the specimens, accessory innervation from the third and fifth intercostal nerves provided medial branches to the nipple . On the -Breast-reduction surgery has evolved considerably through the centuries. Prior to the late 1800s, breast amputation was the procedure performed to eliminate excessively large breasts. Theodore Galliard-Thomas was the first to advocate preservation of some part of the glandular tissue in the 1880s.15-However, many advocate that nipple sensation is paramount to patient satisfaction as well. As the nipple is perhaps the most sensitive area of the breast, it serves a significant role in a woman's sexual life. Erectile function and sensation are frequently necessary for both the woman herself and her partner. Consequently, loss of these functions has a detrimental impact on procedure outcome and patient satisfaction.22Previous studies have demonstrated that the majority of women feel that nipple-areola sensitivity as an important part of their sexual life, and of those women who underwent breast surgery and lost nipple sensation, the majority of women were significantly bothered by the result.-1,In general, patients undergoing breast-reduction surgery demonstrate high satisfaction due to the improvement in neck, shoulder, and back pain. However, loss of sensation to the nipple results in a poorer outcome. Anatomical analysis of the innervation of the NAC possibly helps guide the surgeon in avoiding damage to the nerves of the nipple. Our anatomical study demonstrated the innervation of the nipple to come laterally from 3 to 5 branches off of the fourth intercostal nerve. In addition, in some specimens, intercostal nerves 3 and 5 provided accessory innervation. These findings were consistent in both normal and hypertrophied breast specimens. Breast size did not alter the trajectory of the nerve to the NAC. Our results demonstrate that the distortion of breast tissue observed in obese patients and patients with macromastia does not alter the anatomical course of innervation to the NAC. Furthermore, the stretching of breast tissue observed with aging as a result of loss of support by the suspensory ligaments was not observed to alter anatomical course of the intercostal nerves to the nipple. 
The fourth intercostal nerve pierces the fascia of the fifth rib just lateral to the border of the pectoralis major muscle. The nerve travels to the NAC through the inferolateral position of NAC. Previous studies have demonstrated the lateral branch of fourth intercostal nerve to be the most reliable innervation to the NAC.1Lessons learned in the anatomy laboratory demonstrate that the plastic surgeon ought to avoid excessive resection and dissection in the inferolateral areas of the breast so as to preserve the innervation of the NAC. Breast size does not appear to alter the course of the intercostal nerves through the breast parenchyma. Consequently, we propose that the findings of this anatomical study can be extrapolated for guidance of breast surgery in patients with either normal or hypertrophied breast tissue. Avoidance of the inferolateral quadrant \u201cunsafe zone\u201d during reduction mammoplasty and other breast surgical procedures can prevent damage to the fourth intercostal nerve and accessory innervation by the third and fifth intercostal nerves. Such technique will reliably maintain the primary innervation of the nipple and maximize patient satisfaction.Frequently, the plastic surgeon must individualize therapy to the patient. A fixed procedure does not always apply to every clinical scenario. Adhering to principles of techniques and knowledge of anatomy frequently serves as a foundation for the reconstructive surgeon when planning procedures. This study can aid the novice and experienced surgeons in obtaining quality outcomes in terms of not only aesthetics but also function.Preserving nipple sensation is a valuable goal in breast surgery. Many women value nipple sensation as a significant component of sexuality and quality of life. The innervation of the nipple is predictable based on anatomical findings. An unsafe zone can reliably be avoided in the inferolateral area of the breast. Clinical application of these findings demonstrates the possibility to reliably maintain the nipple as an aesthetic and sensate unit."} +{"text": "The aim of this pictorial review is demonstrate the radiological appearances of adenocarcinoma with particular focus on more unusual appearances such as cystic adenocarcinoma.Non-small cell lung cancers account for almost 85% of all lung cancers and of these, adenocarcinoma is the most common. This entity has recently been reclassified to reflect increased understanding of the underlying pathology and thus it is crucial for radiologists to understand the new classification, the role of radiology in identifying pre-invasive lesions and the guidelines for management of subsolid nodules. We present the spectrum of imaging appearances from ground glass nodules (GGNs) to solid mass lesions with histopathological correlation.It is important for radiologists to recognise the spectrum of appearances of lung adenocarcinoma and follow appropriate algorithms for surveillance or further management."} +{"text": "We screened for evidence of HCV infection in healthy heterologous monogamous spouses of chronic HCV patients and studied the relation with various risk factors. A cross-sectional study of fifty healthy monogamous heterosexual spouses of HCV-positive index cases was carried out. All participants were HBV and HIV negative. The association with various risk factors was studied. Five spouses (10%) showed evidence of HCV infection. Two partners were positive for HCV antibody alone (4%) and 3 for antibody and HCV PCR (6%). 
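For readers who wish to attach uncertainty to the prevalence figures just quoted (5/50, 2/50 and 3/50), a small Python sketch with exact Clopper-Pearson confidence intervals is given below; the choice of interval is an illustrative assumption on our part and is not described in the study itself.

from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    # Exact binomial confidence interval for k positives out of n.
    lower = 0.0 if k == 0 else beta.ppf(alpha / 2.0, k, n - k + 1)
    upper = 1.0 if k == n else beta.ppf(1.0 - alpha / 2.0, k + 1, n - k)
    return lower, upper

n = 50
for label, k in [("any evidence of HCV", 5), ("antibody only", 2), ("antibody and PCR", 3)]:
    lo, hi = clopper_pearson(k, n)
    print(f"{label}: {k}/{n} = {100.0 * k / n:.0f}% (95% CI {100 * lo:.1f}-{100 * hi:.1f}%)")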
No association was found between HCV infection and various sociodemographic parameters with the exception of older age categories. Intraspousal transmission of HCV may be an important source of spread of HCV infection. The reservoir of HCV-infected individuals in Egypt is sizable, and sexual transmission of HCV may contribute to the total burden of infection in Egypt. The extent to which HCV infection is associated with sexual exposure has been debated extensively. The 2008 Egyptian Demographic and Health Survey estimateThere are some limitations in this study. An important limitation is failure to analyze the sequence of nucleotides of the HCV genome. The detection of homology in the nucleotide sequences would have been a strong evidence of a common source of infection but it would not clarify the direction of the infection, nor the responsible risk factors. The major limitation of our study may be the relatively small number of participants.Our study results raise the possibility that HCV is sexually transmitted between spouses in Egypt. This confirms the need to screen all people in the so-called high risk groups for HCV infection. Due to the ongoing high incidence of HCV in Egypt, further research is needed to identify the exact routes of transmission and the associated risk factors so that preventive measures can be instituted."} +{"text": "Postconditioning (PostC) is an exposure of the damaged organism to extreme factors of the mild intensity to mobilize endogenous protective mechanisms. In our laboratory method of PostC using three daily trials of mild hypobaric hypoxia (MHH) was developed. It has been found that such method of the PostC effectively prevents degeneration of the hippocampal and neocortical neurons in rats, subjected to severe hypoxia (SH). Present study has been aimed at examination of the impact of oxidative processes in the development of the neuroprotection acquired in the course of hypoxic PostC during first three days of reoxygenation after SH in rats. The levels of thiobarbituric acid reactive substances (TBARS) and Schiff bases (SB) were used as markers of lipid peroxidation. In addition, the intensity of the apoptotic DNA fragmentation has been studied. During the three days after the SH a sustained increase of SB in the rat hippocampus was observed . After the first PostC episode the SB levels decreased to 150% of the baseline. Subsequently this parameter did not differ significantly from the control values. TBARS showed accumulation on the first day following the SH but afterwards its levels dropped to 40% of control and did not recover then to normal values. In the PostC animals, the levels of TBARS after each of three PostC episodes did not differ from the control values. These facts indicate that the PostC MHH balances the activity of pro- and antioxidant systems in vulnerable brain regions and promotes the effective utilization of components damaged by peroxidation. Fragments ladder typical for cells undergone apoptosis was obtained by the electrophoretic separation of the total DNA, extracted from a rat brain after one, two and three days after the SH. 
In the PostC group, the DNA fragmentation was revealed only after the first PostC episode, demonstrating antiapoptotic action PostC MHH."} +{"text": "Electrical stimulation of the preoptic area (POA) interrupts the lordosis reflex, a combined contraction of back muscles, in response to male mounts and the major receptive component of sexual behavior in female rat in estrus, without interfering with the proceptive component of this behavior or solicitation. Axon-sparing POA lesions with an excitotoxin, on the other hand, enhance lordosis and diminish proceptivity. The POA effect on the reflex is mediated by its estrogen-sensitive projection to the ventral tegmental area (VTA) as shown by the behavioral effect of VTA stimulation as well as by the demonstration of an increased threshold for antidromic activation of POA neurons from the VTA in ovariectomized females treated with estradiol benzoate (EB). EB administration increases the antidromic activation threshold in ovariectomized females and neonatally castrated males, but not in neonatally androgenized females; the EB effect is limited to those that show lordosis in the presence of EB. EB causes behavioral disinhibition of lordosis through an inhibition of POA neurons with axons to the VTA, which eventually innervate medullospinal neurons innervating spinal motoneurons of the back muscle. The EB-induced change in the threshold or the axonal excitability may be a result of EB-dependent induction of BK channels. Recordings from freely moving female rats engaging in sexual interactions revealed separate subpopulations of POA neurons for the receptive and proceptive behaviors. Those POA neurons engaging in the control of proceptivity are EB-sensitive and project to the midbrain locomotor region (MLR). EB thus enhances lordosis by reducing excitatory neural impulses from the POA to the VTA. An augmentation of the POA effect to the MLR may culminate in an increased locomotion that embodies behavioral estrus in the female rat. Protracted electrical stimulation of the ventrolateral part of the ventromedial nucleus of the hypothalamus (VMN) at low frequencies has been found to cause lasting facilitation of the lordosis reflex in female rats in the estrus, a combined contraction of the longissimus and other back muscles caused by touch-pressure stimulation on the flank-perineal skin given by male partners , was needed for electrical stimulation of the VMN to facilitate lordosis in the ovariectomized female rats. EB-induced increase in the excitability of VMN neurons does not fully explain the requirement of systemic EB to stimulation-bound facilitation of lordosis, because VMN stimulation does not promote lordosis in the absence of systemic EB, even at stronger currents. The VMN contains estrogen receptor (ER) \u03b1 positive projection neurons to the midbrain, but ER\u03b1 positive neurons are also present in the preoptic area (POA), medial amygdala, midbrain central gray (CG), and lateral septum, to name but a few is one of major projection targets of estrogen concentrating neurons in the POA . One of the targets of the VTA projection, the dorsal tegmentum, contains neurons associated with paradoxical sleep and lateral vestibular nucleus (LVN) are the origins of the ipsilateral reticulospinal and vestibulospinal tract, respectively, which innervate spinal motoneurons responsible for the induction of the lordosis. 
Lesion studies have suggested that the contribution of these tracts is not dependent upon the integrity of the other, and that the magnitude of the lordosis deficit is instead correlated with amount of giant cell loss in NGc and Deiters cell loss in the LVN (Modianos and Pfaff, Electrical stimulation of the NGc in urethane-anesthetized female rats induced antidromic activation in neurons in the CG. Antidromically driven cells were in all parts of the CG and adjacent mesencephalic reticular field except within the inner ring of the CG that surrounds the aqueduct.As with the antidromic potentials induced in the VTA in response to CG stimulation, POA stimulation reduced the rate of successful propagation of NGc-induced antidromic potentials into the soma, whereas VMN stimulation increased the rate. Thus, the pattern of descending effects originating in the EB-sensitive POA and VMN on these CG neurons is required for their control of the lordosis reflex, via the regulation of the activity of medullospinal neuron that govern the contraction of back muscles responsible for the induction of the lordosis reflex.The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "The paralysis of the external branch of spinal nerve is very rare. It manifests clinically by a weakness and abnormal morphology of the shoulder. We must think about it in front of any simple surgery of the cervical region. We report the case of a 20 year old patient, who consulted several doctors for pain and progressive weakness of the left shoulder appeared a few days after resumption of a keloid scar complicating surgical excision of a cervical lipoma operated some months earlier. Physical examination revealed strength of the left shoulder listed on 3 without articular limitation, atrophy of the trapezius muscle with ipsilateral asymmetry and fall of the left shoulder. A lesion of spinal nerve was suspected and an EMG was executed. The EMG objectified a partial lesion of the left spinal Nerve. The patient was sent in Plastic and Reconstructive surgery for nerve repair. The achievement of the external branch of spinal nerve is manifested by pain and weakness in the shoulder triggered by the anteflexion movements of the upper limb. The most usual cause is cervical lymph node biopsy. In our case, the spinal nerve lesion occurred while the resumption in keloid skin scar. This is explained by the very superficial location of the Spinal Nerve."} +{"text": "GTPases of the RAB family are key regulators of multiple steps of membrane trafficking. Several members of the RAB GTPase family have been implicated in mitotic progression. In this review, we will first focus on the function of endosome-associated RAB GTPases reported in early steps of mitosis, spindle pole maturation, and during cytokinesis. Second, we will discuss the role of Golgi-associated RAB GTPases at the metaphase/anaphase transition and during cytokinesis. In mammalian cells, GTPases of the RAB family are key regulators of multiple steps of membrane traffic. RAB GTPases play a central role in the formation of transport carriers from a donor membrane, movement of these carriers along cytoskeletal tracks and finally anchoring/fusion to the correct acceptor membrane , the Drosophila homolog of the nuclear mitotic apparatus protein (NuMA), which is known to be important for spindle formation and maintenance in mammalian cells. 
RAB5 is required for the disassembly of the nuclear envelope at mitotic entry and the accumulation of Mud at the spindle poles . The identification of specific RAB24 effectors involved in early stages of mitosis would be critical to determine the precise role of this protein at the molecular level.cdc2 membranes in interphase and is a key regulator of Golgi homeostasis , the Institut Curie and the Fondation ARC pour la Recherche Sur le Cancer.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Congenital ocular colobomas are the result of a failure in closure of the embryonal fissure. They are important causes of childhood visual impairment and blindness. A 22 year old female patient with no particular history complaining of blurred vision of left eye; Visual acuity of the left eye is limited to counting finger; examination of the anterior segment was unremarkable. At fundoscopy, a large coloboma involving the optic disc and the adjacent retina. Examination of the right eye was normal. General examination was unremarkable including the neurological examination. Ocular coloboma can be seen in isolation and in an impressive number of multisystem syndromes. Systemic associations include the CHARGE syndrome . Visual acuity can range from normal to severly impaired. In general, severity of disease can be linked to the temporalexpression of the gene, but this is modified by factors such as tissue specificity of gene expression and genetic redundancy."} +{"text": "The wide spread and high rate of gene exchange and loss in the prokaryotic world translate into \u201cnetwork genomics\u201d. The rates of gene gain and loss are comparable with the rate of point mutations but are substantially greater than the duplication rate. Thus, evolution of prokaryotes is primarily shaped by gene gain and loss. These processes are essential to prevent mutational meltdown of microbial populations by stopping Muller\u2019s ratchet and appear to trigger emergence of major novel clades by opening up new ecological niches. At least some bacteria and archaea seem to have evolved dedicated devices for gene transfer. Despite the dominance of gene gain and loss, evolution of genes is intrinsically tree-like. The significant coherence between the topologies of numerous gene trees, particularly those for (nearly) universal genes, is compatible with the concept of a statistical tree of life, which forms the framework for reconstruction of the evolutionary processes in the prokaryotic world. When in the late 1970s and early 1980s, Carl Woese and his colleagues constructed phylogenetic trees from 16S RNA sequence alignments, the resulting phylogenetic trees were thought to have solved the problem of microbial evolution Woese . Indeed,The recent paradigm shift in the study of genome evolution is most often discussed in terms of horizontal gene transfer (HGT). Yet, the very concept of HGT is conditioned on the existence of a vertical, tree-like evolutionary standard (often referred to as the \u201ctree of life\u201d) remains an open question. The answer hinges on the existence of pronounced, coherent trends in the \u201cphylogenetic forest,\u201d i.e., the entirety of individual gene trees. 
More specifically, does a tree of a universal gene reflects solely the evolutionary history of that gene or does it carry information on the evolution of other genes, and if so, how many genes and how much information? In a phylogenomic study that was specifically designed to address this question, my colleagues and I performed an exhaustive comparison of the topologies of thousands of phylogenetic trees of conserved eukaryotic genes often substantially differ in their gene repertoires has revealed a striking picture of genomes in turmoil Darwin but rathBacillus anthracis Bushman . NumerouAt present, perhaps, the best showcase for dedicated vehicles of HGT appears to be the gene transfer agents (GTAs). The GTAs are defective prophages that form virus particles in which, however, they package apparently random fragments of the bacterial chromosome, rather than the phage genome and are key contributors to innovation including origin of new clades with novel lifestyles. The contribution of gene gain and loss in microbial evolution is ostensibly greater than the contribution of point mutations. The strongest indication of the importance of massive gene transfer for the emergence of major clades comes from comparative genomics of archaea where influx of bacterial genes seems to have coincided with the origin of multiple phyla. The eukaryotes apparently evolved via a similar scenario, with the crucial distinction of the survival of the bacterial gene donor in the form of an endosymbiont. Bacteria and archaea appear to have evolved multiple dedicated devices for gene transfer.Not withstanding the ubiquity and essentiality of gene transfer, tree-like processes are intrinsic to the processes of replication and cell division. Moreover, the substantial coherence between the topologies of numerous gene trees, particularly those for (nearly) universal genes, is compatible with the concept of a statistical tree of life, a central vertical trend in genome evolution. The statistical tree of life is a natural framework for the reconstruction of processes of gene gain and loss that shape the evolution of the prokaryotic world."} +{"text": "We report the case of 55 year old man hospitalized in intensive care unit following a complication of his surgery for acoustic neuroma, for which he was intubated. During his hospitalization, he presented a bilateral exposure keratitis complicated by an abscess and corneal perforation. The ocular surface is protected by the tear film, the blinking of eyelids and the lid closure. The tear film provides lubrication of the cornea and also contains antimicrobial substances. Use of muscle relaxants and sedation in patients on ventilator contributes to inadequate lid closure by decreasing the tonic contraction of ocular muscles. The constant exposure of the ocular surface put the ICU patients at high risk of developing exposure keratopathy. This condition predisposes to microbial keratitis, which may lead to corneal perforation and visual loss. Previous studies have reported that about 40% of patients develop exposure keratopathy during their stay in the ICU. The use of ocular lubricants and securing tape over the eyelids in intubated patients can prevented it. Also, the use of swimming goggles and regular moistening of eyelids providing a moisture chamber could be more effective."} +{"text": "Caulerpa cylindracea is a non-autochthonous and invasive species that is severely affecting the native communities in the Mediterranean Sea. 
Recent researches show that the native edible fish Diplodus sargus actively feeds on this alga and cellular and physiological alterations have been related to the novel alimentary habits. The complex effects of such a trophic exposure to the invasive pest are still poorly understood. Here we report on the metabolic profiles of plasma from D. sargus individuals exposed to C. cylindracea along the southern Italian coast, using 1H NMR spectroscopy and multivariate analysis . Fish were sampled in two seasonal periods from three different locations, each characterized by a different degree of algal abundance. The levels of the algal bisindole alkaloid caulerpin, which is accumulated in the fish tissues, was used as an indicator of the trophic exposure to the seaweed and related to the plasma metabolic profiles. The profiles appeared clearly influenced by the sampling period beside the content of caulerpin, while the analyses also supported a moderate alteration of lipid and choline metabolism related to the Caulerpa-based diet.The green alga Caulerpa cylindracea Sonder, previously identified as Caulerpa racemosa var. cylindracea (Sonder), due to the remarkable change induced in the invaded area. In the last decades, the invasion of the non-native C. cylindracea was increasingly at the center of hot discussions for both scientific and general public communities, because of its rapid and broad spread, which is inducing profound changes in the bottom of the sea landscape . R2 is a cross validation parameter and defined as the proportion of variance in the data explained by the models and indicates goodness of fit; Q2 is defined as the proportion of variance in the data predictable by the model and indicates predictability, which is extracted according to the internal cross-validation default method of SIMCA-P software; p[CV-ANOVA] provides a p-value indicating the level of significance of group separation in OPLS analyses [The orthogonal partial least squares discriminant technique (OPLS-DA) is the most recently used for the discrimination of samples with different characteristics as it has been shown in several recent studies of metabolomics . OPLS-DAanalyses ,47,48,49analyses . For eacD. sargus might be the effect either of a single dietary compound contained by C. cylindracea or of a combination of bioactive compounds acting in synchrony. While the assessment of the relative contribution of each chemical component of the alga to the observed metabolic alteration needs further interdisciplinary research, the much broader approach that characterizes the present report provides a useful tool both to measure the alterations that are occurring in the fish as a consequence of its alimentary behavior and for better management of marine biological invasion in the Mediterranean Sea. Overall, this study set the stage for the rapid assessment of the metabolic status of D. sargus feeding on C. cylindracea providing new methodological insights for the understanding of the complex effects affecting natural systems as results of biological invasions.The present work represents the first study where a metabolomic approach has been used to analyze the blood plasma of fish trophically exposed to an invasive pest. The observed variation of the metabolic profiles of the plasma of"} +{"text": "In recent years the human microbiome has become a growing area of research and it is becoming clear that the microbiome of humans plays an important role for human health. 
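As a pointer back to the OPLS-DA validation metrics (R2, Q2 and CV-ANOVA) defined in the metabolomics passage above: scikit-learn ships no OPLS-DA implementation, so the sketch below uses plain PLS regression on a dummy-coded class label simply to show how a fitted R2 and a cross-validated Q2 are commonly obtained; the simulated NMR-bucket matrix and every parameter value are hypothetical and do not reproduce the published analysis.

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Hypothetical data: 40 plasma samples x 120 NMR buckets, two exposure groups.
n_samples, n_bins = 40, 120
y = np.repeat([0.0, 1.0], n_samples // 2)   # dummy-coded class membership
X = rng.normal(size=(n_samples, n_bins))
X[y == 1, :10] += 0.8                       # a few discriminating buckets
perm = rng.permutation(n_samples)           # shuffle so cross-validation folds mix groups
X, y = X[perm], y[perm]

model = PLSRegression(n_components=2)
model.fit(X, y)
r2 = r2_score(y, model.predict(X))          # goodness of fit (R2)

y_cv = cross_val_predict(PLSRegression(n_components=2), X, y, cv=7)
q2 = r2_score(y, y_cv)                      # predictive ability (Q2)

print(f"R2 = {r2:.2f}, Q2 = {q2:.2f}")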
Extensive research is now going into cataloging and annotating the functional role of the human microbiome. The ability to explore and describe the microbiome of any species has become possible due to new methods for sequencing. These techniques allow comprehensive surveys of the composition of the microbiome of nonmodel organisms of which relatively little is known. Some attention has been paid to the microbiome of insect species including important vectors of pathogens of human and veterinary importance, agricultural pests, and model species. Together these studies suggest that the microbiome of insects is highly dependent on the environment, species, and populations and affects the fitness of species. These fitness effects can have important implications for the conservation and management of species and populations. Further, these results are important for our understanding of invasion of nonnative species, responses to pathogens, and responses to chemicals and global climate change in the present and future. The microbiomes, including bacteria, fungi, and viruses, live within and upon all organisms and have become a growing area of research. With the advances of new technologies it is now possible to entangle complex microbial communities found across animal kingdoms.Recent advances in molecular biology have provided new possibilities to investigate complex microbial communities and it has become clear that the vast majority of bacteria living in/on other animals cannot be cultured. It is now commonly accepted that at least 80% of the total bacterial species in the human gut cannot yet be cultured , 2. de novo clustering of specific regions of sequences. Functional profiling of metagenomics is more challenging since major parts of the metagenomic data remain insufficiently characterized and frequently samples are contaminated by host DNA or traces from the diet. Compared to both culture-dependent and more traditional molecular approaches such as sequencing of clone libraries and DGGE, amplicon sequencing approaches allow a more in depth analysis of the complete microbiome and are less restricted to the number of samples to be investigated. For further technical details see, for example, Caporaso et al. [High-throughput DNA sequencing approaches provide an attractive and cost-effective approach to investigate the composition and functions of the host microbiome. The culture-independent analysis of the host microbiome can be obtained by either metagenomic approaches or amplicon sequencing using specific marker genes. Amplicon sequencing provides a targeted version of metagenomics with a specific genetic region shared by the community members of interest. The amplified fragments derive from universal primers and are usually assumed to produce sequence read abundance that reflects the genetic diversity in the studied sample and hence sequence read abundance should reflect the genetic diversity in the studied sample. The amplified fragment typically contains phylogenetic or functional information, such as the 16S ribosomal RNA gene. 16S rRNA gene sequences are well studied and provide excellent tools for microbial community analysis , but otho et al. 
.The Human Microbiome Project (HMP) was initIn recent years the microbiome of a number of vertebrate nonhuman species has been sequenced including livestock , 13 and Insects are the most diverse and abundant groups of animals on earth and haveThe microbiome of other groups of invertebrates has also been established although for a limited number of species. Studies have compared the microbiome of different species of marine invertebrates with or without photosynthetic symbionts including five families of marine invertebrates . Marine The microbes of soil invertebrates have received some attention. The gut microbes of soil animals play an indispensable role in the digestion of food and are of ecological importance in the global carbon cycle. Recently, research reported that like that of terrestrial insects some soil invertebrates such as collembolans, earthworms, and nematodes contain a rich microbiome and putative symbionts \u201330. FurtTo begin with all microorganisms were seen as pathogens causing infectious diseases to the host. The host immune system of eukaryotes was built to eliminate these intruders, but at the same time tolerating its own molecules. However, we now know that the association between eukaryotic hosts and the microorganisms is far more complex. With the advances in molecular biology, such as next generation sequencing, it is now possible more specifically to address the association between a host and its microbiome. In animals the association between the host and its microbiome can take many forms and includes symbiotic and pathogenic associations . SymbiotThe number of studies addressing the role of the microbiome on animal health is limited and almost entirely restricted to human studies. However, a large number of studies have addressed the role of single bacterial symbionts on animal fitness, where especially insect species have received much attention \u201339. Ther Drosophila melanogaster provide a promising model system to address some of these issues and for this species it is possible to rear axenic flies. Next generation sequencing approaches can provide an in-depth analysis of the functional roles of specific groups of bacteria and the entire microbiome on the fitness of the host. Results on D. melanogaster have shown how the microbiota affects developmental rate and changes metabolic rates and carbohydrate allocation under laboratory conditions [Some invertebrates lack the complexity and diversity of associations with microorganisms. Such insect model systems allow investigations that aim to understand the contribution of specific bacteria and the entire microbiome towards host physiological processes. For example,nditions . Similarnditions and thatnditions . Hypothenditions .Recent studies have highlighted the importance of the microbiome not only in shaping the immune system but also in the context of host pathogen transmission processes . AnThe recent interest in the importance of the microbiome on tolerance to environmental perturbations , 39 has Changes in the microbial community have been shown to affect fitness of humans and other species as described above. However, the implications of changes in the microbiome for animal conservation have only been addressed in a limited number of studies even though the implications are many.Several studies using next generation sequencing approaches have addressed the comparison of the microbiome of laboratory populations or individuals kept in captivity with that of wild animals , 34, 44 2S. 
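Captive-versus-wild comparisons such as those cited above typically reduce to summary statistics computed on OTU count tables. The sketch below shows two of the most common ones, Shannon diversity and Bray-Curtis dissimilarity, on a made-up two-sample table; it illustrates only the arithmetic, not the pipeline of any particular study.

import numpy as np
from scipy.spatial.distance import braycurtis

# Hypothetical OTU count table: rows are samples, columns are OTUs.
counts = np.array([
    [120.0, 30.0, 5.0, 0.0, 45.0],   # "wild" individual
    [200.0, 2.0, 0.0, 60.0, 10.0],   # "captive" individual
])

def shannon(row):
    # Shannon diversity index on relative abundances, ignoring zero counts.
    p = row / row.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

for name, row in zip(["wild", "captive"], counts):
    print(f"Shannon diversity ({name}): {shannon(row):.2f}")

# Bray-Curtis dissimilarity between the two relative-abundance profiles.
rel = counts / counts.sum(axis=1, keepdims=True)
print(f"Bray-Curtis dissimilarity: {braycurtis(rel[0], rel[1]):.2f}")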
This strongly suggests that habitat fragmentation will affect not only the microbiome of the host but also host fitness.It is essential that we address the importance of the microbiome of other species rather than humans and the impact it has on their health status. For larger species such as primates this can be difficult and often only correlative evidence exists or can be achieved through a functional annotation of the microbiome , 17. ForSimilarly, keeping animals under captivity and maintaining breeding populations are likely to affect animal microbiomes. This is often undertaken in order to protect or increase abundance of rare species aiming at releasing species into the wild again. However, if the microbiomes of the individuals being released are affected, this is likely also to affect fitness compared to that of wild individuals and will subsequently reduce the probability of successful reintroduction into the wild. This is supported by studies on humans and mice where results have shown that obesity causes shifts in gut microbiome composition , 56. SimIt has been suggested that engineering microbiomes can be used to improve plant and animal health . How thiInbreeding has been suggested to affect the demography and persistence of natural populations and play an important role in conservation biology . Recent Microbiome analysis of wild populations has shown that the microbiome is dependent on the surrounding habitats as discussed above. This information might be used as a sensitive screening tool to establish populations affected by habitat fragmentation and possThe microbiome can provide protection of the host from pathogens either through stimulation of the immune system or through competitive exclusion. However, when animals are compromised or exposed to unfavorable environmental conditions the symbionts themselves can act as opportunistic pathogens , 27 or n Achatina fulica results showed a highly diverse microbiome and functional analysis revealed a variety of microbial genes encoding enzymes, which is in agreement with the wide-ranging diet of this species [ Aphis glycines from populations of native and invasive regions showed no differences [Differences in microbiomes may affect invasions. For example, the interactions between native and nonnative of closely related species may be affected by the transmission of bacteria. This also appears to be associated with another emerging type of invasion, the transmission of infectious diseases of wild animals to humans . Such tr species . Interesferences . Future ferences .Recent advances in molecular biology have given new possibilities to establish complex microbial communities and it has become clear that the vast majority of bacteria living in/on other animals cannot be cultured. One of the most common methods to describe complex microbiomes is the sequencing of the bacterial marker 16S ribosomal RNA (16S rRNA) genes through amplicon sequencing. Studies have shown that the microbiome plays a major role in human health, and in recent years the microbiomes of an increasing number of nonhuman species have been investigated. However, the number of studies addressing the role of the microbiome on animal health still remains limited. Some studies have discussed the role of the microbiome on nutritional supplementation, tolerance to environmental perturbations, and maintenance and development of the immune system. 
Thus the implications of changes in the microbiome for animal conservation are many although a limited number of studies have addressed this. We suggest that a number of factors relevant in conservation biology could affect the microbiome of animals including inbreeding, habitat fragmentation, change in climate, and effect of keeping animals in captivity. Changes in these factors are thus also likely to affect the fitness of the host both directly and indirectly. With the development of next generation sequencing and functional analysis of microbiomes it has become possible more specifically to test direct hypothesis on the importance of the microbiome in conservation biology."} +{"text": "The capability of improving performance on visual tasks with practice has been a matter of intense investigation during the last 40 years in patients with vision loss (hemianopia or quadrantanopia) can be also obtained with the so-called vision restoration therapy, an individualized program providing stimulation at the border of the dysfunctional visual field (Poggel et al., Finally, perceptual learning could be useful even for blind people. Blindness often produces an impaired spatial representation in other sensory domains (e.g., Gori et al., To sum up the findings of the present Research Topic, the studies collected here provide the frontline of behavioral and brain stimulation-coupled treatments of a heterogeneous ensemble of visual dysfunctions. Future studies are needed to define the best combination of approaches in order to improve vision with the shortest and most efficacious training, increasing patients' compliance and tailoring the training specifically for each patients' needs.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "In Response: The letter by Rebelo et al. (Concerning the quality of the standard clinical diagnosis, both thin blood film analysis and rapid diagnostic test results were obtained in a certified US clinical laboratory and returned consistent data. The lack of re-evaluation of the patient and the diagnostic timing are indeed limitations but were caused by the clinical restrictions. Our goal in the 2015 article (The criticism of Rebelo et al. might have been fueled by their own limited detection of hemozoin with flow cytometry and microscopy (In conclusion, we agree with the need for optimization of the technology and additional testing. We are currently developing and testing our technology in a malaria-endemic country. Nevertheless, the letter by Rebello et al. does not alter the fact that our novel noninvasive malaria diagnostic technology worked in a human."} +{"text": "With the increasing prevalence of autism spectrum disorders (ASD), the pace of research aimed at understanding the neurobiology of this complex neurodevelopmental disorder has accelerated. Neuroimaging and postmortem studies have provided evidence for disruptions in functional and structural connectivity in the brains of individuals with ASD , which detect correlations of the blood oxygen level dependent (BOLD) signal, provided first findings and EEG enable the measurement of functional connectivity with high temporal resolution and medium level spatial resolution. An additional advantage of these techniques is that they are not susceptible to motion artifacts that would confound connectivity results in the same way as fcMRI. 
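To make the fcMRI approach described above concrete, the following minimal sketch computes a functional connectivity matrix in the usual way, as pairwise Pearson correlations between region-of-interest (ROI) averaged BOLD time series, followed by the Fisher z-transform often applied before group-level statistics. The data, array sizes, and variable names are illustrative assumptions rather than the pipeline of any study discussed here.

```python
# Illustrative only: random numbers stand in for preprocessed ROI time series.
import numpy as np

rng = np.random.default_rng(0)
n_timepoints, n_rois = 240, 90                      # e.g., 240 volumes, 90 atlas regions
bold = rng.standard_normal((n_timepoints, n_rois))  # ROI-averaged BOLD signals

# Functional connectivity: ROI-by-ROI Pearson correlation of the BOLD time courses.
fc = np.corrcoef(bold, rowvar=False)                # shape (n_rois, n_rois), values in [-1, 1]

# Fisher z-transform is commonly applied before group-level statistics.
fc_z = np.arctanh(np.clip(fc, -0.999999, 0.999999))
np.fill_diagonal(fc_z, 0.0)

print(fc.shape, float(np.abs(fc_z).max()))
```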
Using EEG, Coben and colleagues propose While functional connectivity studies of autism continue to reveal nuanced patterns of hypo- and hyper-connectivity associated with the disorder, studies of white matter connectivity using diffusion tensor imaging (DTI) and tractography provide complementary metrics. However, only few studies to date have combined functional and anatomical connectivity findings are compromised in children with ASD, suggesting that in vivo diffusion weighted MRI can generate complementary evidence in support of cellular findings from the postmortem literature.In a comprehensive review, McFadden and Minshew examine Returning to the basic questions regarding brain network connectivity in ASD raised in the initial announcement, the contributions to this Research Topic underline the need for differentiated interpretations of functional connectivity findings that consider the specificity of networks and cognitive states under investigation and the exact preprocessing pipelines and analysis tools implemented. The need for electrophysiological studies that provide a window onto the dynamic aspects of network connectivity is further emphasized by several contributions, as is the need for multimodal investigations that combine assays of functional and anatomical connectivity. The developmental trajectory of brain connectivity and the classification potential of different connectivity measures are important topics that are investigated by different studies. Finally, several articles contribute to a better understanding of the links between cellular abnormalities in autistic cortex and disturbances in network connectivity.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Furthermore, the ability to isolate and expand from patients various types of muscle progenitor cells capable of committing to the myogenic lineage provides the opportunity to establish cell lines that can be used for transplantation following ex vivo manipulation and expansion. The purpose of this article is to provide a perspective on approaches aimed at correcting the genetic defect using gene editing strategies and currently under development for the treatment of Duchenne muscular dystrophy (DMD), the most sever of the neuromuscular disorders. Emphasis will be placed on describing the potential of using the patient own stem cell as source of transplantation and the challenges that gene editing technologies face in the field of regenerative biology.The progressive loss of muscle mass characteristic of many muscular dystrophies impairs the efficacy of most of the gene and molecular therapies currently being pursued for the treatment of those disorders. It is becoming increasingly evident that a therapeutic application, to be effective, needs to target not only mature myofibers, but also muscle progenitors cells or muscle stem cells able to form new muscle tissue and to restore myofibers lost as the result of the diseases or during normal homeostasis so as to guarantee effective and lost lasting effects. Correction of the genetic defect using oligodeoxynucleotides (ODNs) or engineered nucleases holds great potential for the treatment of many of the musculoskeletal disorders. 
The encouraging results obtained by studying The discovery of dystrophin as the gene responsible for Duchenne muscular Dystrophy (DMD) has enabled researchers to identify several of the genes linked directly or indirectly to dystrophin and to correlate defects in those genes to many of the different forms of muscular dystrophies are probably the most studied. Since their initial identification Mauro, studies mdx mice that have been used as models for DMD as therapeutic vectors. First among those, the success obtained using homologous recombination (HR) technologies, an approach that has been employed extensively to generate animal models to study disease mechanisms made of 68 residues which were originally given the name of chimeraplasts. The vector contained both RNA and DNA residues complementary to the region of the genomic DNA targeted for repair and flanked by 2\u2032-O-methylated RNA residues which were used to increase resistance to RNase H activity Figure . To incrmdx mouse model for DMD were as efficient as chODNs in directing the gene correction process and non-homologous end-joining (NHEJ) mechanisms as one of the mechanisms responsible for the correction induced at the genomic level through evidences that demonstrate that a portion of the ODN becomes integral part of the genomic DNA gene, which encodes SMN. Loss of SMN protein is thought to be responsible for the progressive loss of motor neurons which is paralleled by the progressive muscle wasting characteristic of SMA patients . SMA is an autosomal recessive genetic disorder caused by a genetic defect in the survival motor neuron 1 patients Tang, .There are three major families of engineered nucleases being employed in gene editing approaches: zinc finger nucleases (ZFNs), transcription activator-like effector nucleases (TALENs), and engineered meganuclease (MNs) Figure . A fourtin vitro, the need to use viral or plasmid vectors to ensure high levels of expression of nucleases in the nucleus required to achieve an effect and the risk of off-target mutations that have been associated with their use (as described in more detail below). Nonetheless, the results reported to date have clearly proven the validity of using engineered nucleases for therapeutic purposes.The mechanisms of action of nucleases are common to all system and rely on their ability to create a double-strand break (DSB) which is either repaired by NHEJ, or, in the presence of a donor DNA, can be repaired by HR Figure . SeveralFokI restriction endonuclease and have been used to induce targeted deletions of exon 51 to restore the coding reading frame of the dystrophin gene protein of the HIV (Frankel and Pabo, The main drawback of directly delivering systemically ODNs or nucleases into muscles is represented by the inability to control the repair process once the therapeutic agent reaches it targeted stem cell. As a result, issues of toxicity, off-target effects, and low frequencies of gene repair may limit the beneficial effects achieved. Studies aimed at further refine the efficacy and specificity of the repair process mediated by ODNs or nucleases is likely to have important implications for the success of gene editing approaches.ex vivo or systemic administration of gene editing tools in situ, other factors may hamper the efficacy and stability of the repair process. Considerations should be given to possible immune response toward the protein being restored as the result of the therapeutic application. 
Preconditioning of patients using immunosuppressive reagent as well as administration of chemotherapeutic drugs that are toxic to proliferating cells may be necessary to ensure efficient cell engraftment and rapid clonogenic growth of the transplanted cells.Independently from the approach used to correct the genetic defect and whether restoration of the missing protein is achieved through delivery of genetically modified cells The past decade or so has seen an exponential growth in the development of therapeutic applications for muscle disorders specifically designed to target stem cells. Clinical trials are currently undergoing to test the feasibility and efficacy of restoring dystrophin expression in skeletal muscles of DMD patients following systemic administration of mesoangioblasts highlighting the fact that the field is rapidly advancing toward clinical applications for this disease. Approaches aimed at using the patient own stem cells as source for the transplantation procedures has clear advantages over those using heterologous sources of stem cells. As such, it is likely that gene editing approaches will become integral part of future applications to treat muscle disorders using genetically modified cells.ex vivo will require to refine culturing techniques to ensure that, once explanted, muscle stem cells can be efficiently propagated in vitro while maintaining maximal regenerative potential. Furthermore, a better understanding of the mechanisms that regulate stem-cell properties will help redefine and select a specific population of cells that is safe to use in patients without compromising the beneficial effects that can be achieved using the approach. Along the same line, the development of new delivery systems or vectors capable of targeting muscle stem cells in situ will be a key to the optimization of gene editing strategies. Ultimately, a key component of preclinical and clinical studies will remain the efficacy and safety of the approach being employed. The trials currently under way for muscle disorders as well as other genetic diseases and the clinical trials that are planned to start within the next few years will be instrumental in determining the key parameters necessary to achieve sustained effects in patients and to ensure the safety and efficacy of the approach being employed. Despite the early stages of gene editing approaches aimed at targeting and correcting stem cells for the treatment of muscle disorders, the results obtained to date are encouraging. Collaborations among different laboratories interested in pursuing these technologies for the treatment of inherited genetic disease affecting muscle could result in advancing gene editing strategies more rapidly and more efficiently into the clinic.Additional parameters will have to be taken into account and defined before these approaches can enter into the clinic. For instance, gene editing strategies targeting stem cells The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "The changing temperature and rainfall patterns associated with climate change are expected to alter the distribution of environmental suitability for malaria transmission in West Africa.We use a mechanistic model of disease transmission to investigate the effects of climate change on village scale hydrology, entomology, and disease transmission. 
This highly detailed model explicitly simulates water pools that serve as mosquito breeding sites, the life cycle of individual mosquito agents, and the transmission of the malaria parasite between human agents.We simulate current malaria conditions ranging from the very low transmission region bordering the Sahara to the higher transmission Savanna zones, focusing on areas with high sensitivity to increases in vectorial capacity. We then consider predictions of future climate, and assess the impact these changes would on malaria transmission.The use of this mechanistic model allows us to translate projected changes in temperature and rainfall into changes in vectorial capacity and malaria transmission rates."} +{"text": "Migraine headaches and undesirable quality of sleep in patients with multiple sclerosis are very common.The aim of the present study is the effectiveness of sleep health training on improving sleep quality, and reduction of the symptoms of migraine headaches in individuals with multiple sclerosis.Therefore, to do this, 60 patients with MS peered selected and randomly put into two groups of experimental and control. They answered Pittsburg quality of sleep of the Najarian symptoms of migraine headache. Experimental group took part for four sessions of Sleep health training session. After completing the sessions both answered to two tests again.The results showed that sleep health training on improving the quality of sleep and reduction the symptoms of migraine headache has been effective in experimental group.Therefore, sleep health training to improve the quality of sleep and reduce the symptoms of migraine headache sufferers of MS, can be conducted along with other pharmaceutical and medical treatments.No conflict of interest."} +{"text": "The appropriate function of the nervous system relies on precise patterns of connectivity among hundreds to billions of neurons across different biological systems. Evolutionarily conserved patterns of neural circuit organization and connectivity between morphologically and functionally diverse sets of neurons emerge from a remarkably robust set of genetic blueprints, uniquely defining circuits responsible for planning and execution of behavioral repertoires (Arenkiel et al., This Research Topic comprises a wide variety of articles contributing to current views and understanding of different neural circuits, how they are organized in neural networks, and what are the functional outputs of this organization. Additionally, pioneering researchers in the field review novel high-throughput tools and analytical approaches, further describing how these methods have evolved to better explore neural circuits at different levels, covering a wide spectrum of analyses that range from the study of big volumes of brain tissue, to the functional properties of a given network.The Research Topic eBook is organized into three chapters that cover different aspects of our current knowledge of neural circuits. In chapter 1 the reader will find articles related to the architecture and structural definition of neural circuits. In this chapter Li et al. 
use a coTogether, this Research Topic brings to the readers not only new neurobiological data and novel analytical tools, but also offers new perspectives about the way we think about neural circuits and networks, giving rise to important insights to be considered when exploring structural and functional features of micro- and macrocircuits.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "We report a rare case of 50-year-old Moroccan woman with local recurrence of a subcutaneous hydatid cyst in proximity to the medial surface of the tibia and another cyst at the tibialis posterior muscle in the absence of liver, lung und spleen involvement. The first surgery was done in another hospital three years ago; no adjuvant treatment was performed after surgery. Recurrence was diagnosed according to the MRI appearance, serological and pathological findings. The patient underwent complete excision of the subcutaneous cyst with two centimeters of the medial gastrocnemius muscle; the tibialis posterior muscle cyst was intraoperatively drained and irrigated with scolicidal agent as it was next to the posterior tibial pedicle. A periopertive anthelmintic chemotherapy was administered. Two years after the patient showed no recurrence. This case report and literature review describe an approach to the diagnosis and management of this pathological entity. Echinococcosis has its highest prevalence in countries, where the common intermediate hosts, sheep and cattle, are raised , 2. The A 50 year old Moroccan female patient of rural origin presented in Avicenna hospital Rabat-Morocco with history of pain and swelling of the medial aspect of the right leg for two years. The patient had been operated, in another hospital, three years earlier for echinococcosis of the subcutaneous tissue of the right leg with no adjuvant treatment. No history of trauma, fever and weight loss was reported. A painful mass next theRecurrent disease is defined as the appearance of new active cysts after therapy. It occurs after surgical or radiologic intervention and manifests as reappearance of live cysts at the site of a previously treated cyst or the appearance of new disease resulting from procedure-related spillage . The reaDetermining the ideal therapeutic approach for a recurrent subcutaneous and muscular hydatid cyst can be quite challenging for the surgeon. Moreover, the rarity of the disease renders the decision making on the favorable treatment quite difficult. Nevertheless, once the diagnosis is established, the surgeon should consider performing a radical procedure with periopertive administration of anthelmathic chemotherapy aiming in minimizing the possibility of a recurrence. A long term follow-up is required in all cases."} +{"text": "This e-book is the culmination of countless hours of meticulous work by global scientists. We would like to thank the researchers for their great contributions to this hot topic. The combination of these studies reflects the importance of the topic amongst researchers and practitioners and the wide interest from numerous laboratories around the world. The contributions include a variety of formats including five original investigations, three review articles, one opinion article and a hypothesis and theory article. 
Notably, these contributions included both human and animal models that encompassed a range of techniques from molecular mechanisms to real life interventions thus reinforcing the translational approach for the understanding of cardiovascular responses to stress.The three review articles (Huang et al., Following the review articles, two interesting papers addressed novel and important aspects for stress management therapies. The opinion article by Stults-Kolehmainen highlighAs mentioned previously, the variety of original research investigations included within this topic reinforced the importance of research translation. The cross-sectional study by Childs and de Wit reportedOther important responses during different sources of stress were also included in this e-book. The study of Rauber et al. was the The study of Franklin et al. documentFinally, the study of Sasse et al. presenteThis e-book has taken the initial action to integrate current research findings to stimulate further discussion and research. We would like to thank the authors for their significant contributions and the many reviewers who critiqued and improved the overall topic. Future studies will clarify the importance of physical fitness, exercise and PA to regulate cardiovascular responses during stress and such benefits for cardiovascular health.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Ligularia, an important genus of the Compositae family, has captured the interest of natural product chemists for years. Phytochemical investigations on the title genus have led to isolation of hundreds of secondary metabolites with various skeletons. Herein, we summarized the chemical constituents of this genus and their biological activities over the past few decades."} +{"text": "Throughout human history, complementary and alternative medicine in the form of folk medicine has emerged and flourished in every civilization, tribe, and continent. Some forms evolved to become traditional medicine and some disappeared to be forgotten, while others have been labeled untraditional medicine and are now regarded as complementary and alternative medicine. In the past half century, an increasing number of patients and health care providers in the West have become dissatisfied with aspects of traditional Western medicine and have turned their attention to these branches of untraditional medicine. The term integrative or integrated medicine was born recently as the popularity grew in incorporating complementary medicine to reinforce gaps to better fulfill the purpose of traditional medicine.Complementary and alternative medicine includes many branches such as herbal medicine which increasingly appears in the form of nutritional supplements to elude increasing governmental regulation as demand for these products grow. In an analysis by the International Trade Center that spanned 2010\u20132013, it was estimated that global medicinal plant production was $50 billion and is growing at a rate of almost 16% annually. The increasing use of various forms of traditional herbal medicine in combatting modern illnesses, particularly the dangerous side effects of pharmaceutical drugs, has proven to be valuable. 
However, the absence of proper warning labels concerning drug-herb interaction causes an alarming number of emergency clinic cases due to the unwanted consequences of some of these interactions.Impact of Chinese herbal medicine on American society and health care system: perspective and concern\u201d reflected these issues and concerns. Also included are articles highlighting research into ginkgo biloba and cancer-related herbal research.Acupuncture is one of these aforementioned branches that has made major inroads into Western medicine. It, along with the increased interest and research in herbal medicine, is likely the most researched branch of alternative medicine in the West. Acupuncture has been recognized for its healing value by the National Institutes of Health in 1997. The subsequent creation of the National Institute of Complementary and Alternative Medicine within the NIH in the United States and the founding of European Congress of Integrative Medicine has promoted research into these various overlooked disciplines. Understanding the value and discovering the merits of each discipline using modern Western scientific methodology is integral in trying to incorporate desirable aspects into traditional medicine. This special issue reviewed and accepted merited articles ranging in topics from the current dilemma of Eastern medicine in the West to the problem of government oversight in the field of herbs that are frequently and misguidedly marketed as nutritional supplements. An article included in this special issue titled \u201cNoting all these development, we turn our attention to a systematic way of doing traditional Chinese medicine research. All articles published in this special issue underscore the positive trend of returning to natural approaches for our health care and emphasize better treatment for all types of human sickness.There were three research articles, in this special issue, on the mechanisms of immune system and apoptosis for the therapeutic studies of antitumor activities. Both in vitro herbal drugs possess an enormous potential for the cure of certain types of cancer diseases.Anti-inflammatory effects of 81 Chinese herb extracts and their correlation with the characteristics of traditional Chinese medicine\u201d which suggests that herbs with pungent flavors be considered the drugs of choice due to their effective anti-inflammatory agents which can be evaluated by their effects on nitrogen oxide (NO) production and cell growth in LPS/IFN\u03b3-costimulated murine macrophage RAW264.7 cells. This discovery could be used as one of the criteria to select different Chinese herbs for anti-inflammatory purposes. Also included in this special issue is an intensive study on the effect of Cryptotanshinone, extracted from the Chinese herb Dan Shen on reversing the reproductive and metabolic disturbances in polycystic ovary syndrome (PCOS) in rats. This study and its analysis into the possible regulatory mechanism would validate the clinical efficacy of this particular herb for the treatment of PCOS patients.It is important to have well-designed pharmaceutical studies to help explain the millennia old theory of Chinese herbology and the mechanism, pharmacodynamics, pharmacokinetics, and pharmacognostics that elucidate the efficacy of Chinese herbs and unlock its century-old mysteries. The traditional clinical application of traditional Chinese medicine has been based on the characteristics of taste, flavor, channel entering, and actions of the herbs. 
This special issue includes an article entitled \u201cThe guest editors of this special issue hope that, through the articles accepted and published, we can bridge the gap between Western and Eastern medicines and bring them closer in order to further understand the human body and to promote the advancement of health care. Western cancer treatments such as radiation and chemotherapy have adverse effect. Eastern medicine could help mitigate those side effects by minimizing the symptoms and reducing dosage requirements when used in conjunction with current Western treatment and therapies. Immunological, enzymatic molecular biology-related research could benefit from studies of effective Eastern medicinal treatments. Studies of the ways in which Western mainstream medicine and technology can be integrated with traditional Eastern medicine are the focus of this special issue.Dominic P. LuDominic P. LuYemeng ChenYemeng ChenLixian XuLixian XuLeo M. LeeLeo M. Lee"} +{"text": "Neuroergonomics at any given moment or how much mental effort it will cost for brain to meet given task demands . Fortunately, the demand for high-metabolic energy of the brain tissue is mainly regulated by complex but adequate energetic substrate delivery via a dense and redundant network of microvessels. Hence, metabolic demands are orchestrated by the blood supply hemodynamic response.The absence of consideration of the neurophysiological mechanisms in Neuroergonomics is certainly due to the difficulty in investigating them. Yet, there are real energy mobilizations that occur within the operator's neurovascular coupling (NVC). Simply, NVC is a tight temporal association of the neuronal activity with regional cerebral blood flow delivery. Understanding the fundamental cellular mechanisms underlying NVC is necessary to measure a dimension of the local brain machinery expenditure at work. The appraisal of the energetic costs required by NVC implies the assessment of mental resources. For instance, when an operator is engaged in a task, the mobilization of the neural pathways needs a synergistic support of massive astrocyte glial cells to fuel neurons and interneurons with oxygen and nutriments furnished by close capillaries.Since the first discoveries by Roy and Sherrington , it has NVC is observable due to changes in neuronal-astroglial and microvasculature activities, which occur in several steps. First, the measurable electrical neuronal activity is accompanied by synaptic neurotransmitter release with a neuronal-astroglial regional cerebral metabolic rate of oxygen consumption, mainly for regional cerebral metabolic rate of glucose demand. Second, this activity induces a cascading pathway involving the production and the release of powerful vasodilator metabolites by neurons and astrocytes and drives a chemical signal up to the vascular smooth muscle and pericytes cells along the microvessels which dilate the microvasculature. Third, the microvessels dilatation significantly modulates the regional cerebral blood activity which greatly exceeds the neuronal-astroglial oxygen requirements, and results in a measurable overabundance of blood flow, hence, a local hyperoxygenation. Yet, the role of NVC as it contributes to the comprehension of the energy mobilization in response to mental resources is not common knowledge. The cellular measures of energy production, delivery, and utilization are crucial to understanding and interpreting NVC activity. How to clearly establish the role of NVC into the operator's brain machinery? 
One possible way would be to associate the level of correlates of NVC while interpreting the degree of task demand. It seems thus fairly possible that an accurate measurement of NVC, spatially and temporally and in terms of amplitude, would be a valuable neurophysiological marker for quantifying changes in brain activation. Although this statement is still reductionist , this approach links the concept of human MWL and mental resources to objective neurophysiological measures for Neuroergonomics purposes.Recent Neuroergonomics research has progressed in neurocognitive or neuroimaging-sensing instrumentation for determining operator states through the measurement of NVC activity associated with the degree of mental processes . fNIRS provides a continuous monitoring of the hemodynamic activity using near-infrared light transmitted between optodes. It infers the changes in the concentrations of oxygenated and deoxygenated hemoglobin in the cortical regions from scattering and absorption properties of light probing beneath the surface of the skull Perrey, . These tOn the other hand, EEG offers a fine temporal resolution (milliseconds) thus enabling detection of brief neuronal processes, but is limited in its capacity for spatial resolution, at least in real time even though dense array EEG permits source propagation localization. EEG uses scalp electrodes to capture weak electrical current fluctuations generated by inhibitory or excitatory postsynaptic potentials of a pool of neurons firing simultaneously in response to a stimulus. The electrophysiological roots of these signals correspond to the summation of the spontaneously and synchronously recruited neuronal population that contributes to the neuronal activity of the superficial layers of the cortex. EEG waves and event-related potentials signals are particularly strong candidates for objective measures of operator's brain activity at the workplace and methods for its assessment in work settings wrote the majority of the manuscript. The other authors have extensively reviewed and revised the manuscript from the first draft before giving final approval of the version to be submitted.This work was funded by the French Research National Agency, the French Defence Procurement Agency (ASTRID), and the AXA Research Fund.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Osprioneides kampto borings were found in bryozoan colonies of Sandbian age from northern Estonia . The Ordovician was a time of great increase in the quantities of hard substrate removed by single trace makers. Increased predation pressure was most likely the driving force behind the infaunalization of larger invertebrates such as the Osprioneides trace makers in the Ordovician. 
It is possible that the Osprioneides borer originated in Baltica or in other paleocontinents outside of North America.The earliest Trypanites reported in Early Cambrian archaeocyathid reefs in Labrador Trypanites and PalaeosabellaPetroxestes), bryozoan etchings , sponge borings (Cicatricula), Sanctum (a cavernous domichnium excavated in bryozoan zoaria by an unknown borer) and GastrochaenolitesThe oldest macroborings in the world are the small simple holes of GastrochaenolitesTrypanites borings are known from brachiopods of the Arenigian Trypanites and Sanctum, in bryozoans of Middle and Upper Ordovician strata of northern Estonia.The bioerosion trace fossils of Ordovician of North America are relatively well studied The aims of this paper are to: 1) determine whether the shafts in large Sandbian bryozoans belong to previously known or a new bioerosional ichnotaxon for the Ordovician; 2) determine the systematic affinity of the trace fossil; 3) discuss the ecology of the trace makers; 4) discuss the paleobiogeographic distribution of the trace fossil; and 4) discuss the occurrence of large borings during the Ordovician Bioerosion Revolution.During the Ordovician, the Baltica paleocontinent migrated from the temperate to the subtropical realm The total thickness of the Ordovician in Estonia varies from 70 to 180 m Mastopora), brachiopods , conulariids, gastropods , ichnofossils , sponges, receptaculitids (Tettragonis), rugosans (Lambelasma), bryozoans, and asaphid trilobites. The Alliku Ditches are located in Harju County near the village of Alliku. Clayey limestones with interlayers of marls are exposed here. The fauna includes algae, brachiopods, bryozoans , echinoderms, gastropods, ostracods, rugosans, receptaculitids and trilobites according to R\u00f5\u00f5musoks The material studied here was collected from the Hirmuse Creek and AlliOsprioneides borings are deposited at the Institute of Geology, Tallinn University of Technology (GIT), Ehitajate tee 5, Tallinn, Estonia, with specimen numbers GIT-398-729, GIT 665-18 and GIT 665-19.No permits were required for the described study, which complied with all relevant Estonian regulations, as our study did not involve collecting protected fossil species. Three described bryozoan specimens with the Trypanites borings occur inside the large boring with oval cross section. The apertures of the large borings occur on both the upper and lower surfaces of the bryozoans .Numerous unbranched, single-entrance, large deep borings with oval cross sections were found in three large trepostome bryozoan colonies , 4, 5, 6Petroxestes known from Late Ordovician bryozoans and hardgrounds of North America Petroxestes the aperture width is much greater than the boring\u2019s depth. In contrast, the depth of the borings in bryozoans is much greater than their apertural width. Unlike Petroxestes, the Sandbian borings examined here have a tapering terminus and somewhat sinuous course. The axial ratio of Petroxestes borings aperture (major axis/minor axis) is also much greater than observed in these borings.The borings in these bryozoans resemble somewhat Osprioneides, which is known from the Silurian of Baltica, Britain and North America Osprioneides kampto because of their similar general morphology. They have a single entrance, an oval cross section, and significant depth similar to Osprioneides kampto. Their straight, curved to somewhat sinuous shape also resembles that of Osprioneides. 
Both Osprioneides and these borings in bryozoans have a tapered to rounded terminus.The other similar large Palaeozoic boring is Osprioneides trace maker was a soft-bodied animal similar to polychaete worms that used chemical means of boring as suggested by Beuck et al. Osprioneides means bivalves were very unlikely to have been the trace makers.Most likely the Osprioneides borings were made post mortem because the growth lamellae of the bryozoan do not deflect around the borings. There are also no signs of skeletal repair by the bryozoans. Several Osprioneides borings truncate other Osprioneides borings that were likely abandoned by the trace maker by that time. Similarly, empty Osprioneides borings were colonized by Trypanites trace makers. This indicates that the Osprioneides borings may have appeared relatively early in the ecological succession. Overturning of the bryozoan zoaria can explain the occurrence of Osprioneides borings apertures on both upper and lower surfaces. There is no sign of encrustation on the walls of the studied Osprioneides borings, suggesting relatively rapid burial of the host bryozoans shortly after the Osprioneides colonization.Osprioneides trace makers were suspension feeders similar to the Trypanites animals due to their stationary life mode Osprioneides comprise stromatoporoids and tabulate corals. This new occurrence of Osprioneides borings in large bryozoans shows that the trace maker possibly selected its substrate only by size of skeleton because the traces are not found in smaller fossils. However, they are not found in any Ordovician hardgrounds that provide more area than do the bryozoan colonies. Wyse Jackson and Key It is likely that Gastrochaenolites from the Early Ordovician of Baltica Petroxestes borings appeared in North America. At the same time the Osprioneides borings described here appeared in Baltica. Thus the Ordovician was also the time of great increase in quantities of hard substrate removed by single trace makers. The biological affinities of Ordovician Gastrochaenolites are not known Petroxestes was almost certainly produced by the facultatively boring bivalve Corallidomus scobinaOsprioneides trace makers, which is suggested by the somewhat sinuous shape of some borings. This indicates that more than one group of animal was involved in the appearance of large bioerosional traces during the Ordovician Bioerosion Revolution. Increased predation pressure Osprioneides trace makers in the Ordovician. On the other hand in echinoids, for example, infaunalization was presumably the result of colonization of unoccupied niche space Morphological diversification was not the only result of the Ordovician Bioerosion Revolution. Most of the large bioerosional traces of the Paleozoic had their earliest appearances in the Ordovician Osprioneides is a relatively rare fossil compared to the abundance of Trypanites in the Silurian of Baltica Osprioneides borings also occur outside of Baltica. They are known from the Llandovery of North America and Ludlow of the Welsh Borderlands Osprioneides is presumably absent in the Ordovician of North America because Ordovician bioerosional trace fossils of North America are relatively well studied Osprioneides trace maker originated in Baltica or elsewhere and migrated to North America in the Silurian. 
This may well be connected to the decreased distance between Baltica and Laurentia (the closing of the Iapetus Ocean) and the loss of provinciality of faunas in the Silurian."} +{"text": "In the process of treating scoliosis essence of treatment is to maintain normal patterns of attitudes through appropriate antigravity muscle tone. Stimuli proper posture pattern in the period in which the child does not perform exercises provide appropriately selected and made corset. The use of dynamic slicing applications - Kinesiology Taping at the surface of the skin also stimulates the child to maintain proper muscle tone shaping correct posture.Evaluated in a group of 40 children diagnosed with idiopathic scoliosis in age from 10 to 15 years residing in the treatment by the Fed at the Centre for Rehabilitation in Zgorzelec. Each child before treatment, the day of admission to the ward had made an assessment method Diers. Then an application of Kinesiology Taping. Used applications ligamentous took the form of V and were used on curves thoracic and lumbar scoliosis. Next, a re-image method Diers assessing mathematical representation of the body surface after the application of Kinesiology Taping.The results obtained after the application of Kinesiology Taping show that the image of body posture changes, which record the images method Diers.Kinesiology Taping techniques are useful in the treatment of idiopathic scoliosis. Alter the tension of the skin and muscles make it easy to maintain the correct posture pattern"} +{"text": "In this article we investigate the heat and mass transfer analysis in mixed convective radiative flow of Jeffrey fluid over a moving surface. The effects of thermal and concentration stratifications are also taken into consideration. Rosseland's approximations are utilized for thermal radiation. The nonlinear boundary layer partial differential equations are converted into nonlinear ordinary differential equations via suitable dimensionless variables. The solutions of nonlinear ordinary differential equations are developed by homotopic procedure. Convergence of homotopic solutions is examined graphically and numerically. Graphical results of dimensionless velocity, temperature and concentration are presented and discussed in detail. Values of the skin-friction coefficient, the local Nusselt and the local Sherwood numbers are analyzed numerically. Temperature and concentration profiles are decreased when the values of thermal and concentration stratifications parameters increase. Larger values of radiation parameter lead to the higher temperature and thicker thermal boundary layer thickness. The boundary layer flow of non-Newtonian fluids gains a special attention of the researchers because of its wide occurrence in the industrial and engineering processes. The most commonly involved fluids in industry and technology are categorized as non-Newtonian. Many of the materials used in biological sciences, chemical and petroleum industries, geophysics etc. are also known as the non-Newtonian fluids. The non-Newtonian fluids are further divided into three main classes namely differential, rate and integral types. The simplest subclass of non-Newtonian fluids is the rate type fluids. The present study involves the Jeffrey fluid model which falls into the category of rate type non-Newtonian fluids. This fluid model exhibits the properties of ratio of relaxation to retardation times and retardation time. This model is very popular amongst the investigators. 
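For context, the extra stress tensor of a Jeffrey fluid is commonly written in the form below (a standard formulation stated here only for orientation; the paper's own equations and notation are not reproduced in this extract):

```latex
\mathbf{S} = \frac{\mu}{1+\lambda_{1}}\left(\mathbf{A}_{1} + \lambda_{2}\,\frac{d\mathbf{A}_{1}}{dt}\right),
\qquad
\mathbf{A}_{1} = \nabla \mathbf{V} + \left(\nabla \mathbf{V}\right)^{\top},
```

where μ is the dynamic viscosity, λ₁ the ratio of relaxation to retardation times, λ₂ the retardation time, d/dt the material time derivative, and V the velocity field; setting λ₁ = λ₂ = 0 recovers the Newtonian stress.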
Few studies regarding Jeffrey fluid model are mentioned in the references The better cooling rate in the manufacturing processes is very essential for the best quality final product. For such processes, a controlled cooling system is required. An electrically polymeric liquid seems to be a good candidate for such applications of polymer and metallurgy because here the flow can be controlled by an applied magnetic field. Further the magnetohydrodynamic (MHD) flows are quite prominent in MHD power generating systems, cooling of nuclear reactors, plasma studies, geothermal energy extraction and many others. Interesting investigations on MHD flows can be seen in the references Influence of stratification is an important aspect in heat and mass transfer analysis. The formation or deposition of the layers is known as the stratification. This phenomenon occurs due to the change in temperature or concentration, or variations in both, or presence of various fluids or different densities. It is quite important and interesting to examine the effects of combined stratifications in mixed convective flow past a surface when heat and mass transfer analysis is performed simultaneously. Investigation of doubly stratified flows is a subject of special attention nowadays because of its broad range of applications in industrial and engineering processes. Few practical examples of these applications include heat rejection into the environment such as rivers, seas and lakes, thermal energy storage systems like solar ponds, mixture in industrial, food and manufacturing processing, density stratification of the atmosphere etc. Having all such applications in view, Hayat et al. Here our main theme is to study the influences of thermal and concentration stratifications in mixed convection flow of Jeffrey fluid over a stretching sheet. Heat and mass transfer characteristics are encountered. Further, we considered the thermal radiation effect. Mathematical modelling is presented subject to boundary layer assumptions and Roseland's approximation. The governing nonlinear flow model is solved and homotopic solutions 0B is applied normal to the flow direction (see We consider the mixed convection flow of an incompressible Jeffrey fluid over a stretching surface. Thermal and concentration stratifications are taken into account in the presence of thermal radiation. The vertical surface has temperature tion see . The effThe subjected boundary conditions are The radiative flux is accounted by employing the Rosseland assumption in the energy equation SettingHere The skin friction coefficient, the local Nusselt number and the local Sherwood number areTo develop the homotopic procedure When variation of The general solutions are derived as follows:The coupled nonlinear ordinary differential equations are solved via homotopy analysis method. The convergence of derived homotopic solutions depend on the suitable values of auxiliary parameters The dimensionless velocity profile The variations in the non-dimensional temperature distribution function The effects of magnetic parameter ness see .The numerical values of We examined the effects of thermal and concentration stratifications in mixed convective radiative flow of Jeffrey fluid in this attempt. 
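As a numerical aside on the solution procedure summarized above: the transformed Jeffrey-fluid equations themselves are not reproduced in this extract, so the sketch below instead solves the classical Newtonian stretching-sheet similarity problem, purely to illustrate the type of nonlinear two-point boundary-value problem that the homotopic (or any other) solver must handle. It is not the paper's system or method, and every name and parameter in it is an illustrative choice.

```python
# Illustrative stand-in: the classical Newtonian stretching-sheet problem,
#   f''' + f*f'' - (f')**2 = 0,  f(0) = 0, f'(0) = 1, f'(eta -> inf) = 0,
# solved with a generic boundary-value solver. This is NOT the Jeffrey-fluid
# system or its homotopy analysis solution; the domain truncation and all
# names below are illustrative choices.
import numpy as np
from scipy.integrate import solve_bvp

eta_max = 10.0                         # truncation of the semi-infinite domain
eta = np.linspace(0.0, eta_max, 201)

def odes(eta, y):
    # y[0] = f, y[1] = f', y[2] = f''  ->  f''' = (f')**2 - f*f''
    f, fp, fpp = y
    return np.vstack([fp, fpp, fp**2 - f * fpp])

def bc(y_wall, y_far):
    # f(0) = 0, f'(0) = 1 (stretching wall), f'(eta_max) ~ 0 (ambient fluid)
    return np.array([y_wall[0], y_wall[1] - 1.0, y_far[1]])

# Initial guess with the correct qualitative decay (here the known exact solution).
y_guess = np.vstack([1.0 - np.exp(-eta), np.exp(-eta), -np.exp(-eta)])

sol = solve_bvp(odes, bc, eta, y_guess)
print("converged:", sol.success, " f''(0) =", float(sol.sol(0.0)[2]))  # exact value: -1
```

For this reduced problem the closed-form solution is f(η) = 1 − e^(−η), so the printed f''(0) should lie very close to −1, giving a quick check on the far-field truncation; a wall derivative of this kind is what typically enters the skin-friction coefficient discussed above.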
The main observations that we found in this investigation are as follows:We have to compute 28th-order of HAM deformations for the convergent solutions.Deborah number The effects of thermal buoyancy parameters on the velocity field An increase in thermal stratification parameter The temperature profile and thermal boundary layer thickness are enhanced when radiation parameter The concentration field and its associated boundary layer thickness are decreasing functions of concentration stratification parameter Numerical values of skin-friction coefficient are increased by increasing The larger values of"} +{"text": "Parascolymia bracherti has been identified as a new species in spite of the dissolved skeleton. In the recent era, Parascolymia like all Lobophylliidae is restricted to the Indo-Pacific region, where it is represented by a single species. The new species proves the genus also in the Miocene Mediterranean reef coral province. A review of the spatio-temporal relationships of fossil corals related to Parascolymia indicates that the genus was probably rooted in the Eastern Atlantic\u2012Western Tethys region during the Paleocene to Eocene and reached the Indo-Pacific region not before the Oligocene. The revealed palaeobiogeographical pattern shows an obvious congruence with that of Acropora and tridacnine bivalves reflecting a gradual equatorwards retreat of the marine biodiversity center parallel to the Cenozoic climate deterioration.Palaeobiogeographical and palaeodiversity patterns of scleractinian reef corals are generally biased due to uncertain taxonomy and a loss of taxonomic characters through dissolution and recrystallization of the skeletal aragonite in shallow marine limestones. Herein, we describe a fossil lobophylliid coral in mouldic preservation from the early middle Miocene Leitha Limestone of the Central Paratethys Sea . By using grey-scale image inversion and silicone rubber casts for the visualization of the original skeletal anatomy and the detection of distinct micromorphological characters Scolymia Haime, 1852, once thought to be cosmopolitan . Th. ThParasdbridge\u201d [13,14,2 Optimum ,63, whic Optimum . This shhat time ,64. Thertruncata in West Parascolymia shows obvious parallels with that of the reef coral Acropora and tridacnine bivalves farquharsoni in the Oligocene of Somali and its association with P. (Parascolymia) vitiensis in the early Miocene of Makran [P. (Parascolymia) used the East African\u2012Arabian bioprovince as stepping stone into the Indo-Polynesian bioprovince and became typical elements of the entire Indo-West Pacific region after the closure of the Tethyan Seaway the late Eocene\u2012early Miocene Arabian hotspot acting as stepping stone for Parascolymia into the Indo-West Pacific; and (3) the early Miocene\u2012recent Indo-Australian Archipelago hotspot in the center of the recent distribution of Parascolymia in the Vienna Basin based on the mould of a single corallum. In order to overcome the mouldic preservation grey-scale inversion images and silicone rubber casts were applied for the identification of original macro- and micromorphological features . The results show that coral moulds potentially prove the chance to study micromorphological characters of reef corals from shallow water limestones where skeletal aragonite is usually dissolved. 
This is important since the highest diversity of zooxanthellate corals occurs in pure carbonate shallow-water environments, but the fossil record of the Scleractinia is significantly biased in favour of well-preserved isolated corals from argillaceous lithologies representing unfavorable habitats of reduced coral diversity [Acanthophyllia ampla, which occurred associated with the new species in the Central Paratethys, is placed in synonymy with Parascolymia. Before this evidence, Parascolymia was considered as strictly Indo-Pacific coral genus. The records from the Central Paratethys occur roughly 7 million years after the complete isolation of the Mediterranean from the Indian Ocean coral faunas by the restriction of the Tethyan Seaway. This finding and a review of the temporal and spatial distribution of other fossil corals related to Parascolymia indicate that the lineage originated in the Eastern Atlantic\u2012Western Tethys region during the Paleocene\u2012Eocene, corresponding to the former center of marine biodiversity. It also reveals an obvious temporal and spatial coincidence between the dispersal of Parascolymia and the displacement of the marine biodiversity hotspot into the present Indo-West Pacific region. Parascolymia is thus an example of a successful transformation of an originally Tethyan element contributing to the present biodiversity in the Indo-West Pacific. The gradual nature of this palaeobiogeographic change implies an important climatic control contrasting the hypothesis of a primarily tectonically driven hopping of geographically distinct biodiversity hotspots in the Cenozoic.iversity . Acantho"} +{"text": "Functional, esthetic and endodontic restoration of a pulpally involved permanent incisor with root dilaceration often presents a daunting clinical challenge. The outcome of conventional treatment modalities like surgical removal of the tooth followed by orthodontic closure of the space is time consuming and esthetically compromising. Even the prosthetic and implantalogical rehabilitation after extraction is not possible until the patient reaches certain age; while the compliance is a problem with the use of removable partial denture in young children. Autoalloplastic anterior tooth transplantation can lead to physical and psychological trauma in a young individual.Thus endo-esthetic management of such teeth helps in maintaining both morphology and esthetics in a growing child until the permanent long lasting prosthetic solution is sought after the complete development of the dentition and jaws. This treatment option for a pulpally involved permanent incisor with root dilaceration involves completing the endodontic treatment in a partially calcified and aberrantly located root canal followed by the use of light transmitting fiber post and core build up using composite resin. One of the sequelae to trauma to the primary dentition is the possibility of damaging permanent successor tooth buds.Intrusive displacement of deciduous teeth is the most frequent cause of sequelae involving teeth of the second dentition, with damage ranging from coronal changes in color and shape to radicular malformation and distinct root anomalies.1The creasing or bending of the crown axis with respect to the root axis, as commonly observed in these cases, is also termed dilaceration. 
This results from the displacement of the crowns of the incisors, usually still without roots, in a vestibular direction while root growth is still progressing in a cranial direction4137The incidence of root dilacerations amounts to approximately 3% of all the damage to permanent teeth following trauma to deciduous teeth. In the literature a distinction is made between vestibular and lateral root bending. Whether the trauma is the sole cause of vestibular root bending is a subject of controversy. In the majority of the cases it has been observed only in the upper central incisor. In many cases the roots are by then malformed and root growth has come to an end prematurely.When such dilacerated teeth gets involved pulpally due to caries, it poses a challenging endo-esthetic problem.A 12-year-old healthy male child with no contributory medical history visited the Dept. of Pediatric and Preventive Dentistry, Dr DY Patil Dental College Pune, with the chief complaint of pain in the upper front tooth since 15 days and history of intermittent dull ache in upper right and left back region for a couple of months.On examination, the upper left lateral incisor showed a gross carious involvement of crown. Upper right and left second premolars showed severe carious destruction of crown rendering them clinically nonrestorable . The upper left lateral incisor was tender on vertical percussion. The child had mandibular prognathism and angles class III molar relation ship .Radiographic examination of the upper anterior region revealed tortuous root canals suggestive of severe root dilacerations of upper left central and lateral incisor. In case of lateral incisor the root tortuousity does not seem to be as complicated as of a central incisor. This is because of the buccopalatal angulation of the root of lateral incisor which is not evident on radiograph very clearly .Patient\u2019s parents revealed history of traumatic episode at the age four which involved avulsion of two deciduous teeth from upper front region. Electric pulp testing of the involved lateral incisor exhibited delayed response compared to the contralateral tooth.Various treatment modalities were explained to the parents of the child with their pros and cons and it was finally decidedto go ahead with the present treatment approach.As the pulp chamber was calcified and root canal was patent only in the middle and apical third, the negotiation of the root canal was not an easy task .Profound anesthesia was achieved by administering 2% Lignocaine with Adrenaline (1:200000) and access cavity preparation was done using round carbide bur.In an attempt to gain an access to radicular portion, a perforation occurred in coronal one third of the distal aspect of root.Upon further exploration, a very unusual location of the root canal was found on the palatal wall of the root .An immediate diagnostic radiograph was made to ensure the correct positioning and the length of the root canal .Upon confirming the canal position, thorough biomechanical preparation was carried out using rotary protaper system and pulp space was obturated with gutta percha points using AH-26 as a root canal sealer .The perforation on coronal one third of the distal aspect of root was sealed using mineral trioxide aggregate .The affected tooth was kept isolated using cotton rolls, 2 \u00d7 2 gauze and high vacuum suction all throughout the procedure.The entire endodontic treatment and sealing of the perforation was carried out in a single visit. 
In the subsequent visit the post space was prepared using Peeso reamers. Considering the length of the root canal, the crown-root ratio and the disadvantages associated with cast posts, a light-transmitting fiber post was used. It was cemented using a dual-cure adhesive cement. The core structure was built around the post using a nanofilled composite resin (Z-350 3M ESPE USA). Finishing and polishing of the composite was done using contouring and polishing discs (Sof-Lex-3M ESPE USA), which ultimately resulted in fairly acceptable esthetics. Considering the dental and chronological age of the child, the decision to give a final extracoronal ceramic restoration was deferred until the eruption of all permanent teeth. Trauma to primary incisors may lead to lasting damage to the permanent teeth. The patient\u2019s age, and the degree and direction of the malposition of the primary teeth, are some of the contributing factors. Possible sequelae to permanent teeth following trauma involving primary teeth include: (1) discoloration and hypoplasia of the enamel, (2) bending and malformation of the anatomic crown and root, (3) hypoplasia of the root, and (4) retarded eruption. The maximum incidence of accidents to primary teeth occurs in children aged 3 to 4 years. For this reason, sequelae of accidents occurring at this time particularly affect root growth. In the presented case as well, the parents gave a history of trauma at the age of 4 years with avulsion of two primary maxillary anterior teeth. In such cases, one of the treatment options usually considered is surgical removal of the tooth. Just as the above-mentioned treatment options have drawbacks, autoalloplastic transplantation also carries the disadvantages of a surgical approach in a young individual. Even extraction of a tooth with a carious crown and a dilacerated root would pose a traumatic problem. Considering the drawbacks associated with the above-mentioned approaches, we opted for an endo-esthetic management approach with the following advantages: (1) less time consuming, (2) least traumatic, physically and psychologically, to the child, (3) more esthetic, (4) more predictable in a young child, and (5) more economical when compared with other treatment modalities. The longevity of this restoration prior to the final permanent restoration, in the form of a ceramic extracoronal restoration after the complete eruption of all the permanent teeth, remains to be seen. Although endodontic treatment with a prefabricated post and core build-up restoration is an established procedure in dentistry, its prognosis needs to be assessed on the basis of long-term periodic recalls."} +{"text": "The anatomical organization of the cerebellum is based on modules, defined by their projection to a specific cerebellar or vestibular nucleus, their climbing fiber input and the physiological and neurochemical properties of their Purkinje cells. Our knowledge of the modular organization of the cerebellum and the sphere of influence of these modules still presents large gaps. Here I will review these gaps against our present anatomical and physiological knowledge of these systems. Advances in our knowledge of the functional anatomy of the cerebellum have raised new questions. In this review I will draw attention to some of what I consider the main flaws in our knowledge of the anatomy and physiology of cerebellar modules and their connections. My main questions are:
Why do the modules in motor regions of the cerebellum alternate between those connected with motor pathways and those connected with wide regions of the cerebral cortex?The cerebello-rubral pathway, the cerebello-thalamo-cortical projections and the corticorubral-olivary climbing fiber system seem to be organized as closed loops. What is the function of these loops and of the convergence of cortical and cerebellar nuclear input to the parvocellular red nucleus and other intercallated nuclei at the meso-diencephalic junction?Which are the tractable behaviors with which to evaluate the hypothesis that each Purkinje cell zone constitutes a basic functional unit of the cerebellum Simpson, ?What are the topographical and functional relations between mossy and climbing fibers in the cerebellar cortex?Are mossy and climbing fiber pathways organized according to the same anatomical principles?What are the topographical interrelations of different mossy fiber system in the cerebellum?The cerebellum is known to be organized in a modular fashion. Cerebellar modules consist of one or more Purkinje cell zones that project to a particular cerebellar or vestibular nucleus, their climbing fiber input from a subdivision of the contralateral inferior olive with a collateral projection to the cerebellar target nucleus and reciprocal connections of this target nucleus with the contralateral inferior olive. Seven to nine of these modules originally were distinguished on both sides of the cerebellum in carnivores, rodents and primates Figures .(A) and (D)). Purkinje cell zones, therefore, are either zebrin-positive or negative and lateral (LIP) intraparietal areas and prefrontal areas 9l, 46d, preSMA and the rostral dorsal premotor area can be subdivided into microzones. Microzones are narrow longitudinal strips of Purkinje cells that receive climbing fiber input sharing the same receptive field (Andersson and Oscarsson, Mossy fibers, initially, follow a transverse course through the cerebellar white matter Figure . At reguIn the studies of Voogd et al. and PijpDifferent ideas have been proposed for the functional relations of mossy fiber terminal aggregates and the climbing fiber microzones. Llin\u00e1s hypothesGibson et al. and otheIt has been suggested that the efferent thalamocortical and the cortico-ponto-cerebellar mossy fiber systems are organized as closed loops (Kelly and Strick, Cerebellar-cortical connections and the cortico-cerebellar mossy fiber projections in monkeys have been found to be reciprocally organized (Kelly and Strick, Where individual mossy fibers emit collaterals bilaterally at specific medio-lateral positions, entire mossy fiber systems all terminate in bilaterally distributed discrete aggregates of mossy fiber rosettes Figure . This paAnother feature of the distribution of mossy fibers in motor regions of the cerebellum is their concentric arrangement Figure . CorticoLength and synaptic size of parallel fibers differ for the upper and lower molecular layer (see Van der Want et al. for refeRepairing gaps in our knowledge on cerebellar systems asks for a collaborative effort of anatomists and physiologists. Noninvasive techniques for the tracing of axonal pathways will have to be developed to collect information in non-human primates. MRI technology will have to be improved to visualize climbing fiber paths and Purkinje cell activity. 
The alternation of Purkinje cell zones receiving peripheral and cortical climbing fiber input, and the contribution of the mutiple narrow Purkinje cell zones to cerebellar function should be evaluated in suitable animal models. But even when we know the precise interrelations of different cortical areas and brainstem centers with the cerebellum, the contribution of the cerebellum to the information processing in these structures remains incompletely known.The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Our ICU is a general mixed ICU with 9 beds in a trauma centre. We admit over 300 patients/year. We have started using thromboelastometry (ROTEM) in 2009. Thromboelastometry is used to guide the therapy of massive bleeding and therapy of coagulopathy in ICU patients. The usage of thromboelastometry has been increasing since 2009 and in January 2014 we have implemented POC thromboelastometry guided factor concentrate based coagulation algorithm for the treatment of coagulopathy in massive bleeding [Analysis of transfusion requirements of packed red blood cells (PRBC), plasma (FFP), platelet concentrates and the usage of fibrinogen and prothrombin complex concentrate (PCC) in ICU patients before and after the introduction of thromboelastometry guided factor concentrate based coagulation algorithm for the treatment of coagulopathy in massive bleeding and its influence on cost of therapy.Retrospective analysis of the utilisation of blood products and fibrinogen and PCC before and after the introduction of thromboelastometry guided factor concentrate based coagulation algorithm.The usage of all allogenic blood products has decreased in our unit after the introduction of thromboelastometry guided factor concentrate based coagulation algorithm by 46,7%.The reduction of PRBC was 16,6%, platelet concentrate usage decreased by 43,6 % and FFP usage decreased by 81,8%.The usage of fibrinogen and PCC has increased after the implementation of algorithm.The cost of therapy of bleeding and coagulopathy in our ICU has not significantly changed after the implementation of algorithm.We have achieved a major reduction in allogenic blood products after the introduction of a thromboelastometry guided factor concentrate based coagulation algorithm in our intensive care unit. The usage of fibrinogen and PCC has increased, but the cost has not significantly changed. Implementation of thromboelastometry guided factor concentrate based coagulation algorithm reduces the exposure of intensive care patients to allogenic blood products and does not increase cost of therapy of bleeding and coagulopathy.Our study was supported by a grant from Scientific Board of Regional Hospital Liberec"} +{"text": "The etiology of most joints' inflammations is unknown. Arthritis in children can have a diversified course. Various parameters useful in diagnosis and treatment of different forms of inflammations of the joints are being researched. Reactive oxygen species in large concentrations are toxic and cause, among others, the phenomenon of polyunsaturated fatty acids lipid peroxidation, which results in the formation of aldehyde compounds. Non enzymatic antioxidants include: lipid peroxides (LHP = LOOH), lipophuscin (LF). An important part of non enzymatic antioxidative defense are sulfhydryl groups-SH. 
Total oxidative status (TOS) is used as an indicator of the overall oxidative potential of cells.The aim of the study was to find how the level of the selected oxidative parameters changes in serum in children with inflammation of the joints. The correlation between the selected oxidative parameters and disease's relapses is also studied.The studied parameters were measured in blood serum of 59 patients with JIA, aged from 2 to 18 years old, hospitalized in the Rheumatology Division of the Department of Pediatrics, Silesian Medical University. The control group consisted of 25 healthy children.See Table The studied parameters of oxidative status differ between children with arthritis and healthy ones. Lipid peroxide levels are dependent on the type of arthritis. The LOOH and lipofuscin concentrations in healthy children as compared with a group of children with JIA differ. Higher values occur in the acute forms of JIA, but the difference is not statistically significant. There is no correlation in the total oxidative status (TOS) and the inflammation's level in the joints.None declared."} +{"text": "The dissemination of cancer cells from the primary tumor to a distant site, known as metastasis, is the main cause of mortality in cancer patients. Metastasis is a very complex cellular process that involves many steps, including the breaching of the basement membrane (BM) to allow the movement of cells through tissues. The BM breach occurs via highly regulated and localized remodeling of the extracellular matrix (ECM), which is mediated by formation of structures, known as invadopodia, and targeted secretion of matrix metalloproteinases (MMPs). Recently, invadopodia have emerged as key cellular structures that regulate the metastasis of many cancers. Furthermore, targeting of various cytoskeletal modulators and MMPs has been shown to play a major role in regulating invadopodia function. Here, we highlight recent findings regarding the regulation of protein targeting during invadopodia formation and function. Although epithelial cancers are one of the leading causes of death, the mechanisms regulating the development and metastasis of carcinomas are not fully understood. Multiple studies suggest that the progression of tumors is dependent on the intrinsic properties of cancer cells, such as their ability to migrate and invade. Furthermore, many extrinsic factors, such as extracellular matrix (ECM) proteins, are also crucial for regulation of cancer metastasis. The ECM proteins that make up the specialized basement membrane (BM) serve as a barrier for cell invasion. However, the BM which is rich in laminin and collagen IV, also provides the substrate for adhesion of the migrating tumor cells. Furthermore, BM degradation results in the release/activation of various growth factors required for angiogenesis, tumor growth, and metastasis Kalluri, . ECM degBM disruption involves a localized degradation of the ECM via the secretion of MMPs and inhibition of the active enzyme by tissue inhibitors of MMPs or tissue inhibitors of metalloproteinases (TIMPs) P2, thus forming the site for invadopodia formation promotes ECM proteolysis kinase have recently emerged as important players in invadopodia formation and maturation (Beaty et al., The final maturation stage of the invadopodia involves targeted delivery and exocytosis of MMP2, MMP9, and MMP14. The appearance of these MMPs is generally considered to be a mark of functional mature invadopodia. 
As the result, much effort has been invested in understanding the regulation of MMP targeting to invadopodia, leading to recent studies defining the machinery governing MMP transport during cancer cell invasion.in vitro (Nakahara et al., MMP14 is a membrane embedded MMP whose extracellular proteolytic activity is regulated by a balance between exocytosis and internalization via clathrin and/or caveolar mediated endocytosis (Remacle et al., 2+ oscillations regulate MMP14 recycling to the plasma membrane (Sun et al., Transport vesicle targeting and fusion with its destination membranes often relies on specific tethering factors that impart specificity to membrane transport. The tethering factors regulating MMP14 targeting remain to be identified. However, Tks4, a scaffolding factor related to Tks5, has been shown to be required for the formation and function of invadopodia (Buschman et al., MMP2 and MMP9 are gelatinases that possess fibronectin type II repeats that allow them to degrade collagen and gelatin. Gelatinolytic degradation can cause the release of signaling molecules from the ECM that aid cell migration and angiogenesis. A lot of effort has been focused on understanding the transport and targeting of gelatinases because they are overexpressed in a variety of tumors and are associated with tumor aggressiveness and poor patient prognosis (Pacheco et al., in vitro, the formation and function of invadopodia in vivo is less well understood due to challenges associated with visualizing and distinguishing these structures in animal models. Cancer invasion usually occurs deep in tissues and these events are highly dynamic and unpredictable making it difficult to visualize invadopodia during primary tumor metastasis. Though the majority of invadopodial studies have been conducted in 2D tissue culture systems, some groups have studied invadopodia formation in 3D matrices as they better simulate the physiological in vivo environment. Such studies of invadopodia in complex 3D matrices have shown that the matrix degrading activity is localized to the base rather than the tip of the invadopodia (Wolf et al., in vivo and provide a good model to study formation of invadopodia.Although there is an increasing amount of evidence demonstrating the existence of invadopodia in vivo experiments that confirm that invadopodia are not just in vitro artifacts. Recently, the chorioallantoic membrane of the chicken embryo was used to visualize the intravascular formation of invadopodia and the extravasation of tumor cells into the stroma (Leong et al., Caenorhabditis elegans, the anchor cell breaches the uterine and vulval basement membranes by making an invadopodium (Hagedorn et al., meltdown forms invadopodia-like protrusions that invade into the stromal tissue in response to cues from the surrounding smooth muscle layer (Seiler et al., Despite the challenges mentioned above, there is some compelling evidence drawn from elegant in vivo and understand how widespread the use of invadopodia by cells is. Many questions regarding the importance of invadopodia in cancer invasion and metastasis still exist. Future studies in the field of invadopodia will need to focus on detection of invadopodia in human cancer samples as well as to identify the role of invadopodia in the different steps of the metastatic cascade. 
The other areas that require focus are the identification of components specific to invadopodia that can be targeted and the development of compounds that can specifically inhibit invadopodia formation and function. These issues will need to be addressed before invadopodia can become a candidate for development of new cancer therapies.Significant advances have been made in understanding the formation and function of invadopodia. However, there are still a lot more unknowns regarding this subcellular structure. While all of the above mentioned studies have helped to confirm the physiological role of invadopodia as a structure used by invasive cells to penetrate the basement membrane, more evidence is required to elucidate the functional role of invadopodia The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "The lateral accessory lobe (LAL) mediates signals from the central complex to the thoracic motor centers. The results obtained from different insects suggest that the LAL is highly relevant to the locomotion. Perhaps due to its deep location and lack of clear anatomical boundaries, few studies have focused on this brain region. Systematic data of LAL interneurons are available in the silkmoth. We here review individual neurons constituting the LAL by comparing the silkmoth and other insects. The survey through the connectivity and intrinsic organization suggests potential homology in the organization of the LAL among insects. The lateral accessory lobe (LAL) is a neuropil that is highly associated with the central complex (CX). The LAL is thought to facilitate communication between the CX and the motor centers. For example, it is proposed that the LAL receives input from the CX and selects the activity of descending output that originate in the brain and project to the thoracic motor centers.Male moths orient to conspecific females by the use of sex pheromones. The circuit within the LAL generates pheromone-evoked persistent firing in the silkmoth is thought to mediate sensory-motor pathways for phonotaxis cell bodies belong to the cluster located on the anterior surface beside the anterior optic tubercle, (2) they descend the ipsilateral side of the neck connective, and (3) they have smooth processes in the LAL. The neurons that meet these morphological features have been reported in other species, including the sphinx moth Manduca sexta and locust (Homberg, The LAL is classified into two subdivisions that are delineated by the LAL commissure that is the prominent bundle connecting the bilateral LALs: upper division and lower division (Iwano et al., Homberg, . The intHomberg, . This suBombyx (Supplementary Figure Drosophila (Lin et al., Bombyx and Drosophila. The lobula complex also supplies biased inputs to the LAL (Supplementary Figure Drosophila (Namiki et al., Drosophila seems to be biased toward the lateral side of the LAL (N\u00e4ssel and Elekes, The inputs from the CX terminate in specific sub-regions within the LAL (Heinze et al., The dendritic innervation of LAL DNs is biased to the lower division (Supplementary Figure Bombyx group-I DNs in Drosophila show similar features (VGlut-F-500726, VGlut-F-000150; FlyCircuit Database; Chiang et al., Putative homologous neurons of From these anatomical observations, we propose the modular organization of the LAL is common across insects. 
The upper division integrates the information from multiple protocerebral regions in addition to the CX, while the lower division produces the premotor signal output via DNs Figure .Drosophila and other insects. There are plentiful examples for their potential homology at the level of individual neurons, suggesting the presence of a ground pattern organization. Insects adapt to various environments in different ways, but the same basic design of the nervous system may underlie diverse behavioral repertoire. Expanding the application of a comparative neurobiological approach provides a powerful clue to explore these mechanisms.We reviewed the neuronal components of the LAL in the silkmoth and described the neurons with similar morphology in All authors listed, have made substantial, direct, and intellectual contribution to the work, and approved it for publication.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Gastroesophageal Reflux (GER) is a common condition in childhood characterized by the rise of gastric contents into the esophagus. According to the International Consensus of the Montreal, gastroesophageal reflux disease (GERD) is defined as \"the condition that develops when the retrograde passage of gastric contents causes troublesome symptoms and/or complications that result in an impairment of the quality of life of these patients\" . In pedi"} +{"text": "Drosophila melanogaster\u2014provides an opportunity to explore circuit functions using genetic manipulations, which, along with high-resolution microscopic techniques and lipid membrane interaction studies, will be able to verify the structure\u2013function details of the presented mechanism of perception.Perception is a first-person internal sensation induced within the nervous system at the time of arrival of sensory stimuli from objects in the environment. Lack of access to the first-person properties has limited viewing perception as an emergent property and it is currently being studied using third-person observed findings from various levels. One feasible approach to understand its mechanism is to build a hypothesis for the specific conditions and required circuit features of the nodal points where the mechanistic operation of perception take place for one type of sensation in one species and to verify it for the presence of comparable circuit properties for perceiving a different sensation in a different species. The present work explains visual perception in mammalian nervous system from a first-person frame of reference and provides explanations for the homogeneity of perception of visual stimuli above flicker fusion frequency, the perception of objects at locations different from their actual position, the smooth pursuit and saccadic eye movements, the perception of object borders, and perception of pressure phosphenes. Using results from temporal resolution studies and the known details of visual cortical circuitry, explanations are provided for (a) the perception of rapidly changing visual stimuli, (b) how the perception of objects occurs in the correct orientation even though, according to the third-person view, activity from the visual stimulus reaches the cortices in an inverted manner and (c) the functional significance of well-conserved columnar organization of the visual cortex. 
A comparable circuitry detected in a different nervous system in a remote species\u2014the olfactory circuitry of the fruit fly Helmholtz proposed that visual perception is mediated by unconscious inferences using cognitive resources is one of the different spiking activities observed at different neuronal processes. Others include dendritic spikes and axonal spikes . The third-person observed neuronal firing is comparable across different excitatory neuronal populations. However, severe variabilities exist in both the number of input connections and the sets of sensory inputs that can trigger a single neuronal firing. The number of inputs of a neuron ranges from one to many approximately 5600 to 60,000 (as in a monkey\u2019s motor cortex) Cragg .The observed firing of a neuron cannot be directly attributed to the induction of first-person internal sensory elements of the various higher brain functions due to the following reasons. (1) Since different sets of dendritic spine inputs can lead to the same action potential, neuronal firing is non-specific with regards to its inputs. For example, in a pyramidal neuron with thousands of inputs, the arrival of nearly any set of 40 excitatory postsynaptic potentials (EPSPs) at the axonal hillock can lead to its firing , a total delay ranging from 6 to 11\u00a0ms is expected for the activity to propagate from the retina to reach the dendritic spines of the cortical neurons Fig.\u00a0. The perHuman observers perceive visual stimuli as continuous above the flicker fusion frequency . Among all the sensory stimuli, light has the maximum velocity. In this regard, for both the predator and the prey, the visual details are of paramount importance for their survival. Therefore, the structural features of the visual cortex are expected to be optimized to obtain the finest visual details of the object and the organization of cortical columns is likely to have a functional role. Similarly, the nocturnal life of rodents necessitates touch perception via whiskers, which can provide the finest surface details of objects so that the rodents can retrieve memories of the remaining qualities of the object. In this regard, the structural organization of the barrel cortex is expected to have a functional role. How can the columnar organization provide the finest details for the internal sensation of object perception?In contrast to the different higher brain functions, the internal sensation of perception occurs in real time at the arrival of the sensory stimuli. From the earlier sections, it can be seen that two essential features are associated with the formation of the internal sensation of perception. One is the need for an overlapping of the units of perception for achieving homogeneity of the percept above the flicker fusion frequency. The second is the limitations imposed by the functional perimeter of the cortical columns that necessitate the arrival of only a few input stimuli from the object at a given column. Since it was found that neurons within a column are selectively sensitive to straight-line visual stimuli at limited angles of orientation . Since the lateral spread of activity through the inter-postsynaptic functional LINKs contributes a horizontal component to the oscillating potentials for the systems function of percepton formation, horizontal connections between the columns are essential. 
Since the induction of perceptons takes place only at the level of the inputs from the LGN neurons to the abutting postsynaptic terminals at different neuronal layers, it is not affected by several non-matching findings detected when neuronal firing is given a central role in sensory perception\u00a0. The eyeball indentation leads to a non-uniform tangential stretch of the retina that exerts a locally variable depolarization of horizontal cells Brindley , providiPressure phosphenes are perceived at the field of vision opposite to the location of application of pressure. This indicates that pressure phosphenes are perceived at a direction opposite to the direction of arrival of pressure stimuli over the posterior aspect of the eyeball similar to that of the normal visual perception. In this regard, the mechanism of percept formation of pressure phosphenes and normal visual perception has similarities in the direction at which percepts are formed. Since light-emitting cells were not observed in the retina Bokkon , how doeIn patients with visual hallucinations due to the pathological accumulation of Lewy bodies in the inferior temporal cortex shows an alpha frequency background in the posterior occipital regions when the eyes are kept closed. These alpha waves change to beta frequency waves when the eyes are opened and light stimuli reach the visual cortex. This indicates that there is relative increase in the vertical trans-synaptic spread of activity between different neuronal orders compared to the increase in the horizontal component of oscillating potentials contributed by the reactivation inter-postsynaptic functional LINKs between the dendritic spines of the visual cortical neurons that synapse with the LGN axonal terminals. Extracellularly-recorded gamma oscillatory responses in the visual cortex were shown to be important in visual perception activates only one glomerulus and the lateral horn (LH). PN axons innervate the MB by terminating in large boutons inhibit all the remaining glomeruli (Hong and Wilson The reactivation of a maximum number of inter-postsynaptic functional LINKs evokes the peak quality of percept for smell. A given odor is not expected to activate non-specific receptors even at the highest concentrations. It is observed that many odors are attractive at low concentrations but aversive at higher concentrations (Wang et al. Since a large number of postsynapses is expected to be inter-LINKed by innate mechanisms that enable the fly to smell food and survive, the presence of islets of LINKed inter-postsynapses is anticipated at the time of birth. It may be noted that perceptons can be formed even if the inter-postsynaptic functional LINKs occur between the postsynapses that belong to the same PN. A comparable location in mammals is the dendritic excrescences on the dendritic tree of individual CA3 neurons in the hippocampus. The present circuit can provide a mechanism for inducing internal sensation of perception at the inter-LINKed postsynapses of the PNs at the ORN-PN synaptic junctions and its correlation with the spiking of GABAergic ILNs and possibly the latter\u2019s phase-locking with the LFP oscillations in the MB (Tanaka et al. Oscillating potentials and the arrival of background sensory stimuli are expected to induce non-specific semblances and were explained to contribute to C-semblance for consciousness Vadakkan . 
Even thIt is known that ORNs spike even in the absence of ligands Wilson with an Based on the present work, limiting the number of inter-LINKable postsynapses is essential to provide visual perception properties similar to that of the pixilation in a digital image. This is achieved by limiting the lateral borders of the possible expansion of the islets of inter-postsynaptic functional LINKs by the vertically arranged cortical columns. It is described in a previous section that by increasing the number of cortical columns, the pixilation effect by the perceptons is increased. However, the number of different types of smell percepts induced is expected to be much smaller than the number of visual percepts. This is evident from the presence of only 50 different types of ORNs Wilson among thBased on the present work, innately determined and pre-existing inter-postsynaptic functional LINKs form the major mechanism responsible for percept formation. The pre-existing ones are expected to have inter-postsynaptic membrane hemifusions likely stabilized by trans-membrane proteins. Evidence for these can be examined in the visual cortex and olfactory glomerulus by using advanced high-resolution microscopes.The second possible mechanism of percept formation by rapid removal of hydration repulsion at the inter-postsynaptic locations and its rapid reversal by rehydration can be examined in the visual cortex and olfactory glomerulus. This is likely to require developing novel techniques.By changing the frequency of oscillating potentials in the glomerulus, the net semblance for olfactory perception can be changed. This will alter olfactory perception.Blocking large number of inter-postsynaptic functional LINKs either in the visual cortex or in the glomerulus is expected to alter the horizontal component of oscillating potentials, which will alter the frequency of oscillating potentials and disrupt visual or olfactory perception respectively.The present framework used third person observations to construct a feasible mechanism for the first-person internal sensation of perception and has explained a large number of findings made at different levels. In a state of background semblances induced by oscillating patterns of activation of several postsynapses that contributes to the internal sensation of awareness, the arrival of sensory stimuli induces perceptons that are basic units of perception. The lack of orientation for the perceptons indicates their likely mechanism of integration to form the percept is likely similar to that of pixels in a digital image. The present work has provided a new explanation for the spatial restriction of the cortical columns that can provide mechanisms for accommodating the requirements for the induction of perceptons, and incorporating a pixilation-like effect in the percepts. The presented framework has also provided answers to a large number of questions arising when the circuitry for perception is examined from a third-person frame of reference Wilson .Drosophila supports the view that unitary elements for the formation of the internal sensation exist across different members of the animal species and across different sensations. As the complexity of the nervous system varies, the nature of the basic quality of percepts is likely to change. Drosophila genetics can be used to design experiments to perturb the functioning of the circuitry at different levels to understand the details of the mechanism. 
A similar mechanism is expected to operate for the perception of other sensations in the nervous system.The percept of the smell of nutritious food that is deemed attractive is likely evolved from an evolutionary selection process due to the survival of flies that fed on those food items. Even though light travels much faster than smell, odorant molecules can reach the fly even from hidden locations from where visual inputs won\u2019t reach the fly. Therefore, perceiving smell is much more efficient than visual stimuli for detecting the presence of food. Flying towards the direction of increasing stimulus gradient of odorants can lead the fly towards the source of food. The presence of a circuitry for the first-person visual perception in mammals and olfactory perception in the fly"} +{"text": "Mycobacterium tuberculosis, the etiological agent of tuberculosis (TB), have evolved a remarkable ability to evade the immune system in order to survive and to colonize the host. Among the most important evasion strategies is the capacity of these bacilli to parasitize host macrophages, since these are major effector cells against intracellular pathogens that can be used as long-term cellular reservoirs. Mycobacterial pathogens employ an array of virulence factors that manipulate macrophage function to survive and establish infection. Until recently, however, the role of mycobacterial cell envelope lipids as virulence factors in macrophage subversion has remained elusive. Here, we will address exclusively the proposed role for phthiocerol dimycocerosates (DIM) in the modulation of the resident macrophage response and that of phenolic glycolipids (PGL) in the regulation of the recruitment and phenotype of incoming macrophage precursors to the site of infection. We will provide a unique perspective of potential additional functions for these lipids, and highlight obstacles and opportunities to further understand their role in the pathogenesis of TB and other mycobacterial diseases.Mycobacterial pathogens, including Mycobacterium tuberculosis (Mtb), and its close pathogenic relative M. marinum , and its structurally-related compound, phenolic glycolipids (PGL). In an elegant study, Cambier et al. proposed an exciting mechanism for their role in immune evasion strategies evolved by Mtb to evade immune surveillance and the subsequent recruitment of microbicidal macrophages , a truncated form of PGL containing just the saccharide domain and the phenol ring, which may also have biological activities induced increased secretion of the pro-inflammatory cytokines TNF-\u03b1, IL-6 and IL-12p40 receptor 2 (CCR2)-dependent recruitment of macrophages by inducing the expression of the chemokine CCL2 through a molecular mechanism still unknown for diverting macrophages from their natural function and for establishing infection. Most of the research has focused on the macrophage because its dialog with mycobacteria is thought to be the seminal step of the immune response. The study of Cambier et al. marks an important step in understanding the functions of lipids, as it takes into account the full repertoire of the innate immune system (Cambier et al., Mtb lipid effects will lead to the characterization of signaling pathways that are modulated for the benefit or detriment of mycobacteria survival in the host. 
Therefore, we would like to provide a framework to study the molecular mechanisms governing mycobacterial DIM and PGL activity at the direct interface with macrophages (Figure Mtb PAMPs (Cambier et al., We anticipate that the identification of the molecular mechanisms involved in s Figure . We propmar in the context of zebrafish infection (Cambier et al., M. bovis BCG as surrogate might be a valuable tool to enable the direct comparison of the effects of the PGL variants in the context of a relevant mycobacterial envelope and within the same genetic background (Tabouret et al., For the purpose of conciseness, we focused the current perspective on DIM and PGL, but we believe that in the near future there will be a need for a broad assessment of mycobacterial lipids and their overall effects in the immune system. Novel strategies need to be devised in order to deal with the redundant role as virulence or immunomodulatory factor among mycobacteria cell envelope lipids. In this aspect, our group has undertaken the construction of single and multiple lipid mutants to assess the role of trehalose-derived lipids, sulfolipids, diacyltrehaloses and polyacyltrehaloses, and their respective contribution to the virulence together with DIM (Passemar et al., In conclusion, we believe that the development of microbiological tools and adequate research models, in combination with multidisciplinary strategies, should open up new venues to achieve a better understanding of the ever-evolving relationship between host and pathogen. In many ways, the lipid at the mycobacteria wall is just a starting point!The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "The new EU project TRANS-FUSIMO (http://www.trans-fusimo.eu) aims at the translation of this software demonstrator into a fully integrated system for the FUS treatment of the liver.The movement of the target challenges the application of high intensive focused ultrasound (HiFU/MRgFUS) for the treatment of malignancies in moving abdominal organs such as liver and kidney. Moreover, the anatomical location of the lesion is often behind the rib cage. The physiology of the organs, the dynamic and complex blood perfusion impairs the energy disposition in the tissue due to the heat transfer within the organ. To explore the full potential of extracorporeal FUS to safely and precisely destroy tissue in the depth of a moving organ requires sophisticated software and advanced hardware. In the EU project FUSIMO for human patients with metastases or HCC will show the feasibility of the TRANS-FUSIMO system for the clinical setting.The FUSIMO software demonstrator based on MeVisLab incorporates a set of dynamic organ models for the physical and biophysical processes involved in MR guided FUS treatment: (i) an organ motion model simulates the patient specific deformation of the relevant anatomical structures during breathing; (ii) a patient specific tissue model represents the ultrasound propagation, the energy distribution as well as the tissue heating and cooling; (iii) an organ/tumour model captures the patient specific tissue\u2019s response to the therapy. These model components are integrated into a software demonstrator, which orchestrates the interplay between the models and feeds them with model parameters that are extracted from patient specific MR and/or US imaging data. 
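To make the orchestration of the model components described above more concrete, the following minimal sketch (hypothetical Python, not the actual FUSIMO/TRANS-FUSIMO MeVisLab implementation; all class and method names are illustrative assumptions) shows one way a sonication control loop could combine a motion model, a tissue heating model and a dose model with real-time beam steering:

```python
# Hypothetical sketch of a motion-compensated FUS sonication loop.
# Interfaces are illustrative only and do not reflect the real FUSIMO/TRANS-FUSIMO code.
from dataclasses import dataclass


@dataclass
class Position:
    x: float
    y: float
    z: float


class MotionModel:
    """Predicts the current target position from a live MR/US frame (placeholder)."""
    def predict_target(self, frame, planned: Position) -> Position:
        # A real implementation would apply a patient-specific breathing/deformation model.
        return planned


class TissueModel:
    """Estimates the temperature rise produced by one ultrasound pulse (placeholder)."""
    def temperature_rise(self, focus: Position, acoustic_power_w: float, pulse_s: float) -> float:
        # Stand-in for acoustic propagation plus a bio-heat (e.g. Pennes-type) simulation.
        return 0.05 * acoustic_power_w * pulse_s


class DoseModel:
    """Accumulates a simplified thermal-dose surrogate and signals when the goal is met."""
    def __init__(self, target_dose: float):
        self.target_dose = target_dose
        self.accumulated = 0.0

    def update(self, temp_rise: float, pulse_s: float) -> bool:
        self.accumulated += temp_rise * pulse_s  # toy surrogate, not a CEM43 calculation
        return self.accumulated >= self.target_dose


def sonication_loop(frames, planned_target: Position, motion: MotionModel,
                    tissue: TissueModel, dose: DoseModel,
                    acoustic_power_w: float = 50.0, pulse_s: float = 0.1):
    """Steer each pulse to the motion-corrected target until the dose goal is reached."""
    for frame in frames:
        focus = motion.predict_target(frame, planned_target)       # motion compensation
        d_temp = tissue.temperature_rise(focus, acoustic_power_w, pulse_s)
        if dose.update(d_temp, pulse_s):                            # treatment endpoint
            return focus
    return None
```

In the real system, the temperature feedback would of course come from MR thermometry and validated acoustic/bio-heat simulations rather than the toy surrogate used in this sketch.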
The system and the model components are being validated in phantom and ex vivo experiments. These experiments show that the FUSIMO system is capable of compensating organ motion through real-time motion detection, motion modelling and real-time beam steering. The FUSIMO/TRANS-FUSIMO software demonstrator comprises specific models for the simulation of FUS application in moving organs based on imaging data derived from volunteers. It supports the assessment of the feasibility of the intervention, predicting and optimizing the outcome, detecting potential risks and avoiding them, as well as monitoring the progress and tracking deviations from the planned procedure. Our in vivo animal studies and first patient study shall show that MRgFUS in moving organs can be performed safely, efficaciously and effectively. In TRANS-FUSIMO, a fully integrated system will be developed."} +{"text": "Fractures of the lateral end of the clavicle are common in pediatric patients; most of these fractures occur at the physeal level, representing Salter-Harris injuries. The vast majority of fractures of the lateral end of the clavicle are managed nonoperatively. In this report, we describe a unique type of fracture of the distal end of the clavicle in pediatric patients in which the fracture occurs in the metaphyseal lateral clavicle, with the proximal edge of the fracture displaced posteriorly through the trapezius muscle, causing obvious deformity. It is similar in pathology to type IV AC joint dislocation. In this study we report this injury in an eleven-year-old boy. A literature review showed that similar injuries have been described three times previously (two of them in pediatric patients). Due to the significant clinical deformity of this category, with entrapment of the bone through the trapezius muscle, reduction (open or closed) of the fracture is the recommended treatment. The clavicle ranks among the most frequently fractured bones in the immature skeleton. It is prone to fracture as it resides subcutaneously along the majority of its length and distributes almost all forces from the upper extremity to the trunk. Fractures of the clavicle are categorized according to anatomic location (medial third, middle third, and distal third); the bulk of these involve the middle third, while distal fractures represent only 10\u201320%. As fusion of the distal epiphysis is not complete until the mid-twenties, fractures of the lateral end in children usually result in a physeal separation of the distal clavicle, rather than a true acromioclavicular (AC) separation. The purpose of this report is to describe a new category of distal clavicular fracture in children that occurs in the distal metaphysis and behaves like AC joint disruption type IV (posterior displacement). This category requires special attention as it needs reduction and not mere observation, as in the majority of distal clavicular fractures. We present our case in addition to a review of the literature for possible similar published cases. A PubMed search was used with cross-referencing of the articles. Studies describing distal clavicular fractures in children were assessed for possible similar cases. An 11-year-old male was unrestrained in a motor vehicle collision. No loss of consciousness was reported. He presented to the emergency room with significant left shoulder pain. There was tenderness to palpation at the left shoulder and clavicle, with a minor overlying abrasion, erythema, and a prominent posterior bony mass over the posterior shoulder.
He was neurovascularly intact but was unable to move the shoulder due to significant pain. Radiography and CT revealed an oblique fracture of the distal third of the left clavicle with posterior displacement of the distal end of the proximal fragment. The patient underwent surgery one week following the initial injury. A small transverse incision was made over the distal clavicle. The periosteal sleeve around the distal end of the clavicle was disrupted; however, the attachment of the coracoclavicular ligament into the inferior part of the periosteal sleeve surrounding the clavicle was still intact. The distal end of the proximal fragment was observed to have disrupted the periosteal sleeve, with significant posterior displacement into the trapezius muscle. Multiple attempts were required to free the distal end of the medial segment of the fracture from the trapezius muscle. Then, the fracture ends were reduced together. Once good reduction was achieved, three K-wires were inserted percutaneously from the lateral end of the acromion to the clavicle, traversing both ends of the fracture. Injuries involving the distal clavicle in children are classically \u201cpseudodislocations\u201d of the AC joint, in which the joint and the coracoclavicular ligaments are usually intact and the fracture involves the lateral physis of the clavicle, with displacement of the bone through a split in the periosteal sleeve. True dislocation typically does not occur because the AC joint is maintained by the trapezius and the deltoid muscles. In the skeletally immature patient, ligamentous attachment by the coracoclavicular (CC) ligaments is biomechanically stronger than the physeal-metaphyseal region, thus allowing the lateral portion of the clavicle to be displaced from its periosteal tube and giving the radiographic impression of a dislocation of the AC joint due to the radiolucent cartilaginous lateral physis. The excellent remodeling capacity of immature bone allows most distal clavicular injuries to be treated nonoperatively. It is widely accepted that nondisplaced or minimally displaced fractures can be managed conservatively with sling immobilization and early rehabilitation with range-of-motion exercises. Pediatric injuries that involve the AC joint have been classified by Dameron and Rockwood in a scheme that mirrors the classification of adult AC joint injuries. Type IV injury includes posterior displacement of the distal clavicle in relation to the acromion, with buttonholing of the shaft through the trapezius. Literature review revealed three other similar cases with type IV-like fractures of the distal clavicle; two of these cases were in the pediatric age group. In conclusion, our case and the three other cases presented previously in the literature suggest that there is a separate category of distal clavicular fracture (more common in the pediatric population) that involves the metaphysis of the distal clavicle with posterior displacement of the medial end of the proximal fracture segment through the trapezius. This fracture is similar to a type IV AC joint injury, yet the fracture is entirely through the metaphysis of the distal clavicle. Due to the significant clinical deformity of this category, with entrapment of the bone through the trapezius muscle, reduction (open or closed) of the fracture is the recommended treatment."} +{"text": "Rac GTPases are regulators of the cytoskeleton that play an important role in several aspects of neuronal and brain development.
Two distinct Rac GTPases are expressed in the developing nervous system, the widely expressed Rac1 and the neural-specific Rac3 proteins. Recent experimental evidence supports a central role of these two Rac proteins in the development of inhibitory GABAergic interneurons, important modulatory elements of the brain circuitry. The combined inactivation of the genes for the two Rac proteins has profound effects on distinct aspects of interneuron development, and has highlighted a synergistic contribution of the two proteins to the postmitotic maturation of specific populations of cortical and hippocampal interneurons. Rac function is modulated by different types of regulators, and can influence the activity of specific effectors. Some of these proteins have been associated to the development and maturation of interneurons. Cortical interneuron dysfunction is implicated in several neurological and psychiatric diseases characterized by cognitive impairment. Therefore the description of the cellular processes regulated by the Rac GTPases, and the identification of the molecular networks underlying these processes during interneuron development is relevant to the understanding of the role of GABAergic interneurons in cognitive functions. Inhibitory \u03b3-aminobutyric acid (GABA)ergic interneurons modulate brain functions of the telencephalon causes several defects, including a failure of the tangential migration of interneurons from the GEs with the conditional deletion of Rac1 in postmitotic neurons by the Synapsin-I-Cre results in mice that are neurologically impaired, with spontaneous epileptic seizures in the development of interneurons. The secreted factor neuregulin-1 (NRG1) interacts with its transmembrane tyrosine kinase receptor ErbB4 to promote the growth of dendrites in mature interneurons. Evidence has been produced that Kalirin-7, a major dendritic Rac GEF (guanine nucleotide exchange factor) of the Trio/Kalirin family, mediates the effects of NRG1/ErbB4 activation on the growth of dendrites in interneurons, and that the phosphorylation of the carboxy-terminus of kalirin-7 is critical for these effects pathway that produces phosphatidylinositol 3,4,5-trisphosphate (PIP3) at the plasma membrane. How the activation of PI3-kinase affects the migration of the precursors and their later differentiation into mature inhibitory neurons is unknown. One possibility is that PIP3 engages the GEFs to activate the Rac\u2019s at the plasma membrane can be regulated by Rac GTPases , which together with GIT2 is part of a family of scaffold proteins with Arf GTPase activating protein (GAP) activity. GIT proteins form stable complexes with the Rac/Cdc42 GEFs of the PIX family, \u03b1PIX, and \u03b2PIX (Totaro et al., Less is known about the role in neuronal development of the widely expressed GIT2 protein. GIT2 knockout mice have no evident brain defects, but show anxiety-like behavior as a major neurological symptom (Schmalzigaug et al., Interestingly, it has been recently shown that knockout of GIT1 causes a strong and specific reduction in the inhibitory input and in the number of PV-positive interneurons in the hippocampal CA1 region, where the GIT1 protein is normally expressed (Won et al., in vitro highlight the importance of flanking the studies in vivo with reductionistic approaches entailing explants or primary cultures of interneurons from the GEs, to address the machinery required for their development in simplified experimental systems. 
The identification of the molecular sets involved in the different phases of interneuron development is expected to give an important contribution to the understanding of different neurological and psychiatric disorders where defects in cortical GABAergic cell development appears to importantly contribute to the pathogenesis.The role of Rac1 and Rac3 in the development of cortical and hippocampal GABAergic interneurons has been confirmed by recent studies. Still, substantial work remains to characterize the distinct effects of the two Rac proteins in different developmental steps required for the formation of mature interneurons. Moreover, a number of players has been identified that may regulate or mediate the action of the Rac GTPases in these cells. The wide variety of GABAergic cell types required for normal brain function suggests that specific combinations of regulators and effectors for these GTPases are expected to take part in the maturation and function of distinct types of interneurons. Most of the work on developing interneurons has been performed using mice models, but the fewer studies The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "The insertion of an endotracheal tube (ETT) in a patient of esophageal atresia (EA) with trachea-esophageal fistula (TEF) can be challenging for even the most experienced of anesthesiologist. Incorrect placement, if not detected on time, can have the most devastating complications. The correct placement of an endotracheal tube is dependent on the anatomy and morphology of the fistula. Holzki et al [1] in their study of bronchoscopy on 113 patients of EA with TEF reported that in 11% of patients, the fistula was below the level of carina, in 22% the fistula was within 1cm of the carina while in 67% the fistula was 1cm above the carina. Furthermore, these patients sometimes have other associated congenital anomalies such as tracheomalacia and vascular ring that require additional intervention.The anesthetic and surgical management of neonates with EA with TEF focuses on ventilation of the lungs while avoiding ventilation through the fistula. The mainstay of airway management focuses on careful tracheal intubation beyond the fistula with avoidance of muscle relaxants and positive pressure ventilation until the fistula has been ligated. Gastrostomy, either preoperatively under local anesthesia or soon after the induction of general anesthesia is sometimes used to decompress the stomach and prevent gastric distention.[2]Special attention to placement of the endotracheal tube in the trachea is warranted. The incorrect placement of an ETT can cause increased morbidity by ineffective ventilation as a result of migration of ETT in the fistula. The additional gastric distension would lead to an increased intra-abdominal pressure and potentially cause decreased venous return with hemodynamic compromise as well as restriction of diaphragmatic excursion with significant respiratory embarrassment. Gastric distension can also result in the aspiration of gastric contents via the TEF causing aspiration pneumonitis. 
Aspiration pneumonitis alone has accounted for up to 50% of the perioperative morbidity and mortality in this patient population.[3] Thus theoretically, the sooner the TEF is closed, the less likely are any of the predictable complications.Techniques of Endotracheal Tube Placement:The ideal method of intubation is placement of the tracheal tube tip beyond the fistula. This facilitates adequate ventilation and prevents gastric insufflation. However, this is not always possible especially in large low lying fistulas. Various techniques have been advocated for accurate endotracheal tube placement and depend on the type and site of fistula, pulmonary status and co-morbid conditions. Direct laryngoscopy or auscultation may not be confirmatory for correct placement of the ETT. Also, a properly placed ETT at the beginning of the case may slip into a large fistula. This can happen most frequently during lateral positioning of the patient for thoracotomy or thoracoscopy.[4] Role of Bronchoscopy:Preoperative bronchoscopy may help the anaesthesiologist and the surgeon to assess the size and location of the TEF and plan for the best airway management strategy. It provides information about the exact size of the fistula, its location, and other coexisting airway anomalies, including the presence of proximal fistula. Additionally, it is also be useful to confirm ETT placement after endotracheal intubation and intra-operatively in the event of any cardio-respiratory instability or ETT dislodgement. The routine use of preoperative bronchoscopy by Aztori et al modified the surgical approach and management of 24% patients.[4] Most tertiary care centres in the west advocate the use of either a rigid or flexible bronchoscope for the identifying of the site and anatomy of the fistula as well as for guiding endotracheal tube placement.[6] Flexible bronchoscopy can be used to confirm proper placement of the ETT and rule out any migration of the endotracheal tube with position changes. In the event of intra-operative tube migration and subsequent hypoxia urgent bronchoscopic evaluation and re-positioning of the ETT is advocated even during thoracotomy. The rigid bronchoscope offers the possibility to ventilate the patient while also visualizing the airway. Other adjuncts in case of continued cardiorespiratory instability include needle decompression of the stomach and pleura as well as occlusion of the fistula with a balloon via emergent gastrostomy or via bronchoscopy.[7-10]Considering the benefits conferred by bronchoscopy and given the wide-spread availability of this equipment, it should be considered standard practice in all patients of EA with TEF.Minimal Invasive Repair of TEF:Minimally invasive surgical techniques are being used increasingly for the repair of EA with TEF in neonates. Traditional repair through a right dorsolateral thoracotomy has major disadvantages, including a large scar, significant postoperative pain, and a high degree of scoliosis and shoulder girdle weakness later in development. An advantage of minimally invasive surgery is superior visualization of the fistula and the surrounding anatomy through the thoracoscope. However, this new approach poses extra challenges to the anesthesiologist because of the requirement for collapsing a lung during the thorascopic repair. 
Thus, the anesthesiologist needs to isolate not just the TEF, but also a mainstem bronchus during the procedure.Airway management and placement of ETT:The correct placement of the ETT below the level of the fistula is crucial during endotracheal intubation and prior to ventilation. Therefore, spontaneous respiration is maintained during induction with inhalational anaesthetic agents, and muscle relaxants are administered only after the ligation of the fistula so as to prevent gastric insufflation from positive pressure ventilation. Alternatively, a rapid sequence intravenous induction without positive pressure mask ventilation may be the preferred mode of induction, followed by rapid securing of the airway.[12] Several different methods for securing the airway and controlling the fistula have been described. Traditional technique:The technique most commonly used involves deep endobronchial insertion of the ETT followed by gradual withdrawal until the ETT is just above the carina and bilateral equal breath sounds are heard on auscultation of the chest.[2] Additionally, the ETT is rotated so the bevel faces anteriorly and away from the fistula, which is most commonly located on the posterior wall.[10] In this manner, the tip of the ETT is placed beyond the fistula and the anteriorly placed bevel ensures that the shaft of the endotracheal tube occludes the fistula. For obvious reasons, an ETT without side holes is recommended. The correct placement may be confirmed either by auscultation or by bronchoscopic evaluation. In case a gastrostomy has been performed previously, the end of the gastrostomy tube may be placed underneath a water seal. The presence of bubbling indicates ventilation through the fistula, which will occur if the tip of the ETT is proximal to the opening of the fistula. The ETT should be pulled back until gas just begins to bubble from the end of the gastric tube, then re-advanced until the bubbling stops.[2] A capnograph inserted into the gastrostomy tube will indicate the same thing by showing a tracing of respiratory movements and persistently elevated end-tidal carbon dioxide levels.[13]This technique is especially useful in emergency situations and when TEF is associated with duodenal atresia.Use of Fogarty Balloon Catheter:Use of a Fogarty balloon catheter to occlude the fistula is another commonly employed method. After the induction of general anaesthesia, a suitably sized Fogarty arterial embolectomy catheter is placed through the vocal cords under direct laryngoscopy and then the bronchoscope is placed through the cords. The Fogarty balloon-tipped catheter is then advanced into the TEF. The Fogarty catheter often preferentially passes into the TEF because of the dependent position of the TEF. The fistula is then occluded by inflating the balloon-tipped catheter. The bronchoscope is removed, the trachea is intubated with an oral ETT in the standard fashion, and the ETT is placed in the trachea alongside the Fogarty catheter.[9] The insertion of the Fogarty catheter can be from either the tracheal route (2-3Fr) or via the gastrostomy (5Fr).[14]However, apart from being technically complex, this technique has its own problems. The occlusion of the fistula with a Fogarty embolization catheter may or may not be effective. In order to place the Fogarty catheter with a rigid bronchoscope, ventilation must be interrupted, and the size of the bronchoscope may limit use of the catheter.
In the event of the Fogarty catheter getting dislodged intra-operatively, it may occlude the trachea, making ventilation impossible. Also, the catheter can damage the esophageal mucosa at the balloon site.[10] Further, gastrostomy is not routinely performed in patients with no other complications, so retrograde occlusion through a gastrostomy may not always be an option.[15] Despite the above-mentioned drawbacks, the Fogarty catheter is a useful aid in isolating large fistulas or those located near the carina.Use of Double Fogarty Catheters:Placing two balloon-tipped blockers, one in the fistula and the other in the right mainstem bronchus, is a viable technique for thoracoscopic EA with TEF repair when the fistula is at or very close to the level of the carina and one-lung ventilation is required. In a case report [16] on a full-term neonate with a type C EA with TEF (fistula at the level of the carina), two Fogarty-type balloon-tipped embolectomy catheters were placed alongside the ETT to successfully achieve the goal of blocking ventilation of the fistula and the right lung. The balloon of one catheter blocked the fistula while the other was inserted into the right mainstem bronchus to occlude ventilation of the right lung. The use of fibreoptic bronchoscopy greatly facilitated placement of the blockers. The patient made an uneventful recovery.[16]The disadvantages of bronchial blockers include the possibility of mucosal damage (though not yet seen) and retrograde migration of either blocker into the tracheal lumen, resulting in partial or complete airway obstruction. Insufficient blockade of the mainstem bronchus may lead to partial ventilation of the collapsed lung and bronchial rupture.[17]Use of Cuffed ETT:The cuff of an ETT can be similarly used to block the fistula. Immediately following the insertion of the ETT, a 2.5 mm flexible bronchoscope is inserted into the ETT. The ETT is advanced under direct vision to just above the carina but distal to the fistula. At this location, the cuff on the ETT is inflated to occlude the fistula. Greenberg et al have reported accurate placement of a 3.5 mm cuffed ETT with bilaterally equal breath sounds on auscultation and without gastric distension on positive pressure ventilation.[18] Thus, a cuffed ETT may be accurately positioned to exclude the fistula, allowing for an easier and safer operation.[12]One-lung Ventilation:Many case reports have documented the successful use of one-lung ventilation in term and preterm neonates. After the administration of general anaesthesia, the ETT is advanced into the left mainstem bronchus until ligation of the fistula, after which it is slowly withdrawn back into the trachea for oesophageal anastomosis. Inserting the ETT into the left mainstem bronchus blocks the fistula and the right mainstem bronchus simultaneously. However, differences in the diameters of the mainstem bronchus and the trachea may result in an ETT that fits a mainstem bronchus well but is too small for the trachea, while one that fits the latter is too large for the former.
This might predispose to left bronchial edema.[21] In one report, left upper lobe collapse was also seen with deliberate endobronchial ETT placement in neonates to achieve one-lung ventilation.[19] Another disadvantage is the occurrence of desaturation during one-lung ventilation in EA with TEF repair.[12] The ETT may need to be retracted several times to ventilate both lungs intra-operatively, and subsequent repositioning of the ETT tip back into the left mainstem bronchus requires fibreoptic bronchoscopy, which may be hazardous and cumbersome in a semi-prone neonate, made even more difficult by the sterile drapes separating the small distance between the operative field and the patient\u2019s mouth.Use of a specially designed bifurcated tracheal tube for EA with TEF repair has also been described.[22] This is not a double-lumen tube, and is therefore not suitable for differential lung ventilation.Use of Endobronchial blocker:In neonates with EA and a large carinal TEF, an endobronchial blocker has been used to occlude the fistula.[23] The endobronchial blocker is inserted down to the level of the mid-trachea and the proximal trachea is intubated with a microcuff endotracheal tube alongside the blocker, while the fiberoptic bronchoscope is used to guide the blocker into the fistula. Use of a new 5-Fr endobronchial blocker suitable for use in children, with a multiport adapter and fiberoptic bronchoscope, has been described.[24] This helps reduce the incidence of hypoxemia and also aids repositioning of the endobronchial blocker intra-operatively.Airway management and ETT positioning in a case of EA with TEF are definitely challenging but are aided to a great extent by preoperative and intra-operative bronchoscopy. Of the various techniques mentioned above, the traditional technique of endobronchial intubation followed by gradual withdrawal into the trachea remains the most popular amongst anaesthesiologists. The technique of choice must take into account the type and location of the fistula, the preoperative chest condition, pulmonary compliance and other associated co-morbid conditions. Positioning of the ETT in the left mainstem bronchus or distal to the fistula may minimize gastric insufflation and improve ventilation as well as the surgical field; however, the surgeon and anaesthesiologist must remain vigilant at all times for any inadvertent tube mal-positioning with its catastrophic sequelae.Source of Support: NilConflict of Interest: NoneEditorial Comment: Many pediatric surgeons would disagree with the above recommendation of the authors to place the ETT tip beyond the TEF. Instead, if the tip of the ETT is placed just above the TEF, it is easier to identify the lower esophageal segment in the most common variant of EA, type C with distal TEF, as it would distend with each breath. However, there is no randomized trial or meta-analysis available to suggest the superiority of one over the other."} +{"text": "A comprehensive technique for earthquake-related casualty estimation remains an unmet challenge. This study aims to integrate risk factors related to characteristics of the exposed population and to the built environment in order to improve communities\u2019 preparedness and response capabilities and to mitigate future consequences.An innovative model was formulated based on a widely used loss estimation model (HAZUS) by integrating four human-related risk factors that were identified through a systematic review and meta-analysis of epidemiological data.
The common effect measures of these factors were calculated and entered into the existing model\u2019s algorithm using logistic regression equations. Sensitivity analysis was performed by conducting a casualty estimation simulation in a high-vulnerability risk area in Israel.The integrated model outcomes indicated an increase in the total number of casualties compared with the prediction of the traditional model; with regard to specific injury levels, an increase was demonstrated in the number of expected fatalities and in the severely and moderately injured, and a decrease was noted in the lightly injured. Urban areas with higher rates of at-risk populations were found to be more vulnerable in this regard.The proposed model offers a novel approach that allows quantification of the combined impact of human-related and structural factors on the results of earthquake casualty modelling. Investing efforts in reducing human vulnerability and increasing resilience prior to an occurrence of an earthquake could lead to a possible decrease in the expected number of casualties. Despite immense efforts invested in disaster risk reduction around the world, earthquakes continue to claim a heavy toll and remain the deadliest natural disaster worldwide, as demonstrated in recent events such as the 2010 Haiti and 2015 Nepal earthquakes. Dense urban population centers are known to be highly vulnerable in this context. Future challenges in the field of disaster risk reduction thus require a more diverse, people-centered preventive approach. An efficient disaster risk management program should be founded on preliminary evaluation of vulnerabilities and potential risks to the population, structures and infrastructures; such knowledge can be leveraged for improving pre-disaster preparedness and mitigation and for the development of an effective response. Israel is situated along the Dead Sea Fault, which has been the origin of intensive earthquakes causing widespread devastation for over 2000 years. Although no major earthquake has struck the region in the last 90 years, experts forecast that strong tremors might occur in the near future and stress the need for action in the region, which is almost entirely located in a seismic risk zone.The aim of this study is to produce an integrative and interdisciplinary model for estimating earthquake casualties in a high-risk area in Israel, using risk factors associated with both the built environment and the population\u2019s characteristics. The model structure is generic and can be applied in different regions depending on their specific population characteristics and available data.The new model was formulated based on the HAZUS casualty estimation model. Several human-related factors that were found to increase the risk of injury and death in earthquakes were integrated into the current HAZUS model, and a sensitivity analysis was performed in order to assess how the addition of these parameters affects the casualty projections.The predictors used in this model were identified based on a previous peer-reviewed systematic analysis aimed at assessing individual and household characteristics associated with earthquake-induced death and injury in previous events; the review included studies with an analytical design that reported effect size measures, and the results revealed four risk factors that increased human vulnerability to earthquakes.
These were: gender; age (>65 years); having a physical disability; and belonging to a low socioeconomic status. In step (a) of the estimation procedure, the probability of death was calculated in order to determine the number of fatalities; this number was subtracted from the total number of residents in the census tract (residents with the highest death probabilities were excluded first and so on) and the remainder was then transferred to serve as the basis for step (b), estimation of the expected severely injured, which was performed in the same manner as step (a). Steps (c) and (d) estimated the moderate and slight injuries, again calculated in the same manner. After ending the procedure (following step (d)), the remainder of the residents were defined as uninjured. The entire estimation procedure is detailed in the accompanying figure. Age was the strongest factor increasing the risk of death in earthquakes, followed by socioeconomic status, physical disability and gender. Individuals aged 65 years and older had the highest combined OR for increased risk of dying in an earthquake, 2.92, compared to younger individuals. Two studies directly assessed the impact of age and gender with regard to the risk of injury; the results indicated that gender has a higher combined OR, of 1.7, compared with age. No studies were found that measured the direct impact of socioeconomic status and physical disability on the risk of injury, and therefore the combined effect that was calculated for the risk of death was used for injury as well. The combined effect measures are detailed in the accompanying table.The data gathered regarding the rate of risk factors related to individual and household characteristics in the population revealed disparities between different census tracts (neighborhoods) in Tiberias. The rate of residents older than 65 years in the various census tracts of the city ranged between 1.5% and 16.5%. The rate of physically disabled persons ranged between 3.5% and 27.5% in different census tracts. Gender distribution was similar among all census tracts. The average annual income of the census tract residents was compared with the national average and found to be lower than the national average in 10 out of 12 census tracts.The results of the casualty estimation procedure for the twelve census tracts of the city of Tiberias are detailed in the accompanying table. When examining the casualty rates among the various census tracts, it becomes clear that certain areas bear a heavier casualty burden than others. Areas with relatively low rates of vulnerable populations (manifested in the risk factors defined previously) are likely to have almost 10% fewer fatalities than more vulnerable areas; for example, census tract 24 is expected to have 20% fatalities versus 28% in census tract 33, which includes much higher rates of elderly (over 65) and physically disabled residents and is also ranked lower in the socioeconomic index. These differences are also demonstrated for severely injured casualties (5% expected in census tract 24 vs. 10% in census tract 33). When examining the predicted values of moderately and slightly injured casualties, the differences seem to decrease; the difference between the predicted lowest and highest values is 4% for moderately injured casualties and 2% for slightly injured casualties.This paper provides an innovative, integrated and interdisciplinary practical approach to estimating the number of earthquake-induced casualties of different severity levels using a combination of human-related and structural factors.
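To make the adjustment step concrete, the sketch below shows one standard way of entering combined odds ratios into a baseline casualty probability on the log-odds (logistic) scale. It is an illustration only, not the authors' implementation: apart from the reported combined OR of 2.92 for age over 65, every value, name and the baseline rate is a hypothetical placeholder.

```python
import math

def adjust_probability(baseline_p, odds_ratios, present):
    """Move a baseline casualty probability to the log-odds scale, add log(OR)
    for every risk factor the individual carries, and transform back."""
    log_odds = math.log(baseline_p / (1.0 - baseline_p))
    for factor, carried in present.items():
        if carried:
            log_odds += math.log(odds_ratios[factor])
    return 1.0 / (1.0 + math.exp(-log_odds))

# Only the 2.92 value is taken from the text above; the rest are placeholders.
odds_ratios_death = {"age_over_65": 2.92, "low_ses": 1.5,
                     "disability": 1.4, "female": 1.2}

p_base = 0.02  # hypothetical engineering-based fatality probability for one tract
p_adjusted = adjust_probability(p_base, odds_ratios_death,
                                {"age_over_65": True, "low_ses": True,
                                 "disability": False, "female": False})
print(round(p_adjusted, 4))  # higher than the 2% structural baseline
```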
The results of the new model were compared with a traditional engineering-based model. The integration of human-related risk factors altered both the overall expected number of casualties in a given event and, more importantly, the composition of casualties. There was an increase in the overall number of casualties and in the expected number of fatalities, severe and moderate injuries, and a decrease in the expected number of slight injuries. Geographical variability in vulnerability was also demonstrated, as areas of Tiberias with higher rates of at-risk population were identified to be more vulnerable in this regard. When interpreting these results, one should note that since the examined area\u2019s social and demographic attributes have a great impact on the outcomes of the model, two sub-populations located within a small geographical distance may produce different results in the same earthquake simulation. In the current study, the city of Tiberias is ranked relatively low in the socioeconomic index compared to other municipalities in the country. The results of the current model, along with the evidence from the literature, may indicate a potential gap in the estimation process of earthquake-related casualties resulting from the previously missing human characteristics dimension. This conjecture is also supported by other evidence dealing with the level of uncertainty of the traditional HAZUS model evaluated in several validation assessments, which manifested among others in inaccuracies of the casualty estimates. The addition of human-related risk factors to the estimation process of earthquake-related casualties is part of a more comprehensive approach to risk assessment. Several limitations of the current study derive from the scarcity of data regarding the effect size measures of human-related risk factors. Others result from difficulties in obtaining reliable data from areas steeped in chaos. The implications and use of this novel approach are wide-ranging. Previously, casualty estimation was based solely on engineering methods and damage to the built environment. As suggested by this paper, the expected number of casualties can be estimated utilizing a more comprehensive approach which incorporates the social vulnerability of the investigated area.This study demonstrates the use of an innovative approach which takes into account both the built environment and population characteristics to predict earthquake casualties. Such knowledge may lead to more focused investment of efforts in reducing the vulnerability of potentially more severely affected populations, prior to an occurrence of an earthquake."} +{"text": "Premature birth affects 12.5% of pregnancies, and as a result, intraventricular hemorrhage (IVH) of the brain with subsequent development of hydrocephalus is a major cause of morbidity, mortality, and poor intellectual outcomes. Prospective clinical trials to dissolve IVH clots with intraventricular infusion of tissue plasminogen activator through surgically implanted catheters demonstrated improved intellectual outcome in the survivors but at an increased risk of hemorrhage. A non-invasive method to lyse the clots may result in a reduced risk of subsequent hydrocephalus and better intellectual outcome for these patients.
Magnetic resonance guided focused ultrasound (MRgFUS) delivered through the open fontanel is such a therapy, but requires a versatile transducer positioning system that adapts to the MRI compatible transport and imaging incubator used to manage these fragile neonates.A five degree of freedom MRI compatible robotic device has been designed to precisely position an MRgFUS transducer for transcranial therapies within the constraints of the neonatal incubator system. Pulsed focused ultrasound (FUS) energy is used to lyse IVH blood clots in the brain of the patient. The robot is used to position the transducer above the head of the patient while the patient is inside the MRI machine. Five ultrasonic, non-magnetic motors are used to actuate the robot. Specially selected, non-magnetic materials are used to construct the robot. A workspace analysis for delivery of the FUS to the intraventricular system of the brain in typical premature neonates was carried out, as were predicted load and speed requirements. A mock-up of the treatment system, with a 3D printed neonatal skull inside the incubator was constructed. The first prototype of the robot was tested for MRI compatibility while the motors were in operation. A master-slave control system, integrated with the MRI imaging system was also designed.Tests of individual components of the robot show that it has the potential for highly accurate targeting with the FUS transducer through the anterior fontanel of a neonatal head within the incubator transport system in the MRI environment.The design meets workspace, load and speed requirements. The activated components do not create significant imaging artifacts or degradation of signal to noise ratio (SNR). Geometric distortions due to large metal objects (such as the drive motors) are negligible at distances longer than half the focal length of the transducer. We conclude from these experiments that the design of the robot is appropriate for transcranial MRgFUS thrombolysis and could significantly improve treatment of IVH in premature infants."} +{"text": "The effect of galanin is mediated through three GPCR subtypes, GalR1-3. The limited number of specific ligands to the galanin receptor subtypes has hindered the understanding of the individual effects of each receptor subtype. This review summarizes the current data of the importance of the galanin receptor subtypes and receptor subtype specific agonists and antagonists and their involvement in different biological and pathological functions."} +{"text": "The analysis of retinal blood vessels plays an important role in detecting and treating retinal diseases. In this review, we present an automated method to segment blood vessels of fundus retinal image. The proposed method could be used to support a non-intrusive diagnosis in modern ophthalmology for early detection of retinal diseases, treatment evaluation or clinical study. This study combines the bias correction and an adaptive histogram equalisation to enhance the appearance of the blood vessels. Then the blood vessels are extracted using probabilistic modelling that is optimised by the expectation maximisation algorithm. The method is evaluated on fundus retinal images of STARE and DRIVE datasets. The experimental results are compared with some recently published methods of retinal blood vessels segmentation. The experimental results show that our method achieved the best overall performance and it is comparable to the performance of human experts. 
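The abstract above describes enhancing vessel contrast and then extracting vessels with probabilistic modelling optimised by expectation maximisation. A minimal sketch of that general idea is given below, fitting a two-component Gaussian mixture (by EM) to adaptively equalised green-channel intensities; the file name, the two-component choice and the scikit-image/scikit-learn calls are illustrative assumptions, and the published method additionally uses bias-field correction and a distance transform, which are omitted here.

```python
import numpy as np
from skimage import io, exposure
from sklearn.mixture import GaussianMixture

# Load a fundus image (assumed RGB) and keep the green channel,
# where vessel contrast is usually strongest.
img = io.imread("fundus.png")                 # hypothetical file name
green = img[:, :, 1].astype(float) / 255.0

# Contrast-limited adaptive histogram equalisation to enhance vessel appearance.
enhanced = exposure.equalize_adapthist(green, clip_limit=0.02)

# Two-component Gaussian mixture fitted by EM: one mode for (darker) vessels,
# one for background; each pixel is assigned to its most likely mode.
features = enhanced.reshape(-1, 1)
gmm = GaussianMixture(n_components=2, random_state=0).fit(features)
labels = gmm.predict(features).reshape(enhanced.shape)

# Treat the component with the lower mean intensity as "vessel".
vessel_component = int(np.argmin(gmm.means_.ravel()))
vessel_mask = labels == vessel_component
print("vessel pixel fraction:", vessel_mask.mean())
```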
Automated segmentation of retinal structures allows ophthalmologists and eye care specialists to perform large-population vision screening exams for early detection of retinal diseases and treatment evaluation. This non-intrusive diagnosis in modern ophthalmology could prevent and reduce blindness and many cardiovascular diseases around the world. An accurate segmentation of retinal blood vessels plays an important role in detecting and treating symptoms of both retinal abnormalities and diseases that affect the blood circulation and the brain, such as haemorrhages, vein occlusion and neo-vascularisation. However, the intensity inhomogeneity and the poor contrast of retinal images cause a significant degradation in the performance of automated blood vessel segmentation techniques. The intensity inhomogeneity of the fundus retinal image is generally attributed to the acquisition of the image under different conditions of illumination.Previous methods of blood vessel segmentation can be classified into two categories: (1) pixel-processing-based methods, and (2) tracking-based (vectorial tracking or tracing) methods.The experimental results of different retinal blood vessel segmentation methods on the STARE dataset are shown in the accompanying tables. We also compared the performance of our method on both healthy and unhealthy ocular images. The results of the experiments show that the unhealthy ocular images cause a significant degradation in the performance of automated blood vessel segmentation techniques. Similarly to the STARE dataset, the performance results on the DRIVE dataset were compared with those of Staal and Mendon\u00e7a.We have presented in this paper a new approach to blood vessel segmentation by integrating pre-processing techniques such as bias correction and distance transform with a probabilistic modelling EM segmentation method. We have evaluated our method against other retinal blood vessel segmentation methods on the STARE and DRIVE datasets. An overview of the experimental results is presented in the accompanying tables. Our method has an advantage over tracking-based methods because it applies bias correction and distance transform on retinal images to enhance vessel appearance and allows multiple branch models. Also, our method achieves better results than pixel-processing-based methods as it corrects the intensity inhomogeneities across retinal images to improve the segmentation of the blood vessels. This technique also minimises the segmentation of the optic disc boundary and the lesions in the unhealthy retinal images.Djibril Kaba is the first author and has made the largest contribution; the rest of the authors have made equal contributions."} +{"text": "One of the most important manifestations of mucopolysaccharidosis (MPS) types I, II and VI is a progressive disease of the osteoarticular system. The evaluation of disease advancement is difficult due to the complexity of symptoms. The characteristic features are progressive limitation of joint mobility and joint pain. These symptoms affect the patients\u2019 quality of life. A uniform scale has not been developed for these patients.The aim of this study was to apply experience from the evaluation of disorders in rheumatic diseases to patients with MPS.Six patients with MPS VI were evaluated: 2 with advanced disease, 2 with moderate and 2 with slowly progressing disease.
The following parameters were selected for assessment: Physician global assessment of disease activity (PGA), Patient/parent global assessment of well-being (PGE), Functional ability (CHAQ), Number of joints with limited movement (LJC) and VAS pain \u2013 visual analogue scale for pain.The evaluation results are shown in the accompanying table. The parameters used in JIA may be applied for assessment of MPS severity. With their implementation, the progression of the disease and the effect of the treatment can be assessed and compared.None declared"} +{"text": "We describe a case of traumatic obturator hip dislocation in an adult. Closed reduction was done under general anesthesia. Post-reduction radiographs showed concentric and congruent reduction of the right hip. Traction was applied for three weeks, followed by progressive mobilization and loading. Follow-up for two years after the injury showed that the patient achieved a full recovery without any evidence of hip pain or a decreased range of motion. There were no signs of osteonecrosis of the femoral head. The rise of road traffic accidents involving high-energy trauma has increased the incidence of traumatic hip dislocation. Obturator hip dislocations in adults are rare, and only a few cases have been reported in the literature. We describe an adult case of traumatic true obturator hip dislocation.A 35-year-old male patient, the victim of an automobile accident, was admitted to our emergency department two hours after the injury. He complained of severe pain in his hip and inability to move the right lower limb. On physical examination he was conscious and hemodynamically stable; the lower limb was found in extension, abduction and external rotation. There were no neurovascular deficits and no associated injuries. Radiographic examination of the pelvis revealed an obturator dislocation of the right hip. No associated fracture was seen. Follow-up for two years after the accident showed that the patient was pain free with full range of motion. There were no changes suggestive of avascular necrosis of the femoral head.Anterior dislocations of the hip are divided into two types according to the position of the femoral head: pubic or superior (type 1) and obturator or inferior (type 2). Obturator dislocations of the hip are an uncommon injury, occurring in less than 5% of all traumatic hip dislocations. Dislocation of the hip is an orthopedic emergency. Closed reduction under general anesthesia is considered the treatment of choice in traumatic obturator hip dislocations. Obturator dislocation of the hip in adults is rare. Its rarity is due to the inherent stability of the joint, its deep position in the pelvis, and the strong ligaments and bulky muscles around the articulation. Prompt diagnosis and treatment are crucial in the management of these injuries."} +{"text": "In the absence of a national newborn hearing screening program in Italy, parent associations have been working with the Italian Paediatric Federation (FIMP), the Society of Neonatology (SIN) and members of the Italian Society of Audiology and Phoniatrics (SIAF) to promote guidelines, best practice and training courses in early hearing detection intervention that incorporate sensitivity training for professionals working with families of deaf children.
The establishment of the Italian Paediatric Federation\u2019s Audiology Network is the result of an international collaboration between parents and medical professionals designed to promote an effective model in developing Early Hearing Detection Intervention Programs (EHDI) that recognize the role of parents as partners in the process. Among other factors, one important component frequently underestimated in most early intervention programs, both in the USA and in other countries, involves the role of parental involvement within the EHDI process. From scrAfter five years of working region by region in Italy, the network recently held its first National Pediatric Course of Audiology where the parental voice was fundamental in providing sensitivity training and in recruiting the participation of pediatricians with patients diagnosed with hearing loss.In the absence of a Regional Protocol for Newborn Hearing Screening in Sicily, the Association Io Sento offers assistance to families of children with hearing loss. The Association collaborates with health structures and agencies to offer resources that inform, educate and support families regarding diagnosis, school services, the cochlear implant, speech habilitation by using a network created by information technology."} +{"text": "Bacteria are present in all foods, whether they are indigenous or inoculated. They can be beneficial to the quality of foods, responsible for food spoilage, or even pathogens. In solid food products, bacteria are immobilized. They thus grow as colonies entrapped within the food products or on the food surfaces. In both cases, bacteria interact with the solid matrix, sometimes facing difficulties to access the nutrients as nutrients have to diffuse from the matrix to the bacterial colonies. Bacterial development can thus be impaired in solid matrices in comparison to planktonic growth. To control the growth of bacteria in solid foods is then of major importance. In the case of pathogens, it is crucial for safety issues to predict how the bacteria present as initial contaminants will develop. In the case of inoculated bacteria, such as lactic acid bacteria, it is also crucial to control their development because they are responsible for the final quality of the food products. However, studies on the growth of bacteria have been essentially focused on growth in liquid media. Resolute techniques, which include both microcopy approaches and quantitative techniques, have now been developed to observe colonies at the microscopic scale. They allow studying the variation of growth in different contexts either in model growth culture media or in model foods.Jeanson et al.). The second review demonstrates the importance of modeling the growth of bacterial colonies in solids foods despite a lack of knowledge. The different types of models and their potential consequences on decisions are presented and discussed (Skandamis and Jeanson). The third review shows how non-invasive techniques can be used to observe bacterial growth and also quantify the bacterial metabolism in colonies at the microscopic scale . The last review paper shows how microscopy techniques are particularly valuable to increase knowledge about bacterial colonies in model foods and foods. Fluorescent microscopy allows targeting metabolites and understand the physiology of bacteria within colonies .This Research Topic starts with four review papers. 
A first comprehensive review about bacterial colonies redraws the history of the studies on colonies, synthetizes the conditions of growth in which growth in colonies differs from planktonic growth, and finally presents concepts of the interaction of bacterial colonies with the food matrix in which they grow, with cheese as an example . The following study demonstrates the accuracy of a phenotyping technology based on laser-induced speckle scatter patterns in Bacillus colonies. The authors showed that the distribution of speckle size is modified during the growth of colonies .The five following papers present original results in the field. Using an automated microscope coupled with fluorescence dyes, it was possible to demonstrate that the exposure to an oxidizing disinfectant led to different morphologies of cells depending on the strains and that dead cells were randomly distributed within the micro-colonies , growth rates and metabolism in renneted milk differed from the one observed in liquid milk . The use of fluorescence lifetime imaging was shown to be relevant to investigate the pH micro-heterogeneity in Cheddar cheeses . These results suggest that the surroundings of bacterial colonies within cheeses could differ from homogeneous growth conditions in liquid medium. The last study questioned the access of bacteria in colonies to their nutrients, through the assessment of diffusion of solutes inside colonies. The diffusion of high molecular weight molecules was assessed inside Lactococcus lactis colonies grown in a model cheese. The results led to the conclusion that diffusion of molecules inside bacterial colonies depends on the physicochemical properties of the molecules .The three following studies investigate bacterial colonies of lactic acid bacteria in their food environment in model cheeses or investigate the pH in their surrounding micro-environment in Cheddar cheeses. Physiology and growth of In conclusion, since bacteria mainly grow in colonies in foods, increase knowledge on growth and physiology of bacteria growing as colonies is now a crucial issue. This is achieved by the recent development of non-invasive techniques that allows investigating at the microscopic scale with time lapse and quantitative analyses. Knowledge about the growth and the adaptive response of bacteria to the food environment will continue to grow by addressing the remaining questions about interactions between bacterial colonies and their food environment.SJ: wrote the editorial. AT: revised and improved the editorial.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Nature Communications6: Article number: 752010.1038/ncomms8520 (2015); Published: 07032015; Updated: 01212016.The authors inadvertently omitted Sumit Jaiswal, who was originally included in the Acknowledgements section of this Article and contributed to the cloning of transgenic constructs and the balancing of transgenic flies, from the author list. This author has been added to the author list and removed from the Acknowledgements in both the PDF and HTML versions of the Article."} +{"text": "This overlooks the social and emotional salience associated with hearing the voices of trusted friends and loved ones. 
Sidtis and Kreiman write: \u201cTherefore, there are remaining questions about how familiar voices of different types\u2014family members, friends, colleagues, romantic partners\u2014engage the brain during the perception of vocal signals. A number of existing studies have shown evidence for heightened release of oxytocin\u2014a hormone associated with parental and interpersonal bonding\u2014when participants hear the voice of a loved one (Seltzer et al., Recent neurobiological models of speech production have adopted a forward models approach, in which the brain aims to reduce the error between the predicted and actual sensory consequences of a spoken utterance (Guenther, Phonetic convergence describes the phenomenon of interlocutors aligning their acoustic-phonetic pronunciation of speech over a period of spoken interaction, often outside of their conscious awareness (Krauss and Pardo, An improved understanding of how flexible control of the voice affords the attainment of social goals demands investigation of how the talker's intentions are expressed in speech, detected by the listener and used to elicit or guide further social behaviors between interlocutors. Pardo writes: Studies of vocal identity perception should make more regular and selective use of familiar voices, in order to interrogate the interaction of speech/voice perception systems with other response networks relevant to social interactions. It is important to consider that there are different types of familiar person, for whom the perceptual response may systematically differ. Alternatively, Sugiura suggestsStudies of speech production mechanisms should consider the intended recipient of the spoken message and their relationship to the talker. Here, neuroimaging techniques may offer a means of investigating the interaction of speech perception and production systems with affective, reward and motivational responses, in both the presence and absence of measurable behavioral changes in the phonetic realization of speech.The advent of improved methodological approaches to brain imaging during speech production (e.g., Xu et al., The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "There are now a number of studies which have analyzed membrane tethers in tissues and organisms which are providing new insights into the role of this class of membrane protein at the physiological level. Here we review recent advances in the understanding of the function of membrane tethers from knock outs (or knock downs) in whole organisms and from mutations in tethers associated with disease.Membrane tethers have been identified throughout different compartments of the endomembrane system. It is now well established that a number of membrane tethers mediate docking of membrane carriers in anterograde and retrograde transport and in regulating the organization of membrane compartments. Much of our information on membrane tethers have been obtained from the analysis of individual membrane tethers in cultured cells. In the future it will be important to better appreciate the network of interactions mediated by tethers and the potential co-ordination of their collective functions Membrane trafficking is a dynamic process which involves the generation of transport vesicles loaded with cargo and their delivery and fusion to their designated compartments. 
Vesicle docking represents a key step in this process whereby transport vesicles or carriers are delivered with high fidelity to target membranes. Tethering factors play a central role in the docking process as they mediate the bridging of the vesicle and target membranes. Loss of membrane tethers impacts not only on membrane transport but also in many cases on the organization of intracellular compartments suggesting additional roles for tethers in organelle biogenesis.The majority of membrane tethers are recruited from the cytoplasm by Rab and Arl small G proteins to a defined intracellular location such as the Golgi apparatus or the early endosome. The recruitment of these peripheral membrane proteins provides a dynamic mechanism to establish specialized protein complexes within defined membrane subdomains. Membrane tethers interact with Rabs and SNARES (soluble NSF attachment protein receptors) for co-ordinating docking and fusion. SNARE proteins are crucial for vesicle fusion following the docking event. Membrane docking mediated by tethers provides the first level of specificity as well as promoting the shorter range interaction between SNARE molecules on opposing membranes which then promotes membrane fusion.in vivo studies would not have been predicted from the knowledge of the function of membrane tethers in cultured cells, illustrating the potential limitations of the use of cell lines. Therefore, we think it timely to highlight the recent in vivo studies exploring the roles of membrane tethers in whole tissues and organisms which is the main focus of this review.Membrane tethers interact not only with Rabs and SNAREs but also with other effectors, such as cytoskeletal components, which suggests membrane tethers have functions in addition to the regulation of docking of membrane transport carriers . Coiled-coil tethers are typically hydrophilic homodimeric proteins containing extensive regions of coiled-coil domains; typically there are discontinuities in the coiled-coil domains which are proposed to provide flexible joints in the molecules to allow the tether to mediate docking of bound transport vesicles onto the target membrane. MTCs are a diverse family of proteins consisting of 3 to 10 subunits with sizes up to 800 kDa thus allowing microtubule nucleation which facilitate Golgi ribbon formation mutagenesis can result in a loss of function of a gene by the insertion of a nonsense mutation and the production of a truncated protein. Disease causing mutations may be missense or nonsense mutations, resulting in a truncated protein or mutations in the coding region which perturb the binding to one (or more) binding partners. If the tethers are involved in a range of biological functions then the resulting phenotypes of different genetic alterations will likely reflect the consequence of the interactions of the tether, not only with the membrane compartment where the tether is normally localized, but also with specific binding partners. Indeed, the use of different genetic approaches for a given tether is likely to provide complimentary information in unraveling the functions of these molecules.A number of genetic approaches have been used to genetically alter membrane tether genes and these experiments have resulted in a variety of developmental and physiological phenotypes in a range of organisms which have been studied. 
As membrane tethers have a suite of binding partners, any phenotype observed in whole organisms needs to be interpreted within the context of the consequence of the genetic alteration on the biochemistry of the membrane tether and its interactions with binding partners. Figure Caenorhabditis elegans and Drosophila melanogaster and zebrafish as well as disease causing mutations in humans. Figure The following describes the findings for genetic manipulation of individual membrane tethers of both the secretory pathway and endocytic pathways in mice, cis-Golgi and knockdown in cultured cells has been shown to result in disruption of the normal Golgi structure gene develop osteochondrodysplasia, cleft palate, and system edema. These rats are born with an abnormal skeletal system localized to the Golgi apparatus that is involved in mediating retrograde Golgi transport gene are known to also cause SMA and it has recently been shown that the function of the SMN protein is linked to the Golgi network is a coiled-coil TGN-golgin (Mori and Kato, C. elegans. However, knockout of Vps51 in C. elegans showed lysosomal defects (Luo et al., GARP is a heterotetrameric complex comprised of 4 subunits\u2014Vps51, Vps52, Vps53, and Vps54 (Conibear and Stevens, Drosophila, knockout of HOPS or CORVET subunits leads to either embryonic lethality (Messler et al., HOPs and CORVET are MTCs localized to the endo-lysosomal system in cells where they act sequentially and coordinate tethering and fusion events at the early/late endosomes and the lysosome (Solinger and Spang, Drosophila are associated with growth and developmental defects. An early study from a gene trap screen in mouse embryonic stem cells identified Sec8 as required for mesoderm induction in embryos (Friedrich et al., Drosophila resulted in early postembryonic lethality and defined a role for the exocyst in endocrine secretion (Andrews et al., Drosophila lead to early embryonic lethality and showed normal protein trafficking along the axons but reduced protein cargo delivery into the plasma membrane (Murthy et al., Drosophila to be required for branching morphogenesis of the tracheal system (Jones et al., in vivo studies have demonstrated that the exocyst MTC is required for a range of functions associated with different tissues and organs.Exocyst is an octameric complex consisting of Sec3, Sec5, Sec6, Sec8, Sec10, Sec15, Exo70, and Exo84 which has been proposed to mediate tethering of transport carriers derived from the recycling endosomes and the Golgi for fusion with the plasma membrane (Grote et al., C. elegans and Drosophila and the identification of mutations in membrane tethers associated with disease in patients, have revealed a wide range of biological functions for membrane tethers in both the secretory and endocytic transport pathways. Defects in embryogenesis, tissue development, neural networks, and a range of tissue-specific disorders have been identified, particularly muscular-skeletal disease/dysfunctions. These findings demonstrate that the individual tethers are non-redundant factors essential for fundamental cell processes. Given the range of phenotypes associated with these genetic studies, and the differences in susceptibility of different cell types to knocking out or silencing individual tethers, it is likely that many membrane tethers have functions that extend beyond acting as a docking factor for vesicle fusion. 
Given their multiple binding partners, tethers are likely to co-ordinate a network of interactions which could regulate the establishment of membrane subdomains and the integrity of organelles.Genetic studies of membrane tethers in mice, Drosophila. More sophisticated approaches of conditional and inducible knockout systems would be particularly useful to explore the functions of tethers in individual tissues and organs in the adult, as illustrated by the recent studies of the exocyst in ureter function in the mouse (Fogelgren et al., Many mutations of tethering genes are embryonic lethal in mouse and in vivo now requires exploration of their binding partners in specific cell types. There may be binding partners of membrane tethers which are cell type specific and which are important for cell type specific functions. The application of a molecular systems biology approach to compare specialized cells from mutant and wild-type organisms should provide a deeper molecular understanding of the precise roles of membrane tethers in vivo. In particular, with the increasing awareness of the cross-talk that exists between different molecular pathways in the cell, it will be of interest to understand how the network of interactions mediated by tethers co-ordinate membrane trafficking and membrane flux in different specialized cell types. It is likely that this information will reveal fundamental insights into the relationship between membrane cell biology and physiological function.The molecular basis for many of the phenotypes described in this review are not clear and need to be further explored. The binding partners of membrane tethers have mostly been identified in cell lines which has provided an initial framework for defining their function; however, understanding the molecular basis of the cell and tissue-specific phenotypes of membrane tethers WHT and PG contributed to the writing of the review and the design of the figures.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "The primary objective of present investigation is to introduce the novel aspect of thermophoresis in the mixed convective peristaltic transport of viscous nanofluid. Viscous dissipation and Joule heating are also taken into account. Problem is modeled using the lubrication approach. Resulting system of equations is solved numerically. Effects of sundry parameters on the velocity, temperature, concentration of nanoparticles and heat and mass transfer rates at the wall are studied through graphs. It is noted that the concentration of nanoparticles near the boundaries is enhanced for larger thermophoresis parameter. However reverse situation is observed for an increase in the value of Brownian motion parameter. Further, the mass transfer rate at the wall significantly decreases when Brownian motion parameter is assigned higher values. Mixed convection in vertical channels is of considerable importance for the enhancement of cooling systems in engineering. This includes modern heat exchangers, nuclear reactors, solar cells and many other electronic devices. Such flows are mainly affected by buoyancy. MHD mixed convective heat transfer analysis in vertical channels is of considerable importance due to its applications in self-cooled or separately cooled liquid metal blankets, cooling systems for electronic devices, solar energy collection and chemical processes. 
Use of nanoparticles as means to enhance the heat transfer in low thermal conductivity fluids has proven to be a novel technique Peristaltic pumping is a mechanism of the fluid transport in a flexible tube by a progressive wave of contraction or expansion from a region of lower pressure to higher pressure. Peristalsis is one of the major mechanisms for fluid transport in physiology. It is an involuntary and key mechanism that moves food through the digestive tract, bile from the gallbladder into the duodenum, transport of blood through the artery with mild stenosis, urine from the kidneys through the ureters into the bladder and sperm through male reproductive track. Further, several engineering appliances including roller and finger pumps, hose pumps, dialysis and heart-lung machines are designed on the principle of peristalsis. Subject to such extensive applications, several investigators studied peristaltic flows under different flow configurations It is noticed that none of the above mentioned studies and others in the existing literature investigate the novel aspect of peristaltic transport with the thermophoresis boundary condition. This study aims to fill this void. Therefore this article investigates the peristaltic transport of viscous nanofluid under the influence of constant applied magnetic field. Incompressible fluid is taken in a channel with thermophoresis condition at the boundaries Consider a viscous nanofluid in two-dimensional symmetric channel of width Here We define the following dimensionless quantities in order to non-dimensionalize the problemc\u200a=\u200a2 cm/min, The assumptions of long wavelength and small Reynolds number give Here ough Eq. . the conough Eq. and 15) denotes F as the dimensionless mean flows in the laboratory and wave frames byDefining Kuznetsov and Nield Thus the dimensionless boundary conditions in the present flow ared in Eq. and The system of Eqs. \u201318 subjeThis portion of the article aims to analyze the numerical results through graphs. Plots for axial velocity, temperature and concentration are prepared and analyzed. Heat and mass transfer rates at the wall are studied through bar-charts.Nb>1. Temperature profile for variation in different parameters is studied through Impact of different parameters on the concentration of nanoparticles is examined through Effects of Hartman number, Brownian motion and thermophoresis parameters on the heat and mass transfer rates at the wall are studied through bar-charts given in In order to check the validity of the solution methodology and the obtained results, we provide the comparison of the special case of present study with the results of Ali et al. MHD mixed convective peristaltic transport of viscous nanofluid in a channel with thermophoresis at the boundaries is examined. 
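Because the equation listing in the passage above did not survive extraction, the following is only a schematic statement of the zero-nanoparticle-flux (thermophoresis) wall condition of the kind attributed to Kuznetsov and Nield; the symbols (D_B, D_T, Nb, Nt, theta, phi, y = +/-h) follow common usage in this literature and are assumed rather than taken from the paper's own numbered equations.

```latex
% Schematic only -- notation assumed, not the paper's numbering.
D_B \,\frac{\partial \phi}{\partial y} \;+\; \frac{D_T}{T_0}\,\frac{\partial T}{\partial y} \;=\; 0
\quad \text{at } y = \pm h(x,t),
\qquad\text{or, in dimensionless form,}\qquad
N_b \,\frac{\partial \phi}{\partial y} \;+\; N_t \,\frac{\partial \theta}{\partial y} \;=\; 0 .
```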
Key findings of this study are summarized below. Thermal and concentration Grashof numbers have opposite effects on the axial velocity and temperature of the nanofluid. Axial velocity decreases with an increase in the value of the Brownian motion parameter, whereas the opposite is reported for the thermophoresis parameter. Temperature of the nanofluid decreases with an increase in the strength of the applied magnetic field and the Brownian motion of nanoparticles. An increase in the strength of the applied magnetic field results in an increase in nanoparticle concentration near the channel walls. The Brownian motion and thermophoresis parameters have large but opposite effects on the concentration of nanoparticles. A significant decrease in the mass transfer rate at the wall is noted when the Brownian motion parameter is assigned higher values."} +{"text": "The adjustable microfluidic devices that have been developed for hydrodynamic-based fractionation of beads and cells are important for fast performance tunability through the interaction of the mechanical properties of particles in fluid flow with mechanically flexible microstructures. In this review, the research works reported on fabrication and testing of tunable elastomeric microfluidic devices for applications such as separation, filtration, isolation, and trapping of single or bulk microbeads or cells are discussed.
Such microfluidic systems for rapid performance alteration are classified in two groups of bulk deformation of microdevices using external mechanical forces, and local deformation of microstructures using flexible membrane by pneumatic pressure. The main advantage of membrane-based tunable systems has been addressed to be the high capability of integration with other microdevice components. The stretchable devices based on bulk deformation of microstructures have in common advantage of simplicity in design and fabrication process. Effective fractionation of beads and cells in microfluidic devices is essential for applications such as lab-on-chip for pharmaceutical and biological studies ,2,3. FunThe active techniques of fractionation that are reported extensively for microfluidic applications are normally based on discrimination between different parameters of cells or beads due to the physics of motion in fields of acoustic , opticalThe methods of hydrodynamic-based passively differentiate between the label-free microparticles only by using their intrinsic properties of size, form, deformability, stiffness, and viscoelastic behavior ,25,26 inThe main feature of mechanically tunable separation systems is deformation of microstructures of a device under mechanical loads such as external forces of compress and stretch or internal pneumatic pressure for the purpose of influencing the interaction between microstructures and fluid flow of microbeads or cells. Until now, there are just a few microdevices that have been developed by implementing this concept for modification of selectivity of cells and beads. Meanwhile, the technique has been reported for similar microfluidic-based applications such as mechanically tunable optofluidic devices , mechaniIn this review, the focus is on a group of microfluidic devices that effectively have adjustability performance by using mechanical actuators for hydrodynamic-based fractionation of cells and beads. The small modulus of elasticity of elastomeric materials in addition to appropriate design of microstructures in hydrodynamic-based separation of beads and cells allow such devices to have a tunable selectivity . The motThe microfluidic devices in present review are classified according to flexibility in two groups of structural-based and membrane-based. The reviewed works, such as the tunable separation using pillar-based microfluidic ,40,41, tThe present paper is organized as follows. The effect of substrate type and loading techniques on deformation of microstructures in a microfluidic device is briefly reviewed in The forced deformation of microstructures in elastomeric microfluidic devices is an easy way of changing the width and height of microchannels in existing systems without repeating the costly fabrication process of a new design. Physically, when the microchannel dimensions are distorted under stress, the parameters of fluid flow such as pressure and velocity components that are important components of momentums of beads and cells are altered. The direction, frequency, and total amount of deformation of microstructures determine the amount of adaptation occurs to interaction between microbeads or cells with the fluid flow and the walls of modified microchannels . The intAll materials utilized in fabrication of microfluidic devices exhibit a level of elasticity under different loading intensities. 
Those with a large modulus of elasticity, such as silicon (E = 169 GPa) [48,49], deform far less under comparable loads than elastomeric substrates. The size-based hydrodynamic manipulation of beads and cells in microfluidic devices has been achieved by adjustment of a critical geometry, such as the width of narrow gaps in microstructures or the cross-section profile of the microchannel through which the stream of microparticles has to pass. High-resolution mechanical tuning of the manipulation in such systems has been conducted by elastic modification of the critical dimensions of the microstructure, defined on the basis of the characteristic size of the microparticles. As shown schematically in the corresponding figure, a gap d1 is stretched to d2 by the force F. The force is normal to the channel direction and uniformly distributed on the edges of the device. This simple design has been implemented in tunable pillar-based systems with uniform micron-scale spacing for size-based microfiltration on elastomeric substrates such as PUMA. The required force F is applied on the microchip by an actuator clamped to the microdevice, stretching its length from L to L + \u0394L. To achieve micron-precision tunability, actuating systems with high resolution have been used; the required external force in the device plane has been supplied by off-chip actuators that are either manually or automatically controlled. In this way a critical dimension can be widened, for instance from 1d to 2d, while the overall thickness of the elastomeric device is reduced by \u0394T. The concept of adjustable devices has also been exploited for elastomeric tuning of hydrophoretic separation. The applied pneumatic pressure, P, determines the effective deflection of a membrane of thickness t. Design and sizing of the microfluidic device are normally performed such that any unwanted deformation of the microstructures is kept to a minimum while the displacement of the membrane is maximized. Deflection of a thin membrane by pneumatic pressure is a well-known technique that has been implemented by several researchers for controlling the pneumatic and microfluidic zones in a microdevice for separation and isolation [46]. Several design models have been introduced based on the manipulation of microbeads or cells in interaction with the fluid flow and the internal microstructures of the device [61]. Physical manipulation of cells and beads by mechanical tuning of elastomeric microfluidic devices via bulk deformation of their microstructures has been implemented in different ways. As depicted schematically in the corresponding figure, elastomeric microfluidic devices with pillar-based microstructures have been demonstrated as suitable candidates for tunable separation of microbeads and cells. A microstructure layer consisting of an array of pillars is mechanically stretched to increase the inter-pillar spacings with micron or even nanometre resolution. The tunability of the device is limited by the maximum force of the actuator and the stretchability of the substrate. The tuning is made before microfluidic injection and is reversible owing to the elastomeric material. The demonstrated advantage of such mechanisms has been their simplicity and low-cost fabrication process. The mechanically tunable device based on the DLD technique, shown schematically in the corresponding figure, adjusts the array parameters \u03b8 and d with a resolution as low as 10 nm. Another tunable system is a work of the authors of the present review on using linear arrays of pillars with square cross-section for microfiltration of blood cells and beads. 
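The tunable DLD device mentioned above changes the post gap and the array tilt to shift the size cut-off. A minimal sketch of how such a change maps to the critical particle diameter, using the widely cited empirical Davis correlation D_c = 1.4 * G * epsilon^0.48 (a general DLD design rule, not a formula taken from this review); the gap and tilt values below are hypothetical.

import math

def dld_critical_diameter(gap_um: float, row_shift_fraction: float) -> float:
    """Empirical critical diameter (um) for a DLD array (Davis correlation).

    gap_um: post-to-post gap G; row_shift_fraction: epsilon = tan(theta) = 1/N.
    Particles larger than D_c are displaced ('bumped'); smaller ones zigzag.
    """
    return 1.4 * gap_um * row_shift_fraction ** 0.48

# Example: how stretching the array changes selectivity.
# Assumed (hypothetical) values: a 10 um gap stretched by 20 % at a fixed tilt of 1/20.
for gap in (10.0, 12.0):
    dc = dld_critical_diameter(gap, 1.0 / 20.0)
    print(f"gap = {gap:4.1f} um -> critical diameter ~ {dc:.2f} um")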
The microfluidic devices consisted of an array of cup-shaped elements, as shown in the corresponding figure. The hydrophoretic continuous separation of bulk microparticles has been made tunable by Park's group using elastic deformation of the microchannel cross-section. Mechanical tuning of microfluidic devices for the manipulation of beads and cells by the technique of local deformation of thin membranes, under pulsed pneumatic pressures or controlled by valves, has been exploited by several researchers. A number of such novel devices have been introduced with a PDMS membrane as the core element or for actuation of a microstructure for handling and trapping. A microdevice has been reported by Huang et al. for microfiltration using tunable blockage of a microchannel cross-section. Another filtration mechanism, consisting of resettable traps for cells and beads with selectivity based on size and deformability, has been introduced recently by Ma's group. Additionally, a microfluidic device has been introduced for enhancement of separation efficiency in a straight microchannel integrated with a membrane; air bubble plugs are incorporated for the formation of a wide and uniform slit, and the membrane, with a thickness of 50 \u00b5m, is actuated by pneumatic pressures of 50\u201380 kPa in a three-layer device. A microfluidic system for size-based separation of microbeads and blood cells has been introduced by Lee's group, which works using membrane deflection as the actuating element for a floating block structure. Recently, a microfluidic device has been developed for trapping a controllable number of cells by Liu et al. using an array of dynamic U-shaped microstructures. So far, several elastomeric microfluidic devices have been designed and fabricated with mechanical tunability for size-based fractionation of cells and beads, with applications in biological studies and other research fields. Tunability of the described systems enables the performance and efficiency of operation at the microscale to be adjusted and optimized for the working conditions of each specific application. The microfluidic systems that have been adapted to be tunable using existing methods such as DLD and pillar-based filtration, together with the two methods of stretchable and dynamic cup-shaped structures that work on modified forms of fixed U-shaped posts for cell trapping, demonstrate the importance of novel ideas in the development of alternative devices with similar applications and general performance but with more flexibility and compatibility with other microfluidic components. The membrane-based tunable devices are designed with a clog-free mechanism, which is very important for continuous operation and for integration with other microfluidic components such as micropumps and valves in more complex systems and lab-on-chip devices. These types of microfluidic systems have more creative designs for tunability compared with other devices, and the materials and processes necessary for their development are typical of microfluidic technology. All reviewed tunable hydrodynamic-based microfluidic devices are potentially applicable for separation of solid microbeads and deformable biological cells. 
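For first-order sizing of the pneumatically actuated membranes discussed above (for example a 50 um membrane driven at 50-80 kPa), a classical small-deflection, clamped circular-plate estimate is often used. The sketch below uses hypothetical radius and PDMS material values; membranes at such pressures typically exceed the small-deflection regime, so the result only indicates trends, not the devices' actual displacements.

def plate_center_deflection(pressure_pa, radius_m, thickness_m, youngs_pa, poisson=0.49):
    """Small-deflection center displacement of a clamped circular plate: w0 = P a^4 / (64 D)."""
    flexural_rigidity = youngs_pa * thickness_m ** 3 / (12.0 * (1.0 - poisson ** 2))
    return pressure_pa * radius_m ** 4 / (64.0 * flexural_rigidity)

# Hypothetical values: 500 um radius PDMS membrane (E ~ 1 MPa), 50 um thick, 50 kPa drive.
w0 = plate_center_deflection(50e3, 500e-6, 50e-6, 1e6)
print(f"estimated center deflection ~ {w0 * 1e6:.0f} um (well beyond small-deflection validity)")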
This review article provides a good reference for investigation of the microfluidic platforms with purely mechanical tunability, their corresponding limitations, and the possibilities for further development and enhancement."} +{"text": "The cochlear nucleus (CN) is the first processing center of auditory signals from the cochlea. Several distinct neuronal circuits have been identified that form different parallel representations of auditory information from auditory nerve fibers (ANFs). One circuit of interest is the stellate cell microcircuit of the ventral cochlear nucleus (VCN), which is centered on two populations of stellate cells, known as T and D stellate cells. A neural network model of the stellate microcircuit was created using leaky integrate-and-fire neurons for each cell in the network. The key mechanisms incorporated to achieve SNR and dynamic range improvement were wide-band lateral inhibition and selective processing of excitatory and inhibitory inputs. To quantify the enhancement achieved by the network, two signal-to-noise metrics were used. SNR metric 1 measures the level of noise reduction and is calculated as the ratio of the average output rate of cells within 1.5 critical bands of the known stimulus frequency to the average background output rate. SNR metric 2 measures the level of peak enhancement and is calculated as the ratio of the maximum output rate of the cells within 1.5 critical bands of the known stimulus frequency to the maximum background output rate. The metrics were calculated for the input signal, the input ANF population, and the TS cell population. Each metric was calculated for the first three formants of the vowel /o:/ and for a range of input SNR values. SNR metric 1 for the TS cell population was significantly higher than for the ANF population for the majority of input SNR levels. Similarly, SNR metric 2 was also higher for the TS cell population than for the ANF population. This demonstrates that the TS cells improved the ability to decode the spectral information in noise, and the model illustrates the mechanisms by which this is achieved."} +{"text": "The homophilic potential emerges as an important biological principle to boost the potency of immunoglobulins. Since homophilic antibodies in human and mouse sera exist prior to environmental exposure, they are part of the natural antibody repertoire. Nevertheless, homophilic properties are also identified in the induced antibody repertoire. The use of homophilicity of antibodies in adaptive immunity signifies an archetypic antibody structure. The unique feature of homophilicity in the antibody repertoire also highlights an important mechanism to boost antibody potency to protect against infection and atherosclerosis, as well as to treat cancer patients. Working with homophilic-converted Trastuzumab (Herceptin), we discovered that the dose\u2013response in inducing apoptosis in a Her2/neu-expressing human cell line was not linear but was bell shaped. This non-linear dose dependence of the potency of homophilic Herceptin could be due to the inherent mechanics of self-binding that was observed in 1986 [55]. We believe that we are only at the beginning of discovering networks of shared idiotypes. By using polyclonal anti-idiotype antibodies, Urbain and colleagues have observed shared idiotypes expressed on antibodies with different antigen specificity in rabbits and mice [56]. 
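A minimal sketch of the two SNR metrics defined above for the stellate-cell network model, assuming the population responses are available as per-neuron firing rates with known characteristic frequencies; the helper names and the band-selection width are illustrative and not taken from the original model code.

import numpy as np

def snr_metrics(rates, cfs_hz, stim_hz, half_band_hz):
    """SNR metric 1 (mean-rate ratio) and metric 2 (peak-rate ratio).

    rates: firing rate per neuron (spikes/s); cfs_hz: characteristic frequency per neuron.
    Neurons within +/- half_band_hz of the stimulus (about 1.5 critical bands) are 'signal';
    the remaining neurons define the background.
    """
    rates = np.asarray(rates, dtype=float)
    cfs_hz = np.asarray(cfs_hz, dtype=float)
    in_band = np.abs(cfs_hz - stim_hz) <= half_band_hz
    signal, background = rates[in_band], rates[~in_band]
    metric1 = signal.mean() / background.mean()   # noise reduction
    metric2 = signal.max() / background.max()     # peak enhancement
    return metric1, metric2

# Hypothetical example: 100 neurons with CFs from 200 Hz to 4 kHz, stimulus at 500 Hz.
cfs = np.geomspace(200.0, 4000.0, 100)
rates = 20.0 + 80.0 * np.exp(-((cfs - 500.0) / 150.0) ** 2)  # toy rate profile
print(snr_metrics(rates, cfs, stim_hz=500.0, half_band_hz=150.0))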
The data on homophilic natural antibodies point to a general strategy of enhancing potency not only of antibodies but also of other biological systems. The homophilic antibody enhancement and its therapeutic potential have already been demonstrated [46]. The discussion of the unique feature of homophilicity as a part of the natural and induced antibody repertoires highlights an important mechanism for boosting antibody potency. HK surveyed the literature and wrote the article. JB and SK provided suggestions. All authors approved the final version of this article for publication and accepted the responsibility for the integrity of the work. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "The National Health Security Scheme was established 10 years ago with a document-based finance model for communicating the transfer of budget from the payer to providers. This model was found to be inefficient, with delayed transfers (17 billion Baht delayed) and uncertainty about the amounts transferred (33 billion Baht in bad debt). A new ageing account model was developed 3 years ago to improve the efficiency of the financial process. It is of interest whether the change achieved this efficiency. This research also aimed to compare the efficiency of the financial processes of all three government health insurance funds, managed by the National Health Security Office (NHSO), the Social Security Office (SSO, managing insurance for workers in the private sector) and the Comptroller General Department (CGD, managing insurance for civil servants and their dependents). This research employed two rounds of questionnaire surveys of heads of finance departments in hospitals in 2010 and 2012. In-depth interviews were conducted with ten of the responding Ministry of Public Health (MOPH) hospitals that judged the new model to achieve the most efficient financial process and another ten that judged it the least efficient. Eight hospitals outside the MOPH were added to obtain unbiased views comparing the financial processes of the three government health insurance funds. The interviews were undertaken from November 2012 to February 2013. Comparing results from the two surveys, providers felt a marginal increase in the efficiency of the financial process under the ageing account model in terms of efficient transfer, realisation of outstanding amounts, speed, and ease of use of the transferred funds. The new model was inferior to the old model in terms of the frequency of sending financial reports. Qualitative data confirmed that efficiency was interpreted as the quickness and correctness of fund transfer. Responders reporting better efficiency explained that the NHSO fund transfer had become quicker and simpler to use. However, responders reporting the least efficiency explained that the CGD transferred funds more quickly than the NHSO and used web-based financial reports for quickness and transparency. The accounting items of the NHSO were the most difficult to understand. With the perceived marginal increase in efficiency of the new account model, the NHSO should improve the timeliness and comprehensiveness of the account model, benchmarking against the other two government health insurance funds."} +{"text": "Wireless Sensor Networks monitor and control the physical world via a large number of small, low-priced sensor nodes. Existing methods for Wireless Sensor Networks (WSNs) communicate sensed data through continuous data collection, resulting in higher delay and energy consumption. 
To overcome the routing issue and reduce the energy drain rate, the Bayes Node Energy and Polynomial Distribution (BNEPD) technique is introduced with energy-aware routing in the wireless sensor network. The Bayes Node Energy Distribution initially distributes the sensor nodes that detect an object of a similar event into specific regions with the application of Bayes rule. The object detection of similar events is accomplished based on the Bayes probabilities and is sent to the sink node, resulting in minimized energy consumption. Next, the Polynomial Regression Function is applied so that the target objects of similar events observed by different sensors are combined; the combined data are based on the minimum and maximum values of the object events and are transferred to the sink node. Finally, the Poly Distribute algorithm effectively distributes the sensor nodes. The energy-efficient routing path for each sensor node is created by data aggregation at the sink based on the polynomial regression function, which reduces the energy drain rate with minimum communication overhead. Experimental performance is evaluated using the Dodgers Loop Sensor Data Set from the UCI repository. Simulation results show that the proposed distribution algorithm significantly reduces the node energy drain rate and ensures fairness among different users while reducing the communication overhead. The major task of a wireless sensor network (WSN) is intermittent monitoring of the environment, where data or objects are sensed and then sent to the base or sink node for further processing. With the increasing demand in the network, sensor nodes should be designed for high-density operation with minimum transmission bandwidth. Therefore, the objective of minimizing network traffic has become an important issue. Data Routing for In-Network Aggregation (DRINA) has been proposed for WSNs, and an integrated Wireless Hospital Sensor Network has also been reported. A new construction approach for a weighted topology of wireless sensor networks (WSNs) has been designed, and a scalable and energy-efficient mechanism has been proposed to improve performance. In wireless sensor networks, sensor nodes compete with each other in order to access the shared transmission medium. With increasing traffic in the network, proper routing protocols have to be designed owing to the higher level of interference. An energy-efficient method using an energy-balanced routing protocol based on the forward-aware factor (FAF) technique has been introduced, and a Maximum Weighted Matching (MWM) scheduling algorithm has been designed to improve scheduling performance. Among the various measurements, graph invariants have been utilized for the construction of entropy-based measures, such as the Shannon entropy, to describe the structure of complex networks. This paper makes the following contributions: First, we present a Bayes Node Energy Distribution (BNED) model, which provides the sink node with an accurate subset of nodes by detecting target objects of similar events in an energy-efficient manner, reducing the energy consumption of the sensor nodes. Second, we show that the major component of our BNEPD technique uses the Polynomial Regression Function (PRF) to reduce the node energy drain rate using polynomial coefficients. Therefore, the proposed BNEPD technique effectively integrates multiple functionalities such as minimizing energy consumption and node energy drain rate with low time complexity. 
The performance of BNEPD technique compared with other possible data aggregations techniques is also investigated.The rest of this paper is organized as follows: Section 2 introduces several routing mechanisms adopted in wireless sensor network. The core component of our technique called BNEPD technique and the design goals of it is included in Section 3. Section 4 includes the experimental setup with parametric definitions. Section 5 discusses in detail the results using table and graph form. Section 6 finally concludes with the concluding remarks.Wireless Sensor Networks are equipped with different communication and computing capabilities and hence applicable to different operating scenarios. Hence routing protocol for relay node placement is one of the most important to be performed in WSNs involving diversified environments. In , approxiDefending collaborative false data injection attacks in wireless sensor networks designed in combine One of the most important functionality considered in wireless sensor network is the link scheduling. This is because of the shared and wireless nature of the network that changes drastically over time. Wireless scheduling algorithms used strActive node and dynamic time slot allocation was introduced in with theWireless sensor network has been introduced for several reasons including securing the link to physical humanity, excellent robustness and capability to work; hence is employed for wide range of applications. In various WSN applications, motion strength has a common view of monitoring the field. Most of the existing researches on movement monitoring focused on dealing with single object, where object numeration was not required. Sensor nodes are used for sensing the node emitted by an object such as, sound, vibration, etc., to detect the appearance of an object, and then helps to monitor and track it.The proposed technique is employed to distribute the sensor nodes and identifies energy efficient routing path of similar event using Bayes Node Energy Distribution and Polynomial Regression Function in Wireless Sensor Network. The distribution of sensor nodes is performed through Bayes distribution that detects an object of similar event. The energy efficient routing path for each sensor nodes are generated by data aggregation at the sink based on polynomial regression function. We use a novel distribution algorithm to adaptively represent nodes stop sensing and transmitting data to sink for a specific time period. The framework of the Bayes Node Energy Polynomial Distribution technique is shown in The framework of BNEPD technique is divided into construction of Bayes Node Energy Distribution, applying Polynomial Regression Function for data aggregation and application of Poly Distribute algorithm. The framework starts with the construction of Bayes Node Energy Distribution that applies Bayes rule through which the sensor nodes senses the target object of similar events within its frequency to reduce the consumption of energy.Next, the Polynomial Regression Function is applied to the target object of similar events for multiple sensors and are aggregated on the basis of the minimum and maximum object of events, sent to the sink node. Finally, the Poly Distribute algorithm distribute the sensor nodes in an efficient manner that splits the nodes to be in sleep state and nodes for object detection of similar events according to the number of nodes in the network and network size. 
The elaborate description of the design of the BNEPD technique is given in the forthcoming sections. In this section, a decentralized approach using Bayes principles is presented in order to increase the performance of the network. Instead of compressing the data collected by the sensor nodes in the Wireless Sensor Network, the BNEPD technique selects a subset of the nodes in the network that detect an object of a similar event and transmits the data to the sink node. Whenever an additional sensor node is included in a conventional network, it increases the accuracy of the sensed field only by a small portion, owing to the highly correlated nature of the sensed data, but it compromises the energy consumption level. On the other hand, the BNEPD technique distributes the sensor nodes that detect an object of a similar event into specific regions using the Bayes principle, which identifies the minimal energy for the sensed data being routed. Let us consider the problem of routing using Bayes Node Energy Distribution and the Polynomial Regression Function in a Wireless Sensor Network in terms of a graph. If the network is represented as a graph G = (V, E), then V denotes the set of all sensor nodes and E denotes the set of all communication links connecting the sensor nodes in the WSN. With this principle, the problem of routing is defined as the process of detection of an object of a similar event initiated at the source node and finally ending at all the destination nodes, based on the network frequency. This routing path for the detection of similar events takes the form of a spanning tree, Tree, which includes the source sensor nodes and all target object nodes for which it is specifically constructed. The Bayes principle in the BNEPD technique assigns probabilities to each sensor node with the aid of probability models of the corresponding frequency for which it is specifically made. In order to accomplish this, Bayes\u2019 principle is used as a hypothesis for adjusting the degrees of belief given the observed evidence. The Bayes principle in the BNEPD technique obtains the probabilities of object detection of similar events. Then, the conditional probability concerning similar events is evaluated. The evaluated similar events are then sent to the sink node for further processing. The proposed Bayes principle for node energy distribution is formulated accordingly, where SN1 and TO2 are the sensor node and target object node, respectively. Let Disti denote the distance between the source and the target object nodes in the WSN; the distance between SN1 and TO2 is measured on this basis. Based on the distance measurement, the sensor nodes send the detected objects of similar events to the sink node. Upon successful completion of the identification of objects of similar events by the sensor nodes and their transmission to the sink node, the task of the sink node for data aggregation comes into action. The aggregation of data in the BNEPD technique involves the collection and aggregation of similar events. With the objective of reducing the energy drain rate, data aggregation using the polynomial regression function is used in the proposed work to save the limited resources, through which the network lifetime is also increased and high-energy nodes are maintained in the routing path. Data Aggregation using the Polynomial Regression Function (DAPRF) in the BNEPD technique represents the sensed objects in the form of polynomial functions. The idea behind the BNEPD technique is that DAPRF performs aggregation of data with the aid of polynomial coefficients that represent the sensed object. Many sensor nodes send their objects to the sink node. 
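The displayed equations for the Bayes-based detection and the node-to-object distance did not survive extraction. The following sketch shows one plausible reading, with a textbook Bayes update for the event posterior and a Euclidean distance between node and target; the function names and exact forms are assumptions rather than the paper's own formulas.

import math

def event_posterior(prior, p_reading_given_event, p_reading_given_no_event):
    """Textbook Bayes update: P(event | reading) from a node's local likelihoods."""
    evidence = p_reading_given_event * prior + p_reading_given_no_event * (1.0 - prior)
    return p_reading_given_event * prior / evidence

def euclidean_distance(node_xy, target_xy):
    """Distance between a sensor node and a target object (2-D coordinates)."""
    return math.hypot(node_xy[0] - target_xy[0], node_xy[1] - target_xy[1])

# Hypothetical node: prior of 0.1 for the event, a reading 8x more likely under the event.
post = event_posterior(0.1, 0.8, 0.1)
dist = euclidean_distance((12.0, 3.0), (10.0, 7.0))
print(f"posterior ~ {post:.2f}, distance to target ~ {dist:.2f} m")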
The sink node, on receiving the objects from multiple sensor nodes, then uses the Polynomial Regression Function to obtain the regression coefficient values for each object. As a result, separate coefficient values are obtained for pressure, temperature and so on. The Polynomial Regression Function applied in the BNEPD technique is formulated accordingly, and the minimum and maximum values, a and b, of the objects of similar events are calculated. The five regression values are then sent to the sink node, as well as the frequency of the objects of similar events for which the polynomial coefficients are evaluated. The sink node now has the following two sets of data on receiving the coefficients from its target objects of similar events: the i-th and (i + 1)-th values include the sum of the polynomial coefficients of the first object, the coordinate range of the first object of a similar event, the sum of the polynomial coefficients of the second object and, finally, the coordinate range of the second object of a similar event, respectively. The advantage of the BNEPD technique is that the target object sends the polynomial coefficients obtained through the polynomial regression function instead of sending the entire objects of similar events to the higher level. When a new target object is detected, an updated regression polynomial function is evaluated at the higher level using the polynomial coefficients sent by the objects of similar events and the values obtained from the source object. As the BNEPD technique only sends the polynomial coefficient values, energy savings take place and therefore the energy drain rate is reduced, improving the performance of the entire network. Finally, a novel distribution algorithm called Poly Distribute is designed to reduce the communication overhead. The Poly Distribute algorithm is distributed in such a way that it helps improve data aggregation in BNEPD by designing the network so that certain sensor nodes stop sensing and transmitting data to the sink node for a time period. Finally, using the Poly Distribute algorithm in the BNEPD technique, the sink node uses the Bayes principle on the objects which it has received for the entire network. The algorithmic description of Poly Distribute applied in the BNEPD technique is given below.
Construction of Poly Distribute algorithm
Step 1: Determine the number of nodes to sense the target object of a similar event
Step 2: Determine the number of nodes to be in the sleep state
Step 3: Begin
Step 4: Repeat
Step 5: Detection of target objects of similar events by sensor nodes
Step 5.1: Apply Bayes principle
Step 5.2: Send objects of similar events to sink
Step 6: Perform data aggregation
Step 6.1: Apply polynomial regression function
Step 6.2: Send minimum and maximum value objects of similar events
Step 6.3: Evaluate polynomial coefficient
Step 7: Until all objects in the network are processed
Step 8: End
The above Algorithm 1 shows the construction of the Poly Distribute algorithm, which performs three important steps. The first step involved in the Poly Distribute algorithm is the selection of nodes. In this step, the number of nodes to be in the sleep state and the sensor nodes to sense the target object of a similar event are determined in a random manner. The random selection is designed on the basis of the number of nodes in the network and the remaining energy of each sensor node. The second step is to detect the target objects of similar events by the sensor nodes through the Bayes principle, based on the probability of events and the distance function to the sink node. 
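A minimal sketch of the coefficient-based aggregation described above, assuming a fixed-degree least-squares fit (numpy.polyfit); the degree, field names and payload layout are illustrative choices, not the paper's exact formulation.

import numpy as np

def summarize_readings(timestamps, values, degree=4):
    """Fit a polynomial to a node's readings and return only the compact summary
    (coefficients plus the min/max of the observed values) instead of the raw samples."""
    coeffs = np.polyfit(timestamps, values, deg=degree)
    return {"coeffs": coeffs.tolist(), "min": float(np.min(values)), "max": float(np.max(values))}

def reconstruct_at_sink(summary, timestamps):
    """Sink-side reconstruction of the sensed signal from the transmitted coefficients."""
    return np.polyval(summary["coeffs"], timestamps)

# Hypothetical temperature trace: 50 raw samples are replaced by 5 coefficients plus min/max.
t = np.linspace(0.0, 10.0, 50)
temps = 20.0 + 0.5 * t + 0.3 * np.sin(t)
payload = summarize_readings(t, temps, degree=4)
print(len(payload["coeffs"]), "coefficients instead of", len(temps), "samples")
print("max reconstruction error:", float(np.max(np.abs(reconstruct_at_sink(payload, t) - temps))))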
Finally, the sink node performs the data aggregation using the polynomial regression function, which determines the minimum and maximum value objects of similar events through the polynomial coefficients. The above process is repeated for all objects in the network. In this way, the communication overhead is reduced. In this section, we evaluate the proposed BNEPD technique and compare its performance to two other known data aggregation techniques: Data Routing for In-Network Aggregation (DRINA) for WSNs and the cell-based CBPS scheme. Simulations were carried out in an extensive manner with random numbers for 70 iterations. To illustrate the simulation results for the BNEPD technique, 350 sensor nodes were used, with the energy of each sensor node being 5 J, a network size of 1000 * 1000 m and a transmission range of 100 m. We evaluate the performance of the BNEPD technique under the following metrics: energy consumption, energy drain rate, communication overhead and time complexity. Energy consumption using the BNEPD technique is the product of the number of sensor nodes, the power (in watts) and the time (in seconds); it is measured in Joules (J). The energy drain rate for the BNEPD technique is measured using an exponential weighted moving average that combines the previous and newly calculated values; it is measured in Joules (J). The communication overhead generated using the BNEPD technique is the ratio of the difference between the actual communication time and the computed communication time to the actual communication time; it is measured in milliseconds (ms). Time complexity using the BNEPD technique is the time taken to sense the data or object of the corresponding frequency; it is measured in milliseconds (ms). In this section, simulation results are presented to evaluate the energy-aware routing in the wireless sensor network and to demonstrate the performance of the proposed distribution algorithm. In order to analyze the characteristics and functionality of the BNEPD technique, we quantitatively assessed the performance with a network size of 1000 * 1000 m, measured at 100 to 800 m/s, using the Dynamic Source Routing (DSR) protocol, by comparing the outcomes with the results achieved with the Poly Distribute algorithm. The Bayes Node Energy Polynomial Distribution (BNEPD) technique is compared against the existing Data Routing for In-Network Aggregation (DRINA) for WSNs. To support the analysis of transient performance, the mathematical evaluation for energy consumption using BNEPD, DRINA and CBPS is given below: it is the product of the number of sensor nodes, the power consumed and the time taken for each of the three methods. The energy drain rate of our BNEPD technique is presented in the corresponding figure. The mathematical evaluation for the energy drain rate using BNEPD, DRINA and CBPS is given below: it is the product of the number of sensor nodes, the old energy and the new energy drain rate, respectively. The mathematical evaluation for the communication overhead using BNEPD, DRINA and CBPS is given below: it is the difference between the actual communication time and the computed communication time. 
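A small sketch of the three evaluation metrics as they are described above (energy consumption as nodes times power times time, drain rate as an exponentially weighted moving average, and overhead as a relative difference of communication times); the smoothing factor and variable names are assumptions, since the paper's displayed formulas are not reproduced in this record.

def energy_consumption(num_nodes, power_w, time_s):
    """Total energy in joules: number of nodes x power (W) x time (s)."""
    return num_nodes * power_w * time_s

def energy_drain_rate(previous_rate, new_measurement, alpha=0.3):
    """Exponentially weighted moving average of the drain measurements (alpha assumed)."""
    return alpha * new_measurement + (1.0 - alpha) * previous_rate

def communication_overhead(actual_time_ms, computed_time_ms):
    """Relative overhead: (actual - computed) / actual."""
    return (actual_time_ms - computed_time_ms) / actual_time_ms

# Hypothetical run: 350 nodes at 0.5 W for 60 s, with one EWMA update of the drain rate.
print(energy_consumption(350, 0.5, 60.0), "J")
print(energy_drain_rate(previous_rate=1.2, new_measurement=1.5))
print(communication_overhead(120.0, 95.0))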
The communication overhead obtained using four methods are as given below.The targeting results of communication overhead using BNEPD technique with two state-of-the-art methods ,2,12] i i12] in In Mathematical evaluation for time complexity using BNEPD, DRINA and CBPS is given below. It is the time taken to perform the communication overhead for different number of sensor nodes. The time complexity using three methods are as given below.To explore the influence of time complexity on BNEPD technique, simulations are performed by applying 350 different sensors with a network size of 1000 * 1000 m in a transmission range of 100 ms depicted in We have proposed a new Bayes Node Energy Polynomial Distribution (BNEPD) technique to solve the issues related to routing and reduce energy drain rate with energy aware routing in Wireless Sensor Network. The energy aware routing in Wireless Sensor Network has been formulated as a Bayes Node Energy Distribution problem and solved through a novel probability method of object detection of similar events using novel Poly Distribute algorithms. The obtained results demonstrate better performance of objects of similar event detection and minimal energy for sensed data being routed over a number of sensor nodes. Our results also show that with the application of Bayes principle, node energy consumption is extensively reduced that effectively detects object of similar event and transmit the data to the sink node. Furthermore, the experimental part indicates that by applying polynomial regression function, the proposed BNEPD technique, converges minimized communication overhead based on the novel Poly Distribute algorithm that stops stop sensing and transmitting the data to the sink node for a time period. The performance of BNEPD technique is compared to other known data aggregation techniques. The performance of BNEPD technique was compared with different system parameters, and evaluated the performance in terms of different metrics, such as energy consumption, energy drain rate, time complexity and communication overhead. The results show that BNEPD technique offers comparatively better performance than the other techniques in all of the scenarios considered.In the forthcoming research work, the proposed energy control for distributed estimation in wireless sensor networks needs to be expanding the wide range of network that uses more number of sensor nodes. This enables data to be stored and processed by devices with more resources.S1 File(PDF)Click here for additional data file."} +{"text": "Dentoalveolar injuries are common and are caused by many factors. Dental trauma requires special consideration when a missing tooth or tooth fracture accompanies soft tissue laceration. A tooth or its fragment occasionally penetrates into soft tissue and may cause severe complications. This report presents a case of delayed diagnosis and management of a displaced tooth in the vestibule of the mouth following dentoalveolar injury. This report suggests that radiography can lead to an early diagnosis and surgical removal of an embedded tooth in the soft tissue. Dental trauma can result in a number of different injury types involving teeth and their supporting structures. Six types of luxation and seven types of tooth fracture have been described . 
The freThis paper reports a case of dentoalveolar injury in which a canine was embedded in the vestibule of the mouth and surgically removed from the soft tissue.A 46-year-old female was referred to the Department of Dentistry and Oral Surgery of Hakujikai Memorial General Hospital for clinical examination of the left maxilla with spontaneous pain. The patient had sustained an injury to the lower face 12 days earlier. She promptly consulted a neighboring emergency hospital because of laceration of the lower lip and gingiva of the maxilla. Thereafter, she was treated with suture of the lower lip under local anesthesia by a general surgeon and was instructed to put pressure on the bleeding gingiva with gauze. In addition, she lost the left maxillary lateral incisor and canine due to trauma. She could confirm the existence of one of the teeth, but the existence of the other tooth was unclear. No treatment or examination was provided for the missing teeth. She received prescriptions for antibiotics and analgesics and returned home. The bleeding easily stopped afterwards. However, because of swelling and pain of the left maxilla, the patient consulted our hospital.Her chief complaint at the time of the first medical examination was swelling and oppressive pain of the left maxilla . IntraorThe recognition and identification of an embedded tooth or its fragment are important because continuous movement and contraction of the muscles may dislocate the foreign bodies. Moreover, oral bacteria flora can infect the wound and deep tissues. Failure to remove an embedded tooth or its fragment in the soft tissue may result in persistent chronic infection, pus discharge, or disfiguring fibrosis . PreviouThis case report demonstrates the importance of an accurate patient history, physical examination, and radiographic evaluation of such a patient. When dentoalveolar injury occurs, both hard and soft tissue structures must be examined carefully for evidence of an embedded tooth."} +{"text": "Dental implants placement in the anterior mandible with flap or flapless technique is a routine procedure and is considered to be safe. However, serious life-threatening complications may occur. We report the first case of massive lingual and sublingual haematoma following postextractive implant placement in the anterior mandible with flapless technique. A 45-year-old female patient underwent placement of four immediately postextractive implants in the anterior mandible using flapless technique. During the procedure, the patient referred intense acute pain and worsening sign of airway obstruction, dysphagia, dyspnea, and speech difficulties. Bimanual compression of the mouth floor, lingual surface of the mandible, and submental skin was maintained for approximately 25 minutes in order to stop the bleeding. Computerized tomography highlighted the massive lingual and sublingual haematoma. The symptoms and signs had almost completely resolved in the next 48 hours. The prevention of these complications is mandatory with clinical and CT analyses, in order to highlight mandibular atrophy and to select carefully the correct length and angulation of bone drilling and to keep more attention to the flapless technique considering the elevation of a lingual mucoperiosteal flap to access the mandibular contour intraoperatively and to protect the sublingual soft tissues and vasculature in high risk cases. 
Placement of dental implants in the anterior mandible with flap or flapless technique is a routine procedure and is considered to be safe . HoweverA 45-year-old female patient underwent placement of four immediately postextractive implants into the sockets of four parodontally compromised mandibular incisors using flapless technique . When de submental artery, the largest branch of facial artery, is given off from this one just as that vessel quits the submandibular gland: it runs forward upon the mylohyoideus, just below the body of the mandible, and beneath the digastricus. It supplies the surrounding muscles and anastomoses with the sublingual artery and with the mylohyoid branch of the inferior alveolar artery; at the symphysis menti, the submental artery turns upward over the inferior border of the mandible and divides into a superficial and a deep branch. The superficial branch passes between the integument and quadratus labii inferioris and anastomoses with the inferior labial artery; the deep branch runs between the muscle and the bone, supplies the lip, and anastomoses with the inferior labial and mental arteries. The sublingual artery arises at the anterior margin of the hyoglossus and runs forward between the genioglossus and mylohyoideus to the sublingual gland. It supplies the salivary glands and gives branches to the mylohyoideus and neighbouring muscles and to the mucous membrane of the mouth and gums. One branch runs behind the alveolar process of the mandible in the substance of the gum to anastomose with a similar artery from the other side; another pierces the mylohyoideus and anastomoses with the submental branch [ incisive arteries [The soft tissues of anterior floor of the mouth, delimited by mandibular arch, are supplied by a rich anastomosing vascular rete with three sources of arterial blood: (1) the submental arteries, 2) the sublingual arteries, and 3) the incisive arteries. The the subl the inciarteries . These oSevere haemorrhage from this anastomosing plexus has been reported as a complication of dental and surgical procedures involving perforation of the lingual cortex and/or laceration of the adjacent soft tissues . In our Implant-based prosthetic rehabilitation in the anterior mandible is becoming a standard management option in partial or total edentulous patients. Flapless postextractive technique is one of the techniques used for implant placement in the intercanine area and is considered to be safe. However, severe and potential life-threatening complications, such as severe arterial bleeding and large-sized haematomas in the floor of the mouth, may occur. If, during the implant placement in the intercanine area, intense acute pain is noticed, haemorrhage and postdrilling progressive swelling, immediate suspension of surgical procedures, inside-outside bimanual compression on the floor of the mouth, Guedel pattern airway insertion, and local (haemostatic agent) and systemic (cortisone and tranexamic acid) medical therapy should be carried out. 
The prevention of these complications is mandatory with clinical and CT analyses, in order to highlight mandibular atrophy and to select carefully the correct length and angulation of the bone drill and to keep more attention to the risks linked to the flapless technique choosing the elevation of a lingual mucoperiosteal flap to access the mandibular contour intraoperatively and to protect the sublingual soft tissues and vasculature in risky cases."} +{"text": "Malnutrition and starvation's possible adverse impacts on bone health and bone quality first came into the spotlight after the horrors of the Holocaust and the ghettos of World War II. Famine and food restrictions led to a mean caloric intake of 200\u2013800 calories a day in the ghettos and concentration camps, resulting in catabolysis and starvation of the inhabitants and prisoners. Severely increased risks of fracture, poor bone mineral density, and decreased cortical strength were noted in several case series and descriptive reports addressing the medical issues of these individuals. A severe effect of severely diminished food intake and frequently concomitant calcium- and Vitamin D deficiencies was subsequently proven in both animal models and the most common cause of starvation in developed countries is anorexia nervosa. This review attempts to summarize the literature available on the impact of the metabolic response to Starvation on overall bone health and bone quality. Starvation describes the most severe form of malnutrition, where a severe deficiency in energy intake evokes a metabolic response focused on the subsistence of the vital organs to allow for the survival of the affected individual. Nearly 805 million people are estimated to suffer from malnutrition. 25% of children experience stunted growth due to malnutrition, whilst approximately 45% of deaths in children under five can be correlated with starvation , 2.Starvation may be caused either by an insufficient caloric intake or an inability to properly digest food. Environmental circumstances such as draughts or other natural catastrophes affecting the agriculture, poverty, or forceful withholding in certain geopolitical circumstances such as war or political prison camps may contribute to the unavailability of food. This occurs most commonly in less developed countries. In more developed countries, the primary causes of starvation are medical. Diseases such as anorexia nervosa or depression which lead to a self-induced lack of food intake are not uncommon causes of starvation if the diseases are not diagnosed and treated correctly.The initial metabolic response to starvation does not differ physiologically from the postabsorptive phase in between meals which may usually be observed in a well-nourished human being , 4. The Starvation may occur for either limited periods of time followed by a return to a regular food intake or subsist over extended periods of time, thereby leading to a chronic adaptation to the low caloric intake or absorption.Starvation induced changes of the bone have been described and experimented with in various animal models.The most naturally occurring physiologic cause of starvation which can be observed in nature occurs during the hibernation of black, brown, and polar bears. Osteoblastic activity levels have been reported to decrease tremendously during hibernation, caused most likely by both immobility and starvation . 
NonetheProspective animal studies most commonly performed in a rat model have shed great light on the effect of energy restriction and starvation on fetal bone development in utero, associated hormones and consequences for the adult animal. Hermanussen et al. demonstrOverall, the observations in wild moose and laboratory rats with regards to the metabolic bone response to starvation mimic human physiology more than the hibernating bear. A greater level of understanding of bear's ability to maintain their bone mineral density despite 5\u20138 months of immobility and starvation may inspire new studies investigating the effect of starvation on human bone.Malnutrition and starvation endured during famine may affect not only children and adults, but also fetuses in utero. The possibility of intrauterine programming of musculoskeletal disease developed in the adult human being was initially proposed by Lucas . It is gThe effects of periods of acute starvation in the child or the adult were discussed in several case series examining the potential effect of times of deprivation and malnutrition on the overall bone quality, incidence, and time of onset of osteoporosis, and risk of fracture. Immediate effects on overall bone health were reported by Winnick , detailiOne of the most commonly examined diseases with regards to its effect on bone health is anorexia nervosa, a psychiatric disorder more common in females than in males characterized by a self-induced restriction of caloric intake coupled with a variety of other possible symptoms such as distorted self-perception and compulsive eating rituals , 55. PosOverall, all studies examining the connection between starvation and the bone metabolism in laboratory animal models and humans found evidence of either developmental delays, stunted bone growth, decreased bone mineral density or decreased cortical strength. Given the importance of good bone health to the mobility and function of every human being, public health research investigating the prevention of starvation as well as research focusing on the optimization of therapeutic options for those who have endured periods of famine is in order."} +{"text": "In parallel with progress in understanding the canonical main olfactory system, a recent advance has accelerated elucidation of the structural organization and functional properties of the olfactory subsystems. These include the ganglion of Grueneberg, septal organ, and specific subsets of olfactory sensory neurons in the main olfactory epithelium. The emerging concept of the olfactory subsystems is that each subsystem expresses distinct classes of chemoreceptors and signal transduction molecules, and thereby detects distinct categories of odor or pheromone molecules. The chemical signals detected by the subsystems are sent via their axons to subsystem-specific domains in the glomerular map of the main olfactory bulb , which is released from conspecific mice under a threatening situation . Therefore, respiration rhythm plays a key role orchestrating the information processing mode across a number of regions in the central olfactory system, which includes the olfactory bulb and numerous areas of the olfactory cortex with high sensitivity . Because these olfactory sensory neurons project axons to the necklace glomeruli located at the posterior-most part of the glomerular map, they are called necklace olfactory neurons Luo, . These nThe septal organ is an island of olfactory epithelium located at the ventral base of the nasal septum. 
About 70% of olfactory sensory neurons in the septal organ respond to both odor stimulation and to mechanical stimulation, suggesting that they respond with spike discharges in synchrony with the nasal air flow even without odor input (Grosmaitre et al., The above discussions argue for the idea that at least some of the olfactory subsystems have an important role in detecting the timing of inhalation, exhalation, and air flow in the nasal cavity. Further experiments are needed to determine the functional role of each olfactory subsystem in detecting distinct phases of the respiration cycle. The timing of inhalation and exhalation plays a pivotal role in information processing in the central olfactory system (Mori et al., The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Neurogenesis persists in adult mammals in specific brain areas, known as neurogenic niches. Adult neurogenesis is highly dynamic and is modulated by multiple physiological stimuli and pathological states. There is a strong interest in understanding how this process is regulated, particularly since active neuronal production has been demonstrated in both the hippocampus and the subventricular zone (SVZ) of adult humans. The molecular mechanisms that control neurogenesis have been extensively studied during embryonic development. Therefore, we have a broad knowledge of the intrinsic factors and extracellular signaling pathways driving proliferation and differentiation of embryonic neural precursors. Many of these factors also play important roles during adult neurogenesis, but essential differences exist in the biological responses of neural precursors in the embryonic and adult contexts. Because adult neural stem cells (NSCs) are normally found in a quiescent state, regulatory pathways can affect adult neurogenesis in ways that have no clear counterpart during embryogenesis. BMP signaling, for instance, regulates NSC behavior both during embryonic and adult neurogenesis. However, this pathway maintains stem cell proliferation in the embryo, while it promotes quiescence to prevent stem cell exhaustion in the adult brain. In this review, we will compare and contrast the functions of transcription factors (TFs) and other regulatory molecules in the embryonic brain and in adult neurogenic regions of the adult brain in the mouse, with a special focus on the hippocampal niche and on the regulation of the balance between quiescence and activation of adult NSCs in this region. Neural stem cells (NSCs) in the embryonic and early postnatal murine brain generate neurons and glia, including astrocytes and oligodendrocytes. The transition of proliferative and multipotent NSCs to fully differentiated neurons and glia is called neurogenesis and gliogenesis, respectively. Neurons are generated from early embryonic development until early postnatal stages, with only a few neurogenic zones remaining active in the adult or a few types of neurons (granule neurons and periglomerular neurons in the V-SVZ) of the telencephalon, the formation of the DG involves the generation of a dedicated progenitor cell source away from the VZ and in close proximity to the pial surface. 
This additional proliferative zone remains active during postnatal stages and eventually becomes the SGZ, which is the site of adult hippocampal neurogenesis Figure .The DG originates from the dentate neuroepithelium (DNE), also called primary matrix, a part of the VZ of the medial pallium that is in direct contact with the cortical hem (CH) and becomes clearly distinguishable from embryonic day 14.5 , characterized by rapid divisions and the expression of a series of neurogenic TFs are produced early on by the telencephalic roof plate and later by the CH , although a more direct role of Tbr2, which is expressed by a small subset of NSCs, is not ruled out drives the differentiation of IPCs into glutamatergic cells in the DG from development to adulthood of the genes that are regulated between the quiescent and activated NSC states. Interestingly, a significant fraction of NFIX-regulated genes control cell adhesion, cell motility or extracellular matrix production family have been implicated in the generation of neuronal and glial cells during hippocampal development. In particular, Tlx, also known as Nr2e1, is an orphan nuclear receptor that is involved in patterning of the embryonic telencephalon. Tlx is expressed throughout the telencephalic VZ, except in the dorso-medial region that will give rise to the hippocampus. Tlx expression remains low in neurogenic regions during late embryonic and postnatal stages and is upregulated only at adult stages is a key component of the cell cycle machinery that controls the transition between the G1- and S-phases of the cell cycle, together with the other Cyclin D proteins (CcnD1 and CcnD3) and the Cyclin-dependent kinases signaling has been shown to stimulate stem cell proliferation and neurogenesis in the adult DG, and this pathway has been proposed to mediate the stimulating effect of physical exercise (running) on neurogenesis in the regulation of adult hippocampal neurogenesis has been the focus of several recent studies and G protein-coupled GABAB receptors (GABABR), and the loss of both types of receptors increases stem cell proliferation in the DG , for instance, is segregated to the apical membrane in embryonic radial glia is highly expressed in stem cells and non-neuronal cell types of the embryo, where it represses the expression of neuronal-specific genes : they are the source of instructive signals that determine the fate of neighboring stem cells. However, in contrast with stem cells in the developing brain that must cope with a continuously changing environment, adult stem cells are surrounded by a relatively stable niche. The V-SVZ and the SGZ niches share many common features. However, while the cellular and molecular composition of the V-SVZ niche has been relatively well investigated, we lack a similar level of understanding of the SGZ niche. Further studies of the signals and cellular interactions that control NSC behavior in the DG will be required before we can appreciate the similarities and divergences in the regulation and function of stem cells in the two adult neurogenic niches Figure .Genetic analysis of adult neurogenesis suggests that it is an unstable process, since removal of individual regulatory genes often results in dramatic changes in the behavior of adult stem cells. This inherent instability might reflect the strong impact that environmental cues have on stem cell activity. 
That defects in single quiescence pathways are sufficient to drive the cell cycle re-entry of subsets of stem cells also suggests that different pools of adult stem cells might receive and/or respond to different niche signals. Further investigations will determine whether adult NSCs in the DG are indeed heterogeneous and whether this is due to exposure to different niche signals or to intrinsic differences between distinct NSCs.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Despite the widespread use of repetitive transcranial magnetic stimulation (rTMS) in both research and clinical settings, there is a paucity of evidence regarding the effects of its application on neural activity. Studies investigating the effects of rTMS on human participants ] as the primary measure of changes in neural activity. Studies using MEPs to measure changes in the responsiveness of cortico-motor pathways however, cannot determine the source of observed changes, nor the manner in which individual neurons contribute to the overall effect. Often a \u201cchange in excitability\u201d of motor pathways is described when MEP size is altered following rTMS. This term is somewhat misrepresentative however, as it is likely that changes in MEP amplitude represent a hybrid of changes of the intrinsic excitability of neurons within the activated pathway and alterations in the strength of the connections between these neurons. These processes occur through different mechanisms amplitude (Aydin-Abidin et al., ex vivo slices. They found that increasing rTMS intensity resulted in a falloff in LTP then a suppression at high intensities which is somewhat surprising and could occur as a result of rTMS-induced damage to the brain. This highlights the need for additional investigation of the effects of rTMS intensity in animal models, and also demonstrates that rTMS applied to the cortex can influence the activity of subcortical structures such as the hippocampus. The latter was also displayed by Ahmed and Wieraszko (The intensity of rTMS is critical to the resulting effects on neural tissue, as it is for single pulses. Ogiue-Ikeda et al. delivereieraszko who descElectrophysiological studies such as those reviewed, provide much needed information on the effects of TMS pulses on neural responses. This information is an important step toward a clearer understanding of the effects of rTMS and the design of more efficacious protocols. What these studies do not provide however is an insight into the changes in intrinsic excitability and synaptic plasticity that may be occurring in the intact brain following rTMS. In order to obtain this kind of information, a different approach is required.Intracellular recording techniques provide information on both changes to neuron membrane properties that indicate alterations in intrinsic excitability and information on the strength of synapses in the recorded pathway. Unfortunately, the complexity of recording intracellular responses during TMS has made this a significant technical challenge (Matheson et al., NM drafted the manuscript. All authors critically reviewed and revised the manuscript and approved the final submitted version.NM was supported by a University of Otago PhD scholarship. 
JR received support from a Rutherford Discovery Fellowship from the Royal Society of New Zealand.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Measures of climate change adaptation often involve modification of land use and land use planning practices. Such changes in land use affect the provision of various ecosystem goods and services. Therefore, it is likely that adaptation measures may result in synergies and trade-offs between a range of ecosystems goods and services. An integrative land use modelling approach is presented to assess such impacts for the European Union. A reference scenario accounts for current trends in global drivers and includes a number of important policy developments that correspond to on-going changes in European policies. The reference scenario is compared to a policy scenario in which a range of measures is implemented to regulate flood risk and protect soils under conditions of climate change. The impacts of the simulated land use dynamics are assessed for four key indicators of ecosystem service provision: flood risk, carbon sequestration, habitat connectivity and biodiversity. The results indicate a large spatial variation in the consequences of the adaptation measures on the provisioning of ecosystem services. Synergies are frequently observed at the location of the measures itself, whereas trade-offs are found at other locations. Reducing land use intensity in specific parts of the catchment may lead to increased pressure in other regions, resulting in trade-offs. Consequently, when aggregating the results to larger spatial scales the positive and negative impacts may be off-set, indicating the need for detailed spatial assessments. The modelled results indicate that for a careful planning and evaluation of adaptation measures it is needed to consider the trade-offs accounting for the negative effects of a measure at locations distant from the actual measure. Integrated land use modelling can help land use planning in such complex trade-off evaluation by providing evidence on synergies and trade-offs between ecosystem services, different policy fields and societal demands. The first scenario is a reference scenario that represents a continuation of ongoing economic and demographic trends and includes a number of important ongoing policy developments affecting land use. The second scenario is based on the same macro-level assumptions but includes a package of spatial policies that are related to adaptation measures. Both scenarios were evaluated with a series of models that translate scenarios of macro-economic change to spatial patterns of land use change. Finally four indicators of impacts on ecosystem services were calculated: flood risk, carbon sequestration, biodiversity and habitat connectivity. Based on these indicators the tradeoffs and synergies of the adaptation measures are evaluated. Figure\u00a0000\u20132030.increasing food and feed demand in emerging countries, i.e. the BRIC countries ;changing trade regimes because of increasing competitiveness of Asian and Latin-American regions;.changing environmental constraints because of resource scarcity and climate change support , and current protected nature areas . 
In this way the reference scenario offers business-as-usual baseline conditions that allow a proper assessment of the impacts of policy alternatives.An alternative policy scenario was developed to evaluate the spatial planning of land use for the conservation of soil and regulation of water in connection to climate change. The macro-level socio-economic developments Table\u00a0 and clim2 pixel is designated as experiencing an inundation of 50\u00a0cm or more in a 100\u00a0year flood event according to a map prepared by the EC Joint Research Center under conditions of climate change. This assessment of potential river-flood risk does not incorporate the conditions of flood defence systems and the effects of upstream land use change on flood occurrence. Therefore, the indicator is especially meant to highlight those areas where new assets become exposed to flood risk. Flood risk from the sea is not included in the analysis.The second indicator used in this paper is an indicator of carbon sequestration. This indicator is based on a carbon bookkeeping approach that takes into account effects of soil and forest age on carbon stock changes. Emission factors are specified by individual countries and land cover types to account for differences in farming practice and ecosystem function across Europe. Details of the indicator are described by Schulp et al. .Two indicators are designed to capture the impacts of land use change on biodiversity at the spatial and thematic resolution of the land use modelling results. The first indicator is a measure of the suitability of the habitat for maintaining biodiversity while the second indicator aims to provide a measure of the connectivity of the habitats. Both indicators represent different aspects of habitat quality....The biodiversity indicator is a Mean Species Abundance (MSA) index which is derived from land use, land use intensity (agriculture and forestry), nitrogen deposition, spatial fragmentation, infrastructure developments and policy assumptions on high nature value (HNV) farmland protection and organic agriculture. The methodology used is based on the GLOBIO3 approach initially developed for biodiversity assessments at a global scale natural area. The overall connectivity of an area is assessed by calculating the average resistance (or travel time) to reach the larger patches of natural vegetation from the smaller patches within a neighbourhood or administrative region. As the indicator is not including information on the quality of different land use types, it only offers an indication of the potential coherence of possibly valuable natural areas. The indicator has been defined in such a way to be as much as possible independent of the area of natural land use types in the region and solely capture the spatial arrangement. Therefore, also areas with a relatively small area of nature may still have a good connectivity if the green infrastructure is well-developed. Alternative indicators for landscape connectivity, such as the frequently used proximity indicator (Gustafson and Parker .Figure\u00a0The maps indicate that the adaptation measures are not likely to influence the overall patterns of major land change processes in Europe in the coming decades. Land abandonment will still be concentrated in the most marginal areas and urbanization will take place in the already heavily urbanized regions. However, the adaptation measures will influence land change processes at selected locations and alter regional patterns of conversions. 
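Before turning to the detailed results, the following is a minimal sketch of how a flood-risk exposure indicator of the kind described above could be computed from gridded land use maps. It is not the implementation used in this study; the grid resolution, land use codes and synthetic data are assumptions made purely for illustration.

```python
# Illustrative sketch (not the study's implementation): the flood-risk exposure
# indicator counts the area of NEW urban land that falls inside pixels flagged
# as inundated by >= 50 cm in a 100-year flood event.
# Array names, grid resolution and synthetic data are assumptions.
import numpy as np

rng = np.random.default_rng(0)
shape = (200, 200)                                 # hypothetical 1 km2 grid
URBAN = 1                                          # assumed land use code for built-up area

lu_2000 = rng.integers(0, 5, size=shape)           # baseline land use map
lu_2030 = lu_2000.copy()
lu_2030[rng.random(shape) < 0.02] = URBAN          # some cells urbanise in the scenario
flood_zone = rng.random(shape) < 0.10              # >= 50 cm inundation, 100-year event

new_urban = (lu_2030 == URBAN) & (lu_2000 != URBAN)
exposed_km2 = np.sum(new_urban & flood_zone)       # 1 pixel = 1 km2 here

print(f"new urban area: {new_urban.sum()} km2, "
      f"of which exposed to flooding: {exposed_km2} km2")
```

In the same spirit, the other map-based indicators could be obtained by intersecting each scenario's land use grid with an indicator-specific layer and summarising the overlay per region, although the actual models used in the study are more elaborate.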
When analyzing the results in more detail the implications of the adaptation measures on land change patterns become apparent.An example of such regional differences in land use configuration resulting from the measures in the policy alternative is provided in Fig.\u00a02 of new urban area is located in flood prone areas in the reference scenario while this area only amounts to 34\u00a0km2 in the policy alternative. The small increase in flood risk in spite of the restrictions on building in flood risk areas under the policy alternative is a result of the increase of the flood prone area during the scenario period while the land use policies are based on the area currently under risk of flooding.The flood risk indicator shows the success of the spatial policies in reducing the exposure to potential flooding Fig.\u00a0. WithoutThe indicators of biodiversity and carbon sequestration are used to investigate if the adaptation measures lead to synergies with other ecosystem services. Figure\u00a0In general a decrease in the resistance to reach habitats Fig.\u00a0 is found....Adaptation to climate change consists of a wide range of measures related to different levels of governance. Measures range from local modifications of urban sewage systems to deal with higher peak flows to changes in national scale spatial planning policies and modifications in the common agricultural policy at EU level. In many of the measures land use plays a central role. Given that land use is central to the state of the environment and is linked to multiple economic sectors it is likely that policies in other fields will affect the effectiveness of adaptation measures while at the same time adaptation measures may provide synergies or trade-offs with other sectors. This paper presented a quantitative approach to analyze this mutual interaction between climate adaptation measures and other policy objectives in the context of a multi-scale analysis of land use dynamics. The results indicate that indeed the evaluated set of adaptation measures also impacts the other ecosystem services analyzed. In this paper only a small range of measures is analyzed and the impacts are assessed based on a limited set of indicators, only representing some of the ecosystem services provided in the study area. However, the approach allows for the evaluation of different scenarios and multiple impacts on ecosystem services (Kienast et alThe effectiveness of the adaptation measures in reducing flood risk are not analyzed in this paper. The flood risk indicator solely indicates the exposed assets using a flood risk map that accounts for changes in climate conditions. The results make clear that in the absence of adaptation measures the urban area under flood prone conditions is likely to increase strongly. Changes in the hydrological circumstances as result of improved retention and reduced run-off in the upstream parts of the catchments are not accounted for and may reduce flood risk. Accounting for such changes would require a dynamic coupling of the land use simulations with a hydrological model. Such an approach was taken by Hurkmans et al. for analThe specification of the scenario options as an interactive process with the policy makers turned out to be a time-consuming process. However, during the specification an improved mutual understanding of the possible implications of the measures as well as an understanding of the capacities and limitations of assessment models to evaluate such measures was obtained. 
While initially defined in broad terms, the need for quantitative specification of the scenarios in the model provided a platform to discuss the more detailed implications of these policy themes for land use planning practices. In the end, the joint specification of the scenarios assisted the interpretation of the final modeling results because the policy makers had been involved in the process of specification which creates a feeling of ownership.The analysis presented in this paper shows that integrative analysis of the tradeoffs and synergies of policy measures in a dynamic scenario context can benefit the targeting and selection of adequate policy measures. The analysis provides information to support discussion between different policy fields and allows to better explore the potential synergies and avoid unforeseen trade-offs. The process of scenario and model specification as a collaborative effort revealed the challenges of effective science-policy communication. While simple straightforward answers and assessments were preferred by the policy makers the discussion of the specification and implementation of scenario options in the model helped policy makers to understand the need for a clear specification of the broader policy objectives to be able to assess their impacts. The presentation of results in maps helped to understand the complexity of the outcomes. Trade-offs and synergies between adaptation measures and ecosystem service indicators are location and context dependent and land change assessments therefore do not always provide crisp and uniform answers to the questions of policy makers. As such, the science-policy interface emerged into a joint learning process in which the role of specific policies in complex human-environment interactions becomes clearer to both scientists and policy makers..The approach presented in this paper is an example of operationalizing the ecosystem services approach to inform policy (Daily et alAs land use is both a driver and result of human-environment interactions it provides a proper platform for discussing the way we can best adapt to changes in the earth system and secure the ecosystem services provided by the land."} +{"text": "Neurons in the brain exhibit a broad spectrum of heterogeneities even within a given morphological or physiological class. In a recent modeling study, Mejias and Longtin investigated the effects of heterogeneity in the voltage threshold for spike generation on the dynamics of random networks of excitatory and inhibitory neurons or quantities that are static in the time scales of interest (in experiments), and dynamical heterogeneity, which refers to measures of ongoing neuronal activity such as firing rates and correlations. The relationships between the two can be usefully explored in both directions: while Mejias and Longtin explored the dynamical consequences of different levels of biophysical heterogeneity (bottom-up), others started from experimental observations of dynamical heterogeneity, and investigated neuronal models that are consistent with the observed dynamics in the presence of noise , both of which display important dynamical differences with respect to homogeneous all-to-all connectivity. We propose that a similar approach could be fruitfully applied to other forms of biophysical heterogeneity, and ultimately result in useful taxonomies of the different sources of biophysical heterogeneity, describing the dynamical heterogeneities they result in and the interactions between their effects. 
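As a toy illustration of how a biophysical heterogeneity (here, dispersed spike thresholds) translates into a dynamical difference at the population level, the sketch below compares the population activation curve of a set of simple threshold units with identical versus Gaussian-distributed thresholds. It is not the model of Mejias and Longtin; the unit model, threshold spread and all parameter values are assumptions chosen only to make the qualitative effect visible.

```python
# Toy sketch (not the published model): population response of N threshold
# units to a common input, with homogeneous vs. heterogeneous thresholds.
# All parameter values are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(1)
N = 1000
inputs = np.linspace(-2.0, 2.0, 41)          # common drive to all units

theta_hom = np.zeros(N)                      # identical thresholds
theta_het = rng.normal(0.0, 0.5, size=N)     # Gaussian threshold heterogeneity

def population_rate(drive, thresholds):
    # Fraction of units whose drive exceeds their threshold (step nonlinearity).
    return np.mean(drive > thresholds)

rate_hom = [population_rate(x, theta_hom) for x in inputs]
rate_het = [population_rate(x, theta_het) for x in inputs]

# The homogeneous population switches abruptly from 0 to 1 at drive = 0,
# whereas the heterogeneous population activates gradually, giving a smoother,
# approximately sigmoidal population transfer function.
for x, rh, rt in zip(inputs[::8], rate_hom[::8], rate_het[::8]):
    print(f"drive={x:+.1f}  homogeneous={rh:.2f}  heterogeneous={rt:.2f}")
```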
This level of understanding would facilitate the conceptual integration of different results and eventually lead to basic functional principles of neuronal processing beyond area- or species- specific details.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Reports of clinical trials often include adjusted analyses, which incorporate covariate data into the analysis model. Adjusting for covariates can increase the precision of treatment effect estimates and increase the power of statistical tests, without the need to increase sample size. In individually randomised trials, the main reason to adjust for a particular covariate is that it is expected to be strongly associated with the primary outcome. The larger the association between covariate and outcome, the greater the increase in power achieved from an adjusted analysis.A valid analysis of a cluster randomised trial (CRT) must take into account the clustered structure of the data, for example by using a mixed effects model. The selection of covariates for an adjusted analysis of a CRT is more complicated because covariates exist at both the cluster level and individual level. Further, adjustment for an individual level covariate can affect the residual variance of the outcome at both the cluster and individual levels.Using results from simulations, and some analytic investigation, I show how the size of clusters and the intra-cluster correlations of the covariate and outcome affect power and precision in adjusted analyses of CRTs using linear mixed effects models. I also consider logistic mixed effects models and show how adjusting for individual level or cluster level covariates affect what treatment effect is being estimated."} +{"text": "CompARE is a pragmatic multicentre open-label phase III randomised controlled trial aiming to determine if intensification of treatment in intermediate and high risk oropharyngeal cancer (OPC) patients improves the definitive primary outcome measure of overall survival time. The trial evaluates three experimental arms separately against one control arm using a MAMS design, with three interim assessments of disease-free survival time. Experimental arms will be discontinued if they fail to meet the interim assessment criteria. The timing of these assessments is driven by the number of control events, with the study engineered so these occur approximately annually. The design characteristics will be presented.A potential additional experimental arm for treatment of OPC was proposed during CompARE initiation, and could be introduced into the trial after one year if approved. The straightforward implication is an increase in the number of patients required to recruit per year or an increase in trial duration. However in a complex MAMS design, a balance of multiple factors such as a feasible sample size, trial duration and appropriate number and timing of interim assessments with appropriate statistical error rates, have to be carefully considered.Using the nstage and artpep programs in Stata, a review of the operating characteristics of both the original design and expanded design with the new experimental arm was undertaken. 
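Relating back to the discussion of covariate adjustment in cluster randomised trials above, the following is a small simulation sketch showing how adjusting for a strongly prognostic individual-level covariate can reduce the standard error of the treatment effect in a linear mixed effects model. It is an illustrative example only, not the analyses described here; the variance components, effect sizes and the use of Python/statsmodels are assumptions.

```python
# Illustrative simulation (not the analyses described above): an individually
# measured prognostic covariate is adjusted for in a linear mixed model for a
# cluster randomised trial. All parameter values are assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2024)
n_clusters, cluster_size = 30, 20
treat_effect = 0.5

rows = []
for c in range(n_clusters):
    treat = c % 2                                  # alternate cluster allocation
    u = rng.normal(0, 0.5)                         # cluster-level random effect
    for _ in range(cluster_size):
        x = rng.normal(0, 1)                       # prognostic individual-level covariate
        y = treat_effect * treat + 1.0 * x + u + rng.normal(0, 1)
        rows.append({"y": y, "treat": treat, "x": x, "cluster": c})
data = pd.DataFrame(rows)

unadj = smf.mixedlm("y ~ treat", data, groups=data["cluster"]).fit()
adj = smf.mixedlm("y ~ treat + x", data, groups=data["cluster"]).fit()

# Adjusting for x removes much of the individual-level residual variance,
# so the standard error of the treatment effect is typically smaller.
print("unadjusted:", round(unadj.params["treat"], 3), round(unadj.bse["treat"], 3))
print("adjusted:  ", round(adj.params["treat"], 3), round(adj.bse["treat"], 3))
```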
The recruitment and statistical implications of the addition of the new experimental arm and an evaluation of the study duration to changes in recruitment predictions will be presented."} +{"text": "In vivo research carried out in various animal models as well as epidemiological and clinical data support the existence of a particular phenotype \u2013 osteoporotic OA. In fact, subchondral bone has become a potential therapeutic target in OA. Depending on the ratio between formation and resorption, subchondral bone remodeling can culminate in either a sclerotic or an osteoporotic phenotype. Patients with osteoporotic OA may thus achieve clinical and structural benefit from treatment with bone-targeted interventions.The identification of well-defined phenotypes along the course of the disease may open new avenues for personalized management in osteoarthritis (OA). Arthritis Research & Therapy, Wang and colleagues demonstrate that osteoporosis aggravates cartilage damage in an experimental model of knee OA in rats [Subchondral bone has become a potential therapeutic target in osteoarthritis (OA). In a previous issue of in rats . Interes in rats . The sigThe controversy regarding the relationship between subchondral bone quality and cartilage integrity originates from the complex biological and mechanical nature of the osteochondral junction . OA progThe effects of estrogen deficiency on the knee joint have been reported in various experimental animal models of OA. The findings obtained by Wang and colleagues on subchondral bone quality and articular cartilage damage support previous research carried out in rabbits, in which osteoporosis aggravated instability-induced OA . In thisThese findings in animal models could be translated to humans, and together with epidemiological and clinical data they support the existence of a particular phenotype \u2013 osteoporotic OA . Indeed,The original approach of using ESWT in OA by Wang and colleagues remains intriguing. These authors have reported previously that the application of ESWT to subchondral bone of the proximal tibia showed a chondroprotective effect in the initiation of knee OA and regression of established OA of the knee in rats. These effects were attributed to the ESWT multifunctional actions on cartilage and bone. Yet achieving such beneficial effects in this osteoporotic OA model suggests that the main mechanism of action of ESWT may be improving subchondral bone structure . HoweverIn summary, the study by Wang and colleagues further supports the existence of the osteoporotic OA subtype and the potential benefit of bone-acting therapeutic interventions. Consequently, the identification of patient phenotypes along with the discovery of specific therapeutic interventions targeting relevant pathogenic mechanisms during the course of the disease could lead to a personalized approach to the management of OA."} +{"text": "Allergens in food pose a risk to allergic consumers, especially if they are present in food without declaration or warning. While there is EU regulation for allergens present as an ingredient, this is not the case for unintended allergen presence (UAP). Food companies use precautionary \"may contain\" labels to inform allergic individuals of a potential risk from UAPs. However, the use or absence of precautionary label has a limited correlation with the level of UAP and consequently the risk of an unexpected allergic reaction. 
Allergen risk assessment using probabilistic techniques enables estimation of the residual risk after the consumption of a product that unintendedly contains an allergen.\u00ae . The allergens of interest were milk, wheat, peanut and hazelnut. The probabilistic model estimates the level of risk for objective allergic reactions posed to the allergic consumers in the UK and gives insight in the health implications of the measured unintended allergen levels.Previously, the UAP was determined among 500 packaged food products from 12 food groups in the UK, either containing a precautionary label or not . In the present study, the UAP results were combined with food consumption data from the National Diet and Nutrition Survey (NDNS) in the UK and the allergen-specific population threshold distribution in a probabilistic model. Data used for the threshold distributions were gathered during the scientific review of the VITAL\u00ae (mg protein) and the product specific consumption (kg).Also, the levels of UAP were compared to a product-specific action level (ppm) based on the reference doses as determined for VITAL\u00ae reference doses in evaluation of the use of advisory labeling in the UK.The results of this study will assess the public health risks posed by the levels of allergen cross-contamination found to be present in food in the UK retail market and will provide insight regarding the implications of the potential implementation of the VITAL"} +{"text": "The European Paediatric Mycology Network (EPMyN) was launched in 2014 to create a European platform for research and education in the field of paediatric mycology. The EPMyN aims to address the lack of paediatric specific evidence and knowledge needed to (1) improve the management and outcome of invasive fungal infections in children and neonates and to (2) enhance and develop paediatric antifungal stewardship programmes. The European Paediatric Mycology Network (EPMyN) was launched in 2014 to create a European platform for research and education in the field of paediatric mycology. Its mission is to increase the knowledge of the epidemiology and pathogenesis and to improve the management of invasive fungal infections in neonates and children. The detailed objectives of the EPMyN are threefold: (1) to investigate the clinical epidemiology of invasive fungal infections in neonates and children, (2) to investigate new diagnostic and treatment modalities of fungal infections in specific paediatric groups, and (3) to create a forum for educating and training colleagues in the field of paediatric mycology.www.penta-id.org), a recognised level 1 network for paediatric infectious diseases in Europe by the European Networks of Paediatric Research at the European Medicines Agency (Enpr-EMA). PENTA-ID was developed from the well-established PENTA (Paediatric European Network for Treatment of AIDS) network, originally collaboration between paediatric HIV centres in Europe addressing relevant questions in the field of paediatric HIV. Numerous clinical trials of antiretroviral therapies in children have been successfully performed over more than 20\u00a0years and form the basis of current paediatric HIV treatment guidelines [The EPMyN is a member of PENTA-ID (www.ipfn.org), are the differences in management strategies , differences in the fungal epidemiology and in the risk factors associated with the development of invasive fungal infections (e.g. antibiotic prescription policies). 
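Returning to the probabilistic allergen risk assessment described earlier, the sketch below shows the general shape of such a Monte Carlo calculation: a serving size and an unintended allergen concentration are sampled, the resulting protein dose is compared with an individual threshold drawn from a population dose-response distribution, and the fraction of simulated eating occasions exceeding the threshold estimates the reaction risk. It is not the model used in the study; the distributions and all parameter values are assumptions for illustration only.

```python
# Illustrative Monte Carlo sketch of a probabilistic allergen risk estimate
# (not the study's model). Distributions and parameters are assumed values.
import numpy as np

rng = np.random.default_rng(7)
n_sim = 100_000

# Consumption per eating occasion (kg), e.g. from a food consumption survey.
serving_kg = rng.lognormal(mean=np.log(0.05), sigma=0.5, size=n_sim)

# Unintended allergen concentration in the product (mg protein per kg food).
uap_mg_per_kg = rng.lognormal(mean=np.log(5.0), sigma=1.0, size=n_sim)

# Individual minimal eliciting dose (mg protein), log-normal population model.
threshold_mg = rng.lognormal(mean=np.log(30.0), sigma=1.5, size=n_sim)

dose_mg = serving_kg * uap_mg_per_kg          # ingested allergen protein dose
risk = np.mean(dose_mg > threshold_mg)        # fraction of occasions with a
                                              # predicted objective reaction
print(f"predicted reaction risk per allergic eating occasion: {risk:.4%}")

# A product-specific action level (mg/kg, i.e. ppm) can likewise be derived by
# dividing a reference dose (mg protein) by a high-percentile serving size (kg).
reference_dose_mg = 0.2                        # assumed value, for illustration
action_level_mg_per_kg = reference_dose_mg / np.percentile(serving_kg, 95)
print(f"action level: {action_level_mg_per_kg:.2f} mg/kg")
```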
A number of colleagues being a member of the EPMyN steering group are actively involved in the IPFN as well and this does reflect the complementarity of these two networks. In addition, developing and executing cross-Atlantic clinical research, taking into account the differences in regulatory, data safety and legal issues between the USA and Europe will enhance future collaborative activities aimed at a better understanding and improved outcome of invasive fungal infections in neonates and children.The need for EPMyNcan be found in the unique epidemiology of invasive fungal infections in neonates and children, differences in pharmacokinetics of antifungal agents and usefulness of fungal diagnostic measures compared to adults, and the lack of clinical phase-III trials to assess the efficacy of antifungal agents in the paediatric populations. The lack of paediatric specific evidence results in inappropriate use of diagnostic measurements and antifungals and hampers the development of paediatric antifungal stewardship programmes. Arguments favouring the EPMyN, in addition to the well-established International Paediatric Fungal Network (IPFN) led by colleagues in the USA and the European Organisation for Research and Treatment of Cancer (EORTC). During this 2-day course, the participants were provided with state of the art lectures on the epidemiology, prevention, diagnosis and treatment of invasive fungal infections in neonates, children with primary immunodeficiencies, children with malignancies and those undergoing haematopoietic stem cell transplantation. Interactive case presentations with active involvement of the participants led to vivid discussions and revealed the absence of paediatric specific data to guide clinical decisions.Aspergillus fumigatus in the paediatric population.Current activities are focussed on collecting the necessary information about the management of invasive fungal infections in children and neonates from a large number of European centres. This information is collected by an electronic survey and the data is captured in the REDCap database. The results of this survey are expected to define specific areas of future research, to highlight gaps in paediatric specific knowledge in clinical mycology and will emphasise difficulties encountered in our daily practice which need to be addressed. In addition, specific surveys are being developed to obtain an enhanced insight in the epidemiology of invasive mould infections in the paediatric population, to describe the experience of fluconazole dosing in neonates with a focus on the use of higher dosages as suggested by recent pharmacokinetic studies \u20134 and towww.ema.europa.eu). Investigating new antifungals in paediatrics do require specific expertise compared to clinical trials in adults. A PIP need to consider an appropriate medicine\u2019s formulation acceptable for use in children, the need of coverage of all paediatric age groups from birth to adolescence and how to measure its efficacy and side effects. 
The EPMyN within the PENTA-ID is able to cover those paediatric specific aspects and will provide in expertise needed.Next to these investigator-initiated studies, EPMyN aims to provide a platform for pharmaceutical companies to assist in developing and performing the studies as required in the so-called paediatric investigation plan (PIP) set out by the European Medicines Agency \u2013Fungal Infections Study Group (EFISG) guideline for the management of invasive candidiasis in neonates and children was the first to be published \u2022. In thiIn complementation of these guidelines, the development of an outline of a paediatric antifungal stewardship programme to be used as a format in individual European countries is under consideration. The need for a paediatric antifungal stewardship programme is directly related to the challenges encountered in the management of invasive fungal infections in neonates and children, the development of antifungal resistance and the high costs of inappropriate antifungal prescriptions. Invasive fungal infections are characterised by unspecific signs and symptoms in already extremely vulnerable children transplants), poor-sensitivity of culture-based microbiologic tests, and the pressure to start treatment early due to the high morbidity and mortality of these infections. Most antifungals in paediatric settings are therefore prescribed for empiric/pre-emptive therapy. Suboptimal dosing of antifungals in neonates and children has been described and may contribute to suboptimal clinical outcomes \u2022, 8. To Collaborative efforts in the field of paediatric mycology are critical to improve our knowledge and to facilitate research with the ultimate goal of improving the management and outcome of invasive fungal infections in children and neonates. The EPMyN has taken up the responsibility to provide a platform for research and education in the field of paediatric mycology. The activities undertaken by the EPMyN will facilitate the development of paediatric specific antifungal stewardship programmes built on increased evidence and knowledge."} +{"text": "This review aims to summarize these results with highlights on the pathophysiological function of the RAS under hypoxic conditions. It is concluded that the maladaptive changes of the RAS in the carotid body plays a pathogenic role in sleep apnea and heart failure, which could potentially be a therapeutic target for the treatment of the pathophysiological consequence of sleep apnea.The renin-angiotensin system (RAS) plays pivotal roles in the regulation of cardiovascular and renal functions to maintain the fluid and electrolyte homeostasis. Experimental studies have demonstrated a locally expressed RAS in the carotid body, which is functional significant in the effect of angiotensin peptides on the regulation of the activity of peripheral chemoreceptors and the chemoreflex. The physiological and pathophysiological implications of the RAS in the carotid body have been proposed upon recent studies showing a significant upregulation of the RAS expression under hypoxic conditions relevant to altitude acclimation and sleep apnea and also in animal model of heart failure. Specifically, the increased expression of angiotensinogen, angiotensin-converting enzyme and angiotensin AT The phys Figure . 
As suchArterial chemoreceptors in the carotid body are important for the rapid adjustment of respiratory and cardiovascular activities via the chemoreflex elicited by the sensory afferent activity of the chemoreceptor responding to changes in chemical stimuli in the arterial blood. The carotid body is a highly vascularized organ with blood perfusion far exceeding the needs of its local tissue metabolism. Thus, changes in arterial oxygen tension or pH, circulating humoral and locally produced signaling substances acting as paracrines or autocrines can readily diffuse to the chemosensory components of the carotid body. In addition to the response to hypoxia, hypercapnia and acidosis, the carotid chemoreceptor responds to Ang II because AT receptors are expressed in the chemosensitive glomus cell of the carotid body Allen, . Moreove1 receptors expressed in the glomus cells.The expression and localization of several key RAS components, notably angiotensinogen, which is an indispensable component for the existence of an intrinsic RAS, have been detected in the rat carotid body , which mobilizes the endoplasmic calcium to store and elevate intracellular calcium of Ang IV , a biologically active peptide converted from Ang I and Ang II, respectively, by ACE2 and ACE. Recent study reported the expression of Mas receptors in the rabbit carotid body and also its decreased expression under a disease condition associated with heart failure is increased by 2 folds in the carotid body of rats exposed to 10% inspired oxygen for 4 weeks (sustained hypoxia) it increases the mRNA and protein level of angiotensinogen expressed in the chemosensitive glomus cell, and (ii) elevated the mRNA expression and enzymatic activities of ACE associated with recurrent apneas closely related to pathophysiological conditions including sleep-disordered breathing, obstructive sleep apnea and hypertension . Hence, Ang II augments hypoxia-induced renal sympathetic nerve activity (RSNA) and there are significant increases in the expression of ATSleep-disordered breathing with central or obstructive sleep apnea is frequently observed in patients with heart failure. Sleep-disordered breathing has been known to have a negative impact on the CHF patient and so clinical treatment of sleep-disordered breathing could improve cardiac performance and long-term outcomes in these patients. Also, cardiac dysfunction may play a role in the pathophysiology of sleep apnea, although the interrelationship between heart failure and sleep apnea remains to be established (Caples et al., 1 receptor regulates the excitability of the carotid chemoreceptor. Hence, Ang II elevates the level of intracellular calcium in the chemosensitive glomus cells and the activity of carotid chemoreceptors. As a result, activation of the chemoreflex could be a peripheral control important for the physiological response to hypoxia and the maintenance of electrolyte and fluid homeostasis. In addition, the expression of AT receptors in the carotid body is regulated by hypoxia. In effect, sustained hypoxia induces an upregulation of AT1 receptor expression, which increases the sensitivity of the chemoreceptor response to Ang II. This regulation may be important in the modulation of the carotid body functions responsible for the hypoxic ventilatory response, for enhancing the cardiorespiratory response and adjusting electrolyte and water homeostasis during sustained hypoxia. 
Furthermore, RAS components are locally expressed in the carotid body and the increased RAS expressions are closely relevant to the pathogenesis of disease including sleep-disordered breathing and heart failure. Specifically, the upregulation of the expression of angiotensinogen, ACE and AT1 receptors could play a significant role in the augmented carotid chemoreceptor activity, via the increased activity of chemoreflex, contributing to the pathophysiology of sleep apnea and the sympatho-excitation that is central to the endothelial dysfunction and heart failure during the course of pathogenesis. Future studies in this direction warrant a better understanding of the pathogenic role of RAS in the carotid body in the disease associated with hypoxemia.Findings of expression and functional studies suggest that the ATThe author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Atraumatic avulsion of the tibial attachment of patellar tendon in adults is a very rare injury with only few published case reports. Here we are sharing the successful management and follow-up of a similar case with a different suture material for repair of the tendon, the FiberWire. We believe that the management we are discussing allows for early return to activity with good functional outcome. Isolated avulsion of patellar tendon from tibial tuberosity is very rare injury in the adults . Spontan52-year-old man, plumber by profession, presented at the emergency department with one day history of having sudden onset spontaneous pain and functional impairment of left knee while he was walking. The patient had history of hypertension. He had no history of taking medications such as corticosteroids or fluoroquinolones and did not remember having suffered any knee pain previously. On clinical examination effusion was present in knee. It was possible to palpate a gap between the distal patellar tendon and the tibial tuberosity and patient was not able to extend the leg. X-ray of the left knee showed a high patella and presence of calcification in the distal part of the patellar tendon . UltrasoPatellar tendon rupture occurs almost exclusively in males as a consequence of landing after a fall or a jump which produces a rapid contraction of the quadriceps muscle with a partially flexed knee or a consequence of direct trauma to the knee . Mostly Management of patellar tendon rupture varies from primary repair strengthened by cerclage augmentation and immobilization of the extension for 6 weeks , 4 to prPotential complications with use of FiberWire are rupture of the sutures, increased tendinosis by the sutures, and reduction of the strength in knee extension. One of the rare potential complications described in literature with use of FiberWire is reaction to the synthetic material leading to discharging sinuses. In our case, none of the abovementioned complications was observed.Avulsion of distal insertion of the patellar tendon is an extremely rare injury in adults that requires reliable fixation followed by supervised rehabilitation to get the best possible functional outcome. There are very few case reports in the literature describing surgical repair of these types of injuries. 
In contrast to conventional treatment options, our management of reinsertion of the tendon using transosseous suture with FiberWire is an attractive alternative treatment option that provides excellent resistance combined with good biocompatibility."} +{"text": "MEFV gene, which encodes the protein named pyrin , are associated with the autoinflammatory disease familial Mediterranean fever (FMF). Recent genetic and immunologic studies uncovered novel functions of pyrin and raised several new questions in relation to FMF pathogenesis. The disease is clinically heterogeneous reflecting the complexity and multiplicity of pyrin functions. The main functions uncovered so far include its involvement in innate immune response such as the inflammasome assemblage and, as a part of the inflammasome, sensing intracellular danger signals, activation of mediators of inflammation, and resolution of inflammation by the autophagy of regulators of innate immunity. Based on these functions, the FMF-associated versions of pyrin confer a heightened sensitivity to a variety of intracellular danger signals and postpone the resolution of innate immune responses. It remains to be demonstrated, however, what kind of selective advantage the heterozygous carriage conferred in the past to be positively selected and maintained in populations from the Mediterranean basin.Mutations in the Autoinflammatory diseases are a group of genetically determined multisystem disorders caused primarily by the dysfunctions in innate immunity. These rare disorders are characterized by recurrent episodes of generalized inflammation and fever in the absence of infectious or autoimmune causes . FamiliaMEFV gene, which is composed of 10 exons and encodes a 781 amino acids protein called pyrin or marenostrin or TRIM20 (http://fmf.igh.cnrs.fr/infevers/). The FMF-associated mutations are predominantly located within exon 10 of the gene, and they primarily result in amino acid substitutions. The phenotypic variability of the disease is thought to be partially associated with particular mutations and allelic heterogeneity I chain-related gene A has a modifier effect on the disease phenotype . Comparative analyses of amino acid substitutions in the ret finger protein (rfp) domain of pyrin among primates and diseased people have demonstrated that some human mutations actually represent the recapitulation to the ancestral amino acid states and these exist as wild type in other species . Inspectutations . The metutations . The autPositive selection to maintain the high frequency of the heterozygotes should be sufficiently strong to overcome the negative effects such as the increased morbidity and mortality rates among the homozygotes and compounded heterozygotes . There hMEFV encodes the protein called pyrin , which is supposed to play a key role in apoptotic and inflammatory signaling pathways. The protein belongs to the large family of proteins sharing a conserved domain structure with the tripartite motif (TRIM) consisting of an N-terminal RING domain, B-box domain(s) and a C-terminal coiled-coil domain amyloidosis, which usually affects the kidneys. Amyloidosis is the result of tissue deposition of amyloid, which is a proteolytic cleavage product of the acute phase reactant serum amyloid A SAA; . Overpro alleles . Clinica alleles .MEFV mutations with different inflammatory pathologies, such as systemic onset juvenile idiopathic arthritis . 
The evidence for positive selection of mutations in the MEFV gene is given in several works cited in this review, and it has strong support from formal analytical approaches in population genetics. What remains unclear, however, is why the mutations have been selected and maintained in the Mediterranean populations and what the mechanistic explanations are for the advantage at the biochemical level. In particular, we know well the negative effects of the homozygous or compound heterozygous mutant allele combination, which result in a higher morbidity/mortality rate. At the cellular level this genetic makeup presumably results in the impaired assembly of pyrin inflammasomes, in the launch of excessive and extended inflammatory responses, and in a less efficient resolution of inflammation. Given the multiple functions of pyrin and the expected pleiotropic effects of mutations in the MEFV gene, the significance and contribution of each of these functions are difficult to ascertain in terms of clinical presentation in pathology or of the heterozygote advantage at the phenotypic level. All we know at the clinical level is that the homozygous or compound heterozygous state results in an enhanced and extended inflammatory response to innocuous factors that are tolerated well and handled efficiently by the normal immune system. The diseased state is certainly disadvantageous, but we do not know about the possible advantages conferred by the heterozygote state. Investigation of this previously ignored group could potentially reveal the characteristics and traits that were responsible for the selective advantage and maintenance of these mutations in the Mediterranean populations. Understanding the role of pyrin in innate immunity has progressed rapidly in recent years, uncovering its numerous functions in the cell, from the formation of several supramolecular structures and inflammasome assembly, to sensing various intracellular danger signals, to mounting innate immune responses, and to the resolution of inflammation. All authors listed have made a substantial, direct and intellectual contribution to the work and approved it for publication. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Sleep is critical for regulation of synaptic efficacy, consolidation of memories and learning. It has been proposed that synaptic plasticity associated with sleep rhythms could contribute to the consolidation of memories acquired during wakefulness. It was suggested that in the active state of the sleep slow wave oscillation, the hippocampal formation activates latent memories stored in the neocortex (replay) and induces permanent changes in synaptic conductances. In this study, we present a thalamocortical network model of slow-wave sleep activity characterized by repeatable (<1 Hz) transitions between active (Up) and silent (Down) states of the network. The model consisted of a layer of thalamic relay (TC) and reticular (RE) neurons in the thalamus as well as a model of a cortical column with pyramidal neurons and inhibitory interneurons. All neurons were modeled based on Hodgkin-Huxley kinetics. Spike-timing dependent synaptic plasticity (STDP) was implemented to regulate synaptic efficacy.
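As an illustration of the plasticity rule just mentioned, the following is a minimal sketch of a pair-based additive STDP update. It is not the authors' implementation; the time constants, learning rates and weight bounds are assumed values chosen only for illustration.

```python
# Minimal pair-based STDP sketch (illustrative only, not the authors' code).
# Potentiation when the presynaptic spike precedes the postsynaptic spike,
# depression otherwise; parameter values are assumptions for illustration.
import numpy as np

TAU_PLUS, TAU_MINUS = 20.0, 20.0   # ms, decay of the STDP window (assumed)
A_PLUS, A_MINUS = 0.01, 0.012      # learning rates; slight depression bias (assumed)
W_MIN, W_MAX = 0.0, 1.0            # hard bounds on the synaptic weight

def stdp_update(w, pre_spikes, post_spikes):
    """Apply additive pair-based STDP to weight w given spike times in ms."""
    dw = 0.0
    for t_pre in pre_spikes:
        for t_post in post_spikes:
            dt = t_post - t_pre
            if dt > 0:      # pre before post -> potentiation
                dw += A_PLUS * np.exp(-dt / TAU_PLUS)
            elif dt < 0:    # post before pre -> depression
                dw -= A_MINUS * np.exp(dt / TAU_MINUS)
    return float(np.clip(w + dw, W_MIN, W_MAX))

# Example: a synapse repeatedly activated in the "replayed" order strengthens,
# whereas the reverse order weakens it.
forward = stdp_update(0.5, pre_spikes=[10.0, 110.0], post_spikes=[15.0, 115.0])
reverse = stdp_update(0.5, pre_spikes=[15.0, 115.0], post_spikes=[10.0, 110.0])
print(forward, reverse)   # forward > 0.5 > reverse
```

Under a rule of this kind, synapses repeatedly activated in the order imposed by a propagating active state are strengthened while those activated in the reverse order are weakened, which is the qualitative behaviour invoked in the results below.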
The slow oscillation was driven by intracortical dynamics; active states were initiated by spontaneous miniature synaptic releases at different network sites. We found that the pattern of active state propagation depended on the spatiotemporal pattern of activity on the previous cycle of the slow oscillation and could be influenced by external input. Because of the refractoriness properties of the network, the probability of the next active state initiation was higher for the network site that initiated activity on the previous cycle. Furthermore, even weak external stimulation delivered to the network resulted in an increased probability of Up-state induction at the stimulation location. This suggests that spatially and temporally sparse hippocampal input could influence the spatiotemporal pattern of the slow oscillation. The location of the initiation site and the pattern of active state propagation determined the relative timing of spiking in cortical neurons. When spike-timing dependent synaptic plasticity (STDP) was implemented, there was a net decrease in synaptic strengths and, at the same time, an increase in the strength of specific synapses which were associated with the sequence replay. The change in synaptic weights between any two neurons was determined by the direction of active state propagation and by the distance between the neurons. Our study proposes a mechanism by which the interaction between cortically generated slow waves and sparse external input, possibly representing input from the hippocampal formation, may lead to reorganization of synaptic strength during stage 3/4 sleep."} +{"text": "A mechanism must operate directly on the Achilles tendon which in effect introduces an obstruction to the outward movement of the Achilles tendon, but the features of this obstruction are largely unexplored. We hypothesized that the obstruction arises from the differences in mechanical properties between muscle contractile tissue and non-contractile tissue. A possibility is that the pennate arrangement of muscle fibers results in a mechanical system which applies force vectors perpendicular to the muscle fiber axis, similar to that described for the action of the intercostal muscles on the rib cage. The distal region of the soleus (Sol) muscle has a unipennate arrangement, with fibers oriented between the posterior aponeurosis and the anterior surface of the muscle. Although this configuration can constitute a constraint to the posterior movement of the Achilles tendon, the Kager's fat pad, being non-contractile tissue, will be unable to actively develop any force, rendering it mechanically incapable of constraining the movement of the tendon. If our hypothesis is true, the action of the obstruction should be strongly synchronized to that of the tip of the Sol muscle as the ankle rotates. The anatomical location (x and y coordinates) and tissue movement (velocity) of the Achilles tendon inflection point, which corresponds to the obstruction, and also those of the extremity of the Sol distal edge were determined during passive and active contractions using MRI (n=6). A simple geometrical model was used to investigate how the position of the obstruction influences force and velocity gains. With increasing ankle angle, the inflection point and the extremity of the Sol distal edge moved in proximal and anterior directions (Figure). The Achilles tendon obstruction is likely to emerge at the location of the boundary region between the Sol muscle and Kager's fat pad when the ankle is positioned in plantarflexion.
Further, the obstruction can provide a means of managing the tradeoff between force and velocity inherent in a finite power source, and may effectively emerge at the terminal part of a joint, such as the foot or hand, that is responsible for quicker movements rather than larger force exertions."} +{"text": "Several UMCG-wide implementation projects improving quality and patient safety have been organized to stimulate the (intrinsic) motivation to work safely in order to bring about the intended change. We deployed 'change agents' in various involved groups of professionals and trained them in supporting and stimulating their colleagues to work safely according to the latest evidence-based guidelines. We will discuss the effects of this strategy on implementation success, using three examples of improvement projects, namely the implementation of a screening instrument for elderly patients to identify frailty, the Surgical Patient Safety System checklist (SURPASS) in the perioperative process, and the guideline Personal Hygiene. We used e-learning modules, instruction sessions, follow-up sessions, feedback, e-mail and the intranet to inform and instruct the change agents. We measured the effects on improving quality and patient safety by a pre-post evaluation of the degree of implementation, partly by individual completion rates of the checklist and the screening instrument and by observation of personal hygiene indicators. Furthermore, we investigated the influence of leadership and team climate on the effectiveness of deploying change agents. The study showed a positive effect of the change agent on the degree of implementation of the various innovations. A transformational leadership style of change agents resulted in higher usage rates among change recipients. Climate within the team in which the change agent acts also positively influences the degree of implementation. From the qualitative data we also learned that both the perceived status of the role by the change agents themselves and the feedback received on the achieved degree of implementation seem to affect the success of a change agent in motivating and stimulating their colleagues to work according to the new guideline. Deploying change agents is an effective strategy for encouraging implementation of guidelines. The effect is larger when change agents show transformational leadership and work in a positive team climate. When change agents perceive their added value in implementation success, they can motivate their colleagues better."} +{"text": "Loss of renal tissue and renal function results in an increase in function and mass of the remaining intact kidney tissue. A prominent example of this so-called renal compensatory hypertrophy is the removal of one kidney, for instance in living kidney donors. Although it is well established that kidney size and function of the remaining kidney markedly increase in these patients, and that these adaptations are a prerequisite for living kidney donation, several open questions regarding the regulation of the compensatory mechanisms remain. For instance, the initial signals inducing the increase in glomerular filtration rate (GFR) are largely unknown. Based on previous studies demonstrating that (I) the rapid increase in GFR post-UNx is mediated by a circulating factor and (II)
the cardiac natriuretic peptides ANP and BNP are both capable of increasing the GFR, we speculated that natriuretic peptides might be the long-sought factor mediating the functional adaptation of kidney function in response to a loss of kidney tissue. Our observations in different gene-targeted mouse models reveal that natriuretic peptide signaling via guanylyl cyclase-A (GC-A) is critical for the rapid increase in GFR which occurs within the first days post uninephrectomy. Thereafter, the functional adaptation and the hypertrophy of the remaining kidney are independent of the cardiac natriuretic peptides, so that GFR and kidney size are regularly elevated six weeks post-UNx even in the absence of GC-A. However, natriuretic peptide/GC-A signaling has a marked renoprotective effect in this chronic phase of renal compensatory hypertrophy, since it prevents podocyte damage and albuminuria. These beneficial effects of GC-A activation on renal integrity are independent of blood pressure and are mediated via a direct effect on podocytes. Natriuretic peptide/GC-A signaling has at least two important functions in the renal adaptation to the loss of kidney tissue: it is critical for the rapid increase in the glomerular filtration rate in the first days after uninephrectomy, and it ameliorates podocyte damage and albuminuria in the chronic phase of renal compensatory hypertrophy."} +{"text": "Synaptic plasticity mechanisms are usually discussed in terms of changes in synaptic strength. The capacity of excitatory synapses to rapidly modify the membrane expression of glutamate receptors in an activity-dependent manner plays a critical role in learning and memory processes by re-distributing activity within neuronal networks. Recent work has however also shown that functional plasticity properties are associated with a rewiring of synaptic connections and a selective stabilization of activated synapses. These structural aspects of plasticity have the potential to continuously modify the organization of synaptic networks and thereby introduce specificity in the wiring diagram of cortical circuits. Recent work has started to unravel some of the molecular mechanisms that underlie these properties of structural plasticity, highlighting an important role of signaling pathways that are also major candidates for contributing to developmental psychiatric disorders. We review here some of these recent advances and discuss the hypothesis that alterations of structural plasticity could represent a common mechanism contributing to the cognitive and functional defects observed in diseases such as intellectual disability, autism spectrum disorders and schizophrenia. Dendritic spines are the major site for excitatory transmission in the brain. They are usually contacted by en passant presynaptic terminals and most often surrounded by astrocytic processes, forming complex structures that display a high degree of functional and structural plasticity. While most research attention has usually focused on the functional aspects of synaptic plasticity and their key contribution to learning and memory mechanisms, work in the last decade has clearly demonstrated the importance of the associated structural rearrangements. These consist of different types of morphological changes, affecting different partners and taking place on different time scales (minutes to days), making them sometimes difficult to relate to the functional changes.
These structural rearrangements are also tightly controlled by activity, they are usually NMDA receptor dependent, and have the potential to significantly affect the development and organization of local synaptic networks. Recent advances have started to unravel some of complex molecular mechanisms and signaling systems regulating these synaptic rearrangements, notably at the postsynaptic level. We will therefore mainly focus on these aspects in this review and highlight the multiplicity of mechanisms that may affect structural plasticity and the development of synaptic networks and thereby contribute to cognitive disorders.in vitro and in vivo studies have shown a high correlation between the size of the spine head, the size of the postsynaptic density, the size of glutamate-evoked responses and the stability of the spine , there appear to be two major structural types of changes that have been reported both in Identification of stimulated synapses within a network has been and still is one of the important limitations for understanding how activity and plasticity regulate synapse properties. This has been overcome in only a few studies using either 2-photon uncaging of glutamate to achieve local stimulation of identified synapses or calcium imaging coupled to electrical stimulation to reveal functional synapses. Although previous electron microscopic (EM) analyses had already suggested that LTP could be associated with morphological changes , regulators of Rho GTPases , srGAP3, Disc1, SynGAP), effectors of Rho GTPases (PAK3), as well as cytoskeletal regulatory proteins or adhesion and scaffold molecules , intracellular mediators implicated in mTOR signaling and regulators of protein synthesis (Kumar et al., Functional synaptic plasticity properties, by quickly changing synaptic strength, allow fast adaptations of network activity which are critical for information processing. However, on a longer time scale, structural plasticity properties may allow a more significant and stable rewiring of synaptic networks through both the formation of new connections and the stabilization of specific contacts. These properties of structural plasticity are particularly important during development where they contribute to shape the structural organization of brain circuits through activity. Molecular analyses of these structural properties started to identify key signaling pathways implicated in these synaptic reorganizations, which also appear to be strong candidates for contributing to cognitive and psychiatric disorders. Hence a common denominator of developmental disorders could involve alterations in spine dynamics that would affect the connectivity and specificity of brain circuits. More systematic analyses of these properties and their functional consequences should allow a better understanding of how they affect information processing and this could eventually lead to new possibilities of treatment of these disorders.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "The oral microbiome is composed of a multitude of different species of bacteria, each capable of occupying one or more of the many different niches found within the human oral cavity. This community exhibits many types of complex interactions which enable it to colonize and rapidly respond to changes in the environment in which they live. 
One of these interactions is the transfer, or acquisition, of DNA within this environment, either from co-resident bacterial species or from exogenous sources. Horizontal gene transfer in the oral cavity gives some of the resident bacteria the opportunity to sample a truly enormous metagenome affording them considerable adaptive potential which may be key to survival in such a varying environment. In this review the underlying mechanisms of HGT are discussed in relation to the oral microbiome with numerous examples described where the direct acquisition of exogenous DNA has contributed to the fitness of the bacterial host within the human oral cavity. The human oral microbiome is an incredible example of a species rich collection of micro-organisms living together primarily as a multispecies biofilm. The constant challenges the biofilm inhabitants have to cope with include interactions of co-operation and antagonism whilst the individual cells have to adjust to an ever changing onslaught of environmental perturbations. Availability of carbohydrate sources, temperature changes and the interaction with transient, non-oral species of bacteria are just a few examples of the challenges the individual members of the multispecies oral biofilm have to adjust to.There are many recent reviews concerning the actual numbers, and species composition, of bacteria within the human oral cavity and the reader is directed to these occurring in bacteria from ecologically similar environments . A recent study on the genome evolution of the genus The oral cavity is by no means a static environment; rather it is an environment where diverse ecological pressures exist. As a portal to the distal part of the digestive tract the oral cavity is open to the environment and also has a variety of foods (substrates) pass through it. There is therefore a great deal of variability encountered in terms of physical, chemical and physicochemical characteristics.Bacteria will have to cope with multiple defense mechanisms within the oral cavity including, but not limited to the production of host antimicrobial compounds such as lactoperoxidase and lysozyme, bacterially derived antimicrobials and bacteriocins, production of immunoglobulins A, G, and M, mucus layers on mucosal surfaces and the constant shedding of epithelial cells. There are also relatively strong mechanical forces which result during chewing, talking and the movement of the tongue. Forces up to 150 Newtons (N) are generated whilst chewing foods such as meat whilst the maximal biting forces have been estimated to be between 500 and 700 N Wilson, . 
In addiVeillonella utilizing the lactate produced by cariogenic streptococci have been shown to be exported from Escherichia coli in vesicles and furthermore have been shown to successfully transform Salmonella via membrane vesicles into the developing biofilm and provides therefore an important source for genetic material via this novel mechanism protein, a ribosomal protection protein which reversibly binds to the 23S rRNA subunit of the ribosome and prevents tetracycline binding therefore preventing protein synthesis, or removing a bound tetracycline molecule before binding itself conferring macrolide, lincosamide and streptogramin resistance, Tn6009 encodes resistance to both inorganic and organic mercury by the action of MerA and MerB respectively and Tn1545 and Tn6003 both encode resistance to kanamycin via the product of aphA-3 , which is secreted and accumulates in the extracellular environment to trigger competence development in a quorum sensing dependent way , and compared to the commensals itself, S. mutans is highly H2O2 susceptible and molecular oxygen (O2) to H2O2, carbon dioxide (CO2) and the high-energy phosphoryl group donor acetyl phosphate in an aerobic environment. Knock-out studies with putative pyruvate oxidase orthologs in S. sanguinis and S. gordonii confirmed SpxB as main H2O2 producer , which is considered a bacterial immune system. It is an adaptive and inheritable system which recognizes and destroys foreign DNA therefore preventing infection by bacteriophages, transposons and plasmids. It is an RNA based protection mechanism, which stores parts of the DNA of previously encountered bacteriophage, transposons and plasmids in the CRISPR chromosomal locus. The cell is therefore able to prevent potential harmful DNA of integrating into the chromosome by RNA interference using the stored information of the CRISPR (Gasiunas et al., The successful transfer of a new genetic trait is dependent on several events. In the case of eDNA, the genetic material needs to persist in the environment long enough to be taken up by a competent bacterium. The eDNA integrity and persistence is compromised by host and bacterial derived extracellular nucleases (Kishi et al., The increasing amount of evidence for HGT in the human oral cavity shows that these processes are important in the adaptability of the oral community. The nature of some of the evolutionary strategies involving HGT is much more complex than simple acquisition of DNA released from dead cells or acquisition of a plasmid or transposon from a donor member of the oral community. More evidence for this is found in oral metagenomes; it has recently been found that the incidence of CRISPRs and the numbers of MGEs associated with oral cavity derived metagenomes is far more than in the GI tract of man (Zhang et al., The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "AbstractWell-organized acute and intermediate rehabilitation after stroke can provide patients with the best functional results. Several studies led to major changes in recommendations concerning remobilization therapies following stroke. Controlled studies including early mobilization in stands and training with partial body weight support on treadmills and \"gait training\" systems showed superior results compared to traditional treatment strategies. 
In case of spasticity with equinovarus and stiff knee pattern following stroke, botulinum neurotoxin A injections and/or casting enable the achievement of adequate alignment of the ankle for the stance phase and allow the improvement of joint mobility during the swing phase when it is restricted. Stroke represents an emergency because once the lesion has become established there is no efficient therapy which can restore the function of the brain tissue [2]. The ideal medical therapy is the prevention of stroke. In addition, efforts have been made to diagnose the onset of stroke early, in order to be able to intervene during the evolution of a vascular thrombosis. This presupposes that the patients\u2019 medical problems are discovered in the stage of asymptomatic atherosclerosis. Consequently, the therapy for atherothrombotic disease with involvement of the extra- and intracranial vessels can be divided as follows: primary prevention measures for stroke, measures for re-establishing the blood flow and stopping the pathological process in case of stroke, physical and neurological post-stroke rehabilitation therapy, and measures to prevent progression and recurrence of the stroke (secondary prevention). These ideas have also been included in the Diagnostic and treatment guide for the cerebrovascular diseases, issued by the Ministry of Health, 2009 [2]. Stroke is one of the main causes of chronic disability and death [2]. Thrombolysis by administering recombinant tissue plasminogen activator (TPA) within 4.5 hours is a therapeutic option for ischemic stroke, which aims at reducing the disabilities which appear immediately after stroke [1]. The concept \"time means brains\" has appeared as a result of research showing that, in an evolving stroke, 1.9 million nerve cells die each minute [1]. In the case of a stroke of the vertebrobasilar system, an intervention can be made within 6 hours to remove the arterial thrombus (thrombectomy) by applying interventional radiology [1]. The factors which influence the severity of the stroke are the following: localization, lesion size, and lesion type [4]. Stages of neurological rehabilitation: The restoration of functions in the first months after a stroke mostly depends on spontaneous healing, which is dependent on the compensation potential and the spontaneous plasticity of the brain [2]. In addition, the main purposes of early rehabilitation are cardiovascular rehabilitation training and reeducation of orthostatism, of leg coordination, of walking, and of cognitive functions. Stroke rehabilitation is a process that begins during acute hospitalization and continues with later phases of rehabilitation. The important phases of stroke recovery are the acute, subacute and chronic phases (more than 6 months post-stroke) [2]. The rehabilitation of the vertical position (orthostatism) and of walking must be started as soon as possible. The first mobilization of the patient should take place during the acute therapy of stroke [2]. For documentation of different neurological deficits and of the severity of physical disability, the best validated assessment instruments are the Barthel Index, the Rankin Scale, the Ashworth (AS) and modified Ashworth (MAS) scales for spasticity, and the Mini Mental State Examination (MMSE) [4]. Locomotion rehabilitation: Locomotion therapy mainly encompasses the rehabilitation of orthostatism and of walking ability. Walking therapy is realized through active gait training, which presupposes frequent repetition. 
The training oriented towards \"functional movements\" used to ensure basic daily necessities has proved to be the most effective of all. Movements are \"functional\" when they allow us to reach an efficient, safe, adapted occupational behavior. Normal movement is possible due to the interaction between the musculoskeletal system and the central nervous system [5]. Different authors have stated that the critical speed is 110 steps per minute [6]. A severely affected patient can reach a number of 50 to 100 steps only if helped by two therapists. Treadmill training with partial body weight support and one or two therapists can reach the level of 300 to 400 steps per training session. In case of training supported by robotic equipment and with the help of a therapist, the patient can reach up to 800-1000 steps per therapeutic session. If the equipment takes over the walking activity too intensely, the rehabilitation process can be slowed because exercises which involve the use of the patient\u2019s own potential are no longer practiced. With treadmill exercises, the degree of difficulty can be progressively raised by establishing new tasks [7]. Research regarding the physiology of walking has shown that the spinal locomotion centers will be adequately stimulated only at a certain step frequency; in this way, a physiological gait can appear [9]. Robotic locomotion training with the support of the Lokomat can accomplish intensive functional training, with the possibility of changing the training parameters according to the kinetic objective established, as well as the use of biofeedback and visual feedback of the results [10]. The application of what has been learnt during walking follows the training supported by the robotic equipment. The other purposes of this training are the following: the increase in the duration and speed of walking and the use of the escalator. The therapist will apply the training of walking outside the training room, for example in the street, while trying different means of support which compensate for the neurological deficits and can ensure the independence of the patient [11]. Orthoses: The most commonly prescribed orthosis with a view to improving gait is an ankle-foot orthosis, but there are also orthoses for the toes, knees, arms, elbow, wrist/hand and/or fingers. The orthosis for the ankle is recommended when paresis of dorsiflexion is present or when spasticity of the flexors is significant. In patients with foot inversion due to spasticity, the use of the orthosis allows an improvement in the symmetry of walking and can reduce the consequent danger of trauma [8]. The orthosis for the knee is most often applied in the early rehabilitation stage of walking. In case of knee extensor paresis, it helps to stabilize the knee joint. Knee hyperextension is dangerous in spasticity of the extensors; retroversion of the knee can lead to painful joint modifications over time [15]. Functional electrical stimulation: FES uses its effects on the intact neurons and incorporates the movement produced into a functional activity. FES has been used to help weak or paralyzed muscles perform activities such as orthostatism, walking or movements of the arms, with the support of neuroprostheses [12-14]. 
Multifunctional advanced functional electrical stimulation systems have been developed that send low-level electrical impulses and can assist patients' functional movement, for example the Neural Electrical Stimulation System (NESS): H(and) 200 and L(imb) 300, 300 Plus [12]. During the early rehabilitation period, when the patient is still inactive, electrical stimulation can be used together with physical exercises in order to maintain muscular integrity. During gait training, stimulation can be applied directly to the tibialis anterior muscle or to the peroneal nerve at the level of the fibular head, which can lead to improvement of the dorsiflexion function. Studies show an improvement in the quality of walking. It seems that the use of electrical stimulation improves post-stroke muscular force as well as endurance, if FES is administered together with resistance opposed to the contraction generated by the electrical stimulation of the affected muscles [16]. Recovery and serial casting: Most of the time, spasticity compels the muscle to remain in a contracted position which shortens it for long periods of time; in this way it may lead, also through consecutive muscle retractions, to the deterioration and limitation of functional mobility and the loss of the ability to perform routine activities, sometimes associated with severe pain which amplifies the functional limitations and, at the same time, alters the quality of life. The joints of the elbow, the wrist, the toes and the ankle are the ones which most often benefit from this treatment [16]. Serial casting has been successfully used since 1989 in the therapy of spasticity of the arm and leg, with very good results. The principle of action is the reduction of muscular hyperactivity, which positively influences both the vicious position of the leg/arm and the pain [20]. Oppositional involuntary muscular hyperactivity that appears against an exogenous force does not respond to BT therapy, just like postural apraxia, which sometimes mimics spastic conditions. Contracture can also respond to therapy and most of the time the pain is reduced. BT application is not recommended in the following cases: fixed contractures, bone deformities, and patients undergoing anticoagulant therapy. Botulinum toxin is effective in the therapy of spasticity in upper motor neuron syndrome both in adults and in children [21]. When spasticity is localized in fewer muscles, local injections of botulinum toxin together with kineto-physical therapy can lead to substantial improvements. In spasticity, there is a need for a careful selection of the muscles and it is important that the injection of the spastic muscles is done strictly at the intramuscular level [18]. BT therapy is indicated in spastic syndromes which have the following characteristics [12]: dynamic (not fixed), especially in muscular hyperactivity, and relevant for daily activity. The use of BT in children presupposes attention to the following: the dose for each muscle according to body weight, the whole dose for each patient according to body weight, and the period of time between injections. The effects of BT-A and neuromotor electrical stimulation (FES) on the spasticity of the plantar flexors and on the movement amplitude of dorsiflexion in children with cerebral palsy were also examined. 
The management of spasticity of upper limb with botulinum toxin therapy was examined in multicentric study and the most important goals for the clinical practice were defined [21]. What has been carefully paid attention to are the effects of the BT-A combined with FES injections, together with functional exercises on spasticity and the functions of hemiplegic patients. The results show that BT-A injections combined with electrical stimulation for 3 days post-injection therapy can improve the functional capacities and spasticity in treated patients [ Social and professional rehabilitation 16]. An extra element, which is important in motor rehabilitation, is social and professional rehabilitation. This way, the rehabilitation and social reintegration therapy of a stroke patient, can be completed [2]. Its purposes are the following: skills , areas of activity (daily activities \u2013 DA), education, working, playing, social participation. The performances which are necessary to do some normal activities belong to the motor field , sensorial and perception field , the field of emotional regulation , cognitive field , communication and social life field . In his rehabilitation activity, the occupational therapist must take into account the internal organization of a person , the patient\u2019s habits , will , and the entire cultural, social, spiritual, personal, time context [16]. In addition, occupational therapy has an important role in the rehabilitation process of a stroke patient ["} +{"text": "We present four movies demonstrating the effect of flicker and blur on the magnitude and speed of adaptation for foveal and peripheral vision along the three color axes that isolate retinal ganglion cells projecting to magno, parvo, and konio layers of the LGN. The demonstrations support the eye movement hypothesis for Troxler fading for brightness and color, and demonstrate the effects of flicker and blur on adaptation of each class of retinal ganglion cells. By judging which stimulus fades first to gray before evoking a negative after-image, an observer can directly compare the speed of peripheral adaptation against central adaptation, i.e., the Troxler effect, under three conditions. 
The three conditions of the movie demonstrate the effects of central blur and peripheral flicker on the speed of adaptation: First, the three stimuli have same shape and temporal properties, the classic Troxler effect is observed with peripheral stimuli disappearing faster than the central one; second, we add blurred edges to the central stimulus in order to decrease the effect of eye movements, the central stimulus disappears significantly faster; third, we add intermittent flicker to the peripheral stimuli in order to simulate the effect of eye movements that intermittently shift the positions of receptive fields between the background and the stimulus , a time-varying stimulus modulates along the reddish\u2013greenish axis in the three conditions demonstrating the effects of central blur and peripheral flicker on the speed of adaptation of the parvo-cell pathway: First, the three stimuli have same shape and temporal properties, a chromatic Troxler effect is observed with peripheral stimuli disappearing faster than the central one; second, blurred edges of the central stimulus decrease the effect of eye movements, but unlike for the achromatic brightness stimuli, this does not change the relative speeds of central and peripheral adaptation; third, intermittent flicker of the peripheral stimuli makes the peripheral stimuli disappear significantly slower.In Movie 4 , a time-varying stimulus modulates along the yellowish\u2013bluish axis of color space that isolates the adaptation properties of the konio-cell pathway. For the three conditions, similar results to Movie 3 are observed.These demonstrations support the eye movement hypothesis for Troxler fading for both color and brightness by showing directly the difference of adaptation time-course with eccentricity. However, they also show that the effect is mediated by spatial and temporal response properties of ganglion cells: Magno cells are responsive to much higher spatial frequencies than are parvo or konio cells. Blurring the edge then has a bigger effect on adaptation of magno cells than on parvo or konio cell, and this was reflected in the psychophysical results . The fli"} +{"text": "Over the course of neural development, changes in the morphology of the neural tissue are accompanied by changes in patterns of activity. One form of activity that is highly studied in cultured cortical networks is neuronal avalanches, characterized by bursts whose distribution follows a power law. Despite a detailed characterization of neuronal avalanches, much remains unknown about their gradual emergence during development . Here, wTo examine the implications of this trend, we evaluated communication between pairs of neurons using a measure of transfer entropy that quantifies the amount of information (in bits) in a neuron found in the past history of another neuron . TransfeIn sum, this study links the gradual development of power law scaling with increased communication efficiency in networks of cortical neurons. Incremental changes in network dynamics suggest that power scaling of avalanches and communication are shaped concurrently over the course of in vitro development, and may arise from a common origin. 
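To make the transfer entropy measure mentioned above concrete, the following is a minimal sketch (not the authors' implementation): it estimates TE in bits between two binary spike trains binned in time, assuming a single-bin history for simplicity; the multiple delays, bin widths and significance testing used in the actual study are not modelled, and the simulated trains are purely illustrative.

    import numpy as np

    def transfer_entropy(x, y):
        """Estimate TE(X -> Y) in bits for binary time series, one-bin history."""
        yf, yp, xp = y[1:], y[:-1], x[:-1]          # future of Y, past of Y, past of X
        samples = np.stack([yf, yp, xp], axis=1)
        states, counts = np.unique(samples, axis=0, return_counts=True)
        p_joint = counts / counts.sum()
        te = 0.0
        for (yf_s, yp_s, xp_s), pj in zip(states, p_joint):
            full = samples[(samples[:, 1] == yp_s) & (samples[:, 2] == xp_s)]
            reduced = samples[samples[:, 1] == yp_s]
            p_full = np.mean(full[:, 0] == yf_s)        # p(y_f | y_p, x_p)
            p_reduced = np.mean(reduced[:, 0] == yf_s)  # p(y_f | y_p)
            te += pj * np.log2(p_full / p_reduced)
        return te

    rng = np.random.default_rng(0)
    x = rng.integers(0, 2, 10_000)        # source spike train
    y = np.roll(x, 1)                     # target copies the source with a 1-bin delay
    y[rng.random(10_000) < 0.2] ^= 1      # corrupt 20% of bins with noise
    print(f"TE(X->Y) = {transfer_entropy(x, y):.3f} bits")  # well above zero for this pair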
This developmental trend poses a particular challenge for computational models of avalanches that typically focus on the endpoint of development , and the"} +{"text": "Approximately 50 % of CML patients are 66 and older.3After reviewing the definition of age in physiologic terms, this editorial explores the biology of CML, the effectiveness and tolerance of TKI in older people and the age adjusted survival of CML patients since the introduction of TKI.Aging is associated with a progressive reduction in life-expectancy and functional reserve, and with increased polymorbidity. The ability of independent living of the older person may be impaired by loss of memory and judgment and by inadequate sight and hearing. The ability of independent living and stress tolerance may be determined by the availability of adequate social support.Though a number of early studies summarized in reference 2 demonstrated that the advanced age was an independent prognostic factor for poor survival, none of them demonstrated a more advanced disease stage at presentation, or a higher prevalence of multiple cytogenetic abnormalities in older individuals. Likewise, more recent studies failed to demonstrate an association of age with BCR/abl mutations purporting TKI resistance.Two large population studiesMost of the information available concerns imatinib. In clinical studies the riskCurrent evidence suggests that the biology of CML is not affected by patient\u2019s age and that older individuals may benefit from TKI treatment to the same extent as the younger ones. Older individuals may require more frequent dose reductions, as well as pharmacologic adjustments, to avoid drug interactions. The assessment of physiologic age through a CGA may reveal the patients more likely to benefit from treatment. The cost of these agents may represent a treatment barrier for older individuals living in countries without universal health care."} +{"text": "ISDR is a 5 year NIHR funded programme of applied research aiming to introduce a step-change in screening for sight threatening diabetic retinopathy utilising personalised risk based intervals. The research programme includes a Randomised Controlled Trial (RCT) to assess the validity of these intervals.The RCT aims to recruit 4400 participants from a pool of 18,000 subjects in Liverpool. Due to the size of the cohort and that much of the data required for the study is collected routinely in multiple NHS organisations, the RCT relies heavily on the integration of multiple databases/data sources and automated systems across multiple networks to reduce the workload on the study team and medical practitioners and to maintain data quality.Data for the RCT is obtained from the following systems: EMIS Web (GP Data), OptoMize , OpenClinica (study database), randomisation system (bespoke system), risk engine and a record of those subjects who have consented for their data to be used by the ISDR programme.This paper will present the processes and solutions undertaken to automate, test and validate the import/export of data, create bespoke systems to clean these data and integrate the different data sources. 
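As a rough illustration of the kind of integration and cleaning step described for the ISDR trial (not the study team's actual pipeline, and with every table and column name invented for the example), the sketch below joins extracts from the routine systems on a shared study identifier, restricts them to consented subjects and flags records that fail a simple date check:

    import pandas as pd

    # Tiny stand-ins for extracts from the routine systems named above;
    # all identifiers and fields are hypothetical.
    gp = pd.DataFrame({"study_id": [1, 2, 3],
                       "registration_date": ["2015-01-10", "2015-02-01", "2015-03-05"]})
    screening = pd.DataFrame({"study_id": [1, 1, 2, 4],
                              "grading_date": ["2015-06-01", "2014-12-01",
                                               "2015-07-15", "2015-08-01"]})
    consent = pd.DataFrame({"study_id": [1, 2]})

    # Keep only consented subjects, then join the sources on the shared identifier.
    merged = (screening.merge(consent, on="study_id", how="inner")
                       .merge(gp, on="study_id", how="left", validate="many_to_one"))

    # Simple automated check: a grading episode should not precede GP registration.
    merged["grading_date"] = pd.to_datetime(merged["grading_date"])
    merged["registration_date"] = pd.to_datetime(merged["registration_date"])
    merged["date_error"] = merged["grading_date"] < merged["registration_date"]

    print(merged[~merged["date_error"]])       # rows ready for the study database
    print(merged["date_error"].sum(), "record(s) flagged for manual review")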
These processes ensure the timely transfer and correct formatting of data, minimising disruption to patient care while ensuring the integrity of the trial's data.The views expressed are those of the authors and not necessarily those of the NHS, the NIHR or the Department of Health."} +{"text": "Between the late 1980s and 2000s, Northern Uganda experienced over twenty years of armed conflict between the Government of Uganda and Lord\u2019s Resistance Army. The resulting humanitarian crisis led to displacement of a large percentage of the population and disruption of the health care system of the area. To better coordinate the emergency health response to the crisis, the humanitarian cluster approach was rolled out in Uganda in October 2005. The health, nutrition and HIV/AIDS cluster became fully operational at the national level and in all the conflict affected districts of Acholi and Lango in April 2006. It was phased out in 2011 following the return of the internally displaced persons to their original homelands.The implementation of the health cluster approach in northern Uganda and in other humanitarian crises in Africa highlights a few issues which are important for strengthening health coordination in similar settings. While health clusters are often welcome during humanitarian crises because they have the possibility of improving health coordination, their potential to create an additional layer of bureaucracy within an already complex and bureaucratic humanitarian response architecture is a real concern. Although anecdotal evidence has shown that implementation of the humanitarian reforms and the roll out of the cluster approach did improve humanitarian response in northern Uganda, it is critical to establish a mechanism for measuring the direct impact of health clusters on improving health outcomes and on reducing morbidity and mortality during humanitarian crises. Successful implementation of health clusters requires the availability of other components of the humanitarian reforms such as predictable funding, a strong humanitarian coordination system and strong partnerships. Importantly, successful health clusters require the political commitment of national humanitarian and government stakeholders.Although leaving health coordination entirely to governments (in crises where they exist) may result in political interference and ineffectiveness of the aid response efforts, the role of government in health coordination cannot be overemphasized. Health clusters must respond to the rapidly changing humanitarian environment and the changing needs of populations affected by humanitarian crises as they evolve from emergency towards transition, early recovery and development. Disasters are common phenomena which disrupt socio-economic development and negatively impact the health and nutrition status of the World\u2019s population. According to the Centre for Research on the Epidemiology of Disaster (CRED), a yearly average of 392 natural disasters was recorded globally between 2000 and 2008 . In 2012Effective emergency response to disasters is often constrained by weak coordination. Since the early 1970s and 1980s, the number of complex emergencies which require special coordination bodies has increased , as has An independent review of the global humanitarian system was commissioned in 2005 to better understand and correct the deficiencies in global humanitarian response . 
The repIn considering the report of the Secretary-General, the 60th session of the UN General Assembly adopted resolution A/RES/60/124 on the strengthening of the coordination of emergency humanitarian assistance of the UN . This reTo implement this international humanitarian reform programme, nine clusters were designated at the global level. These clusters included the Health Cluster, whose main objectives were to provide health leadership for emergency preparedness, response and recovery; prevent and reduce emergency related morbidity and mortality; ensure evidence based health actions, gap filling and sound coordination; and enhance accountability, predictability and effectiveness of humanitarian health action . Uganda,In this article, we review the key issues in emergency health response coordination using the experiences, successes, challenges and lessons learned from the implementation of the Health, Nutrition and HIV/AIDS Cluster (HNHAC) in Uganda. Based on the lessons learned from Uganda and other similar countries that have implemented the cluster approach, we propose a few recommendations which can be used to improve health coordination during both acute and chronic humanitarian crises.This article is a retrospective analysis of the humanitarian response to the northern Uganda crisis with particular emphasis on the operations of the HNHAC in improving health response. The main methodology used for the analysis was desk reviews of various documents on the northern Uganda crisis. Information for the introductory section was obtained from the humanitarian response review and reform documents as well as reports of joint assessments and various surveys. The issues, challenges and lessons learnt were obtained from a review of the minutes of cluster meetings, joint project and cluster evaluation reports, annual reports and monthly bulletins of the HNHAC. Key informant interviews were also held with selected HNHAC members and also with key Ministry of Health (MOH) officials to further validate the findings of the desk review.Two out of the five authors participated actively (as coordinator and chairpersons) in many of the cluster meetings and used this as opportunities for \u201cparticipant observation\u201d of the cluster dynamics. A third author attended selected cluster meetings as an independent observer and also provided further insights into the cluster dynamics and understanding of the principles of the humanitarian reforms and cluster approach by cluster members.Between the 1980s and 2000s, northern Uganda experienced over twenty years of armed conflict between the Government of Uganda (GoU) and Lord\u2019s Resistance Army (LRA). The resulting humanitarian crisis led to displacement of a large percentage of the population and disruption of the health care system of the area. At the height of the conflict, over 90% of the population of Acholi sub-region and a lesser percentage of the population of Lango sub-region were displaced into Internally Displaced Persons (IDP) camps . Living Since the landmark Cessation of Hostilities Agreement (CHA) between GoU and LRA was signed in August 2006, northern Uganda has witnessed significant improvements in the security and peace situation. This stability resulted in spontaneous return of the IDPs to their original homelands. 
Currently, more than 98% of the IDP populations in both Acholi and Lango sub-region have returned to their original homelands or have resettled in new locations .The humanitarian response to the northern Ugandan crisis was done through a combination of project and budget support approaches. A significant percentage of external funding (70% to 80%) was channelled through the international humanitarian response mechanism which comprise of UN agencies, national and international Non-Governmental Organizations (NGOs) and Civil Society Organizations (CSOs) to directly support projects aimed at alleviating the humanitarian consequences of the crisis in the IDP camps. Almost all the funding from government was provided as budget support to the administration of the affected districts to support humanitarian as well as developmental activities. Within the health sector, the MOH had the overall oversight for implementation of the emergency health response activities while the District Health Management Teams (DHMT) were directly responsible for the implementation and coordination of the response activities at the district level. The dual approach resulted in proliferation of humanitarian partners, which necessitated the establishment of an effective humanitarian coordination system.Delivery of health care in Uganda rests on the Health Sector Investment Plan III (HSIP III) which is implemented using a Sector Wide Approach (SWAP). Under this approach, a Health Policy Advisory Committee (HPAC) chaired by the Permanent Secretary (PS) of the MOH is the overall operational, advisory and coordination body within the health sector. HPAC has several Technical Working Groups (TWGs) where technical issues are discussed in-depth and viable options are agreed upon before presentation to HPAC for final decision making. The development partners (comprising of donors and other stakeholders) who provide support to the health sector of Uganda also have a forum known as Health Development Partners Group (HDPG) where issues of common interest are discussed and consensus reached before engaging MOH via HPAC. Agreed positions are presented to HPAC by the chair of the HDPG. The above policy making and coordination applies mainly to developmental issues within the health system of the country. The Office of the Prime Minister (OPM) coordinates disaster and emergency related issues. The Disaster Risk Reduction (DRR) platform which comprises of relevant government Ministries, UN agencies, national and international NGO and is chaired by the OPM serves as the overall coordination mechanism for all disaster and emergency responses. The DRR platform holds monthly meetings and spearheads the development and implementation of the policy for disaster management which was approved by parliament recently. The platform has several technical working groups which comprise of government line Ministries or sectors. However, the participation of the sectors in the DRR platform is limited, resources are inadequate and emergency preparedness interventions limited Figure\u00a0.Figure 1The cluster approach was rolled out in Uganda in October 2005 with the designation of five pilot clusters including the HNHAC. The introduction of the approach was done with little or no consultation with relevant national authorities, UN agencies and NGOs present in the country [There was minimal functional or technical interaction between the HNHAC and the national health sector coordination mechanism (HPAC) due to a number of reasons. 
Firstly, none of the HPAC working groups had direct responsibility for humanitarian response; secondly, due to inadequate staffing, it was not possible for the MOH to second a fulltime staff member to work with the HNHAC; thirdly, as result of the minimal consultation with the national authorities during the introduction of the cluster approach to the country, many of the key MOH officials were not aware of its existence and in cases where they were, had very limited understanding about how it worked. In many instances, the cluster approach was equated to the SWAP which the GoU was already implementing thus resulting in further confusion about the cluster approach and compounding the belief that it was duplicating government efforts. However, at the district level, government participation in HNHAC activities was much better largely due to the fact they (affected districts) were cashed strapped and needed the supplementary funding brought by the cluster approach.In its five years of existence, the health cluster implemented several key activities which contributed to better coordination of the emergency health response in the conflict affected areas. These activities included among others, monthly (and sometimes weekly) health cluster meetings at the national and district levels, conduction of several surveys including the health services availability mapping surveys (conducted in Acholi sub-region in 2006 and in Lango sub-region in 2007), health and human rights survey (conducted in Acholi sub-region in 2007) and gender based violence risk assessment (done in Kitgum district in 2007); these surveys provided information for evidence-based planning. In addition, quarterly cluster bulletins, monthly cluster reports and biannual mapping of cluster partners were produced and disseminated widely to improve information sharing while several cluster training workshops were held. From 2006 to 2009, the cluster supported the planning, implementation, supervision, monitoring and evaluation of a joint inter agency emergency health, nutrition and HIV/AIDS response programme in the IDP camps of northern Uganda. Between 2007 and 2010, the health cluster also supported the MOH to develop a health sector recovery strategy and plan aimed at rebuilding (back better) the health system of the conflict affected districts of northern Uganda.With the prevailing peace in northern Uganda and the return of majority of the IDPs to their original homelands or resettlement in new locations, the humanitarian situation in these parts of the country has improved. HNHAC was gradually phased out from 2009 to 2011 and its activities were merged into the already existing health sector coordination mechanisms covering all the conflict affected districts of northern Uganda often creates poor understanding of the cluster approach systems which ultimately results in poor ownership by national governments and stakeholders. Under such circumstances, reconciling the differences in the mandates and agenda of MOH and health cluster partners becomes a daunting challenge. The Uganda experience showed that reconciling such differences was found to be much easier to achieve during acute crises as compared to chronic ones. This is perhaps due to the national and international attention which acute emergencies often generate and the attending political pressure on government and health cluster partners to quickly bring such crises under control. 
Coordination across sectors and addressing cross cutting issues still remains a serious challenge. Health sector response during disaster requires strong collaboration with the partners working in the other sectors especially, water, sanitation, camp management, food and agriculture, and other relevant sectors. Given the limited capacity available, both locally and globally, and the urgency for instituting lifesaving interventions for the population at risk this additional responsibility of reaching out to other sectors can be a daunting challenge.In many countries experiencing humanitarian crises, the terms \u201chealth sector\u201d and \u201chealth cluster\u201d are often used interchangeably. However, the Ugandan experience did demonstrate the importance of differentiating between these two terminologies (and their roles and responsibility in humanitarian crises) especially in countries implementing SWAP. Understanding these differences has far-reaching implications for ensuring a smooth transition from emergency health response to recovery and development of the health system . While hThe role of health clusters in transition, health system recovery and post-emergency development is unclear. Although in Uganda the health cluster supported the DHMT and MOH to develop a health recovery strategic document and plan of action, the limited interaction and collaboration between the health sector and cluster impeded the timely implementation of the plan for three main reasons. Firstly, some health sector partners felt that there was no need for a health system recovery strategy and plan as health system recovery was already addressed in national health strategies and plans albeit very scantily. Secondly, the health sector partners had inadequate knowledge about the key issues in health transition from emergency to recovery and development while on the other hand most health cluster partners (who are mainly emergency oriented) lacked a clear understanding of the approaches to the health system recovery and development. Thirdly, the lack of appropriate guidelines for managing health system recovery also significantly contributed to the delayed implementation of the recovery programme.The use of fulltime versus part-time health cluster coordinator (who have other programme responsibilities) remains a contentious issue. While dedicated cluster coordinators are preferred, their sustainability over time is a major challenge. In Uganda, the health cluster was managed by a part-time (double hatted) cluster coordinator, an arrangement which had its pros and cons. While the cluster coordinator had direct control of the cluster lead agency\u2019s resources such as funds, emergency field staff and logistics support system and could deploy these to facilitate timely cluster response to emergency situations and effectively fulfil the cluster lead agency provider of last resort functions, it was often difficult to avoid conflict of interest between the two roles. To the best of our knowledge, to date Uganda is the only country where health, nutrition and HIV/AIDS were combined into one cluster. These had important pros and cons; the combination ensured that HIV/AIDS, a cross cutting issue was given enough attention within the cluster. The HIV/AIDS working group of the cluster had membership from all sectors which facilitated the mainstreaming of the health aspects of HIV/AIDS into the other sectors. 
The inclusion of nutrition in the health cluster ensured that the inter linkages between health and nutrition were well explored and properly addressed. However, the difference in leadership of the health and nutrition clusters at the global and country levels resulted in inadequate technical support from the global nutrition cluster to the HNHAC.The cluster approach employs voluntary participation among the various stakeholders operating in an area of disaster. Though in principle NGOs and UN agencies are all accountable both to the beneficiaries and the donors, in practice there is no binding rule to ensure proper coordination and bring all stakeholders to rally behind a comprehensive plan that is developed with full participation of all, including the government, beneficiaries, NGOs and UN agencies. In spite of this drawback, experience from Uganda shows that provided that there is good leadership at all levels, it is possible to bring most partners including the government to work together for the betterment of the life of beneficiaries. The importance of donor cohesion and support in this regard cannot be overemphasized , 23. TheTo address the inherent challenges of weak humanitarian leadership, lack of clear accountability framework in humanitarian response and the overly process driven approach of humanitarian clusters, the IASC Principals agreed on a set of actions in late 2011/early 2012 . The ageThe phasing out of HNHAC proved to be an onerous task due to a number of reasons. HPAC which was supposed to take over its function was already overloaded and had limited capacity to take on additional responsibilities. Although it (HPAC), is the highest coordination and decision making body in the MOH and has several TWGs, some of which could take over the health cluster responsibilities, there were concerns that these TWGs already had several other agenda and as a result health system recovery and development in northern Uganda would not receive enough attention. Perhaps the most plausible reason for the poor integration of the health cluster into the HPAC structure is the poor collaboration between both bodies ab-initio.The findings and conclusions of this study may have been biased by the active involvement of two of the authors as cluster coordinators and chairpersons of the cluster meetings at various times. A number of steps were taken to mitigate this bias; key informant interviews with cluster members were used to validate the findings of the authors\u2019 participant observations. Furthermore, findings of the independent cluster evaluations were also used to corroborate the observations of the authors. Participation of one of the authors as an independent observer in some of the cluster meetings also provided further independent information which were used to cross check the findings and conclusions of this study.Although anecdotal evidence have shown that implementation of the humanitarian reforms and roll out of the cluster approach did improve humanitarian response in northern Uganda ; it is iCoordination is a means to an end and not an end in itself , hence hIt is important to ensure that health cluster partners see and reap the benefits of participating in the health cluster coordination mechanism. Health clusters must create demand for coordination by demonstrating that their benefits offset their disadvantages. 
The clusters must do business differently from the coordination systems that existed before them (if any) and ensure that it effectively performs it roles while at the same time ensuring that it does not create another layer of bureaucracy for programme implementation.Health clusters should invest in ensuring that its partners understand and respect the mandates of each other and ensure that decision making within the cluster is transparent, evidence based and is by consensus.Although leaving health coordination entirely to governments (in crises where they exist) may result in political interference and ineffectiveness of the aid response efforts, the role of governments in health coordination cannot be overemphasized. For this reason, it is critical for health clusters to build on and safeguard existing government health coordination mechanisms to ensure sustainability. In this regard, health cluster lead agencies and partners should ensure MOH leadership in the conceptualization, planning, implementation, monitoring and evaluation of health clusters. Furthermore, health cluster engagement of national MOHs would facilitate standard setting and regulation since the government has the primary mandate for doing this.Donors have a strong role to play in successful roll out and implementation of health clusters and the health cluster lead agency and partners should ensure that they are involved at every stage of the cluster implementation.The decision to deploy a full or part-time cluster coordinator should be guided by the context, the prevailing humanitarian situation and availability of predictable and sustainable funding to the cluster lead organization which further underscores the importance of donor participation in health cluster activities.Health clusters must respond to the rapidly changing humanitarian environment and the changing needs of populations affected by humanitarian crises as the crises evolves from emergency towards transition, early recovery and development.The role of health clusters in health services delivery during the transition, early recovery and development phases should be clearly defined using durable, sustainable and context-specific models. In this regard, building the capacity of health cluster partners on post conflict/disaster health system recovery is key.Establishment of new health clusters should be done within the framework of the transformative agenda. This would foster stronger leadership, development of joint and mutually agreeable strategic plans and most importantly accountability of its members to both beneficiaries and donors.Health clusters must develop and negotiate clear exit strategies right from their inception and gradually work toward implementing these strategies as a humanitarian crisis progresses. They should focus on gradually building the capacity of relevant government partners or mechanisms to ensure that they can take over full coordination of emergency response and early recovery efforts as soon as practicable during a humanitarian crisis.Drawing from experiences and lessons learned from the health cluster implementation in Uganda and coordination of humanitarian crises in Afghanistan, Mozambique, Rwanda and Pakistan we propoOO was the HNHAC coordinator and head of WHO emergency and humanitarian action programme in Uganda from 2005 to 2009. He was health cluster coordinator for Zimbabwe from February to July 2009 and is currently the Outbreak and Disaster Management (ODM) focal point of WHO for eastern and southern Africa. 
In this position, he has oversight for the health clusters in the sub-region including Uganda.AU was the WHO Technical Officer in charge of response and recovery operations in the African region. In that position, he had oversight for all the health clusters in the region including Uganda. He participated in several technical support missions to the health clusters. He is currently the WHO Representative to Eritrea.SW was the HNHAC coordinator and head of WHO emergency and humanitarian action programme in Uganda from 2009 to 2011.DC was a Technical Officer in WHO Uganda from 2005 to 2008. He provided technical support to conduct the mortality survey and several other surveys in northern Uganda and also to roll out the health cluster in the country. He continues to provide oversight to the country in his current position in UNICEF headquarters.OW was WHO Representative to Uganda at the height of the crisis. He provided the overall supervision for WHO\u2019s response to the northern Uganda crisis and was also in the forefront of rolling out the health cluster."} +{"text": "This paper investigates the effect of thermal radiation on unsteady convection flow and heat transfer over a vertical permeable stretching surface in porous medium, where the effects of temperature dependent viscosity and thermal conductivity are also considered. By using a similarity transformation, the governing time-dependent boundary layer equations for momentum and thermal energy are first transformed into coupled, non-linear ordinary differential equations with variable coefficients. Numerical solutions to these equations subject to appropriate boundary conditions are obtained by the numerical shooting technique with fourth-fifth order Runge-Kutta scheme. Numerical results show that as viscosity variation parameter increases both the absolute value of the surface friction coefficient and the absolute value of the surface temperature gradient increase whereas the temperature decreases slightly. With the increase of viscosity variation parameter, the velocity decreases near the sheet surface but increases far away from the surface of the sheet in the boundary layer. The increase in permeability parameter leads to the decrease in both the temperature and the absolute value of the surface friction coefficient, and the increase in both the velocity and the absolute value of the surface temperature gradient. Convection and heat transfer in porous medium appear in many disciplines, such as thermal and insulation engineering, geophysics and chemistry. In the past few decades, the study in this area has attracted extensive attention of many researchers. Raptis The above-mentioned studies In addition, when the unsteady stretching surface is located in porous medium, the impact of different factors on the heat transfer is discussed in some recent works. For instance, in the presence of a magnetic field, the viscous incompressible conductive fluid flow along a semi-infinite vertically porous moving plate is researched by Kim Similar to steady condition, the process of unsteady fluid flow not only includes heat transfer but also mass transfer. When there exists the heat source or sink, Ibrahim et al. According to physical properties of most of realistic fluids, the viscosity and the thermal conductivity are usually related to the temperature and may vary dramatically with temperature. 
Thus, it would be more reasonable to take into account the effects of temperature dependent viscosity and thermal conductivity in mathematical modeling so as to accurately predict the flow behaviour. In view of this, Seddeek considered temperature dependent fluid properties in related flow problems. To the best of the author's knowledge, no attempt has been made to analyze the combined effects of both temperature dependent viscosity and thermal conductivity on unsteady convection and heat transfer over a vertical permeable stretching surface in porous medium in the presence of radiation. Hence, the aim of the present investigation is to design a suitable physical model to describe unsteady two-dimensional incompressible viscous fluid flow past a vertical permeable sheet in a porous medium and to solve this problem numerically using a fourth-fifth order Runge-Kutta method with the secant shooting technique, which is important from both theoretical and practical points of view because of its wide application to polymer technology and metallurgy. Finally, the impact of various physical parameters on the velocity and temperature profiles is displayed in the form of tables and graphs. It is hoped that the results obtained from the present investigation will provide useful information for application and also serve as an effective complement to the previous studies.Consider convection and heat transfer of the unsteady two-dimensional incompressible viscous fluid flow along a vertical permeable sheet in a porous medium, described in a Cartesian coordinate system. Because of the boundary layer behavior, the temperature gradient across the layer is dominant, and appropriate boundary conditions are imposed on this physical problem. Using the Rosseland approximation for the radiative heat flux term (Raptis), the governing boundary layer equations for momentum and thermal energy are simplified. In view of (9), similarity transformations are introduced for the velocity and temperature fields; in the resulting equations the variable physical parameters are defined, and the corresponding boundary conditions (6)\u2013(7) for the velocity and temperature fields are transformed accordingly. Two physical quantities of interest in this problem are the surface friction coefficient and the surface temperature gradient. The boundary value problem of the two coupled higher-order ordinary differential equations is reduced to a system of first-order equations, and the boundary conditions (14) are converted to the corresponding initial value conditions. In the next section, by using the above method, the detailed results of numerical simulation will be given to show the impact of various physical parameters on the absolute value of the surface friction coefficient, the absolute value of the surface temperature gradient, and the velocity and temperature profiles, respectively.In order to validate the method of this paper, limiting cases ignoring the effects of the additional physical parameters were first considered; more detailed results are shown in the tables and figures that follow. This paper investigates convection flow and heat transfer of an incompressible viscous fluid along a vertical permeable sheet through a porous medium. The resulting ordinary differential equations are solved numerically by using a fourth-fifth order Runge-Kutta scheme with the secant shooting method. The numerical results are presented for the major parameters, including the unsteadiness parameter, the viscosity and thermal conductivity variation parameters, the permeability parameter and the radiation parameter. Compared with the previous literature, this paper not only considers temperature-dependent viscosity and thermal conductivity, but also studies the effects of both the permeability of the porous medium and radiation in mathematical modeling, which is helpful to accurately predict the flow behavior.
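The numerical strategy summarized above (reduce the boundary value problem to a first-order system, guess the unknown initial gradients, integrate with an adaptive fourth-fifth order Runge-Kutta scheme, and correct the guesses with the secant method) can be sketched as follows. Since the paper's transformed equations are not reproduced here, the classical Blasius boundary-layer equation f''' + 0.5*f*f'' = 0 is used purely as a stand-in problem; only the shooting logic is meant to be illustrative.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Stand-in boundary-layer ODE (Blasius): f''' + 0.5*f*f'' = 0,
    # with f(0) = 0, f'(0) = 0 and the far-field condition f'(eta_inf) = 1.
    def rhs(eta, y):
        f, fp, fpp = y
        return [fp, fpp, -0.5 * f * fpp]

    ETA_INF = 10.0  # finite surrogate for "infinity"

    def residual(s):
        """Integrate with guessed f''(0) = s and return the far-field mismatch."""
        sol = solve_ivp(rhs, (0.0, ETA_INF), [0.0, 0.0, s],
                        method="RK45", rtol=1e-8, atol=1e-10)
        return sol.y[1, -1] - 1.0

    # Secant iteration on the unknown initial curvature f''(0).
    s0, s1 = 0.1, 1.0
    r0, r1 = residual(s0), residual(s1)
    for _ in range(50):
        if r1 == r0:
            break
        s2 = s1 - r1 * (s1 - s0) / (r1 - r0)
        s0, r0 = s1, r1
        s1, r1 = s2, residual(s2)
        if abs(r1) < 1e-8:
            break

    print(f"converged f''(0) = {s1:.5f}")  # about 0.33206 for this stand-in problem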
Some novel numerical results are obtained as follows:A different numerical method is used to investigate the impact of the physical parameters (Numerical results show that the velocity and temperature monotonously decrease to 0 with the increase of With the increase of viscosity variation parameter An increment in the permeability parameter"} +{"text": "In patients with chronically relapsing infectious diseases (CRID), syndromal forms of the immune-mediated disorders depend crucially on the stage of the inflammatory process occurring in the targeted organs or tissues and the overall chronization of the disease.The association between microbial landscapes (\u203a 50%), whereas the incidence and thus the contribution of postinfectious autoimmune syndrome (PIFA) and autoimmune syndrome associated with postinfectious secondary immunodeficiency (PIFASID) do not exceed 20%. At the subsequent stages, the clinical manifestations are different, viz., the incidence and thus the contribution of the autoimmunity rises dramatically (to 50% at the intermediate stages (PIFA) and to 60% at the final stage (PIFASID).The correlation between the stage of CRID and the form of PICIS is also characterized by the involvement of an additional (the third) component, viz., a clinical form of CRID. Here are several analytical examples related to:(1) clinical form of CRID: in patients with primary pyelonephritis (PPNP) and infectious myocarditis (IM), PIFSI is detected in 75% of clinical cases, whereas in patients with secondary pyelonephrites (SPNP) and autoimmune myocardites (AIM) the contribution of PIFSI is notably reduced (to 25%) giving way to the autoaggression ;(2) stage of CRID: at the early stages (< 3 months for CPN and < 1 month for myocarditis (M)), PIFSI is detected in 40% of cases; however, at the advanced stages of CRID its incidence reduced appreciably, while that of autoimmune syndromes increases in contrast;(3) rate of progression and chronization of CRID: in patients with relapsing or rapidly progressing CRID , the incidence and thus the contribution of PIFSI do not exceed 32-36%, while the share of autoimmune syndromes reaches 80-100%. In such patients, persistent forms of chronic meningoencephalitis or AIM associated with myocardial dystrophies are predominant.chronization of CRID are controlled by postinfectious autoaggression factors, such as PIFA and PIFASID which are thus also considered as predictive factors to monitor transformation of subclinical forms into the clinical ones.These findings suggest that PIFSI is not only the outcome of the infectious process, but also the predictive sign to promote chronization of the disease and to represent a factor responsible for getting the clinical course chronic. Further progression and"} +{"text": "Trypanosoma brucei is a protozoan parasite responsible for sleeping sickness in Central Africa. It is transmitted by the bite of the tsetse fly. During its complex life cycle, it undergoes significant morphological changes including extensive variations in flagellum length (3 to 30 \u00b5m). The flagellum is an essential organelle for parasite survival as it is involved in morphogenesis, movement, division and adhesion of the trypanosome. Intraflagellar transport (IFT) refers to the movement of protein complexes between the membrane and the microtubules. Like in other eukaryotes, IFT is essential for the construction of the trypanosome flagellum. 
Studies in the green alga Chlamydomonas suggested that the total amount of IFT proteins injected in the flagellum defines its final length: the higher the amount, the longer the flagellum. Moreover, the total amount of IFT proteins would be injected at once.Trypanosoma brucei. Expression and localization of IFT proteins was analyzed by immunofluorescence with antibodies against two IFT proteins (IFT22 and IFT172).We have studied the distribution of IFT proteins during the development of Chlamydomonas. However, the IFT protein concentration per unit of flagellum length is constant at any steps of flagellum formation. These results were confirmed by monitoring protein trafficking in live cells expressing the TdTomato::IFT81 fusion protein.The analysis of fluorescence intensities revealed that the total amount of IFT proteins in the flagellum is directly proportional to its length in all stages examined, as observed in Our results lead to a new model where IFT proteins would be progressively recruited to the flagellar compartment during elongation of the organelle. This raises the question of the regulation of IFT injection to the flagellum that will be addressed by studying the formation of very long or very short flagella in several development stages in the tsetse fly."} +{"text": "Biomedical research directed to understand the pathophysiology of Alzheimer's disease (AD) has contributed to the characterization of two pathological hallmarks of this terrible disease, namely senile plaques and neurofibrillary tangles. Neuropathological studies indicated that the level of accumulation of these pathological lesions correlate with the severity of the clinical presentation a cause or consequence of the pathophysiological process? Is the lack of detectable tau aggregates enough evidence to determine that a specific brain region is resilient to tauopathy or that the brain is free of toxic tau? Perhaps, the temporal characterization of different tau species using conformation-specific anti-tau antibodies and labeling molecules, such as pFTAA or tau-specific PET radiotracers (Shimojo et al., Johnson et al. and Sch\u00f6IEV revised the literature and wrote the manuscript.The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "The rarity of thymic epithelial tumors (TETs) creates unique challenges in conducting translational research and developing newer paradigms of treatment . NeverthWithin the last few years, a number of unique genomic changes have been described in TETs . Thymic The association between thymomas and autoimmune paraneoplastic disorders is well recognized . Some paTreatment strategies for newly diagnosed and recurrent TETs have also evolved over time. Surgery is considered the cornerstone of management of early stage TETs and complete resection of the tumor has a major impact on prognosis . For locContributions to the research topic on \u201cNovel Treatments for Thymoma and Thymic Carcinoma\u201d highlight recent developments in the understanding of the biology of TETs and review various aspects of management of TETs. Huang and colleagues describe previously unreported changes in the expression of apoptosis-related genes in WHO subtype B3 thymomas and thymic squamous cell carcinomas . 
These cMartinez and Browne review immunological deficiencies associated with thymoma and suggest a paradigm for comprehensive immunological evaluation in patients with thymoma, which should include an assessment of quantitative immunoglobulins, lymphocyte phenotyping, a vaccine challenge in patients suspected to have antibody deficiency and detection of anti-cytokine antibodies, whenever possible . PossiblShapiro and Korst discuss the role of surgery for thymic tumors with pleural involvement . SurgicaThe role of radiation therapy in the management of thymic epithelial tumors is reviewed by Komaki and Gomez . The indFinally, the review by Chen and colleagues focuses on the latest advances in systemic therapies for TETs . ResultsThe aforementioned manuscripts provide a snapshot of important research efforts related to TETs. Continued advances in the field have resulted in an ever increasing stream of data that offer newer insights into the biology of these rare tumors and support the use of newer paradigms of management for patients with thymoma and thymic carcinoma.Conception and design: all authors; manuscript writing: all authors; final approval of manuscript: all authors.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Beef cows herd accounts for 70% of the total energy used in the beef production system. However, there are still limited studies regarding improvement of production efficiency in this category, mainly in developing countries and in tropical areas. One of the limiting factors is the difficulty to obtain reliable estimates of weight variation in mature cows. This occurs due to the interaction of weight of maternal tissues with specific physiological stages such as pregnancy. Moreover, variation in gastrointestinal contents due to feeding status in ruminant animals is a major source of error in body weight measurements.Develop approaches to estimate the individual proportion of weight from maternal tissues and from gestation in pregnant cows, adjusting for feeding status and stage of gestation.Dataset of 49 multiparous non-lactating Nellore cows (32 pregnant and 17 non-pregnant) were used. To establish the relationships between the body weight, depending on the feeding status of pregnant and non-pregnant cows as a function of days of pregnancy, a set of general equations was tested, based on theoretical suppositions. We proposed the concept of pregnant compound (PREG), which represents the weight that is genuinely related to pregnancy. The PREG includes the gravid uterus minus the non-pregnant uterus plus the accretion in udder related to pregnancy. There was no accretion in udder weight up to 238 days of pregnancy. By subtracting the PREG from live weight of a pregnant cow, we obtained estimates of the weight of only maternal tissues in pregnant cows. Non-linear functions were adjusted to estimate the relationship between fasted, non-fasted and empty body weight, for pregnant and non-pregnant cows.Our results allow for estimating the actual live weight of pregnant cows and their body constituents, and subsequent comparison as a function of days of gestation and feeding status. The breeding herd accounts for about 70% of the total energy used in beef cattle production . HoweverBW) change and reproductive performance are commonly measured for assessment of the response of animals to experimental treatments. 
The deposition of body tissue reserves as well as fetal and uterine tissues contributes to the increase of BW of the cow leading to a complicated interpretation of the BW change. The comparison of the BW of a pregnant cow at the beginning and end of a study may not accurately represent the different physiological status because of the increased weight due to deposition of body tissue reserves or due to the growth of the components related to pregnancy, such as the gravid uterus and mammary gland. As such the standardization of BW of a cow in pregnant or non-pregnant condition is the first step to meet their nutrient requirements .A summary of the equations and relationships generated to adjust the weights of pregnant cows to a non-pregnant status and also to establish the relationship between BW, EBW and SBW in pregnant or non-pregnant beef cows is presented in B. taurus cows.A detailed example of practical application and calculation of the equations suggested in this study is presented in It is not possible to make all the proposed weight adjustments without incurring some errors. One of the first problems is the fact that there are some circular references in the theory of the proposed equations. To try to avoid this problem we used several fixed parameters, estimated for the average of the sample of animals used to generate the proposed equations.Another key point to consider is that the growth of the components related to the pregnancy and maternal tissues does not occur independently. During pregnancy changes occur in the maternal tissues to support nutrition and growth of the gravid uterus. Reduction in the size of the viscera and mobiThe live weight of pregnant cows can be adjusted for non-pregnant condition by deduction of uterus and udder accretion weight due to pregnancy, which can be estimated as a function of days of pregnancy and shrunk body weight. Udder weight accretion as a function of pregnancy occurs after 238 d of pregnancy in Nellore cows.S1 SpreadsheetBos taurus or crossbred cattle was made using the information described in the section \u201cPractical usage of BW adjustments in pregnant cows\u201d of the paper. Crossbred results are average of Bos indicus and Bos taurus calculations.The adaptation of the results from this study for (XLSX)Click here for additional data file.S1 Appendix(DOCX)Click here for additional data file."} +{"text": "The male germline of flowering plants constitutes a specialized lineage of diminutive cells initiated by an asymmetric division of the initial microspore cell that sequesters the generative cell from the pollen vegetative cell. The generative cell subsequently divides to form the two male gametes (non-motile sperm cells) that fuse with the two female gametophyte target cells to form the zygote and endosperm. Although these male gametes can be as little as 1/800th of the volume of their female counterpart, they encode a highly distinctive and rich transcriptome, translate proteins, and display a novel suite of gamete-distinctive control elements that create a unique chromatin environment in the male lineage. Sperm-expressed transcripts also include a high proportion of transposable element-related sequences that may be targets of non-coding RNA including miRNA and silencing elements from peripheral cells. The number of sperm-encoded transcripts is somewhat fewer than the number present in the egg cell, but are remarkably distinct compared to other cell types according to principal component and other analyses. 
The molecular role of the male germ lineage cells is just beginning to be understood and appears more complex than originally anticipated. The male gametophyte (pollen) of angiosperms is among the most reduced independent multicellular organisms in biology. Pollen is comprised largely of a vegetative cell that forms a pollen tube, which conveys the non-motile sperm cells that it contains into the female gametophyte. The male germline arises from an eccentric division of the post-meiotic haploid microspore that cleaves a relatively small lenticular generative cell from its much larger brother vegetative cell. This sessile generative cell migrates into the vegetative pollen cell and is the founding cell of the male germ lineage. Ultimately the generative cell forms two sperm cells\u2014either in the pollen grain or pollen tube depending on the plant\u2014that constitute the male gametes of flowering plants. Remarkably, both of the male gametes are required in the process of double fertilization. Fusion of one sperm cell with the egg cell results in an embryo\u2014which forms the next generation; whereas fusion of the other sperm cell with the central cell initiates the endosperm\u2014a tissue that is typically a nutritive lineage for the embryo and contributes to its embryonic development. The endosperm and double fertilization are sufficiently unique that they are often used as defining features of angiosperms.The male gamete has traditionally been the less understood partner in flowering plant reproduction. Although the first realization that flowering plants displayed sexuality began with the work of Camerarius , as an activator of YODA, was recognized to initiate an asymmetric pattern of cell division of the zygote, which forms a strongly asymmetric and polarized two-celled proembryo that contains a small apical cell at the tip and a larger basal cell , which is removed from this region during later maturation. The generative cell migrates into the interior of the vegetative cell through a unique separation mechanism that is correlated with the disappearance of callose labeling on the newly formed crosswall, and an intensification of labeling in the area of separation , but angiosperms may fuse in either G1 or G2 phase , in which fertilization may be effected in <30 min. Comparisons of elongating rice pollen tubes with pollen grains reveal that the most major conspicuous change in rice pollen tubes is the intensification of metabolic response in secretory pathways with few other detectible changes in gene expression differential centrifugation, typically requiring the collection of cells from a continuous Percoll or discontinuous sucrose density gradient Russell, , and (2)Transcriptomes of male gametes have clearly illustrated the divergent nature of gene expression in the male germ lineage compared to that of the vegetative pollen. Estimates of the number of genes present in the sperm, pollen and seedlings of Arabidopsis have yielded microarray presence calls of 27% for sperm cells (corresponding to 5829 genes), 33% for pollen (corresponding to 7177 genes), and 64% for seedlings using the MAS5 algorithm and gene counts normalized to the Arabidopsis genome was used to portray The RNA-Seq data revealed expression profiles reflecting an upregulation of genes involved in chromatin conformation, indicating an unexpected degree of chromatin activation in the sperm cells. 
The transcriptomes of the egg and sperm reveal major differences in gene expression that will presumably be altered within the zygote. These differences represent the native state for parent-of-origin gene expression and will be a baseline for further studies of the zygote during fertilization and early embryogenesis. Particularly pathways affecting epigenesis, methylation, hormonal control, cell cycle and specific gametic functions were examined in anticipation of their potential contributions to early zygotic and seed development , which may be present in remarkably differing quantities in different plants and different environmental conditions. The vegetative cell is known to be the site of considerable TE activity, which is prevalent enough that it has been directly observed using transposon displays in pollen are known to be transmitted into the zygote of Arabidopsis, translated, and their products expressed in the fertilized egg cell. As the SSP protein activates expression of YODA, this male contributed protein sequence establishes the asymmetry of the two-celled proembryo , thus the onset of zygotic expression in animals coincides with a suppression of messages from the egg cell and the onset of expression from both sets of chromosomes is known to preferentially fuse with the central cell forming the endosperm, whereas the other sperm cell (Sua) preferentially fuses with the egg cell to produce the embryo (Russell, vn appeared similar to an expected endosperm-enriched profile, whereas that of the Sua appeared more similar to an embryo profile. This appears to represent an instance where the precocious development of the embryo may be accelerated by providing targeted paternal genes to be activated upon double fertilization. With modern increases in the molecular sensitivity of characterization techniques and use of greater resolution techniques such as RNA-Seq, the accuracy of this prediction could be examined and potentially tested, providing essentially transcriptomic coverage during early embryogenesis to test the role of male gamete transcriptomes in early post-fertilization development.Among model systems for reproduction, Russell, . In ordeRussell, , it was Russell, . Using cRussell, . Thus, tThe authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Maternal mortality in South Africa, as in many developing nations, is avoidably high. The causes of death are well documented because statutory notification of mortality, happening during pregnancy and for 42 days after delivery, has been in place for 15 years now. The mortality data have been compiled into a triennial report (Saving Mothers) published by the National Department of Health.The death of a pregnant woman may adversely affect the chance that her surviving children will thrive. In South Africa approximately 1 600 women die every year because of pregnancy complications. Many others suffer the burden of on-going morbidity related to childbirth. Preventing premature death and disability among women and children is a priority to which the National Department of Health has committed itself. Given the pivotal role of women in society, especially within poorer communities, this targeted intervention is one with which few would take issue.The epidemiology of maternal mortality informs a variety of proposed recommendations aimed at reducing the risk of death related to childbirth. 
The burden of disease is described by Soma-Pillay and Sliwa in this issue (page 60). The contribution of cardiac disease in pregnancy is recognised to be the single most prevalent medical disorder giving rise to death during pregnancy among South African women. Reducing deaths due to cardiac disease has not yet been accomplished. The need for accurate diagnosis and appropriate management depends on identifying women with some evidence of cardiac disease, followed by referral to an appropriate level of medical care where the greatest available level of expertise may be employed in the further management of such patients.However, such a simple principle is difficult to implement. Often those providing care at the community level (where most South African women deliver their babies) are ill-equipped to recognise significant disease and even less able to provide the necessary medical management. Innovative approaches have been necessary and are also part of the recommendations made in the triennial report.et al. have described the function of a combined obstetric and cardiac clinic where multi-disciplinary care is provided to women with suspected heart disease.Sliwa Such clinics are feasible in metropolitan areas of the country where the greatest concentration of people live. Smaller towns and rural communities have less access to the same level of care. Nevertheless, co-responsibility for patient care between practitioners with different skills sets is recognised to be beneficial, and combined obstetric and medical clinics have been suggested as an attainable goal throughout the country. Monthly joint clinics would enable more considered evaluation of suspected medical disorders during pregnancy and an enhanced level of care together with appropriate referral to regional hospitals.The difficulty of discerning between normal pregnancy physiology and clinical disease, as well as understanding the impact of pregnancy physiology on underlying medical disease has not been taught or examined in the post-graduate training curriculum of general physicians. The anticipated benefits of combined care would only be realised once essential aspects of pregnancy physiology and pathophysiology and their influence on the expression and management of medical disease complicating pregnancy is incorporated into the university curriculum. Such changes are under consideration at present and to that end, this publication establishes a template for understanding the epidemiology of cardiac problems in pregnancy, understanding the (patho)physiology of pregnancy and how interventions, both obstetric and medical, may influence the outcome of these pregnancies. The broader object of this process still remains the targets set 15 years ago and enunciated as the millennium development goals.3"} +{"text": "The Gulf of Aden, although subject to seasonally reversing monsoonal winds, has been previously reported as an oligotrophic basin during summer, with elevated chlorophyll concentrations only occurring during winter due to convective mixing. However, the Sea-Viewing Wide Field-of-View Sensor (SeaWiFS) ocean color data reveal that the Gulf of Aden also exhibits a prominent summer chlorophyll bloom and sustains elevated chlorophyll concentrations throughout the fall, and is a biophysical province distinct from the adjacent Arabian Sea. Climatological hydrographic data suggest that the thermocline, hence the nutricline, in the entire gulf is markedly shoaled by the southwest monsoon during summer and fall. 
Under this condition, cyclonic eddies in the gulf can effectively pump deep nutrients to the surface layer and lead to the chlorophyll bloom in late summer, and, after the transition to the northeast monsoon in fall, coastal upwelling driven by the northeasterly winds produces a pronounced increase in surface chlorophyll concentrations along the Somali coast. The Gulf of Aden is a deep-water basin located between Yemen and Somalia, with a narrow connection to the Red Sea through the Straits of Bab el Mandeb in the northwest and a wide opening to the Arabian Sea in the east .Unlike the neighboring Arabian Sea, which experiences intensive summer and winter chlorophyll blooms in response to, respectively, the southwest and northeast monsoons and has attracted intensive studies , 2, the Nevertheless, relatively high resolution seasonal surveys showed tMeanwhile, modeling studies suggested that the basin-scale upwelling, instead of coastal upwelling, is responsible for the cessation of the winter Red Sea outflow and the subsurface intrusion of the Gulf of Aden intermediate water into the Red Sea , 9. TherIn this study, we use satellite ocean color data to show that, in addition to the winter increase of chlorophyll, the Gulf of Aden exhibits a prominent summer chlorophyll bloom and sustains elevated chlorophyll concentrations during fall. The summer and fall surface chlorophyll evolutions are found indeed to be critically modulated by the seasonal fluctuation of the thermocline in the Gulf of Aden, as revealed by monthly climatological hydrographic and dissolved inorganic nutrient data. In the following sections, the spatiotemporal variability of the thermocline and surface chlorophyll distributions in the Gulf of Aden is described in detail, and the altimetric sea level anomaly (SLA) data and surface wind stress data derived from satellite scatterometer are further used to investigate additional physical mechanisms that help transport the nutrients to the surface and induce the chlorophyll increases.a ocean color data (http://oceancolor.gsfc.nasa.gov/). The in situ monthly climatological temperature and nutrient data are obtained from the one-degree objectively analyzed World Ocean Atlas 2009 (WOA09) [The monthly chlorophyll concentration data for 1997\u20132010 used in this study are derived from the 9-km Sea-Viewing Wide Field-of-View Sensor (SeaWiFS) chlorophyll- (WOA09) , 12. The (WOA09) , which iThe satellite retrieval of ocean color data in the Gulf of Aden is highly affected by the heavy aerosol loads in this region and so fThe surface winds in the Gulf of Aden are influenced by the Indian monsoon system and reverse seasonally from northeasterly during the northeast monsoon (from November to April) to southwesterly during the southwest monsoon (from June to September). The surface winds in the gulf during the northeast monsoon are similar to those in the Arabian Sea . During It is well established that the thermocline in a broad region off the coast of the Arabian Peninsula in the Arabian Sea is drastically lifted upward during the southwest monsoon from the winter condition, due to the upward Ekman pumping associated with the positive wind stress curls at the left side of the Findlater Jet axis . 
The resHowever, the shoaling of the thermocline is not limited to the region off the coast in the Arabian Sea, but also in the entire Gulf of Aden with a similar magnitude, about 100 m, , which iIt was suggested that the open-ocean upwelling associated with the Ekman pumping is a critical physical process for the elevated biological productivity during August-September in the Arabian Sea . Here, tFrom the onset of the southwest monsoon in May, the upwelling in the Gulf of Aden induces an upward displacement of cold, deep water as indicated by the rising isotherms . The theThe seasonal cycles of the SeaWiFS surface chlorophyll concentrations in the Gulf of Aden averaged for the period from 1998 to 2010 are displayed in \u22123 during August when the thermocline is at the shallowest depth. The summer bloom is followed by a plateau of chlorophyll concentrations of ~0.8 mg m\u22123 from October to December when the thermocline is still at shallower depths than during winter.After the low-productivity spring (from late March to early June), the SeaWiFS data reveal a prominent chlorophyll summer bloom (from July to September) and elevated chlorophyll concentrations throughout the fall (from October to December), in addition to the winter (from January to early March) increase in the chlorophyll concentrations. It can be seen that the evolution of summer and fall surface chlorophyll concentrations are closely linked to the seasonal cycles of the thermocline and the vertical nutrient distributions . The sumAt the same time, further examinations of the relation between the surface chlorophyll concentrations and the vertical profiles of nutrient suggest that the biological processes also play an important role in controlling the temporal variability of the surface chlorophyll concentrations. For instance, the decrease in nitrate in the upper layer during August is probaThe seasonal cycle of the surface chlorophyll in the Gulf of Aden is similar to that in the northwestern Arabian Sea shown in , both inThe climatological monthly surface chlorophyll distributions are shown in \u22123) in the whole gulf. Even though no subsurface data are available for the whole euphotic zone, the spring condition for the gulf is likely to be oligotrophic, as there are no evident physical processes to supply nutrients to the surface layer because the water column becomes more stratified and the locations of the thermocline and nutrientcline remain at deep depths.The winter (from January to March) surface chlorophyll shown in The summer (from July to September) distributions exhibit a persistent pattern with surface chlorophyll blooms appearing in both the western and the eastern parts of the basin and a contrasting oligotrophic region in the central part around 48\u00b0E. This pattern is mainly related to the mesoscale eddies as discussed later. During fall (from October to December), elevated chlorophyll levels are mainly found along the Somali coast, and the different spatial pattern in the fall distribution as compared with the summer bloom indicates that the former is not a continuation of the latter. 
Instead, the fall chlorophyll distributions in the Gulf of Aden are consistent with the coastal upwelling driven by the northeasterly winds during the northeast monsoon (see As no significant coastal upwelling is observed during the summer bloom and the advection of surface chlorophyll from the northwestern Arabian Sea is unlikely due to the presence of a surface outflow from the gulf during summer , the effMesoscale eddies are observed from the SLA and drifter data in the Gulf of Aden throughout the year . These eThe effect of meso-scale eddies on the summer and fall surface chlorophyll concentrations in the Gulf of Aden are examined with a synthesis of the AIVSO SLA data and the SeaWiFS data in The effects of the cyclonic eddies on the surface chlorophyll is modulated by the mean thermocline depth in the gulf. As the thermocline drops substantially from the shallowest depths and the nutrients in the upper layer are subsequently reduced, the cyclonic eddy around 48\u00b0E in November 2003 induces a relatively weak increase of chlorophyll. At the same time, the surface winds along the Somali coast becomes stronger than those along the Yemeni coast during the southwest monsoon , and theIt is shown in this study that the Gulf of Aden, although subject to similar seasonally reversing monsoonal winds, is a separate biophysical province from the northwestern Arabian Sea, exhibiting distinct spatiotemporal variability in the surface chlorophyll distributions that are controlled by different physical mechanisms. The SeaWiFS ocean color data reveal that, contrary to previous studies, the Gulf of Aden experiences a summer surface chlorophyll bloom and a sustained fall surface chlorophyll concentration increase in addition to the well-known winter bloom.The climatological hydrographic data indicate that the shoaling of the thermocline, therefore the nutricline, in the entire gulf during summer and fall is of central importance by providing upward nutrient flux and preconditioning the summer and fall surface chlorophyll increases. During summer, as the thermocline and nutricline reach the shallowest depths, facilitated further by vertical nutrient pumping associated with the cyclonic eddies, the surface chlorophyll concentrations reach the peak values in the annual cycle. During fall, when the thermocline and nutricline drops but still remains uplifted and the northeast monsoon prevails in the Gulf of Aden, the surface chlorophyll concentrations increase along the Somali coast as a result of coastal upwelling.The upward nutrient fluxes caused by the thermocline shoaling have geographically wider implications than the Gulf of Aden. First, the aforementioned summer subsurface intruding Gulf of Aden intermediate water would provide a substantial amount of influx of nutrients to the Red Sea, because the intruding water in the Red Sea has an observed temperature range of 16.5\u201320.0\u00b0C , 21, andSecond, the sustained increase of surface chlorophyll concentrations during fall in the gulf can be seen as a combined result of the northeast monsoon and the persisting effect of the southwest monsoon on the thermocline structure during fall. 
As the seasonal variations of thermocline structure in the gulf and in the northwestern Arabian Sea are nearly synchronized , the resAs the satellite ocean color data and climatological hydrographic data shown in this study reveal the basic physical processes that regulate the seasonal evolution of the surface chlorophyll in the Gulf of Aden, in situ measurements that cover the entire basin with resolutions to resolve the highly variable spatial patterns are required to provide field validations against the remotely sensed results presented in this study. Furthermore, modeling studies that integrate both the physics and biological processes are required to provide more complete insights about the mechanisms that govern the temporal variability of the surface chlorophyll on various spatial and temporal scales."} +{"text": "IJOTM has extended its activities in publishing articles on tissue banking. It is a great pleasure to witness expansion of the journal activities in this new dimension that heralds a bright future through interactive reinforcement of the country\u2019s two major transplantation units by sharing mutual experience to further the regional goals of the MESOT. On account of the prevailing sanctions on the overall growth of the country\u2019s scientific output, policies to optimize funds and efforts are most welcome.From April 2013, the The decision to create closer ties and enhance existing collaboration in research and publication between the two centers, was reached in the latest session of MESOT between Dr. Malek-Hosseini from Transplantation Research Center, Shiraz University of Medical Sciences and Dr. Mahdavi-Mazdeh from Tissue Bank, Tehran University of Medical Sciences. We anticipate fruition of this merger in our scientific and health care endeavors and will brief our interested audience of the outcome of our policies in due time."} +{"text": "Heart failure (HF) is a serious debilitating condition with poor survival rates and an increasing level of prevalence. HF is associated with an increase in renal norepinephrine (NE) spillover, which is an independent predictor of mortality in HF patients. The excessive sympatho-excitation that is a hallmark of HF has long-term effects that contribute to disease progression. An increase in directly recorded renal sympathetic nerve activity (RSNA) has also been recorded in animal models of HF. This review will focus on the mechanisms controlling sympathetic nerve activity (SNA) to the kidney during normal conditions and alterations in these mechanisms during HF. In particular the roles of afferent reflexes and central mechanisms will be discussed. Heart failure (HF) is a complex syndrome, arising secondary to a wide range of cardiac structural and functional abnormalities, with the manifestations being shortness of breath, fatigue, exercise intolerance, and oedema. Elevated sympathetic drive is well recognized to play a key role in the pathophysiology of HF, with increased sympathetic nerve activity (SNA) to the kidneys resulting in renal vasoconstriction, increased renal sodium retention and increased renin release, and consequently elevated angiotensin II and aldosterone levels induced HF model in the rat and pacing induced HF in the rabbit, sheep, and the dog. The majority of studies have indicated an increase in directly recorded RSNA during HF with ejection fractions of 36\u201345%. The MI induced model of HF in rodents results in a significant increase in baseline resting levels of RSNA \u22121]. 
It receives its blood supply via an arterial branch arising from the internal or external carotid artery, which ensures that the CB responds acutely to changes in arterial oxygen levels and other contents . Stimulation of the CB drives an increase in systemic sympathetic tone through direct signaling to the nucleus tractus solitarii and rostral ventrolateral medulla is the most perfused organ per gram weight in the body [2000 ml minMore recently the role of the CB chemoreflex has been implicated in mediating the increased RSNA during HF , and the A5 noradrenergic cell group , microinjected in the PVN increased baseline levels of RSNA , has been suggested as a cause of the centrally mediated sympathoexcitation in HF. In conscious normal rats, unilateral PVN microinjections of the NO donor, sodium nitroprusside (SNP) , decreased MAP levels are decreased in HF, in particular, in neurons of the PVN (Patel et al., Studies have also examined the role of the PVN in modulating the reflex regulation of RSNA. The primary reflex studied in this regard has been the cardiopulmonary afferent reflex. In essence, an increase in circulating blood volume using infusion of plasma or plasma expanders results in an increase in renal blood flow that is mediated by a decrease in SNA to the kidney. In this context, the PVN plays a critical role in mediating this inhibition of RSNA in a variety of species (Badoer et al., disinhibition of the PVN attenuates the RSNA inhibition in contrast to conscious animals where inhibition of the PVN attenuated the RSNA inhibition. One important unanswered question is how inhibition of the PVN leads to an inhibition of RSNA. In this context, there are numerous inhibitory interneurons in the PVN. When muscimol is microinjected into the PVN, it is still unclear what subsets of neurons are being inhibited and whether inhibition of inhibitory interneurons occurs. This makes it hard to decipher neuronal networks within the PVN. Irrespective of these limitations, the PVN plays an important role in reflex regulation of RSNA and appears to be important in mediating the impaired reflex regulation of RSNA during HF.It must be mentioned that in anesthetized animals, Understanding the role of RSNA in HF is difficult in humans where measurement is indirect at best. When considering findings from animal studies we must be mindful that the degree and model of HF, species and sex of the animals and presence of anesthesia all add complexity to the interpretation. Evidence does suggest that deterioration in cardiac function in HF is closely associated with elevations in RSNA, although it would appear that increased NE spillover only becomes significant once the ejection fraction is reduced below 30%. The exact mechanisms driving the increase in RSNA remain controversial, in part reflecting the diverse nature of the origins of HF. While any changes in sensitivity of the reflex regulation of SNA may simply be secondary to the increase in RSNA, there is emerging evidence that, at least in some instances, the peripheral chemoreceptors may be involved in driving the increase in RSNA. While in severe HF, there may be an impaired ability of the baroreflex to inhibit RSNA in males, it is difficult to suggest that the arterial baroreceptor reflex is the primary driver for the increase in RSNA. Additionally HF is characterized by impaired regulation of body water content. 
The PVN is highlighted as one central area which is associated with the impairment of the regulation of blood volume in HF, with an impaired ability of the PVN to inhibit RSNA in response to the increased blood volume observed in HF. Understanding the drivers for the increased RSNA and the variability that occurs between individuals is key to the successful management of HF.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "The paper by Femel et al. demonstrWhile 2013 has been selected by Science's editors as the year of cancer immunotherapy for the positive clinical results on two immune checkpoint inhibitors, the field of cancer vaccination is also picking up slowly after a 30-year period of gains and losses. The attractiveness of cancer vaccination is clearly the harnessing of the immune system, which we know is capable of recognizing and destroying tumors and even able to cure patients in rare cases. Cancer vaccines are biological response modifiers aiming at redirecting or focusing the body's immune response to destroy the tumor. A specific and underestimated class of these vaccines is designed to recognize the tumor vasculature.Inhibition of angiogenesis has long been considered an attractive adjuvant option for the treatment of cancer. The early detection of vascular endothelial cell growth factor as a major regulator of angiogenesis and the development of its inhibitor bevacizumab has focused the field's attention mainly towards this and other growth factor signaling pathways. But it seems that a ceiling has been reached with these drugs in the prolongation of patient survival. It is therefore suggested that the direct targeting of tumor endothelial cells may be a better option. Since it is already known that inhibition of angiogenesis has a positive impact on anti-tumor immunity , using tA major requirement for the success of a vaccination approach against the vasculature is the availability of highly specific targets. Several studies have reported on increasingly specific markers of the tumor vasculature , allowinThe papers by Femel and Zhuang highlight the impact of this exciting field of research, which is anticipated to confirm similar efficacy in clinical trials in the near future."} +{"text": "Deep brain stimulation (DBS) of the subthalamic nucleus (STN) is used to relieve motor symptoms of Parkinson\u2019s disease. A tripartite system of STN subdivisions serving motoric, associative, and limbic functions was proposed, mainly based on tracing studies, which are limited by low numbers of observations. The evidence is compelling and raises the question as to what extent these functional zones are anatomically segregated. The majority of studies indicate that there is anatomical overlap between STN functional zones. Using ultrahigh-resolution magnetic resonance imaging techniques it is now possible to visualize the STN with high spatial resolution, and it is feasible that in the near future stereotactic guided placement of electrical stimulators aided by high-resolution imaging will allow for more specific stimulation of the STN. The neuroanatomical and functional makeup of these subdivisions and their level of overlap would benefit from clarification before serving as surgical targets. We discuss histological and imaging studies, as well as clinical observations and electrophysiological recordings in DBS patients. 
These studies provide evidence for a topographical organization within the STN, although it remains unclear to what extent functionally and anatomically distinct subdivisions overlap. The subthalamic nucleus (STN) lies deep within the brain on top of the cerebral peduncle and plays a central role in both the direct and indirect pathway of the basal ganglia , which catalyzes the conversion of glutamate into GABA . The magnitude of the effects was maximal in the ventral part of the STN and dependent on dopamine medication (Buot et al. DBS stimulation has profound effects on STN function, although stimulation of adjacent brain areas cannot be excluded (Fontaine et al. The need for a better understanding of the STN\u2019s functional neuroanatomy is evident, both for increasing insight in the pathogenesis of movement disorders as well as optimization of DBS for improvement of motor symptoms and prevention of unwanted side effects. In all of the studies describing anatomofunctional subdivisions or zones within the STN we have reviewed, the respective authors carefully discuss the existence of topographical overlap between the subdivisions. The magnitude of the overlap is of clinical relevance. With the development of new high-resolution MRI techniques in combination with in vivo electrophysiological measures during stereotactic surgery, and technical advancements in the available electrical stimulators, DBS will potentially become even more valuable in the future. It is feasible that neurosurgeons will be able to selectively target the specific parts of the STN. However, we feel that at present the evidence supporting the existence of subdivisions of the STN without information on the degree of overlap between these subdivisions is insufficient to provide surgeons with specific targets within the STN. We have reviewed support for STN subdivisions from different research disciplines within the field of neuroscience. A topographical organization within the STN may be present; however, it remains unclear to what extent functional and anatomical subdivisions/zones overlap. This review and recent publications by us and others (Alkemade and Forstmann"} +{"text": "Staphylococcus aureus (3-4X106 CFU). The inoculated animals were divided into three groups (6 donkeys in each group). The arthroscopic examination was carried out before induction of septic carpitis and 3 days (group I), 14 days (group II), and 28 days (group III) after induction of infection. The arthroscopic examination of group I revealed hyperemia of synovial membrane and hypertrophied villi. In group II, severe hyperemia of synovial membrane, hypertrophied villi, pannus in the joint cavity and beginning of articular cartilage erosion were found. In group III, severe hyperemia of synovial membrane, hypertrophied villi and more prominent articular cartilage erosion were present.Experimental septic arthritis was induced in the radiocarpal joint of 18 donkeys by intra-articular injection of In adult horses, direct trauma is the most common cause of infectious arthritis where tissue destruction and cellulitis can lead to an open joint and subsequently cause infectious arthritis . 
The most common microorganism isolated from donkeys suffering from septic carpitis was Staphylococcus aureus .Septic arthritis can be caused by hematogenous infection, traumatic injury, or itrogenic infection .Arthroscopy provides a far more thorough examination of the carpal joints than arthrotomy by effectively moving the light source and thus enabling the examiners a closer inspection into the joint. Complete examination of the joint with the use of an arthroscope cannot be emulated by any arthrotomy incisions stated that the arthroscopic examination of the radiocarpal joint of the donkey from the lateral portal revealed that the medial joint angle formed by the distal surface of the radius and the proximal surface of the radial carpal bone was first seen. They also added that arthroscopic examination of the radiocarpal joint from the medial portal revealed that the first area seen was the lateral joint angle formed by the proximal articular surface of the intermediate carpal bone and the distal articular surface of the radius.Magda et al., 2011; Spahn et al., 2011).Arthroscopy is considered the most valid method for cartilage evaluation, but in case of small cartilage lesions there is a risk of overestimation of the defect size. The advantage of arthroscopy compared to the other diagnostic modalities is the possibility of immediate treatment of the identified joint problem .Adequate arthroscopic examination of the intercarpal or radiocarpal joint is possible through a single dorsal arthroscopic portal for each joint. Using two dorsal separate portals improves visualization by increasing freedom of movement because of reduced soft tissue tension around the arthroscope and a reduced tendency to slip out of the joint when examining areas close to the arthroscopic portal . This may cover the foreign material and devitalised tissues, act as a nidus for bacterial multiplication and is rich in inflammatory cells, degradative enzymes, and radicals.It is also a barrier to synovial membrane diffusion, thus compromising further intrasynovial nutrition and limiting the access for circulating antimicrobial drugs. The quantum and the nature of the pannus appears to be dependent on the type and number of infecting organisms .Arthroscopic examination of septic arthritis of the tarsocrural joint in horses revealed erosions and irregularity of articular surface, fissure, articular degeneration and hypertrophied villi .Borg and Carmalt (2013) concluded that joint infection rate in the horse population that had elective arthroscopy without antimicrobial prophylaxis compared favorably with other reports citing 0.9% sepsis in horses after arthroscopy.(Staph. aureus) of the radiocarpal joint in donkeys.This study was aimed to describe the arthroscopic anatomy of the radiocarpal joint in donkeys, and to shed light on the arthroscopic assessment of induced septic arthritis Staph. aureus until the infective inoculums were obtained. The remaining 18 donkeys were divided into three groups; each group consisting of 6 animals. In these animals, induction of septic carpitis in the radiocarpal joints of donkeys by Staph. aureus was induced. After the induction of septic carpitis, arthroscopic examinations were performed.The present study was experimentally induced on 32 healthy, adult donkeys of both sexes. Fourteen animals were used for serial propagation of the These animals were divided into three groups. 
The examination of the three infected groups was done 3, 14 and 28 days after the induction of septic carpitis of the radiocarpal joint. The ethical committee in the Faculty of Veterinary Medicine, Benha University approved this work.Staph. aureus colony (standard colony). Staph. aureus used for induction of septic arthritis in this study was isolated from a donkey suffering from septic arthritis. A sample from the infected synovial fluid of the carpal joint was aspirated, inoculated into nutrient broth and incubated for 24 hours at 37\u00b0C. Each radiocarpal joint was inoculated with 3-4X106 CFU of viable Staph. aureus.The induction of septic arthritis in donkeys was performed using a viable Arthroscopic examination was performed on donkeys under the effect of xylazine HCl (1mg/kg) as the tranquilizer and thiopental sodium (5% solution intravenously with a dose of 6-8 mg/kg body weight).The carpus was clipped circumferentially from the proximal metacarpus to the distal radius, cleaned and draped with sterile towel immediately prior to surgery. The donkeys were placed in dorsal recumbency and hooves were suspended on bar for ease of control of the degree of flexion of the joint.The arthroscope was inserted in the radiocarpal joint while the joint was 20 degree flexed. Two arthroscopic portals were used for each joint. The lateral portal was located half way between the tendons of the extensor carpi radialis and the common digital extensor tendon. The medial portal was located at about 1 cm medial to the tendon of extensor carpi radialis.Distension of the joint was obtained with 15-20 ml of 0.9 normal saline using an 18 gauge needle inserted at the proposed site for insertion of the arthroscope mentioned above. The arthroscopic sleeve was inserted through a 1 cm skin incision at the site of needle insertion. With the aid of a sharp obturator, the sleeve was advanced into the joint until the fluid began to escape from the cannula. The sharp obturator was replaced with a 25 & 4 mm, forward oblique viewing arthroscope. An ingress fluid line and fiber optic light cable were attached to the arthroscope. The joints were viewed with a video camera and a monitor. The joints were irrigated throughout the procedure with normal saline delivered from an infusion set.Staph. aureus in donkey\u2019s carpal joints revealed that the infective dose of septic carpitis was 3-4X106 CFU.The serial propagation of Arthroscopic examination of the radiocarpal joint from the lateral portal revealed that the medial joint angle formed by the distal surface of the radius and the proximal surface of the radial carpal bone was the first area seen .Withdrawing of the arthroscope makes easier the visualization of the proximal surface of the intermediate carpal and the distal surface of the radius in its middle portion could be easily seen. The lateral joint angle could be seen from this portal with some difficulty, but it was better examined from the medial portal.Arthroscopic examination of the radiocarpal joint from the medial portal revealed that the first area seen was the lateral angle formed by the proximal articular surface of the ulnar carpal bone, the lateral aspect of the intermediate carpal bone and the distal articular surface of the radius .The joint space, synovial membrane and villi could be easily examined. The shape of the villi varied from polyp like , slenderArthroscopic examination of infected group I revealed that the cartilage remained clear and did not change. 
Mild degree of synovitis which was characterized by hyperemia and petechiation of the villi with sliArthroscopic examination of infected group II revealed the presence of a severe degree of synovitis which was characterized by hyperemia and petechiation of the villi. Severe degree of congestion and hypertrophied villi were present. Pannus was present at this stage in addition to the beginning of articular cartilage erosion .Arthroscopic examination of infected group III revealed the presence of a more severe degree of synovitis characterized by hyperemia and petechiation of the villi. Severe degree of congestion which appear as patches, hypertrophied villi and more prominent erosion of articular cartilage was present as seen in Staph. aureus was used in the induction of septic carpitis in donkeys because the most commonly isolated microorganisms from clinical cases was this microorganism. This study was performed in donkeys to examine the severity of every stage of septic carpitis and furthermore, use it for the clinical trials. These findings were in agreement with that reported by El-Maghraby and Al-Bawa\u2019neh (2002).In the present study, et al. (2005). However, Martin and McIlwraith (1985), McIlwraith (1990) and McIlwraith et al. (2005) recorded that adequate arthroscopic examination of the radiocarpal joint was possible through a single arthroscopic portal.The obtained results revealed that, the arthroscopic examination of the all aspects of the radiocarpal joint was accomplished through medial and lateral portals. This finding were in agreement with Magda, Using two separate portals, in the present study improved visualization of all joint structures by increasing the freedom of movement of arthroscope inside the joint cavity, reduction of soft tissue tension around the arthroscope and its tendency to slip out of the joint.et al. (2005).The lateral portal of the radiocarpal joint was positioned halfway between the extensor carpi radialis tendon and common digital extensor tendon at the level of the radiocarpal joint. The medial portal was located medial to the extensor carpi radialis tendon at the level of the radiocarpal joint. This finding were in agreement with Magda et al. (1987), McIlwraith (1990) and McIlwraith et al. (2005) in horses. Arthroscopic examination of the radiocarpal joint was best achieved with the joint flexed at 20 degrees. The same finding was recorded by Magda et al. (2005) in donkeys and also by Martin and McIlwraith (1985) and McIlwraith et al. (2005) in horses. Arthroscopic examination of the joint flexed at 20 degree reduces the tension of the synovial membrane on the dorsal aspects of the articular surfaces of the radial carpal bone, intermediate carpal bone and the radius.The same portals also were described by Martin and McIlwraith (1985), Hurtig and Fertz (1986), McIlwraith Arthroscopic examination in cases of septic arthritis revealed synovitis which was characterized by hyperemia and petechiation of the villi. This finding were in agreement with McIlwraith (1984) who illustrated that synovitis was characterized by hyperaemia, petechiation of the villi, development of small hyperaemic villi in abnormal locations and new forms of villi. With severe inflammation, fusion of villi and the presence of fibrinoid strands may be observed.Arthroscopic examination of septic carpitis (group II) revealed the presence of a severe degree of synovitis with hyperemia and petechiation of the villi. Severe degree of congestion and hypertrophied villi were present. 
Pannus was present at this stage in addition to the presence of erosion of the articular cartilage. These findings were in agreement with McIlwraith (1984), Wright (2002) and AbdEl-Glil (2012).Arthroscopic examination of septic carpitis (group III) revealed presence of more severe degree of synovitis characterized by hyperemia and petechiation of the villi. Severe degree of congestion which appear as patches, hypertrophied villi and erosion were also present. These findings were in agreement with McIlwraith (1984) and AbdEl-Glil (2012).In conclusion, arthroscopy provides advantages of diagnosis and management of infected joints to choose the suitable method of treatment."} +{"text": "Spinal cord injury is a life-transforming condition of sudden onset that can have devastating consequences. A multidisciplinary, functional goal-oriented programme is required to enable the tetraplegic patient live as fully and independently life as possible. Physiotherapy is a very important part of the multidisciplinary team required to prevent many of the immobilization complications that may result in serious functional limitations, reduce overall morbidity and achieve well patterned recovery. This study therefore highlights the neuromuscular adaptations of tetraplegic patients to physiotherapy over a period of six weeks. Fifteen patients participated in this study and the results showed that even though changes in the musculoskeletal parameters are inevitable in tetraplegics, the extent/degree of reduction of these parameters was grossly minimized in the studied subjects through the administration of physiotherapeutic measures. However, further research using a large sample size will be required to evaluate the physiologic adaptations of the neuromuscular system to the physiotherapy interventions among patients with spinal cord injury. Spinal cord injury (SCI) is a life transforming condition with a great risk of development of an array of debilitating and potentially life- threatening complications . Its cliTetraplegic patients have impairment of motor and/or sensory function in the cervical segment of the spinal cord . Tetraplwww.spinal-injury.net).The post traumatic dysfunction of the spinal cord causes loss of homoestatic and adaptive mechanisms which keep people naturally healthy. There are diverse causes of SCI but any mechanism that causes injury to the vertebral column or the bones in the back can damage the spinal cord. These causes can be traumatic (84%) or non-traumatic (16%) . The mosTraumatic spinal cord injury (SCI) in the UK affects an estimated 10\u201315 people per million population per year so there are around 40,000 individuals in the UK living with a traumatic SCI . 
The estBielby and Joint contractures or permanent limitations of joint movement usually due to poor positioning, lack of movement and/or muscle spasticity; muscle atrophy, a shrinking or wasting of musculature due to lack of use; osteoporosis, deterioration of the bone that may occur due to decreased weight bearing, as well as factors related to the injury itself; development of deep venous thrombosis and possible pulmonary embolism due to lack of or reduced muscular pump activities for adequate venous return resulting from paralysis or weakness of muscles; and decreased respiratory function which can create such problems as increased risk for respiratory infections, congestion, rapid breathing and/or increased shortness of breath.Management of the tetraplegic patients requires a multidisciplinary team approach, physiotherapy being one of the major inputs required in the care. Physiotherapy commences as soon as the patient is admitted and he/ she is stable clinically. The main objectives of physiotherapy in the acute phase include prevention of the above listed risks, maintenance of full range of motion (ROM) of all joints within the limitations of the injury, maintenance/ strengthening of all innervated muscle groups and to monitor neurological status and manage appropriately .Comprehensive rehabilitation (physiotherapy as a key part) along with early surgery, where necessary, is believed to markedly reduce the overall morbidity of spinal cord injured patients by enabling the patient to lead an independent life .Physiotherapeutic inputs have been considered an important adjunct in the care of the tetraplegic patients following spinal cord injury but no study has systematically and subjectively documented to what extent these measures or inputs maintain and improve functions towards eventual mobilization of tetraplegics after the acute care. This study therefore aims to fill this gap in knowledge.The following research questions were raised to guide the study.Would physiotherapeutic inputs achieve the expected goals in the recovery pattern / trend of tetraplegic patients? Specifically, the investigation probed into whether there would be a maintenance or improvements in the muscle strength, muscle girth, and range of movement among tetraplegic patients following physiotherapy measures in the acute phase of their management?The following hypotheses were formulated and tested.Physiotherapeutic inputs would not significantly contribute positively towards mobilization of spinal cord injured patients with tetraplegia during the acute phase of their management.The specific hypothesis formulated and tested was whether there would be no significant improvements in the muscle strength, muscle girth, and range of movement among tetraplegic patients following physiotherapy measures in the acute phase of their management.Fifteen subjects participated in this study which comprised of thirteen (13) males and two (2) females with average ages of 29.3 and 37.0 years old respectively. They were randomly selected from those admitted in the neurosurgical ward of the University of Benin Teaching Hospital, Benin City, Nigeria, between 2009 and 2011. 
The age range of the males was between 12 and 50years old while that of the females was between 27 and 47 years old.Inclusion criteria was the enrollment of patients with quadriplegia resulting from SCI without associated injuries like head injuries, psychiatry condition, fractures of the limbs or ribs; medical conditions like diabetes mellitus and positive retroviral states. Exclusion criteria included aged patients of over 65years old and those with chronic (old) SCI. Ethical approval of the research and ethics committee of the study institution was sought and obtained.This study measured musculoskeletal parameters of fifteen (15) tetraplegic patients at the commencement of physiotherapy immediately at their being admitted to the neurosurgical ward, two weeks, four weeks and six weeks post admission. The progress or otherwise on the specified variables during the period of immobilization were compared. Physical measurements of these parameters were taken using validated instruments at the different stages of care.The present physiotherapeutic measures employed in the management of SCI patients with tetraplegia were Chest Physiotherapy, Passive Mobilization, Proper Positioning on bed, strengthening exercises, soft tissue mobilization, pain management procedures and preparation for mobilization.Parameters that were measured adapted Adams and Hamblen\u2019s (1990) protocol which focused on muscle strength, muscle girth, range of joint movements/motion, forced expiratory volume and maximum inspiratory volume/capacity. Muscle strengths were measured subjectively using Medical Research Council grading as follows: 0= no contraction; 1= a flicker of contraction; 2= slight power, sufficient to move the joint only with gravity eliminated; 3= power sufficient to move the joint against gravity; 4= power to move the joint against gravity plus added resistance; 5= normal power .Muscle girths were measured using measuring tape rules. The girths of arms, forearms, thighs and legs were measured in these patients using the International Standards for Anthropometric Assessment (ISAK) recommended points and positions. The bilateral arm girth measurements were taken at the level of the mid acromiale-radiale with the patients assuming a relaxed supine lying position and the elbow being in an extension with the tape positioned perpendicular to the long axis of the arm.The forearm girths bilaterally were taken at the maximum girth of the forearm distal to the humeral epicondyles with the forearm supinated while relaxing the muscles of the forearm. For consistency in the measurement, a point 10cm away from the tip of the olecranon was marked as the reference point. For the thigh and leg girths measurements, a distance of 12.5cm above and below the upper and lower tips of the patellae was used as the reference point for the purpose of uniformity. The patients were measured in a relaxed supine lying position with the knee joints extended. In all the girth measurements, it was ensured that the tape was firmly passed round the limbs without excessive tension and that it did not indent the skin or slip. Three measurements were taken on each point and the average values of the measurements were taken and recorded.The joints range of motion of all the patients were measured with the aid of a goniometer. The joints used in this study were the shoulders, elbows, wrists, hips, knees and ankle joints bilaterally. 
Measurements were also taken at two, four and six weeks on admission.The neurosurgical management of a tetraplegic compulsorily requires a period of six weeks of immobilization on bed for postural reduction. Tetraplegic patients admitted in the neurosurgical ward in supine lying positions with either a cervical collar or Crutchfield tongs/calipers or Gardner-Wells traction fixed on the skull depending on the degree of instability to immobilize the injured cervical spine for spinal stabilization were immobilized on bed for an average of six weeks. A few of the patients with unstable spine or observed not to have improved satisfactorily with conservative approach were operated upon using surgical procedures for spinal stabilization. In such an instance, they spent less than six weeks on bed post operation for proper spinal stabilization and rehabilitative care. All the patients had daily physiotherapy during the acute phase of their management. The daily treatment consisted of manual therapy of Chest Physiotherapy of vibration and shaking of the chest for about 5- 10minutes along with active deep inspiration exercise. Passive mobilization of all the extremities of the patients carried through safe range or degree, strengthening exercises all the muscles of the extremities, soft tissue mobilization with aid of talcum powder, proper positioning of the limbs on bed, pain management procedures and preparation for mobilization in the form of ambulation when due. The treatment regimen takes an average of one hour per patient daily and each patient had four treatment sessions per week. The treatment period was six week of postural reduction on bed. The data obtained were fed into SPSS version 16 statistical analysis. Descriptive statistics of mean, standard deviation and percentage as well as inferential statistics of one-way repeated measures ANOVA were used to test the hypotheses at alpha level of 0.05.The results are presented in tables In terms of inferential statistics using one-way repeated measures, three out of sixteen parameters were not statistically significant. Specifically, In The second and third measurements for right upper limb, below elbow, showed statistically significant values of 16.783 with 0.058 and 7.647 with 0.058 respectively as shown in The purpose of this study was to investigate effect of early physiotherapy measures on musculoskeletal parameters that are required for functional activities in patients with spinal cord injury (SCI).The outcome of the study revealed significant differences in the muscle girth measurements in majority of the body segments of the extremities of the patients measured. The study revealed no significant difference in the reduction of the chest circumferential measurements taken two weeks on admission but a significant difference in the increase in the circumferential measurement obtained by six weeks during rehabilitation. However, there were significant differences both in the initial reduction and the eventual increase in the waist circumferential measurement of the patients during the six weeks period of admission for postural reduction. The chest and waist circumferential measurements were used as surrogates for chest and abdominal muscle strength and lung ventilation capacity (inspiratory and expiratory functions) in this study. Respiration is a function that is critical for sustaining life but also significantly important for generating the pressure needed to cough, speak, and swallow . 
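As a rough illustration of the repeated-measures design described above (the same musculoskeletal parameters measured at admission and at two, four and six weeks, compared with a one-way repeated-measures ANOVA at an alpha level of 0.05), the following Python sketch runs the same type of test outside SPSS. The subject count matches the study, but the outcome values, the choice of arm girth as the example variable and the use of statsmodels' AnovaRM are illustrative assumptions, not the authors' analysis.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)

# 15 patients measured at admission (week 0) and at 2, 4 and 6 weeks;
# the girth values are invented purely to make the example runnable
subjects = np.repeat(np.arange(1, 16), 4)
week = np.tile([0, 2, 4, 6], 15)
girth = 28 - 0.15 * week + rng.normal(0, 0.4, size=subjects.size)   # slight decline over time

df = pd.DataFrame({"subject": subjects, "week": week, "girth": girth})

# one-way repeated-measures ANOVA with time point as the within-subject factor
result = AnovaRM(data=df, depvar="girth", subject="subject", within=["week"]).fit()
print(result)   # the F test is judged against the alpha level of 0.05 used in the study
```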
RespiraTurning and positioning are measures that can minimize spasticity and prevent flexor muscle hypertonia in SCI patients who are recumbent on bed. The study also recorded full range of motion on all the joints of the extremities measured throughout the study period. This has been achievable through the daily administration of passive mobilization of the limbs by the physiotherapists. Contractures (reduced joint mobility) were due to loss of extensibility in soft tissues spanning joints, and were found to be a common complication of spinal cord injury. One study found that spinal cord injured patients had, on average, seven contractures at between 6 and 7 weeks after injury . ContracIn the present study, occurrence of pain was anticipated and was also prevented by the use of medications by the neurosurgeon to avert occurrence of chronic pains. Topical analgesic gels were also applied on the joints of the upper and lower extremities to relieve pain where applicable. These were gently massaged into the joints by the physiotherapists; patients\u2019 relatives were taught how to apply these gels where necessary in order to ensure twice or thrice daily applications when the severity of pain demanded such.This study has presented results that further strengthened the effects of physiotherapeutic measures as important adjuncts in the management of SCI patients with tetraplegia especially during the acute care phase of management. It showed that even though changes in the musculoskeletal parameters are inevitable in tetraplegics, the extent/degree of reduction of these parameters can be reduced grossly through the administration of physiotherapeutic measures. It was evident from the results that the rate of neurophysiological responses that follow the spinal cord injuries which result in deterioration of neuromuscular parameters like the muscle mass, tones, power and joint flexibility can be greatly influenced by application of physiotherapy techniques and measures. Therefore, it is highly recommended that structured physiotherapeutic measures should be an adjunct to the care and management of SCI patients. It is also recommended that further studies that will make use of larger samples and that will also follow up effects of physiotherapeutic measures on recovery pattern of patients with SCI beyond the acute stage be conducted."} +{"text": "Suggestion has been defined as a form of communicable ideation or belief, that once accepted has the capacity to exert profound changes on a person's mood, thoughts, perceptions and behaviors . The hypnotized individual will be concentrating on the hypnotist's voice relaying a series and variety of suggestions to which the individual may or may not respond. Under the influence of hypnotic suggestion (or post-hypnotic suggestion which is a suggestion given under hypnosis but activated post-hypnosis) the experience of pain suggests that placebo effects require the involvement of the prefrontal cortex whereas hypnotic suggestibility is increased when the prefrontal cortex is hypoactive. Applying TMS to the left dorso-lateral prefrontal cortex (DLPFC) to impair function in this region results in increased hypnotic suggestibility matter microstructure and hypnotic suggestibility (Hoeft et al., The PFC is a large region of the brain, and one might consider the role of different regions of the PFC in the varieties of suggestion. The involvement of the DLPFC could differentiate hypnosis and placebo effects. 
The role of more ventromedial regions in other forms of suggestion (Asp et al., Lifshitz et al. draw a dNotably, research suggests that measures of placebo suggestibility do not correlate with hypnotic/imaginative suggestibility measures (Kihlstrom, In conclusion, whilst there are potentially informative similarities between hypnotic suggestion and placebos, their differences, particularly with regard to the differential contributions of regions of the prefrontal cortex, are also potentially informative as to the nature of suggestion more generally.The author confirms being the sole contributor of this work and approved it for publication.This research was supported by Bournemouth University.The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "European Commission and Patients Associations identify Registries as strategic instruments to improve knowledge in the field of Rare Diseases ,2. InterIn this study, exploratory data analyses were applied to the EPIRARE (European Platform for Rare Diseases Registries) Registry Survey in order to generate a macro-classification and characterization of RDPR and to deepen different informative needs.At first, a Multiple Correspondence Analysis (MCA) suggested associations between selected variables characterizing the structure of RDPR Figure . Then, aThese results, identifying different profiles of RDPR and specific informative needs, represent an informative support aimed at addressing the activities for the design of an European platform of Rare Diseases. Identification of informative cores could address the activities of a platform able to enhance the sharing of information between RDPR with common aims, but also to facilitate a coherent dialogue between RDPR with different profiles.Guide to interpretation: the arrows indicate the directions of association among the aims; the dimension of the circles represents the frequency of the variable. The higher are the coordinate and the frequency of the variable, the more it contributes to the interpretation of the factorial axis; variables placed on the same direction are correlated."} +{"text": "Despite a wealth of fossils of Mesozoic birds revealing evidence of plumage and other soft-tissue structures, the epidermal and dermal anatomy of their wing\u2019s patagia remain largely unknown. We describe a distal forelimb of an enantiornithine bird from the Lower Cretaceous limestones of Las Hoyas, Spain, which reveals the overall morphology of the integument of the wing and other connective structures associated with the insertion of flight feathers. The integumentary anatomy, and myological and arthrological organization of the new fossil is remarkably similar to that of modern birds, in which a system of small muscles, tendons and ligaments attaches to the follicles of the remigial feathers and maintains the functional integrity of the wing during flight. The new fossil documents the oldest known occurrence of connective tissues in association with the flight feathers of birds. Furthermore, the presence of an essentially modern connective arrangement in the wing of enantiornithines supports the interpretation of these primitive birds as competent fliers. 
Exceptional fossil preservation allows access to crucial information regarding the integumentary structures244457911The distal wing of MCCMLH31444 is likely exposed in ventral view, as evidenced by the presence of ventral flexor depressions in the exposed side of the pre-ungual phalanges of digit I and II and the metacarpal II15Confuciusornis sanctus101010Three epidermal structures indicate that the connective tissues of MCCMLH31444 are mainly preserved as calcium phosphate while the plumage has likely undergone a carbonization process . PhosphaThe parallel pattern of plait-like fibers alternating with an unorganized matrix in MCCMLH31444 is characteristic of modern tendons and ligaments that are subject to multidirectional mechanical stresses1919229Ligg. phalangoremigalia distalia, digitationes remigiales999In modern birds, networks of smooth muscles attached by elastic tendons to the outer walls of feather follicles control the coordinated movement of groups of feathers91024The evolution of the modern network of smooth muscles, elastic tendons and ligaments involved in the function of the wing\u2019s flight feathers was a paramount event in the fine-tuning of aerodynamic capabilities in birds. By allowing these feathers to be repositioned in tandem, this sophisticated dermal system helped the wing to maneuver as a single functional unit and to cope with the strenuous aerodynamic stresses of flapping flight. The 125-million-year-old MCCMLH31444 provides the oldest reported evidence of this intricate connective network and its earliest phylogenetic occurrence. The remarkably modern anatomy and arrangement of the connective tissues preserved in the wing of MCCMLH31444 implies that Early Cretaceous enantiornithines had already developed forelimbs morphologically well adapted for flight, losing most of the primitive grasping functions attributed to the dinosaurian forerunners of birdsProtopteryx fengningensis4418418Although still showing a suite of primitive skeletal traits4How to cite this article: Naval\u00f3n, G. et al. Soft-tissue and dermal arrangement in the wing of an Early Cretaceous bird: Implications for the evolution of avian flight. Sci. Rep.5, 14864; doi: 10.1038/srep14864 (2015)."} +{"text": "Manualised Family Based Treatment (FBT) is considered the best evidenced treatment for adolescents with Anorexia Nervosa. However, the replication of outcome data from research trials into treatment in a general clinical setting can be a challenge. Following on from the presentation entitled 'Overcoming challenges of implementation of FBT at the Regional Eating Disorders Service (REDS) in Auckland, New Zealand', the results of the implementation of the delivery of FBT at Auckland's Regional Eating Disorders Service (REDS) are presented. Results are based on a clinical audit of treatment outcomes in the year preceding the implementation of FBT. Data will be presented of the period of 18 months while FBT was established, and the 6 months afterwards. A comparison of these different periods will be made with respect to weight recovery and psychometrics as well as effects on length of admission and re-admission rate to the paediatric hospital.Service Initiatives: Child and Adolescent Refeeding and FBT stream of the 2014 ANZAED Conference.This abstract was presented in the"} +{"text": "According to World Health Organization (WHO), dental caries is common among 60-90% of school children and nearly among 100% of adults. 
Further, a severe form of periodontal disease is found in 15-20% of 35-44-year-old patients. WHO experts emphasize that timely prevention of dental diseases will not only contribute importantly to maintaining oral health but would also significantly reduce the cost of care and treatment of oral diseases. An important component in the prevention of oral diseases is informing patients about the possibility of maintaining oral health with the help of available oral hygiene products. For adequate and efficient planning of activities to raise oral health awareness it is necessary to estimate the level of oral health literacy of the community. In different countries there are certain questionnaires which are primarily used during scientific research. Many of these questionnaires and tests are not well suited for use in patients of a different culture and language. We suggest that standardization of oral health literacy assessment would be conducive to more effective evaluation of the oral health care needs of a given population. Thus, we have developed an index of oral health literacy which has the following characteristics: - the questions are universal and reflect current trends in the prevention of oral diseases; - the questions have no cultural contexts and attachments; - the questions can be translated into any language; - the questions have several answer variants; - the index gives a quantitative assessment of oral health literacy. Pilot studies showed that the developed index of oral health literacy can reliably predict the level of knowledge in young patients and the state of their oral health. In conclusion, the use of the developed index should allow assessment of the oral health literacy of patients regarding aspects of oral health preservation irrespective of nationality and language, and should facilitate the development of personalized approaches to the prevention of dental diseases."} +{"text": "Controlled Clinical Trials are the Gold Standard to assess the efficacy and safety of a new therapeutic intervention. Furthermore, their results are used to make decisions in relation to public health issues. This st... All trials led by CENCEC from 1992 to 2013 were characterized according to specific variables. A review of the available information at the Cuban Regulatory Agency regarding the registration process of new drugs was performed. Interviews with sponsors, clinical investigators and health authorities were carried out to identify products registered outside the country, health benefits and the impact of the results of clinical trials conducted by CENCEC. In this period, 133 clinical trials were completed evaluating 58 products from 28 sponsors, with the participation of 1075 clinical sites from 90 hospitals and 60 primary health care units, involving 4241 researchers. Some of those studies were performed for the evaluation of health technologies and to search for evidence for public health decisions.
The activities of CENCEC were carried out according to international standards: a certified Quality Assurance System (ISO 9001:2008) and the Cuban Public Registry of Clinical Trials (a WHO primary registry). The benefits of \"new intervention\" clinical trials were related to morbidity and mortality rates, new patterns of disease management, better infrastructure of clinical sites, improvement of the quality of medical care, the introduction of new technologies and the capacity building of clinical investigators. The experience of Cuba, a low-income country with a defined health policy, shows that the results of clinical trials are an effective tool to improve health services, to introduce evidence efficiently into medical practice and to support decision making in public health, leading to improvements in clinical care. The experience gained could be applied in other countries of Latin America to achieve public health results."} +{"text": "The aim of the present study was to determine different affections of the salivary ducts in buffaloes with special reference to diagnosis and treatment. The study was carried out on 39 buffaloes suffering from different affections of the salivary ducts. The recorded affections of the salivary ducts in buffaloes include: ectasia of the parotid duct (21 cases), parotid duct fistula (15 cases) and sialocele (3 cases). Each case was subjected to a full study including case history, clinical examination, diagnosis, and treatment whenever possible. Exploratory puncture and radiography were used for confirmation of the diagnosis. Intraoral marsupialization was performed for treatment of parotid duct ectasia. Salivary fistula was corrected by one of two successful techniques: the first by reconstruction of the parotid duct and the second by ligation of the parotid duct just caudal to the fistula opening. Sialoceles were corrected by removal of the mandibular salivary gland of the affected side. The main salivary glands in buffaloes consist of three pairs, namely the parotid, mandibular and sublingual glands (et al., 1988). The parotid duct emerges from the distal extremity of the gland and courses forward along the ventral border of the masseter muscle, then turns upward and gains the rostral border of the muscle. After a short course, the duct opens into the buccal vestibule at the parotid papilla. The sublingual duct originates from the rostral extremity of the gland and courses deeply with the mandibular duct until it opens alone or into a common duct with the mandibular duct at the sublingual caruncle. The common affections of the salivary ducts in buffaloes have been recorded in the available literature. They include ectasia of the parotid duct, salivary fistula, sialolith and sialoceles. Three affections of the salivary ducts were recorded in the present study, namely ectasia of the parotid duct, parotid duct fistula and mandibular sialocele. Ectasia of the parotid duct was diagnosed in 21 animals aged 4 months to 8 years; 15 animals were females and 6 were males. Diagnosis was established depending on the case history and clinical symptoms. Exploratory puncture and radiographic examination were used for confirmation of the diagnosis. Marsupialization was the treatment of choice and was performed under the effect of the tranquilizer xylazine HCl 2% at a dose rate of 0.05 mg/kg intramuscularly.
The second technique was applied in 9 animals and was performed by ligation of the salivary duct just caudal to the fistula opening under the effect of xylazine HCl 2% at a dose rate of 0.05 mg/kg intramuscularly with subcutaneous local infiltration analgesia using 10 ml of lidocaine HCl 2% at the site of operation.Parotid duct fistula was diagnosed in 15 female buffaloes aged 2 to 8 years. Diagnosis was established depending on the presence of an opening along the course of the parotid duct discharging saliva. The condition was corrected by two techniques. The first one was applied in 6 animals and was performed by the application of a segment of polyethylene tube at the site of fistula for reconstruction of the patency of the salivary duct were applied with 2 cm distance between them and tied after withdrawal of the probe. After that, the skin wound was closed using simple interrupted stitches (# 2). Also, the site of fistula opening was freshened and trimmed followed by simple interrupted silk stitches. Sialocele was recorded in 3 adult buffaloes .A soft fluctuating painless swelling was detected at the mandibular space since a few months as per the case history. No case history was established explaining the beginning of the condition except a gradual appearance and enlargement of the swelling. Exploratory puncture revealed the presence of saliva and aspiration of the cystic swelling resulted in its collapse but refilling was detected within one hour or more.Treatment was performed in only two cases, where unilateral removal of the mandibular salivary gland of the suspected affected side was performed in both cases. Recovery was uneventful in two cases and the third one was not operated at owner\u2019s request.Salivary ducts carry a huge amount of saliva from the salivary glands to the mouth cavity. Anatomically, the parotid salivary duct runs mostly subcutaneously along its course while mandibular and sublingual salivary ducts are embedded in tissues except at their terminal parts where they run submucosally inside the mouth cavity.et al., 1991). Sialoceles occurs due to leakage of saliva from the injured mandibular or sublingual salivary ducts . Intraoral trauma may cause the formation of sublingual or mandibular sialoceles . Diagnosis of salivary duct affections was performed depending on clinical presentation of all cases; however a degree of confusion may be present in the differential diagnosis between ectasia of the parotid duct and sialoceles and formation of a cystic swelling at one side of the head. However, the presence of parotid duct ectasia in calves may have a congenital link . Salivary fistula of the parotid duct was corrected successfully by reconstruction of the duct using a segment of polyethylene tube or by destruction of the gland through ligation of the duct just caudal to the fistula opening. Selection of either technique was based on the surgeon\u2019s preference.Complete reconstruction of the affected parotid duct in cases of parotid duct ectasia was performed in one buffalo with successful results. However, complete resection of the dilated duct and its replacement by a polyethylene tube is a tedious operation and should be performed only under general anesthesia .The presence of sialoceles at the middle of mandibular space does not reveal the affected side. 
However sialography using contrast medium will help in differentiation of the healthy side from the affected side .The main affections of the salivary ducts in buffaloes are two; ectasia of the parotid duct and salivary fistula of the parotid duct while the third affection is mandibular sialoceles which occurs only sporadically. Fortunately, both common affections can be corrected easily, safely and in a short time without postoperative complications and with good prognosis."} +{"text": "Cancer is one of the most urgent health issues of today. According to WHO, the number of cancer cases is expected to increase by 75% in the next two decades . DespiteThe aim of any anti-cancer treatment is to selectively kill cancer cells by targeting key biological properties essential for the maintenance of tumorigenicity and malignant progression . CurrentA newer type of anti-cancer therapy generally called molecularly targeted therapy relies on rationally designed agents to target, with a high degree of specificity, well-defined molecules or pathways that operate in cancer cells to maintain their malignant potential. Although both cytotoxic and molecularly targeted therapeutic approaches generally exploit differences between neoplastic and normal cells, only targeted therapies enable the so-called precision medicine. Recent advances in the field of molecular profiling have opened up a real possibility to make better informed treatment decisions based on the data from personalized tumor profiling . HowNew generation sequencing methodologies while enabling to identify genomic alterations associated with different types of cancer with an unprecedented completeness also revealed the high degree of genetic diversity existing not only between different types of cancer but also between individual tumors of the same histotype . A These considerations are of particular relevance in the context of the CSC hypothesis, which postulates that CSCs constitute only a minor fraction of tumor cells capable of initiating tumor growth . Inde novo and recurrent AMLs, it was established that a minor AML clone underrepresented in the primary tumor became dominant in recurrent tumors as a consequence of chemotherapy heterogeneous types of cancer cells can be separated spatially within a tumor raises several important questions concerning the identity of tumor clones that are capable of escaping from anti-cancer treatments and repopulating the tumor. There is some evidence that exposure to therapy may influence the dynamics of clonal repopulation and lead to the alternation of clonal dominance as a consequence of treatment. For example, by applying next generation sequencing to compare somatic mutations in matched pairs of otherapy . Similarotherapy . In the in vitro might be more feasible. Such an approach has the advantage of reducing the variability in treatment conditions and dissecting the effects of single and combined treatments. By comparing treatment responses in different types of cancer cells from the same tumor should allow to improve predictions on the efficacy of a particular treatment scheme in a particular tumor.The realization that intratumor heterogeneity poses one of the major challenges to overcome resistance to anti-cancer therapy raises a number of questions: are there common molecular denominators underlying resistance to different types of therapy? Is there an interaction between different populations of cancer cells residing in the same or different geographic regions of the same tumor? 
What is the impact of different types of anti-cancer therapy in the emergence of resistant clones? To address these issues, there is a need of suitable methodologies that would take into account the spatiotemporal patterns of intratumoral diversity. It has been proposed that multiple sampling analyses of multiple regions from matched pairs of untreated and recurrent tumors would be required to assess the impacts of intratumoral diversity on the development of resistance to anti-cancer therapies . Such anIt should be noted that the degree of intratumoral heterogeneity may not necessarily reflect an enhanced malignant potential. It is believed that a considerable portion of new mutations arising in the course of tumor evolution are passenger mutations . In suchThe emerging scenario of recurrent tumor growth reveals key roles of intratumoral heterogeneity in intrinsic and acquired resistance to cytotoxic and targeted therapies. Understanding spatiotemporal patterns and dynamics of intratumoral heterogeneity before and during therapy is crucial for the ability to design individual-tailored treatment regimens best suited to a particular molecular context.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Gram- negative bacteria utilize a diverse array of multidrug transporters to pump toxic compounds out of the cell. Some transporters, together with periplasmic membrane fusion proteins (MFPs) and outer membrane channels, assemble trans-envelope complexes that expel multiple antibiotics across outer membranes of Gram-negative bacteria and into the external medium. Others further potentiate this efflux by pumping drugs across the inner membrane into the periplasm. Together these transporters create a powerful network of efflux that protects bacteria against a broad range of antimicrobial agents. This review is focused on the mechanism of coupling transport reactions located in two different membranes of Gram-negative bacteria. Using a combination of biochemical, genetic and biophysical approaches we have reconstructed the sequence of events leading to the assembly of trans-envelope drug efflux complexes and characterized the roles of periplasmic and outer membrane proteins in this process. Our recent data suggest a critical step in the activation of intermembrane efflux pumps, which is controlled by MFPs. We propose that the reaction cycles of transporters are tightly coupled to the assembly of the trans-envelope complexes. Transporters and MFPs exist in the inner membrane as dormant complexes. The activation of complexes is triggered by MFP binding to the outer membrane channel, which leads to a conformational change in the membrane proximal domain of MFP needed for stimulation of transporters. The activated MFP-transporter complex engages the outer membrane channel to expel substrates across the outer membrane. The recruitment of the channel is likely triggered by binding of effectors (substrates) to MFP or MFP-transporter complexes. This model together with recent structural and functional advances in the field of drug efflux provides a fairly detailed understanding of the mechanism of drug efflux across the two membranes. Multidrug resistance or polyspecific transporters (MDRs) are present in all living systems. 
However, they are particularly abundant and diverse in bacteria and comprise 2\u20137% of the total bacterial protein content are primary active transporters which couple substrate translocation with binding and hydrolysis of ATP. MDRs in all the other superfamilies are secondary transporters which utilize electro-chemical gradients of ions (most frequently protons but sometimes sodium) to transport their diverse substrates. Both primary and secondary transporters are ubiquitous in bacteria, however their relative presence seems to correlate with energy generation: fermentative bacteria tend to rely more on the primary transporters while genomes of aerobic bacteria contain somewhat more secondary transporters binding of a substrate on the cytoplasmic or periplasmic side of the membrane, (ii) conformational changes in a transporter leading to re-orientation of the binding site to the other side of the membrane, and (iii) the release of the substrate. The conformational change leading to reorientation of substrate binding sites and relaxation of a transporter are usually energy-dependent steps, which is provided by either ATP hydrolysis (ABC pumps) or by a proton motive force (PMF) or sodium motive force. The basic mechanism of energization of transporters by ATP and PMF are well understood on examples of ABC and MF transporters and the outer membrane channels (OMFs) which are largely responsible for the low permeability characteristic , ABC and MF superfamilies showed significantly lower affinity to AcrB by stimulating the efflux activities of transporters and (ii) by engaging OMFs and opening them for substrates to be expelled from the cell that interact with transporters within the cytoplasmic membrane , the MFP oligomerization was treated more as an artifact of crystallization, partly due to the fact that hydrodynamic studies and the size exclusion chromatography studies showed only monomers in solution in the transport could be enhanced by additional subunits? In case of MFPs, this could be the separation of roles in the stimulation of a transporter and the recruitment/opening of OMF. In transporters, this could be an added substrate specificity or a separation of energy transduction and substrate translocation as suggested for MdtBC to identify MFP-transporter and MFP-OMF molecular interfaces that are critical for activation of efflux and (ii) to characterize conformational transitions in all components of the complexes leading to efflux across the outer membrane. Non-typical transporters could be powerful tools in understanding how the trans-envelope efflux is achieved. There is a significant gap in knowledge about functions of MFPs in the context of Gram-positive cell envelopes.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Identification of a linear filter in cascade with a spiking neuron has been previously considered under thHere we propose a new system identification methodology to identify a more general NARMAX representation of the nonlinear receptive field in cascade with an ideal-IAF neuron, based only on measurements of the input stimulus and the spike time sequence of the IAF neuron. By using an orthogonal forward selection algorithm we are able to derive the NARMAX representation of a scaled version of the nonlinear filter directly from the reconstructed input of an ideal IAF neuron. 
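The flavour of this identification problem can be sketched in a few lines of Python: a toy nonlinear receptive field drives an ideal integrate-and-fire encoder, the t-transform of the spike train yields the average filter output on every inter-spike interval, and a polynomial NARX-type model is then fitted to the reconstructed signal. The sketch below is only a schematic stand-in for the paper's method: it uses ordinary least squares instead of the orthogonal forward selection algorithm, and every coefficient, threshold and lag is an invented illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy stimulus and a hidden quadratic (NARX-type) receptive field
dt, n = 1e-3, 20000
u = np.convolve(rng.standard_normal(n), np.ones(20) / 20, mode="same")   # smoothed noise stimulus
v = np.zeros(n)
for t in range(2, n):
    v[t] = 0.8 * u[t] + 0.5 * u[t - 1] - 0.3 * u[t] * u[t - 1] + 0.2 * u[t - 2] ** 2
v += 1.5                                   # keep the drive positive so the encoder keeps firing

# ideal integrate-and-fire encoder: spike whenever the integral of v reaches C * delta
C, delta, integral, spikes = 1.0, 0.02, 0.0, []
for t in range(n):
    integral += v[t] * dt
    if integral >= C * delta:
        spikes.append(t * dt)
        integral -= C * delta
spikes = np.asarray(spikes)

# t-transform (zero-bias ideal IAF): the mean of v on each inter-spike interval is C*delta / T
T = np.diff(spikes)
v_bar = (C * delta) / T

# crude reconstruction of the filter output: piecewise constant on inter-spike intervals
v_rec = np.zeros(n)
for k in range(T.size):
    a, b = int(round(spikes[k] / dt)), int(round(spikes[k + 1] / dt))
    v_rec[a:b] = v_bar[k]

# fit a quadratic NARX-style model from the stimulus to the reconstructed output (least squares)
lag = 3
idx = np.arange(lag, n)
lin = np.column_stack([u[idx - j] for j in range(lag)])
quad = np.column_stack([lin[:, i] * lin[:, j] for i in range(lag) for j in range(i, lag)])
X = np.column_stack([np.ones(idx.size), lin, quad])
theta, *_ = np.linalg.lstsq(X, v_rec[idx], rcond=None)
print("estimated polynomial coefficients (a scaled/offset version of the true filter):")
print(np.round(theta, 3))
```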
The method is further extended to leaky-IAF neurons which require estimating an additional spiking neuron parameter. We use the NARMAX methodology to identify recursively the nonlinear filter and the spiking neuron parameter in the presence of noise. Statistical and dynamical model validation tests are used to check if the identified nonlinear filter models are an adequate representation of the underlying nonlinear information processing mechanisms. The performance and robustness to noise of the proposed methods are demonstrated through numerical simulation studies. Specifically, we compared the generalized (higher-order) frequency response functions (GFRFs) of the original and the identified nonlinear filter and evaluated, for different noise levels, the SNR of the model predicted output on a validation data set."} +{"text": "This data article contains eleven tables supporting the research article entitled: Cost-Optimal Design For Nearly Zero Energy Office Buildings Located In Warm Climates. The data explain the procedure for the minimum energy performance requirements introduced by the European Directive (EPBD). These files include the application of the comparative methodological framework and give the cost-optimal solutions for a non-residential building located in Southern Italy. The data describe the office sector, toward which current European policies and investments are directed. In particular, the localization of the building, the geometrical features, the thermal properties of the envelope and the technical systems for HVAC are reported in the first sections. Energy efficiency measures related to orientation, walls, windows, heating, cooling, DHW and RES are given in the second part; overall, this data article provides 256 combinations for a financial and macroeconomic analysis. Specifications Table. Value of the data: \u2022Identification of 256 combinations of energy efficiency measures for office buildings. \u2022Assessment of optimal energy solutions in terms of primary energy consumptions and global costs for non-residential buildings. \u2022Methodological approach to understand and exploit the possible solutions for nZEBs in warm climate. The data refer to the energy and economic performance (primary energy consumptions, CO2 gas emissions, global costs) of non-residential buildings located in warm climate. Eleven tables are provided in order to give the input and output data for a location having 1153 degree-days. The country is characterized by a non-extreme winter, high aridity in summer and rainfall concentrated in autumn and winter. The input data about the geometrical features are derived from ENEA reports. The aim of the research is to reduce the energy demands and the primary energy consumptions of office buildings in order to reach the nZEB level by applying several combinations of variants. The measures proposed include different types of walls, windows and technical systems. The selected high-performance external precast walls and the types of window frame are reported in the corresponding tables. CO2 gas emissions, primary energy consumptions, global costs and the performance classification are listed in terms of financial analysis. Macroeconomic output values are given only for the best range of solutions. The software ProCasaClima2015 has been used for the evaluation of the energy demands, the primary energy consumptions and the global costs for all combinations. The analysis led to define seven ranges of primary energy consumptions and CO2 gas emissions; each one is characterized by a combination of specific technical system variants.
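A minimal sketch of the combinatorial part of such a cost-optimal study is shown below, assuming for illustration four categories of measures with four options each (4 x 4 x 4 x 4 = 256 packages); all option names, extra costs, energy figures, the discount rate and the energy price are invented, and the global cost is reduced to investment plus discounted running costs, which only loosely mirrors the financial analysis used in the data article.

```python
from itertools import product

# invented option sets: (extra investment in EUR/m2, resulting primary energy in kWh/m2 per year)
walls   = {"W1": (0, 55.0), "W2": (40, 48.0), "W3": (70, 43.0), "W4": (95, 40.0)}
windows = {"G1": (0, 9.0),  "G2": (25, 7.5),  "G3": (45, 6.5),  "G4": (60, 6.0)}
systems = {"H1": (0, 30.0), "H2": (60, 24.0), "H3": (110, 19.0), "H4": (160, 15.0)}
res     = {"R0": (0, 0.0),  "R1": (80, -6.0), "R2": (150, -12.0), "R3": (210, -18.0)}

AREA, YEARS, RATE, PRICE = 2000.0, 30, 0.03, 0.20      # m2, calculation period, discount rate, EUR/kWh
pv = sum(1.0 / (1.0 + RATE) ** y for y in range(1, YEARS + 1))   # simple present-value factor

packages = []
for w, g, h, r in product(walls, windows, systems, res):
    options = (walls[w], windows[g], systems[h], res[r])
    invest = sum(o[0] for o in options) * AREA
    energy = sum(o[1] for o in options)                # kWh/m2 per year of primary energy
    global_cost = invest + max(energy, 0.0) * AREA * PRICE * pv
    packages.append((global_cost, energy, (w, g, h, r)))

best = min(packages)
print(f"packages evaluated: {len(packages)}")
print(f"cost-optimal package: {best[2]} -> {best[1]:.1f} kWh/m2y, global cost {best[0]:,.0f} EUR")
```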
2 gas emissions, primary energy consumptions and costs have been obtained for several solutions compared to the reference building.The study contributes to define the strategies to reach high performance energy buildings for Mediterranean climate according with the national requirements. The reduction of COAll authors participated in preparing the research from the beginning to ends, such as establishing research design, method and analysis. All authors discussed and finalized the analysis results to prepare manuscript according to the progress of research."} +{"text": "P < 0.0000). The study showed an increase in the muscle fatigue of the temporal and masseter muscles correlated with the intensity of temporomandibular dysfunction symptoms in patients. The use of surface electromyography in assessing muscle fatigue is an excellent diagnostic tool for identifying patients with temporomandibular dysfunction.The aim of this study is to evaluate muscle fatigue in the temporal and masseter muscles in patients with temporomandibular dysfunction (TMD). Two hundred volunteers aged 19.3 to 27.8 years participated in this study. Electromyographical (EMG) recordings were performed using a DAB-Bluetooth Instrument . Muscle fatigue was evaluated on the basis of a maximum effort test. The test was performed during a 10-second maximum isometric contraction (MVC) of the jaws. An analysis of changes in the mean power frequency of the two pairs of temporal and masseter muscles (MPF%) revealed significant differences in the groups of patients with varying degrees of temporomandibular disorders according to Di ( According to various reports, the prevalence of functional disorders in the population aged 3\u201374 years ranges from 7% to 84% \u20136. AccorA review of epidemiological studies conducted by McNeill indicateSimilar conclusions regarding the disparity between the prevalence of subjective symptoms and the recorded evidence of functional disorders were presented by Mohlin et al. followinIn the light of the evidence presented, expanding the repertoire of modern noninvasive diagnostic methods should result in obtaining more objective research results \u201314.The aim of this study is to evaluate muscle fatigue in the temporal and masseter muscles in patients with temporomandibular dysfunction.The research was approved by the Ethics Committee of the Pomeranian Medical University in Szczecin, Poland (number BN-001/45/07) as being consistent with the principles of Good Clinical Practice (GCP). All the patients were informed about the aim and research design and they gave their consent in order to participate.Two hundred volunteers aged 19.3 to 27.8 referred to the Orthodontic Department of the Pomeranian Medical University in Szczecin participated in this study. Inclusion criteria were that the participants should be aged between 19 and 28 years and express consent to participate voluntarily in the study. As a result of the application of the adopted exclusion criteria listed in Anamnestic interviews which included the patients' general medical history as well as detailed information about their masticatory motor system were conducted. The patients were divided according to a three-point anamnestic index of temporomandibular dysfunction (Ai).The assessment of the function of the masticatory motor system included clinical as well as electromyographic examinations. 
The former involved visual and auscultatory assessment as well as palpation and made it possible to qualitatively and quantitatively evaluate the function of the masticatory system. The clinical index of temporomandibular dysfunction (Di) was used for the analysis of the data obtained from the clinical study. EMG recordings were performed using a DAB-Bluetooth Instrument. During these recordings each patient was sitting on a comfortable chair without head support and was instructed to assume a natural head position during the electromyographic examination. Surface EMG signals were detected by four silver/silver chloride (Ag/AgCl), disposable, self-adhesive, bipolar electrodes with a fixed interelectrode distance of 20\u2009mm. The electrodes were positioned on the anterior temporal muscles and the superficial masseter muscles on both the left and the right sides parallel to the muscular fibres: for the anterior temporal muscle, vertically along the anterior margin of the muscle; for the masseter muscle, parallel to the muscular fibres with the upper pole of the electrode at the intersection between the tragus-commissura labiorum and exocanthion-gonion lines. A reference electrode was placed inferior and posterior to the right ear. Before the recordings, in order to reduce impedance, the skin was carefully cleaned with 70% ethyl alcohol and dried. The EMG procedures were performed 5 minutes later. The DAB-Bluetooth Instrument was interfaced with a computer which presented the data graphically and recorded it for further analysis. The EMG signals were amplified, digitized, and digitally filtered. Muscle fatigue was evaluated on the basis of a maximum effort test. The test was performed during a 10-second maximum isometric contraction (MVC) of the jaws. Analysis of the mean power frequency (MPF%), as a variable independent of the complex impedance of the measurement system, did not require the use of a normalization process. The asymmetry between the activity of the left and the right jaw muscles was quantified by the Asymmetry Index (As), which ranges from 0% to 100% and was calculated as (1) As = (|R \u2212 L| / (R + L)) x 100%, where R and L denote the EMG activity of the right and left muscle of each pair. The Kruskal-Wallis test, the median test and the Mann-Whitney U test were used to verify the hypotheses relating to the existence or absence of differences between the mean values of the independent variables. The statistical significance for verifying all the hypotheses was set at P = 0.05. The analysis of changes in the mean power frequency of the two pairs of temporal and masseter muscles (MPF%) showed significant differences in the groups with varying severities of temporomandibular dysfunction according to the Di index (P < 0.0000). There was always greater depletion of the interference signal in the case of the masseter muscles relative to the temporal muscles. Resistance to muscle fatigue was modified at the level of the type of muscles examined (P < 0.0004), moderate dysfunction, and severe dysfunction according to the Di index. There were no significant differences in the fatigue of the right and left temporal muscles in each group of temporomandibular dysfunction according to the Di index (P < 0.0784). Changes in the mean power frequency (MPF%) of temporal muscles during 10\u2009s of maximum voluntary contraction were the lowest in the group with no symptoms of temporomandibular dysfunction (\u22122.92%). Significantly higher depletion of the interference signal was observed in the groups with mild dysfunction, moderate dysfunction, and severe dysfunction according to the Di index.
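The analysis chain described above (mean power frequency of a 10-second MVC, the asymmetry index, and non-parametric group comparisons) can be mimicked with the short Python sketch below. The sampling rate, the synthetic EMG spectra, the activity values and the group MPF% figures are all invented, and scipy's welch, kruskal and mannwhitneyu are used only as stand-ins for the instrument software and the statistical package actually used.

```python
import numpy as np
from scipy.signal import welch
from scipy.stats import kruskal, mannwhitneyu

FS = 1000                                  # assumed sampling rate in Hz (not stated in the excerpt)
rng = np.random.default_rng(7)

def synthetic_emg(seconds, centre_hz):
    """Band-limited noise standing in for a surface EMG interference signal."""
    n = int(seconds * FS)
    spec = np.fft.rfft(rng.standard_normal(n))
    f = np.fft.rfftfreq(n, d=1.0 / FS)
    spec *= np.exp(-((f - centre_hz) ** 2) / (2 * 40.0 ** 2))   # crude spectral shaping
    return np.fft.irfft(spec, n)

def mean_power_frequency(x):
    f, pxx = welch(x, fs=FS, nperseg=512)
    return np.sum(f * pxx) / np.sum(pxx)

# MPF% over a simulated 10-s MVC: compare the first and last second of the contraction;
# with fatigue the spectrum drifts towards lower frequencies, so the change is negative
first_s, last_s = synthetic_emg(1, 110.0), synthetic_emg(1, 90.0)
mpf_first, mpf_last = mean_power_frequency(first_s), mean_power_frequency(last_s)
mpf_change = 100.0 * (mpf_last - mpf_first) / mpf_first
print(f"MPF change over the contraction: {mpf_change:+.1f} %")

# asymmetry index (equation 1): 0 % = perfectly symmetrical activity, 100 % = one-sided
emg_right, emg_left = 0.42, 0.36                               # invented activity values
asymmetry = abs(emg_right - emg_left) / (emg_right + emg_left) * 100.0
print(f"Asymmetry index: {asymmetry:.1f} %")

# non-parametric comparisons of MPF% values between dysfunction groups (all numbers invented)
no_tmd   = rng.normal(-3.0, 1.0, 20)
mild     = rng.normal(-5.5, 1.2, 20)
moderate = rng.normal(-7.0, 1.3, 20)
severe   = rng.normal(-9.0, 1.5, 20)
h, p = kruskal(no_tmd, mild, moderate, severe)
u, p_u = mannwhitneyu(no_tmd, severe, alternative="two-sided")
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.4g}; Mann-Whitney (no TMD vs severe): U = {u:.0f}, p = {p_u:.4g}")
```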
As in the case of the temporal muscles, the impact of dysfunction on the differences in fatigue between the right and left masseter muscles has not been confirmed (P < 0.0937).Similar to the temporal muscles, changes in the mean power frequency of masseter muscles were the lowest in the group with no symptoms of dysfunction (\u22123.97%). Significantly higher muscle fatigue, as evidenced by a greater reduction in the mean power frequency, was found in groups with mild dysfunction ) because of the noninvasive nature of measurements that it provides [Electromyography (EMG) is one of the few diagnostic tools that enable direct and objective assessments of muscle function. Practitioners dealing with functional disorders of the masticatory motor system are particularly interested in global electromyography and its changes linked to function are a reliable and objective indicator of muscle resistance to fatigue. Thus, changes in the frequency of the electrical activity of muscles, being a component of interference signal depletion, are a major predictor of susceptibility to muscle fatigue in EMG recordings. Muscle fatigue can also be determined by an increase in the EMG activity of muscles involved in generating a constant force. This is consistent with the view that generating a constant force as muscle fatigue increases must be associated with an increase in the electrical activity of muscles .Our own examinations showed significant differences with regard to the type of muscles examined. There was a significantly greater depletion of the interference signal in respect of the changes in mean power frequency of the masseter muscles than the temporal muscles during 10\u2009s of maximum isometric contraction in the intercuspal position.Changes in the interference signal with respect to the mean power frequency of muscles during maximum isometric contraction were also a strong predictor of functional disorders in the masticatory motor system. Resistance to fatigue in the temporal and masseter muscles was significantly higher in the group with no symptoms of temporomandibular dysfunction than in the group of patients with symptoms of dysfunction according to the Di index. There was a significantly greater depletion of the interference signal for masseter and temporal muscles in the group with TMD.Measurements of the mean power frequency of temporal and masseter muscles also showed high discriminatory efficiency for subjects with varying severities of temporomandibular dysfunctions according to the Di index. There were significant differences in terms of fatigue between the groups with varying severities of dysfunction for both temporal and masseter muscles.The results of the study were based on the clinical index of temporomandibular dysfunction (Di). This index is simple and easy to use and is extensively used in research . AlthougIn studies of masticatory muscle fatigue conducted by Sforza et al. a muscleHori et al. recordedIn studies conducted by Gay et al. surface A study by Castroflorio et al. concerneLiu et al. 
found siThe studies presented, whose observations are consistent with the results of our own findings, provide justification for using the analysis of muscle fatigue in the identification and discrimination of subjects with symptoms of masticatory system dysfunction.The results of the presented study showed an increase in the fatigue of temporal and masseter muscles in direct proportion to the severity of symptoms of temporomandibular dysfunction in the examined patients.The use of surface electromyography in the assessment of muscle fatigue is an excellent diagnostic tool for identifying patients with temporomandibular dysfunction."} +{"text": "The dendrite of the cerebellar Purkinje cell is one of the most complex structures in the mammalian brain, receiving more than 150,000 synaptic inputs. It is also one of the most extensively modelled neurons in the mammalian brain, with theoretical analysis of the input-output relationships in its dendrite extending back 40 years. While most of this experimental and modelling work has been conducted using mammalian neurons, it has also often been noted that overall cerebellar structure as well as the general morphology of Purkinje cells has been highly conserved in all vertebrate species. The work described here seeks to identify conserved features of Purkinje cell function by examining the relationship between structure and function in a range of vertebrate species from fish to mammals.This study is based on passive compartmental models constructed from anatomical reconstructions of Purkinje cells obtained from fish, reptiles, birds and mammals. Morphologically, while the Purkinje cells of each of the species are flattened structures and appear to have the same geometrical relationship to other elements of the cerebellar cortical circuitry, the dendrites of birds, alligators and mammals have a much more highly articulated dendritic branching structure than the dendrites of fish and turtles. In previous studies of the passive electrical properties of these dendrites , we have"} +{"text": "The rarity of glucagonoma imposes a challenge with most patients being diagnosed after a long period of treatment for their skin rash (months-years). Awareness of physicians and dermatologists of the characteristic necrolytic migratory erythema often leads to early diagnosis. Early diagnosis of glucagonoma even in the presence of resectable liver metastases may allow curative resection. Herein, we present a typical case of glucagonoma treated at our center and review the literature pertinent to its management. Since its first description by Becker in 1942 This highlights the importance of early diagnosis since complete resection of the primary tumor and limiting metastases offers the only chance of cure. The slow growth of the tumor coupled with advances in liver surgery and transplantation may allow curative resection in patients with metastatic disease confined to the liver.Novel advances in management of metastatic pancreatic neuroendocrine tumors include complex liver resections and liver transplantation , percuta"} +{"text": "The aim of this article was to suggest some changes in the teaching-learning process methodology of the judo osoto-guruma technique, establishing the action sequences and the most frequent technical errors committed when performing them. 
The study was carried out with the participation of 45 students with no experience regarding the fundamentals of judo from the Bachelor of Science of Physical Activity and Sport Science at the University of Vigo. The proceeding consisted of a systematic observation of a video recording registered during the technique execution. Data obtained were analyzed by means of descriptive statistics and sequential analysis of T-Patterns (obtained with THEME v.5. Software), identifying: a) the presence of typical inaccuracies during the technique performance; b) a number of chained errors affecting body balance, the position of the supporting foot, the blocking action and the final action of the arms. Findings allowed to suggest some motor tasks to correct the identified inaccuracies, the proper sequential actions to make the execution more effective and some recommendations for the use of feedback. Moreover, these findings could be useful for other professionals in order to correct the key technical errors and prevent diverse injuries. Judo is one of the most important martial art sports practiced in the world . It inclIn this respect, the requirements for the improvement of the technical and tactical preparation have been the focus of some studies which achieved meaningful results with theoretical and practical applications for judo coaching . As an eLack of comprehension of the relation between a poor technique performance and the circumstances that may cause different forms of injuries has also recently raised a debate . In thisMany injuries are attributed to repetitive actions performed with a poor technique. These actions can bring about an excessive pressure on particular joints or muscles contributing to an injury.Finally, the feedback given by the trainees and the Therefore, the present research is intended to assist coaches and teachers of judo, showing the behavior patterns (errors and their sequences) hidden from visual perception. With the aim of achieving the technical understanding necessary to improve the teaching-learning process, we performed a systematic observational study. In this particular case, the most relevant errors and sequences of errors of the osotoguruma judo technique have been studied. The findings allow to propose some methodological recommendations in the development of tasks. The feedback of the process can be useful for performing the osoto-guruma throw efficiently and for helping professionals to correct the key technical errors, avoiding diverse injuries.In this research, we used an observational methodology which prThe observational design was nomoNovice students were filmed while following a subject of judo at the Faculty of Sciences of Education and Sport at the University of Vigo. The recordings were made during the course of five academic years (2003 to 2008), with the written authorization of the participants that the videos could be used for research purposes. All the ethical standards throughout the study were in accordance with the American Psychological Association (APA).The observational instrument developed for this study was the SOBJUDO-OSGU , combiniThe SOBJUDO-OSGU instrument fits the observational design presented. Thus, it is multidimensional in nature including the following structural criteria: grip, off-balance, right-foot position, right-arm position, hip position, left-foot position, leg\u2032s action, blocking action, throw stage, control stage and globality. 
Each dimension gives rise to a system of categories that meets the conditions of exhaustiveness and mutual exclusivity (EME).The data collection was performed by recording the student from two different angles with two digital cameras (JVC GZ-MG21E). To assist the analysis of the projections recorded, the filmed material was edited with the software Pinnacle Studio v.12. The software Match Vision Studio Premium v.3.0. , a multiThe technical execution of osoto-guruma was filmed all along the ordinary training period (\u223c4 months with 3 hrs of practice per week) at the University of Vigo, involving the learning process of a total of 17 projections. We only used the data collected from 10 of the techniques. The selection criteria of the techniques studied were based on: 1) the premises of an investigation reporting the performance difficulty of certain techniques of Gokyo ; and 2) The quality control of the data recorded by two observers was performed by means of the Cohen\u2019s Kappa coefficient (k), that guarantees the agreement between them when the value k is >0.80. The software GSEQ providedThe excel files obtained allowed to have the frequencies of all the codes of occurrences registered, which were successively transformed in order to enable further analyses. The codes of the instrument of observation SOBJUDO-OSGU were entered into the software THEME , 2005, aThe frequency of occurrence of the different errors committed during the osotoguruma throw performance was determined by means of a descriptive analysis using SPSS 15. The results of this analysis are shown in In The most common errors are those related to the initial imbalance (NOB), the improper position of the supporting foot (IPSF) and the incorrect blocking action (SF), with a deficient traction action and an incorrect direction of the arms accompanied by an insufficient rotation of the trunk at the end of the technique (IAT).In osoto-guruma technique. Thus, in tori to replace the leg previously used to perform the sweeping (REAP-RBL) .After a thorough revision of the literature, we have confirmed the lack of scientific studies regarding the technical errors and its sequences in judo. Only the most relevant authors of this field of study have pointed out the key aspects or the most common technical errors when describing a technique . Such inThus, several authors have highlighted that the imbalance of the opponent at the beginning of the technical action must be directed towards the back right diagonal (NOB) and tori must make uke be balanced only by the heel of his right foot .The incorrect placement of tori\u2019s supporting foot in the second and third phases of the projection (IPSF) is one of the typical errors observed, what has also been mentioned by other authors . AccordiAnother common error detected is the incorrect distribution of tori\u2019s mass on legs at the time of throwing the opponent. The most correct approach would be to lean the body mass on the leg that does not participate in the blocking action (SF and WLB). However, there is a tendency to poise the right foot fully on the floor. Although nobody has specifically mentioned this aspect as a typical error or a fundamental point, some authors state that the body mass should entirely fall on the left leg .Additionally, some individuals perform a sweeping action instead of performing the blocking action (REAP). 
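A minimal sketch of the inter-observer reliability check described above, assuming a plain re-implementation of Cohen's kappa rather than the GSEQ workflow the authors actually used. The category labels come from the SOBJUDO-OSGU codes quoted in the text; the two coded sequences are invented for illustration.

```python
# Illustrative only: Cohen's kappa between two observers coding the same throw
# executions. The code labels (NOB, IPSF, SF, IAT, REAP) appear in the text;
# the sequences themselves are made up.
from collections import Counter

def cohens_kappa(obs_a, obs_b):
    """Chance-corrected agreement between two equal-length code sequences."""
    assert len(obs_a) == len(obs_b) and obs_a, "need two equal, non-empty sequences"
    n = len(obs_a)
    observed = sum(a == b for a, b in zip(obs_a, obs_b)) / n
    freq_a, freq_b = Counter(obs_a), Counter(obs_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(freq_a) | set(freq_b))
    return (observed - expected) / (1 - expected)

observer_1 = ["NOB", "IPSF", "SF", "NOB", "IAT", "REAP", "SF", "NOB"]
observer_2 = ["NOB", "IPSF", "SF", "NOB", "SF",  "REAP", "SF", "NOB"]
print(round(cohens_kappa(observer_1, observer_2), 2))  # data accepted only if k > 0.80
```

Only the k > 0.80 acceptance criterion is reproduced here; the T-pattern detection itself was done with the THEME software and is not sketched.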
Some authors, such as Several authors, such as The sequential pattern analysis of errors reveals that an improper imbalance of uke leads to inaccurate positioning of the body, preventing tori from distributing body mass in a suitable way to perform the projection. That makes it difficult to carry out an optimal action with the arms and to perform the body rotation towards the right direction. Few references were found regarding the pattern of this chain of errors and, to our knowledge, the connection between the major errors observed in osoto-guruma judo-technique had not previously been explained in detail.Several authors have reported some dual relationships between errors. One of the limitations of this study was the time available for students to practice (four months), as well as the nature of the participants (university students). Therefore, future research employing longer learning periods (>10 months), other origins of the participants or different age groups could be interesting. The process of learning judo techniques could possibly be better analyzed this way, establishing guidelines and setting the practice time required to identify the most common mistakes. Further studies could also be undertaken in order to identify the factors influencing the number and importance of errors and their sequential registration.A proper unbalance of the opponent\u2019s body promotes the correct location of the supporting foot.A proper placement of the supporting foot and a suitable distribution of the body mass simplify the subsequent blocking action on the uke\u2019s body and the correct execution of the projection phase of osoto-guruma technique.It is a matter of fact that the teaching and learning process of osoto-guruma technique could be improved by paying special attention to the following statements regarding the movement sequences, which will ensure that the throw is correctly executed:The results of this study also enable us to suggest a number of strategies, based on the knowledge of performance, with the aim of improving the teaching and learning of the osotoguruma technique.When demonstrating the technique, the student should pay attention to the key points highlighted in this study. As far as the throw theoretical aspects are concerned, we think that coaches could find it useful to incorporate some video-recordings or other images illustrating the fundamental features and the common errors detected here. In any case, teachers or coaches should only focus on the most relevant aspects.Instructors could design tasks or drills that draw the student\u2019s attention to the most significant errors and sequences of behavior detected.After a throw execution in the training, the subsequent communication between coaches and students could be improved by providing a more precise feedback. Coaches should firstly consider the most significant errors and sequences identified in the present study, leaving others for a later stage of the training. It would also be helpful to take into consideration just a few key aspects to avoid overloading the students with an excess of information. The results of this study can provide a platform for different kinds of feedback , which should always be positive in nature.Coaches could elaborate some observation/evaluation sheets based on the category system of the observation instrument used in this study. 
One model should be intended for the students to work in groups of three, with one of the trainees observing the other two while they are performing the throw. Thus, this student would conduct an observational analysis using the evaluation sheet, noting the errors made and providing an immediate feedback. The same observational analysis could also be carried out later by means of video recordings."} +{"text": "Growing clinical evidences indicate the benefits of human milk (HM) for appropriate growth and development of a newborns: the particular composition make it a unique and inimitable nutrient . Mother'Nonetheless storage and processing of human milk may reduce some biological components, which may diminish its health benefits but when donor milk is used instead of formula, it is demonstrated a reduction in the incidence of necrotizing enterocolitis and an enhanced feeding tolerance . PasteurFuture research should focus on the improvement of milk processing in HMB, particularly of heat treatment on the optimization of DM fortification and on further evaluation of the potential clinical benefits of processed and fortified DM."} +{"text": "Frontiers in Plant Science research topic provides a snapshot of current research into the application of environmental phytoremediation strategies.Human industry, farming, and waste disposal practices have resulted in the large-scale contamination of soil and water with organic compounds and heavy metals, with detrimental effects on ecosystems and human health. Conventional soil remediation methods are expensive and often involve the storage of soil in designated areas, postponing rather than solving the problem. In the last decade, the pressing need to find alternative methods has highlighted the scientific and economic benefits of plants and their associated microorganisms, which can be used for the reclamation of polluted soil and water Meagher, . This isPteris vittata with or without bacterial strains selected from autochthonous rhizosphere-derived microorganisms [chosen for their resistance to high concentrations of arsenic (As) and their ability to reduce arsenate to arsenite] showed that the efficiency of phytoextraction increased when P. vittata plants were inoculated with the selected microbial communities and Astragalus bisulcatus (Fabaceae), and the related non-accumulators Physaria bellis (Brassicaceae) and Medicago sativa (Fabaceae), revealed that isolates from Se hyperaccumulator species were more resistant to selenate and selenite, could reduce selenite to elemental Se, could reduce nitrite and produce siderophores, and several strains also showed the ability to promote plant growth with three of these isolates promoted plant growth and the removal of toxic metals from polluted soil, demonstrating that the interaction between plants and bacterial strains identified in contaminated areas could improve plant growth and the efficiency of phytoremediation (Khan et al., Brassica juncea and Ricinus communis plants inoculated with rhizospheric and endophytic bacteria isolated from a polluted serpentine environment accumulated more biomass and heavy metals than non-inoculated control plants (Ma et al., The phytoremediation potential of plants inoculated with bacteria isolated from the rhizosphere and endosphere of other plants grown in soil contaminated with heavy metals is discussed in two articles (Khan et al., The beneficial interaction between plants and rhizobia for the remediation of contaminated soil is discussed by Teng et al. . 
CertainPhragmites australis plants were exposed to carbamazepine, a widely-used drug that is present in the environment as a persistent and recalcitrant contaminant (Ternes et al., Two further articles discuss the use of plants and their associated microorganisms for the reclamation of land polluted with organic contaminants (Germaine et al., It is notable that all the articles submitted in this research topic focused on the use of naturally-occurring hyperaccumulator species rather than transgenic plants and/or microorganisms, although genetically-engineered plants and microbes can also be used for the efficient treatment of polluted soil and water (Van Aben, The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Hereditary cerebellar ataxias are a wide group of progressive, degenerative diseases, which manifest as lack of control and coordination of voluntary movements due to progressive loss of Purkinje cells and other cell types in the cerebellum. Cerebellar degeneration could affect people of any age, is often diagnosed after symptoms start to manifest and no prevention and no causal and really effective therapy is available today.There is high variability in pathogenesis in human cerebellar degenerations and high variability in onset and progress of the disease. Though the basal manifestation of all cerebellar degenerations is similar as it is related to dysfunction of the cerebellum, there are some differences between individual types of them and also some variability in the frame of one disease making the clinical diagnosis difficult. The manifestation of the mutation could be also modified by the genetic background. For all of that, individual approach of treatment is necessary.There is a vide spectrum of mouse models of genetically determined cerebellar degenerations including both spontaneous mutants and transgenic mice, some of which are available in several mouse strains. This situation is analogous to variability of human hereditary cerebellar disorders and genetic heterogeneity of human population. Mouse models can be used for investigation of efficiency of experimental therapy in individual types of cerebellar degenerations and for studying of the effect of genetic background on the pathological process. Such research could allow identifying factors determining progress of the disease and influencing the effect of the therapy. This is necessary to choose the optimal treatment for each individual patient and for development of new therapeutic methods.One of therapies which could benefit from such research is stem cell and neurotransplantation therapy. So far it seems promising in slowing down progress, but it might even mitigate neurological deficit caused by the cerebellar degeneration.Nevertheless, as experiments in mice shoved different types of grafts have different effects. For example mesenchymal stem cells prevent neurodegeneration progression, but do not reverse already developed cell loss and neurological deficit. Transplanted suspension of embryonic cerebellar cells generates new graft-derived Purkinje cells in Purkinje cell degeneration mice and improvement of motor function was described in them. On the other hand, in Lurcher mice functional benefit of cerebellar tissue transplantation has not been achieved yet. 
In addition, cultivated embryonic neural stem cells survive well in the host cerebellum and despite they do not create any new Purkinje cells they have been shown to rescue Purkinje cells in a mouse model of Niemann-Pick disease.Genetically defined mouse strains with different mutations causing neurodegenerative diseases and different mouse strain backgrounds for the same mutations and analysis of their genome and proteome would be valuable tool to identify genetic markers and factors important for assessment of prognosis and management of appropriate stem-cell therapy and for development of new therapeutic approaches based on the principles of personalized medicine."} +{"text": "The authors wish to retract this article at the request of the Scottish Haemophilus Legionella Meningococcus Pneumococcus Reference Laboratory (SHLMPRL). Although the work was part of an independently-funded research grant awarded to Stuart Clarke, Matthew Diggle and Robert Davies by the Scottish Executive, they did not have permission from the current SHLMPRL management to publish the data in their current form. The anonymised data in the MLST database will remain available. The authors apologise to the readers."} +{"text": "In 1980, Clair C. Patterson stated: \u201cSometime in the near future it probably will be shown that the older urban areas of the United States have been rendered more or less uninhabitable by the millions of tons of poisonous industrial lead residues that have accumulated in cities during the past century\u201d. We live in the near future about which this quote expressed concern. This special volume of 19 papers explores the status of scientific evidence regarding Dr. Patterson\u2019s statement on the habitability of the environments of communities. Authors from 10 countries describe a variety of lead issues in the context of large and small communities, smelter sites, lead industries, lead-based painted houses, and vehicle fuel treated with lead additives dispersed by traffic. These articles represent the microcosm of the larger health issues associated with lead. The challenges of lead risk require a concerted global action for primary prevention. Geochemist Clair C. Patterson became distressed about the health impacts of the massiveness of the 20th century quantities of lead in commerce and their residues . This toThe theme of lead risk assessment and health effects was explored and interpreted by the authors of 19 manuscripts. The papers reflect the international scope of lead research conducted in Australia ,3, BeninThe critical issue at this point is the need to revitalize primary prevention and develop skills and tools to curtail the risks of lead exposure. The most vulnerable individuals to the risks of lead exposure are children and because the exposure appears without symptoms this makes the issue invisible. One tendency is to rely on children\u2019s blood lead levels as a tool for discovery of environmental lead. This is not primary prevention. Unexpected results continue to mount regarding the long-term health outcomes associated with ever smaller amounts of lead exposure during early stages of development .lead loading. Within interior environments the practice of measuring the lead content of the vacuum cleaner bag dust was discarded in favor of floor wiping methods that measure lead loading. The methodology change occurred because there was a stronger association between lead loading and blood lead than between lead content and blood lead. 
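To make the content-versus-loading contrast developed in this passage concrete, a back-of-the-envelope sketch follows; every input in it (soil lead content, bulk density, layer depth, and the floor-wipe benchmark) is an assumed, illustrative value, not a figure taken from the special volume.

```python
# Hypothetical worked example: the lead *loading* implied by a thin layer of
# soil at a given lead *content*, expressed in the µg/ft² units used for
# interior floor-wipe standards. All inputs are assumptions for illustration.
SOIL_PB_MG_PER_KG = 400.0     # assumed soil lead content
BULK_DENSITY_G_CM3 = 1.3      # assumed soil bulk density
LAYER_CM = 1.0                # assumed surface layer depth
FLOOR_STANDARD_UG_FT2 = 40.0  # order of magnitude of floor dust-lead standards

area_cm2 = 929.03                                        # one square foot in cm^2
soil_kg = area_cm2 * LAYER_CM * BULK_DENSITY_G_CM3 / 1000.0
loading_ug_ft2 = SOIL_PB_MG_PER_KG * soil_kg * 1000.0    # mg of lead -> µg
print(f"{loading_ug_ft2:,.0f} µg/ft² vs. a floor standard near {FLOOR_STANDARD_UG_FT2} µg/ft²")
```

Even under these modest assumptions the implied surface loading exceeds the floor benchmark by roughly four orders of magnitude, which is the comparison the passage makes qualitatively.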
In the outside soil environment, lead content is the standard method of measurement. However, an improved understanding about the quantity of lead in the environment could be achieved by recognizing the amount of lead loading of the soil surface; lead loading of soil generally exceeds by large factors the current standard for interior floor surfaces (which some consider too high) [Along with policy matters there are scientific barriers that thwart primary prevention progress. Measurement techniques are critical for addressing environmental lead and human exposure, and instrumentation developments, such as hand-held field X-ray Fluorescence, have allowed major strides toward identifying sources of lead in the environment. However, in the case of soil there is a critical issue apropos the difference between lead content and oo high) .margin-of-safety. The engineering and toxicology/pharmacology professions recognize the need to apply margin-of-safety factors to protect human health within the context of their respective trades. The margin-of-safety concept has not been included in the standards for lead and the outcome has been a continuing series of failures to protect children [margin-of-safety factor is an essential addition to the standards to make them relevant for protecting children [Another scientific issue involves the concept of children . Given tchildren . Primary"} +{"text": "Deep brain stimulation (DBS) is an established therapy for movement disorders, including tremor, dystonia, and Parkinson's disease, but the mechanisms of action are not well understood. Symptom suppression by DBS typically requires stimulation frequencies \u2265100 Hz, but when the frequency is increased above ~2 kHz, the effectiveness in tremor suppression declines (Benabid et al., Deep brain stimulation (DBS) is an effective treatment for the motor symptoms of movement disorders including essential tremor, Parkinson's disease, and dystonia, and recent data suggest that it may also be effective to treat epilepsy, obsessive-compulsive disorder, and depression. Although the mechanisms of action of DBS remain unclear, its effects are strongly dependent on stimulation frequency implemented in NEURON v7.2 as a measure of the regularity of neuronal firing and \u03ba, the number of discrete bins per ISI decade, was set to 20 pairs approximated those of a deterministic model, albeit with a higher overall entropy. Conversely, the responses of the stochastic model with a low density of sodium channels (200 channels per node) were substantially more variable than those of the deterministic model, and, unlike the other two models, this low density stochastic model nerve fiber exhibited spontaneous firing.The evoked patterns of activity in each model variant were quantified by calculating the firing pattern entropy for each axon in the population with the extracellular stimulation intensity selected to evoke action potential firing in half of the model axons at a stimulation frequency of 100 Hz Figure . 
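A minimal sketch of the kind of measure described above: a firing-pattern entropy computed from log-binned inter-spike-interval (ISI) pairs with kappa = 20 bins per ISI decade. This is an assumed reading of the method (the study implemented it within NEURON v7.2); the synthetic spike trains and all names are illustrative.

```python
# Illustrative firing-pattern entropy over consecutive ISI pairs, log-binned
# with kappa = 20 bins per ISI decade (spike times in seconds). A sketch of the
# general approach, not the authors' NEURON code.
import numpy as np

def firing_pattern_entropy(spike_times, kappa=20):
    isis = np.diff(np.asarray(spike_times, dtype=float))
    isis = isis[isis > 0]
    if isis.size < 2:
        return 0.0                                        # e.g. conduction block -> no ISI pairs
    bins = np.floor(np.log10(isis) * kappa).astype(int)   # kappa bins per decade
    pairs = np.column_stack((bins[:-1], bins[1:]))        # consecutive ISI pairs
    _, counts = np.unique(pairs, axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())                 # entropy in bits

rng = np.random.default_rng(0)
regular = np.arange(0.0, 1.0, 0.01)                       # entrained 100 Hz firing
jittered = np.sort(regular + rng.normal(0.0, 0.002, regular.size))
print(firing_pattern_entropy(regular), firing_pattern_entropy(jittered))
```

Regular, entrained firing collapses onto a few ISI-pair bins and gives low entropy, while irregular firing spreads across many bins, which is the contrast the modelling results that follow turn on.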
At freqHigher stimulation frequencies produced block of conduction in fibers close to the electrode and the resulting cessation of activity produced firing pattern entropy of zero , the patterns and changes in entropy were similar to those in the deterministic model, and to reduce computational cost and complexity we used the deterministic model to analyze the effects of DBS on intrinsically active (bursting) model nerve fibers.We used a population of 100 deterministic model nerve fibers, with intrinsic burst activity generated by intracellular current injection at one end of each fiber, to quantify the effect of extracellular stimulation on firing patterns. The ability to mask the intrinsic bursting in the model depends on the distance between the node closest to the electrode and the proximal end of the axon where the patterned intracellular stimuli were delivered .We quantified the effect of increasing stimulation amplitude on the entropy of the firing patterns of intrinsically bursting model nerve fibers at different electrode to fiber distances Figure . At 200 The firing patterns of the fiber populations are summarized in the histograms of inter-spike interval pairs generate conduction block of activity in the model nerve fibers or (2) generate highly irregular patterns of activity in the model nerve fibers. Conduction block with kHz frequency signals has been demonstrated experimentally (Bowman and McNeal, DBS at clinically-effective frequencies entrains spike activity both downstream and upstream from the site of stimulation, although the entrainment occurs only for a fraction of the stimulation pulses (Agnesi et al., This modeling study enabled the identification of possible causes of the increase in stimulation amplitude required to suppress tremor with kHz frequency DBS. However, the model included a limited representation of possible geometries; the fibers were all orientated parallel to each other and had a linear geometry, which is not representative of the true anatomical arrangement of axons around the electrode. Furthermore, the model did not include other neuronal elements that are also affected by extracellular stimulation and may contribute to effects of thalamic DBS on tremor. Accumulation of potassium within the peri-axonal space may contribute to functional block of axons in response to high-frequency stimulation (Bellinger et al., Increased irregularity in the pattern of neural firing has been suggested as the cause of symptom exacerbation by low frequency DBS while masking of intrinsic neural activity and the consequent regularization of neuronal firing by high frequency DBS has been hypothesized as responsible for the therapeutic success of DBS (Grill et al., The loss of tremor suppression by thalamic DBS at kHz frequencies can be explained by the increases in the entropy of neuronal firing generated by kHz frequency DBS in model neurons. Entropy was subsequently reduced when the amplitude of the stimulus was further increased, consistent with the experimental observations of increased stimulation amplitude thresholds for tremor reduction with kHz frequency DBS (Benabid et al., JC performed computer simulations and analyzed the data. 
JC and WG designed the research and wrote the paper.This work was supported by NIH R01 NS040894 and R37 NS040894.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "The authors re-exported source data from the animal ventilator (FlexiVent) and compared the output with the raw data originally received from the animal pulmonary physiology laboratory. The results from these initial comparisons suggested potential inconsistencies in the data, so the authors requested that an independent laboratory replicate the experiments of animal airway physiology presented in Figure 1 and Figure 4. The results of the replicated experiments validated the originally reported central role of mindin in airway hyper-responsiveness after exposure to either lipopolysaccharide (LPS) or ozone.Because the animal physiology laboratory at Duke University also analyzed cytokines, the authors had an independent laboratory replicate experiments to analyze the original role of cytokines reported in Figure 3 and Figure 6. In these replicate studies, which were limited in the number of animals and samples tested, the authors did not observe the same results and could not definitively determine whether or not the findings were valid.The inconsistent results led the authors to take a conservative approach, and they agreed to retract this paper. They regret any inconvenience to the scientific community."} +{"text": "Histopathological examination of the tissue samples removed from 50 tumor-like growths revealed three types of tumors; squamous cell carcinoma (70%), spiny keratoderma (22%) and fibroma (8%). An increased incidence of tumors was recorded in the medial when compared to the lateral toenails in both sexes. In females, the incidence in the medial toenails was 90/259 (34.75%) and 71/259 (27.41%) in the right and left forelimbs respectively when compared to the lateral toenails which was 25/259 (9.65%) and 5/259 (1.93%) for the respective right and left forelimbs. In the hind limbs, this ratio was 29/259 (11.20%) and 20/259 (7.72%) for right and left medial toenails respectively, whereas it was 17/259 (6.56%) and 2/259 (0.77%) for the right and left lateral toenails respectively. Similar to the observations in female camels, male camels also showed a higher incidence of these tumors in the medial when compared to the lateral toenails in both fore and hind limbs. Based on these findings, we conclude that in the dromedary camels, the medial toenails of the fore limbs are most commonly affected with tumors; with the most common tumor being the squamous cell carcinoma.A total of 275 dromedary camels of local Camelids are modified digitigrades with a small, non-weight bearing nail which is similar to a human nail and is located at the extremity of each digit. The nail is closely attached to the third phalanx via the corium. The pectoral limb is the weight bearing axis of the body and there is a slight difference in the anatomical configuration between the lateral and medial metacarpal bones and the digits in camels.The proximal articular surface of the third metacarpal bone is slightly higher than that of the fourth metacarpal and its mediopalmar facet articulates only with the second carpal bone.The distal articular surface of the third metacarpal bone is also at a slightly higher plane than that of the fourth metacarpal bone. 
The first and second lateral phalanges are ~2-3 mm longer than their medial counter parts .This anatomical configuration results in a slight but noticeable lateral deviation of the forelimbs extending from the knee to the foot. This configuration may subject the medial digit to increased pressure when compared to the lateral digit, rendering the former more prone and vulnerable to pressure-related injuries leading to abnormal growth lesions .et al., 1997), squamous cell carcinoma of the foot and chondrosarcoma of the left carpal joint in camels .Although the hind limbs in the camel are not a part of the weight bearing axis of the body, the anatomical configuration of the metatarsal bones and the digits resembles that of the corresponding bones of the fore limbs, as the first and second lateral phalanges are 1-2 mm longer than their medial counter parts . Previous reports on tumors in camelids include squamous cell carcinoma of the skin in a llama , cutaneous melanocytoma in a llama and a basal cell carcinoma in a dromedary camel have been previously published.A general report of neoplasia in camels .The development of the tumor is progressive and the animal is often presented for treatment when the tumor is at an advanced stage. Onychia (inflammation of the corium beneath the nail) and paronychia (inflammation of the tissue at the margins of the nail) in camelids can result from either contusions or lacerations of the nail. Neglected toenail trimming in such cases may allow the infection to migrate dorsally under the nail .If further neglected, such conditions can exacerbate and act aggressively leading to the involvement of the soft tissue beyond the toenail.A squamous cell carcinoma typically originates from the skin around the toenail and commonly affects the surrounding bone and soft tissue. Because this tumor spreads slowly, it can often be visualized before it spreads to other areas of the body.The most common symptom at this stage is a swollen toe with or without lameness. Radical surgical excision of the mass along with the toenail is most likely the only treatment that can consistently provide relief to the animal and also potentially avoid metastasis of the tumor to other sites .Squamous cell carcinoma is also reported in the dogs. Similar to that observed in the camel, the tumor in the dogs affects only one toe and is noticeable grossly as solid, raised skin mass over the affected toe. However, over time the tumor expands and loses its original solid mass-like appearance as the tissue within the mass dies. Eventually the tumor ulcerates and subsequently extends proximally to involve the digit .In our experience, the toenail tumor in the camel follows almost a similar course as that reported in the dog.Because the toenail/digit tumors in the dromedary camel are very frequent in the emirate of Abu Dhabi, United Arab Emirates, we sought to study this widespread condition in depth including an investigation of the incidence and nature of this endemic disease.The study included 275 cases that displayed growth around the soft tissues above the nail. These cases were reported to the Central Veterinary Hospital, Al-Wathba, Abu Dhabi, United Arab Emirates during a period of 8 years . 
The cases were categorized on the basis of sex, the limb and toenail involved. Our study shows that the growth involved the toenail only or almost. In each case, the severity and extent of involvement of the digit was carefully evaluated in order to decide the most appropriate surgical procedure to provide relief to the suffering animal. When only the toenail was involved, the third phalanx along with the toenail was removed by disarticulation at the distal interphalangeal joint. In case, when the tumor involved more than half or the entire digit, unilateral disarticulation of the digit was performed at the metacarpo-phalangeal or metatarso-phalangeal joint with excision of the diseased tissue while preserving the normal sole. This procedure resulted in restoring the maximum weight bearing surface of the foot, thus enabling the animal to attain a near-normal gait soon after the recovery. In all surgical interventions, a strict adherence to the goals of cancer surgery was assured: viz., a complete excision of the cancerous tissue along with the surrounding healthy tissue to minimize chances of tumor recurrence and to prevent an adverse metastasis. A total of 275 cases were operated for tumor-like lesions. Tissue samples from 50 cases showing different growth patterns were collected for histological examination. Representative sections from each tumor tissue were fixed in 10% neutral buffered formalin and were submitted to the Pathology Laboratory of Al-Noor Hospital, Abu Dhabi for a detailed histopathological evaluation. The data was analysed using the chi-square test. The criterion for statistical significance was set at P < 0.05. None of the operative wounds healed through first intention. The wounds in different animals required variable time periods to heal ranging from 2 to 4 months. Recurrence of the tumor was not observed in any of the operated cases. An increased incidence of the tumors was recorded in the medial toenails/digits of the fore and hind limbs in both sexes (Tables and 3). Histopathological examination revealed that the toenail tumors included one of three following types: Squamous Cell Carcinoma, Spiny Keratoderma or fibroma. The incidence of three different types of toenail tumors (in 50 tissue samples collected) is shown in . Squamous Cell Carcinoma lesions consisted of moderate to well-differentiated sheets of small and average sized squamous cells. Frequent clusters of anaplastic squamous epithelial cells with central keratin pearls were also observed. The nuclei were larger in size and had an average number of 2-3 nucleoli per nucleus. The lesions showed frequent angulations with desmoplastic stroma and polymorphic inflammatory infiltrate. No vascular or perineural invasion was observed. Spiny Keratoderma lesions had thick spiny keratotic projections consisting of a well-defined compact column of parakeratotic horn material directly continuous with granular layer. Abrupt transition to orthokeratotic corneal layer at the margins of parakeratotic column was also observed. No dyskeratotic or vacuolated keratinocytes were observed in the underlying epidermis. Furthermore, no malignancy was observed. Fibroma lesions had thick skin tissue showing evident areas of acanthosis and spiny hyperkeratosis. There was abnormal deposition of fibrocollagenous material involving up to the subcutis with wavy fibrous tissue infiltrating and dissecting the underlying structures including the pileosebaceous units, muscles and sweat glands.
Whorl appearance was also noticed in some of the areas .et al., 1995).None of the surgical wounds healed through first intention. Considering the anatomic location of the lesions and the type of surgical manipulations required, first intention healing of the wounds in such cases is not likely . Complete surgical excision of the tumor along with the surrounding healthy tissue most likely minimized the probability of tumor recurrence .In our opinion, the possible reason for the lower incidence of tumors in the hind limbs is that these are not a part of weight bearing axis of the body and therefore, are not subjected to higher pressure under the body weight of the animal. However, the anatomical configuration of the metatarsal bones and the digits resembles that of the fore limbs. This may be a plausible reason for an increased incidence of tumors in the medial when compared to the lateral toenails/digits in the hind limbs .et al., 1995; Rogers et al., 1997; Al-Hizab et al., 2007; Janardhan et al., 2011).The significant observation of our study that clinically aggressive tumor lesions have increased chances of being malignant agrees well with the previous reports (Tageldin and Omer, 1986; Gahlot In summary, salient findings from our study reveal that the medial toenails of the fore limbs are frequently prone to squamous cell carcinomas and that a timely surgical excision of the tumorous mass completely alleviates this afflictive condition in camels."} +{"text": "The ion channels kinetics are affected by the trauma-induced damage to neural membrane structure. The density of affected channels is at its highest in the nodes and the axon initial segment (AIS). In this study we focus on the consequences of modifying sodium channels kinetics in the AIS to model the effect of damage. It is inspired by a similar study on damaged nodes of Ranvier and by eo on AIS ,3. The AA neuron was simulated as three sections, each containing multiple compartments: the soma, the initial segment and the myelinated axon. Each compartment was modeled by the classical Hodgkin-Huxley equations using appropriate densities for each type of ion channel. The types of channels and their distributions were based on a previous study looking specifically at cortical pyramidal neurons having Nav1.2 and Nav1.6 at the AIS . The kinWe explored the effect of damage on the initiation and propagation of APs during repeated stimuli which deplete concentration gradients. Our model allows the investigation of the effect of CLS on the location and duration of the initiation and the threshold duration of Ap-initiatingcurrents. We find that the current threshold for action potential initiation decreases with increased damage or decreased length of the AIS."} +{"text": "The prevalence of both urinary and faecal incontinence, and also chronic constipation, increases with ageing and these conditions have a major impact on the quality of life of the elderly. Management of bladder and bowel dysfunction in the elderly is currently far from ideal and also carries a significant financial burden. Understanding how these changes occur is thus a major priority in biogerontology. The functions of the bladder and terminal bowel are regulated by complex neuronal networks. In particular neurons of the spinal cord and peripheral ganglia play a key role in regulating micturition and defaecation reflexes as well as promoting continence. 
In this review we discuss the evidence for ageing-induced neuronal dysfunction that might predispose to neurogenic forms of incontinence in the elderly. The lower abdomino-pelvic cavity contains organs including the small and large intestines, the bladder and components of the lower urinary tract. Numerous studies have demonstrated that these structures are subject to age-related changes that may lead to an increase in the incidence of bladder/bowel disorders in the elderly. The causes of these changes are likely to be multifactorial including ageing of the effector cells (e.g. smooth muscle) and also the cells that regulate their function . The aim of this review is to discuss the potential neurogenic (e.g. defective neurotransmission) mechanisms resulting in ageing of these different abdomino-pelvic organs with an emphasis on the spinal and peripheral neurons that provide the efferent (motor) innervation of these structures to regulate bladder and bowel function.Ageing of the bladder and lower bowel may result in problems of storage of faecal material and urine (waste), manifest in incontinence and neurogenic incontinence have been suggested to be equally common in elderly individuals (aged 80+) in residential care with a study showing that in a small sample of both men and women 52\u00a0% of patients developed faecal incontinence secondary to faecal impaction whilst in 48\u00a0% the incontinence was neurogenically mediated but the properties and arrangement of the cells in these organ systems is very different. Here the main focus of this review is on the neuronal control of the bladder, bowel and lower urinary tract structures and a detailed discussion of other cell types is not included.The bladder is a hollow muscular organ connected to the kidneys by the ureters. Urine is excreted from the kidneys, passes through the ureters, and is stored in the bladder before elimination via the urethra during the micturition reflex are also present in the bladder and urethra acetylcholine (Ach) and Substance P , which is equivalent to the more diffuse plexus of postganglionic cells supplying the pelvic organs of larger mammals , are present in the detrusor muscle, but there is also a significant innervation of the lamina propria lying next to the urothelium [see above and Birder ]. These A group of sympathetic preganglionic neurons, located within the intermediolateral cell column and dorsal grey commissure of upper thoracolumbar spinal cord to the lumbosacral spinal cord . Sensory information is also relayed via the thalamus to other brain regions including the hypothalamus and areas of the forebrain such as the insula and pre-frontal cortex level have shown structural changes in ageing rat bladder Elbadawi . The rectum exhibits some differences from, but has the same general structure and organisation as the distal colon, while the ASC has some distinct anatomical features.Like that of the rest of the GI tract, the rectal wall consists of two outer smooth muscle layers , between which a complex network of ganglia, the myenteric plexus, is located. A second network of smaller ganglia, the submucous plexus, is present in the connective tissue that lies between the muscularis externa and the mucosa. These two networks of intrinsic ganglia constitute the enteric nervous system (ENS). 
The smooth muscle layers are richly supplied with nerve fibres, including intrinsic nerves and the processes of extrinsic fibres , and the structurally similar fibroblast-like cells (FLCs) are present throughout the GI tract, including the terminal bowel. They are present around the myenteric ganglia (ICC-MY), in the muscle layers (ICC-LM and ICC-LM), and in some gut regions, in the submucosa see e.g. Komuro . In the The intestinal mucosa consists of the epithelium, which is composed of diverse epithelial cells that release gut peptide hormones or serotonin, absorptive enterocytes and mucus-secreting goblet cells) and underlying connective tissue that is richly supplied with nerve fibres, associated glial cells, blood vessels and immune system cells. The intestinal mucosa is thus a highly complex tissue with a range of important roles, including barrier function and defence, as well as absorption of nutrients and also neural and endocrine signalling and appetite regulation. It is also the interface with the microbiota, which are now appreciated to have a major influence on the whole organism.There is evidence that changes in mucosal epithelial cells take place during ageing. For example, some populations of gut hormone-producing EECs have been found to increase in number, while others decrease in old age [reviewed in Saffrey ]. ElucidLike other parts of the digestive tract, the ano-rectum is innervated by intrinsic neurons of the ENS, and by extrinsic sensory and autonomic neurons Furness , 2012. TThe terminal bowel (but not the EAS) receives postganglionic parasympathetic and sympathetic innervation from neurons of the pelvic ganglia. As is the case for the innervation of the bladder, the ganglionic organisation of the peripheral neurons that innervate the terminal bowel varies between species. The preganglionic input to sympathetic neurons is from the lumbar spinal cord; that of the parasympathetic is from the sacral spinal cord see e.g. Keast . The senNormal defaecation depends upon the voluntary relaxation of the EAS and pelvic floor muscles and the involuntary relaxation of the IAS. This involuntary action is the recto-anal inhibitory reflex (RAIR). The RAIR is stimulated when stools pass into the rectum from the sigmoid colon and cause rectal distension. This distension is sensed by mechanoreceptors in the rectal wall, which cause a neurally-mediated relaxation of the IAS muscle. The reflex is an intrinsic one, as it occurs even after spinal cord transection and is absent in patients with Hirschprung\u2019s disease, in which the terminal bowel lacks enteric neurons receives extrinsic parasympathetic efferent innervation via the vagus nerve and gut regions during ageing have not yet been performed; \u00a0with\u00a0work focusing mainly on the innervation of the stomach and small intestine, and to a lesser extent, on the colon. Here we briefly summarise the changes that have been reported in these areas, to highlight that this is likely to be an area of importance in the terminal bowel.Analysis of changes in the density of extrinsic parasympathetic nerve fibres in the GI tract during ageing is difficult, because populations of intrinsic enteric neurons express the same markers as extrinsic parasympathetic nerves. A similar problem exists in many species for extrinsic afferent fibres. Studies of change to the extrinsic innervation have therefore involved anterograde tracing , hence these can be used as markers. 
A reduction in the density of varicosities and glyoxylic acid fluorescence has been described in the rat small intestine, along with swollen axons to the anal sphincter in the elderly have been described Hall which inIn preparation).Interstitial cells are now known to play essential roles in the regulation of intestinal muscle contractility and most likely also in the bladder (McCloskey The major pelvic ganglion (MPG) in rodents has been the focus of a number of ageing studies and in the context of this review is important because it contains autonomic postganglionic neurons that project to both the bladder and terminal bowel (colon/rectum) then studies of age-associated changes to these neurons may be key to understanding neurogenic forms of bladder and bowel dysfunction in the elderly. In a series of studies using quantitative immunocytochemistry a number of significant changes have been noted in terms of the innervation patterns contained within specific spinal nuclei that govern these behaviours. Similar to the relationship described for the MPG (see above), sympathetic preganglionic neurons and their innervation may be more prone to the effects of ageing. Studies have shown that sympathetic preganglionic neurons, projecting to the MPG, are reduced in length and exhibit less branching of the rat, which contains somatic motoneurons that innervate the external urethral sphincter and ischiocavernosus muscles. Results were again variable with serotonin immunoreactive densities significantly decreased, substance P and tyrosine hydroxylase showing no change and vesicular acetylcholine transporter containing terminals significantly increased . A loss of hind limb motor function has previously been linked to the neurodegenerative loss of serotonin releasing long-descending axons from the medulla to lumbosacral regions of the spinal cord that govern hind limb muscle contraction that might influence bladder and bowel function in the elderly. Some recent work has employed functional magnetic resonance imaging to look at brain activation during micturition in humans with urge incontinence. These studies have found lower activation in structures including the insula and dorsal anterior cingulate cortex, leading the authors to conclude that with increasing age there are weaker signals in the bladder control network as a whole and that this may be responsible for the development of urge incontinence occur in some of the populations of neurons that regulate bladder and bowel function. Not all neuronal populations, however, seem to be similarly affected e.g. parasympathetic cells versus sympathetic cells in controlling bladder function. There is also variation in the extent of the changes seen in some populations of enteric neurons. The differences between different populations may in part be due to differences in the inherent properties of distinct groups of neurons. Differences between studies that have quantified neuronal losses have also emerged ,\u00a0and bowel, and the microbiota. The gut microbiota are now appreciated to play a major role in many aspects of the biology of the whole organism, including the gut-brain axis, and recent evidence indicates that changes in the intestinal microbiota occur during ageing (see Saffrey"} +{"text": "Ixodes ricinus, and how these may affect the modelling outcomes. We aim to provide a background of rules against which develop reliable models for these parasites. 
The use of partial sets of occurrences of the tick might produce unreliable associations with climate because the algorithms cannot capture the complete niche to which the tick is associated. Reliability measures of the model cannot detect these inaccuracies, and undesirable estimations of the niche will prevail in the chain of further calculations. The use of inadequate environmental variables (covariates) may lead to inflation of the results of the model through two statistical processes, called autocorrelation and colinearity. The high colinearity existing in climate products derived from interpolation of climate recording stations is demonstrated, and it is explicitly advised the training of climate models with satellite-derived information of climate, of which colinearity of the time series has been removed through a harmonic regression. The high uncertainty if inference on the climate niche is applied into different time slices, like projected climate scenarios is also pointed in the results.There is a growing interest in inferring the associations of health-threatening arthropods to capture the climate niche to which they associate, projecting such inference on a territory. This is intended to predict the range of distribution of the tick and to understand their responses to climate scenarios, using the so-called correlative models. However, some methodological gaps might prevent to obtain an adequate background against which test hypotheses. We explore, describe and illustrate these procedural inaccuracies with examples focused on the tick"} +{"text": "Rhinogenic contact point headache (RCPH) is a pain that arises from two opposing mucosal surfaces in the nose. The diagnosis is made by history and physical examination along with nasal endoscopy and imaging. The mainstay of treatment is surgical removal of contact area.To evaluate the effectiveness of non-surgical managements including behavioral and pharmacotherapy in treatment of RCPH.Thirty patients with confirmed diagnosis of RCPH underwent non-surgical management of RCPH using recommendations for absolute contralateral nostril breathing during the pain period, applying warm cushion and medical treatment with non-steroidal analgesics and beta blocker. The results of treatment compared with the results of a group of patients who underwent surgical management.More than eighty six percent of patients experienced a significant relief in the severity of the pain. In contrast to the surgically managed group, the frequency of headaches was not altered.Non-surgical management of RCPH may have a role in patients who do not accept surgery or are not a candidate for surgical management.No conflict of interest."} +{"text": "Shifting the line of sight is naturally accomplished by the movements of both eye and head. We are studying the oclumotor system responsible for planning such head-free gaze-shifts. This includes not only the study of the kinematic mechanisms for coordination of eye and head in three-dimensional space, but also an inquiry into the nature of the internal representations that underlie the observed behavior. 
The latter is believed to be based on neural representations of sensory signals, coding information is receptors\u2019 frame of reference, being transformed into representations of motor commands, coding information in effectors\u2019 reference frame.At the behavioral level, we propose a kinematic model which gets retinal error and initial eye and head orientations as input and describes an experimentally-inspired sequence of rotations including saccadic eye movement, head movement and vestibule-ocular reflex (VOR). Experimentally observed constraints, Listings\u2019 law for eye and Fick strategy for head , have beAt the representation level, we have used the neural engineering framework (NEF) to impleThe success of our theoretical study will be evaluated by its success in simulating the known behavior and replicating internal neural signals resembling those recorded by neurophysiologists. The kinematic model has been evaluated based on successfully simulating the known behavior: the accuracy of the gaze shifts and obeying the kinematic constraints for eye and head. The neural network model will be evaluated based on its success in replicating the internal neural signals resembling those recorded by neurophysiologists: the frames of reference and position dependencies of the artificial units."} +{"text": "Status Epilepticus, which have apical dendritic alterations and spine loss, and control animals, which do not have these alterations. This complex arrangement of cells and processes allowed us to study the combined effect of mossy fiber sprouting, altered apical dendritic tree and dendritic spine loss in newborn granule cells on the excitability of the dentate gyrus model. Our simulations suggest that alterations in the apical dendritic tree and dendritic spine loss in newborn granule cells have opposing effects on the excitability of the dentate gyrus after Status Epilepticus. Apical dendritic alterations potentiate the increase of excitability provoked by mossy fiber sprouting while spine loss curtails this increase.Temporal lobe epilepsy strongly affects hippocampal dentate gyrus granule cells morphology. These cells exhibit seizure-induced anatomical alterations including mossy fiber sprouting, changes in the apical and basal dendritic tree and suffer substantial dendritic spine loss. The effect of some of these changes on the hyperexcitability of the dentate gyrus has been widely studied. For example, mossy fiber sprouting increases the excitability of the circuit while dendritic spine loss may have the opposite effect. However, the effect of the interplay of these different morphological alterations on the hyperexcitability of the dentate gyrus is still unknown. Here we adapted an existing computational model of the dentate gyrus by replacing the reduced granule cell models with morphologically detailed models coming from three-dimensional reconstructions of mature cells. The model simulates a network with 10% of the mossy fiber sprouting observed in the pilocarpine (PILO) model of epilepsy. Different fractions of the mature granule cell models were replaced by morphologically reconstructed models of newborn dentate granule cells from animals with PILO-induced Status Epilepticus (SE) by pilocarpine (PILO). Several days after the injury the new cells show different morphological alterations, for example, in dendritic spines and branching patterns, as well as with the formation of axonal sprouting. 
The way in which these new cells are integrated into the hippocampus is still unknown with conflicting data in the literature. Here we used computer simulation to test if the activity of the dentate gyrus is affected by the presence of different proportions of new cells after PILO-induced SE. Our results show that the specific morphological alterations present in the granule cells in rats with PILO-induced SE may be responsible for increasing (mossy fiber sprouting) or decreasing (spine loss) the activity in the network. The imbalance between these effects may be manifest as an epileptiform network behavior.Neurogenesis is currently a well known phenomenon in the adult brain, in special in some areas such as the subventricular zone and the dentate gyrus in the hippocampus, in which different endogenous and exogenous factors provoke cell proliferation. In the specific case of the dentate gyrus, granule cells proliferate exhibiting altered morphology after the induction of Status Epilepticus (SE) induced by kainic acid or pilocarpine Several evidences have shown that prolonged seizures such as those in the animal models of In this scenario, some of the most studied phenomena are the plastic alteration of the circuits of the DG GCs. In fact, DG GCs present a series of morphological and anatomical alterations induced by temporal lobe epilepsy (TLE) Mossy fiber sprouting is the most widely studied of the above mentioned alterations using different approaches including animal models Neuronal morphology is an important factor in the determination of the electrophysiological behavior of a cell Status Epilepticus (SE) induced by pilocarpine (PILO) and control animals. The models in the two groups (PILO and control) have the same distributions of ionic channels and the same maximal conductance densities over their dendritic areas measured according to their distance from soma . With this approach we aimed to evaluate the effect of morphological alterations alone on single-cell excitability. We found that GCs with altered morphology are less excitable when stimulated in a simulation of a patch clamp protocol In a previous research In the present work, we extended our single-cell study to a network context (a collection of neurons) to obtain information on the effect on DG hyperexcitability of seizure-induced apical dendritic morphological alterations in GCs in addition to dendritic spine loss. We use a previously described network DG model, which incorporates detailed structural and biophysical information on DG network and cells The model of Santhakumar et al. Thus, the objective of our study was to obtain mechanistic insights on the integrated action of three types of GC seizure-induced morphological alterations, namely mossy fiber sprouting, apical dendritic tree alterations and dendritic spine loss, on DG hyperexcitability.We used a 1\u22362000 scaled-down model of the dentate gyrus The network model contains 500 granule cells (GC), 6 basket cells (BC), 15 mossy cells (MC) and 6 hilar perforant-path associated cells (HC), arranged in a ring structure with topographic connections between cells . Each MChttp://senselab.med.yale.edu/ModelDb/ShowModel.asp?model=51781) for the condition with 10% of MFS. The only change we made was to replace the two dendrites model of GC by another with realistic morphology from a sample of 74 normal GCs three-dimensional reconstructions available in neuromorpho.org. The sample consists of neurons reconstructed by the groups of Brenda J. 
Claiborne Our model of DG was identical to the model by Santhakumar et al. The GC computational models were constructed following the methodology described in Tejada et al. For the location of the synapses it was considered a virtual molecular layer with thickness equal to the average maximum dendritic length observed in the three-dimensional models plus and minus the standard error , we generated two big families of models with different proportions of seizure-induced neurogenesis (from 0 up to 100%) in which the mature GC models were replaced by three-dimensional reconstructions of newborn (30 days old) GCs. In each of these models the mature GCs that were replaced by newborn cells were chosen at random from the entire population of mature GCs in the network. The sample of newborn cells consists of 40 reconstructions of doublecortin-positive DG GCs have spines implemented explicitly, so we had to represent spine loss in PILO GCs in an indirect way. We chose two ways of doing so here. The default way, present in all spine loss simulations, was done to represent spine loss by a reduction in the probability of connections of the PILO GCs inserted in the model to account for the reduction in dendritic synaptic sites. Three values of probability reduction were considered: 25%, 50% and 75%. The second way, considered together with the reduction in connection probability in some of our simulations, was done to introduce corrections in the membrane resistivity and capacitance of the PILO GC models to account for the reduction in membrane area To implement the change in the probability of connections of the newborn PILO GCs with spine loss, when the network was built a newborn PILO GC had a reduction of 25%, 50% or 75% of receiving a connection from other cell types in the network. However, a change in the probability of connections can affect the convergence and divergence parameters of the network. Because of this, we simulated two possible scenarios in which spine loss is associated (co-exists) with MFS and dendritic alterations. In the first scenario, which we called spine loss 1 (SL1), we maintained the divergence and convergence parameters of the network by compensating the loss of connections by connecting the cells that would be connected with the newborn GCs that lost spines to other mature cells. In the other scenario (SL2), we did not create new connections with mature cells to compensate for the smaller amount of connections with newborn GCs with spine loss.http://www.neuron.yale.edu) to ran the simulations, with a time-step of 0.1 ms and custom-made Matlab scripts for data analysis and graphs.Our first study was done to compare the intrinsic excitability of the mature and newborn GC models from control (YOUNG) and PILO samples. We compared with the mature GC models only the newborn GC models which have dendrites that reach the outer molecular layer and, consequently, can receive perforant path stimulation (6 YOUNG and 2 PILO GC models). The excitability was assessed via two protocols. The first was used to estimate the minimum depolarizing current pulse of 500 ms duration applied at a proximal dendrite required to evoke a single action potential (rheobase current). The second was used to measure the number of spikes evoked by one depolarizing synaptic-like pulse applied at intervals of 100 ms during 1000 ms at a single distal dendrite (to simulate synapses at the outer molecular layer). 
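The first excitability protocol described above searches for the minimum 500 ms current pulse that evokes a single action potential (the rheobase). The sketch below illustrates such a search through NEURON's Python interface, with a single Hodgkin-Huxley compartment standing in for the reconstructed granule cell models; the geometry, search bounds and spike-detection threshold are assumptions for illustration.

```python
# Rheobase search with a 500 ms current pulse in NEURON. A plain
# Hodgkin-Huxley soma replaces the detailed granule cell morphology used in
# the study; bounds and thresholds are illustrative assumptions.
import numpy as np
from neuron import h
h.load_file("stdrun.hoc")

soma = h.Section(name="soma")
soma.L = soma.diam = 20          # micrometres
soma.insert("hh")

stim = h.IClamp(soma(0.5))
stim.delay = 50                  # ms
stim.dur = 500                   # ms, as in the 500 ms pulse protocol

v_vec = h.Vector().record(soma(0.5)._ref_v)

def spike_count(amp_nA):
    """Run one trial and count upward crossings of 0 mV."""
    stim.amp = amp_nA
    h.finitialize(-65)
    h.continuerun(600)
    v = np.array(v_vec)
    return int(np.sum((v[:-1] < 0) & (v[1:] >= 0)))

lo, hi = 0.0, 1.0                # nA; assume 1 nA is suprathreshold here
for _ in range(20):              # bisection on the pulse amplitude
    mid = 0.5 * (lo + hi)
    if spike_count(mid) >= 1:
        hi = mid
    else:
        lo = mid
print(f"estimated rheobase ~ {hi:.3f} nA")
```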
The results are shown in The simulations of the DG mature cells networks showed that the insertion of GCs with realistic morphology did not modify the expected response of the DG network model. Our simulations are consistent with the original model of Santhakumatar et al. To perform our studies on the effect of the introduction of newborn GCs on the mature DG network, we chose the mature network with 10% of MFS. This case was chosen because this level of sprouting was almost sufficient to produce activity that spread to the entire network , so we cThe addition of spine loss to the PILO GC models dramatically reduced the increase of activity produced by the newborn cells in the case with MFS . In the To quantify the effect of the introduction of newborn GCs on the activity of the network for the case with 10% of MFS we generated the graphs shown in The curves in Incidentally, it is interesting to mention that for proportions of inserted newborn GCs (both YOUNG and PILO) around and above 50% the overall frequency of the network with 10% of MFS became higher than the overall frequency of the mature network with 50% of MFS (data not shown).Corroborating what was observed in In the cases with spine loss in which the convergence and divergence factors of the network were maintained (MPSL1 and MPSL1c), The reduction in the overall frequency of the network due to spine loss was much stronger in the cases in which the convergence and divergence factors of network were not maintained (MPSL2 and MPSL2c). In these cases, the overall frequency of the network always remained at or below 50% of the overall frequency of the networks without spine loss. Moreover, in the cases with spine loss and altered divergence and convergence factors of the mature cells the growth of the overall frequency of the network with the proportion of inserted newborn GCs was much slower than in the cases with either no spine loss or spine loss with unaltered divergence and convergence factors of the mature cells. This indicates a strong limiting effect of spine loss over the network excitability in situations in which the rearrangement of connections do not preserve the original convergence and divergence factors of the mature cells that remain in the network.The effect of altering the probability of connections of newborn PILO GCs with spine loss can be seen in The substitution of the simplified GC models by morphologically realistic GC models into the DG model originally proposed by Santhakumar et al. Another change in comparison with the original model of Santhakumar et al. per se is not able to provoke significant changes in the network behavior despite these cells being smaller and therefore more excitable were maintained. This reduction is consistent with the idea that spine loss acts to maintain homeostasis Besides the reduction in the number of connections, spine loss also reduces the membrane surface area of a cell with consequent changes in the membrane resistivity and capacitance Our computational modeling study has shown that the different neuronal morphological alterations and seizure-induced neurogenesis considered here act much more in combination than as specific features of epileptogenic network activity. On the one hand, there are alterations that increase the number of recurrent connections, such as MFS, having as outcome an increase in the activity of the circuit. On the other hand, alterations such as spine loss can reduce the number of connections and consequently decrease the activity of the network. 
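Several comparisons above are reported as the "overall frequency" of the network. A simple helper of the kind that could produce such a summary from simulated spike trains is sketched below; the spike-time container format is an assumption made for illustration.

```python
# Population-averaged firing rate ("overall frequency") over a simulation
# window. The list-of-lists spike format is an illustrative assumption.
def overall_frequency(spike_times_per_cell, t_start_ms, t_stop_ms):
    """spike_times_per_cell: one list of spike times (ms) per cell.
    Returns the mean firing rate across cells, in Hz."""
    window_s = (t_stop_ms - t_start_ms) / 1000.0
    rates = [
        sum(1 for t in spikes if t_start_ms <= t < t_stop_ms) / window_s
        for spikes in spike_times_per_cell
    ]
    return sum(rates) / len(rates) if rates else 0.0

# Example: three cells over a 2 s window
raster = [[120.0, 450.0, 1800.0], [], [300.0, 310.0, 330.0, 900.0]]
print(overall_frequency(raster, 0, 2000), "Hz")
```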
Along with the above, the DG is constantly exposed to the generation of new cells, which are usually shorter, much more branched Our simulation suggests that the changes in the morphology of GCs provoke diverse effects in the network activity. Some of these changes could be responsible for the increased activity observed in TLE but other could be acting in the opposite direction, decreasing the excitability of the circuit. The balance between these two drives may depend on other factors that were not included in our model, in special, ion channel alterations which may provoke more complex interactions between all of the factors present in an altered circuit.In relation to the adaptation made in the original model of Santhakumar et al. Future studies will obviously add new features to the current modeling, in order to approximate it even more to the complexity and emergent properties of the actual DG and its associated plastic substrates, for example: presence of synapses on basal dendrites In conclusion, our findings strongly suggest that the combined presence of morphological features such as MFS, altered apical dendritic tree and spine loss, in a computational model of the DG network, can explain better the inherent complexity of the circuits associated to temporal lobe epileptogenicity. The current network perspective reliably mimics dysfunctional characteristics not necessarily present when simplified or isolated parameters are considered."} +{"text": "The normal development of an organ depends on the coordinated regulation of multiple cell activities. Focusing on tubulogenesis, we review the role of specialised cells or groups of cells that are selected from within tissue primordia and differentiate at the outgrowing tips or leading edge of developing tubules. Tip or leading cells develop distinctive patterns of gene expression that enable them to act both as sensors and transmitters of intercellular signalling. This enables them to explore the environment, respond to both tissue intrinsic signals and extrinsic cues from surrounding tissues and to regulate the behaviour of their neighbours, including the setting of cell fate, patterning cell division, inducing polarity and promoting cell movement and cell rearrangements by neighbour exchange. Tip cells are also able to transmit mechanical tension to promote tissue remodelling and, by interacting with the extracellular matrix, they can dictate migratory pathways and organ shape. Where separate tubular structures fuse to form networks, as in the airways of insects or the vascular system of vertebrates, specialised fusion tip cells act to interconnect disparate elements of the developing network. Finally, we consider their importance in the maturation of mature physiological function and in the development of disease. Failure to regulate any one of these in time and space has disastrous effects and all need to occur in coordination with the others to produce the final patterned and fully functional structure. While many aspects of developmental control result from reciprocal signalling involving all or the majority of constituent cells, a special class of distinctively placed cells at the tips of tubes or the leading front of migrating cell groups have been found in a wide variety of systems to regulate the activity of their neighbours at key stages of organ development. 
In this review we discuss the selection and distinctive characteristics of these so-called tip cells and chart their activities and, where known, the mechanisms by which they exert their influence.Dictyostelium) to the most complex and in tubular systems as well as in groups of migrating cells signalling (VEGFR2/3) and low levels of inhibitory VEGFR1 signalling lead to enhanced expression of the Notch ligand, Dll4, enabling these cells to outcompete their neighbours for the tip cell fate signalling in trachea and of Wingless and JAK/STAT in Malpighian tubules (Breathless (FGF receptor) clones in developing dorsal tracheal branches indicates that cells receiving higher levels of FGF signalling than their neighbours always acquire tip cell fate but that the final outcome is determined by Notch-mediated competitive interactions. However, Araujo and Casanova vs. trailing cell fate. Once specified, tip cells exhibit altered patterns of gene expression, changes in cell shape and in the activity of the cytoskeleton and extrinsic factors . Ectopic expression of either \u03b2-catenin or Ceh-22 is sufficient for both daughters to develop as DTCs In contrast to these mechanisms for tip cell selection, all of which involve competition between cells that receive different levels of signalling, the specification of the distal tip cells (DTCs) in the nematode ependent D. The goThus in each case tip cell fate results from intrinsic and/or inductive factors that confer tip cell potential and, in most cases, refinement of that potential through competitive interactions.3Tip cells regulate tubulogenesis through both tissue-intrinsic and tissue-extrinsic mechanisms, which must be tightly coordinated to generate a physiologically competent organ of appropriate size and shape. In some cases, tip cells play important mitogenic roles, regulating organ growth in a tissue-intrinsic manner, as seen in the nematode gonad and insect renal system and CED-10 (Rac1 GTPase) acts in the DTC to control cell shape changes during cell migration Cells at the leading edge of the growing tubes play important steering functions to guide the developing organ appropriately through the body cavity. In the Tip cells also play important guidance functions in mammalian systems such as sprouting angiogenesis. Here, pioneering tip cells are found at the outgrowing front of newly sprouting endothelial branches B. These Drosophila salivary gland, cells at the leading front of the primordium guide gland migration along several tissues to establish their final position within the body cavity is expressed locally in patches of epidermal and mesodermal cells around the tracheal sacs via the FGFr2b receptor, which is required for budding. Cells in the lung bud tips upregulate target genes including sprouty2, bmp2 & 4 and sonic hedgehog that act to further refine FGF signalling and FGF10 expression Although distinctive \u2018tip cells\u2019 have not been identified in developing mammalian lung and kidney, groups of cells at the tips of outgrowing branches show specific patterns of gene expression and distinctive behaviours. Branching morphogenesis is induced at the tips of primary lung buds and ureteric buds by local reciprocal interactions with surrounding mesenchyme. 
In the lung, FGF10 is expressed dynamically in mesenchyme surrounding the primary epithelial buds The developing ureteric bud (UB) of the vertebrate kidney arises from the Wolffian duct, and undergoes a complex pattern of growth, branching and remodelling to form the urinary collecting duct system 3.4A major step during tubulogenesis is the dramatic elongation of the primordial buds into the long tubes of the mature tissue. In the insect tracheal and renal systems, the process of tube extension involves cell rearrangements in the plane of the epithelium, in the absence of cell proliferation. Strikingly these cell intercalation events are triggered by cells at the tips of the developing tubes.Drosophila tracheal system extend dynamic filopodia and lamellipodia in response to FGF signalling, activated by local sources of the ligand Bnl. This tip cell activity leads to the directed migration of the tracheal branch towards the FGF source that anchor the heart. They contact these muscles at abdominal segment boundaries, moving progressively forwards as cell intercalation lengthens the tubule. Tip cells make a final target, always the alary muscle at segment A3/4 boundary, and this contact persists into the adult fly. As the tubules elongate they encounter guidance cues provided by TGF-\u03b2 to the collecting duct (close to the site of urine outflow). Looping of both the nephron and its vascular supply creates a counter-current system that maximises the efficiency of ion and fluid homeostasis. Both the site of connection to the ureter and the tubule tip, the renal corpuscle, are established early in organ development so that tubule extension occurs between these fixed points 3.6In the development of both the insect tracheal system and vertebrate vasculature, sets of discontinuous tubular elements must be connected to generate a fully functional tubular network.Drosophila tracheal system, fusion cells are specified at the distal ends of the tracheal branches and act to interconnect independent or distant tracheal tubes , they act to explore the environment, responding to guidance cues to contact a series of target cells (thereby stabilising tubule architecture during morphogenesis) and they finally make a permanent specific contact, which maintains the shape and positioning of the mature tubule in the body cavity The activity of tip cells in the development of The distinctive characteristics of tip cells that enable them to regulate such a range of cell activities during tubulogenesis derives from their ability to act as sensors as well as regulators. They respond to signals, for example in responding to high levels of signalling as they promote branching in tracheal/vascular systems or in the recognition of specific cues for tissue guidance or for attachment to target tissues. However equally important is their ability to send signals , to transmit mechanical stress (in tubule extension) and to modify the environment through the secretion of specific factors or by directed uptake and transcytosis of specific cargoes (such as MMPs or polarised endocytosis and transport to clear the overlying basement membrane).This range of tip cell activity clearly depends on changes in their patterns of gene expression and deployment of the resulting gene products. 
While both mitogenesis and tissue polarity result from the release of EGF from the renal tip cell lineage, tubule architecture is regulated by distinctive, stage-specific filopodial activity, clearing of overlying basement membrane components, cell recognition of targets and the expression of tightly controlled levels of adhesion molecules. While it is likely that transcription of many tip cell specific genes is downstream of the AS-C, which acts to specify tip cells The versatility of renal tubule tip cells depends not only on these changes in tip cell activity but also on alterations in the responding cells. It is striking that the tip cell signal, EGF, controls mitosis during embryonic stages 11\u201312 and that the same signal regulates the planar polarised activity of Myosin II in the same, now post-mitotic, responding cells from stages 13\u201316 4The tip cells of many organs, including the insect tracheal system and the mammalian vasculature, remain active post-embryonically and modulate tissue structure in response to changes in organismal physiology or environmental conditions. This is particularly important during periods of hypoxia, where tip cells promote the growth of new tubular branches to adapt to local oxygen availability Acheta domestica are anchored to the body wall to ensure the hundreds of tubules are well dispersed throughout the haemocoel There is currently no evidence that the insect renal system is extensively remodelled post-embryonically. However, renal tip cells remain attached to their muscle and neuronal target tissues throughout larval and adult life 5Tip cells play pivotal roles in multiple ways during normal tubule development as well as in their physiological responsiveness. We have reviewed the defects that arise when their normal activities are compromised; for example without tip cells fly renal tubules contain half the normal number of cells and the short, stumpy tubules fail to elongate or take up their normal positions in the body cavity. As a result, the clearance of toxins from the haemolymph as well as ionic and water balance are severely compromised"} +{"text": "Oxygen starvation is observed in a variety of pathological states and serves as one of the urgent problems in medicine. A decrease in oxygen supply to tissues is accompanied by the inhibition of metabolic processes (primarily of energy metabolism), which impairs functional activity of the brain. The main source of energy for brain is adenosine triphosphate (ATP). It was shown that the components of adenylate pool can be used as early predictors of hypoxia. Aim of the study: the aim of our study was analysis of adenosine triphosphate (ATP) and adenosine monophosphate (AMP) experimental concentrations and integral coefficient K= in intact animals brain tissue and in disturbances of the oxygen regime by methods of mathematical analysis, as well as detection of some regularity in the character of their changes under the impact of hypoxia for the assessment and prediction of direction of production and utilization energy in metabolic pathways. Methodology: in this study empirical dependencies and criteria of statistical significance of mathematical modeling of quantitative relation between specified brain nucleotide stock indicators for the assessment and prognostication of brain energy state in extreme conditions were used. 
Results and area of their application: The use of empirical dependency methods allowed us to create multiregression models subtle enough to relate the experimental indicators ATP and AMP in hypobaric hypoxia and ischemia with exposures of different duration. The obtained models can be used for prognostication of ATP and AMP concentrations in disturbances of the oxygen regime over short or long periods of time, as well as to obtain information on how the indicator K changes depending on brain hypoxia. Conclusion: functional dependencies are presented in this study to analyze the shape, closeness and stability of the relations between adenine nucleotides characterizing the coupling of energy production and utilization processes, and also to predict the direction of these processes under hypoxic conditions."} +{"text": "Drosophila before going on to discuss our recent findings on the coordinated development of muscles and tendon-like structures in the Drosophila leg. By altering apodeme formation during the early steps of leg development, we affect the spatial localization of subsequent myoblasts. These findings provide the first evidence of the developmental impact of early interactions between muscle and tendon-like precursors, and confirm the appendicular Drosophila muscle system as a valuable model for studying these processes.The formation of the musculoskeletal system is a remarkable example of tissue assembly. In both vertebrates and invertebrates, precise connectivity between muscles and the skeleton (or exoskeleton) via tendons or equivalent structures is fundamental for movement and stability of the body. The molecular and cellular processes underpinning muscle formation are well established and significant advances have been made in understanding tendon development. However, the mechanisms contributing to proper connection between these two tissues have received less attention. Observations of coordinated development of tendons and muscles suggest these tissues may interact during the different steps in their development. There is growing evidence that, depending on animal model and muscle type, these interactions can take place from progenitor induction to the final step of the formation of the musculoskeletal system.
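The Results above describe regression models that predict ATP and AMP concentrations under hypoxia from the duration of exposure. The sketch below illustrates the general idea with a quadratic least-squares fit; the data values, the functional form and the units are hypothetical and do not reproduce the authors' empirical dependencies.

```python
# Fitting simple regression models that relate ATP and AMP concentrations to
# exposure duration, then using them for prediction. Data and model form are
# hypothetical placeholders.
import numpy as np

exposure_min = np.array([0, 5, 15, 30, 60], dtype=float)   # exposure duration
atp = np.array([2.9, 2.5, 2.0, 1.6, 1.1])                  # made-up concentrations
amp = np.array([0.30, 0.45, 0.70, 0.95, 1.40])             # made-up concentrations

def fit_quadratic(t, y):
    """Least-squares fit of y ~ b0 + b1*t + b2*t**2; returns coefficients."""
    design = np.column_stack([np.ones_like(t), t, t**2])
    coefs, *_ = np.linalg.lstsq(design, y, rcond=None)
    return coefs

def predict(coefs, t):
    return coefs[0] + coefs[1] * t + coefs[2] * t**2

atp_model = fit_quadratic(exposure_min, atp)
amp_model = fit_quadratic(exposure_min, amp)
print("predicted ATP at 45 min:", round(predict(atp_model, 45.0), 2))
print("predicted AMP at 45 min:", round(predict(amp_model, 45.0), 2))
```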
Here, we briefly review and compare the mechanisms behind muscle and tendon interaction throughout the development of vertebrates and In vertebrates, the progenitors of axial tendons arise from a dorsal subdomain of the sclerotome, called syndetome, that is immediately adjacent to the myotome from which myogenic cells originate skeleton in order to transmit the force generated by fiber contraction.Much as in vertebrates, Drosophila studies cells that are singled out from a cluster of exoskeleton cells called the apodeme and large Indirect Flight Muscles (IFM)] and the leg muscles are specified as early as the third larval instar (L3) near these tendon-like precursors , an ortholog of Lbx1, which is a key regulator of appendicular myogenesis in vertebrates genes are required for proper patterning of leg muscles and that different levels of Lbe protein contribute to myoblast diversity within the leg were built and visualized using Imaris\u2122 Software than after SrDN expression in tendon-like precursors , the Association Fran\u00e7aise contre les Myopathies (AFM), the Fondation pour la Recherche Medicale (FRM) and the national grant IDE-CELL-SPE from the Agence Nationale de la Recherche (ANR).The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Understanding the growth pattern in Marfan syndrome (MFS) is useful for predicting adult height and for planning the timing of growth reduction therapy. We have reviewed the growth parameters of patients with MFS in Korea and generated the Korean MFS-specific growth curve for understanding the growth pattern in MFS.Anthropometric data were available from 187 males and 152 females with MFS through a retrospective review of medical records. The standardized growth curves were constructed for weight and height according to gender. Comparisons between MFS patients and the general population were performed using a one-sample T-test.Korean MFS patients had similar height and weight compared with the general population at birth. However, linear growth curve of Korean MFS after two years of age showed that the 50th percentile of MFS is above the 97th percentile of normal in both genders. Regarding body mass, although the mean body weight of MFS patients was larger than that of the general population in males and females, the gap of the mean weight curve was small. In the Korean MFS growth curve, the growth pattern and final adult height were nearly analogous to those of the United States (US).Korean MFS-specific growth charts showed that an excessive growth pattern began in the early infant period, which was prominent in terms of linear growth compared to body mass. There were no ethnic differences in the growth pattern compared with Western MFS patients."} +{"text": "Peristaltic transport of copper-water nanofluid in an inclined channel is reported in the presence of mixed convection. Both velocity and thermal slip conditions are considered. Mathematical modelling has been carried out using the long wavelength and low Reynolds number approximations. Resulting coupled system of equations is solved numerically. Quantities of interest are analyzed through graphs. Numerical values of heat transfer rate at the wall for different parameters are obtained and examined. Results showed that addition of copper nanoparticles reduces the pressure gradient, axial velocity at the center of channel, trapping and temperature. 
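The Marfan growth-curve study above compares MFS patients with the general population using a one-sample T-test. A minimal sketch of that comparison is given below; the sample values and the reference mean are made up for illustration.

```python
# One-sample t-test of MFS heights against a general-population reference
# mean, as described in the Methods. All numbers are hypothetical.
from scipy import stats

mfs_heights_cm = [182.1, 189.4, 176.8, 193.0, 185.5, 188.2, 179.9, 191.3]
reference_mean_cm = 173.5

t_stat, p_value = stats.ttest_1samp(mfs_heights_cm, popmean=reference_mean_cm)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```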
Velocity slip parameter has a decreasing effect on the velocity near the center of the channel. Temperature of the nanofluid increases with increase in the Grashof number and channel inclination angle. It is further concluded that the heat transfer rate at the wall increases considerably in the presence of copper nanoparticles. Mixed convective flows have received considerable attention from researchers the world over due to their wide industrial and engineering applications. Convection is seen in ocean currents, sea-wind formation, the rising plume of hot air from a fire, the formation of micro-structures during the cooling of molten metals, solar ponds and in fluid flows around heat dissipation fins. Convection in channels plays a vital role in heat exchangers, removal of nuclear waste and in modern cooling/heating systems. “Nanofluids” refer to heat transfer liquids with enhanced heat transfer capability. Such materials are obtained by suspending nanoparticles in traditional heat transfer liquids, e.g. water, ethylene glycol and engine oil. A number of experimental studies show that the convective heat transfer in nanofluids is considerably higher than that of the base fluids. Enhanced heat transfer in nanofluids has enabled their use in several electrical and engineering applications. This gives rise to a new branch of mechanics named nanofluid mechanics. The wide utility of nanofluids is the reason for growing interest in nanofluid mechanics by researchers of the modern era. The term nanofluid was introduced by Choi. Peristalsis is a well-known mechanism for fluid transport in physiology. In this mechanism sinusoidal waves travel on the walls of tube-like organs of human beings, propelling the fluid contained within the tube in the direction of their propagation. In physiology the principle of peristalsis is seen in the transport of food through the oesophagus, movement of chyme in the intestines, urine transport from the kidneys to the bladder, bile transport in the bile duct, transport of spermatozoa, vasomotion of blood vessels and many others. Peristalsis has proven very useful in fluid transport over short distances, preventing the fluid from being contaminated. Owing to its utility, this mechanism has been adopted by engineers in designing several industrial appliances including roller and finger pumps and peristaltic pumps in heart-lung and dialysis machines. Modern hose pumps of several kinds operate through the principle of peristalsis. Peristaltic-type flow is also used in the nuclear industry for the transport of corrosive fluids. Given such wide occurrence of peristalsis in physiology and industry, many theoretical investigations have been carried out in different flow configurations. It is noticed from the existing information on the topic that not much has been said yet about the peristaltic motion of nanofluids; few recent investigations have constructed such flow models. Here we consider the peristaltic flow of copper-water nanofluid in a symmetric channel whose walls carry sinusoidal waves of wavelength λ and prescribed amplitude; the channel width, the wave geometry, the relations for the effective density, specific heat and thermal conductivity of the nanofluid due to Tiwari and Das, and the quantities from the laboratory frame are specified by equations not reproduced in this extract. Pressure gradient, pressure rise per wavelength, axial velocity and trapping are important quantities in the study of peristalsis. On the other hand, the analysis of heat transfer phenomena is significant in the study of nanofluids.
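The study above adopts the Tiwari and Das formulation for the effective properties of the copper-water suspension, but the extract does not reproduce the equations. The sketch below uses the relations commonly paired with that formulation (mixture rules for density and heat capacity, Brinkman viscosity, Maxwell thermal conductivity) together with typical textbook property values; both the expressions and the constants should therefore be read as assumptions rather than the paper's exact forms.

```python
# Effective properties of a nanoparticle suspension: mixture rules for
# density and heat capacity, Brinkman viscosity, Maxwell thermal
# conductivity. Commonly used with the Tiwari-Das model; treated here as
# assumptions because the extract omits the equations.
def nanofluid_properties(phi, fluid, solid):
    rho = (1 - phi) * fluid["rho"] + phi * solid["rho"]
    rho_cp = (1 - phi) * fluid["rho"] * fluid["cp"] + phi * solid["rho"] * solid["cp"]
    mu = fluid["mu"] / (1 - phi) ** 2.5
    kf, ks = fluid["k"], solid["k"]
    k = kf * (ks + 2 * kf - 2 * phi * (kf - ks)) / (ks + 2 * kf + phi * (kf - ks))
    return {"rho": rho, "cp": rho_cp / rho, "mu": mu, "k": k}

water = {"rho": 997.1, "cp": 4179.0, "mu": 8.9e-4, "k": 0.613}   # typical values
copper = {"rho": 8933.0, "cp": 385.0, "k": 401.0}                # typical values
print(nanofluid_properties(phi=0.05, fluid=water, solid=copper))
```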
Hence the present section is prepared to analyze the behavior of pressure gradient, pressure rise per wavelength, axial velocity, streamlines, temperature and heat transfer rate at the wall for variation in different embedded parameters.Pressure gradient is studied through Pressure rise per wavelength versus flow rate is shown in the Effects of several flow parameters on axial velocity are shown in Trapping for different involved parameters is studied through Gr and Impact of several embedded parameters on temperature and heat transfer rate at the wall are examined through Gr and Br. Numerical values of heat transfer rate at the wall for variations in Heat transfer rate at the wall for variation in nanoparticle volume fraction is analyzed in Velocity and thermal slip effects on the peristaltic transport of copper-water nanofluid in an inclined channel are studied. Findings of present analysis indicate that the addition of copper nanoparticles reduces the pressure gradient. Increase in the value of nanoparticle volume fraction enhances the pressure rise in retrograde pumping region. Nanofluid with low concentration of copper nanoparticles possesses higher value of velocity at the center of channel. Velocity slip parameter has a decreasing effect on the maximum value of axial velocity. Trapping is reduced in presence of nanoparticles. Grashoff number and channel inclination angle have increasing effect on the temperature of nanofluid. Increase in nanoparticle volume fraction largely enhances the heat transfer rate at the wall."} +{"text": "The neocortex is unique to mammals and its evolutionary origin is still highly debated. The neocortex is generated by the dorsal pallium ventricular zone, a germinative domain that in reptiles give rise to the dorsal cortex. Whether this latter allocortical structure contains homologs of all neocortical cell types it is unclear. Recently we described a population of DCX+/Tbr1+ cells that is specifically associated with the layer II of higher order areas of both the neocortex and of the more evolutionary conserved piriform cortex. In a reptile similar cells are present in the layer II of the olfactory cortex and the DVR but not in the dorsal cortex. These data are consistent with the proposal that the reptilian dorsal cortex is homologous only to the deep layers of the neocortex while the upper layers are a mammalian innovation. Based on our observations we extended these ideas by hypothesizing that this innovation was obtained by co-opting a lateral and/or ventral pallium developmental program. Interestingly, an analysis in the Allen brain atlas revealed a striking similarity in gene expression between neocortical layers II/III and piriform cortex. We thus propose a model in which the early neocortical column originated by the superposition of the lateral olfactory and dorsal cortex. This model is consistent with the fossil record and may account not only for the topological position of the neocortex, but also for its basic cytoarchitectural and hodological features. This idea is also consistent with previous hypotheses that the peri-allocortex represents the more ancient neocortical part. The great advances in deciphering the molecular logic of the amniote pallium developmental programs will hopefully enable to directly test our hypotheses in the next future. The Neocortex is a pallial structure that is divided in multiple sub-regions and is made by six layers of distinct neuronal types. 
Despite more than a century of intense research and speculation, the evolutionary origin of this brain region is still unclear Reiner, . Early wNonetheless, subsequent work showed that during early development pallial progenitors of all tetrapods are regionalized into at least four conserved domains, referred as medial (MP), dorsal (DP), lateral (LP), and ventral (VP) pallium, that give rise to distinct radially migrating glutamatergic neurons Segregation of functions, in which two sister cell types lose complementary parts of the gene modules of the former precursor cells. (3) Co-option of functions, in which the precursor cell co-opts the gene modules of an unrelated cell type Expansion and Segregation: gene modules underlying specific functions of UL and DL were present in a single precursor cell in the ancestral DP derivatives and became segregated and subsequently refined in distinct sister cell types. In this case the temporal patterning of DP progenitors will be a mammalian innovation. (3) Spatial to Temporal patterning switch: DP progenitors co-opted the expression of gene modules specifying the neuronal types of other pallial regions , thus leading to the appearance of new cell types in the DP derivatives. The temporal patterning of neocortical progenitors may thus represent a patchwork of formerly spatially segregated developmental programs. In this case part of the neocortical cells may have a sister cell type in a different pallial domain.The specification of neocortical neurons depends on spatial patterning events delimiting the DP progenitors Figure , followeLacerta Muralis, a lizard. However, virtually identical cell types were observed in the LP and VP derivatives of both lizard and mammals thus supporting the spatial to temporal patterning switch hypothesis is a microtubule associated protein involved in cytoskeletal dynamics during migration and differentiation of immature neurons . These authors raised the possibility that an independent progenitor domain giving rise to neurons of both dorsal and lateral cortex may actually exist in some living reptiles. In contrast to this interpretation however, Ulinsky reported that during development the layer II of the reptilian dorsal and lateral cortex is a continuous stratum of cells that is secondarily ruptured during differentiation (Ulinsky, Another interesting aspects of the hypothesis that the neocortex derived from the superposition of lateral and dorsal cortex, is that it may account for some hodological features that the UL shares with the olfactory cortex but not with the reptilian dorsal cortex. For instance, the olfactory cortex of tetrapods possesses homotopic projections to the contralateral hemisphere passing through the anterior commissure (Zeier and Karten, Ulinsky, . StartinUlinsky, . This crUlinsky, . AccordiUlinsky, . 
An intrUlinsky, .Several crucial questions remain regarding the emergence of the inside out-gradient of neurogenesis, the appearance of layer IV cells and the arrival of the collo-thalamic projections to the dorsal pallial derivatives.A hallmark of the evolution of the mammalian neocortex is the emergence of a SVZ in the DP (Mart\u00ednez-Cerde\u00f1o et al., In this perspective, in the last years different authors have analyzed the pattern of expression of the sauropsid orthologs of genes expressed in specific neocortical layers (Nomura et al., in vitro (Suzuki et al., Surprisingly, early dorso-medial and dorso-lateral progenitors of the chick pallium were able to sequentially produce cells expressing DL and UL markers Nonetheless, these results introduce the intriguing possibility that an intrinsic temporal patterning mechanism specifying pallio-fugal, thalamo-recipient, and pallio-pallial neuronal types was present in pallial progenitors of the common ancestor of all amniotes or even vertebrates. This idea would be consistent with the fact that temporal patterning of primary progenitors is a major mechanism for generating neuronal diversity in Drosophila (Li et al., In conclusion, our understanding of the genetic logic of cell type specification in the neocortex and other pallial regions of amniotes is constantly growing and this will likely enable to test current theories of the evolution of the mammalian pallium. These analyses would be greatly helped by the comparison of the genetic fingerprint of more restricted cell populations and the layer II DCX+/Tbr1+ cells represent an attractive candidate for such analyses.The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Although emotional processing in words became a strong focus of research recently, less attention was given to the question of functional localization of emotion effects in the stream of visual word recognition directly. Here, the impact of emotional connotation of words on different processing stages of reading is investigated. Or put alternatively: How is emotional valence represented within the linguistic representational system?From a psycholinguistic perspective there are at least two types of linguistic representations which are central to visual word recognition. These are lexical and semantic representations. It is a challenging endeavor to define the term lexical: Whether low-level lexical representations should be differentiated from higher-level lexical representations , is for example an open issue. Furthermore, orthographic processing may comprise sublexical processing on the level of letters and syllables, and lexical processing on the level of the complete word form. The term semantic commonly refers to the meaning of words, presumed as internally represented concepts made of smaller elements of meaning organized by semantic similarity.In psycholinguistics separate lexical and semantic representations are presumed. Accordingly, most models of visual word recognition assume that lexical representations are retrieved after basic low-level visual perception of line forms and colors, which then culminate in activation of semantic knowledge. Models of word recognition differ with respect to their assumptions about discreteness of the processing stages and to mechanisms of accessing the lexical and semantic representations. 
While early models of visual word recognition postulated discrete processing stages components in visual word processing should first be considered irrespective of emotion. Higher-level lexical representation effects are observed already 100-ms post-stimulus. Since word frequency is broadly accepted to be a lexical factor, such modulations imply that lexical access is underway already starting in the time course of the P1 emotional valence denotes whether a stimulus is being perceived and experienced as positive or negative, and (ii) arousal constitutes the intensity of the appraisal process. I will limit the article to discussion of valence effects which can be understood as the dimension that underlies the quality of emotional experience. Considering the time course of emotional valence effects three different components of the ERP were observed with words. Very early emotion effects have been observed in the time course of P1 exhibit a shorter latency to emotional than to neutral words (Kissler and Herbert, Taken together, it can be assumed that emotional valence is a semantic feature, possibly the first semantic feature to be retrieved from semantic memory when reading words. A growing body of evidence is pointing to a second possible locus of emotion in the lexical linguistic representations. The exact level of lexical representation and the underpinning learning mechanisms are open issues. The conclusion that emotional valence impacts word recognition on multiple stages and might be both part of lexical and of semantic representations is pinpointing future challenges for models of visual word recognition, that is, first, the need for integration of models that either have a focus on lexical or on semantic processing, and second, the integration and prediction of word dimensions like emotion within such models.The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Most ofThe signal of dopamine neurons in classical and instrumental conditioning has been explained in terms of the temporal-difference (TD) algorithm. However it is not clear whether and, if so, how reinforcement learning can account for the dopamine signals in complex decision-making tasks with noisy sensory information and temporal uncertainty of the relevant task events, as is the case in the detection task mentioned above . We haveThe dopamine phasic activity predicted by the model matches the experimental data and the prediction of the psychometric curve is consistent with the animal performance. Furthermore, the model provides an interpretation of the condition-dependent dopamine response to the go cue instruction in terms of reward prediction error. Using Bayesian inference the model constructs an internal belief about the presence of the somatosensory stimulus. This belief reflects the confidence about the sensory perception and thus the value assigned to this perceptual judgment. The large belief in stimulus-present decisions represents a high degree of confidence in the sensory perception and a great expectation for future rewards. On the contrary, stimulus-absent choices reflect a small belief and consequently a larger uncertainty about the decision and the future reward. 
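The dopamine model summarised at the end of the preceding record builds a Bayesian belief about the presence of the somatosensory stimulus and links that belief to the value of the perceptual report. A toy version of the computation is sketched below; the Gaussian likelihoods, the prior and the reward mapping are illustrative assumptions rather than the published model.

```python
# Posterior belief that a stimulus was present, given one noisy observation,
# under equal-variance Gaussian likelihoods. Parameters are illustrative;
# the expected reward assumes a unit reward for a correct "present" report.
import math

def belief_stimulus_present(observation, prior=0.5, mu_present=1.0,
                            mu_absent=0.0, sigma=1.0):
    def gauss(x, mu):
        return math.exp(-0.5 * ((x - mu) / sigma) ** 2)
    num = gauss(observation, mu_present) * prior
    den = num + gauss(observation, mu_absent) * (1 - prior)
    return num / den

# A strong observation yields a confident belief, hence a high expected
# reward for reporting "present"; a weak observation does not.
for obs in (1.5, 0.2):
    b = belief_stimulus_present(obs)
    print(f"observation={obs:.1f}  belief={b:.2f}  expected reward of reporting 'present'={b:.2f}")
```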
This computational description of belief agrees with the previous interpretation of the data and desc"} +{"text": "Some evidence suggests that women with primary dysmenorrhea (or painful period) often have traumatic experience with parental attachments, but the exact relationship between styles of the parental bonding and the detailed aspects of the disorder is unclear.From university-student women, we invited 50 primary dysmenorrhea patients and 111 healthy volunteers to undergo tests of the functional and emotional measure of dysmenorrhea (FEMD), the Family Relationship Questionnaire (FRQ), and the visual analog scale for the pain intensity experienced.Besides the high scores of the FEMD functional and emotional scales, the dysmenorrhea patients also scored significantly higher than the healthy controls on the FRQ scales of paternal dominance and maternal abuse. In patients, the FEMD Emotional scale was negatively predicted by the Paternal Freedom Release scale and the FEMD functional scale was positively predicted by the Maternal Dominance scale.Inappropriate parental bonding or chronic traumatic attachment styles have respective relationships with the functional and emotional disturbances experienced by the primary dysmenorrhea patients."} +{"text": "Due to the imminent growth of the world population, shortage of protein sources for human consumption will arise in the near future. Alternative and sustainable protein sources like insects, such as mealworm, are now being explored for the production of food and feed. Before novel products can be launched on the market the assessment of food safety is vital. One of the key aspects of food safety is the risk of the development of food allergy. Food allergies specifically have a significant impact on the quality of life of allergic patients and their daily functioning, and they may even be life-threatening. TNO together with the University Medical Center Utrecht (UMCU) developed a risk assessment strategy to assess the allergenicity of novel proteins, in accordance to the European Food Safety Authority (EFSA) guidelines to assess the safety of Genetic modified organisms. Using this strategy the safety of mealworm (Tenebrio molitor L.) proteins for use in human consumption was assessed. In this study, 15 shrimp allergic patients were tested on their allergic reaction to mealworm proteins. For this purpose different in vivo (skin prick tests) and in vitro tests (immune blot and basophil activation) were used. In all tests a positive reaction to mealworm extracts was seen. This indicates that shrimp allergic patients are at risk for developing allergic reactions when consuming food containing mealworm proteins. Double blind placebo controlled food challenge will have to confirm this in the coming months."} +{"text": "This paper analyzed the relationship between some indicators of reproductive history and body fatness in relation to the timing of the menopause transition in Hungarian women using survival analysis after controlling for birth cohort.Data on menstruation and reproductive history were collected during the personal interviews in a sample of 1932 women (aged 35+\u2009years). Menarcheal age, the length of menstrual cycles and menstrual bleedings, regularity of menstrual cycles, number of gestations, lactation, the ever use of contraceptives, menopausal status and age at menopause were used as indicators of reproductive history. The body fat fraction was estimated by bioelectrical impedance analysis. 
Body fatness was also estimated by dividing women into obese and non-obese categories (considering body mass index and waist-to-hip ratio). Survival analyses were used to analyze the relationship between the indicators of reproductive history and body fatness during the menopausal transition.Only the menarcheal age among the investigated reproductive life characteristics showed secular changes in the studied decades in Hungary; the mean age at menarche decreased by approximately 2.5\u00a0months per decade from the 1920s until the 1970s. Ever use of hormonal contraceptives, a relatively long cycle length in the perimenopausal transition and higher parity were all related with lower risk of early menopause. Later menarcheal age, normal length of menstrual cycle or bleeding in the climacterium, irregular bleeding pattern and postmenopausal status were associated with a higher amount of body fatness, while never use of contraceptives, regular menstruation, postmenopausal status and relatively early menopause were associated with a higher risk of abdominal obesity.This report confirms that age of menarche is not significantly predictive of age at menopause but prior use of oral contraceptives, longer mean cycle length and smaller number of gestations all are. In addition, age of menarche, irregular bleeding pattern before the climacterium, length of menstrual cycles and bleedings during the climacterium and postmenopausal status were associated with obesity during the climacterium. The age at menopause is a distinctive milestone in female reproductive life since it indicates not only the increased risk of morbidity and premature mortality due to the decreased level of sex hormone levels but also a biological marker of overall ageing and general health status. Menopause is universal and shows little variation in the timing of its occurrence (~50\u00a0years) across contemporary human populations and has remained mainly constant over the last 100\u00a0years in developed societies .Previous studies on the onset of natural menopause have revealed that its variance is determined mainly by the interactions of multiple genes through control of the ageing processes of the neuroendocrine system , 5, and The main purposes of this study were to analyze (1) the relationship between the age at menopause and the other characteristics of reproductive life in Hungarian women and (2) the relationships between body fatness at the time of menopause and reproductive life characteristics. The secular changes of the studied reproductive variables were analyzed only in those women who never used hormonal contraceptives, since the use of this kind of contraceptives can significantly modify the natural characteristics of menstrual cycles.Since it was confirmed by many epidemiological surveys that theAccording to our hypothesis, the age at menopause is influenced not only by extrinsic factors (e.g. 
by socioeconomic factors) but also by the factors of the reproductive life; therefore, the relationship between the age at menopause and the other main characteristics of reproductive life was presupposed.The possible advancement in timing of puberty was hypothesized to be accompanied by the earlier onset of menopause due to the positive secular socioeconomic changes and the earlier onset of reproductive life and a decrease in the number of gestations per women over recent decades.Since the transitions between prereproductive, reproductive and postreproductive stages are accompanied not only by the changing levels of female sex hormones but also by significant changes of metabolic status and body structure, a hypothesis of a relation between the risk of obesity/abdominal obesity and the characteristics of reproductive life was proposed as well. The characteristics of reproductive life that can be related to higher level of female sex hormones were supposed to increase the risk of obesity in the menopausal transition because these reproductive life parameters were considered as indicators of a lifelong higher level of sex hormones.By considering the main purposes of the study and the results of the former menopause studies, the following hypotheses were posed:Subject recruitment was done by using multilevel multistage sampling so as to obtain subjects that represented the diverse types of settlements as well as the age distribution of Hungarian women [The main purposes of the Hungarian Menopause Study were (1) to analyze the nature of menopausal transition in Hungary; (2) to analyze the manner in which the onset of menopause and the length of menopausal transition are affected by the nutritional status and body composition, lifestyle and socioeconomic background of women; (3) to estimate the influence of genetic and endocrinological factors on reproductive ageing process by comparing autophagic activity and the female sex hormone levels in women being in different menopausal status; and (4) to identify the most important intrinsic and extrinsic risk factors that can lead to early or late menopause.Study volunteers included 1932 Hungarian women aged between 25 and 94\u00a0years , number of pregnancies , number of live births with or without lactation, age at which any irregularity of menstrual cycle length commenced, age at the last menstrual cycle and age at menopause were collected during the personal interviews. The questionnaires were constructed by considering the WHO and Stages of Reproductive Aging Workshop (STRAW) recommendations on collecting data on reproductive life characteristics in women , 13.Subjects were divided into four groups of premenopausal, early and late perimenopausal and postmenopausal on the basis of their menstrual cycle characteristics by considering the WHO and STRAW recommendations , 13 using the WHO cut-off points of BMI [The body fat fraction was estimated by bioelectrical impedance analysis . Subjects were divided into nutritional status categories were used to analyze the data. Initially, analyses were undertaken using only one independent variable and those that were significant were entered into a multivariate analysis . The omnibus test was used for testing the overall fit of Cox regression models.The statistical analyses were done by using SPSS v. 20. Hypotheses were tested at the 5\u00a0% level of random error.All subjects were asked to give their written informed consent to participate in the study. 
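The survival analysis described above enters reproductive-history covariates into Cox proportional-hazards models of age at menopause. A hedged sketch using the lifelines package is shown below; the column names and the toy data are hypothetical, and women who had not yet reached menopause enter as censored observations.

```python
# Cox proportional-hazards model of age at menopause with reproductive
# covariates, using lifelines. Column names and data are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "age_at_menopause_or_censoring": [48.0, 52.5, 51.0, 47.0, 55.0, 50.0, 49.5, 53.0],
    "menopause_observed":            [1, 1, 0, 1, 1, 0, 1, 1],   # 0 = still menstruating (censored)
    "ever_used_oral_contraceptives": [1, 1, 0, 0, 1, 0, 1, 0],
    "number_of_gestations":          [2, 0, 3, 1, 2, 4, 1, 3],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="age_at_menopause_or_censoring",
        event_col="menopause_observed")
cph.print_summary()   # hazard ratios for the reproductive covariates
```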
All attendants were provided by detailed information on the main purposes of the study and on all examinations before their approval. The participation was voluntary and anonym in the study. The research objectives, the research methodology and the questionnaires were approved by the National Human Research Ethics Committee (108/2011) and the Hungarian Scientific Research Fund (EIK-1001/2011).The main limitations of the study were the cross-sectional study design and the retrospective data collection . The personal interviews helped to diminish this methodological limitation of the study. The menopausal status and the reproductive life characteristics, especially the characteristics of the menstrual cycles, can be estimated more accurately by longitudinally. Moreover, the menopausal status can be estimated more accurately by following the complete STRAW staging system for ovarian ageing including menstrual and qualitative hormonal criteria . AlthougMenarcheal age showed a significant positive secular change between the 1920s and 1980s: the mean age of menarche decreased by birth cohorts from the age of ~14.0\u00a0years to 12.7\u00a0years the ever use of contraceptives, long menstrual cycles and higher number of gestations related to lower risk of earlier menopause, while (2) later menarcheal age, normal length of menstrual cycle or bleeding in the climacterium, irregular bleeding pattern and the postmenopausal status were associated with higher prevalence of obesity and (3) the never use of contraceptives, the regular menstruation, the postmenopausal status and relatively earlier age at menopause increased the risk of abdominal obesity in the studied sample. Overall, the characteristics of reproductive life were found to have significant influence on the onset of the menopausal transition. Moreover, the presented results suggest that the risk of obesity is also related to the reproductive life parameters.Contrary to expectations, the results from this study suggest there is no relationship between age of menarche and age at menopause. The results of the present study add to a growing body of literature showing that prior use of oral contraceptives, longer mean cycle length and smaller number of gestations are all associated with later age of menopause.Our results also confirm those from previous studies showing that the later age of menarche, the irregular bleeding pattern before climacterium, the normal length of menstrual cycles and menstrual bleedings in the climacterium and the postmenopausal status were associated with a higher risk of obesity in climacterium.By considering the relationship between the reproductive life characteristics and body fatness distribution in the menopausal transition, these findings confirm\u2014beyond the never use of hormonal contraceptives\u2014the formerly evidenced association that the menopausal status was related to abdominal obesity.Recent studies have shown that variation in age at menopause and the other biological characteristics of the menopausal transition, e.g. body structural changes, are associated with several factors, such as genetic, reproductive, socio-demographic and certain behavioural, lifestyle influences , 5, 57."} +{"text": "I read with interest the excellent article on sudden cardiac death risk stratification with electrocardiographic (ECG) indices by Dr. Gimeno-Blanes and his colleagues in the recent issue of Frontiers in Physiology . 
As the authors pointed out, the Modified Moving Average (MMA) method differs from the spectral method; while there are commonalities, there are critical differences in the approach and in the clinical evidence supporting the two methods. The MMA method employs the noise-rejection principle of recursive averaging. The algorithm continuously streams odd and even beats into separate bins and creates averaged complexes for each bin. These complexes are then superimposed, and the maximum difference between the odd and even complexes at any point within the JT segment is identified every 15 s and reported as the TWA value. The highest TWA levels within the entire 24-h period are recorded for each subject and used for analysis of risk for sudden cardiac death or cardiovascular mortality. The established TWA cut-point of 47 \u03bcV indicates a positive TWA test. Inspection of the TWA template permits verification of the waveform and provides opportunities for insights into the pathophysiology, as distinct patterns are associated with differing disease states. Samples of the rhythm strip and QRS-aligned template are provided. We have studied this parameter in our laboratory for more than two decades. The MMA methodology has run the complete gamut from development through clearance by the United States FDA to a recent positive coverage decision by the Centers for Medicare and Medicaid Services; the technology was developed at Beth Israel Deaconess Medical Center and licensed to GE Healthcare, Inc."} +{"text": "Historically, the semen analysis results are the foundation on which clinicians base their decision for treatment for a given couple. It has, however, become clear that semen parameters are insufficient for the determination of male fertility potential. A continuous search for better markers of male fertility has led to an increased focus on testing of sperm chromatin integrity in fertility workup and ART. Sperm DNA damage is a useful biomarker for male infertility diagnosis and prediction of assisted reproduction outcomes. It is associated with reduced fertilization rates, embryo quality and pregnancy rates, and higher rates of spontaneous miscarriage and childhood diseases. Successful fertilization of the human oocyte by spermatozoa with damaged DNA may lead to paternal transmission of defective genetic material with adverse consequences for embryo development. Sperm DNA fragmentation has been shown to be an independent predictor of success in couples undergoing intrauterine insemination (IUI). The speaker will discuss the conflicting reports on the role of sperm DNA fragmentation in relation to fertilization, pre-embryo development and pregnancy outcome in assisted reproduction. The speaker will provide a summary of the most recent studies in the literature in the field of sperm DNA damage in the clinical setting. He will discuss current laboratory tests and the accumulating body of knowledge concerning the relationship between sperm DNA damage and clinical outcomes. Next he will talk about the pros and cons and clinical applicability of the current sperm DNA fragmentation assays and the biological significance of sperm chromatin damage in the male germ line. Finally, as sperm DNA damage is often the result of increased oxidative stress in the male reproductive tract, the potential contribution of antioxidant therapy in the clinical management of this condition will be debated."} +{"text": "The progressive increase in the incidence of cow's milk protein allergy (CMPA) over the past decades has required primary prevention strategies for children at high risk.
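Looping back to the MMA computation summarized at the start of the preceding letter, the following is a minimal, illustrative Python sketch of the odd/even recursive averaging and the JT-segment difference; the beat alignment step, the update fraction of 1/8 and the array layout are simplifying assumptions made for illustration, not the cleared commercial implementation.

```python
# Illustrative sketch of Modified Moving Average (MMA) T-wave alternans:
# average odd and even beats in separate streams, then take the largest
# odd-even difference within the JT segment of each 15-s window.
import numpy as np

TWA_CUTPOINT_UV = 47.0  # published cut-point for a positive TWA test

def mma_twa(beats, jt_slice, update_factor=1 / 8):
    """beats: 2D array (n_beats, samples_per_beat) of aligned complexes in microvolts.
    jt_slice: slice covering the JT segment within each beat.
    Returns the TWA estimate for this window (max odd-even difference in the JT segment)."""
    avg_even = beats[0].astype(float)
    avg_odd = beats[1].astype(float)
    for i, beat in enumerate(beats[2:], start=2):
        if i % 2 == 0:  # recursive (modified moving) average, even-beat stream
            avg_even += update_factor * (beat - avg_even)
        else:           # odd-beat stream
            avg_odd += update_factor * (beat - avg_odd)
    return float(np.max(np.abs(avg_even[jt_slice] - avg_odd[jt_slice])))

def max_twa_over_recording(windows, jt_slice):
    """windows: iterable of per-15-s beat arrays; reports the highest TWA value."""
    values = [mma_twa(w, jt_slice) for w in windows]
    peak = max(values)
    return peak, peak >= TWA_CUTPOINT_UV  # (TWA in microvolts, positive test?)
```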
Evidence of the role of gut microbiota in promoting the maturation of the immune system during early life encouraged supplementing the diet with probiotics in order to facilitate tolerance and delay or prevent sensitization. The efficacy of this strategy has not been consistently proven. Breastf"} +{"text": "The hopes placed in the new treatment regimens for tuberculosis (TB) that followed the introduction of rifampicin were partially overshadowed by frequently recorded side effects, which can force temporary or definitive abandonment of this tuberculostatic. The patient presented here illustrates a relapse of TB infection in the context of significant comorbidities: chronic renal failure at the uremic stage, in a chronic hemodialysis program; primary hyperuricemia with urate nephropathy; and stage II arterial hypertension with very high cardiovascular risk. In the absence of definite clinical, biological and morphological criteria, the diagnosis of drug-induced hepatopathy relies predominantly on the causal relationship between drug administration and the occurrence of the adverse event. Non-hepatitic jaundice occurred after 2 weeks of daily tuberculosis treatment, suggesting interference of rifampicin with the conjugation of bilirubin and with the excretion of conjugated bilirubin into the bile. The peculiarity of the case: in this patient, the fundamental mechanism behind the cholestatic syndrome was impairment of the active transport of bilirubin through the liver cell, without reflux of bile constituents into the circulation. Because of their frequency and harmfulness, adverse drug reactions have opened a new chapter of nosology in modern medicine. Adverse effects of tuberculostatic drugs negatively affect both the management and the dynamics of TB infection. Written informed consent was obtained from the patient for publication of this Case report and any accompanying images. A copy of the written consent is available for review by the Editor of this journal."} +{"text": "Nodular thyroid disease affects thousands of people in Poland. Tumors of the thyroid account for about 1% of all human cancers. Thyroidectomy is the most common surgical operation for endocrine tumors. Operative therapy for benign thyroid nodules is recommended for progressive increase in nodule size, substernal extension, compressive symptoms of the neck, the development of thyrotoxicosis and patient preference. In Poland, thyroidectomy is the fifth most frequent surgical procedure, comprising about 25,000 operations yearly. Reducing surgical injury while retaining the safety and radical nature of the intervention forces the surgeon to work in a relatively small operating field. Electric devices that achieve full and lasting haemostasis during thyroidectomy are supplanting traditional surgical methods with no impact on the incidence of perioperative complications, while at the same time shortening the duration of the procedure. The haemostatic effect is associated with the generation of heat which, apart from the intended result, may bring about thermal tissue injury. During the surgical procedure it is therefore important to determine the thermal spread around the active tip of electric devices in the operating field during thyroidectomy, and the safe temperature range during the operation to protect important structures of the neck.
The mean safe distance of the active tip of an electric device from important anatomic structures is at least 5 mm and depends on the device type, the duration of activation and its power settings. All the modern techniques of vessel sealing are associated with the generation of heat and its spherical spread, which causes thermal injury to the surrounding tissues. Their mode of operation, through structural changes in collagen and elastin among other mechanisms, leads to a durable connection of the sealed vessel walls and tissue structures. These systems enable safe sealing of vessels of up to 7 mm in diameter. In conclusion: in the thyroidectomy techniques analyzed by the author, it is recommended to replace electric devices with ligatures, clips or human fibrinogen when working near the laryngeal nerves, the parathyroid glands and the trachea. The decision to change the method of haemostasis in the vicinity of crucial structures has been left to the surgeon."} +{"text": "This paper is a country case study for the Universal Health Coverage Collection, organized by WHO. Gemini Mtei and colleagues illustrate progress towards UHC and its monitoring and evaluation in Tanzania. Please see later in the article for the Editors' Summary. This paper is part of the PLOS Universal Health Coverage Collection. This is the summary of the Tanzania country case study; the full paper is available as a Supporting Information file. Universal health coverage (UHC) is defined as access to needed health services for all and protection against financial risks arising from paying for health services. Immediately after independence in 1961, Tanzania adopted a free health care policy that lasted until the early 1990s, when user fees were reintroduced, accompanied by exemptions and waivers for the poor. The assessment of progress in financial protection used data from the National Household Budget Survey 2007, the Strategies for Health Insurance for Equities in Less Developed Countries (SHIELD) survey 2008, and the National Panel Survey (NPS) 2010/2011. Access indicators were derived from the analytical report prepared for the medium term review of the health sector strategic plan 2009\u20132015. Indicators were selected on the basis of availability of data, their priority in the health sector strategic plans, and the degree to which the indicator addresses a critical health system problem in Tanzania. The study uses the framework presented in the full paper. Out-of-pocket (OOP) payments account for about 2% of people's income. About 2% of the population incurs catastrophic health care expenditures and 1% becomes impoverished because of OOP payments. Although the observed incidences of catastrophic expenditure and impoverishment are still low in Tanzania compared to other low- and middle-income countries, the burden of OOP payments is significantly larger among the poorest segment of the population, as indicated by the negative Kakwani index.
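As background to the measure just mentioned: the Kakwani index is a standard progressivity statistic rather than anything specific to this case study. A commonly used formulation (assumed here) is

\[
\pi_K = C_{\mathrm{OOP}} - G_{\mathrm{ATP}},
\]

where \(C_{\mathrm{OOP}}\) is the concentration index of out-of-pocket payments with households ranked by ability to pay, and \(G_{\mathrm{ATP}}\) is the Gini coefficient of ability to pay. A negative \(\pi_K\) indicates that OOP payments are regressive, taking a proportionally larger share of resources from poorer households, which is consistent with the statement above.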
Although the proportion of households owning insecticide-treated nets is large, malaria prevalence among outpatients is still high at 34%."} +{"text": "Periodontal disease, a bacterially mediated inflammatory disease of the gingiva and the adjacent periodontal attachment apparatus, represents, after dental caries, the leading cause of tooth loss among adults in developed countries due to the destruction of the periodontal ligament and the loss of the adjacent supporting bone, the tissues which support the teeth. An approach addressing common risk factors is more rational than one that is disease specific, and a role for serotonin (5-hydroxytryptamine, 5-HT) has been reported in depression studies. Alterations of the hypothalamic-pituitary-adrenal axis further determine cortisol and adrenal disturbances, as well as immune dysfunction and excessive secretion of proinflammatory cytokines. Poor oral hygiene and halitosis are frequent characteristics of patients with periodontal disease (Morita and Wang). Periodontal disease is also one of the leading causes of edentulousness due to the inflammatory destruction of the tooth supporting tissues: the periodontal ligament and the alveolar bone (Kassebaum et al.). Finally, periodontal disease may contribute to the onset of depression through different pathways. An interdisciplinary approach in psychoimmunology and periodontology has been used to highlight the biological and psychosocial mechanisms and mediators of the depression and periodontitis connection, in order to call attention to potential new therapeutic strategies for both depressed individuals and periodontal disease patients. The author confirms being the sole contributor of this work and approved it for publication. The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. The reviewer PV and handling Editor declared a current collaboration, and the handling Editor states that the process nevertheless met the standards of a fair and objective review."} +{"text": "Synthetic biology is an ever-expanding field in science, also encompassing the research area of fungal natural product (NP) discovery and production. Until now, different aspects of synthetic biology have been covered in fungal NP studies, from the manipulation of different regulatory elements and the heterologous expression of biosynthetic pathways to the engineering of different multidomain biosynthetic enzymes such as polyketide synthases or non-ribosomal peptide synthetases. The following review will cover some of the exemplary studies of synthetic biology in filamentous fungi, showing the capacity of these eukaryotes to be used as model organisms in the field. From the vast array of different NPs produced to the ease of genetic manipulation, filamentous fungi have proven to be an invaluable source for the further development of synthetic biology tools. A frequently used heterologous host is Saccharomyces cerevisiae, because of its vast toolkit for genomic manipulation. In one study, a regulatory element was exchanged for the pXyl xylose-inducible promoter in the fungus, resulting in the activation of a silent gene cluster. The histone acetyltransferase (HAT) GcnE, part of the Saga/Ada complex, was also found to play a role in the regulation of different NPs, and in a further study it was shown that when specific amino acids on the histone tail of H3 were replaced, mimicking acetylation or non-acetylation, the production of different NPs was affected. Another strategy combines a well-characterized native promoter with overexpression of the corresponding transcription factor in the native host.
The native promoter was then used with an unknown PKS that was found to produce the antituberculosis agent pyrrolocin, thus creating a fungal expression system in which the native promoter and transcriptional activator are used together, which is advantageous. In another example, 6-MSA (6-methylsalicylic acid) synthase, which is a PKS, was coexpressed with the A. nidulans phosphopantetheinyl transferase (PPTase) gene and resulted in yields of 6-MSA as high as 2.2 g/L. Up until now, much work has been done to understand the mechanisms of these enzymes in both fungi and bacteria. Besides the work on NR-PKS there has also been a lot of work on highly reducing PKSs (HR-PKS) in filamentous fungi. These PKSs are also of interest because they come in different modes of reduction, leading to a wide range of chemical diversity. Early studies focused on the understanding of these complex enzymes and especially on the PKS responsible for the statin lovastatin. Synthetic biology for the discovery and understanding of fungal NPs is an ever-evolving field. With the advancement of the different techniques presented herein, the possibilities are endless for the production of novel/bioactive NPs, particularly with the rapid development of genome editing and genetic manipulation tools such as the bacterially derived CRISPR-Cas9 system. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Drosophila dopamine transporter and the engineered protein, LeuBAT. The comparison reveals that careful computational modeling combined with experimental data can be utilized to predict binding of molecules to proteins that agree very well with crystal structures. Understanding of drug binding to the human biogenic amine transporters (BATs) is essential to explain the mechanism of action of these pharmaceuticals but more importantly to be able to develop new and improved compounds to be used in the treatment of depression or drug addiction. Until recently no high resolution structure of the BATs was available and homology modeling was a necessity. Various studies have revealed experimentally validated binding modes of numerous ligands to the BATs using homology modeling. Here we examine and discuss the similarities between the binding models of substrates, antidepressants, psychostimulants, and mazindol in homology models of the human BATs and the recently published crystal structures of the Drosophila dopamine transporter and the engineered protein, LeuBAT. The comparison reveals that careful computational modeling combined with experimental data can be utilized to predict binding of molecules to proteins that agrees very well with crystal structures. The human biogenic amine transporters (BATs) represent important drug targets for the treatment of many psychiatric diseases such as depression, anxiety, obesity, drug abuse, obsessive compulsive disorder, attention deficit hyperactive disorder, and schizophrenia. The crystal structure of the bacterial homolog from Aquifex aeolicus, LeuT, was published, and more recently crystal structures have been reported with D-amphetamine, (+)-methamphetamine, cocaine and the cocaine analog RTI-55 as well as SNRIs, NET-specific reuptake inhibitors (NRIs) and SSRIs bound. For the first time it is accordingly possible to directly compare the binding of substrates to that of different types of inhibitors in a DAT structure. Importantly, in May 2015 an arsenal of new crystal structures of dDAT with various ligands bound were published.
These nSeveral models have been published describing the binding of substrates, antidepressants, psychostimulants and mazindol to either of the human BATs using homology models constructed based on the structure of the bacterial homolog LeuT .Herein, we compare the binding of drugs to dDAT and LeuBThe structure of the dDAT protein compared to a homology model of the human DAT previously published is shownThe homology models of hDAT, hNET, and hSERT are compared with the dDAT crystal structure by alignment of the central binding site residues Figure . For theD-amphetamine and (+)-methamphetamine bound derived compounds. The binding of PP and an analog has been studies computationally using homology models of hDAT and hSERT . The recne bound . In Figune bound . As can ne bound . The binne bound . This co+ group, most likely caused by subtle differences of the phenylalanine within the aromatic lid. The overall location of cocaine in The binding of cocaine has previously been studied through homology modeling both of hDAT and hSERS-citalopram was previously biochemically validated to bind in the central S1 site of hSERT . A large number of published homology models of the human BATs have beeWe observe that the early models of the human BATs constructed based on the bacterial homolog LeuT are in excellent agreement with the subsequent crystal structures, but importantly to note is that these predictions require careful selection of ligand binding modes combined with the consideration of more than top docking poses. This provides us with great confidence in the ability to use extensive modeling combined with experimental validations to provide initial insight to drug binding to proteins. Although the computational docking models are able to predict the binding of compounds to the BATs, a substantial limitation of the static models is the inability to predict the difference in function between the drugs. To understand the function, non-static methods such as molecular dynamics simulations or electThe dDAT structure has been seen to possess a pharmacology profile resembling hNET more than dDAT which shThe authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Bacillus thuringiensis and apparition of resistance of the codling moth Cydia pomonella to the C. pomonella granulovirus have, for example, been described. In contrast with the situation for pests, the durability of biological control of plant diseases has hardly been studied and no scientific reports proving the loss of efficiency of biological control agents against plant pathogens in practice has been published so far. Knowledge concerning the possible erosion of effectiveness of biological control is essential to ensure a durable efficacy of biological control agents on target plant pathogens. This knowledge will result in identifying risk factors that can foster the selection of strains of plant pathogens resistant to biological control agents. It will also result in identifying types of biological control agents with lower risk of efficacy loss, i.e., modes of action of biological control agents that does not favor the selection of resistant isolates in natural populations of plant pathogens. 
An analysis of the scientific literature was then conducted to assess the potential for plant pathogens to become resistant to biological control agents.The durability of a control method for plant protection is defined as the persistence of its efficacy in space and time. It depends on (i) the selection pressure exerted by it on populations of plant pathogens and (ii) on the capacity of these pathogens to adapt to the control method. Erosion of effectiveness of conventional plant protection methods has been widely studied in the past. For example, apparition of resistance to chemical pesticides in plant pathogens or pests has been extensively documented. The durability of biological control has often been assumed to be higher than that of chemical control. Results concerning pest management in agricultural systems have shown that this assumption may not always be justified. Resistance of various pests to one or several toxins of The durability of a control method for plant protection is defined as the persistence of its efficacy in space and time. Erosion of effectiveness of conventional plant protection methods has been widely studied in the past. The durability of chemical control has for instance been studied because of the frequent and recurrent apparition of resistance to fungicides in major plant pathogenic fungal populations . The breBacillus thuringiensis (Bt) has been described shortly after the market approval of products based on various strains of this bacterium. McGaughey was the first to observe resistance in the field to Bt toxins in the Indian meal moth Plodia interpunctella, a major post-harvest pest of grain . It may also depend on the specific mode of action of biocontrol agents. Various modes of action are involved in the protective effect of biocontrol agents against plant pathogens. Although the number of studies done on this subject is important, knowledge of the precise mode of action of biocontrol agents is still partial. However, it is generally considered that there are three main ways for a biocontrol agent to control a plant pathogen : first, The main objective of this review is to assess the potential for plant pathogens to become resistant to biocontrol agents. To this end, an analysis of the scientific literature was conducted and scientific papers reporting the diversity of efficacy of biocontrol agents toward plant pathogens or those describing the ability of plant pathogens to produce natural mutants with reduced susceptibility under the selection pressure exerted by biocontrol agents were analyzed. The link between a potential loss of efficacy of a biocontrol agent and its mode of action was carefully explored.Identifying publications focused on this topic proved to be a challenging task. A survey of the Web of Science database between 1973 and the end of 2014, using the keyword combination [ AND ((pathogen OR disease) AND plant)], yielded 7872 references on biological control against plant diseases. Entering keywords describing the diversity of efficacy of biocontrol agents toward plant pathogens, i.e., [(divers* OR variab* OR variation) AND (resist* OR toleran* OR susceptib* OR sensitiv* OR insensitiv* OR defense)], allowed to refine this survey to 593 references. However, the analysis of this subset yielded only six relevant references. 
A second subset was generated from the initial 7872 references by entering keywords describing the durability of efficacy of biocontrol agents toward plant pathogens, i.e., (durability OR durable OR sustainable OR sustainability). Among the 274 references obtained, only one was relevant. Efforts to improve the keyword combinations failed to yield any additional references, a situation presumably reflecting the scarcity of studies specifically dedicated to this topic. The set of seven references was then complemented with studies including a comparison of biocontrol efficacy for two or more strains of a given plant pathogen. Attempts to automate this search with keywords failed, probably due to the fact that such comparisons were not the focus of these studies. Eventually, additional references were obtained through direct consultation with scientists implicated in biocontrol of plant pathogens. It is thus likely that the final set of 30 references analyzed in the present review may not be exhaustive, but it already provided quite illustrative examples and 2,4-diacetylphloroglucinol , two compounds produced by antagonistic strains of fluorescent Pseudomonas spp. present in the wheat rhizosphere was observed, with isolates of some anastomosis groups being less sensitive than others , P. medicaginis being less sensitive than P. aphanidermatum and P. torulosum differed considerably in their sensitivity to 2,4-DAPG. The difference of susceptibility among Pythium species to various natural toxic compounds produced by biocontrol agents and the fact that several of these species could occur at the same time in a similar specific place may impact the success or at least the regularity of the effectiveness of biological control conferred by these antibiotic-producing bacterial strains.This diversity of sensitivity to antimicrobial compounds produced by biocontrol agents was also observed among different anastomosis groups within a species or different species of plant pathogens that could occur and can infect plants at the same time. For instance, variation in sensitivity among 18 isolates of n others . This waensitive , for kanorulosum , for the species , and for species . One strAgrobacterium tumefaciens, the causal agent of crown gall on many dicotyledonous plants, by biocontrol agent A. rhizogenes strain K84, which produces the antibiotic agrocin 84 that hyperparasitizes the fungus Cryphonectria parasitica and reduces its pathogenicity on chestnut trees, resulting in hypovirulent isolates of the fungus , the protective efficacy of B. subtilis QST713 was found to range from 40 to 86% on tomato leaves and from 0 to 80% on lettuce leaves , which was shown to control powdery mildew of barley mainly through induced resistance and especially by enhancing expression of leaf-specific thionin in leaves (Podosphaera xanthii (syn. Sphaerotheca fuliginea) and on leaf disks for 116 isolates of Pseudoperonospora cubensis revealed limited diversity among isolates of either species sachalinensis, whose mode of action is generally associated with plant-induced resistance . The same results were observed for all five strains of B. cinerea, regardless of their initial phenotype of sensitivity to the antibiotic in an experimental evolution study on melon leaves treated with an extract from roots of Chinese rhubarb was able to rapidly adapt to the effect of the antibiotic-producing biocontrol agents tested. 
However, in the case of the pyrrolnitrin-producing biocontrol agent, a detrimental effect for the plant pathogen of the resistance to pyrrolnitrin would limit the risk of complete loss of efficacy. Based on the study with physcion, it is likely that this plant extract will have a low to moderate risk for resistance development of powdery mildew and downy mildew of cucurbits. Additional studies with other biocontrol agents having different modes of action would be necessary to infer a possible link between the durability of efficacy of a biocontrol agent and its mode of action. Similarly, additional studies with other plant pathogens would allow to generate hypotheses about a possible link between the durability of biological control and specific traits related to the plant pathogen.To our knowledge, the three experimental evolution studies described here are the only examples illustrating the potential of plant pathogens to adapt to the effect of biocontrol agents. In two of these studies, the fungal pathogen was also shown to provide resistance to toxins produced by antagonistic bacteria P. fluorescens and Pantoea agglomerans , or MSF (major facilitator superfamily) transporters. These membrane proteins are able to transport a wide range of compounds and can evacuate outside of the cells diverse toxic metabolites. The tolerance conferred by these transporters is quite low but it is significant because it provides time for pathogens to activate some mechanisms of detoxification . ABC traudomonas . Severaludomonas . Multidrudomonas . Ajouz elomerans . TherefoMycosphaerella graminicola to a sublethal dose of 1-hydroxyphenazine, an antibiotic produced by the biocontrol bacterium Pseudomonas, led to increased production by the fungus of catalases, peroxidases, and superoxide-dismutases and an increase in melanin synthesis, allowing the degradation of the antibiotic and the protection of the fungus , that still produces agrocin 84 but lacks the region necessary for the transfer of the plasmid pAgK84. The judicious use of biocontrol agents and particularly of the new strain K1026 should minimize the risk of pAgK84 transfer into pathogenic Agrobacterium strains and thus help to preserve the effectiveness of A. radiobacter as a biocontrol agent against crown gall.Finally, natural plasmid transfer between bacteria was also shown to provide a possible mechanism for plant pathogens to become resistant to biocontrol agents. This was the case for example for the soilborne bacterium d pAgK84 . The resol agent . To miniFusarium udum to maintain a slow growth and the production of conidia in presence of the hyperparasitic bacterium Bacillus subtilis AF1 . For example, ABC transporter activity has been implicated in the tolerance of sativum , of B. cof grape and of Go tubers . SimilarGcABC-G1 . This alGcABC-G1 .Botrytis cinerea is for instance able to detoxify the alpha-tomatine, a phytoanticipin found in tomato plants involves the production of reactive oxygen species that leads to cell death and it is generally consider as a signal to induce the synthesis of antimicrobial compounds like phytoalexins. It usually prevents the fungal penetration into host tissue but it has been shown that HR facilitates the colonization of the plants by necrotroph fungal pathogens like rotiorum or Rhizoa solani . These pa solani .P. fluorescens 2\u201379 and Q72a80 to persist in the wheat rhizosphere levels of sensitivity to biocontrol agents, regardless of the complexity of their mode of action. 
Certain pathogens also have the potential to adapt in a few generation to the selection pressure exerted by biocontrol agents. Available information is sufficient to suggest that the assumption that durability of biological control is necessarily higher than that of chemical control may not always be justified. However, it is not sufficient to draw conclusions about the existence of specific traits related to the plant pathogen or related to the biocontrol agent that could explain the loss of effectiveness of a biocontrol agent in practice.There is much need for further studies on issues related to this topic. Among them, the establishment of baseline sensitivity would be helpful in future surveys of biocontrol agents. Such monitoring is carried out routinely for fungicide resistance in fungal plant pathogens world-wide . It is aIn addition to the population approach described above, experimental evolution studies are also needed to evaluate the ability of plant pathogens to evolve under the selection pressure exerted by a biocontrol agent. This approach will result in identifying risk factors that can foster the selection of strains of plant pathogens resistant to biocontrol agents. Various biocontrol agents and different plant pathogens must be tested in the future. This will result in identifying types of biocontrol agents with lower risk of efficacy loss, i.e., modes of action of biocontrol agents that does not favor the selection of resistant isolates in natural populations of plant pathogens. Even though all biocontrol agents should create selection pressure on target populations of plant pathogen, some modes of action may present a clear opportunity for pathogens to evolve resistance. For example, a mechanism involving antibiosis would, by analogy with the fungicides, be considered a high risk to be overcome, whereas a multiple or a more complex mode of action would indicate relatively low risk. This knowledge is essential to ensure a durable efficacy of biocontrol agents on target plant pathogens.Even if the data are too sparse to suggest general statement on the use of biocontrol agents in practice, this review highlights the necessity for careful management of their use once they become commercially available in order to avoid repeating the mistakes made with chemical fungicides. For instance, we should consider alternating or combining biocontrol agents with different mechanisms of action. In addition to possibly ensure the durability of biological control, the combination of several biocontrol agents have shown to improve the efficacy and reduce the variability of efficacy . The conSignificant research efforts are also needed to anticipate the potential failure of biological control and integrate durability concerns in the screening procedure of new biocontrol agents. According to the results mentioned in this review, caution should be applied when screening and selecting isolates of biocontrol agents, to ensure a wide representation of the targeted plant pathogen population. To test the efficacy of potential biocontrol agents or to screen for new biocontrol agents against plant diseases, most studies continue to use a single isolate or relatively few isolates. Innovative screening procedures based on the known mode of action have already been developed . 
We mustThe authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Introduction Health facilities assessments are an essential instrument for health system strengthening in low- and middle-income countries. These assessments are used to conduct health facility censuses to assess the capacity of the health system to deliver health care and to identify gaps in the coverage of health services. Despite the valuable role of these assessments, there are currently no minimum standards or frameworks for these tools.Methods We used a structured keyword search of the MEDLINE, EMBASE and HealthStar databases and searched the websites of the World Health Organization, the World Bank and the International Health Facilities Assessment Network to locate all available health facilities assessment tools intended for use in low- and middle-income countries. We parsed the various assessment tools to identify similarities between them, which we catalogued into a framework comprising 41 assessment domains.Results We identified 10 health facility assessment tools meeting our inclusion criteria, all of which were included in our analysis. We found substantial variation in the comprehensiveness of the included tools, with the assessments containing indicators in 13 to 33 (median: 25.5) of the 41 assessment domains included in our framework. None of the tools collected data on all 41 of the assessment domains we identified.Conclusions Not only do a large number of health facility assessment tools exist, but the data they collect and methods they employ are very different. This certainly limits the comparability of the data between different countries\u2019 health systems and probably creates blind spots that impede efforts to strengthen those systems. Agreement is needed on the essential elements of health facility assessments to guide the development of specific indicators and for refining existing instruments. A large number of health facility assessment tools currently exist, using different methodologies and indicators and limiting the comparability and comprehensiveness of health systems data.Consensus is needed on specific indicators for monitoring the capacity and functionality of health facilities in low- and middle-income countries.The need to move beyond individual health services and strengthen health systems has become a critical component of global public health and international development . CompareMost would agree that to improve population health, health services must be available, accessible, efficacious and used by the population. To achieve this, comprehensive interventions are needed that strengthen not only service delivery, but also the laws and policies that influence the functionality of a health system and the health-seeking behaviour of the population. To achieve those reforms, one needs a pragmatic assessment of where gaps or weaknesses exist within the system using rigorous and valid methodologies to determine the above-listed factors. 
Collecting data at the level of health facilities allows for a detailed assessment of the various components that function (or do not) at the level of service delivery, which is a useful level of analysis for identifying the weaknesses within a national health system health systems building blocks or healtThe fact that health facility assessments are often patchy or incomplete is problematic for achieving important health gains in LMIC. Major investments in initiatives to scale up health services necessarily rely on reliable, accurate and comprehensive information on both capacities and gaps in health services availability to prioritize interventions and ensure equitable access to care. To achieve this, there is a need to ensure the development of evidence-based policies and interventions to improve the performance of health systems in LMIC when necThis study asked whether the vagaries of health system assessments have a common root: the quality and comprehensiveness of health facility assessment tools. To answer this question, we performed a comparative analysis of the different tools that are currently used to assess the administrative and service delivery capabilities of health facilities in LMICs, charting their similarities and differences. We hypothesized that if a genuine consensus on health system strengthening existed, then the assessment tools used to assess the capabilities at the point of actual service delivery ought to be quite similar, allowing perhaps for differences of form but few if any differences of substance.The MEDLINE, EMBASE and HealthStar databases were searched for articles published in English using the keywords and search strategy described in Appendix 1, developed with the assistance of a medical librarian. To locate non\u2013peer-reviewed reports, a keyword search was conducted in the following databases: (1) the The inclusion criteria were the presence of an assessment tool, checklist or questionnaire that evaluated the availability of health resources or services at the health facility level. All questionnaires, regardless of the level of care evaluated , were included, consistent with WHO\u2019s definition of health systems as being inclusive of \u2018all organizations, people, and actions whose primary intent is to promote, restore or maintain health\u2019 . We exclThe analysis of these assessments was conducted using a health systems framework to contextualize them in the essential capacities of health systems in LMIC. We used the WHO\u2019s health systems building blocks as an organizing framework and treaData were extracted into an Excel database from all included reports using a thematic analysis of the health services included in the assessment tools by one reviewer (JWN). All the tools were analysed twice: the first time to compile a list of the assessment indicators present in each of the tools and the second time to evaluate which of these indicators were measured by each of the assessment tools. Each extracted indicator was categorized into a broader domain reflecting a group of health services. For instance, an indicator measuring the availability of surgical instruments was included under the domain \u2018surgery\u2019. Each of these domains was then mapped to the health systems building blocks contained in the health systems framework. Using the defined list of assessment domains and their indicators, a second reviewer screened 30% of the assessment tools to ensure consistency in the data extracted. 
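To make the extraction logic described above concrete, here is a minimal Python sketch of filing indicators under broader domains, mapping domains to health systems building blocks, and counting domain coverage per tool; the indicator and domain names are invented examples and do not reproduce the review's actual 41-domain framework.

```python
# Illustrative sketch of the thematic extraction described above: each
# indicator found in a tool is filed under a domain, each domain is mapped
# to a WHO building block, and per-tool domain coverage is counted.
from collections import defaultdict

DOMAIN_OF_INDICATOR = {
    "surgical instruments available": "surgery",
    "stethoscope available": "basic equipment",
    "malaria rapid test available": "laboratory services",
}
BUILDING_BLOCK_OF_DOMAIN = {
    "surgery": "service delivery",
    "basic equipment": "medical products and technologies",
    "laboratory services": "medical products and technologies",
}

def summarize(tool_indicators):
    """tool_indicators: dict mapping tool name -> list of indicator strings."""
    coverage = defaultdict(set)
    for tool, indicators in tool_indicators.items():
        for indicator in indicators:
            domain = DOMAIN_OF_INDICATOR.get(indicator)
            if domain is not None:
                coverage[tool].add(domain)
    # Number of distinct domains covered by each tool
    # (cf. the 13 to 33 of 41 domains reported in the review).
    return {tool: len(domains) for tool, domains in coverage.items()}
```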
Disagreements in data extraction were resolved through discussions among two reviewers until consensus was achieved.A recently published review of health facility assessment tools examined four of the assessments included in our study for the purpose of developing indicators of newborn care of the 41 assessment domains. The majority of the differences fell within the areas of health services delivery, with considerable variation in the types of health services assessed and with a preference towards assessments of services at the primary care or community level rather than secondary-level services such as surgery or intensive care. A summary of the assessment domains and tools is included in Eight of the included assessments contained some form of basic information concerning the organization, ownership and leadership of health facilities. These data consisted of basic descriptive data of the ownership of the facility, department heads and basic questions of how the facility was organized and run. While these data were frequently available, they were generally very basic indicators of ownership or leadership of the facility and provided little in the way of measuring the quality of leadership, which may well be beyond the scope of these tools.Only two assessments collected information on how the health facility was financed and only five collected information on whether user fees were charged. Countries where the health system is financed through governmental schemes may account for the absence of these questions in some assessments and only three collected data on the availability of emergency staff in-house 24 hours a day.Significant variation was uncovered in the assessment of diagnostic services, essential medicines and laboratory services. All of these domains have the potential for expansive lists of assessment criteria , although most used a selective sampling rather than an exhaustive list. For example, most assessments included indicators for basic equipments such as a stethoscope, blood pressure cuff and adult scale.Assessments of diagnostic services (including laboratory and diagnostic imaging) were frequently included, although again the specificity of these assessments varied. All 10 of the assessment tools evaluated the availability of some form of laboratory services, while only 7 assessed the availability of diagnostic imaging. These assessments ranged from indicators for specific analytical tests and equipment to general questions of capacity .Our results located two kinds of information and research of relevance to this building block: health information systems and clinical practice guidelines.Eight of the included assessments in our review contained assessments of different aspects of health information systems. Caseload data were the most frequently collected and were included in eight of the tools, while assessments of communicable disease surveillance systems were the second most frequently collected data in six of the tools. Other data collected were less consistent across the assessments, such as vital statistics, patient charts and vaccination activity.Seven of the included assessments contained indicators for assessing the availability of evidence-based guidelines for relevant health conditions. These guidelines were often integrated into assessments of service availability, assessing services for the availability of essential resources and equipment as well as relevant clinical practice guidelines. 
Five of the included tools assessed whether clinicians had received continuing medical education or training in specific areas, generally over the past two years.The delivery of health services is the point of contact between patients and the health care system, where diagnosis and treatment occur. Not surprisingly, service delivery accounts for the bulk of the data collected by the assessments included in our review, collected across a large number of different clinical domains. The range of clinical services evaluated are in some ways misleading as some tools included only single indicators of complex, yet poorly defined packages of services rather than specific measures .All of the assessment tools also included some form of basic structural assessments, such as the condition of walls, floors and heating/cooling systems. As a basic structure constitutes an integral component of a health facility, this is included as an element of the health services delivery building block. Another key infrastructure assessment, the hospital bed census, was included in six of the included assessment tools, representing a significant absence of basic structural information in these tools.Nutrition services were included in five assessments, including indicators for malnutrition screening services and therapeutic feeding. Environmental health services were less frequently included in assessment tools, with only three of the included assessments containing relevant indicators. One assessment included indicators for the availability of water and food inspection services, although this was not included in any others, nor does it seem a particularly relevant focus for health facilities. The other two assessments that included environmental health services were both related to malaria bed net distribution.Health system strengthening requires reliable, accurate and comparable data sources across the health system to identify gaps in coverage and to identify priority health needs. The monitoring and evaluation of the health system necessitate a focus on how inputs and processes contribute to outputs and their impact on relevant health indicators . The absence of any of these data sources results in incomplete information and unreliable assessments of priority areas , this momentum appears to have left much of the essential health systems data behind. The results of our literature review located only four publications that implemented a standardized assessment in peer-reviewed publications; this low number may be the result of our search strategy, but it appears that there is either little awareness or little use of the tools currently in existence. A likely explanation is that although there is extensive experience in using these assessments by donors and ministries of health, the evidence base for these tools appears to be severely lacking and appears to be based on a restricted set of preferences or priorities rather than on a rigorous evaluation of the essential functions of health facilities or health systems.Second, we noted a preference towards the evaluation of primary care services, with secondary and tertiary care being absent from many assessments, despite a need for these services in LMIC or the Alma Ata Declaration , which may also reflect donor or organizational priorities oriented towards specific kinds of programmes. 
This is an important finding, as these tools appear designed to assess the programmes that donors fund rather than initiatives that are led and monitored by the countries themselves.Only one of the tools included in our review (HeRAMS) made use of an explicit framework to guide the assessment of available health resources and services. While the HeRAMS framework is explicit, it also does not clearly link to a broader health systems framework. This is, in part, because of the nature of the tool\u2019s initial deployment in the Darfur region of Sudan, an area of ongoing conflict rather than stable development.This review provides a systematic catalogue of the broad domains employed in the assessment of health facilities. This catalogue should provide a foundation for the development of more detailed assessment indicators based on these domains and with the goal of making future assessments reflective of the essential characteristics of well-functioning health systems.A significant number of gaps were uncovered that would likely lead to an incomplete assessment of the administrative or health service delivery functions of health facilities, which should raise serious concerns. While some building blocks are perhaps easier to quantify than others , each is important for ensuring that health systems evolve comprehensively and not disproportionately. Furthermore, many of the included indicators were evaluated differently, leading to concerns not only of incomparability among the tools, but also a lack of agreement of how best to measure specific functions or services.Assessments of the leadership of a health facility, for example, often allow one to guess or infer other features of the facilities. A public hospital run by an international non-governmental organization may have access to different supply chains or human resources than a public hospital run by the ministry of health, for example. Combined with relevant geographic and population demographic information, this information allows for important analyses of equity in the distribution of health services and resources and the identification of underserved populations when health facilities and population data are compared , but rather to generate a framework of essential assessment domains, linked to health systems building blocks, that could be applied in the development of future assessments. To that end, we feel justified in excluding these service-specific assessments, in favour of this more comprehensive approach.Further studies that expand on this work could also consider including additional databases for their search, which may result in additional references being located in peer-reviewed journals and in the grey literature. This project\u2019s findings are representative of the search strategy employed and expanding on the databases or keywords may result in additional sources, such as those in languages other than English.Our approach utilized a broad health systems framework as the foundation for aligning health facilities assessment domains with a broader objective of health systems strengthening. In doing so, we have identified common domains that ought to be included as part of a health facilities census, which should guide the development of more specific assessment indicators to correspond to each of these domains similar to those service-specific tools mentioned above. 
Our review and recommendations fall short of prescribing particular indicators or assessment strategies; rather, we propose this as the next step in the evolution of these assessments: defining the indicators that best align with countries\u2019 packages of health services and the information needs of specialists working in these areas of health service delivery to ensure that detailed assessments of specialized health services are comparable among countries.Our results should be interpreted to recognize that different agencies have a desire to exert ownership of their own data collection tools and processes, structured in a way that makes sense for the delivery or support of their own programmes. Rather than proposing the development of one tool that should be universally applied, our results propose a broad framework to be populated with internationally accepted indicators and basic datasets that can be used to guide the development of these tools, thereby ensuring a more comprehensive and coherent approach.This review highlights a fundamental problem in the collection of health facilities and health services availability data: the absence of common assessment tools yields incomparable data making it difficult, if not impossible, to track progress towards increasing access to health services, globally. Our review found 10 different health facility assessment tools currently in use. Our comparative analysis of these tools revealed that there are significant gaps in the areas evaluated by many of them, often orienting their focus towards primary care rather than the broader health system.This review provides a framework in the form of 41 assessment domains linked to health systems building blocks that should guide the development of new health facilities assessment tools and the refinement of existing ones. Furthermore, these assessment domains provide a useful starting point for defining more detailed assessments that correspond to specialized health services. Future developments in this area should integrate existing specialized indicators into assessment tools to enhance the comparability of the data collected and to align these data with existing standards.No specific funding was obtained for this project. Jason Nickerson was supported by a University of Ottawa Admission Scholarship and an Ontario Ministry of Training, Colleges, and Universities Ontario Graduate Scholarship. Amir Attaran was supported by a Social Sciences and Humanities Research Council Canada Research Chair. Peter Tugwell was supported by a Canadian Institutes for Health Research Canada Research Chair.Conflict of interest statement. 
None declared."} +{"text": "Brazilian Bioethanol Science and Technology Laboratory (CTBE) one of the laboratories of the Brazilian Center of Research in Energy and Materials (CNPEM) has as one of its main goals contribute to the development of the process of obtaining second generation ethanol ( GII ) mainly from lignin cellulosic fractions of cane sugar.The To achieve this goal leads its own line of research, develop partnerships with other public and private institutions or acting as national laboratory offers its facilities of laboratories and process development pilot plant to researchers involved in projects to produce ethanol, energy and chemicals from lignocelluloses materials.The activities of CTBE are based on research and development line that integrates the current process of obtaining first-generation ethanol ( GI ) of the sugars extracted from cane sugar coupled with the second generation process of deconstruction of lignocelluloses fractions (bagasse and straw), enzymatic hydrolysis of cellulose and fermentation of pentose and hexoses.The production of ethanol GII although technically demonstrated, at the present time not achieved results that enable production at competitive costs.The critical barriers to overcome to achieve this goal are:\u2022 Pretreatment: efficient fractionation of lignocelluloses material for better recovery and hydrolysis of pentose and cellulignin.\u2022 Hydrolases: Obtaining a more efficient and productive hydrolases complex;\u2022 Enzymatic hydrolysis: optimization of hydrolysis of cellulose;\u2022 Ethanol from pentose: development of pentose fermentation to ethanol.The negative impact of these barriers on the process of obtaining second-generation ethanol will be discussed. The research lines proposed for overcome it and improve the performance of this process integrated with first generation ethanol will be presented."} +{"text": "Ameloblastic fibrodentinoma is a rare benign mixed odontogenic neoplasm usually occurring in the first two decades of life. It is more common in males and the most common site of occurrence is in the mandibular premolar molar area. This report presents a case of ameloblastic fibrodentinoma in a 12-year-old boy in the maxillary anterior region, a less common site for the occurrence of ameloblastic fibrodentinoma. A 12-year-old boy presented with a midline diastema in 11 and 21 region and a swelling in the palatal aspect of 11 and 12. Intraoral periapical radiograph showed the presence of rarefaction of bone on the mesial aspect of the cervical and middle third of the root of 11. Excision biopsy was done. The specimen was processed and stained with hematoxylin and eosin. Microscopic examination showed islands, chords and strands of odontogenic epithelium in a primitive ectomesenchyme resembling dental papilla. The odontogenic epithelium exhibited peripheral ameloblast-like and central stellate reticulum-like cells. The presence of dentinoid material was seen adjacent to the odontogenic epithelium in some foci. The lesion was diagnosed as ameloblastic fibrodentinoma. Ameloblastic fibrodentinoma also called dentinoma, arising from the odontogenic epithelium and ectomesenchyme, is a rare benign mixed odontogenic tumor with an occurrence of less than 1% of the odontogenic tumors [The majority of these hamartomatous lesions were observed during the first two decades of life, but it has been reported in older individuals as well. 
Ameloblastic fibrodentinoma is a slow growing asymptomatic lesion with both central and peripheral counterparts. There is a distinct male preponderance with a male: female ratio of 3\u2009:\u20091 and it is usually seen in the mandibular premolar molar area . SometimA 12-year-old boy came to the Department of Oral Medicine and Radiology with a chief complaint of a growth of the gingiva on the palatal aspect of 11. The growth appeared along with the eruption of the upper front teeth. At the time of eruption, there was midline diastema in 11 and 21 region with a nodule of tissue in between, which the patient pinched out 1 year back Figures . He gaveExcisional biopsy of the lesion was done. The gross specimen was a granulomatous tissue in the nasopalatine region in a cavitary space with regular margins and smooth floor . The tisOn microscopic examination of the specimen, islands, chords, and strands of odontogenic epithelium in a primitive ectomesenchyme resembling dental papilla were observed. The odontogenic epithelium exhibited peripheral ameloblast-like and central stellate reticulum-like cells. Presence of dentinoid material was seen adjacent to the odontogenic epithelium in some foci . A finalAmeloblastic fibrodentinoma is a slow growing, asymptomatic lesion, generally occurring in the first two decades of life, with the posterior region of the mandible being the most common site of occurrence . It is sAmeloblastic fibrodentinoma is an odontogenic tumor with or without formation of dental hard tissues. Radiographically, it presents as a well-defined radiolucency with varying degrees of radioopacity. Our radiograph also showed the presence of a well-defined radiolucency in relation to the mesial aspect of the cervical and middle third of the root of 11 surrounded by a corticated border without any detectable radioopacities within the radiolucency.Histologically ameloblastic fibrodentinoma is composed of proliferating epithelium embedded in a cellular ectomesenchymal tissue that resembles dental papilla, and the proliferating epithelium exhibits varying degrees of inductive changes on the mesenchyme, leading to the formation of varying amounts of dentin . If therConfusion exists on the nature and interrelationship of the mixed odontogenic tumors. Some authors are of the view that all the three mixed odontogenic tumors are interrelated and that ameloblastic fibrodentinoma is an intermediate stage between ameloblastic fibroma and ameloblastic fibroodontoma depending on the degree of histodifferentiation , 10. TheBased on the stage of development, ameloblastic fibrodentinoma can be histologically divided into immature and mature type. The malignant counterpart of ameloblastic fibrodentinoma is ameloblastic fibrodentinosarcoma. The mechanism of malignant transformation of ameloblastic fibrodentinoma is unknown. Multiple surgical procedures of recurrent lesions have been suggested as an important factor for the malignant transformation. Metastasis of the malignant counterpart is also uncommon .This case presented a rarity in that the lesion presented clinically as a case of gingival enlargement in the palatal aspect of the maxillary anterior region. Presence of a well-defined radiolucency on radiographic examination suggested that it was a case of false gingival enlargement, and the histopathologic examination confirmed the case as ameloblastic fibrodentinoma. 
Hence, this case is unique because of the fact that an intraosseous ameloblastic fibrodentinoma presented as a false gingival enlargement in the anterior maxillary region, a less common site for the occurrence of ameloblastic fibrodentinoma. This case highlights the importance of thorough evaluation of any gingival enlargement to rule out the possibility of an underlying neoplastic process."} +{"text": "The medical fields of otolaryngology and head and neck are evolving rapidly. For the past years, much new knowledge and many techniques have been introduced into the basic researches and clinical practices of otolaryngology and head and neck for a comprehensive understanding of diseases and practical clinical applications. Just recall the recent development of many medical devices designed for enabling techniques and surgery, including the newly designed hearing aids and coagulative surgical instruments; the trend of newly developed technologies is likely to be a constant flow. In addition, many distinct diagnostic modalities and research approaches have also been invented to investigate the underlying phenomenon and mechanisms of diseases. The cornucopia of all novel technologies and approaches serve as important blessings for both doctors and patients in the fields of otolaryngology and head and neck. This special issue is to showcase the diversity and advances in recent progress that contributes to the different subspecialties of otolaryngology and head and neck surgery.Recent advances both in basic and clinical studies have introduced new concepts and technologies to be applied in otolaryngology and head and neck surgery. In this special issue, several special topics regarding the applications of hearing devices and vestibular evoked myogenic potential, the surgical tools for potentiating the procedure of thyroidectomy, and identification of the novel therapeutic agents and underlying mechanism of head and neck cancer will be presented. One article of each field will be selected as the representative example of the progress. These articles will demonstrate the current advance of medical development in otolaryngology and head and neck surgery.An article will discuss the role of screening of cognitive function in old adults because of the efficacy of using hearing aid assistance. Previous studies had found that hearing loss is associated with poorer cognitive function. This study examined cognitive function in elderly hearing aid users. The study investigated whether the screening cognitive function should be considered in the individuals with hearing impairment. On the other hand, for the adult patients with unilateral microtia and atresia, many devices of implantable hearing amplification have been created for hearing ability rehabilitation. An article in this special will give a comprehensive review of the development, the progress, and the technical advantages of the different types of implantable hearing devices available. After understanding the pros and cons of each specific type of devices, it is easy to make the clinical decision of recommending a device to match the need of patients.Apart from the hearing ability, the vestibular function, another essential component of the 8th cranial nerve, has also drawn much attention of researchers and clinicians. An article included in this special issue will demonstrate the use of skull tap vestibular evoked myogenic potentials of diagnosing cerebellopontine angle tumor. 
The tap vestibular evoked myogenic potentials (tap VEMPs) are another method of inducing VEMPs by tapping method in contrary to the traditional one of using auditory stimulation. In the diagnosis and pretreatment evaluation of cerebellopontine angle tumor, the involvement and the extent of tumor are critical for the following treatments and can be clearly identified by clinical physiological examinations. In this article, the tap VEMPs could be successfully induced in some of the recruited patients. It shows a potential of clinical utility of combining both AC VEMP and tap VEMPs in the evaluation of patients suffering from cerebellopontine angle tumors. Salvia miltiorrhiza Bunge. It has been documented to be an antiproliferative reagent on various human cancer cells. This research aims to evaluate its role in nasopharyngeal carcinoma. By showing the effects of antiproliferation and apoptosis induction, Tan IIA is believed to be a potential candidate of exploring the anticancer drugs. On the other hand, bleeding is always a major concern in the procedure of thyroidectomy. Many distinct skills have been developed to control bleeding, and multiple devices and surgical instruments have been created to facilitate the hemostasis. Ligasure and bipolar thermofusion are two surgical tools that have been widely used to achieve hemostasis during thyroidectomy. In an article included in this special issue, the comparison between these two techniques will be addressed when more than 2122 lobectomies were performed within 8 years. The experience earned from the large patient population provides convincing evidence of using these distinct surgical tools for bleeding control of thyroidectomy.With the innovation of many tools of basic research, the understanding of molecular and cellular mechanism has achieved a profound advancement in the head and neck cancer. An article in this special issue will address the importance of finding novel and potential therapeutic reagents that target the mitochondria-dependent pathway. Tanshinone IIA (Tan IIA) is an active phytochemical retrieved from the dried root ofThe researches and development of new technologies and knowledge of the field of otolaryngology and head and neck surgery are committed to improving excellent medical care in all subspecialties to the patient groups worldwide. It is also committed to educating the health provider and investigators in the basics and the clinical practices of otolaryngology and head and neck surgery for the further progress in each subspecialty. The current knowledge and understanding have led to the exploration of new mechanism underlying the diseases and also development of novel therapies, surgical techniques, and tools to change the way of clinical practices. It is believed that the evolvement of medical researches and clinical health care may substantially improve the clinical outcomes and life quality of patients.Tsung-Lin Yang Robert L. Ferris Tetsuya OgawaPeter Hwang Michael Tong Mu-Kuan Chen"} +{"text": "Lactobacillus jensenii TL2937 in the late 1990s, initial investigations were focused on understanding their role, stimulating immune responses against infectious agents. Yet, the human body is home to a myriad of TLR agonistic bacteria that have not only established symbiosis with the immune system but also likely contribute to the induction and maintenance of immune homeostasis by dampening immune responses. 
In fact, stimulation of TLRs might be critically involved in this process and ultimately contribute to preventing development of inflammatory and autoimmune diseases. Sander De Kivit and colleagues present an overview of those aspects, notably outlining the role of gut epithelial TLRs in the induction of immunity and the maintenance of tolerance . Alexandra Zanin-Zhorov and colleagues described an immune regulatory mechanism conferred by TLR expression on T cells, which the authors review herein and discuss in the context of TLR-induced T cell effector functions . The rolThe author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Anorexia Nervosa (AN) and restrictive eating disorders present a significant burden of disease within the Australian Community with AN having the highest mortality rate of all psychiatric disorders. Family Based Treatment for Adolescent Anorexia Nervosa (FBT) has the best evidence worldwide for successful treatment outcome and sustained recovery.In October 2012, The Children Health Services District established a specialised weekly clinic to provide FBT for community based treatment of AN. Multidisciplinary staff with a background in Family therapy were trained in FBT. The aim was to develop a specialist service targeted at a high risk and resource-intensive population.Prior to establishing the clinic in this form Family based Treatment was provided at three separate community clinics. The intention of establishing a single clinic was to facilitate greater access to supervision and increase model fidelity, increase the capacity of the service, enhance the sense of team cohesion and increase the overall number of referrals.The presentation will outline the outcomes from the first two years of this specialist clinic. These include, a reduction in length of stay in hospital inpatient units, an increased proportion of families completing FBT, reduced length of time in treatment and an increased identification of anorexia nervosa with corresponding increased referrals into FBT."} +{"text": "This work analyzes and implements finite axonal transmission speeds in two-dimensional neural populations. The biological significance of this is found in the rate of spatiotemporal change in voltage across neuronal tissue, which can be attributed to phenomena such as delays in spike propagation within axons, neurotransmitter activation and the time courses of neuron polarization and refraction. The authors build upon the finite transmission speed work in .Linear analysis about a spatially homogeneous resting state of the neural population dynamics is performed. The analyses of the resulting analytical expressions guide the parameter selection for simulations. For simulation, computation of the transmission-delayed convolution between the kernel and firing rates is performed with a fast Fourier transform as in .The Neural Field Simulator is used We find Turing patterns appear when starting the simulations with the derived conditions for stationary instability. This is shown in Figure"} +{"text": "Plasmodium naturally infect humans. Some species cause non-specific symptoms that can sometimes progress rapidly to severe and fatal outcomes. Early diagnosis and appropriate treatment interrupts progression and cures disease. 
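The neural-field simulation abstract above reports computing the transmission-delayed convolution between the connectivity kernel and the firing rates with a fast Fourier transform before checking for Turing patterns. As a rough illustration of that FFT step only, the sketch below evolves a two-dimensional rate equation on a periodic grid; the kernel shape, grid size, firing-rate function and all parameter values are assumptions made for illustration, and the space-dependent axonal delays that the cited work actually implements are omitted here.

```python
import numpy as np

# Hypothetical grid and parameters (not taken from the abstract)
N, L = 128, 10.0                      # grid points per axis, domain length
dx = L / N
x = (np.arange(N) - N // 2) * dx
X, Y = np.meshgrid(x, x)
R = np.sqrt(X**2 + Y**2)

# Example "Mexican hat" connectivity kernel, centred on the grid
kernel = np.exp(-R**2) - 0.5 * np.exp(-R**2 / 4.0)
kernel_hat = np.fft.fft2(np.fft.ifftshift(kernel)) * dx * dx  # shift centre to index (0, 0)

def firing_rate(u, theta=0.0, beta=5.0):
    """Sigmoidal firing-rate function (an illustrative choice)."""
    return 1.0 / (1.0 + np.exp(-beta * (u - theta)))

def convolve(field):
    """Spatial convolution kernel * field on a periodic grid via the FFT."""
    return np.real(np.fft.ifft2(kernel_hat * np.fft.fft2(field)))

# One explicit Euler step of du/dt = -u + kernel * f(u); delays are ignored here.
dt = 0.05
u = 0.01 * np.random.randn(N, N)      # small perturbation of the resting state
u = u + dt * (-u + convolve(firing_rate(u)))
```

A full treatment of finite transmission speed would additionally store past activity and combine each radial band of the kernel with the activity delayed by distance divided by the propagation velocity, which is what makes the delayed convolution nontrivial.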
Confirmation of diagnosis of malaria currently relies on microscopy, or on application of rapid diagnostic tests (RDTs) that are becoming increasingly widely available and are recommended to confirm infection before treating it. The diagnosis of malaria by itself is not sufficient to optimise individual therapies because there is a growing problem of multidrug resistance in parasites . This limits the use of some drugs or combinations by severely compromising their efficacy.Five species of Diagnostic strategies for management of malaria can therefore be improved in several ways. First by an increase in sensitivity of detection of parasites that should improve of the thresholds for detection by the current generation of RDTs because they cannot identify low parasitaemias. Second, a diagnostic that can differentiate all the naturally infecting species of parasite needs to be developed. Finally, rapid (point-of-care) assessment of the drug resistance status of parasites will add greatly to the treatment strategies available to manage individual patients. Technological platforms that can deliver information that is currently missing for the personalised management of malaria infections are being developed and these advances will be presented and discussed in the context of the global burden of disease caused by malaria."} +{"text": "Lateralization of brain and behavior in both humans and non-human animals is a topic that has fascinated neuroscientists since its initial discovery in the mid of the nineteenth century Broca, . HemisphBased on these observations, the present Frontiers in Cognition Research Topic aimed to further investigate the relationship of lateralization and cognitive systems in the vertebrate brain. Overall, the Research Topic encompasses more than 30 novel publications, ranging from Original Research Articles to Reviews and Mini Reviews, Perspective Articles and Hypothesis and Theory Articles. From the beginning, the present Research Topic was conceptualized with a comparative multi-disciplinary inter-species approach in mind. This idea is reflected in the broad diversity of animal models included in the Research Topic, ranging from invertebrates (Frasnelli, Taken together, the wide variety of cognitive systems in different species covered in the present Research Topic highlights the enormous importance of understanding how and why the vertebrate brain is asymmetrically organized for almost any subfield within cognitive neuroscience. We hope that the excellent papers assembled in the present Research Topic will help to stimulate more research aimed at understanding the complex mechanisms underlying the interaction between hemispheric asymmetries in stimulus perception and higher cognitive systems.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "BMC Biology addresses this issue with live imaging of gut explants from mouse embryos.The complex physiology of the gastrointestinal tract is regulated by intricate neural networks embedded within the gut wall. How neural crest cells colonize the intestine to form the enteric nervous system is of great interest to developmental biologists, but also highly relevant for understanding gastrointestinal disorders. A recent paper in http://www.biomedcentral.com/1741-7007/12/23.See research article: BMC Biology by Young and colleagues ]? Or]? Oret an vitro . 
In thiShedding light on the cellular and molecular mechanisms that control the colonization of the intestine by neural crest cells is of obvious medical importance. A number of conditions are thought to result from deficits in the migration and ultimately the colonization of the gut by neural crest cells and the normal development of the ENS. Foremost among them is Hirschsprung\u2019s disease, a congenital neurodevelopmental deficit of the ENS, which occurs in 1:4,500 live births and is characterized by absence of enteric ganglia from the distal colon. In addition, a number of ill-defined conditions, termed collectively as functional gastrointestinal disorders, are thought to originate from subtle changes in the connectivity of enteric neurons. Therefore, understanding the mechanisms that regulate the migration of neural crest cells and how they may influence the connectivity of enteric neurons is critical for deciphering the pathogenesis of a broad range of congenital and acquired deficits of enteric neural activity and digestive function."} +{"text": "Access to the endodontic lesion located between the central and lateral incisors was achieved by reflection of a full mucoperiosteal flap. Granulomatous tissue as well as aberrant root was removed and the surface of the root and adjacent coronal region were reshaped. Three years later, the patient was orthodontically treated. Seven years after completion of surgical/orthodontic management, the tooth remained asymptomatic and functional with normal periodontium/vital pulp. Radiographically, the healing of the lesion was observed. Actually, vitality of the invaginated tooth and communication between the invagination and the root canal were the most important factors in determining such minimally invasive treatment protocol. Depending on the anatomy of the root canal system, surgical amputation of an invaginated root can be performed to achieve a successful outcome in Oehler\u2019s type III dens invaginatus cases, even though it is associated with apical periodontitis.The developmental abnormality of tooth resulting from the infolding of enamel/dentin into the root is called dens invaginatus. Management of such cases is usually challenging due to the morphological complexity of root canal system. This report presents a rare treatment protocol of a clinical case of Oehler\u2019s Malformations of teeth with variations involving either the crown or the root or both are described as dental hard tissue anomalies , 2. DensType I is an enamel-lined minor invagination confined to the crown without extending beyond the cemento-enamel junction. Type II describes the extension of the invagination into the root, beyond the cemento-enamel junction ending in a blind sac. Some type II cases may communicate with the dental pulp. Type III includes penetration of the root by the invagination to form an additional apical or lateral foramen, usually without an immediate communication with the dental pulp [Oehler classified dens invaginatus into three distinct types according to the depth of invagination and its communication with the periodontal ligament or periapical tissues; tal pulp , 6. 
type III cases, treatment procedures are usually chosen from the following procedures: root canal treatment of the invagination only [Based on the complex root canal anatomy and the shape of the pulp in the ion only , endodonion only , combineion only , intentiion only , and extion only .The following unique case report describes a successful surgical aberrant root amputation of an invaginated extra-root with an endodontic lesion in a vital maxillary lateral incisor.A 16-year-old male was referred to private clinic by a general dentist for evaluation and treatment of the maxillary right lateral incisor with enlarged clinical crown. The patient described moderate pain associated with the involved tooth. At the time of clinical evaluation, the patient\u2019s symptoms had subsided. His medical history was not contributory and there was no history of trauma. The involved tooth was caries free and responsive to thermal and electrical pulp tests while the adjacent teeth were also responding. The clinical crown was larger than the contralateral tooth, especially the mesiodistal aspect. Crown appeared to be intact with a small palatal tuberculum without a pit or foramen coecum and revealed no discoloration, but mesial tilting of clinical crown with a slight depression on palatal surface near the mesial marginal ridge was evident . No tendtype III dens invaginatus and associated chronic periradicular periodontitis. Radiographic examination revealed large periradicular radiolucency along the mesial surface adjacent to the mesial aspect of the tooth. There was a mesially directed radiopaque enamel-lined tract, which is related to the periapical radiolucency. Enamel lined tract was separated from the main root canal. The abnormal hard tissue structure of invaginated root was extending from the mesial aspect of the middle third of the main root canal to the cervical distal margin of the adjacent central incisor. The diagnosis was a vital tooth with The treatment plan included the surgical amputation of bucally-located anomalous root-like structure of dens invaginatus. Under local anesthesia, a full mucoperiosteal flap was elevated; access to the pathological area was obtained between left central and lateral incisors. The lesion was curetted and the specimen was sent for histopathological examination. The abnormal hard tissue structure of invaginated root with groove was removed using a tapered diamond fissure and finishing burs under continuous irrigation with saline solution. The surface of the main root adjacent to the cervical region was reshaped and smoothed while exposure of the dentinal tubules was minimized. The flap was replaced and sutured . The palThe histologic diagnosis of the biopsy material was granulation tissue with severe chronic inflammation. A postoperative four-year periapical radiography taken during orthodontic treatment showed resolution of the periradicular radiolucency while the tooth remained asymptomatic and continued to give a positive response to cold test. Periodontal probing depths were found to be within normal limits . Seven-yTreatment of dens invaginatus is undoubtedly an endodontic challenge, especially because of the complicated morphology of tooth and the complexity of associated root canal system. 
It is therefore important to identify the morphologic character of the canal before treating a tooth with dens invaginatus associated with pulp pathosis .type III dens invaginatus, the pulp is confined within the wall of the invagination process, there is usually no communication between the root canal and the invagination itself. Although the invagination may seem to be completely lined by enamel, the apical portion of the invagination is more often lined only by cementum [In cases of Oehlers\u2019 cementum . Bactericementum . Following eruption, teeth with dens invaginatus, lose the main blood supply of the connective tissue in the invaginated space which subsequently results in necrosis of the tissue within the invaginated tract and this in turn, leads to the development of periradicular inflammation and the The presence of communication between the invagination and the pulp may be an important prognostic value, but in the present case, it was presumed that such communication does not exist. On the other hand, communication between the invagination and the oral environment was not apparent clinically, but the abnormal shape of an invaginated root caused a plaque-retentive area prompting the development of a localized advanced periodontitis that may lead to occurrence of periradicular lesion.type III dens invaginatus with endodontic failure or in teeth, which cannot be treated non-surgically because of anatomical problems or failure to gain access to entire root canal system [Researchers suggested that pulp necrosis and periradicular pathosis may occur in permanent incisors with abnormal crown or incisors with root groove defect , 15. Furl system . type III dens invaginatus besides the diagnosis of periradicular lesion, pulp health was retained after endodontic treatment [Despite the complex anatomy of the reatment or combireatment . To the type III dens invaginatus associated with an apical lesion without interfering with the vitality of the main root.The present case report describes the successful surgical treatment of an extra-root of a"} +{"text": "There is a paucity of data about whether our treatment philosophy is different for our patients as compared with what we would have wanted for ourselves, or while acting as surrogate decision-makers for our loved ones.An anonymous survey was sent to all the members of Australia and New Zealand Intensive Care Society and the College of Intensive Care Medicine (CICM). The first section comprised a hypothetical case scenario spanning over 6 weeks of ICU stay for a patient. At four different stages of the ICU stay, responders were requested to answer multiple-choice questions regarding the philosophy of treatment, based on their perceived prognosis of the patient at that particular time. The following two sections contained the same set of questions with the hypothetical scenario of responders acting as surrogate decision-makers for the patient and that of responders being patients themselves, in the same situation. The responses were compared amongst three sections at each stage using the chi-square test.A total of 115 responses were received from the fellows of CICM. The results are presented in Tables Of the ICU physicians who would withdraw care for their patient, the majority would also want the same for themselves. 
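The intensive-care survey summarised above states that responses were compared amongst the three sections (deciding for a patient, deciding as a surrogate, and deciding for oneself) at each stage using the chi-square test. A minimal sketch of such a comparison is given below; the response categories and counts are invented for illustration and are not the survey's data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical response counts: rows are the three decision scenarios,
# columns are "continue full treatment" vs "limit or withdraw treatment".
observed = np.array([
    [70, 45],   # deciding for a patient
    [55, 60],   # deciding as a surrogate for a relative
    [40, 75],   # deciding for oneself
])

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}")
```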
The disparity between the decision to continue to treat the patients versus treating self or family increased with increasing length of stay."} +{"text": "In cluster headache (CH) during the active period we described a facilitated temporal summation (TS) of nociceptive signals at spinal level, linked to a defective supraspinal control of pain and followed by a normalization of the values during the remission period. We studied 10 episodic CH patients during both the active and remission periods and 17 healthy subjects (HS). Two types of stimulation blocks were delivered during the fMRI scanning according to the stimulation paradigms previously determined to evoke both the TST of the NWR (SUMM) and the NWR single response (SING). The analysis of the hemodynamic signals showed a comparable activation of sensory and pain-related areas in both CH (during the active and remission periods) and HS. The most relevant differences emerged in the deactivation of both the posterior cingulate cortex (PCC) and the bilateral angular gyrus (AG) and in the activation of the anterior cingulate cortex (ACC). CH patients during the active phase showed a lack of deactivation of the PCC and AG and a more relevant activation of the ACC when compared to CH during the remission phase and HS. PCC, AG and ACC are considered to be pivotal in the default mode network (DMN), with high activity correlated with rest and reactive deactivation during most tasks where attention is directed externally. Our data demonstrate that in CH during the active phase of the disease, the facilitation in temporal processing of nociceptive stimuli is linked to a defective functioning of the DMN. Interestingly, both these abnormalities are dependent on the clinical activity of the disease. Written informed consent to publication was obtained from the patient(s)."} +{"text": "To compare the similarities among the multimorbidity patterns identified in primary care patients from two European regions (Spain and the Netherlands) with similar organisational features of their primary care systems, using validated methodologies. This observational, retrospective, multicentre study analysed information from primary care electronic medical records. Multimorbidity patterns were assessed using exploratory factor analysis of the diagnostic information of patients over 14 years of age. The analysis was stratified by age groups and sex. The analysis of the Dutch data revealed a higher prevalence of multimorbidity, which corresponds to the clustering of a higher number of diseases in each of the patterns. Relevant clinical similarities were found between both countries for three multimorbidity patterns that were previously identified in the original Spanish study: cardiometabolic, mechanical and psychiatric-substance abuse. In addition, the clinical evolution towards complexity of the cardiometabolic pattern with advancing age (already demonstrated in the original study) was corroborated in the Dutch context. A clear association between mechanical and psychosocial disorders was unique to the Dutch population, as was the recurrent presentation of the psychiatric-substance abuse pattern in all age and sex groups. The similarities found for the cardiometabolic, mechanical and psychiatric-substance abuse patterns in primary care patients from two different European countries could offer initial clues for the elaboration of clinical practice guidelines, if further evidenced in other contexts.
This study also endorses the use of primary care electronic medical records for the epidemiologic characterization of multimorbidity. Multimorbidity, which refers to the coexistence of two or more chronic conditions, is highly prevalent in older populations and, in absolute numbers, even more common in the adult population Lately, the global move towards the primary care-led management of chronic diseases has incorporated the analysis of multimorbidity patterns into the research agenda, with the goal of providing health and social care professionals with a better clinical and epidemiological understanding of the synergies and effects associated with coexisting diseases Further knowledge regarding the similarities and differences between the patterns obtained in different countries is expected to pave the way for the establishment of a sound and rigorous methodology for the identification of patterns of multimorbidity.The present work aims to assess the similarities among the multimorbidity patterns identified in primary care patients from two European regions (Spain and the Netherlands) with similar organisational features of their primary care systems, using validated methods.Data for the Spanish population were obtained from electronic medical records of people over 14 years of age who consulted their general practitioner at least once in 2008 at any of the 19 primary health care centres included in this study. The centres were located in two north-eastern regions of Spain and had more than two years of experience using electronic records by all general practitioners and nurses Data for the Dutch population derived from the Registration Network Family Practices of The Netherlands Both primary care databases contain basic demographics as well as all relevant health problems affecting the present functional status of patients and/or their future functioning coded using the International Classification of Primary Care (ICPC) To facilitate the management of diagnostic information, diseases were grouped according to the Expanded Diagnostic Clusters (EDC) of the ACG system in both contexts. This system groups ICPC codes into 260 EDCs based on the clinical, diagnostic and therapeutic similarities of diseases All information was anonymised using a unique patient identifier. The Spanish database was approved by the Clinical Research Ethics Committee of Aragon (CEICA). In the Netherlands, no ethical approval is necessary when analysing anonymised data The statistical analysis applied to the Dutch data was the same as that performed in the original Spanish study, and previously by other authors The extraction of the disease patterns was based on the principal factor method, and the number of factors to extract was established using a scree plot To determine which health problems (EDCs) belonged to each of the multimorbidity patterns, those with scores equal to or greater than 0.25 for each factor were selected. To calculate the prevalence of the patterns according to the most common definition of multimorbidity, an individual was assigned a specific pattern of multimorbidity if he/she presented at least two of the diseases included in the pattern. 
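The methods above extract patterns with the principal factor method, choose the number of factors from a scree plot, assign a disease to a pattern when its factor loading is 0.25 or higher, and label a patient with a pattern when at least two of its diseases are present. The sketch below mirrors those steps on a made-up patient-by-diagnosis matrix; scikit-learn's FactorAnalysis fits factors by maximum likelihood rather than the principal factor method, so it is only a stand-in, and the EDC labels, sample size and factor count are placeholders rather than anything from the study.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis

# Hypothetical patient-by-EDC binary matrix (1 = diagnosis present).
# In the study this information comes from ICPC codes grouped into EDCs.
rng = np.random.default_rng(0)
edc_labels = [f"EDC_{i:03d}" for i in range(40)]           # placeholder names
X = pd.DataFrame(rng.integers(0, 2, size=(1000, 40)), columns=edc_labels)

# Keep only diagnoses with prevalence >= 1% in the stratum, as in the paper.
X = X.loc[:, X.mean() >= 0.01]

# Factor extraction; the number of factors would be chosen from a scree plot.
n_factors = 3
fa = FactorAnalysis(n_components=n_factors).fit(X)
loadings = pd.DataFrame(fa.components_.T, index=X.columns,
                        columns=[f"factor_{k + 1}" for k in range(n_factors)])

# Diseases with a loading >= 0.25 define each multimorbidity pattern.
patterns = {col: loadings.index[loadings[col] >= 0.25].tolist()
            for col in loadings.columns}

# A patient is assigned a pattern if he/she presents >= 2 of its diseases.
prevalence = {col: (X[cols].sum(axis=1) >= 2).mean() if cols else 0.0
              for col, cols in patterns.items()}
print(patterns)
print("prevalence of each pattern:", prevalence)
```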
The entire analysis was stratified by age groups and sex.As in the original work in the Spanish setting, only diagnoses with a prevalence equal to or greater than 1% in each age and sex group were included in the analyses so as to increase the epidemiological interest of the study.The STATA 12.0 software was used to conduct the statistical analysis.The Dutch study population consisted of 79,291 individuals and the Spanish one of 275,682, with a slightly smaller proportion of middle age individuals in the latter group . MultimoIn the original Spanish study In total, 6.3% of young Dutch men were assigned to at least one of the three factors obtained for this age group . Factor Over a quarter of Dutch men in this age group (27.3%) were assigned to at least one of the two identified factors . The firTwo thirds of the men in this age group (66.9%) were assigned to at least one of the two factors identified for this group . Factor At least one of the three factors identified for young Dutch women were present in 12% of this group . Factor Two patterns affecting more than a quarter of these women (26.2%) were identified . Factor Almost two out of three women (64.8%) were assigned to at least one of the three factors identified in this age group . The firIn this first population-based study comparing multimorbidity patterns in two different European regions, relevant similarities were found for three of the five patterns identified in the original Spanish study, namely the cardiometabolic, mechanical and psychiatric-substance abuse. The analysis of Dutch data revealed a higher prevalence of multimorbidity which corresponds with the clustering of a higher number of diseases in each of the patterns.To date, very few studies focusing on disease patterns encompass populations covering broad age ranges. This study attempts to take into account health problems throughout the entire lifespan, supporting the generalizability of the findings The pattern with highest similarities in the Spanish and Dutch results was the cardiometabolic one. This pattern, as shown in the Spanish study The greater complexity of multimorbidity patterns in the Dutch population starting at middle age could be partly explained by the use of different diagnostic protocols in the two countries. In this regard, it is worth mentioning that in 2010 functional payments for the total episode of care were introduced in general practice in the Netherlands for specific conditions and cardiovascular risk management, under the premise that this new payment system would improve the position of the patient and the quality of care In the Dutch population, a mechanical component was evident in young and elderly women and in men up to middle age, comparable to the Spanish mechanical pattern. Diseases such as arthropathy, cervical and low back pain, varicose veins of the lower extremities, gastro-oesophageal reflux, anxiety and obesity coincided in both studies and confirm previous results found by other authors Two of the diseases associated with the psychiatric-substance abuse pattern described in young Spanish men (substance use and anxiety/neuroses) together with other psychosocial conditions were also present in the Dutch population, but in all age and sex groups. 
The recurrent presentation of this cluster of psychosocial problems in the Dutch population could be a consequence of the development in the past 15 years of the Dutch mental health sector; about half of Dutch adults with a serious mental or addictive disorder receive care and two-thirds of them receive satisfactory care The main strengths of the present study are the similarities concerning the study populations and the organisational features of primary care in both the Spanish and Dutch contexts. Both studies were based on primary care populations served by general practitioners who systematically register diagnoses using ICPC. Moreover, the public nature of both health care systems and the high access of citizens, as well as the one-year observation periods guarantee that selection bias is reduced. Still, several limitations need to be considered.Diagnostic habits and registration of clinical data may vary in the two countries, due to differences in the level of professional training, the degree of implementation or content of clinical practice guidelines, the use of active protocols for detecting certain diseases, organisational factors, etc. Indeed, the Dutch general practice holds an outstanding position regarding quality assurance and guideline implementation with respect to other European countries Another limitation, also mentioned in the original Spanish study, was the cross-sectional design used in both studies, limiting the establishment of causal relationships and/or the evolution of patterns over time The study of the Dutch patterns from the perspective of the pathophysiology of the identified disease interactions was out of the scope of this study, focused rather on comparing the multimorbidity patterns obtained in two different health care contexts. As a consequence, Dutch patterns lacked specific clinical labels.The assessment of the differences/similarities between the results obtained in both countries was largely qualitative and descriptive. A statistical analysis of such differences does not seem very relevant in this study since, given the large numbers, all differences are expected to be statistically significant. Therefore, a qualitative comparison seems to be more pertinent and better adapted to the clinical appraisal of the associations and interactions among diseases.The similarities found for certain multimorbidity patterns in primary care patients from two different European countries could offer initial clues for the elaboration of clinical practice guidelines, if further evidenced in other contexts. This has been repeatedly mentioned and urgently requested by the scientific community This study also endorses the use of primary care electronic medical records for the epidemiologic characterization of multimorbidity. Moreover, electronic medical records would enable a longitudinal approach to the multimorbidity phenomenon.File S1Combined file of supporting figures. Figure A - Scree plots for the different age and sex groups in the Spanish setting. Figure B - Scree plots for the different age and sex groups in the Dutch setting.(DOC)Click here for additional data file."} +{"text": "The purpose of this study was to utilize the Context, Input, Process and Product (CIPP) evaluation model as a comprehensive framework to guide initiating, planning, implementing and evaluating a revised undergraduate medical education programme. The eight-year longitudinal evaluation study consisted of four phases compatible with the four components of the CIPP model. 
In the first phase, we explored the strengths and weaknesses of the traditional programme as well as contextual needs, assets, and resources. For the second phase, we proposed a model for the programme considering contextual features. During the process phase, we provided formative information for revisions and adjustments. Finally, in the fourth phase, we evaluated the outcomes of the new undergraduate medical education programme in the basic sciences phase. Information was collected from different sources such as medical students, faculty members, administrators, and graduates, using various qualitative and quantitative methods including focus groups, questionnaires, and performance measures. The CIPP model has the potential to guide policy makers to systematically collect evaluation data and to manage stakeholders\u2019 reactions at each stage of the reform in order to make informed decisions. However, the model may result in evaluation burden and fail to address some unplanned evaluation questions. The CIPP model addresses all the steps of an education programme, even when the programme is still being developed.Context evaluation is very important in convincing the faculty members and policymakers at the onset of a major programme reform.Input evaluation might be helpful in saving precious resources which might be lost by performing evaluation at the end of the programme.The CIPP model provides ongoing information to decision-makers to ensure that the implemented programme is on the track.The CIPP evaluation model may fail to address some important but unplanned evaluation questions and may result in evaluation burden.The past two decades have witnessed an international call for fundamental changes in medical education programmes \u20123. Many The use of the Context, Input, Process and Product (CIPP) evaluation model has been thoroughly recognized in a variety of educational and non-educational evaluation settings \u201213. AddiThe CIPP evaluation model addresses all phases of an education programme renewal , accommoThis article elaborated the use of the CIPP evaluation model as a comprehensive framework to help to initiate, develop, install, and evaluate a new undergraduate medical education programme in a period of eight years. We examined five specific research questions:How does the CIPP evaluation model effectively facilitate the management of the stakeholders\u2019 reactions during the undergraduate medical education programme reform?What are the needs of the undergraduate medical students and the community?What is an appropriate model for an undergraduate medical education programme to address the identified needs?What are the strengths and weaknesses of the new undergraduate medical education programme?To what extent has the new undergraduate medical education programme achieved its outcomes in basic science phases?This study was conducted at the School of Medicine of Tehran University of Medical Sciences. This school, which is one of the largest and oldest among Iranian medical schools, delivered a traditional Flexnerian undergraduate medical education programme for a long period of time. This programme was composed of two and half years of basic sciences, one year of pathophysiology, a two-year clerkship, and an 18-month internship. 
The idea of reform in the traditional programme was raised seriously in the early 2000s when the change seemed inevitable in our institution in response to profound contextual changes and the recommendations of the Iranian Ministry of Health and Medical Education.This longitudinal evaluation project started from 2006. The entire process was supervised by Educational Development Office of the School of Medicine. Fig.\u00a0In order to understand the necessity and scope of the change, we conducted a comprehensive context evaluation from 2006 to 2009 which comprised five projects. The projects included exploring the challenges of the traditional programme from the stakeholders\u2019 viewpoint, evaluating the quality of the traditional programme in graduates\u2019 perceptions, assessing the educational environment from the students\u2019 perspectives using the Dundee Ready Education Environment Measure (DREEM) inventory, self-study of the traditional programme in comparison with the national undergraduate medical education standards and evaluating the competency of medical students in clinical skills through an objective structured clinical examination Table\u00a0.In a two-year input evaluation project, we carried out three consecutive activities. In order to set down a sound model for the revised undergraduate medical education programme, the responsible taskforce reviewed the relevant literature on authoritative medical education journals and visited the websites of leading medical schools around the world and also collected national documents of undergraduate medical education programmes. Next, expert panels were held to generate a preliminary draft of the framework of the undergraduate medical education programme on the basis of context evaluation and literature review results. The preliminary draft was converted to the final version of the programme during a participatory process. Meetings were conducted with faculty members from different departments, medical school administration and students\u2019 representatives in both basic and clinical sciences phases, as well as with the recent graduates, to receive their input. Finally, expert judgment was considered to determine the feasibility of the proposed model and adjustments were made to improve it. Overall, 170 faculty members and administrators participated in the input evaluation phase. We also asked three experts for their comments from abroad. We involved students considerably during the planning phase: 18 students in committees and subcommittees, 35 students in panels and some others in workshops.In September 2011, School of Medicine implemented the revised undergraduate medical education programme with extensive changes on the basis of input evaluation results. The process evaluation started from scratch when the new programme was launched. Information was regularly collected through diverse methods. For instance, online questionnaires were administered and focus groups were conducted after each interdisciplinary organ-system block in order to receive the students\u2019 viewpoints. We also reviewed the course syllabi to make sure all classes and sessions were held as planned with the basic science phase, and 56.12\u2009% of the students strongly agreed or agreed that they achieved the vertically integrated theme outcomes. 
The DREEM questionnaire scores, student grade point averages, the failure rates, and National Comprehensive Basic Science Exam results did not differ significantly between the traditional and renewed curriculum (all By employing unique features of the CIPP evaluation model, we succeeded in convincing stakeholders of the need for major changes in the undergraduate medical education programme. We were also successful in creating a sense of ownership of the new programme in our stakeholders. We achieved some success in reassuring students, faculty members and administrators during the programme installation about the programme progress by continuous process evaluation and initial product evaluation, communicating successes and challenges with the stakeholders and using evaluation results for improvement. However, proving the effectiveness of the programme demands the passage of time. We also found that maintaining stakeholders\u2019 collaboration and enthusiasm was a difficult task. Hence, establishing a reward system for compensating faculty participation and overload teaching might be beneficial.The aim of this evaluation study was to utilize the CIPP evaluation model to assist decision-makers to initiate, develop, establish, and evaluate a revised undergraduate medical education programme in one medical school in Iran. This is the first study applying all four interrelated components of the CIPP evaluation model in a longitudinal work throughout the renewal cycle of an undergraduate medical education programme. The results of this study showed that the components of the CIPP evaluation model could successfully address all steps of the reform even when the new programme is still being developed. We took advantage of context and input evaluation before, as well as process and product evaluation after the implementation of the new programme. Context evaluation identified the weaknesses and strengths of the traditional programme and the needs of the learners and the community which, in turn, directed the rest of the renewal process. Input evaluation resulted in formulating a new programme tailored to our context and was helpful in saving precious resources. Process evaluation enabled us to improve the weaknesses early on in the reform installation. Product evaluation examined the extent of the initial achievements in the outcomes in basic science phase.Few studies have applied the CIPP evaluation model longitudinally to the evaluation of medical education programmes. Steinert et al. conducteThe CIPP evaluation model was also helpful in managing the stakeholders\u2019 reactions through the reform process. We found it challenging to initiate and sustain major reforms in the undergraduate medical education programme in a large and old medical school with a history of success. Therefore, we conducted a comprehensive context evaluation with triangulation of the evaluation sources and methods. The context evaluation revealed the problems of the traditional programme deeply and broadly, which was very helpful to convince decision-makers and faculty members about the need for broad changes in the programme. Triangulating the evaluation data has been mentioned in the literature on medical education reform as an important factor to create the need for change as well , 28.Once the need for change was confirmed, the prevailing reaction of the stakeholders was that our medical school is different and models of the undergraduate medical education programme reform are not necessarily suitable to our context. 
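The product evaluation above reports that DREEM scores, grade point averages, failure rates and National Comprehensive Basic Science Exam results did not differ significantly between the traditional and revised curricula, but the truncated text does not name the statistical test used. Purely as an illustration of a two-cohort comparison of this kind, and not as the study's actual analysis, an independent-samples comparison of total scores might look like the sketch below; the test choice and all numbers are invented.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)

# Invented DREEM total scores (0-200 scale) for two hypothetical cohorts.
traditional = rng.normal(118, 18, size=150)
revised = rng.normal(120, 18, size=150)

t_stat, p_value = ttest_ind(traditional, revised, equal_var=False)  # Welch's t-test
print(f"t={t_stat:.2f}, p={p_value:.3f}")
```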
The steps taken during the input evaluation phase along with extensive stakeholder engagement were effective strategies to design an educationally sound undergraduate medical education programme that was readily adaptable to our situation and more importantly was accepted by the programme stakeholders .Ongoing process evaluation and formative product evaluation with the use of qualitative methods enabled us to explore stakeholders\u2019 reactions systematically during the implementation of the revised programme. For example, the first cohort of students was confused about the details of the reform and concerned about the success and continuation of the programme Table\u00a0. AdditioApplying the CIPP evaluation model is a time-consuming and demanding task which needs full administrative support and leadership stability. We faced the challenge of four changes in medical school dean during the reform process. However, the behaviour of all the administrators was supportive and we dealt with this instability successfully by creating a shared responsibility for the reform process between different groups of stakeholders. We also found that gathering evaluation data from different sources, and combining and timely reporting of these triangulated data, were difficult tasks that needed administrative support and expertise. We tried to overcome these challenges by involving our medical students in graduate courses in the evaluation process. We also assigned some parts of the evaluation practice to a volunteered group of medical students. We intend to involve the course directors in gathering evaluation data directly from students in the near future. Although these strategies were helpful, medical schools that may consider using this model should prioritize the evaluation questions carefully in order to manage the evaluation burden. Another weakness of the CIPP evaluation model was its focus on evaluating the predetermined plan and product. Therefore, some important questions such as the extent and nature of unintended outcomes of the programme might have remained unanswered in our study. Beyond the limitations of the CIPP evaluation model, our study mainly focused on the basic sciences phase of the undergraduate medical education programme during the process and product evaluations. We need to continue the project to examine the extent of outcome achievement, especially in the clinical phase.The CIPP evaluation model has the potential to guide policy makers and other stakeholders to systematically collect evaluation data at each stage of reform in an undergraduate medical education programme in order to make informed decisions. Moreover, this model seems useful in managing the change process in terms of stakeholders\u2019 reactions. The use of this evaluation model in other programmes in the context of medical education should be further studied."} +{"text": "Alcohol misuse is a significant public health problem with major health, social and economic consequences. Systematic reviews have reported that brief advice interventions delivered in various health service settings can reduce harmful drinking. 
Although the links between alcohol and oral health are well established and dentists come into contact with large numbers of otherwise healthy patients regularly, no studies have been conducted in the UK to test the feasibility of delivering brief advice about alcohol in general dental settings.The Dental Alcohol Reduction Trial (DART) aims to assess the feasibility and acceptability of screening for alcohol misuse and delivering brief advice in patients attending National Health Service (NHS) general dental practices in North London. DART is a cluster randomised control feasibility trial and uses a mixed methods approach throughout the development, design, delivery and evaluation of the intervention. It will be conducted in 12 NHS general dental practices across North London and will include dental patients who drink above the recommended guidance, as measured by the Alcohol Use Disorders Identification Test (AUDIT-C) screening tool. The intervention involves 5\u2005min of tailored brief advice delivered by dental practitioners during the patient's appointment. Feasibility and acceptability measures as well as suitability of proposed primary outcomes of alcohol consumption will be assessed. Initial economic evaluation will be undertaken. Recruitment and retention rates as well as acceptability of the study procedures from screening to follow-up will be measured.Ethical approval was obtained from the Camden and Islington Research Ethics Committee. Study outputs will be disseminated via scientific publications, newsletters, reports and conference presentations to a range of professional and patient groups and stakeholders. Based on the results of the trial, recommendations will be made on the conduct of a definitive randomised controlled trial.ISRCTN81193263. The brief alcohol advice intervention has been modified and applied for the first time to the setting of a National Health Service (NHS) general dental practice.The training programme was designed after extensive consultation with key stakeholders and was tailored to address the needs and circumstances of NHS dental professionals and their patients.This feasibility study will inform the design of a future definitive evaluative study.Alcohol misuse is a significant public health problem with major health, social and economic consequences. Findings from the latest Health Survey for England show that approximately 24% of men and 18% of women exceed the current Department of Health alcohol consumption guidelines.2The identification of people who are drinking above the recommended guidelines and the provision of brief advice by NHS primary care professionals are important components of an alcohol control strategy. National Institute of Health and Care Excellence (NICE) guidelines recommend that primary care professionals should screen all patients for alcohol misuse.5The links between alcohol and oral health are well established. Most importantly these include increased risk for oral cancers, with and without the synergistic effect of smoking,A limited body of research has been conducted on the use of certain screening tools to assess the prevalence of high-risk drinking among dental patients. Two clinical audits assessing the completion of an alcohol consumption question in a standard medical history form used in the emergency clinic at the university dental hospital in Cardiff, found that the question was either completed incorrectly, or not used at all. 
In the second audit, the M-SASQ question was answered more often and resulted in more patients receiving alcohol advice compared to the standard question.A limited number of randomised controlled trials have investigated the effectiveness of brief advice interventions in clinical dental settings, using dental care professionals rather than dentists. A randomised controlled trial in Cardiff used a nurse-led brief intervention utilising motivational interviewing with young males who were treated at an oral and maxillofacial surgery outpatient clinic for alcohol related facial injury and demonstrated a significant reduction in alcohol consumption in the intervention group compared to the controls at 12\u2005months follow-up.Exploratory work has, however, revealed that general dental practitioners are reluctant to engage with patients about alcohol due to lack of confidence in discussing the subject and concerns that this may adversely affect the practitioner\u2013patient relationship.The Dental Alcohol Reduction Trial (DART) study aimed to assess the feasibility and acceptability of screening for alcohol misuse and delivering brief advice to patients attending NHS general dental practices in North London.To explore the views of dental professionals and dental patients on the relevance and importance of alcohol misuse to oral health, as well as the acceptability of screening and providing brief alcohol advice in general dental settings.To develop and evaluate a brief alcohol advice intervention tailored for use in NHS general dental practices and to assess through a process evaluation the feasibility and acceptability of the intervention to patients and dental professionals.To assess the feasibility of the main trial methodology including the use of a screening tool, subject recruitment and retention, and data collection procedures including economic evaluation to inform the future design of a definitive randomised controlled trial.The study objectives were:The developmental phase which involved separate focus groups with dental professionals and dental patients. This phase informed the development of the training programme and intervention for the main phase of the trial.The feasibility trial which involved: (A) engagement and recruitment of dentists, (B) training dental teams on research governance (control and intervention arms) and on how to deliver alcohol brief advice (intervention arm), (C) participant recruitment, (D) data collection (baseline and 6\u2005months follow-up data), (E) process evaluation including assessment of intervention fidelity, acceptability to participants and dental professionals and feasibility of cost-effectiveness evaluation.The DART study comprises of two phases:The study is registered as a primary clinical trial (ISRCTN number: 81193263).The DART study is a cluster randomised control feasibility trial and uses a mixed methods approach throughout the development, design, delivery and evaluation of the intervention. The trial took place in general dental practices across North London . The study timeline is presented in The developmental phase of the study involved the completion of five focus groups; two with dental professionals and three with dental patients. Recruitment for the developmental phase started in December 2013. 
The discussions explored dental teams\u2019 understanding and views of the importance of alcohol misuse in relation to oral health; perceived extent of alcohol misuse among patients; dental professionals\u2019 attitudes towards screening, providing advice, and referral when necessary; acceptability and barriers of engaging in alcohol screening and advice; best ways of delivery; and also perceived training and support needs.Similarly, discussions with dental patients explored their views on the relevance of alcohol to oral health and dentistry, acceptability of dental teams enquiring about alcohol intake and offering advice, views on the best ways for dental teams to approach the subject and provide advice and support, and on suitable information for participants in the group not receiving the intervention. The information gathered during these discussions was used to design the intervention.NHS dental practices were recruited into the study through a number of complementary ways. The principal investigator along with the research team had already established excellent links with a number of dental practices across North London, which had participated in previous research studies and had expressed interest in being involved in future research. In addition a mail was sent out to all NHS dental practices in the selected areas of North London and interested practices then visited by the principal investigator. Finally, a snowballing recruitment method was also used with dental professionals already engaged in the study recommending other colleagues who might be interested in participating in the research.To avoid risk of contamination, dental practices were randomised to intervention and controls. Lists of all the dental practices which expressed interest in participating in the trial were compiled and equal numbers allocated to each arm of the trial using simple randomisation. A member of staff not involved in data collection or delivering the intervention was responsible for allocating dental practices to groups. The researchers collecting the 6-month follow-up data will be masked to the practices\u2019 allocation status.All members of the dental teams who participated in the study at each practice were trained in research governance issues. This included how to introduce the study to potential participants, recruitment strategies, obtaining verbal and written consent, data collection procedures, confidentiality and data storing/handling.All dentists from the intervention practices attended a 6\u2005h training course over 2\u2005days . The traThe first session provided the participants with essential theoretical knowledge on issues around alcohol, including its epidemiology, public health burden and links to oral health. Key terminology was defined , followed by an introduction to raising the issue of alcohol and using the AUDIT-C screening tool. Exercises included estimating alcohol units in various popular alcoholic drinks and determining patients\u2019 AUDIT-C score using real-life scenarios. During the second session, dental professionals were introduced to a specifically modified version of the brief advice tool tailored to oral health. Using role-plays of increasing complexity, dentists learnt how to raise the issue of alcohol with their patients, go through the AUDIT-C tool, provide feedback based on the AUDIT-C score and then proceed with delivering brief advice. Concerns and barriers to delivering the intervention were explored and addressed. 
Area specific information on local alcohol services for referral was also provided to each practice. The training programme was evaluated using a pretraining and post-training questionnaire which assessed participants\u2019 knowledge, skills and attitudes towards screening for alcohol misuse and providing preventive advice.Different patient recruitment approaches were used by the dental professionals in the participating practices. Some practices used a targeted approach based on their knowledge of their patients, and others approached all patients attending for treatment during the recruitment period. Recruitment for the main trial phase began in May 2014. Dental staff at each participating dental practice were trained and briefed on effective ways of approaching potential participants.Patients were eligible to take part in the study if they consumed alcohol above the current recommendationsIn addition, participants were excluded if they were already involved in a research study conducted in the dental practice and if they were already seeking or receiving help for alcohol dependence.From our exploratory work, it was ascertained that compared to the M-SASQ and FAST alcohol screening questionnaires, the AUDIT-C questionnairePatients who expressed an interest in participating in the study completed the screening questionnaire, which was then given to the dentist in order to ascertain eligibility for the study. A score of 5 or above in the AUDIT-C questions indicates drinking levels above the current recommendations and therefore eligibility to enter the study.Consent to participate was obtained in a two-stage process. Dental staff obtained verbal consent to screen participants using the AUDIT\u2013C measure. Patients who obtained a positive AUDIT-C score were invited into the trial. Information sheets explained the details of the study and any queries were addressed at this point. If the participants were happy to continue, written consent was obtained at this point by the dental professional.All eligible and consenting participants were asked to complete a short baseline questionnaire which comprised of basic demographic questions and the EuroQoL five dimensions questionnaire (EQ-5D).Six months after the completion of baseline measures, all participants will be contacted via telephone by a researcher masked to the participant's allocation status. The full AUDIT tool will be administered.Eligible participants attending the intervention practices were given up to 5\u2005min of simple, structured, brief advice using a modified version of the brief advice tool \u2018Brief advice about alcohol risk\u2019 which was developed for the Screening and Intervention Programme for Sensible drinking (SIPS) studyIn the control practices, eligible participants received standard oral healthcare and were initially given an oral cancer prevention leaflet which included brief guidance on reducing alcohol intake and stopping smoking. After all the follow-up data are collected, they will then also be offered the 5\u2005min brief advice and given a copy of the Change for Life leaflet. A summary flow chart of the study process in control and intervention practices can be found in We will measure recruitment and retention rates as well as practicality of engagement with dental practices, fidelity of delivery of the intervention and general acceptability of the study procedures from screening to follow-up. 
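The screening and eligibility rules described above reduce to a short decision routine. The sketch below is illustrative only: the function names are hypothetical, the 0-4 item range is the standard AUDIT-C scoring rather than something restated in the protocol, and the cut-off of 5 and the two exclusion criteria are taken directly from the text.

```python
# Minimal sketch (not the trial's actual software) of the screening logic described
# above: AUDIT-C has three items, each scored 0-4, and a total of 5 or more is treated
# as drinking above the recommended levels and therefore as eligibility for the study.

def audit_c_total(item_scores):
    """Sum the three AUDIT-C item scores (each expected to lie between 0 and 4)."""
    if len(item_scores) != 3 or any(not 0 <= s <= 4 for s in item_scores):
        raise ValueError("AUDIT-C expects three item scores in the range 0-4")
    return sum(item_scores)

def eligible_for_trial(item_scores, in_other_practice_study=False,
                       seeking_help_for_dependence=False):
    """Positive screen (AUDIT-C >= 5) plus none of the exclusion criteria stated above."""
    if in_other_practice_study or seeking_help_for_dependence:
        return False
    return audit_c_total(item_scores) >= 5

# A patient scoring 3, 2 and 1 screens positive and would be invited into the trial.
print(eligible_for_trial([3, 2, 1]))  # True
```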
In addition, we will also assess if outcome measures proposed for the main trial could successfully be collected. The proposed primary outcome of the full trial is the score on the full AUDIT questionnaire, with a cut-off of 8 or more as used in the SIPS programme. For feasibility studies a detailed sample size calculation is neither appropriate nor necessary. The primary aim of conducting feasibility trials is to provide data to inform power calculations for a future larger scale trial. Based on pragmatic considerations it was estimated that 12 dental practices were required with a sample of 240 participants at baseline\u201420 participants per practice. Assuming a 30% drop-out rate this would give a sample of 168 participants at the 6\u2005months follow-up. This figure was deemed to be reasonable as the average dental practice has approximately 1500 adult patients, of whom approximately 25% were drinking at higher risk levels. No published studies have reported the effect of brief advice provided in general dental practice. The variance in the proposed primary outcome for the definitive trial (AUDIT score at 6\u2005months) will be used in conjunction with recruitment and follow-up rates as well as an estimate of the intraclass correlation coefficient (and 95% CIs) in order to calculate the sample size for the definitive trial. To ensure that the study was relevant and informed by patient experience, a dental patient research forum was established in one of the dental practices involved in the study. The principal dentist invited a diverse mix of dental patients to join the forum. Semistructured discussions took place with the forum members. These discussions were facilitated by the patient and public involvement co-investigator (CG) and members of the study team. The forum was consulted on a variety of practical aspects of the study, such as the design of recruitment materials and data collection questionnaires, the best ways to introduce the study to potential participants, suggestions on retention of the study sample and effective follow-up methods. The group also assisted with piloting the data collection questionnaires once they were finalised. The research team developed a comprehensive monitoring system with monthly visits to the dental practices to collect the screening and baseline questionnaires and to ascertain if recruitment and data collection were going well. Visit checklists were designed and completed at each visit and feedback emails were sent when deemed useful for the practitioners. The process evaluation will consist of a mixed methodology using both qualitative and quantitative methods. This will include an assessment of the recruitment procedures, satisfaction among participating dental patients and dental teams, and intervention fidelity. All dentists at the intervention practices were asked to complete a fidelity form for each patient who received the brief intervention to ensure that the intervention was delivered consistently. The form was based on a checklist used in a previous trial. Face-to-face individual interviews will be held in the intervention and control dental practices to gain an understanding of the dental professionals\u2019 experience and views of the study. Dentists and reception staff who were directly involved with the recruitment and data collection process will be interviewed by the research team on an individual basis. 
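Returning to the sample-size reasoning given earlier in this section: the feasibility numbers are simple arithmetic, and the intraclass correlation coefficient mentioned for the definitive trial would enter through the usual design-effect inflation for cluster randomised designs. The sketch below reproduces the reported figures and, purely for illustration, shows how an assumed ICC would adjust an individually randomised sample size; it is not the study's own calculation.

```python
# Reproduces the feasibility arithmetic reported above (12 practices x 20 participants,
# 30% drop-out) and illustrates the standard design-effect adjustment through which an
# intraclass correlation coefficient (ICC) would feed into a definitive-trial sample size.
# The ICC and the unadjusted sample size below are assumptions for illustration only.

practices = 12
per_practice = 20
baseline_n = practices * per_practice            # 240 participants at baseline
followup_n = round(baseline_n * (1 - 0.30))      # 30% drop-out -> 168 at 6 months
print(baseline_n, followup_n)                    # 240 168

def design_effect(cluster_size, icc):
    """Inflation factor for a cluster randomised design: 1 + (m - 1) * ICC."""
    return 1 + (cluster_size - 1) * icc

n_individual = 300   # hypothetical sample size for an individually randomised design
icc_assumed = 0.02   # assumed ICC, chosen only to illustrate the adjustment
n_cluster = n_individual * design_effect(per_practice, icc_assumed)
print(round(n_cluster))   # 414 participants after the cluster adjustment
```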
The discussions with the intervention practices will assess views on the recruitment procedures, the value and relevance of the alcohol brief advice training, the appropriateness of delivering advice on alcohol in the dental practice and overall perceptions surrounding participation in research. In the control practices, the discussions will focus on the study protocol procedures, namely the challenges and facilitators in recruiting dental patients for the study and the data collection process. In addition, dentists in the control arm will be asked if they would find the training for delivering alcohol advice relevant and useful to them. The experience of participants will also be evaluated through individual telephone interviews and will focus on their experiences of the study and general perceptions around participating in research. Members of the research team will also be interviewed one-to-one by an external interviewer; these discussions will cover the recruitment of dental practices, the training of dental teams and general perceptions of the organisation of the study. We will assess key feasibility parameters, such as recruitment and retention rates, the number of participants screened in order to assess eligibility, and the practicalities of screening and delivering the intervention in dental settings, in order to inform the acceptability of the study procedures to both patients and dental professionals. The study instruments will be evaluated in terms of ease of administration as well as acceptability to participants in order to inform their suitability for the main trial. Descriptive analysis of the proposed main outcome measure (AUDIT score) will provide an SD which will subsequently be used in the sample size calculation for the definitive trial. The process evaluation interviews with dentists, dental teams, dental patients and members of the research team involved in the study will be transcribed, coded, classified and organised into main themes using thematic analysis. A decision to proceed to the full-scale study will be based on recruitment of at least 85% of the planned sample within the allocated time, completion of primary outcome follow-up interviews with no less than 75% of participants and receipt of brief advice by at least 60% of those randomised to the intervention arm. Alcohol misuse remains a major health and social problem in the UK, affecting a significant proportion of the population. Excessive alcohol consumption adversely affects oral health in a variety of ways, but currently most dentists do not ask their patients about their alcohol intake or provide any advice to those that require support. Indeed, many health professionals lack the necessary knowledge and confidence to discuss alcohol misuse with their patients. This paper has described the study methodology and intervention design for an alcohol screening and brief advice intervention delivered in NHS dental practices in North London. The comprehensive feasibility study will provide detailed insights into the development, implementation and evaluation of the alcohol intervention which will inform the design of a future definitive trial. Findings from the initial exploratory phase of the study will help ensure that the intervention training programme and resources are tailored to the needs and circumstances of NHS dental professionals. Furthermore, the feasibility trial will explore the logistics and practicality of delivering brief advice to dental patients in an NHS dental setting. 
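As an aside on the stop/go rule stated earlier in this section: the progression criteria are explicit thresholds, so they can be written down directly. The routine below is purely illustrative; the three thresholds are taken from the text and every number in the example call is hypothetical.

```python
# Illustrative encoding of the progression criteria stated above: at least 85% of the
# planned sample recruited within the allocated time, primary-outcome follow-up completed
# with no less than 75% of participants, and brief advice received by at least 60% of
# those randomised to the intervention arm.

def proceed_to_full_trial(recruited, planned, followed_up,
                          received_advice, randomised_to_intervention):
    recruitment_ok = recruited / planned >= 0.85
    follow_up_ok = followed_up / recruited >= 0.75
    delivery_ok = received_advice / randomised_to_intervention >= 0.60
    return recruitment_ok and follow_up_ok and delivery_ok

# Hypothetical numbers, not trial results:
print(proceed_to_full_trial(recruited=210, planned=240, followed_up=165,
                            received_advice=80, randomised_to_intervention=110))  # True
```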
The planned process evaluation will determine the acceptability of the intervention to dental professionals and patients. These implementation and feasibility issues will be explored in depth as they will likely influence uptake of this approach in primary dental care. The comprehensive methodology of this study will provide useful data on this important and under researched topic and provide evidence which will inform primary dental care practice in and outside of the UK."} +{"text": "Nursing education is a key component to the delivery of safe epidural analgesia in the post-operative critical care setting .An audit of our practice revealed a significant failure rate in epidural analgesia in elective major abdominal surgery patients and a high proportion of missing documentation in cases where epidural analgesia had failed.To devise and deliver an interdisciplinary education program to enhance nursing knowledge and management of epidural analgesia in critical care patients.To improve patient safety through earlier recognition of adverse events by encouraging improved monitoring of epidural analgesia.We developed a teaching program for nursing staff on the care of patients receiving epidural analgesia.The program consisted of a lecture course and interactive teaching session with a pre course questionnaire to assess existing knowledge of the benefits and complications of epidural analgesia and management of common side effects. We repeated the questionnaire after the course without the participants having prior knowledge that this would occur.The group of 36 nurses who participated had been qualified for a median of 5 years IQR, they had worked in a critical care environment for a median of 2.5 years IQR and 25% of them had some experience of managing epidural analgesia in an environment outside of critical care.Following the education program there was 56% increase in the number of nurses who could identify at least two advantages of epidural analgesia for patients who had undergone major abdominal surgery.There was a 26% improvement in the number of nurses able to identify signs of high sensory level blockade, 47% improvement in the number of nurses able to name methods for improving an inadequate block and a 67% improvement in the number of nursing staff being able to identify signs of a potential epidural haematoma.Baseline nursing knowledge of clinical assessment and recognition of associated complications of epidural analgesia was limited despite a pre-existing trust-wide education program.Interdisciplinary training is beneficial to the acquisition of short-term knowledge with regard to optimising effective pain relief and identifying common side effects.The post course short answer exam highlights a significant increase in nursing awareness of potential serious complications of epidural analgesia.We expect that increased nursing knowledge of the benefits of epidural analgesia and its potential problems will lead to comprehensive monitoring, superior post-operative analgesia and a reduction in adverse events."} +{"text": "The perception of unpleasant stimuli enhances whereas the perception of pleasant stimuli decreases pain perception. In contrast, the effects of pain on the processing of emotional stimuli are much less known. Especially given the recent interest in facial expressions of pain as a special category of emotional stimuli, a main topic in this research line is the mutual influence of pain and facial expression processing. 
Therefore, in this mini-review we selectively summarize research on the effects of emotional stimuli on pain, but more extensively turn to the opposite direction namely how pain influences concurrent processing of affective stimuli such as facial expressions. Based on the motivational priming theory one may hypothesize that the perception of pain enhances the processing of unpleasant stimuli and decreases the processing of pleasant stimuli. This review reveals that the literature is only partly consistent with this assumption: pain reduces the processing of pleasant pictures and happy facial expressions, but does not \u2013 or only partly \u2013 affect processing of unpleasant pictures. However, it was demonstrated that pain selectively enhances the processing of facial expressions if these are pain-related . Extending a mere affective modulation theory, the latter results suggest pain-specific effects which may be explained by the perception-action model of empathy. Together, these results underscore the important mutual influence of pain and emotional face processing. Emotions possess immense power to alter pain perception. The influence of experimentally induced emotions on experimental pain has been investigated with various affective stimuli like affective pictures e.g., , 2008, pA theoretical framework for the explanation of the emotional modulation of pain is the motivational priming hypothesis , which aAs mentioned above and also common in emotion research, most of the studies investigating emotional modulation of pain used affective pictures e.g., , 2008 orA possible theoretical explanation for the interaction of viewing others\u2019 facial expression of pain and the own sensation of pain is offered by the perception-action model (PAM) of empathy . The PAMThe aim of this mini-review is to selectively summarize research on the influence of visual affective stimuli on pain perception, but mainly on the opposite effect of pain on the processing of affective stimuli such as facial expressions. Given the growing interest in pain modulation by facial expressions of pain and the lack of studies which used other affective stimuli, we focus on studies on facial expressions. In addition we seek to extend the viewpoint of a mere affective modulation of pain with regards to the theory of vicarious pain and include a recent experiment from our lab which aimed at investigating the mutual effects of the perception of facial expressions of pain and pain perception. This review is far from exhaustive; it only summarizes the literature relevant for our work within the research group \u201cEmotion and Behavior\u201d at the University of W\u00fcrzburg, Germany. We are fully aware that much more research is available on the topics of emotional modulation of pain and vicarious pain, and we direct the attention of the interested reader to the excellent reviews by As mentioned above, the effect of pain on emotion processing has been investigated much less, although from a clinical perspective the high prevalence of mood disorders in chronic pain suggests effects in this direction . One stuIn a further study, we then investigated the effect of tonic pressure pain on the electrocortical correlates of processing of facial expressions . Here, fWe conclude that on the one hand there is some evidence that experimental pain alters perception and processing of positive affective stimuli (scenes and faces), although most effects were observed with regards to attentional mechanisms. 
On the other hand, little is known about how pain alters processing of facial displays of pain and vice versa. Given the match between observed and experienced pain, one may argue that selective enhancement and mutual influences have to be expected. Before we report an experiment in which these mutual influences were investigated, we will briefly summarize why facial expressions of pain may be special compared to other facial expressions.The sensation of pain is accompanied by distinct albeit not uniform facial expressions e.g., . Pain exFigure 1).Compared to neutral facial expressions, facial expressions of pain receive prioritized processing and elicit enhanced initial orienting . SimilarCorroborating our findings, Since emotional stimuli have a great impact on pain perception as summarized above, one may assume that a highly salient signal such as the facial expression of pain should also modulate pain perception. This assumption is further strengthened by the fact that the cortical regions involved in the decoding of pain and emotThe interaction of facial expressions of pain and perception of pain is rarely investigated. In one study, volunteers viewed videos showing different levels of pain expression before noxious electric shocks were delivered. Viewing stronger pain expressions generally increased pain unpleasantness ratings, the amplitude of the nociceptive flexion reflex, and corrugator responses to the noxious stimulation . In anotBesides the lack of information of the pain-specificity of these effects, little is known to date about the mutual effects of the perception of facial expressions of pain in others and the own pain sensations. Given the observations from functional neuroimaging that seeing others facial expression of pain leads to activations in pain-related areas in the brain of the observer, one may assume strong interactions between seeing pain of others and feeling pain. A number of studies using methods such as functional magnetic resonance imaging and electrophysiological recordings have provided support for this view by showing increased activity in pain-related brain regions during perception of pain in others includinFigures 2A,B, the selective enhancement of arousal ratings of pain faces by pain in Figure 2C.In a recent study we aimed at investigating both the effects of facial expressions of pain on the actual perception of pain, and vice versa the influence of pain sensation on the affective ratings of facial expressions of pain . To thisThese findings demonstrate that the relation between pain and emotion is bidirectional with pain faces showing selectively mutual influences. This study provides further experimental evidence that processing painful stimulation and the facial expressions of pain in others are highly interconnected. Extending previous findings it also shows pain-specific modulations of pain perception such that highest pain ratings of painful thermal stimuli were obtained while participants watched faces of pain compared to other facial expressions. Importantly, this effect was also larger for pain compared to fear faces, suggesting that the facial expressions of pain enhance self-pain perception not only due to its negative valence but due to its pain relevance. 
In addition to the predictions from the motivational priming theory, these results support the notion that not only the valence of a facial expressions enhances pain perception, but that the expressed pain itself primes the sensorimotor system, which might drive a potentiating pro-algesic mechanism . AdditioAnother potential mechanism of pain modification in addition to the affective priming hypotheses has been put forward as the PAM of empathy . This moThis mini-review featured recent work on the emotional modulation of pain perception by affective stimuli such as facial expressions, but more importantly on the reverse impact of pain on emotional face processing. The presented studies also further underscore the special relevance of facial expressions of pain. The functional significance of pain faces for human social interaction, however, is still under debate, therefore future work needs to clarify whether they elicit predominantly approach or avoidance behavior in the observer . This woThe authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "A middle-aged woman was undergoing elective rhytidectomy by the senior author using a high superficial musculoaponeurotic (high-SMAS) approach deep plane facelift. After horizontal SMAS division and elevation over the zygomatic arch, the temporal division of the facial nerve was seen crossing the zygomatic arch approximately 1 cm anterior to the tragus with an initial vertical trajectory before coursing anteriorly . The facInjury to the temporal branch of the facial nerve is one of the most undesirable surgical outcomes in the temporal region. Its course is most commonly described as running along Pitanguy's line, although deviations from this trajectory are seldom discussed among clinicians. Moreover, conflicting data have been written about its fascial relationships over the years. Given its vulnerability during extended aging face procedures, as well as surgical approaches to the temporomandibular joint, maxillofacial trauma, or temporal artery biopsy, a thorough understanding of the temporal branch anatomy in terms of trajectory and fascial planes is essential to avoid iatrogenic injury.What is the classic trajectory of the temporal division of the facial nerve?In what instances is this trajectory defined as abnormal?What are the fascial relationships of the temporal branch in the temporal region?What are the clinical correlates of these relationships?-Several anatomical studies have described the trajectory of the temporal division of the facial nerve after its emergence from the parotid.1To understand the normal anatomical variations in its trajectory, multiple dissection studies have dissected the temporal branch of the facial nerve. We have reported a select group of landmark studies in 5Much has been written on the fascial relationships of the temporal branch of the facial nerve in the temporal region. The most recent high-quality evidence has consistently shown that the temporal branch travels deep and transitions into a sub-SMAS plane before entering the muscles in the frontal region.The mastery of these relationships is essential to surgeons performing common procedures such as temporal artery biopsy, facelift, temporomandibular join surgery, maxillofacial trauma, or pterional neurosurgical approaches, among others. 
Understanding the normal and abnormal anatomy of the temporal branch of the facial nerve in terms of trajectory and facial layers is essential for the surgeon operating in the temporal region to avoid iatrogenic nerve injury."} +{"text": "The mechanical pressure difference across the bacterial cellulose membrane located in a horizontal plane causes asymmetry of voltage measured between electrodes immersed in KCl solutions symmetrically on both sides of the membrane. For all measurements, KCl solution with lower concentration was above the membrane. In configuration of the analyzed membrane system, the concentration boundary layers (CBLs) are created only by molecular diffusion. The voltages measured in the membrane system in concentration polarization conditions were compared with suitable voltages obtained from the model of diffusion through CBLs and ion transport through the membrane. An increase of difference of mechanical pressure across the membrane directed as a difference of osmotic pressure always causes a decrease of voltage between the electrodes in the membrane system. In turn, for mechanical pressure difference across the membrane directed in an opposite direction to the difference of osmotic pressure, a peak in the voltage as a function of mechanical pressure difference is observed. An increase of osmotic pressure difference across the membrane at the initial moment causes an increase of the maximal value of the observed peak and a shift of this peak position in the direction of higher values of the mechanical pressure differences across the membrane. P) across the membrane and volume flux (Jv) through the membrane cause changes in thickness of the CBLs and areP)max is smaller for all Ch/Cl max) are higher for electrodes located at a greater distance from the membrane is oriented with difference of osmotic pressure (greater activity of K+ ions for greater \u0394P). In turn, fixing of mechanical pressure difference across the membrane directed in an opposite direction to osmotic pressure difference causes smaller activities of K+ ions to be observed . The above statements are also valid when the activity of K+ ions is measured at points at nonzero distances from the membrane. Changes in time of activities of K+ ions are lower when the points are more distant from the membrane. Such changes in the activity of K+ ions in the vicinity of the membrane are due to a disturbance of reconstruction of CBLs by mechanical pressure difference across the membrane.As results from analysis of changes in time of activity of Kane Fig.\u00a0 applicatThe characteristic feature of voltage between electrodes in steady-states of the membrane system in concentration polarization conditions as a function of applied difference of mechanical pressure is the occurrence of a maximum. This maximum is observed in the range of difference of mechanical pressure directed in an opposite direction to difference of osmotic pressure . An increase of difference of osmotic pressure across the membrane at initial moment causes a shift of observed peak toward higher (more negative) values of differences of mechanical pressure and an increase in the value of maximum. 
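The qualitative mechanism described above, in which the mechanical pressure difference reshapes the concentration boundary layers and thereby the voltage measured between the electrodes, can be mimicked with a deliberately simplified toy model. The sketch below is not the transport model used in the study: the Nernst-type slope, the exponential boundary-layer profile and all numerical values are assumptions made solely to illustrate how thin CBLs keep the membrane-adjacent concentrations close to the bulk values and give a larger voltage, while thick CBLs reduce it.

```python
# Toy illustration only: how concentration boundary layers (CBLs) on either side of a
# membrane separating KCl solutions can change the voltage measured between electrodes.
# The Nernst-type slope and the exponential CBL profile are simplifying assumptions made
# here for illustration; they are not the model used in the study.
import math

R, T, F = 8.314, 295.0, 96485.0    # gas constant, assumed temperature (K), Faraday constant
S = R * T / F                       # assumed apparent slope of the concentration cell (V)

def surface_concentration(c_bulk, c_other, delta, d_char=1e-4):
    """Concentration at the membrane surface: the thicker the CBL (delta, in metres),
    the further the surface concentration drifts from the bulk value toward the mean."""
    c_mean = 0.5 * (c_bulk + c_other)
    return c_mean + (c_bulk - c_mean) * math.exp(-delta / d_char)

def voltage(c_high, c_low, delta_high, delta_low):
    ch = surface_concentration(c_high, c_low, delta_high)
    cl = surface_concentration(c_low, c_high, delta_low)
    return S * math.log(ch / cl)

# Thin CBLs keep the surface concentrations near the bulk values -> larger voltage;
# thick CBLs (well-developed concentration polarization) -> smaller voltage.
print(voltage(0.1, 0.01, delta_high=1e-5, delta_low=1e-5))   # about 0.05 V
print(voltage(0.1, 0.01, delta_high=5e-4, delta_low=5e-4))   # well below 1 mV
```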
Furthermore, as results from Fig."} +{"text": "The increase in paediatric patients with chronic disease often involves transfers between different care sectors. The transition from the protected hospital environment to the patient\u2019s home is often accompanied by difficulties and discomfort. It is therefore essential to ensure proper management of the course of treatment and care of the child and the family, from admission to the return home, as a single longitudinal and transverse episode. The transition phase relies heavily on empowerment of the patient and the family as a necessary condition for safe care in self-management. Early assessment of care needs within the first hours of admission allows post-discharge care needs to be signalled, through a standardized form (within 48 hours of admission), to the territory of residence/domicile. Immediate activation of the territory makes it possible to organize the handover without any interruption in care. In particularly complex cases, the preparation of a joint multi-professional hospital/territory assessment before discharge ensures comprehensive care of both patient and family. Investment in the empowerment of patients and their families, through a card intended to guide the informative/educational activities and the assessment of learning outcomes, ensures a safe discharge as well as competent sharing of the care pathway by the patient and family. All children requiring home care were taken into care before discharge, and the first home visit was made within 2 days of discharge. Patients and family members continued the training activity at home until reaching complete autonomy. The survey of user satisfaction showed good satisfaction in 80% of the cases that returned the questionnaire. The focus in 2014 will be placed primarily on reporting at least 90% of the cases that require territorial continuity of care within 48 hours of admission and on starting the training already at this stage. 
It should be also developed a greater integration between hospital school and schools of the area of residence of the child for activation when needed to address the educational activity."} +{"text": "The paper by Jargin addresseIt also seems very likely that illegally distilled, counterfeit, and surrogate alcohol poses a risk to human health, playing an important role in the high level of alcohol-related deaths in Russia. It was found that home-made spirits contained the toxic alcohols that could cause damage to the liver . The finHistorically, government policies designed to raise prices and restrict availability of commercial alcohol beverages in Russia have driven black market growth. Surrogates consumption increased markedly following the prohibition of vodka sales in July 1914 as Russia mobilized for war .The development of events on the Russian alcohol scene in recent years shows how complex and multifaceted the problem of undocumented alcohol consumption is. In 2006, amendments were introduced to the Federal Law \u201cOn State Regulation of the Production of Ethyl Alcohol, Alcohol and Alcohol-Containing Products\u201d number 171, which seriously impacted the alcohol situation in the country . FirstlyMaking vodka less affordable through differential taxation was an essential element of the Russian alcohol policy in the most recent years . The govAccording to experts' estimates, the market of surrogate alcohol over the past six years has been up from 500 million liters to 800 million liters . Many ofThe major problem is that the informal alcohol market is largely immune to regulation and effective policymaking. The Russian government should consider a number of potentially effective approaches to addressing the problem of noncommercial alcohol, including raising public awareness of the risks of surrogates drinking and creating an alternative to strong alcoholic drinks by preferences to low alcoholic beverages."} +{"text": "Unfortunately, the original version of this article containeAs Dr. Ugra Singh is no longer at the University of South Carolina, the corresponding author has been changed to Dr Donald J DiPette in the author details above."} +{"text": "As neuronal pathologies cause only minor morphological alterations, molecular imaging techniques are a prerequisite for the study of diseases of the brain. The development of molecular probes that specifically bind biochemical markers and the advances of instrumentation have revolutionized the possibilities to gain insight into the human brain organization and beyond this\u2014visualize structure-function and brain-behavior relationships. The review describes the development and current applications of functional brain imaging techniques with a focus on applications in psychiatry. A historical overview of the development of functional imaging is followed by the portrayal of the principles and applications of positron emission tomography (PET) and functional magnetic resonance imaging (fMRI), two key molecular imaging techniques that have revolutionized the ability to image molecular processes in the brain. We conclude that the juxtaposition of PET and fMRI in hybrid PET/MRI scanners enhances the significance of both modalities for research in neurology and psychiatry and might pave the way for a new area of personalized medicine. Traditionally, diagnostic imaging was restricted to the visualization of aberrations in morphological structures. 
However, neurological pathologies generally do not cause significant correlates to macroscopically or microscopically visible changes in cell morphology. Since the contrast obtained for the differentiation of pathophysiological abnormality defines the performance of an imaging modality, highly sensitive methods for imaging alterations in brain functioning had to be developed. In general, any kind of radiation that penetrates the human body can be used as signal source for diagnostic imaging. Hence, the diversity of imaging modalities reflects the variety of the capabilities of the different kinds of radiation used in clinical practice. The aim of this short review is to introduce the principles of the most common and most promising molecular imaging techniques for the investigation of alterations in brain functioning: positron emission tomography (PET) and functional magnetic resonance imaging (fMRI) and to deduce their limitations and prospects.The contemporary imaging methods: computer tomography (CT), magnetic resonance imaging (MRI), PET, and ultrasound (US) can be traced back to the German physicist Wilhelm Conrad R\u00f6ntgen. In 1895 R\u00f6ntgen recognized that X-rays penetrate solid mater. He further discovered that the attenuation of X-rays depends on the characteristics of the object penetrated. When he subsequently deduced the technique of X-ray imaging a great invention took place: the visualization of structures inside the body without using surgery!Since then the imaging with X-rays has been shown to provide images of excellent anatomic resolution. The development of CT, the corresponding tomographic modality, has further enhanced X-ray imaging and up to the present billions of scans have been carried out. The specific strength of CT imaging is its high anatomic resolution. The imaging is straightforward as long as bony materials or calcified tissue is investigated. In soft tissues the differences of X-ray attenuation are marginal and contrast agents are used to provide the known high-quality CT images in the major areas of the body i.e., the vascular system, the lungs and the kidneys. Inclusion of elements of high atomic number is pivotal for the efficacy of CT contrast agents as the X-ray absorbency is almost directly proportional to the third power of an elements atomic number. However, unfortunately, most elements of high atomic number are not biocompatible and iodine containing contrast agents dominate the choice of contrast agents in clinical use. High doses of up to 42 g of the contrast agent are required to induce sufficient contrast passive agents that modulate an external signal i.e., in CT and US; and (b) probes that either produce a signal autonomously i.e., radiotracers and bioluminescent dyes, or transformations of an external signal, such as with MRI contrast agents or fluorescent probes. As passive contrast agents only enhance the body\u2019s signal, relatively high doses are required to delineate their endogenous contrast. While very high concentrations of CT contrast agents are required, the signal of radiolabeled tracers can be measured at an exceptionally high sensitivity. As the radioactive signal is exclusively emitted by the endogenous tracer it can be definitively traced back to the imaging agent. The short lived positron emitters Selective radioligands are often derived from psychoactive drugs and gain their specificity by mimicking neurotransmitters. 
The visualization of their specific binding to a target such as a receptor, a transporter or an enzyme allows revealing various pathologic conditions. In the following, PET tracers will be reviewed that target receptors, proteins or neurotransmitter, or glucose consumption.11C]raclopride and [18F]fallypride for dopamine D2/D3 receptors for differential diagnosis of movement disorders and for assessment of receptor occupancy by neuroleptics drugs in schizophrenia receptors and the 5-HT transporter. These transporters are dysregulated in affective disorders and shall be valuable structures for the assessment of activity of antidepressants. In the midbrain and amygdala of patients with major depressive disorder lower 5-HT transporter binding can be assessed with tracers such as [11C]McN5652 , the working horse of modern molecular imaging and 18F (108 min) very small amounts of the tracer are required to obtain the appropriate signal intensity and the total radiation exposure is comparable to the one obtained by whole body CT scans. The short half lives in turn entail the on demand synthesis of the radiotracers ideally with a cyclotron in close proximity to the PET imaging facility. The effective doses are continuously reduced by using tracers with short physical and biological half-lives, by minimizing the activity injected and by the ongoing enhancement of the sensitivity of PET scanners.The study of tracers that map the regional differences of blood flow represents a different approach to visualize pathological changes and to gain function information of the brain. While PET imaging allows the assessment of neurotransmitter concentrations in the brain, it is not suitable to reveal micro- and macroscopic structural aberrations in the white and gray matter. In addition, PET cannot be used to detect rapid changes in brain activation. Moreover, despite the enormous achievement of PET imaging there is always a concern about the radiation risks involved in this modality. Magnetic resonance spectroscopy (MRS) represents an alternative as endogenous compounds involved in the biochemistry of the brain are detected. The resonance frequency of protons depends on their chemical structure and MRS measures the concentration of marker molecules such as methionine or lipids are working indirectly by affecting the relaxation of water molecules in their coordination sphere. Due to the high exchange rate of water in the sphere of gadolinium complexes a large number of water molecules are relaxed within one scan . As a result, an apparent several thousand fold amplification of the effect is observed. 
Consequently, a huge variety of MRI contrast agents have been developed and gadolinium contrast agents had been described -effect can be traced back to neural activity (rather than spiking of neurons), probably mainly due to increased uptake of glutamate in astrocytes its clinical convenience\u2014fMRI is performed with the standard MRI scanners and does not require the injection of contrast agents, but is achieved due to the measurement of an endogenous contrast agent that is present at high concentration in the brain; (b) the fact that the contrast agent is functional and responsible to stimuli; (c) the fact that fMRI it is not associated with the use of radioactive probes provides a high ease of repeatability and therefore makes longitudinal studies of subjects possible; and (d) the possibility to study not only alterations in brain morphology that affect activation of a certain region, but also its connectivity to other brain regions.Nowadays, MRI scans of the brain are standard in clinical care in neurologic and psychiatric hospitals. Neurological, as well as vascular diseases of the brain can be easily detected and help guiding diagnostics and appropriate therapies. In addition, fMRIs can be acquired prior to brain surgeries to allow the surgeon an individualized anatomical mapping of visual and motoric functions of the patient.Moreover, fMRI has a prominent role in neurologic and psychiatric brain imaging research. For the sake of brevity, the following illustration of MRI and fMRI for psychiatric research will be exemplarily illustrated for schizophrenia. MRI studies on schizophrenia show a volume loss of brain tissue, specially an enlargement of the ventricles, replicating earlier CT studies, making it one of the most robust findings in MRI research and in fMRI the development of the instrumentation , as well as the analysis methods .However, improvements of the specificity of molecular imaging probes invariably go along with the loss of effectual resolution as the orientation guide\u2014the accumulation in the non-target tissue is decreased. Hence, bimodal imaging techniques that combine the outstanding molecular imaging probabilities of PET with imaging techniques with a better spatial resolution were developed. PET/CT is the prime example for such a bimodal imaging technique as the anatomical information gained at high resolution by CT synergistically matches the functional information of PET imaging. Due to the lack of tissue that provides significant attenuation of X-rays in the brain, PET/CT images of the brain do not significantly benefit from the CT-portion of the fused images. Since MRI has excellent soft tissue contrast, MRI should be perfect to navigate the PET information in the brain and high hopes are placed into the clinical outcome of the PET/MRI scanners that currently emerge. Hence, despite the exceptional technical challenges that had to be mastered for PET provide the basis not only for a better understanding of molecular processes in the brain, but also for the stratification procedures essential for the safe development of individualized therapies. 
The directly combined application of BOLD fMRI measuring brain function with PET radiolabeled reporters that map the distribution and function of receptors synergistic will form the basis of a new area in today\u2019s rapidly increasing insight into brain function.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "In the current context of high fatality rates associated with American visceral leishmaniasis (VL), the appropriate use of prognostic factors to identify patients at higher risk of unfavorable outcomes represents a potential tool for clinical practice. This systematic review brings together information reported in studies conducted in Latin America, on the potential predictors of adverse prognosis and death from VL. The limitations of the existing knowledge, the advances achieved and the approaches to be used in future research are presented.The full texts of 14 studies conforming to the inclusion criteria were analyzed and their methodological quality examined by means of a tool developed in the light of current research tools. Information regarding prognostic variables was synthesized using meta-analysis. Variables were grouped according to the strength of evidence considering summary measures, patterns and heterogeneity of effect-sizes, and the results of multivariate analyses. The strongest predictors identified in this review were jaundice, thrombocytopenia, hemorrhage, HIV coinfection, diarrhea, age <5 and age >40\u201350 years, severe neutropenia, dyspnoea and bacterial infections. Edema and low hemoglobin concentration were also associated with unfavorable outcomes. The main limitation identified was the absence of validation procedures for the few prognostic models developed so far.Integration of the results from different investigations conducted over the last 10 years enabled the identification of consistent prognostic variables that could be useful in recognizing and handling VL patients at higher risk of unfavorable outcomes. The development of externally validated prognostic models must be prioritized in future investigations. In contrast to other clinical presentations of leishmaniasis in Latin America, American visceral leishmaniasis (VL) can lead to death in 5-10% of patients under treatment. The fatality rates associated with this disease have remained stable at a high level over the years in Brazil and are neither recorded in under-treatment patients from endemic countries of the Old World nor from non-endemic countries where such cases are imported. Since VL-induced lethality can occur even after the implementation of recommended therapy, the understanding of individual, clinical and laboratory factors that predispose to an unfavorable outcome might represent an important feature for informing better practice in the clinical management of cases. The present systematic review with meta-analysis brings together information on various prognostic variables associated with the severity of VL. Potential predictors identified in the studies surveyed were grouped according to the strength of evidence available, and 13 were considered to be of significant relevance. The gaps in the existing knowledge and the need for the development of externally validated prognostic models were also discussed. 
The results presented herein could be useful in identifying patients at higher risk of unfavorable evolution or death from VL, and might provide an aid in decision-making regarding the clinical management of VL cases. Visceral leishmaniasis (VL) constitutes a serious public health problem in endemic regions, especially in the Indian sub-continent, in North and East Africa, and in South America. However, VL is one of the most neglected diseases in the world Lutzomyia, which hosts the promastigote form of Leishmania infantumIn the Americas, the transmission of VL to humans occurs through the bite of female phlebotomine sandflies of the genus Treatment options for VL in Brazil are pentavalent antimonial compounds and formulations of amphotericin B The lack of reduction in the fatality rates of VL in Brazil can be explained not only by the limitations in therapy applied and the delay in diagnosis Generally, prognostic factors have received less research attention than etiological factors and therapeutics Considering the relevance of predictors of clinical evolution in reducing the number of VL-induced deaths, and the need for reliable prognostic models , the present systematic review with meta-analysis seeks to bring together information reported in studies of the potential predictors of death and other adverse outcomes of American VL. In addition, based on the analysis of the limitations of the published studies and of existing knowledge we propose possible improvements that might be incorporated into future research.Independent literature searches were conducted between March and September 2011 by two of the authors (VSB and DSB) using the databanks and keywords listed in P values or crude data that made possible the calculation of effect sizes (provided such information had not been obtained directly from the authors); (v) studies containing confusing text or incomprehensible analyses; (vi) studies exhibiting bias or inconsistencies that invalidated the results; and (vii) studies of prognostic factors related to genetic features or to quantification of cytokines.The systematic review encompassed epidemiological studies containing data that allowed us to estimate measures of association relating to predictors of death or of adverse prognosis independent of the occurrence of death in individuals diagnosed with VL. No restrictions were made regarding the age or gender of the patients or of the language of the publication. The exclusion criteria proposed were (i) studies performed outside Latin America; (ii) reports published as proceedings of symposiums or conferences; (iii) studies restricted to the description of signs and symptoms observed in VL-infected individuals without comparisons regarding the evolution of the disease; (iv) studies that simply described the existence of statistically significant (or not) associations without reporting at least the calculated The extraction of data from the publications was performed by one of the authors (VSB) and verified by the co-authors. Attempts were made to contact the authors of original reports when further information was required in order to calculate measures of association for possible inclusion in the meta-analysis. Data pertaining to individual patients were not requested.P-values using the Stouffer method, weighted proportionally to the inverse of the study squared standard error P-values was carried out as for the first group. 
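The weighted Stouffer combination of P-values and the I2 heterogeneity index referred to in this methods section are both short computations. The sketch below is a generic illustration of those formulas, with invented numbers; it is not the authors' analysis code, which relied on the Meta-P and CMA packages.

```python
# Generic illustration of two computations described in this methods section: a weighted
# Stouffer combination of one-sided P-values, with each study weighted by the inverse of
# its squared standard error, and the I2 statistic derived from Cochran's Q. The study
# data below are invented; the authors' own analyses used Meta-P and CMA software.
from scipy.stats import norm

def stouffer_weighted(p_values, standard_errors):
    """Combine one-sided P-values with weights proportional to 1 / SE^2."""
    weights = [1.0 / se ** 2 for se in standard_errors]
    z_scores = [norm.isf(p) for p in p_values]             # one-sided P -> Z
    z_comb = sum(w * z for w, z in zip(weights, z_scores)) / sum(w ** 2 for w in weights) ** 0.5
    return z_comb, norm.sf(z_comb)                          # combined Z and one-sided P

def i_squared(effects, standard_errors):
    """Percentage of total variation across studies due to real dispersion in effect sizes."""
    weights = [1.0 / se ** 2 for se in standard_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    return max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

# Three hypothetical studies: log odds ratios, their standard errors and one-sided P-values.
effects, ses, ps = [0.8, 1.1, 0.4], [0.30, 0.45, 0.25], [0.004, 0.007, 0.055]
print(stouffer_weighted(ps, ses))   # combined Z and P across the three studies
print(i_squared(effects, ses))      # about 11% of the variation attributed to heterogeneity
```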
For both groups of studies, we conducted theoretical discussions about variables that could not be submitted to meta-analysis, either because of the small number of studies involved or because of the non-uniform manner in which the data were presented or analyzed among the primary studies.The selected studies were separated into two main groups according to the outcomes, namely: (i) adverse evolution of the disease independent of death (as defined in the last section), (ii) evolution of the disease resulting in death. The first group of studies encompassed various possible outcomes and the information concerning each of the clinical or laboratory predictors identified was, if considered plausible , combined through meta-analysis of one sized 2I test, which describes the percentage of total variation across studies associated with real dispersion in effect-sizes (inter-study variation) rather than random error (intra-study variation). For each prognostic factor, the studies were separated according to the ages of the participants (adults and children) and evaluations were performed separately for each group. When the measures of association were similar in the two groups the data were combined, otherwise the combination of data was performed only within the specific group.Measures of association were combined using the random effects model, except when the number of studies was less than four in which case the fixed effects model was employed P-values, while CMA software version 2.0.057 was used for all other meta-analyses.Meta-P software was employed for the meta-analysis of The relative strength of each of the clinical and laboratory variables as a predictor of the severity of VL was evaluated according to defined criteria which were, in decreasing order of weight: (i) force of summary measure obtained through meta-analysis; (ii) pattern of data (direction of association and heterogeneity in studies where the outcome was death); (iii) number of statistically significant studies in which the control for confounding variables had been performed; and (iv) pattern of associations in studies where the outcome was unfavorable clinical evolution independent of death.There is no universally accepted or standardized tool for the identification of limitations or potential risks of bias in the analysis and/or presentation of data in studies relating to prognostic factors. Thus, in order to analyze the quality of studies reviewed we opted to use five publications Sistema de Informa\u00e7\u00e3o de Agravos de Notifica\u00e7\u00e3o; SINAN; 1/14) as shown in Of the 2945 studies identified and screened as part of the comprehensive survey, only 14 prognostic studies Each of the 14 studies reviewed employed appropriate criteria for selecting the study populations and defining the cases, and all except one All predictors of adverse evolution of VL and/or related mortality for which it was possible to perform meta-analysis are presApart from hepatomegaly, splenomegaly and weight loss, which were considered weak predictors, there was a predominance of statistically significant summary measures that showed, however, no significance in the majority of multivariate analyses and adults above 40 years are more likely to have an adverse evolution. 
The distribution of lethality with peaks among children and older adults suggests that different factors may be involved in the acquisition of infections and complications at different ages Together with the strong prognostic factors of groups I and II , it is wLeishmania replication by the adaptive immunosystem, particularly by Th1 cells, of undernourished patients could explain the lack of association between undernutrition and mortality risk The reduced strength of some relationships may be attributed to the specific therapeutic measures employed in some cases. For instance, individuals presenting hemoglobin levels below 7 g/dL would have received transfusions of packed red cells, as recommended by the Ministry of Health of Brazil Several potentially relevant variables could not be included in the categories of evidence proposed herein because of the scarcity of studies. Among these are factors that can be readily assessed in clinical practice with minimal cost and must be better evaluated in future research, for example, mean cell volume, eosinophil count, serum creatinine, inappetence, weakness or asthenia, dehydration, lymphadenopathy and occurrence of comorbidities such as diabetes, tuberculosis, heart or renal diseases and dengue fever. In this context, it is noteworthy that the influence of helminthiasis on the clinical evolution of VL was not investigated in any of the reviewed studies even though infection by intestinal parasitic worms is highly prevalent in urban and rural areas of Brazil The present review provides a reliable source of information for the identification of risk factors of adverse prognosis and mortality in VL and should be used as an aid in decision-making in clinical practice. It is important to emphasize, however, that the results presented herein do not directly allow the creation and validation of prognostic scores based on the signs and symptoms presented by patients. Thus, studies should be carried out with the specific purpose of developing such scores and performing external validation of prognostic models already proposed, along with the incorporation of prognostic factors or additional biomarkers as recommended by Pencina et al. Concerning other limitations in the analyzed studies, the procedures adopted to deal with the problem of missing information from medical records were generally unclear. According to Little and Rubin The majority of studies considered in the present review failed to define the criteria adopted for the stratification of continuous variables. The quality of studies could be improved by adoption of credible and unequivocal clinical and analytical stratification criteria Although the majority of the reviewed studies can be considered acceptable with respect to the adequacy of case definitions, statistical methods and multivariate analyses adopted, there were limitations in the models in cases where no interaction or multi-colinearity tests between the predictor variables were performed. In most of the studies, various prognostic factors were analyzed and many of them could be correlated, thereby producing the same explanation of variability in outcome Considering the limitations of the present review, none of the studies conducted in other parts of the world were analyzed since those studies would reflect specific clinical, social and epidemiological characteristics distinct from those of VL in the Americas. 
Other relevant issues included the problem of combining data acquired from distinct populations (in terms of areas and characteristics) as well as the inability to explore the causes of heterogeneity of effect sizes between studies, and the impracticality of determining the existence of publication bias. Most studies described the results for all of the variables analyzed, but four articles did not provide data regarding some associations, particularly for non-significant variables, and this may have modified the true effect of some of the calculated summary measures. The force of these measures may also have been overestimated because of the use of odds ratio as a proxy for the relative risk This is the first systematic review with meta-analysis on the prognosis factors relating to VL severity. The integration of information from different investigations conducted in Brazil in the last 10 years led to the identification of consistent predictor variables that might be useful in clinical practice for designing distinct therapies for patients at risk of an unfavorable outcome of the disease. The analysis of the quality of the published studies may be of assistance in future research, since positive features have been highlighted while logical criticism of the flaws, mainly relating to the external validation of multivariate prognostic models, has been offered. Similar assessments in different regions of the globe would be highly relevant since lethality of VL and the impact of this disease on our society can only be diminished by using consistent evidence-based medical approaches.Table S1Main characteristics of the studies included in the systematic review on prognostic factors relating to visceral leishmaniasis (VL) severity.(DOC)Click here for additional data file.Table S2Predictors, classified according to strength, of unfavorable clinical evolution independent of death and mortality for American visceral leishmaniasis identified in this systematic review.(DOC)Click here for additional data file.Text S1Forest plots for the variables submitted to meta-analysis.(DOCX)Click here for additional data file.Text S2PRISMA checklist (DOC)Click here for additional data file."} +{"text": "Orthopoxvirus and is endemic to Central and Western African countries. Previous work has identified two geographically disjuct clades of monkeypox virus based on the analysis of a few genomes coupled with epidemiological and clinical analyses; however, environmental and geographic causes of this differentiation have not been explored. Here, we expand previous phylogenetic studies by analyzing a larger set of monkeypox virus genomes originating throughout Sub-Saharan Africa to identify possible biogeographic barriers associated with genetic differentiation; and projected ecological niche models onto environmental conditions at three periods in the past to explore the potential role of climate oscillations in the evolution of the two primary clades. Analyses supported the separation of the Congo Basin and West Africa clades; the Congo Basin clade shows much shorter branches, which likely indicate a more recent diversification of isolates within this clade. The area between the Sanaga and Cross Rivers divides the two clades and the Dahomey Gap seems to have also served as a barrier within the West African clade. 
Contraction of areas with suitable environments for monkeypox virus during the Last Glacial Maximum, suggests that the Congo Basin clade of monkeypox virus experienced a severe bottleneck and has since expanded its geographic range.Monkeypox is a zoonotic disease caused by a virus member of the genus Orthopoxvirus genus, which includes other viruses pathogenic to humans , and produces mild to severe rash illness in infected individuals. The first human case of monkeypox was identified in the Democratic Republic of the Congo (DRC) in 1971 p < 0.001). The two Nigerian isolates group together in spite of having high genetic differentiation between them and are from a group that is sister to the one formed by the remaining isolates of the clade; in addition, all isolates obtained from captive animals form a single monophyletic group.Results of the phylogenetic analysis are shown in Within the CB clade, five monophyletic groups were formed after the phylogenetic analysis: group I includes the two western most isolates from this clade: Cameroon 1989 and Gabon 1987; group II contains 11 isolates, five from Sankuru District , one from South Sudan (Sudan 2005), one from the Republic of the Congo (Impfondo 2003), and four from other parts of DRC ; group III is the closest sister group to group II, twelve of the thirteen isolates in this group are from Sankuru District , the remaining one is from Yambuku; six of the seven isolates in group IV are from Sankuru District , the remaining one is Ikubi; group V only has one isolate from Sankuru District (JX878417). The groups are indicated in ENM projections into present day environmental conditions are similar to models presented in previous works ,39 and iGARP model projections onto climatic conditions during the Mid-Holocene show more connection between areas with suitable environments for MPXV in West Africa and also between those suitable areas in Nigeria and Central Africa; the most evident discontinuity of suitable areas separates such areas in costal Nigeria from the rest of West Africa and is located in Benin C. MaxentFinally, ENM projections onto environmental conditions during the LIG identified a larger and more continuous area of model agreement for both algorithms than that for the LGM located in central Africa with a few small patches along the coast in West Africa for GARP and Gabon for Maxent G,H.Pan [Praomys [Our phylogenetic analysis separates the MPXV isolates into two major groups that correspond to the previously identified WA and CB clades. The eastern most isolate of the WA clade is between the Niger River (west) and the Cross River (east); while the western most isolate of the CB clade is south of the Sanaga River B. These Pan , flying Pan , and mic[Praomys . The twoThe Cameroon Highlands are also located between the Cross and Sanaga Rivers; they are recognized as a high biodiversity ecoregion where the dominant vegetation types are tropical and subtropical moist broadleaf forest , represeThe Dahomey Gap is a savanna corridor that interrupts the West African rain forest in Togo, Benin, and Eastern Ghana , which hConsistent with the Pleistocene refuge theory ,75, ecolIn the present study, we first identified potential biogeographic barriers for MPXV that could be related to the CB-WA split. Further studies are necessary to determine whether the presence of a river, change in elevation, or change in the dominant vegetation cover is involved in the genetic differentiation of MPXV. 
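As a rough illustration of how occurrence records and climatic predictors enter an ecological niche model, the sketch below fits a presence-versus-background classifier and then scores predictor values drawn from a different climate scenario, in the spirit of projecting a model onto past conditions. It is a generic stand-in rather than the GARP or Maxent workflows used here, and every predictor value, sample size and threshold is a hypothetical assumption.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical bioclimatic predictors (mean temperature, annual rainfall) at
# occurrence points and at randomly sampled background points
X_presence = rng.normal(loc=[25.0, 1800.0], scale=[1.5, 300.0], size=(40, 2))
X_background = rng.normal(loc=[27.0, 1200.0], scale=[3.0, 600.0], size=(2000, 2))
X = np.vstack([X_presence, X_background])
y = np.r_[np.ones(len(X_presence)), np.zeros(len(X_background))]

# Presence-versus-background classifier as a crude stand-in for a niche model
model = make_pipeline(StandardScaler(),
                      LogisticRegression(class_weight="balanced")).fit(X, y)

# "Projection": score predictor values drawn from a different climate scenario
X_past = rng.normal(loc=[24.0, 1500.0], scale=[2.0, 500.0], size=(2000, 2))
suitability = model.predict_proba(X_past)[:, 1]
print("fraction of cells above 0.5 suitability:", float(np.mean(suitability > 0.5)))
```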
The addition of MPXV isolates from the area between the Sanaga and Cross rivers would be ideal; however, cases of human or wildlife MPX have not been reported from this area. Second, we propose that the CB clade is a group with very recent diversification, possibly explained by the colonization of a bigger area with suitable conditions (refuge theory); however, dating the times of differentiation between and within clades is not possible with our current dataset. Tying MPXV cladogenesis to geologic or climatic events is a subject of future efforts. Additional field studies that result in the isolation of MPXV or the finding of serological evidence of infection with this virus in wildlife could be key to better understanding its natural history and biogeography."} +{"text": "In this article recent progress on the elucidation of the dynamic composition and structure of plastid nucleoids is reviewed from a structural perspective. Plastid nucleoids are compact structures of multiple copies of different forms of ptDNA, RNA, enzymes for replication and gene expression as well as DNA binding proteins. Although early electron microscopy suggested that plastid DNA is almost free of proteins, it is now well established that the DNA in nucleoids similarly as in the nuclear chromatin is associated with basic proteins playing key roles in organization of the DNA architecture and in regulation of DNA associated enzymatic activities involved in transcription, replication, and recombination. This group of DNA binding proteins has been named plastid nucleoid associated proteins (ptNAPs). Plastid nucleoids are unique with respect to their variable number, genome copy content and dynamic distribution within different types of plastids. The mechanisms underlying the shaping and reorganization of plastid nucleoids during chloroplast development and in response to environmental conditions involve posttranslational modifications of ptNAPs, similarly to those changes known for histones in the eukaryotic chromatin, as well as changes in the repertoire of ptNAPs, as known for nucleoids of bacteria. Attachment of plastid nucleoids to membranes is proposed to be important not only for regulation of DNA availability for replication and transcription, but also for the coordination of photosynthesis and plastid gene expression. Plastids are the characteristic organelles of photosynthetic eukaryotes. They are the sites of photosynthesis, and their biosynthetic pathways supply the plant cell with many essential compounds. Chloroplasts evolved from a cyanobacterial ancestor after a single endosymbiotic event, that was followed by an extensive reduction of the plastid genome size together with RNA and proteins are organized in structures that are similar to bacterial nucleoids. The compact structure of DNA in such nucleoids has been compared with the chromatin in the nucleus of eukaryotic cells . 
Interestingly, transcription by RNA polymerase II requires dynamic changes in the chromatin structures of the templates proteins Grasser, also plaPackaging of DNA by histones into nucleosomes is not a distinguishing feature of eukaryotes, but also occurs in some groups of archaebacteria which might have participated in the origin of eukaryotes were shown to be organized as rosettes with a compact central core from which supercoiled DNA loops with an average size of 10 kbp were observed to radiate The large single copy region (LSC) which in Arabidopsis comprises as much as 54% of the genome, (2) the small single copy region (SSC) making up 12% of the plastid genome in Arabidopsis, and (3) the two inverted repeats, IRs Green, . This doin situ hybridization showed that besides circular chromosomes, linear forms occur in plastids that were proposed to be the major forms in chloroplasts where many small nucleoids are attached to thylakoids and a stand-alone SWIB domain protein, the only type of SWIB proteins found in bacteria. Chlamydiae are a group of bacteria living as endosymbionts and parasites in other bacteria or in eukaryotic cells. Phylogenetic analyses suggested that an ancestral member of the group of Chlamydiae facilitated the establishment of the primary endosymbiosis between cyanobacteria and an early eukaryote and WHIRLY3 (pTAC11) that have been found in the proteome of transcriptionally active chromosomes (TAC) isolated from Arabidopsis chloroplasts and SVR4-like (MRL7-like), was found and SVR4-like proteins, which were originally identified as important proteins for chloroplast development in Arabidopsis and CHLI (Mg chelatase subunit I) genes. These mutants possess thylakoids but lack grana stacks and are devoid of the photosynthetic complexes resulting in compromised photosynthesis -dependent genes increases, whereas at the same time the expression of NEP (nuclear encoded polymerase) -dependent genes decreases , as well as with the two nucleoid associated superoxide dismutases FSD2 and FSD3 (Qiao et al., Of particular importance for the activity of nucleoids is their association with the thylakoid membranes where the photosynthetic machinery undergoes changes in composition in response to environmental conditions. A prerequisite for remodeling of the photosynthetic apparatus is the regulation of plastid gene transcription in response to light-dependent changes in the redox state of the photosynthetic apparatus (Pfannschmidt et al., In this context, proteins found to be located at the interface between nucleoids and the thylakoid membrane are of particular interest. MFP1 was proposed to anchor nucleoids to thylakoids in chloroplasts (Jeong et al., Transgenic plants with different levels of nucleoid/thylakoid associated proteins might help to elucidate the roles of these proteins in linking the activity of the photosynthetic machinery to organization and expression of plastid genes as well as the expression of nuclear genes.In the nucleus the availability of DNA for transcription is regulated mainly by posttranslational modifications, whereas in bacteria regulation of transcription involves the exchange of DNA binding proteins (Luijsterburg et al., It remains to be shown whether the different packaging of DNA in different regions of the nucleoid changes during development and in response to environmental cues. 
Whether, however, the central body of nucleoids with dense packaging (Sakai et al., The comparison of DNA organization in plastids, nucleus and bacteria shows that the shaping and organization of plastid nucleoids involves novel organelle specific mechanisms resembling those acting on eukaryotic chromatin besides mechanisms described for eubacterial nucleoids. During evolution of plants, the architectural proteins of bacterial nucleoids have been lost and replaced by new proteins. Some of these are enzymes that have acquired an additional function as DNA binding proteins. Others might have been contributed by Chlamydiae which facilitated establishment of the primary endosymbiosis between an early eukaryote and the cyanobacterial ancestor of plastids. These proteins do not exhibit sequence or structural conservation with the eukaryotic histones, but similar to the histones they might be regulated by posttranslational modifications.In comparison to eukaryotic chromatin, nucleoids of plastids have as those of bacteria a more open structure, that allows easy access for DNA transaction enzymes. The enrichment of enzymes involved in RNA processing and translation in the nucleoid fraction suggests that transcription, RNA processing and translation are tightly connected with each other.Similarly to bacteria, also in plastids, membranes seem to play a key role in the organization and maintenance of nucleoids. In chloroplasts, the proximity of nucleoids and photosynthetic machinery as well as the presence of several redox active proteins in nucleoids, allows for a tight coordination of photosynthesis and nucleoid function, i.e., replication and gene expression. It is striking that not only particular enzymes involved in gene expression but also architectural proteins are controlled by redox signals. Thereby these proteins might have a tremendous impact on the different enzymatic activities associated with nucleoids; in particular replication, transcription and DNA repair.The architectural organization of the plastid genetic machinery is not well understood. Since principles underlying the dynamic shaping of genomes are uniform in all forms of life, the knowledge about DNA organization in bacteria and eukaryotes can be used in future studies on the dynamic architecture of chloroplast nucleoids.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Automated analysis of premature electroencephalogram (EEG) for diagnosis is a crucial step to reduce the workload of neurologists. The grade of discontinuity gives important information about the maturation . For norIn conclusion, this research adds another valuable feature for the automated analysis of premature background EEG, which would improve the overall assessment in the NICU for EEG diagnosis"} +{"text": "In this study, the Spectral Relaxation Method (SRM) is used to solve the coupled highly nonlinear system of partial differential equations due to an unsteady flow over a stretching surface in an incompressible rotating viscous fluid in presence of binary chemical reaction and Arrhenius activation energy. The velocity, temperature and concentration distributions as well as the skin-friction, heat and mass transfer coefficients have been obtained and discussed for various physical parametric values. 
The numerical results obtained by (SRM) are then presented graphically and discussed to highlight the physical implications of the simulations. The study of boundary layer flow and heat transfer inducted by stretching surface has attracted considerable interest due to its wide applications in industrial processes such as the cooling of an infinite metallic plate in a cooling bath, the aerodynamic extrusion of plastic sheets, boundary layer along the material handling conveyers, the boundary layer along a liquid film and condensation processes. The quality of the final product depends on the skin friction coefficient and the rate of heat transfer. One of the earliest studies of the boundary layer flow problem was conducted by Sakiadis Unsteady flows in rotating fluid have numerous uses or potential applications in chemical and geophysical fluid dynamics and mechanical nuclear engineering. Using the Fourier series analysis, Soundalgekar et al. Many chemically reacting systems involve the species chemical reactions with finite Arrhenius activation energy, with examples occurring in geothermal and oil reservoir engineering. The interactions between mass transport and chemical reactions are generally very complex, and can be observed in the production and consumption of reactant species at different rates both within the fluid and the mass transfer. One of the earliest studies involving the binary chemical reaction in boundary layer flow was published by Bestman This work deals with the effects of chemical reactions with finite Arrhenius activation energy on unsteady rotating fluid flow due to a stretching surface with Binary chemical reaction and activation energy. The governing partial differential equations are solved using the spectral relaxation method (SRM). The SRM is based on simple decoupling and rearrangement of the governing nonlinear equations in a Gauss-Seidel manner. The resulting sequence of equations are integrated using the Chebyshev spectral collocation method. The SRM was introduced in Consider the three-dimensional, unsteady flow due to a stretching surface in a rotating fluid. The motion in the fluid is three dimensional. At time The following non-dimensional variables are introduced,The governing The non-dimensional skin friction in both In this section, the spectral relaxation method (SRM) is applied to solve the governing nonlinear PDEs (10 \u2013 13). For the implementation of the spectral collocation method, at a later stage, it is convenient to reduce the order of The spectral relaxation method algorithm uses the idea of the Gauss-Seidel method to decouple the governing systems of The initial approximation for solving Starting from given initial approximations (27 \u2013 28), the iteration schemes 19 \u2013 26) can be solved iteratively for can be sThus, applying the spectral collocation method and finite difference approximation on the SRM scheme (19 \u2013 26) gives35)(36)subject tIn order to determine the evolution of the boundary layer flow properties, numerical solutions of the set of governing systems of partial differential In this investigation, we considered the spectral relaxation method approach to solving an coupled non-linear partial differential equation system that governs the unsteady flow with binary chemical reaction and activation energy due to a stretching surface in a rotating fluid. 
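As a concrete illustration of the relaxation idea underlying the method, the following sketch applies the two ingredients named above, Gauss-Seidel-style linearisation about the previous iterate and Chebyshev spectral collocation, to a deliberately simple model boundary value problem. It is not the coupled system (10-13) solved in this work; the model equation, grid size and tolerance are illustrative assumptions only.

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D and Gauss-Lobatto nodes x on [-1, 1]
    (standard Trefethen construction)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

# Model problem (illustrative only): u'' + u u' = 0 on [-1, 1], u(-1) = 1, u(1) = 0.
# SRM idea: evaluate the nonlinear coefficient at the previous iterate, so each
# iteration requires solving only a linear collocation system.
N = 40
D, x = cheb(N)
D2 = D @ D
u = 0.5 * (1.0 - x)                  # initial guess satisfying the boundary conditions

for it in range(1, 201):
    A = D2 + np.diag(u) @ D          # linearised operator with u_r frozen in the coefficient
    b = np.zeros(N + 1)
    A[0, :] = 0.0;  A[0, 0] = 1.0;   b[0] = 0.0    # u(+1) = 0 (first node is x = +1)
    A[-1, :] = 0.0; A[-1, -1] = 1.0; b[-1] = 1.0   # u(-1) = 1 (last node is x = -1)
    u_new = np.linalg.solve(A, b)
    if np.max(np.abs(u_new - u)) < 1e-9:
        u = u_new
        break
    u = u_new

print(f"stopped after {it} iterations; u(0) = {u[N // 2]:.4f}")
```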
The effects of the governing parameters namely the rotation rate parameter, the Schmidt number, the non-dimensional activation energy, the Prandtl number, the chemical reaction rate constant, the temperature relative parameter and on the flow characteristics as well as the local skin friction, heat and mass transfer coefficients have been studied. Small values the rotation rate parameter"} +{"text": "In a retrospective study of 113 patients presenting to a Brisbane general practitioner with eating disorders, 14 patients gave a history of anaphylaxis, whilst another 7 reported significant food allergies. This is significantly higher than population estimates of anaphylaxis of 1 in 1700.This high incidence leads one to ponder the origins of the issues - whether the fear of anaphylaxis contributes to a fear of food in general or whether the two conditions may share underlying immunological and/or genetic risk factors.Of particular note is the recent treatment of anaphylaxis in children using probiotics, the known association of PANDAS, recent discoveries linking the micro biome to mental health and the increased rate of asthma in mothers with depression."} +{"text": "Improving outcomes for women with epithelial ovarian cancer is a major health issue worldwide as 5-year survival has not improved significantly over the last two decades.CLASP1, a regulator of microtubule dynamics essential for mitotic cell cycling was positively associated with survival, either overall or disease free (The urgent need to increase our understanding of high-grade serous ovarian cancer (HG-SOC) led to it being chosen for the pilot project of The Cancer Genome Atlas (TCGA) and the genomes of over 400 HG-SOC samples are now freely accessible for interrogation. These data are being used for the discovery of biomarkers as well as the generation of hypotheses to understand the natural history of this malignancy and develop effective targeted therapies. In this Research Topic, Lisowaska and colleagues used gene expression profiling to identify that reduced expression of in silico resource now available via TCGA and other web-based portals, research in HG-SOC is hampered on a number of fronts. Comparison of results from cancerous or cancer-associated stromal cells with each non-cancerous equivalent is a fundamental research question, but what to use for normal cells is unclear. Recent evidence suggests that HG-SOC has its origins in the secretory cells located in the fimbrial end of the fallopian tube. This is contrary to the prevailing notion that HG-SOC arises from the epithelium lining the ovary and inclusion cysts. The contribution of each site to serous ovarian carcinogenesis is currently under debate. Jones and Drapkin recount the evolution of evidence for each site of origin and review the use and limitations of primary cell culture model systems developed from each site culture conditions. 3D culture systems feature in several articles in this Research Topic indicating the enthusiasm for this culture type and recognition of the deficiencies of the monolayer systems , 5, 6. Iin vitro development of treatment resistance and serous tubal intraepithelial carcinomas (STICs) all in the fallopian tubal epithelium, implicating this tissue as a site of origin for HG-SOC. 
George and Shaw provide a critical review of the literature related to these findings, noting that P53 signatures are found with similar frequencies in BRCA-mutation carriers and non-carriers, and that STILs and STICS are uncommon and identified with poor reproducibility as a background strain for breeding with relevant genetically engineered changes models was aimed at overcoming two of these deficiencies by using pieces of patient tumors that had never been cultured in vitro and that retained the microenvironment of the original tumor. PDX models are aimed primarily at testing drug responses. The site of implantation of PDXs can alter the rate of engraftment as well as the characteristics of the model. The advantages and disadvantages of the different sites of implantation are discussed by Scott and colleagues as part of their review of PDX model systems that have been trialed for EOC (Xenografts are the most utilized for EOC . They an for EOC , 14, 16.The rapid acceptance of PDXs as preclinical models underscores the increasing awareness of the contribution of the microenvironment to the pathogenesis and progression of cancer. An overview of the different components of the microenvironment such as proteases and extracellular matrix and their roles in promoting invasion and metastasis in EOC is presented by Davidson and colleagues . Dissemiin vivo models.In summary, this Research Topic showcases our current understanding of a number of key areas in EOC. It has a strong focus on model systems given the critical importance of having accurate systems and the particular difficulties and dilemmas faced especially in the development of The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Severe soft tissue defects as a result of lye contamination remain a huge challenge in the interdisciplinary approach of trauma surgeons and plastic surgeons. Free tissue transfer is a suitable surgical option for successful reconstruction of form and function of defects in the distal parts of the lower extremities. We report the successful two-stage reconstruction of a full thickness lye contamination at the dorsum of the foot with a free temporoparietal fascia flap covered with a split-thickness skin graft from the thigh. The described method is a suitable operative alternative to anterolateral thigh flaps or other thin fascia flaps regarding flap harvest and donor site morbidity and should be considered in the portfolio of the plastic surgeon. Soft tissue defects of the foot and ankle region are usually the result from mechanical, chemical or thermal trauma or are a consequence of systemic disorders with reduced perfusion, trophic dysfunction and soft tissue infections . The sucThe 24-year-old male was referred from an external hospital after a job-related chemical injury to the dorsum of his left foot. He accidentally poured an unknown amount of caustic soda solution over his foot. The emergency treatment was performed in a general hospital near to the patient\u2019s place of residence. He was admitted to our Burns Unit the following day. Patient examination showed extensive necroses from the ankle region until the distal third of the dorsum of his left foot and the metatarsals. Planta pedis, heel and toe region remained intact Figure 1 . Due to th day after admittance in a healthy condition with good scarring. 
The initial care was performed by our outpatient unit with weekly follow-up visits and short-term visits after 9 weeks based on perforators of the descending branch of the lateral femoral circumflex artery or the free serratus fascia flap , 6], , . They coDue to the extensive defect of our young but overweight patient we did not chose local flap options. Patient factors such as weight and postoperative weight bearing requirements as a very young patient led us to perform surgery with the microvascular free temporoparietal fascial flap with end-to-side anastomosis to the tibal dorsal artery. With this option we achieved an optimal consensus of form and function of the foot on the one hand and the aesthetic outcome on the other hand. The short-term follow-up of our patient shows a good and plane scarring, optimal donor site outcome and a very good postoperative function. Our results are in conformity with the experiences of Duteille et al., who showed similar results on twelve consecutive patients . All patThe complexity of soft tissue reconstruction of the foot arises with a relative high incidence of flap failure and secondary major amputations of the foot or distal lower leg in a lot of patients. One reason leading to amputation may be limited capacity for plastic reconstructive surgery and therefore limited surgical expertise in smaller health centers. A grave reason may also be a high complication rate of chronically ill patients with a lot of comorbidities leading care-givers to more definite options such as amputations. Ligh et al. from Duke University recently compared a group of patients which received free flaps to a group of patients which received local flaps . AccordiConsistent with the findings of Duteille et al. we deem the microvascular free temporoparietal fascial flap for a good operative option in patients with profound defects of the dorsum of the foot. It may expand the portfolio of the plastic surgeon in reconstructive surgery in this body region. The authors declare that they have no competing interests."} +{"text": "The up regulation of central BDNF gene expression has been suggested in the treatment of major depression. Chronic administration of dopaminergic agents activates the function of CREB which results into the up regulation of the BDNF gene expression. Statin therapy is associated with a reduced risk of depression and could be of therapeutic potential for major depression.We have examined a possible link amongst simvastatin, bromocriptine, haloperidol and levodopa in accordance with BDNF exon II gene expression using RT-PCR method in mice treated with standard paradigm of chronic mild stress procedure for 14 days. We specifically determined if the oral administration of simvastatin would affect the efficacy of bromocriptine, haloperidol or levodopa in mediating the regulation of the BDNF exon II gene expression.The results of RT-PCR method revealed the differential expression patterns for the expression of BDNF exon II gene in brain of mouse by indicating the three different bands, as evidenced previously to be the three different exon II transcript variants in mouse namely BDNF IIA, BDNF IIB and BDNF IIC. 
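Semi-quantitative band read-outs of this kind are commonly normalised to a housekeeping gene before groups are compared; the short sketch below shows that arithmetic on hypothetical densitometry values. The gene used for normalisation, the intensities and the group sizes are illustrative assumptions, not data from this study.

```python
import numpy as np

# Hypothetical densitometric band intensities (arbitrary units) for one BDNF exon II
# transcript and a housekeeping gene in control and treated brains (n = 4 per group)
bdnf_ctrl = np.array([1150.0, 1020.0, 980.0, 1100.0])
actin_ctrl = np.array([2100.0, 2050.0, 1990.0, 2080.0])
bdnf_treat = np.array([1850.0, 1760.0, 1900.0, 1690.0])
actin_treat = np.array([2060.0, 2110.0, 2000.0, 2090.0])

rel_ctrl = bdnf_ctrl / actin_ctrl        # expression relative to the loading control
rel_treat = bdnf_treat / actin_treat
fold_change = rel_treat.mean() / rel_ctrl.mean()
print(f"relative expression: control {rel_ctrl.mean():.2f}, treated {rel_treat.mean():.2f}")
print(f"fold change (treated / control) = {fold_change:.2f}")
```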
Mice treated with bromocriptine or levodopa in combination with simvastatin for 14 days could synergize the up regulation of for the expression of specific BDNF exon II transcript as compared to simvastatin alone whereas the mice treated with haloperidol in combination with simvastatin for 14 days could abolish the for the expression of up regulation of specific BDNF exon II transcript compared to simvastatin alone.2 like receptor and differential expression patterns for the expression of BDNF exon II gene in brain of mouse which further strengthen the emerging hypothesis, suggesting the ability of neuronal systems to exhibit the appropriate adaptive plasticity could contribute to the treatment of depression. Further, the dopaminergic agents in accordance with the cholesterol lowering drug as adjuncts may reduce the depressive like behavior more significantly and facilitation of antidepressant action of dopaminergic agents may be correlated with HMGR or cholesterol or mevalonate pathway.The results of the above study suggests linkage between the function of dopamine or dopamine D"} +{"text": "Introduction and Aims. Congenital absence of the vas deferens is an uncommon anomaly and this clinical condition is responsible for up to 1-2% of male infertility. It can be either unilateral or bilateral and the associated anomalies include cryptorchidism, seminal vesicles and ejaculatory ducts anomalies, and renal anomalies such as renal agenesis. We hereby present a case of congenital unilateral absence of vas deferens, which was found incidentally during an evaluation of undescended testis in a patient with ipsilateral renal agenesis. Case Presentation. A 10-month-old boy was referred to the urology clinic with an undescended right testis. Preoperative abdominal ultrasonography showed agenesis of the right kidney and the absence of right vas deferens and epididymis was confirmed during laparoscopic orchiectomy performed due to short right spermatic cord. There were no other concomitant anomalies of the genitourinary system observed in evaluation. Conclusion. Congenital unilateral absence of the vas deferens with cryptorchidism and renal agenesis is a rare diagnostic entity. Cryptorchidism or absent vas deferens found incidentally should lead the physician to evaluate the status of the contralateral vas deferens and conduct a renal tract ultrasound study. Congenital unilateral absence of the vas deferens (CUAVD) is an uncommon anomaly, which may contribute to male infertility and it has been associated with renal agenesis and a variety of other anomalies that was first described in 1870 by Reverdin , 3. TherA 10-month-old boy was referred to the urology clinic with an undescended right testis. A presurgical history of the patient revealed that the patient had agenesis of the right kidney and no other concomitant anomalies of the genitourinary system had been evaluated before surgery. The clinical examination presented orthotopically positioned left testis but right testis was palpable neither in the inguinal region nor in scrotum. Left hemiscrotum showed no redness or swelling and left epididymis was also verified by palpation. The inguinal area had no palpable mass referring to inguinal hernia. However, an absence of the vas deferens on the right hemiscrotum was found. Then a scrotal ultrasonography was undertaken and it confirmed cryptorchidism of right testis located at the level of right internal inguinal ring and the suspected diagnosis of agenesis of the right epididymis. 
Left epididymis and testis had no pathological signs seen . MoreoveDevelopment of renal system is closely integrated with the development of genital system. The ureteric bud develops from the mesonephric duct during the 5th week of gestation and the elongated stalk of the ureteric bud that is called the metanephric duct later forms the ureter. Moreover, the ingrowth of the branching ureteric buds into the metanephric blastema results in the characteristic lobulated appearance of the definitive kidney. The mesonephric duct differentiates into the bladder trigone, the seminal vesicle, the ductus deferens, and the distal two-thirds of the epididymis. An essential requirement for renal genesis after induction and differentiation of the intermediate mesoderm is the development of the ureteric bud at the caudal area of the mesonephric duct. The possible reasons for renal agenesis include the lack of induction of the metanephric blastema by the ureteral bud, primary absence of the caudal nephrogenic core, ureteral bud malformation, and dysplasia of mesonephric duct \u20136. CongeCUAVD with cryptorchidism and renal agenesis is a rare diagnostic entity. The exact pathophysiology of CUAVD still remains poorly understood. The absent vas deferens found in scrotal or laparoscopic abdominal surgery should lead the surgeon to explore the status of the contralateral vas deferens and conduct a renal tract ultrasound study. Genetic study for the evaluation of CFTR gene might be also helpful to evaluate the presence of latent cystic fibrosis. Moreover, scrotal palpation to confirm the presence of vas deferens should be recommended as a part of the routine physical exam in males."} +{"text": "The epidemiological transition has provided the theoretical background for the expectation of convergence in mortality patterns. We formally test and reject the convergence hypothesis for a sample of industrialized countries in the period from 1960 to 2008. After a period of convergence in the decade of 1960 there followed a sustained process of divergence with a pronounced increase at the end of the 1980's, explained by trends within former Socialist countries (Eastern countries). While Eastern countries experienced abrupt divergence after the dissolution of the Soviet Union, differences within Western countries remained broadly constant for the whole period. Western countries transitioned from a strong correlation between life expectancy and variance in 1960 to no association between both moments in 2008 while Eastern countries experienced the opposite evolution. Taken together, our results suggest that convergence can be better understood when accounting for shared structural similarities amongst groups of countries rather than through global convergence. Whether there is global convergence in well-being across countries remains in the realm of scientific debate. The hypothesis of convergence has been studied for a variety of dimensions of well-being, with mixed results. While economic convergence is not a general reality Our research advances the current understanding of mortality convergence by, first, formally testing the implications of the theory on the epidemiological transition, then second, uncovering trends in mortality patterns beyond the evolution of the mean, i.e. life expectancy, and variance of the considered distribution. 
Our chosen indicator of divergence, the Kullback-Leibler divergence (KLD), provides a comprehensive measure of the overall differences between distributions.Studies of convergence classify differences between countries using two broad categories: first there is consideration for the unequal positions of countries in the stages of their development; second are structural differences, dissimilarities that persist even should countries become equally developed In this work, we test and reject the convergence hypothesis for industrialized countries in the period 1960\u20132008. However, we acknowledge that the lack of convergence for our whole sample does not necessarily imply that there are not subgroups of countries converging. In fact, the concept of convergence among subgroups or clubs of countries has already received some attention in the mortality convergence literature Our findings on the lack of convergence are coherent with recent findings that highlight the relatively high variability of mortality at young adult ages across countries and its contribution to international differences in mortality patterns Our object of study, mortality patterns, is extracted from the period life-tables available in the Human Mortality Database The divergence of the age-at-death distribution for country Instead of focusing on the particular value of the KLD, in our exercise we set 1960 as the base year.Motivated by concerns of structural similarities and dissimilarities, we turn to the well-known literature on the effect of political and economic transitions of former Soviet countries, a process which has exerted a major influence in the form of a mortality shock for said countries Our first set of results, before we turn to the KLD trends, are based on the traditional mean and variance study of the mortality distribution. We provide a graphical analysis of the mean and variance of the ages-at-death distributions over time that also uncovers some interesting features of the epidemological transition. A remarkable feature of the data is the organization of Western countries in 1960 is mostly along a line that orders countries in the space of high life expectancy \u2014 low inequality and low life expectancy \u2014 high inequality. This correlation reflects the reality of countries in different stages of their epidemiological transitions. After 50 years, the picture that emerges is drastically different. Although there have been common trends among the majority of countries, i.e. the generalized reduction of variance and the increase in life expectancy, the resulting distribution of countries encompasses a heterogenous landscape. The previous correlation is less apparent, with countries with new profiles emerging from the distribution. For instance, we observe countries with similar variances but with large differences in life expectancy. Performing a similar exercise for Eastern countries yields an interesting contrast. In the latter case, the historical evolution is reversed, with countries evolving from many different profiles to the single dimension ordering as seen amongst Western countries in the 1960's. The mean-variance profiles we report are coherent with previous work In the next section we provide an evaluation of convergence based on the KLD. The main strength of the KLD is that it encompasses the whole differences in distributions, providing a clearer picture than analysis based on only a collection of moments. 
This is particularly relevant for the analysis of the mortality distributions as infant mortality breaks the normality of the ages-at-death distribution. The ages-at-death distributions of the earlier periods contained a considerably larger number of infant deaths than the contemporary distributions, making mean-variance comparisons a less accurate evaluation.Our results indicate a clear pattern of divergence in our sample of industrialized countries . The KLDThe relevance of taking into account the existence of clubs of countries becomes apparent when observing the differences in the trends between the two groups of countries. In the sample of Western countries we find a relatively flat profile of convergence for mortality patterns ; that isOur main finding suggests that, while the reduction in differences due to the catching up process of countries at earlier stages of development is an important catalyst towards convergence While our results uncover group specific trends, our classification of countries does not identify convergence clubs. There exists a period of convergence for Eastern countries prior to the 1990s but the recent history of development shows that Eastern countries are no longer approaching a common distribution. Given the robustness of the lack of overall convergence and the crucial role that group-specific dynamics play, further research is needed to uncover the determinants of club membership and pertinence.The discussion on the determinants of club membership can be divided into two areas of study depending on whether the focus is on the differences between developed and developing countries or the differences amongst industrialized countries. When focusing on the former, the literature has highlighted the existence of mortality traps"} +{"text": "As maternal deaths become rarer, monitoring near-miss or severe maternal morbidity becomes important as a tool to measure changes in care quality. Many calls have been made to use routinely available hospital administration data to monitor the quality of maternity care. We investigated 1) the feasibility of developing an English Maternal Morbidity Outcome Indicator (EMMOI) by reproducing an Australian indicator using routinely available hospital data, 2) the impact of modifications to the indicator to address potential data quality issues, 3) the reliability of the indicator.We used data from 6,389,066 women giving birth in England from April 2003 to March 2013 available in the Hospital Episode Statistics (HES) database of the Health and Social care Information centre (HSCIC). A composite indicator, EMMOI, was generated from the diagnoses and procedure codes. Rates of individual morbid events included in the EMMOI were compared with the rates in the UK reported by population-based studies.EMMOI included 26 morbid events (17 diagnosis and 9 procedures). Selection of the individual morbid events was guided by the Australian indicator and published literature for conditions associated with maternal morbidity and mortality in the UK, but was mainly driven by the quality of the routine hospital data. 
Comparing the rates of individual morbid events of the indicator with figures from population-based studies showed that the possibility of false positive and false negative cases cannot be ruled out.While routine English hospital data can be used to generate a composite indicator to monitor trends in maternal morbidity during childbirth, the quality and reliability of this monitoring indicator depends on the quality of the hospital data, which is currently inadequate. Most women in high resource settings give birth in hospitals or birth centres . In bothOutcome measures such as maternal and perinatal mortality are frequently monitored at both health centre and population levels. However as, in particular, maternal death becomes rarer, this becomes less meaningful as it is not very responsive to changes in care quality. The WHO introduced a complex maternal near miss/ morbidity indicator, which requires collection of detailed data including laboratory based parameters indicating organ system dysfunction . HoweverIn the UK the Royal College of Obstetrics and Gynaecology have developed a series of \u201cmaternity indicators\u201d which include elements such as caesarean section rate . HoweverHowever, it is not clear whether data quality is sufficiently high in England to adopt the Australian approach. The aim of this study was to investigate the feasibility of reproducing the Australian Maternal Morbidity Outcome Indicator using routinely available English hospital maternity data; the impact of modifications to the Indicator to address potential data quality issues as well as known maternal health concerns in the UK; and the reliability of the indicator.We used routine hospital data from the Hospital Episode Statistics (HES) database of the Health and Social care Information Centre (HSCIC) to develth revision of the International Statistical Classification of Diseases and Related Health Problems (ICD-10) and procedures are coded using the OPCS Classification of Interventions and Procedures used by the NHS hospitals in the UK [We generated a list of diagnoses and procedures to be included in the composite indicator for England (EMMOI) by reviewing both the initial and final lists of the components of the Australian Maternal Morbidity Outcome Indicator developed by Roberts et al as well n the UK .We calculated the frequency and rate of the individual morbid events (diagnoses and procedures) in the study population and their annual rates over the period of 10 years in order to examine potential variation in coding practice or data quality. We calculated the incidence rate and 95% confidence intervals (CI) of maternal morbidity outcomes per 1000 women giving birth in England using the EMMOI for each year from April 2003 to March 2013 and conducted tests for linearity to examine their trend over time. We also calculated the percentage change in incidence of maternal morbidity during childbirth in England in 2012\u201313 compared to 2003\u201304 and 95% CI.2 test for trend. This was followed by univariable and multivariable logistic regression analyses to examine whether the change in the odds of maternal morbidity outcomes over the period of 10 years were attributable to the changes in maternal and pregnancy characteristics in England. 
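As an illustration of these rate calculations, the sketch below derives annual incidence per 1000 maternities with an approximate 95% confidence interval, together with the percentage change between the first and last years based on the log rate ratio. All counts are hypothetical placeholders rather than the published HES figures.

```python
import numpy as np
from scipy.stats import norm

def rate_per_1000(cases, births, alpha=0.05):
    """Incidence per 1000 maternities with a normal-approximation confidence interval."""
    p = cases / births
    se = np.sqrt(p * (1.0 - p) / births)
    z = norm.isf(alpha / 2.0)
    return 1000.0 * p, 1000.0 * (p - z * se), 1000.0 * (p + z * se)

# Hypothetical annual counts in the spirit of the HES extract (not the published figures)
years = [2003, 2008, 2012]
cases = [1900, 2450, 2900]
births = [580_000, 650_000, 670_000]
for yr, c, n in zip(years, cases, births):
    r, lo, hi = rate_per_1000(c, n)
    print(f"{yr}-{yr - 1999:02d}: {r:.2f} per 1000 (95% CI {lo:.2f} to {hi:.2f})")

# Percentage change between the first and last years, with a CI from the
# approximate standard error of the log rate ratio, sqrt(1/a + 1/b)
rr = (cases[-1] / births[-1]) / (cases[0] / births[0])
se_log_rr = np.sqrt(1.0 / cases[-1] + 1.0 / cases[0])
lo, hi = np.exp(np.log(rr) + np.array([-1.96, 1.96]) * se_log_rr)
print(f"change: {100 * (rr - 1):.0f}% (95% CI {100 * (lo - 1):.0f}% to {100 * (hi - 1):.0f}%)")
```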
A core logistic regression model was built including the nine maternal and pregnancy characteristics shown in We examined changes in the maternal sociodemographic and pregnancy characteristics by conducting \u03c72 tests for differences in proportions and \u03c7The composite indicator, EMMOI, included all 14 diagnoses and nine out of 10 procedures from the final list of components of the Australian indicator as well as sepsis, eclampsia and cerebral venous thrombosis, which are important causes of maternal morbidity and mortality in the UK \u201323. DiagOf the 6,389,066 women giving birth in England from 2003 to 2013, the EMMOI classified 24,427 women as having experienced one or more maternal morbidity during childbirth. There was an increase in the annual rate of maternal morbidity outcomes from 2003 to 2013 with a lThe overall frequency and rate of the individual component diagnoses and procedures included in the EMMOI are shown in Using routine hospital data on episodes of childbirth in England from April 2003 to March 2013 we were able to generate a composite indicator, EMMOI, to measure maternal morbidity outcomes during childbirth in England which included 26 morbid events (17 diagnoses and 9 procedures). Selection of individual morbid events was driven by the quality of the routine hospital data, which was variable and thus necessitated a number of adaptations. The composite indicator showed an increase in maternal morbidity outcomes during childbirth in England across the 10 years examined, with a 29% overall absolute increase in incidence. However, questions on reliability of the indicator remain due to mismatches between the rates of individual morbid events and the incidence of these conditions in the UK reported by population-based studies.Comparing the overall rate of the individual component diagnoses and procedures included in the EMMOI with incidence of these morbid events in the UK reported by population-based studies and with the rates of the individual components of the Australian indicator estimated in the Australian population health datasets , showed Irrespective of the variations in definitions and indicators used to measure maternal morbidity, several studies from across the world show an increase in the incidence of maternal morbidity outcomes \u201343. The The incidence of sepsis in the UK between 1 June 2011 and 31 May 2012 was estimated as 4.7 per 10,000 maternities and confThe HES data includes diagnosis of acute psychosis among women during the childbirth episode , which would be expected to be lower than the overall incidence of acute psychosis in the UK as this Incidence rates of 10 out of 26 morbid events constituting the EMMOI were in agreement with the rates of the individual morbid events included in the Australian indicator estimated in the Australian population health datasets . IncidenThis study was an attempt to create a composite indicator to measure maternal morbidity outcomes during childbirth using routine hospital data from 6,389,066 women giving birth in England from 2003 to 2013. Although we were able to demonstrate that a composite indicator to measure maternal morbidity outcomes during childbirth can be constructed using routine hospital data, we were not able to validate the accuracy of the composite indicator in classifying women as having suffered a morbidity event, since data protection and privacy laws allowed only for access to anonymised data. 
The use of a composite measure helps to avoid or reduce concerns about false negatives, but we cannot exclude the possibility of false positives.It is important to acknowledge the limitation of not being able to use the data on blood transfusion, PPH and \u2018repair of ruptured or inverted uterus\u2019 which we did not find to be reliable. While the information collated from the NHS hospitals goes through a process of cleaning and quality checks, the quality of routine data collected by the HSCIC including HES has been questioned by a number of audits \u201347. It hBeyond the coding limitations we describe, it is difficult to determine the reasons for the observed differences in codes and procedures relating to uterine rupture and postpartum haemorrhage. We could speculate that procedures, particularly where these require an operation under anaesthesia, such as repair of ruptured uterus, are more reliably coded than diseases/disorders when codes could be included in error simply from a list of differential diagnoses. Similarly, since blood transfusion is a more clearly defined procedure, requiring detailed checking, it may be more accurately recorded than postpartum haemorrhage; estimation of blood loss post-delivery is known to be inaccurate . ConversA single measure of maternal morbidity could be a reliable indicator of quality of pregnancy care in some settings, and an indicator which uses routine data could be a cost-effective tool to monitor the quality of pregnancy care in England. However, while our study showed that routine English hospital data can be used to generate a composite indicator to monitor trends in maternal morbidity outcomes during childbirth, the quality and reliability of this monitoring indicator is dependent on the quality of the hospital data. Using the available data, we found that some codes were unusable due to major concerns about their validity. We could not confirm the reliability of the indicator due to mismatches between the rates of the individual components of the EMMOI calculated using the HES data and the rates reported by population-based epidemiological studies in the UK. The ongoing efforts to improve the reporting and quality of NHS hospital data , could m"} +{"text": "Researchers and patients agree that rare diseases require new and better therapies. Trials in rare diseases have many challenges, relating to the scarcity of patients and experts, with the inevitable conclusion that large scale studies will require recruitment of patients from multiple centres in different countries. We describe the challenges of setting up an international trial of three steroid regimes in the management of children with Duchenne muscular dystrophy, funded by the US-based NIH(NINDS).Contract negotiations proved difficult. Differences in the definition of sponsorship between the US and the UK required a major discussion before resolution could be achieved. Risk aversion on behalf of all parties caused major delays. Concerns about exchange rate fluctuations meant that individual researchers were asked to bear the risk of an adverse shift in the exchange. 
Most of the 40 participating sites requested individual amendments to the model contract, in part because of a lack of compatibility with standard contracts commonly used in the different participating countries; recurrent issues included the currency to be used for payments and who bore responsibility for indemnification.The discrepancies in the interpretation of the EU Clinical Trials Directive across European countries and the differences between the regulatory and ethical approval processes in Europe, US and Canada meant that we had five separate and distinct timelines and frameworks to deal with in terms of getting regulatory, ethical and local approvals.A concerted approach is needed to resolve the specific challenges of setting up multicentre studies in rare diseases"} +{"text": "The force, mechanical work and power produced by muscle fibers are profoundly affected by the length changes they undergo during a contraction. These length changes are in turn affected by the spatial orientation of muscle fibers within a muscle (fiber architecture). Therefore any heterogeneity in fiber architecture within a single muscle has the potential to cause spatial variation in fiber strain. Here we examine how the architectural variation within a pennate muscle and within a fusiform muscle can result in regional fiber strain heterogeneity. We combine simple geometric models with empirical measures of fiber strain to better understand the effect of architecture on fiber strain heterogeneity. We show that variation in pennation angle throughout a muscle can result in differences in fiber strain with higher strains being observed at lower angles of pennation. We also show that in fusiform muscles, the outer/superficial fibers of the muscle experience lower strains than central fibers. These results show that regional variation in mechanical output of muscle fibers can arise solely from architectural features of the muscle without the presence of any spatial variation in motor recruitment. The mechanical output of a muscle is strongly affected by the length changes and shortening velocity of muscle fibers. The well-described force-length and force-velocity properties of muscles have long been considered important constraints on muscle performance Hill, . These fUnderstanding the relationship between the mechanical output of a muscle and the length trajectories of the muscle fibers has been complicated by the presence of spatial heterogeneity within muscles. The strain experienced along a muscle fascicle has been shown to vary along its length , the pennation angle of the fibers (\u03b1) and the length change of the muscle in two regions of the muscle using the following equation . In this muscle, the proximal fibers have the largest pennation angles in the muscle and pennation angle decreases distally , the maximum radius (midbelly) prior to contraction (R2), the initial lengths of the muscle (M), the initial lengths of inner (Li1) and outer (Lo1) fibers, and the muscle strain (\u03b5m). We first use these inputs to solve for the maximum radius (midbelly) at the end of the contraction (R3) using the following equation:We used a simple geometric model to map fiber strains in various regions of a fusiform muscle. We model the muscle as an isovolumetric barrel Otten, . As the o2) using the following equations:We then solve the geometry of the muscle at the end of the contraction and solve for the final length of the fiber . This muscle has a fusiform shape and lacks any internal tendinous inscriptions. 
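The equations referred to above are not reproduced here; as a partial illustration, the sketch below implements one common planar, fixed-thickness formulation of the pennate case, which reproduces the qualitative prediction examined next (fibers at lower pennation angles undergo larger strains for the same regional muscle shortening). The fiber length, angles and shortening values are hypothetical, and the published model may differ in detail.

```python
import numpy as np

def fiber_strain_pennate(l0, alpha0_deg, delta_muscle):
    """Fiber shortening strain in a planar pennate region whose thickness
    t = l0*sin(alpha0) is held constant while the region shortens by
    delta_muscle along the muscle's line of action."""
    alpha0 = np.radians(alpha0_deg)
    t = l0 * np.sin(alpha0)                       # region thickness, assumed constant
    proj = l0 * np.cos(alpha0) - delta_muscle     # new projected fiber length
    l1 = np.sqrt(t ** 2 + proj ** 2)              # new fiber length
    return (l0 - l1) / l0

# Hypothetical proximal (high pennation) and distal (low pennation) regions
# undergoing the same 2 mm of regional shortening, initial fiber length 10 mm
for region, alpha in [("proximal", 30.0), ("distal", 15.0)]:
    strain = fiber_strain_pennate(l0=10.0, alpha0_deg=alpha, delta_muscle=2.0)
    print(f"{region:8s} (alpha = {alpha:.0f} deg): fiber strain = {strain:.3f}")
```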
The shape of the muscle is ideal given some of the geometric assumptions made in the model.The dimensions inputted to the model were based on the palmaris longus muscle of leopard frogs and that regional differences in fiber strain followed the predicted trend but were not statistically significant (p = 0.071).Our simple pennate muscle model predicts that the fibers with a lower pennation angle will undergo larger strains for a given amount of muscle shortening. Since the model is based on the architectural properties of the lateral gastrocnemius of wild turkeys, this suggests fibers from the distal part of the muscle will see greater strains Figure . This reIn many cases, the simple geometric model predicted larger strains than those actually observed. The most likely cause of this discrepancy is the explicit assumption in the model that the thickness of the muscle remains constant during the contraction. It has been previously shown that the thickness of a pennate muscle does increase during a contraction and that regional differences in fiber strain followed the predicted trend but were not statistically significant (p = 0.089).Our empirical measures of fiber strain in a fusiform muscle generally support the predictions of the simple model Figure . We consAn implicit assumption made in both our model and our empirical measurements is that the strain measured along fiber is representative of the strain at the level of sarcomeres. Here we are assuming that on average sarcomere lengths are homogeneous throughout the muscle. However, it is possible that alterations in the number of sarcomeres in series in fibers from different regions can function to counteract the effects of muscle architecture. For example, if the inner region of fusiform muscle has more sarcomeres in series the larger fiber strain observed would be distributed over more sarcomeres thereby reducing any effects on force production. We believe that the model and empirical data presented here can serve to inform hypothesis about regional variation in sarcomere length and number.The empirical data presented in this study were all performed at loads corresponding to 50% of maximum isometric force. The use of isotonic contractions allowed us to make measurements of fiber length without dynamic changes in length of series elastic elements. However, previous studies have shown that the relationship between fiber strain and muscle strain can vary as a function of force where all the motor units of the muscle are recruited. Any regional variation in fiber strain can be reasonably attributed to regional variation in architecture. In studies where isolated muscle experiments are not possible or practical, investigators can track regional fiber strain while passively actuating the joint or joints of interest. Again, any regional variation in fiber strain can be reasonably attributed to regional variation in architecture since the muscle is not active. Unraveling the relative contributions of structural and neural drivers of regional heterogeneity within a muscle will be an important future step in understanding muscle function during movement.The goal of this paper has been to highlight the potential for regional variation in strain and work within a muscle. 
Such variation may arise from purely architectural variation or could arise from regionally specific motor control strategies and well-defined neuromuscular compartmentalization (Higham and Biewener). The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."}
+{"text": "Background and Objectives: It has been suggested previously that increased width of the midfacial structure is associated with the development of palatal clefting. One of the most important heritable characteristics predisposing towards the development of orofacial clefting in an embryo is craniofacial morphology. The aim of the study was to compare the nasomaxillary width of parents of children with unilateral complete cleft lip alveolus and palate with that of parents of noncleft children. Methods: 25 biologic parent sets of children with unilateral complete cleft lip alveolus and palate and 25 biologic parent sets of noncleft children were included in this study for PA cephalometric analysis. Results: There was no statistically significant difference between the study and control groups. An association was found between the side of the cleft in the affected children and the side of narrower nasomaxillary width in their parents. Interpretation and conclusion: The results of this study contrast with those of previous studies. We observed a narrower nasomaxillary width, which suggests that this feature may be of morphogenetic importance in the etiopathogenesis of orofacial clefting in this geographic and ethnic group. Perhaps one of the most important heritable characteristics predisposing towards the development of OFC in an embryo is craniofacial morphology. Fully defining the parental craniofacial morphology in OFC will aid both the identification of the OFC morphogenes and the detection and counseling of parents determined to be at risk of having more children with OFC. Moreover, the identification of microform features in the relatives of subjects with OFC, including craniofacial form, lip pits and nasal deformities, will also assist in the elucidation of gene-gene and gene-environment interactions. Transverse asymmetry of the facial and nasomaxillary skeleton is commonly present in individuals with unilateral complete cleft lip alveolus and palate (UCLP), with the nasomaxillary complex being more asymmetric in affected individuals than in noncleft controls. Interestingly, nasomaxillary asymmetry is also present in the general population, as previously demonstrated by various investigators employing frontal (posteroanterior) cephalometric radiographs. Furthermore, the parents of children with cleft lip and palate also display asymmetric craniofacial features when compared with parents of noncleft children. Children born with clefts of the lip and palate have disruption of the hard tissues of the nasomaxillary skeleton. Asymmetries in the nasomaxillary complex are very common in patients with unilateral complete cleft lip alveolus and palate and have been previously studied by means of posteroanterior radiographs. Although a number of cephalometric studies have identified morphological differences between the parents of children with OFC and comparison groups, no study has investigated craniofacial asymmetry per se as a heritable predisposing factor towards the development of OFC in their offspring.
Specifically, the localization and quantification of craniofacial asymmetry could prove to be a crucial significant research for the morphogenes involved in OFC.Some clefts are caused by single mutant genes, some are due to chromosomal aberrations, and some are caused by specific environmental agents. The great majority are caused by the interaction of genetic and environmental factors each with relatively small effect.Many investigators inferred that if facial shape is genetically determined and also related to predisposing the cleft anomaly, the parents of children with cleft lip/palate should have facial dimensions different from those of general population.The identification of the parental craniofacial form in the etiopathogenesis of OFC may be important for several reasons:1. The parental craniofacial form (the phenotype) represents the hereditary influences on the craniofacial form of their offspring (the genotype). The craniofacial form in orofacial cleft is considered to be a predisposing factor in the development of OFC. For example, increased head and facial widths would logically mitigate against the palatal shelves for making contact The identification of microform features in the relatives of subjects with OFC will assert in the elucidation of the interaction of genes, both with other genes and their products, and with environmental factors. The identification of craniofacial features that are similar in several biological relationships may assert in the identification of genes involved in the etiopathogenesis of OFC.Hence, the current study was designed to evaluate the parental nasomaxillary asymmetry as a risk factor for development of palatal clefts in their offsprings.The subjects for the study were 25 sets of parents (25 biologic mothers and 25 biologic fathers) of children with unilateral complete cleft lip alveolus and palate. The study group consisted of parents of siblings reporting with unilateral complete cleft lip alveolus and palate deformities to the Dept. of Maxillofacial Surgery and Research Center, SDM College of Dental Sciences and Hospital, Dharwad with an average age of 27 for males and 24 for females.The affected children included 12 males and 13 females suffering from nonsyndromic unilateral complete cleft lip alveolus and palate. 68% (n = 17) of affected children had left side cleft and 32% (n = 8) had right side cleft.There were no subjects with syndromic cleft based on family and patient history as well as clinical examination.17 patients had unilateral complete cleft lip alveolus and palate on the left side of which 9 were males and 8 were females.8 patients had unilateral complete cleft lip alveolus and palate on the left side of which 3 were males and 5 were females.The 25 sets of parents constituting the study group had no evidence of any type of cleft while their progenies exhibited unilateral complete cleft lip alveolus and palate. 9 parents sets in the study group had a history of consanguinity.The control group subjects for comparison with study group were 25 sets of parents (25 biologic mothers and 25 biologic fathers) of noncleft children. The control group consisted of parents of noncleft children visiting for routine dental treatment to the Dept. of Pedodontics, SDM College of Dental Sciences and Hospital, Dharwad with an average age of 28 for males and 25 for females. 
Subjects for both the groups were Indian nationals.The criteria of selection for the control group were: Parents whose children had no orofacial clefts, no anomaly of skeletal, genetic, endocrinal or any other nature. Subjects with no gross skeletal defects. Although malocclusion was accepted. A full compliment of teeth from second molar to second molar in both jaws. Individuals who had no diseases of skeletal genetic or endocrine nature.A total of 100 posteroanterior cephalometric radiographs were obtained using a standard technique on a Planemecca PM 2002 CC Proline Panoramic X-ray unit within a period of 6 months and 2. EFollowing the standard technique, the head was stabilized in the cephalostat with the help of ear rods . The posEach radiograph was trac5 and Laspos et al (1997)9:For analysis of these frontal (PA) cephalometric radiographs, five bilateral landmarks were identified and traced on each radiograph and each measurement was assessed , as demoEuryon (REU and LEU), the most lateral point at the parietal surface.Medioorbitale (RMO andLMO), the most medial point on the medial orbital margin.Nasal point (RNA andLNA), the most lateral point in the nasal cavity.The maxillary notch (RMX andLMX), the most medial point on the maxilloalveolar surface.Zygoma (RZA and LZA), the most lateral point on the zygomatic arch.The line connecting latero-orbitale (RLO and LLO) ROL, the intersection between the lateral margin of the orbit and linea innominata, was used as the reference line for vertical measurements.A line drawn perpendicular to ROL at the midpoint of RLO-LLO was used as the reference line, LOM, for horizontal measurements.Following measurements of horizontal asymmetry were assessed on the basis of these landmarks : Head asymmetry, the difference of the perpendicular distance of REU and LEU from LOM. Orbital asymmetry, the difference of perpendicular distance of RMO and LMO from LOM. Nasal asymmetry, the difference of perpendicular difference of the perpendicular distance of RNA and LNA from LOM. Maxillary asymmetry, the difference of perpendicular distance of RMX and LMX from LOM. Zygomatic asymmetry, the difference of perpendicular distance of RZA and LZA from LOM.One examiner traced and measured all the 100 PA cephalogram.The readings of the 25 study parent sets (25 biologic mother and 25 biologic father) and 25 control parent set (25 biologic mother and 25 biologic father) were subjected to the following statistical tests Mean, Standard Deviation, Student\u2019s unpaired \u2018t\u2019 test.The ratio of children with left sides unilateral complete cleft lip alveolus and palate versus right side unilateral cleft lip alveolus and palate (UCLAP) was 2:1 in this study.The side of increased parental nasal and maxillary asymmetry was significantly associated with the opposite side of cleft in their children . 
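As a concrete illustration of the horizontal asymmetry measures defined above (the side-specific results continue below), the short sketch that follows computes the difference between the perpendicular distances of a right/left landmark pair from the LOM midline constructed from the latero-orbitale points. The coordinate convention, example values and function names are illustrative assumptions, not part of the original analysis, which was performed on hand-traced radiographs.

```python
from math import hypot

def horizontal_asymmetry(right_pt, left_pt, rlo, llo):
    """Asymmetry of a paired landmark: difference between the perpendicular
    distances of the right and left points from LOM, the line drawn
    perpendicular to the RLO-LLO (ROL) line at its midpoint."""
    mx, my = (rlo[0] + llo[0]) / 2.0, (rlo[1] + llo[1]) / 2.0   # midpoint of RLO-LLO
    dx, dy = llo[0] - rlo[0], llo[1] - rlo[1]
    norm = hypot(dx, dy)
    ux, uy = dx / norm, dy / norm                               # unit vector along ROL

    def dist_to_lom(p):
        # Because LOM is perpendicular to ROL, the distance of a point to LOM
        # equals the component of (point - midpoint) along the ROL direction.
        return abs((p[0] - mx) * ux + (p[1] - my) * uy)

    return dist_to_lom(right_pt) - dist_to_lom(left_pt)

# e.g. maxillary asymmetry from digitised RMX/LMX coordinates (values hypothetical):
# horizontal_asymmetry((31.2, 58.4), (29.5, 58.9), rlo=(18.0, 40.0), llo=(82.0, 40.2))
```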
In the majority of parents of children with a left-sided cleft, nasal and maxillary width was larger on the right side than on the left. Similarly, in the majority of parents of children with a right-sided cleft, nasal and maxillary width was larger on the left side than on the right. The association of the linear cephalometric variables of parents of nonsyndromic unilateral complete cleft lip alveolus and palate children (study group) with those of parents of healthy noncleft children (control group), and the comparison of the left and right sides of the affected children with their parents, are presented in the accompanying tables. The aim of the study was to evaluate parental nasomaxillary asymmetry as a risk factor for the development of palatal clefts in the offspring by comparing the nasomaxillary width obtained from PA cephalograms of parents of children with nonsyndromic unilateral complete cleft lip alveolus and palate with that of parents of noncleft children. It was hypothesized that the genetic contribution of a characteristic craniofacial structure (nasomaxillary asymmetry) in parents of children with nonsyndromic unilateral complete cleft lip alveolus and palate is related to the predisposition to nonsyndromic unilateral complete cleft lip alveolus and palate in the offspring. The focus of this study was to determine differences in craniofacial morphology on PA cephalograms between parents of nonsyndromic unilateral complete cleft lip alveolus and palate children and the control group. The results of our study showed that parental craniofacial morphology in nonsyndromic unilateral complete cleft lip alveolus and palate does not differ statistically from that of the control group. The association of the side of parental asymmetry with the side of the cleft in the children showed that, in the majority of parents of children with a cleft on the left side, the nasal and maxillary width was larger on the right side than on the left, i.e. children with a cleft on the left side had a smaller parental nasal and maxillary width on the ipsilateral side. In various other studies [Suzuki A et al (1991)6, Raghavan R et al (1994)7, Mossey PA et al (1997)9, Laspos CP et al (1997)9, Mossey PA (1999)10, AL Emran SE et al (1999)11, Yoon YJ et al (2003)13], in contrast with our study, a significant association between the study and control groups for parental asymmetry was found, and a possible role of craniofacial form in orofacial clefting was suggested. The absence of an association between the control and study groups here does not exclude the importance of craniofacial form as a genetic etiologic factor in the genesis of clefting. Some studies did not compare a study group with a control group but found an ipsilateral increase in the nasomaxillary width in the parents as one of the possible causes for the development of unilateral complete cleft lip alveolus and palate [Yoon YJ et al (2003)]. Our study compared only parents of complete cleft lip alveolus and palate children with a control group. There were no quantitative differences in craniofacial structure identified in parents of nonsyndromic unilateral complete cleft lip alveolus and palate children (study group) in this investigation as compared to the parents of noncleft children (control group). This study was unable to associate the craniofacial form of parents with unilateral complete cleft lip alveolus and palate in their children. This does not indicate that such predisposing facial structures are unlikely to be a determinant of cleft susceptibility. Further studies are required in this field, which might include a larger, appropriate sample size. Nine parent sets in the study group gave a history of consanguinity.
This adds to the etiologic heterogenicity of orofacial clefting in our study group. This consanguinity between the parents could be stronger feature compared to the craniofacial form for the development of orofacial clefting in their offsprings.This investigation suggested that unilaterally decreased nasomaxillary width in parents may play as a risk factor for development of palatal cleft in the offspring in our study group. This study suggested that a systematic approach in selection of subjects in both study and control group can help better understanding of genetic factors of craniofacial form associated with development of unilateral complete cleft lip alveolus and palate. This may ultimately contribute in the assessment of risk for palatal clefting in their offsprings. The features of this study was in contrast with many other previous studies suggesting that this feature may be of morphogenetic importance in the etiopathogenesis of OFC in this geographical and ethnic group."} +{"text": "The paper deals with a formation of artificial rock (clinker). Temperature plays the capital role in the manufacturing process. So, it is useful to analyze a poor clinker to identify the different phases and defects associated with their crystallization. X-ray fluorescence spectroscopy was used to determine the clinker's chemical composition. The amounts of the mineralogical phases are measured by quantitative XRD analysis (Rietveld). Scanning electron microscopy (SEM) was used to characterize the main phases of white Portland cement clinker and the defects associated with the formation of clinker mineral elements. The results of a study which focused on the identification of white clinker minerals and defects detected in these noncomplying clinkers such as fluctuation of the amount of the main phases and belite (C2S)), excess of the free lime, occurrence of C3S polymorphs, and occurrence of moderately-crystallized structures are presented in this paper. Clinker is a multiphase mixture and, so far, more than 30 constituent phases have been identified . Th. Th15]. Poor crystallization is reflected by the presence of incomplete crystalline phases SO6 01, and gelsLeafy and lamellar structures were observed SO4 11, and accuIt is possible to distinguish different morphologies and sizes of alite within the same sample. This indicates that baking is inhomogeneous in the oven and that the temperature fluctuates at least at the chamber level relative to the formation of alite. It is known from the literature that, at room temperature, only the monoclinic phase exists.The presence of free lime, the vitreous phase, and lamellar structures proves the instability of the oven temperature during baking or cooling.Excess lime and silica gels also seem to be related to the formulation and even the homogenization of the raw material. In fact, the kaolin is exceptionally rich in silica which results in baking being affected by the uncombined silica gel.SEM observation provides information on the degrees of cooking of clinker.The SEM observations of the poor quality clinker samples (noncompliant with the regulations in force) identified the following points:"} +{"text": "Lactobacillus reuteri which is used in women in the last pregnancy trimester, prevalence of atopic dermatitis in children of the first 6 months of life. One group of women took Lactobacillus reuteri as chewing tablets of probiotic Lactobacillus reuteri, with dairy-limited diet excluding walnuts prior delivery. 
The second group of pregnant women took only proposed diet. Cumulative prevalence of atopic dermatitis was lower in group of children which mothers took Lactobacillus reuteri (6.7%) during pregnancy than in group of children of mothers from control group (24.2%). Analysis of the clinical features of disease showed that in children of the main group was observed mild atopic dermatitis . In children of the control group was middle and severe of atopic dermatitis .To establish effect of probiotic Lactobacillus reuteri by women in the last pregnancy trimester was positive for general condition and status of gastrointestinal tract. Pregnant women which took Lactobacillus reuteri had increased metabolic activity of lactic flora and recovered balance between aerobic/anaerobic microorganisms. Disorders of intestinal microbial balance in pregnant women in control group were not recovered to the end of pregnancy.The use of"} +{"text": "Until recently, parasitic infections have been primarily studied as interactions between the parasite and the host, leaving out crucial players: microbes. The recent realization that microbes play key roles in the biology of all living organisms is not only challenging our understanding of host-parasite evolution, but it also provides new clues to develop new therapies and remediation strategies. In this paper we provide a review of promising and advanced experimental organismal systems to examine the dynamic of host-parasite-microbe interactions. We address the benefits of developing new experimental models appropriate to this new research area and identify systems that offer the best promises considering the nature of the interactions among hosts, parasites, and microbes. Based on these systems, we identify key criteria for selecting experimental models to elucidate the fundamental principles of these complex webs of interactions. It appears that no model is ideal and that complementary studies should be performed on different systems in order to understand the driving roles of microbes in host and parasite evolution. Figure 1). Alteration of any partner may cause a chain of responses in those remaining and change the outcome of infection.The term \u2018symbiosis\u2019 (from Greek \u201cliving together\u201d) was first coined by Albert Bernhardt Frank in 1877, when he described the relationship between fungi and algae in lichens, and was then used to define \u201cthe living together of unlike organisms\u201d by Drosophila melanogaster resistance to Drosophila C virus when associated with the defensive symbiont Wolbachia as early as 1969 and facilitation in 1971 , 1977. Wolbachia . The intolbachia or \u201cmicrolbachia .Acyrthosiphon pisum, the facultative symbionts Hamiltonella and Serratia provide defense against the parasitic wasp Aphidius ervi and Lota lota (burbot), respectively, and increased production of reactive oxygen species to modify the gut microbial community and insure its own development The host and parasite could be bred and maintained in the laboratory. It is a necessary step to perform controlled experimental infections and understand the transmission mechanisms of symbionts of interest. It is also crucial for testing the environmental and host genetic factors that regulate the abundance, diversity and stability of the microbiome.(2) A core microbiome would have been identified in the host suggesting that it selects for specific microbes. 
This in itself suggests that the host should be considered as a holobiont.(3) The parasite shares the same environment as microbes in the host body, which would suggest potential direct parasite-host-associated microbe interaction that remain to be tested.(4) The parasite would be associated with microbes for which the role in virulence will be assessed. The presence of a core microbiome in parasites and the role of parasite-associated microbes in driving the evolution of host defenses remain to be demonstrated.(5) Techniques have been developed to manipulate the composition of the microbiome of the host and/or parasite. The role of microbes in the result of infection can only be tested through manipulation of the microbiome and injection of isolated microbes. It is necessary to characterize the role of microbes individually and as a community in susceptibility, resistance, and infectivity.(6) The host and the parasite would have a wide geographical species range and ecological diversity, which would allow comprehensive fields studies to understand how environmental factors influence host-parasite-microbe interaction. It would allow to examine the role of microbes in local adaptation of parasites through cross-infection experiments.(7) The host and/or parasite would have short generation time, which would facilitate the use experimental evolution to test the role of microbes in host or parasite adaptation. For instance, the hologenome theory would predict that associated microbes participate in the arms race through shorter generation time and higher mutation rate but this hypothesis remains to be tested.(8) Closely related species of hosts could be parasitized with closely related species of parasites, thus allowing comparative studies of the role of microbes in the interaction and testing their role during parasite evolution, host switch, and host or parasite speciation.(9) The parasite would have various effects on the phenotype of its host, so that the role of microbes in any aspect of the \u2019normal\u2019 behavior or appearance of infected organism may be more readily detected and more reliably assessed for fitness consequences.(10) The genome of the host and parasite would have been sequenced, facilitating the use of high throughput comparative genomics approaches to characterize the genes involved inAll authors listed, have made substantial, direct and intellectual contribution to the work, and approved it for publication.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "The purpose of the study was to compare the knowledge scores of medical students in Problem-based Learning and traditional curriculum on public health topics.We planned a cross-sectional study including the fifth and sixth year medical students of Dokuz Eylul University in Turkey. The fifth year students were the pioneers educated with PBL curriculum since the 1997\u20131998 academic year. The sixth year students were the last students educated with traditional education methods. We prepared 25 multiple-choice questions in order to assess knowledge scores of students on selected subjects of Public Health. Our data were collected in year 2002.Mean test scores achieved in PBL and traditional groups were 65.0 and 60.5 respectively. PBL students were significantly more successful in the knowledge test (p = 0.01). 
The knowledge scores of two topics were statistically higher among PBL students. These topics were health management and chronic diseases.We found that mean total evaluation score in the PBL group was 4.5 points higher than in the traditional group in our study. Focusing only on the knowledge scores of students is the main limitation of our study. Upon the graduation of the first PBL students in the 2002\u20132003 academic year, we are planning additional studies regarding the other functions of a physician such as skill, behaviour and attitude. During the last 25 years, ideas concerning the aim, structure and system of medical education have been discussed. Debates generally have arisen from the perception that medical education couldn't serve the purpose of improving health standards of the communities .\"Health for All\" was adopted in 1977 and launched at the Alma Ata Conference in 1978 to underline the fact that large numbers of people and even whole countries were not enjoying an acceptable standard of health . In ordeIn the Edinburgh Declaration of the World Medical Association in 1988, similar problems were mentioned and the purpose of the medical education was declared as training physicians capable of improving communities' health standards. This declaration suggested that medical education should be focused on common health problems of the large communities, and the medical school curriculum should be restructured according to the health requirements of the community. According to the declaration, medical students must gain professional skills and social values in addition to theoretical knowledge and the principle of lifelong medical education should be adopted .The ideas and suggestions mentioned above have aroused strong winds of change in the medical education arena. Mc Donald et al. from Mc Master University determined an approach based on the community's main health problems and stressed the importance of focusing on these problems while designing their medical school's curriculum .Since then, this approach has been adopted by many medical schools all over the world. The schools which designed their curriculum according to the priority health problems of the community, managed to raise the physicians' awareness of their community and the preventive measures and solutions of their main health problems.In Turkey, problems of medical education have been discussed since early 1970s. Several studies showed that the goals of medical education did not overlap with the health requirements of the Turkish community. The education of health professionals was abstracted from the realities of the country. In 1990s Turkish Parliament and Turkish Medical Association determined and reported the difficulties of medical education. In a 1991 report of the Turkish Parliament, the facts that the number of qualified physicians who were trained according to the health needs of the country was limited and that this number was not sufficient to improve its health standards were underlined. Several deans from different medical schools of the country contributed to Turkish Parliament's study and reported that a greater importance should be given to the health problems of the population while planning the educational programs and the medical education should not be restricted to the university hospitals .In The Turkish Medical Association's report the fact that medical education was not relevant to health needs of the country was emphasized. 
New medical graduates were not fully aware of common national health problems. The recommendations of the Turkish Medical Association to improve the health standards of the Turkish population were; training the general practitioners capable of working effectively in the primary health care and restructuring the medical education on a community basis and implementing Problem-based Learning methods .International developments and the reports of Turkish Parliament and Turkish Medical Association led the faculty of Dokuz Eyl\u00fcl University School of Medicine (DEUSM) to seek solutions to the problems mentioned in the reports. As a result, Problem-based Learning (PBL) a more active and student-centred learning- was adopted and launched in the 1997\u201398 academic year. One of the main features of the education program was its relevancy to the philosophy of community-based medical education .The curriculum of DEUSM was structured considering social, biological, behavioural and ethics objectives of medical education. The curriculum was structured in a modular system and adopted to a spiral configuration providing horizontal and vertical integration. During the first three years of undergraduate education, PBL sessions are the main focus of a modular structure. The weekly schedule of a module allowed for all the educational activities such as PBL sessions, lectures, field studies, communication skills and clinical skills courses lectures existing one hour a day in the weekly program support the PBL sessions and independent learning .PBL sessions were based on written problems, which are likely to happen in real life. Special emphasis was also given to the integration of knowledge, acquisition of professional and moral values and to the development of communication skills.Medical knowledge and practical skills that a physician is supposed to have were on the basis of the advice of Turkish Medical Association and the faculty departments. The Department of Public Health also contributed to the education program by setting social standards and determining the most important health problems of the community.PBL Curriculum of DEUSM aimed to teach the students the main health problems of the community, their prevention and ways of treatment.Public Health topics of Dokuz Eyl\u00fcl University School of Medicine consists of;\u2022 Holistic approach in health,\u2022 Basic principles of Public Health,\u2022 Personal and social points of view on health events,\u2022 Bio-psychosocial (holistic) approach to any individual,\u2022 Principles of preventive medicine,\u2022 Structure and mechanisms of national health organization,\u2022 Demographic structure and trends, factors affecting them,\u2022 Basic principles of planning and conducting a scientific research on health,\u2022 Sound knowledge on leading health problems of the country, personal and social approaches for their solutions,\u2022 Environmental and occupational factors threatening community health and their prevention.Cases in the scenarios of the PBL modules were selected among common and important health problems, for which early diagnosis or prevention is possible. Lectures and small group studies with students were also organized to contribute to the educational effectiveness of the modules. Public Health topics of the medical education may be achieved more easily when theoretical knowledge and practical skills are complemented by field studies . 
Prior to the implementation of the PBL curriculum in the 1997\u20131998 academic year, lectures on Public Health were presented to the first, the third and the last year students by the faculty members of the Department of Public Health. Lectures on bio-statistics and research methods were given weekly throughout the first year. The other topics of Public Health were covered in 72-hour Public Health Courses at the end of the third year. Comparison of the old and new curricula using some measurement tools is mandatory to observe the effects of innovations. In the literature, the determination of students' performances in scientific or licensing examinations was used to compare the efficiency of traditional education and PBL. Nandi P. et al. reviewed the studies and meta-analyses comparing PBL and traditional lecture-based education methods. In a meta-analysis of the data published between 1980\u20131999, they concluded that PBL helped students show slightly but not significantly better performance than the others on clinical examinations. Blake et al. compared formerly lecture-based educated and recently Problem-based educated graduates of Missouri-Columbia School of Medicine concerning their performances on medical licensing examinations. They reported that mean scores achieved on these examinations were better among graduates of PBL, but the difference between old and new graduates' scores was not statistically significant. Some other studies have attempted to compare students' performances in special areas of medicine instead of general evaluation. Antepohl and Herzig conducted a randomized controlled study among the students who enrolled for the course of basic pharmacology at the University of Cologne. They randomly divided the students into two groups of PBL and traditional lecture-based learning in order to compare their final examination scores. They could not find any significant difference between the two groups. However, in short essay questions there was a tendency towards higher scores among the students in the PBL group. The authors also found that the PBL students reached almost identical scores in their multiple choice questions and their short essay questions, whereas the students who had been in the lecture-based group scored significantly lower in their short essays than in their multiple choice questions. In a multi-centric study conducted by Schmidt et al., comparison of PBL and lecture-based learning students showed that PBL students had higher knowledge scores in the areas of primary care services, psychological health, collaboration of different sectors on health and occupational ethics. The purpose of our study was to compare the knowledge scores of medical students in PBL and traditional curricula on public health topics. We planned a cross-sectional study including the fifth and sixth year medical students of DEUSM. The fifth year students (PBL students) were the pioneers educated with the PBL curriculum since the 1997\u20131998 academic year. The sixth year students were the last students educated with traditional education methods. The knowledge scores of students on Public Health topics were evaluated. In both the PBL and traditional curricula, all the knowledge acquired in the first five years of the school was reviewed during the two-month Public Health internship period in the sixth year.
Since this period may remind the students of some issues which may have been previously forgotten, we decided to exclude the sixth year students who have completed their internship period. 56 fifth year students and 78 sixth year students who have not so far completed their internship period in the Department of Public Health were included in our study. Participation rates were 96.4% (54 out of 56 students) in the fifth year and 100% in the sixth.Before the application of the inquiry form, the purpose of the study was explained to the students and their oral consents were obtained.We analyzed the knowledge scores of the two groups of students' on Public Health issues. PBL and traditional programs were the independent variables. Descriptive variables were age and gender.By reviewing a five yearlong section of educational programs, we determined that nine Public Health main topics were common to both PBL and traditional programs. The main topics were communicable diseases, epidemiology, mother and child health, health management, chronic diseases, occupational health, nutritional principles in community, demography and environmental health.We prepared 25 multiple-choice questions in order to assess knowledge scores of students on selected subjects. The number of questions related to each topic was proportional to the time allocated for each of the topic in the curriculum. The content validity of the questions was tested by consulting experts in relevant fields. All the data were collected between February and March 2002. Scoring procedure was implemented over \"100 points\" where each correct answer was scored \"four points\" and each wrong answer was scored \"zero point\".Data were subjected to statistical analysis by the chi-square test and the t-test in SPSS 10.0.Overall mean age was 23.6 \u00b1 2.1 (21\u201345) years. The rates of male and female students were 55.4 % and 44.6 % respectively. There were no statistically significant differences between the two groups regarding mean ages, gender distribution or other personal variables.Mean scores achieved at the 25 question-test were 65.0 in PBL group and 60.5 in the traditional group. Students in the PBL group were significantly more successful in the knowledge test Table-.The knowledge scores of seven topics were higher among students in PBL curriculum. These topics were communicable diseases, epidemiology, health management, chronic diseases, occupational health, demography and environmental health. Traditional curriculum students were found to be more knowledgeable on two topics; mother and child health and nutritional principles in the community. However, the differences between PBL and traditional students' knowledge scores in only two topics, chronic diseases and health management, were statistically significant Table-.In our study, we found a statistically significant difference between knowledge scores of PBL and Traditional education groups in favour of the PBL group Table .The students of the PBL group had higher knowledge scores on 7 of the 9 identified topics. But the difference between mean scores of the groups was statistically significant in only two topics, \"health management\" and \"chronic diseases\". The reason of significantly higher knowledge scores among the students in PBL group may be that these students have more opportunities such as observations during field studies, work-shops or presentations to study on these two topics than those in the other group. 
They experienced a two week training period in a \"community health center\" at the end of the first year and observed the health center services and prepared a structured form concerning the procedures of health centers. They also studied in \"community health centers\" as small groups including two students in each fortnightly during their third year in the school and completed comprehensive forms about the topics on which they studied. The reason of better knowledge scores of PBL group on \"chronic diseases\" may result from the special educational efforts improving the effects of relevant modules on this topic. Actually special learning opportunities were provided for all topics and we were expecting to find a difference on remaining 7 topics too. On the other hand, the students in the traditional education group had slightly higher mean scores about the topics of \"mother and child health\" and \"nutritional principles in community\" although the differences between the groups' mean scores were not statistically significant. These knowledge deficiencies among PBL students were already revealed and an additional module was implemented in the curriculum to compensate them. Curriculum of DEUSM is being looked over by curriculum committee continuously and the departments try to make interventions for problematic parts.We found that the mean total evaluation score in the PBL group was 4.5 points higher than in the traditional group in our study. Actually, we expected a much larger difference between the two groups in favour of PBL students for their education was supported by lectures, small group studies and field studies in addition to the PBL sessions. They also had the advantage of studying on Public Health issues in each year of the school by means of homogenous allocation of the modules and blocks in the first five years instead of accumulation in a short period of time as it was in the traditional curriculum. Therefore, the difference between the evaluation scores of the groups did not meet our expectations although it was statistically significant. The reason for this underachievement of Public Health objectives among our PBL students may be related to both students and PBL tutors. The common perception among the students that they have enough knowledge to say something about social and behavioural aspects of PBL modules lead them to focus on biological objectives more and they do not need to study on social issues in depth. Furthermore, a common misunderstanding among faculty members that achieving the Public Health objectives in PBL is just the responsibility of the Department of Public Health may have led the PBL tutors to withdraw from the responsibility of focusing on these subjects sufficiently. Additionally, when they are less informed or less equipped with supporting material about Public Health objectives, they may not have felt very competent while facilitating their groups by asking appropriate questions.One assumption of curricular comparison studies, included this one, is that students will do better either in one or the other type of curriculum. However, each curriculum demands different skills and deployment of learning strategies from the students. This is important because, it is well known in the educational literature that not all students do well in one particular learning program and that they do better when the program adapts to their preferred way of learning. 
The studies of learning styles may shed light on why the differences between performance scores are always so close when medical curricula are compared. As we mentioned before, in DEUSM the written problems used in PBL sessions are oriented to biological as well as social and behavioural objectives. In order to achieve all three of these objectives, the tutors must attach the same importance to each subject and ensure that their groups give enough time and effort to each objective. But when the tutors get inadequate information and support from the experts of the related subjects, they generally focus only on biological objectives and their groups cannot manage to integrate all objectives. If the tutors are less sensitive to objectives other than biological ones, then their students will be less motivated to learn and, like their educators, will be equally insensitive to Public Health topics. In order to prevent this, faculty members of the Department of Public Health who take part in the scenario committees review the PBL problems regarding Public Health objectives. They make every effort to ensure that the Public Health objectives are included while writing the problems and that the tutors are sufficiently informed on these objectives before their sessions. The Field Work Committee has been trying to increase students' motivation and raise their awareness of Public Health issues to increase the effectiveness of field studies. Focusing only on the knowledge scores of students is the main limitation of our study. Upon the graduation of the first PBL students in the 2002\u20132003 academic year, we are planning additional studies regarding the other functions of a physician such as skill, behavior and attitude. The authors declare that they have no competing interests. EG conceived of the study, participated in the design of the study and drafted the manuscript; BM conceived of the study, participated in the design of the study and its coordination, and performed the statistical analysis; GA participated in the design of the study and performed the statistical analysis; RU conceived of the study and participated in the design of the study. All authors read and approved the final manuscript. While working in a health center as a general practitioner, you have noticed that hypertension prevalence is high among the people living in the region under your responsibility.
Which of the following would be your choice as a primary prevention method?
a) I would educate the hypertensive patients on their disease.
b) I would treat the hypertensive patients with antihypertensive drugs.
c) I would send the hypertensive patients to a secondary care hospital for further investigation and treatment.
d) I would educate healthy individuals on risk factors associated with hypertension and prevention methods.
Which of the following is the most common childhood nutritional disorder in Turkey?
a) Protein calorie deficiency
b) Marasmus
c) Iron deficiency anaemia
d) Rickets
Which of the following is wrong?
a) Demography is a science that analyses the body, structure and differentiations of human populations.
b) The goal of family planning is to decrease the current number of the population.
c) The dependent population ratio is found by dividing the total number of people younger than fifteen years and older than 65 years of age by the total number of people between 15\u201365 years of age.
d) The principle of a pronatalist population policy is to increase the total number of the population.
Which of the following is not one of the basic records kept in a health center?
a) Household determination card.
b) Follow-up card for females between 15\u201349 years old.
c) Follow-up card for aged individuals.
d) Antenatal and postnatal follow-up card.
e) Infant and child follow-up card.
Which of the following is not among the responsibilities of an occupational health unit?
a) Health prevention services in work settings
b) Work safety preventions
c) Following up the health and safety conditions in work settings
d) Preventing any interruption in production
e) Giving outpatient clinic services in work settings.
An 11 year old girl was bitten by a neighbour's dog while she was playing in her house garden. Which of the following is not required as an immediate intervention?
a) To investigate if the dog is vaccinated.
b) To vaccinate the girl for rabies prevention.
c) To clean the wound with soap and water.
d) To apply one dose of tetanus vaccine.
e) To try to understand how the dog bit the girl.
Which of the following is the most commonly used effective family planning (contraception) method?
a) Intrauterine device
b) Withdrawal (coitus interruptus)
c) Combined oral contraceptives
d) Condom
e) Subcutaneous implants
Which of the following best represents the environmental health related responsibilities of a general practitioner who works in a health centre?
a) Waste control and giving education to correct misapplications
b) Analyzing and chlorinating drinking water, control of potable water
c) Controlling and improving the condition of toilets
d) Coordination of the conduction of the above mentioned services by auxiliary personnel of the health centre, although these services are among the tasks of the municipality.
e) All of the statements above are true.
After looking over one-year medical records of an internal medicine outpatient clinic, it was found that 25% of the diagnoses were Diabetes mellitus.
Regarding this result, a screening procedure was conducted in the field and the Diabetes mellitus prevalence was found to be 5%.
Which of the following cannot be concluded from the situation described above?
I) The outpatient clinic may admit people coming from other regions.
II) Outpatient clinic records represent the health status of the community.
III) One-fourth of the patients have a Diabetes mellitus diagnosis.
IV) Field studies are needed to determine the real prevalence of a disease.
a) I, II
b) I, III
c) II, III
d) I, II, IV
e) II, III, IV
The pre-publication history for this paper can be accessed here:"}
+{"text": "The development of miniature sensors that can be unobtrusively attached to the body or can be part of clothing items, such as sensing elements embedded in the fabric of garments, has opened countless possibilities for monitoring patients in the field over extended periods of time. This is of particular relevance to the practice of physical medicine and rehabilitation. Wearable technology addresses a major question in the management of patients undergoing rehabilitation, i.e. do clinical interventions have a significant impact on the real life of patients? Wearable technology allows clinicians to gather data where it matters the most to answer this question, i.e. the home and community settings. Direct observations concerning the impact of clinical interventions on mobility, level of independence, and quality of life can be performed by means of wearable systems. Researchers have focused on three main areas of work to develop tools of clinical interest: 1) the design and implementation of sensors that are minimally obtrusive and reliably record movement or physiological signals, 2) the development of systems that unobtrusively gather data from multiple wearable sensors and deliver this information to clinicians in the way that is most appropriate for each application, and 3) the design and implementation of algorithms to extract clinically relevant information from data recorded using wearable technology. Journal of NeuroEngineering and Rehabilitation has devoted a series of articles to this topic with the objective of offering a description of the state of the art in this research field and pointing to emerging applications that are relevant to the clinical practice in physical medicine and rehabilitation. Understanding the impact of clinical interventions on the real life of individuals is an essential component of physical medicine and rehabilitation. While assessments performed in the clinical setting have value, it is difficult to perform thorough, costly evaluations of impairment and functional limitation within the time constraints and limited resources available in outpatient units of rehabilitation hospitals. Furthermore, it is often questioned whether assessments performed in the clinical setting are truly representative of how a given clinical intervention affects the real life of patients. This observation has fostered a great deal of interest in the development and validation of outcome measures that largely rely on the use of questionnaires. A number of clinical applications of wearable systems in physical medicine and rehabilitation have emerged in the past few years.
They range from simple monitoring of daily activities, for the purpose of assessing mobility and level of independence in individuals, to integrating miniature sensors to enhance the function of devices utilized by patients to perform motor tasks that they would be otherwise unable to accomplish.Monitoring functional motor activities was one of the first goals of research teams interested in clinical applications of wearable technology. The focus was initially on using accelerometers -8 or a cA level of complexity was added when researchers started investigating motor disorders and the possibility of utilizing wearable technology to assess the effect of clinical interventions on the quality of movement observed while patients performed functional tasks. Two applications worth mentioning are the one to assess symptoms and motor complications in patients with Parkinson's disease -14 and tFinally, recent studies have been focused on integrating wearable, miniature sensor technology with orthoses, prostheses, and mobility assistive devices. Sensor technology is particularly appealing in these applications because it allows implementing closed-loop strategies that take advantage of the increased complexity and flexibility that robotics is contributing to the design of orthoses, prostheses, and mobility assistive devices. Namely, the characteristics of such devices can be constantly modified as a function of the task individuals are engaged into and environmental disturbances ,24.In all the emerging applications summarized above, either continuous recording of sensor data or at least monitoring over extended periods of time are necessary to design and implement an effective clinical intervention. Unobtrusive, wearable systems providing ease of data gathering and some processing capabilities are essential to achieve the objective of making the leap between the preliminary results obtained as part of the research carried on so far and the daily clinical practice of physical medicine and rehabilitation. Three areas of work are essential to achieve this objective: 1)the development of wearable sensors that unobtrusively and reliably record movement and other physiological data relevant to rehabilitation; 2)the design and implementation of systems that integrate multiple sensors, record data simultaneously from wearable sensors of different types, and relay sensor data to a remote location at the time and in the way that is most appropriate for the clinical application of interest; and 3)the development of methodologies to manipulate wearable sensor data to extract information in a clinically relevant manner to perform clinical assessments or control devices aimed at enhancing mobility in individuals with conditions that limit their level of independence. A series of papers have been assembled to provide the readership of Journal of NeuroEngineering and Rehabilitation with a description of the state of the art of the application of wearable technology in physical medicine and rehabilitation.A first set of the papers that have been assembled for publication on Journal of NeuroEngineering and Rehabilitation on the topic of wearable technology in physical medicine and rehabilitation has the objective of describing recent advances in wearable sensor technology. Two manuscripts describe attempts by different groups of measuring angular displacements for upper and lower extremity joints by embedding conductive fibers into the fabric of undergarments. 
The paper by Gibbs and Asada, entitled \"Wearable conductive fiber sensors for multi-axis human joint angle measurements\", reports encouraging preliminary results concerning monitoring lower limb joint displacements during ambulation by utilizing such technology. The manuscript by Tognetti et al, entitled \"Wearable kinesthetic system for capturing and classifying upper limb gesture in post-stroke rehabilitation\", describes the design and implementation of a system similar to the one proposed by Gibbs and Asada but geared toward monitoring movements of the upper extremities. The authors also explore the application of these wearable sensors to monitoring motor recovery in post-stroke individuals. Simone and Kamper focus their contribution on unobtrusively measuring finger movements in patients undergoing rehabilitation. Their manuscript \"Design considerations for a wearable monitor to measure finger posture\" summarizes the authors' recent work toward developing ways to record fine motor control tasks involving manipulation of objects requiring fine motor control of the hand and fingers. This technology has immediate application in patients such as post-stroke individuals undergoing rehabilitation that targets fine motor control skills. While initial research in the area of wearable technology was aimed at combining existing, miniature sensors with special fabrics or wireless technology, recent advances in this field have been focused on the development of sensing elements that can be even more easily embedded in clothing items. An example of such effort is reported in the paper by Dunne et al entitled \"Initial development and testing of a novel foam-based pressure sensor for wearable sensing\". This paper summarizes positive preliminary results by the research team aimed at measuring shoulder movements, neck movements, and scapular pressure. The sensing elements can also be used to monitor respiratory rate. Devoted to monitoring systemic responses is the last of the papers focused on wearable sensors. In this manuscript, Yan et al describe a new method to reliably measure heart rate and oxygen saturation. The paper is entitled \"Reduction of motion artifacts in pulse oximetry by smoothed pseudo Wigner-Ville distribution\" and demonstrates how advanced processing techniques may be necessary to derive reliable data when recordings are performed in the field.A second area of research relevant to the application of wearable technology in physical medicine and rehabilitation concerns the integration of wearable sensors into systems. Following the seminal work by Park and Jayaraman , severalA final set of papers is focused on applications that are relevant to physical medicine and rehabilitation. Sherrill et al describe in their paper entitled \"A clustering technique to assess feasibility of motor activity identification in COPD patients via analysis of wearable-sensor data\" a method to design classifiers of motor activities such as walking and stair climbing. The proposed technique relies on the examination of small datasets via clustering methods. Measures are derived from clusters associated with different motor activities to evaluate whether the set of wearable sensors and features derived from the recorded data are suitable to reliably identify the motor tasks of interest. Wang and Winters put the information gathered via wearable systems into a clinical context via processing that relies on neuro-fuzzy models. 
Their paper entitled \"A dynamic neuro-fuzzy model providing bio-state estimation and prognosis prediction for wearable intelligent assistants\" presents encouraging results indicating that the proposed method can put in the correct context dynamic changes observed in post-stroke individuals undergoing rehabilitation. Wang and Kiryu in their manuscript entitled \"Personal customizing exercise with a wearable measurement and control unit\" summarize their results on customizing machine-based exercise routines on the basis of physiological data that are continuously gathered from individuals performing such routines. Their results demonstrate the feasibility of a closed-loop system that optimally adapts workload. Dozza et al describe a wearable system designed to reduce body sway in individuals with severe vestibular problems. Their manuscript entitled \"Influence of a portable audio-feedback device on structural properties of postural sway\" summarizes positive results obtained with a prototype wearable system that utilizes audio-feedback to improve balance. Finally, Mavroidis et al describe how miniature sensor technology can be used to design a new generation of smart rehabilitation devices. Three devices are described in their paper entitled \"Smart portable rehabilitation devices\": a passive motion elbow device, a knee brace that provides variable resistance by controlling damping via the use of an electro-rheological fluid, and a portable knee device that combines electrical stimulation and biofeedback. These devices combine sensing technology and control strategies to enhance rehabilitation.This collection of papers provides an up-to-date description of the state of the art in the field of wearable technology applied to physical medicine and rehabilitation. The field is rapidly advancing and numerous research groups have already demonstrated applications of great clinical relevance. The potential impact of this technology on the clinical practice of physical medicine and rehabilitation is remarkable. A significant shift in focus is possible thanks to wearable technology. While the main focus of clinical assessment techniques is currently on methods that are implemented in the clinical setting, wearable technology has the potential to redirect such focus on field recordings. This is expected to allow clinicians to eventually benefit from both data gathered in the home and the community settings during the performance of activities of daily living and data recorded in the clinical setting under controlled conditions. Complementarities are expected between field and clinical evaluations. Future research will surely address optimal ways to combine these two types of assessment to optimize the design of rehabilitation interventions."} +{"text": "We have hypothesized that these PGE2 dependent effects on firing rate are due to changes in the inherent electrical properties of VMPO neurons, which are regulated by the activity of specific ionic currents.Physiological and morphological evidence suggests that activation of the ventromedial preoptic area of the hypothalamus (VMPO) is an essential component of an intravenous LPS-dependent fever. In response to the endogenous pyrogen prostaglandin E2 dependent firing rate responses were not the result of changes in resting membrane potential, action potential amplitude and duration, or local synaptic input. 
However, PGE2 reduced the input resistance of all VMPO neurons, while increasing the excitability of temperature insensitive neurons and decreasing the excitability of warm sensitive neurons. In addition, the majority of temperature insensitive neurons responded to PGE2 with an increase in the rate of rise of the depolarizing prepotential that precedes each action potential. This response to PGE2 was reversed for warm sensitive neurons, in which the prepotential rate of rise decreased. To characterize the electrical properties of VMPO neurons, whole-cell recordings were made in tissue slices from male Sprague-Dawley rats. Our results indicate that PGE2 is having an effect on the ionic currents that regulate firing rate by controlling how fast membrane potential rises to threshold during the prepotential phase of the action potential. We would therefore suggest that PGE2 changes firing rate by acting on these inherent electrical properties of VMPO neurons. This classification criterion is based on previous studies that indicate a functional difference for neurons which show this degree of inherent thermosensitivity. HJR carried out the majority of cellular recordings and data analysis. JDG conceived of the study and participated in its design, coordination and completion. Both authors contributed equally to the drafting of this manuscript."} +{"text": "Coevolution between pairs of antagonistic species is generally considered an endless \"arms race\" between attack and defense traits to counteract the adaptive responses of the other species. When more than two species are involved, diffuse coevolution of hosts and parasitoids could be asymmetric because consumers can choose their prey whereas prey do not choose their predators. This asymmetry may lead to differences in the rate of evolution of the antagonistic species in response to selection. The more long-standing the coevolution of a given pair of antagonistic populations, the higher should be the fitness advantage for the consumer. Therefore, the main prediction of the hypothesis is that the consumer trophic level is more likely to win the coevolution race. We propose testing the asymmetry hypothesis by focusing on the tritrophic system plant/aphid/aphid parasitoid. The genetic variability in the virulence of several parasitoid populations and in the defenses of several aphid species, or of several clones of the same aphid species, could be compared. Moreover, the analysis of the neutral population genetic structure of the parasitoid as a function of the aphid host, the plant host and geographic isolation may complement the detection of differences between host and parasitoid trophic specialization. Genetic structures induced by the arms race between antagonistic species may be disturbed by asymmetry in coevolution, producing neither rare genotype advantages nor coevolutionary hotspots. Thus, this hypothesis profoundly changes our understanding of coevolution and may have important implications in terms of pest management. Coevolution is the result of reciprocal selective pressures exerted by interacting species. Many studies have been devoted to the hypothesis of an endless \"arms race\" between antagonistic species, in which each species develops escalating attack and defense traits to counteract the adaptive responses of the other species. In reference to Lewis Carroll's book \"Through the Looking Glass\", Van Valen named this the Red Queen Hypothesis (RQH). The RQH was initially developed in the context of multiple species interactions, to account for the constant probability of species extinction. 
However, as modeling the coevolution of many species involves a number of difficulties, later studies based on the RQH have mostly been limited to interactions between pairs of species. From this restricted situation, two main predictions can be made: first, arms races induce an advantage of rare genotypes of which resistance or virulence is more efficient and thus, frequency-dependent fluctuations of resistance and virulence may be predicted. Second, a geographic view of the coevolutionary process suggests a dynamic mosaic structure, with local and temporary \"hot spots\" of antagonistic species coevolution .However, for plant-pathogen interactions and for Nevertheless, the RQH continues to lie at the heart of the debate concerning the coevolution of antagonistic species, probably because of the lack of plausible alternative hypotheses. Although metapopulation structures or time lags between the responses of antagonistic species to selective pressures have been evoked as reasons for the lack of clear experimental evidence in favor of the RQH , the addAdaptive responses to selective pressures exerted by antagonistic organisms are not necessarily the same for the target species and the consumer species. There is a first level of potential evolutionary asymmetry between them because the failure of virulence for a consumer is a delay in fitness acquisition whereas the failure of defense for a prey is the loss of its entire fitness. Such a difference was underlined by Dawkins noting AExtending the field of interest to interactions and coevolution between more than two species (i.e. diffuse coevolution) may offer new perspectives. Considering the reciprocal selective pressures exerted by many species leads us to take into account the specificity of virulence and resistance. Until now, few works have been devoted to this subject (but see ). HoweveA recent model described the evolution of resistance of one host subjected to the attacks of two types of parasitoids differing in their virulence and specificity (different genotypes of a species or different species) and the evolution of virulence of one parasitoid attacking two types of hosts differing in their resistance and specificity . The resThe asymmetry hypothesis (AH) may lead to differences in the rate of evolution of the antagonistic species in response to selection. For a specialist consumer capable of target choice, chosen targets constitute a more or less \"constant environment\". On the other hand, for a generalist defender, the diversified, facultative and fluctuating attacks by a set of enemies constitute a \"variable environment\". The constancy of the environment may lead to the faster adaptation of specialists (mostly consumers under AH) than of generalists (mostly targets under AH) . Thus, uDrosophila species to encapsulate parasitoid eggs. In this case, it can be also noted that the resistance trait is not specific when virulence traits (reviewed in [The asymmetry hypothesis seems to fit well with numerous data concerning geographic structures published in the literature (reviewed in but see iewed in ) are divLysiphlebus testaceipes and the aphid Aphis gossypii. To measure this variability, several populations of the parasitoid collected on different clones of the aphid will be confronted to different clones of this aphid. The different clones of the aphid will be confronted to the different parasitoid populations. 
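To make the logic of this cross-infection design concrete (the design is described further in the text that follows), here is a small numerical sketch of how a matrix of parasitism success rates, with parasitoid populations as rows and aphid clones as columns, could be decomposed into main effects and a population-by-clone interaction; a strong interaction term would signal specialization of virulence. The numbers below are invented for illustration and are not data from the proposed experiments.

```python
import numpy as np

# Hypothetical mean parasitism success rates: rows = parasitoid populations
# (each collected on a different aphid clone), columns = aphid clones tested.
success = np.array([
    [0.72, 0.35, 0.30],
    [0.28, 0.69, 0.33],
    [0.31, 0.38, 0.75],
])

grand = success.mean()
row_eff = success.mean(axis=1, keepdims=True) - grand   # overall virulence of each parasitoid population
col_eff = success.mean(axis=0, keepdims=True) - grand   # overall susceptibility of each aphid clone
interaction = success - grand - row_eff - col_eff       # clone-specific virulence (specialization signal)

ss_rows = (row_eff ** 2).sum() * success.shape[1]
ss_cols = (col_eff ** 2).sum() * success.shape[0]
ss_inter = (interaction ** 2).sum()

print(f"SS parasitoid main effect:        {ss_rows:.3f}")
print(f"SS aphid-clone main effect:       {ss_cols:.3f}")
print(f"SS population-by-clone interaction: {ss_inter:.3f}")
# Under the asymmetry hypothesis one would expect a large interaction term
# (each parasitoid population doing best on the clone it was collected from),
# together with comparatively uniform aphid-clone main effects.
```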
This system that considers intra specific genetic variability for each trophic level will be used to evaluate the fitness of the consumers and of the targets in the different parasitoid /aphid combinations. An important point is that virulence will be estimated from the success rate of parasitism (i.e. the production of offspring) whereas defense will be estimated through the measure of the aphid fitness, i.e. the number of offspring produced by the aphid whatever the outcome of the parasitism (success or failure). Two key factors of the host fitness will be considered: 1) the host survival rate in case of parasitism failure ; and 2) the number of offspring produced by the host after parasitism: after a parasitoid's sting, the host may produce offspring until mummification in case of parasitoid embryo development or until a variable date in the case of parasitism failure. B- The analysis of the neutral population genetic structure of the parasitoid as a function of the aphid host, the plant host and geographic isolation. The specialization of L. testaceipes on different aphid clones can lead to a neutral genetic differentiation of the parasitoid as a function of the host and the plant because the reproduction of the parasitoid occurs soon after the emergence of the adults from their host. The genetic differentiation between populations of the parasitoid sampled from different aphid clones and from different plant species could be evaluated and compared to a putative effect of isolation by distance.We propose a dedicated test of the hypothesis of the specialization of the parasitoid virulence and the absence of specialization in the defense of the aphids, against the RQH. The study will focus on the tritrophic system plant/aphid/aphid parasitoid. Two complementary experimental approaches could be considered: A- The analysis of the genetic variability of virulence and defense of the parasitoid This test may allow local verification or rejection of the predictions of the AH . When performed several times on animals from diverse geographic origins, this should eventually allow rejection of the classical interpretation of RQH and the host spots theory of coevolution .The consequences of the asymmetry hypothesis are important. Genetic structures induced by the arms race between antagonistic species may be disturbed by asymmetry in coevolution, producing neither rare genotype advantages nor coevolutionary hotspots . HoweverThe AH cannot be applied to every situation. In the case of host-parasitoid associations, the high level of genetic variability in resistance observed within some local host populations ,21 is beAsymmetry may be suspected in cases involving successive trophic levels other than host-parasitoid associations, such as plant-insect or parasitoid-hyperparasitoid combinations, as soon as individuals of the upper trophic level can choose their target. However, random attacks, such as those due to plant pathogens, may favor more generalist traits of virulence, but the specificity of consumers, and therefore asymmetry, may be restored through indirect choices: pathogens transmitted by vectors may use or manipulate the specificity traits of the vector. Large sets of species interactions may thus lead to asymmetric coevolution.Acyrthosiphon pisum is parasitized by the hymenopteran Aphidius ervi and is specialized on different plant species. Hufbauer & Via [A. ervi than is the clover host population. 
If the association they studied dealt with two aphid and one parasitoid populations, under the AH, virulence specialization rather than resistance level variations can explain the observations: the parasitoid population is specialized on the clover host population, but keeps a partial virulence on the alfalfa aphid biotype. This suggests that short-term parasitoid specialization may be a key factor in biological control efficiency. For instance, consumers introduced to control a pest could rapidly specialize against non-target hosts.The AH may have consequences in terms of pest management. For instance, it is generally thought that variations in parasitism outcome result from a variability of host resistance due to the selection for higher resistance ,22. Howeer & Via observeder & Via also obsThe asymmetry hypothesis thus provides food for thought concerning diffuse coevolution and could be applied to domains beyond host-parasitoid coevolution. Similar thoughts may be applicable to the durability and efficiency of plant resistance or immunological responses to diseases transmitted by vectors. Its theoretical implications and its consequences in terms of population management are potentially important and remain unexplored.RQH: Red Queen HypothesisAH: Asymmetry HypothesisBoth authors have been involved in the elaboration of the hypothesis and the evaluation of its consequences, and in the drafting of the paper."} +{"text": "Breast cancer is the most common cancer in women worldwide with an estimated 2.3 million breast cancer cases diagnosed annually. The outcome of breast cancer management varies widely across the globe which could be due to a multitude of factors. Hence, a blanket approach in standardisation of care across the world is neither practical nor feasible.To assess the extent and type of variability in breast cancer management across the globe and to do a gap analysis of patient care pathway.An online questionnaire survey and virtual consensus meeting was carried out amongst 31 experts from 25 countries in the field of breast cancer surgical management. The questionnaire was designed to understand the variability in diagnosis and treatment of breast cancer, and potential factors contributing to this heterogeneity.The questionnaire survey shows a wide variation in breast surgical training, diagnosis and treatment pathways for breast cancer patients. There are several factors such as socioeconomic status, patient culture and preferences, lack of national screening programmes and training, and paucity of resources, which are barriers to the consistent delivery of high-quality care in different parts of the world.On-line survey platforms distributed to global experts in breast cancer care can assess gaps in the diagnosis and treatment of breast cancer patients. This survey confirms the need for an in-depth gap analysis of patient care pathways and treatments to enable the development of personalised plans and policies to standardise high quality care. Incidence and mortality associated with breast cancer varies widely across the globe. There are several socioeconomic and population-based factors which contribute to this heterogeneity , 2. 
To assess variability in breast cancer management across the globe and to perform a gap analysis of the patient care pathway. The initial members of Global Breast Hub organised a group of Prime members who are leading experts in Breast Surgery with the ability to reach most of the surgeons caring for breast cancer patients in their respective countries. An online questionnaire survey was distributed amongst 31 experts from 25 countries. The main results from the questionnaire were as follows: the majority of members confirmed that they had a National Health Care System in their country (87%), and 68% had a National Cancer Registry. Globally, the incidence, variability in treatment and financial burden of breast cancer are rising. Our survey supports the literature that breast cancer screening availability varies globally, as does the technology available for screening. The questionnaire survey confirms a wide variation in both training and education as well as treatment protocols followed in different parts of the world. Availability of speciality breast training, including oncoplastic breast surgery, remains heterogeneous and requires streamlining and closer collaboration globally. Global Breast Hub, using this survey platform, aims to identify global gaps in breast cancer diagnosis and treatment and work toward eliminating disparities in breast care by partnering with other global stakeholders whose mission is to improve education, allocate resources and change policy. Our survey amongst experts in the field of breast cancer surgery from six continents confirms significant variation in the availability of resources to diagnose and treat breast cancer across the globe. In addition, the survey confirms a lack of availability of structured training and a lack of expertise in some parts of the world. Our survey demonstrates the need for a tailored approach to address the knowledge gap and resource availability across the world to optimise breast cancer management and maximise survival and treatment outcomes. None. None to declare from any authors."} +{"text": "In the above-mentioned publication, some of the authors' affiliations were not correctly identified. The original article has been corrected and the proper representation of the authors and their affiliations is also published here."} +{"text": "A partitionable adaptive multilayer diffractive optical neural network is constructed to address setup issues in multilayer diffractive optical neural network systems and the difficulty of flexibly changing the number of layers and input data size. When the diffractive devices are partitioned properly, a multilayer diffractive optical neural network can be constructed quickly and flexibly without readjusting the optical path; the linear growth in the number of optical devices with the number of network layers can be avoided, as can the energy loss during propagation, where the beam energy decays exponentially with the number of layers. This architecture can be extended to construct distinct optical neural networks for different diffraction devices in various spectral bands. Accuracy values of 89.1% and 81.0% are obtained experimentally on the MNIST and Fashion-MNIST databases, respectively, showing that the classification performance of the proposed optical neural network reaches state-of-the-art levels. Deep learning is a machine learning method that predicts data by simulating multilayer artificial neural networks.
Deep learning is widely used in various fields, including medicine ,2, commuThe use of optical systems to implement Fourier transform, correlation and convolution operations has long been valued by researchers because Although there have been many excellent research studies on diffractive optical neural networks ,18,19,20Furthermore, the optical neural network should use a reasonable optical design to adaptively adjust to the size of the input and output data, thus improving the computational efficiency and speed.In this paper, we propose partitioning a multilayer optical neural network in planar space optical modulation device and photodetector device.This method addresses the shortcomings of previous multilayer diffractive optical neural networks, which face difficulties in flexibly changing the number of layers in the network and the size of the input data. This system can improve the computational efficiency of the diffractive optical neural network while reducing the number of optical devices and the difficulty in aligning the optical path. In addition, holograms are introduced to assist in calibrating the positions of the phase plate and output plane, and the nonlinear characteristics of the photodetector are used to realize a nonlinear activation function in the optical neural network.The model of the conventional digital fully connected neural network layer is shown in The output avefront . The calThe optical model shown in n layers of n diffractive optical systems in series. According to Equation ReLU, and the beam is incident on an amplitude-only spatial light modulator, which we denote as SLM 1 (UPOLabs HDSLM80R). SLM 2 (UPOLabs HDSLM80R Plus) is a phase-only spatial light modulator that is used to load the phase plane weights. The CMOS photodetector (Daheng MER2-2000-19U3M-L) acquires the intensity distribution of the light field modulated by the phase mask in the output plane. SLMs 1 and 2 have a resolution of The optical experimental verification system of the proposed partitionable optoelectronic hybrid diffraction optical neural network is shown in To ensure that the neuron nodes in the optical neural network are linked correctly, the positions of the main optical surfaces in the optical system shown in n method and numez, and there is a rectangular hole aperture of size D in plane The implementation of multilayer networks in blocks in a plane requires that the interference between blocks in different network layers be analyzed. Due to the independence of light propagation, there is no interlayer interference in the free propagation process; thus, the analysis of the interference between blocks in different network layers needs to consider only the distribution and energy of the first-order diffraction between different blocks in the same plane. As shown in D in the phase and output planes, and the wavelength R can be calculated with Equation Rmax=The classification performance of the proposed partitionable and efficient multilayer diffractive optical neural network is validated with the Fashion-MNIST dataset and the The loss function of the optical neural network in this paper is shown in To apply the proposed method, the output of the first partition of the CMOS sensor must be used as the input of the second partition of SLM1. 
Similarly, the output of the second partition of the CMOS sensor must be used as the input of the third partition of SLM1, and so on; therefore, we use a partitionable multilayer optical neural network refreshing strategy as shown in sequence diagram t is the computational delay of the diffractive optical neural network. In our experiments The specific details of the computational time consumption of our experimental diffractive optical neural network are shown in For diffractive optical neural networks with higher number of layers, our proposed diffractive optical neural network structure should be suitably extended to avoid excessive network computation delay. If the optical path structure as shown in Partitioning on spatial light modulators and CMOS detectors to implement multilayer diffractive optical neural networks requires concern for the size of the partition. We tested diffractive optical neural networks with phase mask of different resolutions and phase mask of different pixel sizes by simulation experiments. Equation , it can In this paper, we propose a partitionable and efficient multilayer diffractive optical neural network architecture. This model addresses a disadvantage of the D2NN network, in which it is difficult to flexibly change the number of layers and the scale of the input data, by partitioning the optical diffractive devices in a multilayer network. The greatest advantage of partitioned multiplexing is that this method can improve the utilization of diffractive devices and the computational efficiency of the whole network while reducing the number of optical devices and the difficulty of assembling and adjusting the optical system. In addition to the above advantages, the network model achieves a classification performance similar to mainstream diffractive optical neural networks. Because the framework is not limited to the visible spectrum and can easily be extended to other spectra, this system has great application value."} +{"text": "The Norwich Patellar Instability (NPI) score is a tool for evaluating the impact of patellofemoral instability on joint function. It has not been translated or culturally adapted for the Brazilian population before. This study had the aims of translating and culturally adapting the NPI score for use in Brazilian Portuguese and subsequently assessing its validity for this population.Translation, cross-cultural adaptation and validation study conducted at the State Public Servants\u2019 Institute of S\u00e3o Paulo, Brazil. Sixty patients of both sexes (aged 16-40 years) with diagnoses of patellar dislocation were recruited. The translation and cultural adaptation were undertaken through translation into Brazilian Portuguese and back-translation to English by an independent translator. Face validity was assessed by a committee of experts and by 20 patients. Concurrent validity was assessed through comparing the Brazilian Portuguese NPI score with the Brazilian Portuguese versions of the Lysholm knee score and the Kujala patellofemoral disorder score among the other 40 patients. Correlation analysis between the three scores was performed using Pearson correlation coefficients with significance levels of P < 0.05.The Brazilian Portuguese version of the NPI score showed moderate correlation with the Brazilian Portuguese versions of the Lysholm score and Kujala score . The Brazilian Portuguese version of the NPI score is a validated tool for assessing patient-reported patellar instability for the Brazilian population. 
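Returning to the partitionable diffractive optical neural network described above: the following is a minimal numerical sketch of the kind of computation such a system performs, namely phase-only modulation followed by free-space (angular-spectrum) propagation, with intensity detection feeding the next partition. It is not the authors' code; the wavelength, pixel pitch, propagation distance, array size and random phase masks are illustrative assumptions standing in for the trained hardware.

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Free-space propagation of a sampled complex field over distance z
    using the angular-spectrum method (evanescent components suppressed)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

def diffractive_layer(field, phase_mask, wavelength, dx, z):
    """One 'layer': phase-only modulation (the trainable weights) followed by propagation."""
    return angular_spectrum(field * np.exp(1j * phase_mask), wavelength, dx, z)

# Illustrative parameters (not the paper's): 632.8 nm laser, 8 um pixels, 100 mm spacing.
wavelength, dx, z, n = 632.8e-9, 8e-6, 0.1, 256
rng = np.random.default_rng(0)

field = np.zeros((n, n), dtype=complex)
field[96:160, 96:160] = 1.0                      # a simple square input "image"

for layer in range(3):                           # three cascaded partitions/layers
    phase = rng.uniform(0, 2 * np.pi, (n, n))    # stands in for trained phase weights
    field = diffractive_layer(field, phase, wavelength, dx, z)
    intensity = np.abs(field) ** 2               # what a CMOS partition would record
    field = np.sqrt(intensity)                   # intensity read-out acts as a simple nonlinearity

print(intensity.shape, intensity.max())
```

Reading out intensity at the detector and re-injecting its square root as the next partition's input mirrors, in simplified form, the optoelectronic refreshing strategy in which each CMOS partition drives the next SLM partition.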
Patellofemoral instability mainly affects young individuals of both sexes, with predominance in females. It accounts for approximately 3% of all injuries involving the knee joint. Treatment for patellofemoral instability may be surgical or conservative, depending on the number of episodes of dislocation and anatomical risk factors. No consensus has been reached regarding which method is better, in terms of function, quality of life and number of recurrences. Outcome measurements can be used to determine functional performance and to aid in decision-making on treatment options. Currently, the outcome measurements that are used for assessing people with knee disorders include the Fulkerson patellofemoral score, among others; all of these, except the NPI score, have been translated and culturally adapted for the Brazilian population. The NPI score shows moderate inverse correlation with the Kujala patellofemoral disorder score and the Lysholm knee score and has high internal consistency. Since the NPI score has not been translated or culturally adapted for the Brazilian population, and since this is the only score specifically designed for individuals with patellofemoral instability, the aims of the present study were firstly to translate and culturally adapt the NPI score for use in Brazilian Portuguese and secondly to assess its validity for the Brazilian population. This study was approved by the research ethics committee of the State Public Servants\u2019 Institute of S\u00e3o Paulo on August 16, 2018. All participants signed an informed consent form or an assent form, depending on their age. The translation and cultural adaptation of the NPI score followed the procedure proposed by Price et al. The NPI score consists of 19 questions relating to the perception of instability among subjects with histories of patellofemoral instability in sports and activities of daily life. It is scored from 0 (slightest sensation of instability) to 250 (greatest sensation of instability). The Brazilian Portuguese version consists of two parts, the first of which is the patient-completed questionnaire. Sixty participants were recruited from an orthopedic specialty outpatient clinic at the State Public Servants\u2019 Institute of S\u00e3o Paulo. All consecutive patients admitted were invited until we had 60 participants, and they had the same cultural/social background. Eligible participants were required to have a documented episode of unilateral or bilateral patellar dislocation. All participants were required to present with two of the following clinical signs of patellofemoral instability: a positive apprehension test, tenderness of the medial retinaculum on palpation, or reported patellar instability on rotation or knee extension activities. Participants were excluded if they had previously experienced meniscal, cruciate or collateral ligament injury of the knee or a history of hip, knee or ankle osteoarthritis, and if they reported a previous lower limb fracture or had undergone spinal or lower limb surgery irrespective of the surgical indication. The pre-final version of the Brazilian Portuguese NPI score was piloted with 20 of the 60 participants who had been diagnosed with patellar dislocation. This was used to evaluate their understanding of each item of the score. Once the Brazilian Portuguese NPI score version had been developed, the other 40 participants with patellar dislocation were invited to the next phase of the study, to assess the concurrent validity of the score.
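As a brief illustration of the concurrent-validity analysis used in this phase (Pearson correlations between the NPI score and the Lysholm and Kujala scores), the following sketch uses invented scores for a handful of hypothetical participants; it reproduces the form of the analysis, not the study's data.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical example scores (not study data).
npi     = np.array([150, 95, 180, 60, 120, 200, 75, 140])  # higher = more perceived instability
lysholm = np.array([55, 78, 42, 90, 65, 38, 84, 58])        # higher = better knee function
kujala  = np.array([48, 72, 40, 88, 60, 35, 80, 55])

for name, other in [("Lysholm", lysholm), ("Kujala", kujala)]:
    r, p = pearsonr(npi, other)
    print(f"NPI vs {name}: r = {r:.2f}, p = {p:.3f}")
# Because the NPI is scored in the opposite direction to the functional scores,
# a valid translation is expected to show moderate negative correlations,
# consistent with the moderate inverse correlations reported in the text.
```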
Theparticipants filled out the questionnaire in person and without any assistance.Concurrent validity was assessed by comparing the NPI score with the Brazilian Portuguese versions of the Lysholm knee scoreThe descriptive data were represented by the mean (with standard deviation). The assumption of normality was evaluated through visual inspection of the histogram and using the Shapiro-Wilk test. This showed that symmetrical distribution was present for all the data analyzed. The Pearson correlation coefficient was used to analyze the correlation between the NPI score, Lysholm knee scoreThe 40 participants with atraumatic patellar dislocation who participated in the validation process answered all the items of the questionnaires. Their demographic characteristics and score results are presented in The Brazilian version of the NPI score showed moderate correlation with the Brazilian Portuguese versions of the Lysholm knee scoreThis study demonstrated the translation, cultural adaptation and validation of the NPI score for use in the Brazilian population and its correlation with the Brazilian versions of the Lysholm knee score and the Kujala patellofemoral disorder score.,,The translation and cultural adaptation of the NPI score followed the procedure proposed by Price et al.,,The Kujala patellofemoral disorder scoreDevelopment of the NPIThe results obtained from the present study regarding validation were similar to the findings previously reportedAlthough the cohorts used in the two studies were different , the results regarding validity were very similar. This suggests that the NPI score can be used for both conservatively and surgically managed patellar instability patients.The most notable limitation of this study was that the responsiveness of the NPI score, i.e. the capability of the instrument to detect changes in the progression of a disease,Based on the findings from the present study, the Brazilian Portuguese version of the NPI score was satisfactorily translated. It proved to be a valid tool for use in research and clinical practice, in following up patients with patellofemoral instability.The NPI score has now been translated and culturally adapted and has been demonstrated to have validity for use in Brazilian Portuguese. Following this, the NPI score may now be considered for use within clinical and research practice, to aid in assessment and decision-making for individuals with patellofemoral instability."} +{"text": "This study explores the manifestation of Buddhism's conception in underlying entrepreneurial performance. The study is a qualitative research approach with a development direction that comes from successful Buddhist small business entrepreneurs in Bekasi, Indonesia. The interpretive paradigm is used to interpret social life in the reality of successful Buddhist small business entrepreneurs on entrepreneurial performance. Data collection using in-depth interviews with Buddhist small business entrepreneurs in an open-ended format. Data analysis was done in many stages, including domain analysis, taxonomy analysis, component analysis, and theme analysis. The findings indicate that religion acts as an institution that legitimizes the formation of entrepreneurial performance. 
The performance of Buddhist small business entrepreneurs is manifest in their management of economic or material achievements, and their religious observance in a broad socio-economic context in the relationship of three aspects of human life, namely the individual, social, and environmental, as a form of entrepreneurial practice based on Buddhist values. This research reveals the embodiment of social responsibility for small business Buddhist entrepreneurs which is reflected in entrepreneurial performance through the manifestation of religious values. The findings provide theoretical relevance in institutional theory. Entrepreneurial performance is closely related to religious values. The linkages that lead to the application of religious beliefs and practices in business have benefits that shape organizational culture and identity , and the Noble Eightfold Path. The themes revealed in this study have implications as a form of measuring entrepreneurial success on a personal basis and the values that underlie the performance of Buddhist small business entrepreneurs. Entrepreneurial performance emphasizes achieving social welfare, moderating the implementation of entrepreneurship wisely, and maximizing the use of internal entrepreneurial results.The findings of this study reveal and provide an understanding of the role of Buddhism in exploring and interpreting the meaning of entrepreneurial performance in the context of personal business success. There is a similar understanding of each essence of experience conveyed by the informants regarding the role of Buddhist values in underlying entrepreneurial performance, namely Sabbe Satta Bhavantu Sukhitatt\u0101 through the application of appropriate entrepreneurial practices for the attainment of self-happiness by not harming other beings with the application of virtue. The concept of achieving entrepreneurial performance implies that the material achievements obtained from entrepreneurial performance provide benefits for social welfare, not an obsession with oneself that causes suffering. The economic model in Buddhism provides meaning to small business Buddhist entrepreneurs that the achievement of entrepreneurship is not solely directed at the maximum accumulation of wealth, but also pays attention to the process or method of obtaining and utilizing it. Buddhism ensures that wealth leads to the development of a potential quality of life .\u201cSabbe Satt\u0101 Bhavantu Sukhitatt\u0101 owns the meaning of developing loving-kindness or Metta for all living beings .\u201cMiddle Way of Buddhism, small business Buddhist entrepreneurs reflectively direct entrepreneurial attainment to a life that is balanced with material pursuits through ethical considerations and beneficence in the three aspects of life. Small business Buddhist entrepreneurs focus on the long-term orientation of achieving entrepreneurial performance for quality of life. The self-reflection that emerges from the internalization of the balanced perspective provides direction for small business Buddhist entrepreneurs to consider the impact of entrepreneurial decisions on long-term entrepreneurial performance.With a balanced perspective, Buddhist small business entrepreneurs seize an internal direction and ability to recognize entrepreneurial desires or ambitions that lead to suffering. 
Through the principles of the Buddhist small business entrepreneurs render priority to long-term oriented strategic entrepreneurial decisions with a tendency to avoid short-term entrepreneurial decisions that are oriented toward material pursuits, by full careful consideration of cause and effect also ethical-moral weight. The perspective of balance or the Middle Way reflects the character of small business Buddhist entrepreneurs' decision-making which is based on a sense of ownership and responsibility for the inherent consequences of each entrepreneurial decision.Right Livelihood, Buddhist small business entrepreneurs prevail in the Buddhist view of economics, which is the achievement of goals for a good personal, social, and environmental life , enjoying wealth (bhoga-sukha), and being free from debt (anana-sukha). Anana Sukha's perspective is interpreted as a form of freedom from the burden of suffering and self-happiness which is manifested in the measurement of entrepreneurial performance. The interpretation of the manifestation of religious values provides an understanding of the magnitude of the role of religion in entrepreneurship. The Buddhist small business entrepreneur Ferry stated the following:The narrative of the experience of Buddhist small business entrepreneurs reveals the meaning of achieving entrepreneurial performance with the happiness of being free from debt. Buddhist small business entrepreneurs are in a position that can harness the material achievements resulting from entrepreneurial performance. In the Obadia, by livinSuccess in entrepreneurship for me if I am able to meet the needs of life without being in debt. I don't want to have a nice and luxurious house from the results of my business but have a business debt to the bank.\u201d (Ferry).\u201cSuccess in entrepreneurship does not focus on maximizing the financial side but on the acquisition of wealth that is ethical, not detrimental, and able to use internal business results without having the burden of debt. Understanding the concept of acquiring wealth forms an understanding of the implementation of entrepreneurship without any elements of exploitation, ethics and does not become a burden that disturbs the balance in other aspects of life. Small business Buddhist entrepreneurs use internal business results appropriately for entrepreneurial development without any ties to external funding. Buddhism focuses on this relationship through the understanding that benefiting and sharing with others can only be done with the wealth gained in the Buddhist perspective implies that entrepreneurial financial achievement is a resource for creating benefits for three aspects of life in the form of achieving general welfare and developing the quality of life of Buddhist small business entrepreneurs.Proposition 5: The use of wealth (The Noble Eightfold Path (Right Livelihood) in the perspective of Buddhism underlies the meaning of achieving entrepreneurial performance in Buddhist small business entrepreneurs, namely forming and practicing entrepreneurship by obtaining material or wealth achievements in accordance with the conception of Right Livelihood to realize the attainment of prosperity and harmony in balance.Proposition 5.31: Based on a partial and holistic study of the theme of the role of Buddhism in entrepreneurial performance, a model for the role of Buddhism in underlying entrepreneurial performance is developed. 
The model framework is visualized in The findings of this study reveal the construction of a contingency framework embodied by Buddhist small business entrepreneurs in the perspective of entrepreneurial performance. Religion manifests its institutionalization process through normative, cognitive, and regulatory aspects in institutional activities that occur because of the actualized role of Buddhist small business entrepreneurs. The actualization is realized by Buddhist small business entrepreneurs in the perspective of entrepreneurial performance which illustrates that religion shapes the social context through its role at the individual level. Small business Buddhist entrepreneurs embody the construction of their entrepreneurial performance through the general cultural-institutional arguments approach. The findings of this study can be taken into consideration in making local government policies in formulating policies for fostering small-scale entrepreneurship by observing the harmony of religious values and norms. The limitation of this study is that it only discusses the issue from the angle of one religious value; further research can combine inter-religious values to observe the phenomenon of Buddhist small business entrepreneurs from the wider perspective of entrepreneurial performance.The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.LM conceived of the presented idea. SS and LM developed the theory and performed the computations. DH verified the analytical methods. NI encouraged LM to investigate topic and supervised the findings of this work. All authors discussed the results and contributed to the final manuscript.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher."} +{"text": "Detecting and responding to objects near the body in peripersonal space is essential for successful interactions with the environment. Presently, we lack knowledge about the neural coding of the distance of sounds relative to the head, or about the proprioceptive coding of 3-dimensional positions of objects relative to the body. Neural coding of nearby objects is an important subject for neuroscience, and there is a great need to advance knowledge about how sensory and motor systems mediate peripersonal behavior. The present Research Topic drew investigators' attention to the sensory and sensorimotor mechanisms underlying both the perception of objects, and the performance of actions near the body, and to facilitate efforts toward making progress in this area.We assembled seven original studies and one review article that address various aspects of the Research Topic. This expands the current literature on the sensory and sensorimotor mechanisms underlying behaviors dealing with objects and stimuli near the body.Hug et al. This study is a welcome addition to the present topic areas because it engaged multisensory systems and sensorimotor processes. 
The participants who could actively explore the sound source improved their distance judgements whereas the performance of those whose arms were guided by the experimenter were less accurate than the active group.Effects of active and guided exploration for a sound source on the perceived distance in peripersonal space . The relative contributions of the sensory systems in dynamic head-trunk orientation situations are more complicated and less studied than in static situations. The authors addressed the gap in knowledge in the horizontal (yaw) dimension. The results suggest that cervical proprioception is the primary determinant of perceived head-trunk orientation, but that either visual or vestibular information can provide additional information to improve head-trunk orientation accuracy.Phataraphruk et al. reasoned that initial arm posture would affect execution noise and reach accuracy, and that this would be more prominent when vision was unavailable. Indeed, the size and shape of the distributions of reach responses were determined by more complex interactions involving initial arm posture. Thus, these results provide insight into the multifactorial and multisensory aspects of human reaching behavior.Although human reaching behaviors to nearby objects are typically accurate, errors can arise from the initial encoding of the hand or goal location (sensory), the transformation of sensory signals into motor commands (planning), or the movements themselves (execution). Hsiao et al.. They used virtual-reality methods to manipulate the visual feedback of hand position as people moved. In the study, the real and virtual positions of the hand drifted apart gradually. Participants often made movements according to a combination of the visual and bodily information. Sometimes they became aware of the discrepancy and other times not. They found that small discrepancies were often unnoticed, yet could still affect movement. Larger discrepancies were more often noticed, and participants re-calibrated their movements.How the brain integrates information from the eyes and body is the fundamental neuroscience problem tackled by Reed et al.. Under divided attention, a visual cue presented before a target led to smaller visual-evoked potentials when either the hand or a neutral block was near the target compared to far away. When participants were cued to focus their attention on one side of space, the anchoring effects of the hand or block were not observed. These results raise new questions about how hand location influences visual processing, and about when and where in the brain these effects occur.When we place our hand near an object, is our attention allocated automatically to that object? This question is addressed by Kuroda et al. used a combination of virtual reality and stationary cycling to investigate the visual and proprioceptive contributions to the perception of a passable width during self-motion. The authors replicated previous findings by showing that participants perception of a passable width narrowed as perceived self-motion speed increased. The authors found that optic flow altered judgments of passable width even if participants were not pedaling. The results suggest that visual information about perceived self-motion may contribute more than proprioception to perceptual judgments of passable width.Lohse et al. reviewed the interactions between the auditory and somatosensory systems, and between the auditory and the motor systems. 
The review highlights \u201cthe importance of considering both multisensory context and movement-related activity in order to understand how the auditory cortex operates during natural behaviors\u201d.There is a continued need for further research in the areas of multisensory and sensorimotor processes that mediate peripersonal-space behavior. For example, the proprioceptive mechanisms that underlie coding of 3-dimensional coordinates of an object near the body and those of body parts remain to be identified, and how the motor and proprioceptive systems jointly utilize such 3-dimensional information in planning and executing movements of body parts to reach and grasp the target object has yet to be determined. Of additional interest is how information from multiple sensory modalities is combined and weighted to account for differences in the capabilities of each modality. For example, when a salient nearby object is behind, vision is not available. Thus, auditory and vibrotactile detection of such invisible objects would be of vital importance to an organism.DK drafted the editorial including summaries of two articles. NH commented on the draft and provided summaries of two articles. GM commented on the draft and provided summaries of one article. PZ commented on the draft and provided summaries of three articles. All authors contributed to the article and approved the submitted version."} +{"text": "Although the study of midlife has increased in recent years, it still lags behind study of life at the extremes of age. As a key threshold stage in the lifespan, our understanding can be enhanced by exploration of novelists and composers whose most substantive work began to be produced in midlife. This presentation will draw off the works of midlife works of composers who also composed into later life - Giussepe Verdi, Richard Strauss and Johannes Brahms - from the concert proposed by the Indianapolis Symphony during GSA 2022, as well as reflecting on the works of Anita Brookner as representative of other novelists and poets whose work and careers came to prominence in midlife."} +{"text": "A full-scale model for predicting low-velocity impact (LVI) damage and compression after impact (CAI) strength was established based on a subroutine of the material constitutive relationship and the cohesive elements. The dynamic responses of the laminate under impact load and damage propagation under a compressive load were presented. The influences of impact energy and ply thickness on the impact damage and the CAI strength were predicted. The predicted results were compared with the experimental ones. It is shown that the predicted value of the CAI strength is in good agreement with the experimental result. As the impact energy reaches a certain value, the CAI strength no longer decreases with the increase in the impact energy. Decreasing the ply thickness can effectively improve the damage resistance and CAI strength. One of the advantages of composite laminates is that different fiber-reinforced materials and design angles can be selected according to the load requirements of various structures, which provides greater freedom for the structural design and ply scheme optimization. However, the obvious anisotropy and low inter-laminar strength characteristics of the composites can also lead to multiple failure modes of structures, including fiber breakage, resin cracking ,2 and deThe impact damage and residual strength of composite laminates have attracted lots of attention. 
A lot of related works have been published, and experimental testing is one of the most direct and effective research methods. Using an experimental method, Sergii investig4s sub-laminate scaled laminates can be predicted accurately, and the maximum error is 10%. Yang An impact damage prediction model of the composite laminate was established to calculate the response under the impact load. According to the ASTM D 7136 standard, the kinetic energy of the impactor was implemented using the initial velocity, and the impact energy was set at 30 J. The damage states of typical interfaces are shown in The laminate state at the end of impact was used as the initial condition of the compression simulation. The failure state of the fiber, matrix and delamination at the beginning of compression was defined by field variables. Considering the storage space and computing efficiency of the computer, it was impossible to simulate the compression loading rate under real experimental conditions. The loading rate needs to be increased to ensure that the calculation time is within an acceptable range, and the enforced displacement loading rate used in this research was 50 mm/s.The selection of the damage criterion affects the prediction of the in-plane properties of the composite plies, and then, it affects the prediction of structural failure process and final damage state. The influences of the damage criteria on the prediction results of the impact damage and the CAI strength were studied through the FEM that was established in this paper. The 3D Hashin criterion is shown in The composite structures may be subjected to different impact energies, which will cause the bearing capacity of the composite structures to decline by different degrees. To study the influence of impact energy on CAI strength of the composite materials, the impact damage and CAI strengths under impact energies of 10, 20, 40 and 50 J were compared with the result of the test using 30 J. The impact damage of the composite laminate was calculated, and According to To further explain the above phenomenon, Raman shows thTo further analyze the mechanism of the effect of the single ply thickness on the CAI strengths, The comparison of contact forces and energy absorption with different single ply thicknesses are shown in An FEM for predicting the impact damage and CAI strength of composite laminates was established. The whole analysis process of the impact damage and CAI strength was conducted using the FEM. The predicted CAI strength of the composite laminate was compared with the experimental one. The influences of impact energy, damage criterion and single ply thickness on the impact damage area and CAI strength were investigated. Four important results emerging from the research are as follows:The damage zone of the fiber breakage and matrix cracking caused by the impact is approximately circular.The predicted CAI value is in good agreement with the experimental one, and the error is 4.7%.The CAI strength decreases as impact energy increases from 10 to 30 J. After the impact energy exceeds 30 J, the CAI strength remains basically unchanged.Decreasing the single ply thickness can effectively improve the damage resistance and CAI strength of the composite laminates. When the single ply thickness decreases to 0.04 mm, the CAI strength increases by 22.4%."} +{"text": "Endometriosis of the rectus abdominis muscle is an extremely rare form of extrapelvic localization of the disease. 
It is usually iatrogenic and develops after caesarean section or gynecological surgery. Preoperative diagnosis is very difficult and a challenge for gynecologists and surgeons; thus, the diagnosis is histological. The treatment of choice consists of wide local excision of the lesion on healthy margins. We cite a case of isolated endometriosis in the rectus abdominis muscles in a 46-year-old patient with a previous caesarean section, the diagnosis of which was made randomly when performing abdominal total hysterectomy for the treatment of chronic pelvic pain. Histological examination of the surgical specimen confirmed the diagnosis. Simultaneously, the surgical specimen of the uterus and ovaries was free of endometriosis. Postoperatively, the patient mentioned discharge of her symptoms. No further therapeutic intervention was deemed necessary, as it was considered that a complete resection of the endometrial tissue implantation from the muscles of abdominal wall was performed. The present case report lay emphasis on the significant difficulties involved in the preoperative diagnosis of endometriosis of the rectus abdominis muscle. Concurrently, it is pointed out that, despite its rarity, individual extrapelvic endometriosis located in the rectus abdominis muscle should be included among other pathological entities in the differential diagnosis of chronic pelvic pain in women of reproductive age, who gave birth by caesarean section or underwent gynecological surgery with abdominal or laparoscopic access. It was century . Endomet century and is m century . It is u century . In thisPatient information: a 46-year-old reproductive patient, with a medical history of one caesarean section and a known presence of multiple and small uterine fibroids, visited the outpatient gynecological practice for pain in the lower abdomen for about ten years. The personal medical history of the patient was free. No problems were found from the urinary or gastrointestinal tract, to which chronic pelvic pain could be attributed. In addition to performing appendectomy at a young age, our patient had not undergone any other surgery.Clinical findings: the patient described the onset of pain about 7 months after performing a caesarean section with a Pfannenstiel incision. Gradually, she reported a progressive deterioration of her condition. The pain described by the patient, initially as a feeling of heaviness in the lower abdomen, over the last one to two years has become of increasing intensity. Sometimes the pain may have been more intense during the days of menstruation, but usually the patient described non-intermittent pain in the lower abdomen of approximately the same intensity every day. The clinical examination did not reveal any palpable mass or other type of damage to the abdominal wall.Diagnostic assessment: the findings from ultrasound scan, computerized tomography and magnetic resonance imaging of the abdomen were compatible with the presence of uterine fibroids. The levels of cancer antigen 125 in the blood serum were within the normal range.Therapeutic intervention: based on preoperative evaluation, chronic pelvic pain was attributed to the presence of uterine fibroids. After informing the patient and her relatives in more detail regarding the therapeutic approach of the disease, it was decided to perform abdominal total hysterectomy. 
Intraoperatively, in the rectus abdominis muscles and slightly above the level of the pubic symphysis, infiltration of the muscular wall was found by a hard-textured flat mass, in length 4-5 cm, the surface of which was solidly adhering to the parietal peritoneum. A wide resection of the lesion was performed, including bilaterally a widespread part of the wall of the rectus abdominis muscles and the matted peritoneum (ritoneum . The plaritoneum . HistoloFollow-up and outcomes: postoperatively, the patient mentioned relief of her symptoms. No further therapeutic pharmaceutical intervention was recommended, as it was considered that the endometriotic foci was completely resected from the abdominal wall.Informed consent: this was sought and obtained from the patient. Anonymity was maintained for confidentiality.Abdominal wall endometriosis is one of the rarest extrapelvic forms of endometrial disease. It is usually iatrogenic and develops after caesarean section or open or laparoscopy surgeries for the treatment of gynecological diseases . The lesDespite the great development that has been achieved in recent years in imaging techniques, preoperative diagnosis is difficult and a challenge for gynecologists and surgeons. The diagnosis is usually made late and accidentally during gynecological surgery or postoperatively. Diagnosis is confirmed after histological examination of the surgical preparation . The cycUltrasound is a first line tool in diagnostics approach to extrapelvic endometriosis. With abdominal ultrasound can be detected, intramuscular endometriosis in the anterior abdominal wall, the characteristics of which may vary from a completely solid mass or a mix echogenic mass with solid and cystic elements . Also, tThe treatment of endometriosis of the rectus abdominis muscle requires surgery. The treatment of choice is wide local resection of the lesion on sound boundaries. The local exclusion of the lesion that can be done under ultrasound monitoring is usualPreoperative diagnosis of extrapelvic endometriosis with individual localization in abdominal muscles is a challenge in daily clinical practice. Despite the rarity of endometrial lesion in the muscular abdominal wall, the significant increase of the percentage of caesarean sections observed in recent years is necessary the inclusion of this rare extrapelvic form of endometriosis in the differential diagnosis between all the painful masses of the abdominal wall, thus allowing the early diagnosis and avoidance unnecessary diagnostic and therapeutic interventions. Early diagnosis and selection of the most appropriate therapeutic manipulations are judged currently necessary in order to minimize the risk of recurrence and to avoid the possibility of malignant recurrence of the disease."} +{"text": "Living organisms have never been solitary individuals and symbiotic relationships are challenging our very conception of the individual. Symbiosis, initially defined as a living together of different organisms represenChlamydomonas reinhardtii and diverse ascomycete fungi that provide nitrogen to the algae which challenges the long-held paradigm of dominance of cyanobacteria interactions with microalgae over NCDs and brings the first hints on how heterotrophic proteobacteria thrive in surface waters and oxygenated areas genomes of Dinoflagellates Symbodinium, essential symbionts of corals. Their in silico analyses demonstrate that a scalable k-mer approach largely agrees with the phylogenetic signal inferred from the LSU rDNA sequence. 
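The scalable k-mer approach mentioned above can be illustrated with a minimal sketch: two toy DNA fragments are reduced to k-mer count profiles and compared with a cosine distance. The sequences, the choice of k = 4 and the distance metric are illustrative assumptions only and do not reproduce the authors' pipeline.

```python
from collections import Counter
from itertools import product
import math

def kmer_profile(seq, k=4):
    """Count all overlapping k-mers in a DNA sequence (uppercased)."""
    seq = seq.upper()
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def cosine_distance(p, q, k=4):
    """1 - cosine similarity between two k-mer count vectors."""
    kmers = [''.join(t) for t in product('ACGT', repeat=k)]
    a = [p.get(m, 0) for m in kmers]
    b = [q.get(m, 0) for m in kmers]
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm if norm else 1.0

# Hypothetical toy fragments standing in for Symbiodinium sequence data.
seq_a = "ATGCGTACGTTAGCGTACGATCGATCGTACGTAGCTAGCTAGGCTA"
seq_b = "ATGCGTACGTTAGCGTACGATCGATCGTACGAAGCTAGCTAGGCTA"
print(cosine_distance(kmer_profile(seq_a), kmer_profile(seq_b)))
```

Because the profiles are fixed-length vectors regardless of sequence length, such comparisons scale to many genomes without alignment, which is what makes k-mer methods attractive for large datasets.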
The combination of genomic and experimental data sometimes allows us to hypothesize about the metabolic bases of coexistence, as in the case of coexistence between Roseovarius and the green alga Ostreococcus tauri, where the bacterial genome encodes the metabolic pathway to produce the vitamins needed by the microalgae. This study reports a stable coexistence maintaining the microalgae and the bacterium over several years, unlike the dynamic associations reported between Dinoroseobacter shibae and the microalgae Prorocentrum minimum and Emiliania huxleyi. The articles included in this Research Topic provide a view of how symbiotic interactions can help bypass environmental stresses, as in Prorocentrum minimum or Emiliania huxleyi. The field of symbiosis in aquatic habitats is expanding quickly with both in silico and experimental approaches, and many novel insights into species interactions, their ecology and evolution are expected in the near future. All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication."} +{"text": "The leading cause of blindness in inherited and age-related retinal degeneration (RD) is the death of retinal photoreceptors such as rods and cones. The most prevalent form of RD is age-related macular degeneration (AMD), which affects the macula, resulting in an irreversible loss of vision. The other is a heterogeneous group of inherited disorders known as Retinitis Pigmentosa (RP) caused by the progressive loss of photoreceptors. Several approaches have been developed in recent years to artificially stimulate the remaining retinal neurons using optogenetics, retinal prostheses, and chemical photoswitches. However, the outcome of these strategies has been limited. The success of these treatments relies on the morphology, physiology, and proper functioning of the remaining intact structures in the downstream visual pathway. It is not completely understood what alterations occur in the visual cortex during RD. In this review, I will discuss the known information in the literature about morphological and functional changes that occur in the visual cortex in rodents and humans during RD. The aim is to highlight the changes in the visual cortex that will be helpful for developing tools and strategies directed toward the restoration of high-resolution vision in patients with visual impairment. To achieve successful therapeutic intervention in RD, the structures downstream of the retina should remain intact. Long-term sensory deprivation resulting from the degenerating retina is associated with potential changes in the cortical circuit, posing significant constraints on the success of RD treatment. Therefore, it is important to understand the consequences of RD on the state of the visual cortex for the successful implementation of restorative techniques. This review provides an up-to-date overview of the morphological and physiological changes in the visual cortex from various rodent and human studies. The retina is a light-sensitive tissue layer at the back of the eye equipped with the necessary machinery to process light information to create an informative image of the external world. During retinal degeneration (RD), the retina undergoes deterioration with the consequent death of photoreceptors such as rods and cones. The progressive loss of receptors and loss of vision has a great impact on daily life, such as not being able to recognize faces, read, and find objects.
The most common form of RD is age-related macular degeneration (AMD) with vision loss in the elderly population. A recent report on numerous population-based studies indicates the emerging global burden arising with the increase in the number of people suffering from AMD from 196 million in 2020 to 288 million in 2040 , the extent of pattern-vision degeneration was demonstrated using a short pulse of bright white light and gratings of different spatial frequencies (patterns) did not show any significant alterations in response to spatial-temporal variations in luminance contrast change. Overall these results indicate the diminished capacity of discriminating stimuli under different contrast conditions and adaptation of the visual system to environmental contrast evaluated the changes in the electrophysiological properties of the V1 such as orientation selectivity, spatial, and temporal frequency tuning and receptive field size. Degenerated rats had diminished orientation selectivity mostly in the lower layers of the cortex (layers V-VI) with better responses only at lower spatial and temporal frequencies. The size of the receptive field was smaller compared to normal seeing rats was used as a measure to determine the efficacy of information transmission between visual stimulus and neuronal activity for processing visual information and conscious visual perception. During RD the retina undergoes a profound increase in spontaneous activity as a result of increased glutamate concentration, and altered synaptic input due to the death of photoreceptors . Visual stimulation in these affected individuals shows decreased activity of the visual cortex and elevated activity in a few associative areas outside the visual field responsible for eye coordination such as frontal and supplementary eye fields, prefrontal cortex, intraparietal sulci, and parietal lobule but not at higher contrasts. The authors hypothesized that the observed differences between low and high contrast could be due to different processing streams (parvocellular and magnocellular) with different functional properties contributing to this effect. The results are in line with previous studies in rodents and corroborate the fact that degeneration affects the functioning of the cortex in a wider manner. Recent studies in rodents and humans show that the adult brain has the ability to undergo short-term plasticity to adapt to visual changes , Department of Science and Technology (DST), Govt. of India (Grant: RJF/2019/000040), and the Core Research Grant (CRG) of SERB, DST (CRG/2021/003472).The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher."} +{"text": "Understanding the externalities of transportation networks in the process of the agglomeration and diffusion of production factors has theoretical and practical significance for the coordinated development of China's economic growth in urban agglomerations. 
Therefore, the social network analysis method is introduced in this paper with the case of the Pan Pearl River Delta Urban Agglomeration to analyze the characteristics of the traffic connection network of the production factor flow within this urban agglomeration, and subsequently, an econometric panel model is adopted to quantitatively analyze the effect of the connection network on the economic growth of the urban agglomeration. The results show that (1) the traffic connection of the Pan Pearl River Delta Urban Agglomeration has network characteristics typical of a \u201csmall world\u201d. Although the connections between cities are gradually strengthening, the regional differences are obvious, showing a core\u2013edge pattern of eastern agglomeration and western sparseness. (2) Among the network nodes, Guangzhou, Shenzhen and other cities have obvious agglomeration and diffusion effects, stabilizing economic growth while driving the development of surrounding cities. The \"polarization effect\" in Chongqing and Chengdu has significantly increased, and the accumulation of factors mainly meets their own economic development but has not yet spread. (3) The Pan Pearl River Delta Urban Agglomeration's transportation network influences the region\u2019s economic growth through the structural effect, as it strengthens the economic ties between cities, and through the action of resource factors, as the network represents the aggregation and diffusion path of factor flow. (4) Due to the different traffic connections and industrial structures across the Pan Pearl River Delta Urban Agglomeration, the factor flow of each suburban agglomeration has a differentiated impact on the regional economic growth under the traffic connection network. Therefore, to realize the coordinated economic development of the Pan Pearl River Delta Urban Agglomeration, it is necessary to \"adjust measures to local conditions\" and formulate accurate and precise policies. These urban agglomerations have become the hinterland of China's highly concentrated economic activities. The formation of urban agglomerations depends on a high degree of interconnection between urban transportation networks, but the impact of developed transportation networks on the economic growth of urban agglomerations has two sides2: the radiation effect and the polarization effect3. The \u201cradiation effect\u201d means that traffic reduces the transaction costs of production factors, accelerates the flow of labor and resources within and between urban agglomerations, and causes developed areas to have a driving effect on the economic development of surrounding areas4. The polarization effect means that the transportation network intensifies the polarized allocation of the internal elements of urban agglomeration, leading to the agglomeration of production resources in the central city in the network, resulting in the unbalanced spatial allocation of elements and further widening the economic gap between the central city and the surrounding cities5. 
Strengthening the \"radiation effect\" of traffic networks on the economic growth of urban agglomerations and reducing the \"polarization effect\" are the key issues in the process of realizing the integrated development of urban agglomerations.Over the past 40\u00a0years of reform and opening up, China has gradually formed urban agglomerations such as the Yangtze River Delta, Beijing\u2013Tianjin\u2013Hebei and Pearl River Delta, which have become the important growth poles of China's economy and reshaped the spatial pattern of economic development6. The greatest advantage of the development of an urban cluster economy is the formation of scale economies through the cooperation of industries among cities, resulting in the effect of \"1\u2009+\u20091\u2009>\u20092\". However, at present, many urban agglomerations have a \"center-periphery\" spatial structure, which leads to economic inequality8. A large number of studies have analyzed the causes of inequality in urban agglomerations from the perspectives of traffic network structure9, urban competitiveness10, regional industrial competition12, and social relations13. Since the self-organization process of urban agglomerations depends on large-scale transportation infrastructure investment, the influence of transportation networks on the economic growth of urban agglomerations has been widely discussed.An urban agglomeration is a group of cities with compact spatial organization, close economic ties and high degree of urbanization and integration, which is based on a developed transportation and communication network14 and from the unequal gravity between nodes and factors13. This is mainly related to two research fields: first, exploring the optimization direction of network structure from the perspective of improving the efficiency or fairness of network configuration; and second, finding a reference from research on the relationship between factor mobility and economic growth. The former research mostly focuses on the congestion and efficiency loss caused by the imperfect network structure, thus proposing the construction scheme of future networks16. The latter regards the transportation network as an exogenous environment and then discusses how to make decisions to maximize the benefits in the existing network19. In fact, these two kinds of studies together constitute a system of solutions to alleviate the economic inequality in the network. However, the investment in transportation infrastructure is irreversible, and the existing network structure is difficult to change. It takes a long time and large investment to improve resource allocation from the perspective of network structure optimization. Therefore, to achieve equitable economic growth of urban agglomerations, the construction of regional coordination mechanisms in the network should be considered. This requires a deep understanding of the internal relationship among the traffic network, factor flow and economic growth of urban agglomerations.Inequality in the network comes from the polarized allocation of factors by network structuresBased on the above discussion, this paper attempts to answer the following three questions: First, how can the urban correlation strength on the traffic network be described from the perspective of factor flow? Second, how does the transportation network affect the economic development of urban agglomerations? 
Third, what measures should be taken by local governments in an urban agglomeration to promote the coordinated development within the agglomeration?The specific research contents and objectives are outlined as follows: Taking China's Pearl River Delta urban agglomeration as the research object and using the measurement method of urban flow intensity, the study depicts the traffic connection network, which includes factor flow, economic connection and geographical space abstraction. Then, the social network analysis method is used to analyze the overall and local characteristics of the transportation network from the two dimensions of network structure and network nodes to reveal the influence path of urban agglomeration traffic-related networks on resource allocation and economic growth. Finally, regression analysis is used to identify the key factors of the economic growth of urban agglomerations under network constraints. Based on the results, we put forward some suggestions for the coordinated development of urban agglomerations.The contributions are as follows: (1) As a case study, from the perspective of the traffic network and factor flow, we put forward suggestions for the coordinated development of the regional economy in the Pearl River Delta Urban Agglomeration and similar urban agglomerations. (2) In a wider sense, the construction of a generalized traffic connection network including factor flow, economic connection and geographical distance provides a new research method to reveal the internal power driving the economic growth of urban agglomerations, which provides a reference for related research.20. However, the agglomeration and diffusion of resources flowing between urban agglomerations are complex, frequent and multidirectional, covering the transfer and exchange of many elements, such as people, logistics, and information flow, among cities. Obviously, GDP and population reflect the development level of the urban economy, but they lack a description of the path and intensity of multidimensional factor transfer and cannot describe the state of agglomeration and radiation in the process of factor flow. Therefore, this paper chooses the intensity of urban flow as the \"quality\" of the economy to express the overall development level of cities in urban agglomerations, which not only reveals the constant exchange of material and energy between cities but also more clearly depicts the characteristic that the transportation network supports the communication between cities22. Considering that the transportation network of urban agglomerations is mainly composed of roads and rails, so the volume of freight and passengers objectively reflects the flow forms of production factors among urban agglomerations, the traffic network correlation value is finally obtained:22 for the specific algorithm of the above two variables. Gravity models often use GDP and population as the \"quality\" to describe the relationship between different economies23. These three indicators can reflect the scale and efficiency of the factor flow within the urban agglomeration when applied to the analysis of the urban agglomeration traffic network structure, thus revealing the supporting role of the network for the coordinated economic growth among cities.On the basis of constructing a traffic correlation matrix by a gravity model, social network analysis (SNA) can be used to analyze the traffic correlation network structure. 
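The construction of the traffic correlation matrix described above can be sketched in a few lines. The exact equation is not reproduced in the extracted text, so the snippet assumes a common gravity form, R_ij = sqrt(Q_i * Q_j) / d_ij**2, where Q is the urban flow intensity used as the "quality" term and d is the intercity distance; the city values are hypothetical.

```python
import numpy as np

def gravity_matrix(quality, distance):
    """Traffic-correlation matrix under an assumed gravity form:
    R_ij = sqrt(Q_i * Q_j) / d_ij**2, with zeros on the diagonal."""
    quality = np.asarray(quality, dtype=float)
    distance = np.asarray(distance, dtype=float)
    q = np.sqrt(np.outer(quality, quality))
    r = q / distance ** 2
    np.fill_diagonal(r, 0.0)  # a city's self-connection is not used
    return r

# Hypothetical urban-flow intensities (the "quality" term) for four cities
# and a symmetric intercity distance matrix in kilometres (diagonal is ignored).
quality = [120.0, 45.0, 60.0, 20.0]
distance = np.array([
    [1.0, 130.0, 200.0, 350.0],
    [130.0, 1.0, 180.0, 300.0],
    [200.0, 180.0, 1.0, 260.0],
    [350.0, 300.0, 260.0, 1.0],
])
print(gravity_matrix(quality, distance).round(4))
```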
Common indicators of overall network structure include network density, clustering coefficient and characteristic path length. Among them, network density measures the closeness of the connection between cities. The greater the density is, the more ways to exchange information between cities and the more efficient the communication. The clustering coefficient reflects the node degree and node aggregation in the network, and aggregation means that the probability of community formation increases. The average shortest path indicates the rate and average cost of element flow in the network. The smaller the average path is, the less time it takes for element flow and the lower the transportation cost24. Centrality includes degree centrality (in-degree and out-degree), betweenness centrality and closeness centrality. The higher the degree centrality is, the more connections the city has with other cities in the traffic spatial correlation network, and the more central the city is in the network. Betweenness degree measures the ability of nodes as mediators, which indicates the extent to which the city can control the traffic exchanges between other cities. The closeness centrality describes the degree to which a city in the network approaches the center in the transportation network and reflects the center-edge structure. The closer a city is to the central node, the stronger its role in resource allocation in the network.This paper analyzes the network structure characteristics of each node through centrality and reveal the central position of node cities in the network structureThis paper focuses on verifying the structural effect of the traffic connection network structure in urban agglomerations, so it tries to analyze the mechanism by which the structural effect of the traffic connection network structure influences the economic growth of urban agglomerations from the two dimensions of the overall structural effect and individual structural effect of the traffic connection network.25 as the explained variable on economic growth, which is the embodiment of China's major strategic decision to integrate economically into Eurasia.For the Beibu Gulf City Group and Chang-Zhu-Tan-Chuan-Yu City Group, after adding the influencing factors of the transportation network , the transportation-related network has a positive impact, but the industrial structure, urbanization rate and the coefficient of foreign investment have decreased because the construction of the transportation network in these subcity groups is not enough to promote effective factor flow. On the other hand, there is irrational resource allocation in these urban agglomerations, and the industrial structure between cities tends to be homogeneous, which hinders the economic growth of the urban agglomerations. Moreover, under the influence of the transportation network, the urbanization rate accelerates the impact on the economic growth of urban agglomerations, and weakens the impact of FDI on economic growth. This is because the population and resource elements can achieve the goal of economic growth without more foreign investment under the agglomeration and diffusion effect between cities, relying on the path of the transportation network and taking the rational distribution of urban industries as the premise. 
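A minimal sketch of computing the overall and node-level indicators just described, using the networkx library on a toy traffic correlation matrix. The matrix values, the 0.3 link threshold and the four example cities are hypothetical; only the indicator definitions follow the text above.

```python
import networkx as nx
import numpy as np

# Hypothetical traffic-correlation matrix (rows = origin city, cols = destination).
R = np.array([
    [0.0, 0.8, 0.3, 0.1],
    [0.7, 0.0, 0.5, 0.0],
    [0.2, 0.4, 0.0, 0.6],
    [0.0, 0.1, 0.5, 0.0],
])
cities = ["Guangzhou", "Shenzhen", "Nanning", "Chengdu"]

# Keep only links above an assumed significance threshold.
G = nx.DiGraph()
G.add_nodes_from(cities)
for i, u in enumerate(cities):
    for j, v in enumerate(cities):
        if i != j and R[i, j] >= 0.3:
            G.add_edge(u, v, weight=R[i, j])

# Overall network structure: density, clustering, characteristic path length.
U = G.to_undirected()
print("density:", nx.density(G))
print("clustering coefficient:", nx.average_clustering(U))
if nx.is_connected(U):
    print("characteristic path length:", nx.average_shortest_path_length(U))

# Node-level centralities: degree (in/out), betweenness and closeness.
print("in-degree:", nx.in_degree_centrality(G))
print("out-degree:", nx.out_degree_centrality(G))
print("betweenness:", nx.betweenness_centrality(G))
print("closeness:", nx.closeness_centrality(G))
```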
Therefore, for these two urban agglomerations, we should continue to adjust the urban industrial structure on the basis of clarifying urban functions, promoting communication between noncentral cities and central cities, improving the transportation network, accelerating the pace of noncore cities\u2019 integration into the core circle, and building a multicenter and multifunctional urban system with core cities as the mainstay and other cities as the auxiliaries.Compared with other urban agglomerations, there is a highly negative correlation between the industrial structure of the secondary and tertiary industries on the west coast of the straits (v and v) and the economic development within urban agglomerations, which shows that the process of enhancing and upgrading the industrial structure of urban agglomerations cannot effectively promote economic growth. This is due to the blind pursuit of the upgrading of industrial structure along with the neglect of the rationalization of the industrial structure of urban agglomerations. In the model of urban flow intensity combined with the previous analysis, in the urban agglomeration on the west coast of the Taiwan Strait, the urban flow intensity of each city is much lower than that of other cities, which shows that the industries in the urban agglomeration are not closely connected, and the flow of resource elements among cities is not frequent. The development of Guangxi, Yunnan and Sichuan Provinces in the urban agglomeration on the west coast of the Taiwan Strait is relatively slow, and there are complex policy barriers between provinces and cities, which leads to the homogenization of industries among cities in the formed industrial clusters and makes cities more competitive than cooperative. Therefore, compared with other urban agglomerations, the urban agglomerations on the west side of the Taiwan Strait need to consider how to rationalize the upgrading of industrial structure. They should give full play to the intermediary role of central node cities, such as Nanning, Kunming and Chengdu, establish a multicore urban agglomeration network, adjust the rational division of labor among different cities by relying on their abundant natural resources, establish a complementary mechanism of urban functions, and improve the efficiency of the spatial aggregation and diffusion of elements to deepen the economic ties between urban agglomerations on the west side of the Taiwan Strait and other urban agglomerations and ensure healthy economic development in urban agglomerations.Comparing the aggregation and diffusion effects of factor flows in different urban agglomerations under the structure of the traffic correlation network and finding effective solutions to achieve regionally coordinated growth under the premise of individual growth are the focus of this paper.The overall structure of the transportation network of the Pan Pearl River Delta Urban Agglomeration has obvious characteristics of a small world network, showing a typical center-edge structure and gradually evolving to a multicenter transportation network. The density and aggregation coefficient of the network are increasing year by year, and the number of isolated cities in the network is decreasing, which means that the communication between cities in the Pan Pearl River Delta Urban Agglomeration is becoming closer. 
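The panel estimates discussed in this section relate economic growth to the traffic connection network together with industrial structure, urbanization rate and foreign investment. The exact specification is not reproduced in the extracted text, so the following is a minimal two-way fixed-effects sketch on simulated data with hypothetical variable names, meant only to show the general shape of such a model rather than the authors' estimation.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
cities, years = ["A", "B", "C", "D"], range(2008, 2018)

# Simulated panel: one row per city-year, with hypothetical regressors.
rows = []
for c in cities:
    for t in years:
        net = rng.uniform(0.2, 0.9)   # traffic-network connection strength
        ind = rng.uniform(0.3, 0.6)   # share of secondary/tertiary industry
        urb = rng.uniform(0.4, 0.8)   # urbanization rate
        fdi = rng.uniform(0.0, 0.2)   # foreign direct investment intensity
        growth = 0.5 * net + 0.3 * ind + 0.2 * urb + 0.1 * fdi + rng.normal(0, 0.05)
        rows.append(dict(city=c, year=t, growth=growth, net=net, ind=ind, urb=urb, fdi=fdi))
panel = pd.DataFrame(rows)

# Two-way fixed effects via city and year dummies (one assumed specification).
model = smf.ols("growth ~ net + ind + urb + fdi + C(city) + C(year)", data=panel).fit()
print(model.params[["net", "ind", "urb", "fdi"]])
```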
The reduction in the shortest path of information transmission and factor flow between cities and the reduction in transaction costs can promote the frequent flow and transmission efficiency of resource factors in urban networks. From the analysis of the characteristics of a single network structure, the aggregation and diffusion of resource elements in the transportation network of the Pan Pearl River Delta Urban Agglomeration lead to significant differences in the development of urban nodes.The increase in network density and the network aggregation coefficient and the decrease in the characteristic path of the traffic-related network in the Pan Pearl River Delta Urban Agglomeration can effectively promote the economic ties among cities in a city group. This shows that the improvement in the individual traffic network structure enables more node cities to effectively support the aggregation and diffusion of elements and act as bridges of element transmission, resulting in more frequent exchange of elements among cities and promoting the economic growth of urban agglomerations. In addition, the improvement in transportation network connectivity has stimulated the vitality of regional industry and investment, indirectly promoting economic growth. However, the impact of the transportation network on economic growth also shows regional heterogeneity, which depends on whether the internal industrial structure layout of suburban agglomerations can reasonably absorb resource elements to achieve cooperation rather than competition and whether the degree of transportation links between cities can maximize the efficiency of factor flow.This paper analyzes the spatial evolution pattern and network characteristics of the transportation network of the Pan Pearl River Delta Urban Agglomeration and studies the influence of the transportation network on the economic growth of the urban agglomeration. The main conclusion is as follows:From the perspective of urban agglomerations as a whole, mega-urban agglomerations contain a large number of urban individuals with different resource endowments and development patterns. Transportation is a physical bridge to promote information exchange and industrial cooperation among these cities and finally form integrated development. Therefore, it is a priority to further strengthen the construction of transportation infrastructure in mega-city groups. Of course, this kind of investment is not extensive, so we should give full consideration to the position of cities in urban agglomerations and make appropriate investments. For example, marginal cities need to increase investment in urban transportation infrastructure, rationalize the industrial layout, make effective use of the intermediary role of surrounding semicore cities, and strengthen ties with core cities. Moreover, cooperation between local governments is a necessary supporting measure to lower the administrative barriers between cities and avoid the hidden obstacles of local protectionism to the factor flow.From the perspective of internal suburban agglomerations in the Pan Pearl River Delta Urban Agglomeration, the intercity transportation network has become complete, and the industrial layout has gradually reached a high quality. The factor aggregation effect is obvious enough, and increased aggregation may support intercity diffusion, the export of surplus factors and practical experience and economic development in the surrounding areas. 
For the Beibu Gulf and Changsha-Zhuzhou-Xiangtan-Sichuan-Chongqing Urban Agglomerations, the internal industrial layout is more reasonable. The government should pay more attention to strengthening the communication between noncentral cities and central cities, improving the transportation network, and speeding up the process of integrating noncore cities into the core circles. For the urban agglomerations on the west side of the straits, we should give full play to the intermediary role of Nanning, Kunming and Chengdu, establish a multicore urban agglomeration network, and improve the efficiency of the spatial aggregation and diffusion of elements to deepen the economic ties between the urban agglomerations on the west side of the straits and other urban agglomerations and realize the coordinated development of the internal economy of the Pearl River Delta Urban Agglomeration.The above conclusions yield the following policy implications:Finally, our research mainly analyzes the influence of traffic networks on economic growth across the whole urban agglomeration, but we have not paid attention to the heterogeneous structure of the network and the specific differences in internal development modes in different regions. In addition, we have not revealed the internal mechanism by which the traffic network affects economic growth. This can be achieved with more detailed research and case analysis in the future."} +{"text": "The in-situ health condition of carbon fiber reinforced polymer (CFRP) reinforced structures has become an important topic, which can reflect the structural performance of the retrofitted structures and judge the design theory. An optical fiber-based structural health monitoring technique is thus suggested. To check the effectiveness of the proposed method, experimental testing on smart CFRP reinforced steel beams under impact action has been performed, and the dynamic response of the structure has been measured by the packaged FBG sensors attached to the surface of the beam and the FBG sensors inserted in the CFRP plates. Time and frequency domain analysis has been conducted to check the structural feature of the structures and the performance of the installed sensors. Results indicate that the packaged Fiber Bragg Grating (FBG) sensors show better sensing performance than the bare FBG sensors in perceiving the impact response of the beam. The sensors embedded in the CFRP plate show good measurement accuracy in sensing the external excitation and can replace the surface-attached FBG sensors. The dynamic performance of the reinforced structures subjected to the impact action can be straightforwardly read from the signals of FBG sensors. The larger impact energies bring about stronger impact signals. Carbon fiber reinforced polymer (CFRP) composites have been extensively used in strengthening projects due to their high strength, lightweight, corrosion resistance, and design flexibility. CFRP-reinforced structures have become the commonly used structural type in engineering ,2,3. ImpConsiderable research has contributed to exploring the dynamic performance of CFRP reinforced structures . 
Due to the superior advantages of absolute measurement, anti-electromagnetic interference, good geometrical shape-versatility, high precision, compact size, lightweight, convenient multiplexing, and integration of sensing networks over other sensors [22,23,24], and given the analysis above, the dynamic response of the steel beam retrofitted by smart CFRP plates under impact action has been explored by the surface-attached packaged Fiber Bragg Grating (FBG) sensors and the FBG sensors inserted in the CFRP plates. Impact testing with different impact energies has been performed to check the performance of the reinforced structures. Time and frequency domain analysis has been conducted to assess the performance of the structures and the sensors. Based on the data analysis, a few suggestions on the sensor design and the reinforced structures have been provided. To explore the dynamic response of CFRP reinforced steel beams under impact action, a testing sample installed with various kinds of FBG sensors has been fabricated. The testing sample consists of one I-steel beam reinforced by a smart CFRP plate. The cross section of the I-steel beam is 125 × 125 mm2, and the length of the beam is 1600 mm. The material properties of the steel beam obey the standard GB/T 11263-2017. Packaged FBG sensors and bare FBG sensors have been attached to the surface of the smart CFRP reinforced steel beam, and the layout is shown in the corresponding figure. All the FBG sensors have been connected to the FBG interrogator si255 produced by Micron Optics (MOI), which has a sampling frequency of 5 kHz and a wavelength resolution of 1 pm. The impact action is provided by the free falling of steel balls with different diameters at different heights. Fixed constraints have been applied to the two ends of the beam by means of clamps. Four heights and four weights of steel balls have been tried in the testing, and the cases for the impact action are listed in the corresponding table. It has been found from the wavelength increment diagrams that the FBG signals on the web have similar waveforms under the various impact conditions. Similar to the time domain analysis of the measured signals on the web, the signals measured at the other locations show the same behavior, and the wavelength increment diagrams of the FBG sensors embedded in the CFRP plates likewise have similar waveforms under the various impact conditions. Time domain analysis shows that the FBG sensor can accurately monitor the dynamic response signal. To identify the dynamic characteristics of the CFRP reinforced steel beam under the impact action, frequency domain analysis is further required [29]. The equation for power spectrum estimation by the periodogram method is shown in Equation (3). It is found from the power spectrum that the impact signals of P-FBGb (FBGs in series: 1–6) on the beam bottom are less pronounced in either impact condition. The reason may be that the packaged sensor has not been well attached to the CFRP reinforced beam due to the poor installation quality. Another sensor, P-FBGbb (FBGs in series: 1–6), also located at the beam bottom, shows good sensitivity to the impact signal.
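Equation (3) refers to the periodogram estimate of the power spectrum used in the frequency-domain analysis. A minimal sketch with SciPy is given below; the 5 kHz sampling rate matches the si255 interrogator described above, while the synthetic two-mode decaying signal merely stands in for a measured wavelength-increment record.

```python
import numpy as np
from scipy.signal import periodogram

fs = 5000.0                      # interrogator sampling frequency, Hz
t = np.arange(0, 1.0, 1.0 / fs)  # 1 s record

# Synthetic wavelength-increment signal: two decaying modes plus noise
# (a stand-in for a measured FBG response to an impact).
signal = (0.04 * np.exp(-3 * t) * np.sin(2 * np.pi * 32 * t)
          + 0.01 * np.exp(-5 * t) * np.sin(2 * np.pi * 118 * t)
          + 0.002 * np.random.default_rng(1).normal(size=t.size))

f, pxx = periodogram(signal, fs=fs)   # periodogram power spectral density estimate
peak = f[np.argmax(pxx)]
print(f"dominant frequency of the record: {peak:.1f} Hz")
```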
Therefore, the frequency domain analysis of the measured signal on the beam bottom will focus on P-FBGb2, P-FBGbb (FBGs in series: 1\u20136) and B-FBGb.It can be seen from the power spectrum that the peak value of the high-frequency component of the FBG sensors embedded in the CFRP plate is extremely insignificant and difficult to identify due to the influence of the DC component of the FBG signal in the frequency domain. It can be seen from the power spectrum that when the impact energy is small, the peaks of the high-frequency component of the FBG sensors embedded in the CFRP plate or externally attached to the surface of the CFRP plate are extremely inconspicuous. It can lead to difficult identification and then the case with the largest impact energy is selected for analysis. To develop effective health monitoring techniques for identifying the dynamic response of CFRP reinforced structures, impact testing of steel beam retrofitted with smart CFRP plates has been performed. Time and frequency domain analysis has been conducted to assess the dynamic performance of the reinforced structures and the sensitivity of the installed FBG sensors. Through the study, the following conclusions can be drawn:(1) The amplitude of the wavelength increments of the FBG on the web increases with the growth of the impact energy, which validates the effectiveness of the proposed monitoring techniques. The CFRP reinforced steel beam shows good impact resistance during the impact testing, and no defect or damage has been identified from the time and domain analysis.(2) The packaged FBG sensor is more effective than the bare FBG sensor in monitoring the external excitation-induced dynamic response of the CFRP reinforced steel beam, which is different from the static case. The FBG sensors embedded in the CFRP plate can replace the sensors attached to the surface of the CFRP plate to perceive the impact response.(3) The peak values of the power spectrum at different impact energies vary. The peak frequencies transformed from the signals of FBG sensors attached on the web at each impact condition are basically identical . The peak frequencies transformed from the signals of FBG sensors attached to the beam bottom at each impact condition are basically the same . The slight difference can be attributed to the noise disturbance.(4) The sensor used for temperature measurement cannot be embedded in the structure with a hollow pipe, which is different from the static case. For the temperature compensation of structures under dynamic load, it is suggested that the sensor merely for temperature measurement should be around the structure to avoid the possible vibration-induced dynamic deformation."} +{"text": "Fear of falling (FOF) is prevalent among older adults. While the concept has been conceptually defined and factors associated with FOF has been extensively explored in nursing and other health sciences, the experience of this fear is often overlooked. The purpose of this study was to illuminate the phenomena of FOF using the rich descriptions elicited from four (Nf4) older adults. The meaning of these experiences was then interpreted in conjunction with the known and emergent FOF body of literature. Each participant was interviewed twice using videoconferencing as part of a larger study exploring the lived experience of being at risk for falling in the hospital. A total of eight interview transcripts were analyzed using van Manen\u2019s interpretive phenomenological methodology. 
The philosophical underpinning of this study is the philosophy of caring in nursing outlined by Kari Martinsen. Three major interpretive themes emerged: Loss of Self, Part of my Existence, and Remaining Safe Within the Boundaries of Fear. These themes describe how FOF is the fear of being suspended in time and space and losing connection with oneself both physically and mentally during a fall. The fear becomes a part of one\u2019s existence, ranging from worry to all-consuming panic, and the body becomes unpredictable. FOF means living within invisible boundaries of fear, where feelings of helplessness, uselessness, and isolation are common. Relationships with others can both temper and ignite the FOF, and caregivers must understand the meaning of this experience to improve support of older adults in managing this overwhelming experience."} +{"text": "Reply to the Editor:The author reported no conflicts of interest.Journal policy requires editors and reviewers to disclose conflicts of interest and to decline handling or reviewing manuscripts for which they may have a conflict of interest. The editors and reviewers of this article have no conflicts of interest.The ,,Over the past few decades, a wide variety of risk-stratification systems have been investigated and developed to quantify the perioperative risk of patients who undergo cardiac surgery.Taken together, while the risk prediction is undoubtedly important for clinical decision-making in individual patients, it should be considered as a model undergoing development that combines clinical judgment with new and established risk factors."} +{"text": "The innate and adaptive arms of the immune system are involved in maintaining organism homeostasis . PhysicaIn conclusion, the collection of the 10th Anniversary of Cells: Advances in Cellular Immunology focuses on some hot topics and novel advancements in our understanding of the mechanisms of regulation of autoimmune response. Indeed, when the cellular processes and molecular mechanisms involved are better clarified, novel designed drugs highly specific for suitable target molecules allow physicians to efficiently treat autoimmune diseases, avoiding the use of unspecific immunosuppressive molecules. On the other hand, preventing the break that leads to autoimmunity will help to reestablish the anti-tumor immune response."} +{"text": "The modality of assessment used at a University Clinical Practice in Brazil is interventive psychodiagnosis in which the active participation of children and families is considered. Orientation is given following the input provided by children and their parents.Evaluating the use of an electronic form to be fulfilled during the observation of a child\u2019s play in psychological session.A child at the age of 5yrs 4m was brought for psychological assessment with the complaint of aggressiveness and irritability. His parents answered the Child Behavior Checklist (CBCL -1 1/12 5 yrs) and the Psychology interns had to observe the child\u2019s play and fulfill an electronic form in which the choice of toys and plays, motricity, creativity, symbolic abilities, frustration tolerance, adequation with reality were verified.The results of CBCL indicated that the child was within the clinical range regarding anxiety and depression along with somatic complaints. 
The indicators observed in the electronic form, such as rigidity in the modality of play, the lack of adequate ability to impersonate in role-playing, the difficulty of using creativity during play unless he was guided by peers or the Psychology interns, and the constant anguish of separating himself from his parents, were crucial for the parents' orientation. The psychological treatment lasted five months and benefited from the information obtained through the form, as the symptoms of irritability and aggressiveness were reduced. This modality of assessment can be instructional for parents and may also reduce financial and time costs, since it provides specific indicators to observe during play."} +{"text": "The Differential Assessment of Autism and Other Developmental Disorders (DAADD) test was applied by the therapists. Among the 22 children participating in the research, 20 did not score on the apraxia item. Only two children were referred with apraxia, and twelve had receptive language and pre-academic skills proportional to their age. Of the 22 participants, only three were overly excited for verbal productions. The analysis of the data suggests that the occurrence of CAS in children with ASD is low and underlies the disorder."} +{"text": "Optical Coherence Tomography (OCT) to measure retinal thickness is the current method to observe neurological impairment in neurodegenerative diseases [1] and in mental disorders [2], because the retina itself is an anatomic extension of the brain. Some factors, such as years of study, can improve resilience. Our aim is to evaluate cognitive and clinical impairment in Bipolar Disorder and to examine its correlation with retinal thinning. Twenty-seven patients diagnosed with Bipolar Disorder were assessed in the context of the FINEXT programme (3). Selective attention, executive functions and verbal memory were measured among other variables. Using the OCT technique, we measured the thickness of the ppRNFL and of the RNFL, GCL and IPL layers in the macula in both eyes through several radial segments. Partial correlations were performed with Bonferroni correction (p≤0.006), adjusted for age and academic status except for the variable years of study, which was adjusted for age. Significant direct correlations were observed between: - years of study and the thickness of the retina in the optic nerve head (NO) and RNFL; - selective attention and the GCL and RNFL layers; - executive function and the GCL and IPL. We can observe some preliminary results showing a significant correlation between some layers of the retina, upper segments more frequently, and the outcomes of the neurocognitive assessment. We can also see a relationship between years of study and the thickness of the Retinal Nerve Fibre Layer in the retina and optic nerve head, which contains the axons of the neurons in the eye. No other significant relationships were observed."} +{"text": "Suspenders are the crucial load-bearing components of long-span suspension bridges, and are sensitive to the repetitive vibrations caused by traffic load. The degradation of suspender steel wire is a typical corrosion fatigue process. Although the high-strength steel wire is protected by a coating and protection system, the suspender is still a fragile component that needs to be replaced many times in the service life of the bridge. Flexible central buckles, which may improve the wind resistance of bridges, are used as a vibration control measure in suspension bridges and also have an influence on the corrosion fatigue life of suspenders under traffic load.
This study established a corrosion fatigue degradation model of high-strength steel wire based on the Forman crack development model and explored the influence of flexible central buckles on the corrosion fatigue life of suspenders under traffic flow. The fatigue life of short suspenders without buckles and those with different numbers of buckles was analyzed. The results indicate that the bending stress of short suspenders is remarkably greater than that of long suspenders, whereas the corrosion fatigue life of steel wires is lower due to the large bending stress. Bending stress is the crucial factor affecting the corrosion fatigue life of steel wires. Without flexible central buckles, short suspenders may have fatigue lives lower than the design value. The utilization of flexible central buckles can reduce the peak value and equivalent stress of bending stress, and the improved stress state of the short suspender considerably extends the corrosion fatigue life of steel wires under traffic flow. However, when the number of central buckles exceeds two, the increase in number does not improve the service life of steel wire. With the rapid development of highway transportation, long-span suspension bridges are constructed across mountains, valleys, and rivers for their good mechanical characteristics and excellent spanning performance. The construction of early large-span suspension bridges was limited by experience and technology, and structural vibration control measures were relatively lacking, leading to obvious vibration responses under external load. The longitudinal vibration displacement of structures caused by external load may lead to the fatigue of expansion joints and other ancillary components. Traffic load has been proven to be one of the main reasons causing the longitudinal vibration displacement of structures. Such vibration may cause the fatigue of expansion joints and other ancillary components ,2. SuspeCorrosion fatigue is the phenomenon of crack formation and propagation under the interaction of alternating load and a corrosive medium that leads to a reduction in fatigue resistance . ScholarFurthermore, the axial stress and bending stress fluctuations caused by relative displacements between the girder and cables easily damage the short suspenders along with fatigue degradation ,12. To rTraffic flow is an important vibration source in the suspender stress response. Suspension bridges are a flexible system and structural deformation is evident under the action of traffic flow, which varies with traffic density. Characteristics such as traffic flow parameters, vehicle type, and vehicle weight generally have random distribution ,21. The This study takes the Zhixi Yangtze River Bridge as the research object. The bridge is a single-span steel\u2013concrete composite girder suspension bridge. The section layout is shown in As a common vibration control measure, central buckles are used to improve the vibration response of suspension bridges. These buckles are generally installed in the middle span; examples include the Runyang Yangtze River Bridge and the Sidu River Bridge, in which the rigid central buckle is installed in the middle span. Existing research indicates that the rigid central buckle can improve the structure frequency and reduce the longitudinal displacement response of the girder . In the To simulate the structural characteristics, a three-girder model of a prototype bridge was established using ANSYS 18.0. 
The FE model is shown in The theoretical material properties and cable force vary from the actual state of the structure; thus, the FE model was modified according to the measured material properties and cable force in construction. Then, the structure frequency was calculated by the modal analysis module of ANSYS software, and the Block Lanczos feature solver based on the Lanczos algorithm was used in modal analysis. When calculating the natural frequencies of a certain range contained in the eigenvalue spectrum of a system, the Block Lanczos method is particularly effective for extracting modes. The frequencies of the FE model were compared with measured structure frequency to validate the FE model. The research team undertook the monitoring of the structural state during the bridge\u2019s construction. After construction, the actual vibration mode and frequency of the bridge were measured through the modal test analysis system. Besides the modal test, the vehicle loading experiment was conducted to test its deformation performance under external load. The stress of long-span bridge suspenders is caused mostly by dead load; thus, the amplitude of stress change caused by vehicle load and other live loads is relatively small and is far lower than the fatigue limit of steel wire. Therefore, the degradation process is a typical corrosion fatigue process; that is, the corrosion defects on the steel wire surface develop into initial crack damage. The entire steel wire degradation process can be divided into stages of the development of corrosion and crack propagation, as shown in 2 due to the specification of bridge designation [2. Thus, the depth of a Zn-Al alloy coating can be calculated according to its density (6.58 g/cm3) and ranges from about 29 \u03bcm to 34 \u03bcm. The corrosion of steel wire includes uniform corrosion and pitting corrosion. Uniform corrosion describes the degree of average corrosion of the steel wire surface, which directly causes the reduction in the diameter of the steel wire, and the extent of diameter reduction is assumed to stay unchanged along the steel wire length . The steignation . Accordidc is the corrosion depth of the zinc\u2013aluminum alloy coating, ds is the corrosion depth of the steel wire, t is corrosion time, and tc is the time when the coating is totally corroded.The uniform corrosion of high-strength steel wire undergoes a two-stage corrosion process; that is, the corrosion of the coating and the corrosion of the steel wire substrate. The corrosion rate can be described as Equation (1).t is corrosion time.According to the preliminary work of the research team, the corrosion process of Galfan steel wire is measured by an accelerated corrosion test and can be simulated by parabola distribution as Equation (2) . The corIn service conditions, the corrosion rate of Galfan coating is significantly different because of the exposure environment. As is well known, field exposure tests are difficult to conduct due to the high cost of time, and it is also difficult to find exactly matched field exposure test results. Thus, the time conversion scale was determined by the field exposure test of Galfan coating by Aoki and Katayama, in which a hot-dipped Galfan-coated steel plate with a 25 \u03bcm coating was investigated ,28. AssuWhen the metal material surface has a passive or protective film, the pitting pit on the substrate surface appears after the protective layer is consumed, greatly affecting the characteristics of the steel wire. 
Pitting corrosion occurs randomly, accompanied by uniform corrosion . Given tThe distribution of the maximum pitting coefficient conforms ribution , which cThen, the distribution parameter of any wires with different lengths and diameters can be calculated by Equation (5):Stress concentration happens due to the shape characteristics of the corrosion pit. As the depth of the corrosion pit increases, a crack will occur when the stress intensity reaches a critical value. The transition process from pitting to cracking can be determined by two methods: (1) the growth rate of the fatigue crack exceeding that of the corrosion pit and (2) the stress intensity factor of the corrosion pit reaching the critical threshold of fatigue crack propagation. This study adopts the former method. The steel wire crack dominates when the development speed of the pitting pit depth exceeds that of the crack.a is the depth of the crack, C and m are the parameters of the Paris criterion [R is the stress ratio of alternating load.Corrosion cracks expand until failure under the stress cycle caused by an operating live load. The Forman formula is used to analyze the growth rate of a metal corrosion fatigue crack, as shown in Equation (6).riterion , Kc is ta is the crack depth, b is the diameter of the steel wire, The stress intensity factor a is the crack depth, and b is the diameter of the steel wire.tion (8) .(8)Faabi; To consider the effect of daily traffic flow on the structure comprehensively, a crack depth development model is established on the basis of daily traffic flow operation according to the traffic load investigation, as shown in Equation (9).Vehicles can be classified into different types according to axle distance, axle number, vehicle load, etc. Vehicle subsystems are commonly simplified as a car body, wheels, a shock mitigation system, and a damping system. The corresponding dynamic models are established on the basis of the hypothesis that the mass of the damper and spring components are ignored. For example, a three-axle vehicle is shown in i denotes road surface roughness.Vehicle wheels always keep contact with the deck; the bridge deformation caused by an external load leads to the vibration response of the vehicle and bridge subsystems; the dynamic response is influenced by the overall total mass matrix, damping matrix, and the overall stiffness matrix of the subsystem; and the road surface roughness is the main excitation source. Therefore, the interaction force between the vehicle and bridge system is a function of the vehicle\u2013bridge system\u2019s motion state and road roughness, which can be analyzed in the established vehicle\u2013bridge analysis system . The roaThe vehicle load data monitored in a region is used to further evaluate the degradation process of suspenders under traffic load. The data were collected from the traffic load of a long-span bridge for one month by a weigh-in-motion (WIM) system. As a tensioned component, flexible central buckles cannot support the vibration response of the midspan main beam or cable as the rigid central buckles, but they can still affect the overall response of the structure by changing the fastening force. The connection system formed by the central cable and the suspenders changes the distribution of the force of the suspender near the midspan under traffic load. Traffic load is the main inducement of the bridge vibration response. 
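Returning to the crack-growth stage, Equation (6) above is the Forman relation for corrosion-fatigue crack growth, whose standard form is da/dN = C * dK**m / ((1 - R) * Kc - dK). The sketch below integrates it cycle block by cycle block up to a critical depth, using a simplified dK = dsigma * Y * sqrt(pi * a) in place of the wire geometry correction F(a/b) of Equations (7) and (8); all constants are illustrative placeholders rather than the calibrated values of this study.

```python
import numpy as np

def forman_life(a0, a_crit, dsigma, R=0.1, C=1.0e-10, m=3.0, Kc=2000.0, Y=0.7, block=10000):
    """Integrate the Forman law da/dN = C*dK**m / ((1 - R)*Kc - dK) in blocks of
    `block` cycles until the crack depth reaches a_crit or growth becomes unstable.
    Units assumed: a in mm, dsigma in MPa, K in MPa*sqrt(mm); constants are
    illustrative, not the paper's calibrated values."""
    a, cycles = a0, 0
    while a < a_crit:
        dK = dsigma * Y * np.sqrt(np.pi * a)      # simplified stress intensity range
        if dK >= (1.0 - R) * Kc:                  # fracture-toughness limit reached
            break
        a += C * dK ** m / ((1.0 - R) * Kc - dK) * block
        cycles += block
    return cycles

# Hypothetical case: 0.2 mm pit-induced initial crack, critical depth 2.5 mm,
# 120 MPa equivalent stress range from combined axial and bending action.
print(f"cycles to failure ~ {forman_life(0.2, 2.5, 120.0):.3e}")
```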
To study the improvement effect of central buckles on bridge vibration, this study analyzes the response of bridges under conditions such as no central buckle and settled flexible central buckles. The detailed analysis conditions are shown in The fatigue life of suspender wires can be predicted by the corrosion fatigue theory, and the detailed prediction process is shown in E is the elasticity modulus of steel wire, and The dynamic test under truck load in Only the bending stress and axial stress of partial suspenders are shown due to layout constraints. The variation of the axial stress of short suspenders is small, whereas the length of short suspenders near the midspan is too small to release stress; the bending stress is greater than long suspenders. The influence of bending stress cannot be neglected in the analysis of suspender degradation. The settlement of flexible buckles considerably reduces the bending stress of suspenders but has minimal effect on the axial stress. The bending stress of short suspenders near suspender no. 26 (midspan) slightly decreases, whereas the long suspenders are almost unaffected. Thus, short suspenders nos. 21\u201326 are selected as analysis objects.The traffic load is divided into different levels according to traffic density and then used for loading to calculate the structural dynamic response under traffic conditions. The generation of pitting corrosion is a random process, and the maximum pitting depth directly affects the generation of cracks and fatigue life. In order to reflect the difference in steel wire life, the corrosion fatigue degradation of steel wire under different working conditions was simulated. A total of 150 samples for each analysis condition were sampled based on randomly generated maximum pitting coefficients, and then the transition from pitting corrosion to cracking and the crack development were calculated on the basis of the proposed predicting process. The intensity of traffic flow greatly influences the stress response of suspenders. The bending stress of short suspenders is considerably greater than that of long suspenders. The setting of flexible central buckles can effectively reduce the peak value of bending stress, but when the number of central buckles exceeds two, the increase in number does not remarkably weaken the bending stress. In addition, the buckles can share the axial stress of the suspender between inclined cables, and the weakening effect is affected by the setting position.According to numerical analysis results, the fatigue life of short suspender wires under traffic load is remarkably lower than that of the other suspenders due to large bending stress (about 27\u201335 years). The setting of buckles can effectively reduce the equivalent bending stress amplitude, but the equivalent axial stress amplitude does not remarkably decrease. The improved stress state of the short suspenders considerably extends the fatigue life of the steel wires under traffic flow (about 174\u2013179 years); by contrast, the increase in the number of buckles has a minimal effect on steel wire life and extreme stress values.The influence of a flexible central buckle on suspension bridge vibration was remarkable, but the control effect on short suspenders is still unknown. This study established the corrosion fatigue degradation model of high-strength steel wire based on traffic composition and explored the influence of flexible central buckles on the corrosion fatigue life of suspenders under traffic flow. 
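A hedged sketch of the sampling step mentioned above: maximum pitting coefficients are drawn from an extreme-value (Gumbel) distribution with assumed parameters, and each sample is mapped to a fatigue life through a deliberately simplified monotone placeholder rather than the coupled corrosion and crack model of the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed Gumbel parameters for the maximum pitting coefficient; the paper scales
# these with wire length and diameter (its Equation (5)), which is not reproduced here.
LOC, SCALE = 4.0, 0.8
N_SAMPLES = 150  # matches the 150 samples per analysis condition quoted in the text

def sample_max_pitting_coefficients(n=N_SAMPLES):
    """Random maximum pitting coefficients (ratio of max pit depth to uniform depth)."""
    return rng.gumbel(loc=LOC, scale=SCALE, size=n)

def fatigue_life_years(pit_coefficient, uniform_depth_um=30.0, life_ref_years=30.0):
    """Placeholder life model: deeper initial pits crack earlier and shorten life.
    An illustrative monotone mapping only, not the paper's degradation model."""
    max_pit_depth = pit_coefficient * uniform_depth_um  # um
    return life_ref_years * (LOC * uniform_depth_um / max_pit_depth) ** 2

if __name__ == "__main__":
    coeffs = sample_max_pitting_coefficients()
    lives = np.array([fatigue_life_years(c) for c in coeffs])
    print("mean life  %.1f years" % lives.mean())
    print("5th-95th percentile  %.1f - %.1f years" % tuple(np.percentile(lives, [5, 95])))
```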
To improve the consideration of traffic flow, the WIM data were processed according to traffic density and used to analyze the suspender response under traffic flow of different densities. The fatigue life of short suspenders without buckles and with different numbers of buckles was analyzed based on monitoring traffic data. The following conclusions were drawn:The dynamic motion of the bridges is complex for diverse loads. Moreover, the fatigue behavior of short suspenders and the vibration control effect are influenced by other loads, such as wind, earthquakes, and other special conditions. The optimal design of flexible central buckles should be studied further."} +{"text": "Adsorption of polymeric inhibitor molecules to calcium carbonate crystal surface was investigated. Inhibiting efficiencies of phosphonic acid-based antiscalants are dependent on the amount of adsorbed material on the growing crystal surface. A strong antiscalant even at a small dose provides the necessary rate of adsorption. Comparison of two phosphonic-based antiscalants was made both in laboratory and industrial conditions. A distinguishing feature of the strong antiscalant is the presence of aminotris (metylene-diphosphonic acid) ATMP. Experimental dependencies of antiscalant adsorption rates on the antiscalant dosage values were determined. Emphasis is given to the use of nanofiltration membranes that possess lower scaling propensities. Modernization is presented to reduce operational costs due to antiscalant and nanofiltration membranes. The main conclusion is that control of scaling should be implemented together with the use of nanofiltration membranes. Application of antiscalants to control sparingly soluble salts that deposit in reverse osmosis membrane units has become one of the most important and significant issues in reverse osmosis practice ,2,3,4. APhosphonates or phosphonic acid-based antiscalants are still recognized as leading products in the reverse osmosis desalination market ,10. ThisAt the present time, interest in using reverse osmosis in drinking water supply is growing. Many new suppliers of membrane products and service chemicals are constantly appearing on the market. The tender principle of procurement leads to the situation that a low quality and inefficient product is supplied. The problem is the correct formulation of the antiscalant composition requirements. This is due to an incorrectly formulated requirement to supply a mixture of sodium salts and phosphonic acids, which is not enough to solve the problem. The initial choice of the antiscalant should be based on the results of the analysis, but later, when large amounts of antiscalants are purchased, we should analyze antiscalant samples\u2019 nuclear magnetic resonance spectrums to ensure that the required product is used. In this article, the authors make an attempt to demonstrate a comparison of antiscalant efficiency with an account of the reduction of operational costs due to the dosing of high-quality product in the feed water.Determination of calcium carbonate crystallization rates depending on antiscalant dose and membrane type.Evaluation of antiscalant adsorption rates for different doses.Dependencies of adsorption rates on the antiscalant type and its dose.The goal of the experiments was to demonstrate the adsorption characteristics of different antiscalants and to connect adsorption ability and antiscaling efficiency of different products with the purpose of developing guidelines to reduce operational costs. 
The experimental program included:The laboratory test unit flow diagram is shown in The amount of calcium carbonate deposited on the membrane surface was determined by mass balance considerations as a difference between the amount of calcium in the feed water tank in the beginning of experiment and the amount of calcium carbonate at the moment of the experiment. Deposition rates of calcium carbonate were defined as the derivative of the function of the amount of deposited calcium over time.Feed water (ground water in the Moscow region) was added to feed water tank 1 and was then pumped by pump 2 to membrane module 3. Membrane modules model 1812 70 NE with nanofiltration membranes and 1812 BLN with low pressure reverse osmosis membranes were used. The Procon rotary pump was supplied by Procon Products, Smyrna, TN, USA and produced 180\u2013200 L per hour at a pressure of 16 bar. The experiments were carried out using serial membrane elements of the 1812 standard model produced by Toray Advanced Materials Korea Inc. with reverse osmosis membranes of the BLN model and nanofiltration elements of the model with membranes of the 70 NE type with a selectivity of 70%. The area of the membranes in the apparatus model 1812 was 0.5 square meters.Concentrate samples were taken from tank 1. Calcium, chloride, sulphate and bicarbonate ions concentrations as well as pH and TDS values were determined in the samples. The test procedure and calculation techniques to evaluate scaling rates in the presence of antiscalants and without their addition has been discussed in a number of publications . Figure During the conducted test runs, concentration values of antiscalants (concentrations of phosphate ions) were determined a. DependAminat-K demonstrates lower scaling rates than Jurbi-Soft at different doses. The membrane type also influences scaling rate. As can be seen in Antiscalant suppliers are usually limited by general recommendations to add 1 to 10 milligrams of antiscalant per one liter of RO feed water. More detailed recommendations can be provided in accordance with the calculations that suppliers have developed for their product for a variety of application conditions. Thus, the dosing of antiscalant changes depending on supersaturation ratio reached in the membrane module depending on feed water composition and recovery values . In our Application of antiscalants provides reduction of scaling and higher values in the period between the cleanings. Figures demonstrate results of scaling rate evaluations as a function of coefficient of initial volume reduction K for the cases of different antiscalants and different dose applications. In cases when another antiscalant is used which demonstrates lower efficiency, the time period between cleanings decreases. Very often. applied cleanings are economically unreasonable. If we extend the period between cleanings, the amount of accumulated calcium carbonate grows and the efficiency of membrane cleaning decreases ,16. ThusAntiscaling efficiency of the dosed antiscalant provides reduction in scaling rates and longer operational period between cleanings. Another expense item is concentrate disposal. Concentrate produced by a reverse osmosis plant used for drinking water supply is forwarded to a sanitation sewer. For cases when ground water in Moscow region is treated by RO, the K value is usually 3\u20134, which means that concentrate flow value is 1/3 to 1/4 of the feed water flow. 
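The calcium mass balance used above to estimate deposited calcium carbonate can be written as a short calculation. The measurement values below are invented for illustration only; the conversion simply scales the calcium lost from solution by the CaCO3/Ca molar-mass ratio and differentiates the result over time.

```python
import numpy as np

def caco3_deposited(ca_mg_per_l, volume_l):
    """Mass of CaCO3 (mg) deposited on the membrane, estimated by a calcium mass
    balance on the recirculating feed tank: calcium lost from solution is assumed
    to have precipitated as CaCO3 (molar-mass ratio 100.09 / 40.08)."""
    ca_mass = np.asarray(ca_mg_per_l) * np.asarray(volume_l)  # mg of Ca in the tank
    return (ca_mass[0] - ca_mass) * (100.09 / 40.08)

def deposition_rate(time_h, ca_mg_per_l, volume_l):
    """Deposition rate (mg/h) as the time derivative of the deposited mass."""
    return np.gradient(caco3_deposited(ca_mg_per_l, volume_l), time_h)

if __name__ == "__main__":
    # Illustrative measurements only, not data from the study.
    t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])            # h
    ca = np.array([120.0, 118.0, 113.0, 105.0, 96.0])  # mg/L of Ca2+ in the tank
    v = np.array([30.0, 27.0, 24.0, 21.0, 18.0])       # L remaining in the tank
    k = v[0] / v                                       # coefficient of volume reduction K
    print("K:", np.round(k, 2))
    print("rate mg/h:", np.round(deposition_rate(t, ca, v), 1))
```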
Thus, the water user water tax includes not only the costs for water treatment but costs for additional wastewater discharge. To reduce concentrate flow, an additional stage of nanofiltration membranes is used . ConcentAntiscaling efficiency of the inhibitor depends on its ability to adsorb on the surface of growing crystals. The higher adsorption rate, the better the antiscaling behavior of the inhibitor.To efficiently control scaling and to reduce operational costs of the RO unit it is recommended to tailor the membrane plant with nanofiltration membranes. The joint use of low rejection membranes and efficient antiscalant provides substantial decrease in scaling rates in membrane modules and reduces operational costs.High adsorption abilities of phosphonic-based antiscalant enables us to reduce the antiscalant dose in the feed water without compromising effectiveness of scale control and thus reduce reagent consumption and operational costs.High efficiency of phosphonic antiscalants is attributed to the content of aminotris (methylene=phosphonic acid) in the product. Application of Nuclear Magnetic Resonance method helps to identify the presence of ATMP and to avoid buying low-quality products."} +{"text": "Allergic rhinitis and allergic conjunctivitis are so frequently associated that the need to coin a new name to describe the simultaneous manifestations generated the term allergic rhinoconjunctivitis. The significant impact of rhinoconjunctivitis on the quality of life and the wellbeing of the patients is the reason why the medical community shows a great interest to this disease. Another aspect is the financial burden that is not negligible. The anatomical connection between the organs involved facilitates the propagation of the disease. The allergic pathophysiological mechanisms implicated in allergic rhinitis and conjunctivitis also share common features. The diagnosis of rhinoconjunctivitis is based on the concordance between the symptoms, the clinical examination, and the diagnostic tests that should reveal the existence of an allergen specific IgE in vivo or in vitro. Whilst the nasal smear for eosinophils is considered a reliable diagnostic test for allergic rhinitis, the occurrence of eosinophils in the conjunctive is not a trustworthy indicator of allergy. The therapy of allergic rhinoconjunctivitis is based on patient education, pharmacotherapy, and allergen-specific immunotherapy. The local treatment for the allergic rhinitis is primarily based on topical corticosteroids that also manage the ocular symptoms. The first line of treatment of the ocular manifestations is represented by topical antihistamines and mast-cell stabilizers or double action drugs. The patients with allergic rhinitis and other respiratory allergies are commonly affected by symptoms of ocular allergy. Allergic rhinitis and allergic conjunctivitis are often concomitant diseases showing a strong epidemiological correlation. Allergic rhinitis and allergic conjunctivitis are so frequently associated that the need to coin a new name to describe the simultaneous manifestations generated the term allergic rhinoconjunctivitis. 1].Allergic rhinitis cannot be regarded as an isolated pathology. There are other allergic disorders that are usually associated. Whenever there is an allergic inflammation of the nasal fossae, this inflammation will also involve the mucosa of the sinuses, the mucosa of the middle ear, and the conjunctiva of the eye . 
The financial burden of the disease is not negligible since patients with allergic rhinitis have a twofold increase in medication cost compared to non-allergic patients. In US, allergic rhinitis is the cause for 3.5 million lost workdays and 2 million lost schooldays annually .Although both conditions, allergic rhinitis, and conjunctivitis, may be regarded as trivial, their impact on the general wellbeing and the quality of life of the patients is significant [6]. In relation with the severity of the symptoms, allergic rhinitis can be divided into mild or severe. Allergic rhinitis is an inflammatory IgE mediated disease of the nasal mucosa characterized by nasal congestion, rhinorrhea, sneezing and/ or nasal itching. It can also be defined as an inflammation of the nasal mucosa occurring when a person inhales an allergen. Allergic rhinitis is categorized in connection with the temporal pattern of exposure to triggering agents, to the severity of the symptoms or the frequency of the symptoms. Traditionally, allergic rhinitis is classified into seasonal or perennial allergic rhinitis. The classification in connection with the severity and the frequency of symptoms allows a more appropriate treatment selection. This classification divides allergic rhinitis into intermittent (< 4 days/ week or < 4 weeks/ year) and persistent (> 4 days/ week or > 4 weeks/ year) , the perennial allergic conjunctivitis symptoms last all year round as other types of allergens are involved .Seasonal allergic conjunctivitis and perennial allergic conjunctivitis differ by the timeframe of the symptoms. Whilst seasonal allergic conjunctivitis symptoms are mainly manifested for a defined period of time as a response to airborne allergens during the spring, summer or autumn [8].Vernal keratoconjunctivitis is an inflammation of the conjunctiva occurring in individuals with an atopic terrain. Conjunctival scarring and corneal complications are commonly associated with vernal keratoconjunctivitis. It is not associated with skin positive testing in 42-47% of cases proving that it is not just a IgE mediated disorder [9]. Atopic keratoconjunctivitis is a bilateral inflammation of the conjunctiva and of the eyelids connected with atopic dermatitis. A type I hypersensitivity disorder is involved in the pathophysiology of the disease [10]. Giant papillary conjunctivitis is a disease of the superior tarsal conjunctiva recognizing an immune mediated mechanism. It is believed that there is a combination between the type I and type IV hypersensitivity responses. M cells and B lymphocytes may be implicated in the pathophysiologic mechanisms of giant papillary conjunctivitis [11].Allergies represent the fifth leading group in chronic diseases, but the real prevalence of the involvement of ocular allergy is not well determined. A study on a on a substantial population published in 2010 found that up to 40% of the participants experienced ocular symptoms at least once in their lifetime [12].A study with 200 participants on subjects diagnosed with allergic rhinitis stated that approximately 90% of them also had ocular symptomatology [13]. Bousquet et al. found that approximately 88% of allergic rhinitis patients showing sensibilization to cypress pollen had concomitant allergic conjunctivitis .Evidence suggested that in seasonal allergic rhinitis, ocular symptoms are more frequent than in perennial allergic rhinitis .The original Gel and Coombs classification of allergic reactions divides the immune response in four subtypes. 
Type I immediate or IgE mediated; type II, cytotoxic or IgM/ IgG mediated; type III IgG/ IgM immune complex mediated; type IV delayed type hypersensitivity or T cell mediated . The sensitivity to allergens is predetermined by a genetic tendency. When exposed to a foreign inhaled protein, predisposed individuals develop a sensitivity reaction that leads to the production of specific IgE against these proteins. When the allergen is inhaled, it binds the specific IgE situated on the exterior of the mast cells generating the release of allergic mediators. The allergic immune response recognizes two stages, the early phase, and the late phase. The mediators involved in the early phase are histamine, tryptase, kinase, kinins, heparin, leukotrienes, and prostaglandins. These mediators account for the symptoms specific to allergic rhinitis: rhinorrhea, sneezing itching, nasal congestion. Mucous glands are stimulated, and the vascular permeability is increased explaining the rhinorrhea; vasodilatation explains the nasal congestion; sensory nerves are stimulated explaining the sneezing and itching. The early phase occurs in the first few minutes after the contact with the allergen. The late-phase response starts 4-8 hours after the exposure and may last for days. This phase is distinguished by the fact that it generates the recruitment of inflammatory cells such as neutrophils, eosinophils, lymphocytes, and macrophages. The symptoms of the late phase are similar to those of the early phase with the difference that the congestion and the mucus production is increased whilst the itching and the sneezing are diminished [11].Household dust and pollen were identified as most likely to induce both nasal and conjunctival symptoms producing rhinoconjunctivitis [19]. Unified airways disease is a theory that is built on the consideration that the upper and lower airways are a unified morphologic and functional unit. There are strong clinical, pathophysiological, and epidemiologic evidence that sustain this theory. The mucosa of the nose and of the bronchi present similarities and are constituted of ciliary epithelium, basement membrane, lamina propria, glands, and goblet cells, forming the so-called united airway that explains the same way of reaction. Rhinitis and asthma are chronic inflammatory diseases of the upper and lower respiratory system sharing the same allergic and non-allergic mechanisms. The treatment of rhinitis and asthma must be addressed to both inferior and superior airways in such a way to achieve a management of both diseases . The anatomical connection between the organs involved facilitates the propagation of the disease. The allergic pathophysiological mechanisms implicated in allergic rhinitis and conjunctivitis also share common features [21]. Treatment of the nasal allergy with intranasal corticosteroid can restore the patency of the nasolacrimal duct and diminish the ocular symptoms [22].The anatomical connection between the eye and the inferior meatus of the nasal fossae is realized by the nasolacrimal duct. This duct is a pathway through which the allergens and the mediators from the conjunctiva drain along with the tears in the meatus under the inferior turbinate. Nasolacrimal reflux with upward migration has been allegedly noticed by some authors but is considered unlikely to happen. The blockage of the nasolacrimal duct is expected to increase the tearing considering a simple mechanical perspective [23]. 
Multiple studies have established the fact that there is a nasal-ocular reflex, which explains the interrelation between the nasal and the ocular pathology. The irritation of the nasal mucosa produced by the histamine induces a response mediated by the parasympathetic nervous system. This response is experienced at the level of the conjunctiva and at the level of the opposite nasal cavity, thus generating the symptoms .Nasal allergy also induces an inflammatory response at a systemic level. This systemic upregulation might be responsible for a more rapid and intense infiltration of inflammatory cells in the conjunctiva [26]. In acute allergic conjunctivitis, mast cells are increased in number in bulbar and tarsal substantia propria. Eosinophils are present in some cases, but their absence does not eliminate the diagnosis of allergy. A study on the cytologic examination of the tarsal conjunctival scrapping of 4 groups of patients with pollen allergic conjunctivitis, atopy without conjunctivitis, acute nonallergic conjunctivitis, and normal subjects, concluded that the occurrence of eosinophils in the conjunctive is not a reliable indicator of allergy . The nasal smear for eosinophils is considered a reliable diagnostic test for allergic rhinitis with moderately high sensitivity and high specificity .The histologic exam of the nasal mucosa in allergic rhinitis has special features, presenting with nasal mucosal oedema, vasodilatation, glandular hyperplasia, and eosinophils infiltration in the lamina propria [30].The diagnosis of rhinoconjunctivitis is established by the concordance between the symptoms and the diagnostic tests. The simultaneity of the rhinitis with the ocular manifestations is of paramount importance. The symptoms encountered in the rhinoconjunctivitis are rhinorrhea, nasal obstruction, sneezing, nasal itching, tearing, swelling, ocular itching. There are special characteristics of the face of the patient with allergic conjunctivitis. The nasal crease is a horizontal crease on the nose bridge caused by the rubbing of the nose . Allergic shiners are dark circles around the eyes explained by the vasodilatation and congestion. The rhinoscopy alone cannot distinguish between the allergic and nonallergic types of rhinitis. Usually, a pale, swollen, bluish-gray mucosa in considered to be typical for allergy. The secretions typical to allergic rhinitis are watery and thin . Sometimes, in pollen allergies, differences between the results of the two tests can be observed. This phenomenon is explained by pan allergenic sensitization [32].The diagnostic tests should reveal the existence of an allergen specific IgE in vivo or in vitro. The gold standard is the skin-prick test. Detection of IgE to allergens is considered a second-line test .The therapy of allergic rhinoconjunctivitis is based on patient education, pharmacotherapy, and allergen-specific immunotherapy [34]. High local concentrations of the active principle can be attained in the nasal mucosa without systemic adverse effects. Local corticosteroids administered locally are usually well tolerated with only minor side effects. There are no major differences between the efficacy of different corticosteroid preparations. Studies have shown that their efficacity is quite similar [35]. 
New intranasal corticosteroids are also effective on the ocular allergic symptoms, showing efficacy through the suppression of the nasal-ocular reflex, down regulating the inflammatory cell expression, and re-establishing the patency of the nasolacrimal duct [36].Pharmacotherapy is based on local intranasal corticosteroids. Studies have shown that intranasal corticosteroids are effective not only in controlling the nasal symptoms, but also the ocular symptoms [37]. Antihistamines are also used in the therapy of rhinoconjunctivitis. The mechanism of action of antihistamines is the interference with the histamine receptors H1 in the nasal and ocular mucosae. Second generation antihistamines have a higher specificity for the H1 receptors and do not cross the blood-brain barrier and therefore they are favored, causing less sedation [38]. Ophthalmic lubricants can be utilized to increase the humidity of the eye.Topical decongestants and corticosteroids are used to control the ocular symptoms, but the safety and optimal dosing regimen is still a matter of debate. Topical antihistamines and mast-cell stabilizers or double action drugs are the first line of treatment [39].Allergic specific immunotherapy is nowadays recognized as an effective therapy for allergic diseases. It is recommended in allergic rhinitis and allergic conjunctivitis irrespective of their association with asthma, but when there is evidence of specific IgE sensitization for a relevant inhalant allergen [Allergic rhinoconjunctivitis, the allergic inflammatory disease of the nasal mucosa and of the ocular conjunctiva, is a condition encountered with an increased frequency, related to an important decrease of the quality of life. The conjunctiva and the nasal mucosa have the same type of epithelium, this explaining why the reactivity to both allergens is similar. The allergic pathophysiological mechanisms involved in allergic rhinitis and conjunctivitis also share common features. The anatomic connection through the nasolacrimal duct, the nasal-ocular reflex, and the systemic inflammatory response, explain the relationship between the allergic rhinitis and conjunctivitis and the concomitance of the manifestations. The diagnosis of rhinoconjunctivitis is based on the concordance between the symptoms, the clinical examination, and the diagnostic tests that should reveal the existence of an allergen specific IgE in vivo or in vitro. Whilst the nasal smear for eosinophils is considered a reliable diagnostic test for allergic rhinitis, the occurrence of eosinophils in the conjunctive is not a reliable indicator of allergy. The therapy of allergic rhinoconjunctivitis is based on patient education, pharmacotherapy, and allergen-specific immunotherapy. The local treatment of allergic rhinitis is mainly based on topical corticosteroids that also control the ocular symptoms. The first line of treatment of the ocular manifestations is represented by topical antihistamines and mast-cell stabilizers or double action drugs.Conflict of Interest statementThe authors state no conflict of interest.AcknowledgementsNone.Sources of FundingNone.DisclosuresNone."} +{"text": "The aim of this study was to elucidate the positional relationship between the courses of the angular veins and the facial muscles, and the possible roles of the latter as alternative venous valves.The angular veins of 44 specimens of embalmed Korean adult cadavers were examined. 
Facial muscles were studied to establish their relationships with the angular vein, including the orbicularis oculi (OOc), depressor supercilii (DS), zygomaticus minor (Zmi), zygomaticus major (Zmj), and levator labii superioris (LLS).In the upper face of all specimens, the angular vein passed through the DS and descended to the medial palpebral ligament. In the midface, it passed between the origin of the levator labii superioris alaeque nasi (LLSAN) and the inferior OOc fibers. The vein coursed along the deep surface of the inferior margin of the OOc in all specimens. At the level of the nasal ala, the course of the angular vein was classified into three types: in type I it passed between the LLS and Zmi (38.6%), in type II it passed between the superficial and deep fibers of the Zmi (47.7%), and in type III it passed between the Zmi and Zmj (13.6%). In the lower face of all specimens, the angular or facial vein passed through the anterior lobe of the buccal fat pad.This study found that the angular vein coursed along the sites where facial muscle contractions are assumed to efficiently compress the veins, likely controlling venous flow as valves. The observations made and analysis performed in this study will improve the understanding of the physiological function of the facial muscles as alternative venous valves. The angular vein, or upper part of the facial vein, is formed by the junction of the supratrochlear and supraorbital veins at the root of the nose and becomes the facial vein near to the level of the zygomaticus major (Zmj) . As the Several textbooks indicate that the angular, facial, and ophthalmic veins lack valves \u20136, but sSkeletal muscle contraction is well known to be important in controlling venous blood flow in the lower limbs and in controlling tears in the lacrimal sac and gland. During standing, venous return from the lower limbs is highly dependent on muscular activity, especially calf and foot muscle contractions, which is known as the \u2018muscle pump\u2019 , 9, 10. Maes (1937) suggesteThe angular and facial veins of 44 specimens of embalmed adult Korean cadavers with a mean age of 72.1 years (range: 40\u201394 years) at the time of death were examined. The face was dissected bilaterally to expose the angular vein, facial vein, and facial muscles including the OOc, depressor supercilii (DS), zygomaticus minor (Zmi), Zmj, and levator labii superioris (LLS). The angular and facial veins were traced to observe their courses in relation to the muscles in the upper face, midface, and lower face. No history of trauma or surgical procedure was observed in any specimen.This study was approved by the Institutional Review Board of the Catholic Kwandong University (IRB no. CKU-21-01-0409). All cadavers had been legally donated to the Catholic Kwandong University College of Medicine. Donors voluntarily consented to the dissection and preservation of their body for education and research purposes. The donor\u2019s family agreed with the contents and procedures according to the will of the donors. None of the donors was from a vulnerable population and all donors or next of kin provided written informed consent that was freely given. The study was performed in accordance with the Declaration of Helsinki .The course of the angular vein in the face was recorded in relation to the facial muscles and surrounding structures. In the upper face, it passed through the DS and descended to the medial palpebral ligament in all specimens . 
In the This study has revealed that the courses of the angular vein in the face appear to have close positional relationships with the facial muscles including the OOc, DS, and Zmi. As the angular vein courses deep to the margins of the facial muscles or passes between them, it can be compressed during muscle contractions, affecting venous flow and the spread of inflammation, especially under conditions of blood pooling such as supine, prone, or upside-down positions or during blood backflow during a Valsalva maneuver or exercise.In the upper face, the angular vein passed through the DS fibers that depress the eyebrow, acting as a muscle of the glabellar complex. When the DS contracts, its fibers can compress the angular vein. Thus, blood flow from the angular to the superior ophthalmic vein and then to the cavernous sinus can be controlled, reducing the likelihood of inflammation spreading to the cavernous sinuses.In the midface of all specimens, most parts of the angular vein coursed along the inferior margin of the OOc. The orbital portion of the OOc induces considerable lower eyelid elevation during eye closure . The OOcThe angular vein passed between the superficial and deep fibers of the Zmi, between the LLS and Zmi, or between the Zmi and Zmj. In the first two courses, it can be compressed more due to the narrow space between the muscle fibers. The Zmi also acts with the LLSAN and LLS as one of the muscles causing elevation during several facial expressions and mouth movements . ContracThis study has revealed that the angular vein passes through the buccal fat pad in the lower face, where the mouth and mandible move frequently. Nishihara et al. (1995) indicateThe experimental study of Cotofana et al. (2020) found thThe limitation of this study is that this observation based on the cadaveric dissection was unable to assess the blood flow of the angular vein. As a further study, venous blood flow and diameter of the angular vein or facial vein can be investigated via ultrasound imaging at the sites where the vein was located along with the muscles during several facial expressions.This study found that the angular vein coursed along the sites where facial muscle contractions are assumed to efficiently compress the veins, likely controlling venous flow as valves. The observations made and analysis performed in this study will improve the understanding of the physiological function of facial muscles as alternative venous valves."} +{"text": "People with schizophrenia experience higher levels of stigma compared with other diseases. The analysis of social media content is a tool of great importance to understand the public opinion toward a particular topic.The aim of this study is to analyse the content of social media on schizophrenia and the most prevalent sentiments towards this disorder.Tweets were retrieved using Twitter\u2019s Application Programming Interface and the keyword \u201cschizophrenia\u201d. Parameters were set to allow the retrieval of recent and popular tweets on the topic and no restrictions were made in terms of geolocation. Analysis of 8 basic emotions was conducted automatically using a lexicon-based approach and the NRC Word-Emotion Association Lexicon.Tweets on schizophrenia were heterogeneous. The most prevalent sentiments on the topic were mainly negative, namely anger, fear, sadness and disgust. 
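The lexicon-based emotion scoring used in the study can be illustrated with a minimal sketch. The dictionary below is a tiny stand-in for the NRC Word-Emotion Association Lexicon and the example tweets are invented; the point is only the tokenise-and-count logic.

```python
from collections import Counter
import re

# Tiny stand-in for the NRC Word-Emotion Association Lexicon; the real lexicon
# maps thousands of English words to eight basic emotions plus two sentiments.
TOY_LEXICON = {
    "afraid":   {"fear"},
    "scary":    {"fear"},
    "angry":    {"anger"},
    "hate":     {"anger", "disgust"},
    "sad":      {"sadness"},
    "alone":    {"sadness"},
    "hope":     {"anticipation", "joy"},
    "support":  {"trust"},
    "shocking": {"surprise"},
}

def emotion_profile(tweets):
    """Count emotion-word associations across a collection of tweets."""
    counts = Counter()
    for tweet in tweets:
        for token in re.findall(r"[a-z']+", tweet.lower()):
            for emotion in TOY_LEXICON.get(token, ()):
                counts[emotion] += 1
    return counts

if __name__ == "__main__":
    sample = [
        "Living with schizophrenia can feel scary and you are never alone",
        "I hate how the media portrays schizophrenia, it makes me angry and sad",
        "There is hope and support out there",
    ]
    for emotion, n in emotion_profile(sample).most_common():
        print(f"{emotion:12s} {n}")
```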
Qualitative analyses of the most retweeted posts added insight into the nature of the public dialogue on schizophrenia.Analyses of social media content can add value to the research on stigma toward psychiatric disorders. This tool is of growing importance in many fields and further research in mental health can help the development of public health strategies in order to decrease the stigma towards psychiatric disorders."} +{"text": "An Infection Control Estimate (ICE) Tool was developed based on a previously published concept of applying military planning techniques to Infection Prevention and Control (IPC) management strategies in the acute healthcare setting.Initial testing of the outbreak management tool was undertaken in a large acute hospital in the North-West of England during a localised outbreak of COVID-19. The tool, developed using Microsoft Excel, was completed by trained IPC practitioners in real-time to log outbreak details, assign and manage meeting actions and to generate surveillance data.The ICE tool was utilised across five outbreak control meetings to identify and allocate tasks to members of the outbreak control team and to monitor progress. Within the meetings, the tool was used primarily by the trained IPC Specialist Nurses who were guided by and entered data into the relevant sections. Feedback indicated that the tool was easy to use and useful as the sole repository of outbreak information and data. Suggested improvements following the testing period were made and additional functionality was added.Utilisation of the ICE tool has the potential to improve our understanding of the efficacy of currently employed outbreak management interventions and provides a cognitive support and targeted education for teams responsible for the management of outbreaks. It is hoped that by guiding teams through an outbreak with prompts and guidance, as well as facilitating collection and presentation of surveillance data, outbreaks will be resolved sooner and risks to patients will be reduced. The COVID-19 pandemic generated numerous challenges to healthcare services and has in many cases hastened a transition towards digital ways of working. These challenges highlighted several key issues related to the management of outbreaks of infectious diseases. Firstly, an appreciation of the importance of robust and timely datasets to support decision making has been realised during contact tracing and epidemiological modelling efforts . SecondlThe most recent Centre for Workforce Intelligence review of the IPC (Infection Prevention and Control) workforce highlighted those approaches to infection control service delivery, including outbreak management, are varied and there is little consistency in the training, practice or philosophies underlying IPC services in the UK (United Kingdom) . In respThis article will discuss the first proactive application of this novel outbreak management tool during an outbreak of COVID-19 in a large acute hospital in the North-West of England.aide memoir as a cognitive support to ensure that key elements of planning are not missed. In the context of the ICE, a prototype digital tool was developed using Microsoft Excel and contains data entry fields and guiding questions. The tool supports the development, monitoring, and allows retrospective analysis of the efficacy of outbreak management efforts by facilitating data extraction with potential to be used as metric for outbreak management efficacy and quality. 
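A possible data-model sketch of the kind of record the ICE prototype captures is shown below. The actual tool is a Microsoft Excel workbook, so the class and field names here are hypothetical; the hierarchy-of-controls tagging anticipates the categorisation discussed later in the text.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

HIERARCHY_OF_CONTROLS = (
    "elimination", "substitution", "engineering", "administrative", "ppe",
)

@dataclass
class OutbreakAction:
    description: str
    owner: str
    control_category: str            # one of HIERARCHY_OF_CONTROLS
    raised_on: date
    completed_on: Optional[date] = None

    @property
    def open(self) -> bool:
        return self.completed_on is None

@dataclass
class OutbreakLog:
    organism: str
    ward: str
    actions: List[OutbreakAction] = field(default_factory=list)

    def summary_by_control(self):
        """Simple surveillance output: [open, closed] action counts per category."""
        summary = {c: [0, 0] for c in HIERARCHY_OF_CONTROLS}
        for a in self.actions:
            summary[a.control_category][0 if a.open else 1] += 1
        return summary

if __name__ == "__main__":
    log = OutbreakLog("SARS-CoV-2", "Ward 12")
    log.actions.append(OutbreakAction(
        "Cohort positive patients in side rooms", "IPC nurse",
        "administrative", date(2021, 3, 1), date(2021, 3, 2)))
    log.actions.append(OutbreakAction(
        "FFP3 masks for aerosol-generating procedures", "Ward manager",
        "ppe", date(2021, 3, 1)))
    print(log.summary_by_control())
```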
In a military context, the CE questions are typically used alongside an In 2021, the ICE protype tool was used for the first time during a live outbreak at a hospital in Northwest England. The outbreak concerned SARS-COV-2 and lasted 44 days, during which time 36 patients were infected. The outbreak was contained to a single ward.The ICE tool prototype was used to document information related to the outbreak and answers the seven questions which constitute the estimate . For exan = 41) of outbreak interventions were either elimination or administrative interventions with the minority of interventions (n = 12) representing substitution, engineering controls or personal protective equipment-based interventions.Throughout the outbreak, the ICE prototype tool was used to record key data related to the outbreak in addition to being used to help plan and evaluate control measures. A retrospective analysis of the ICE tool used during the outbreak identified 53 interventions employed. These were automatically analysed by the tool and categorised into the relevant hierarchy of control (HOC) categories . This anThe tool was utilised across five outbreak control meetings to identify and allocate tasks to members of the outbreak control team. The tool also indicates when these tasks were completed. Within the meetings the ICE tool was used primarily by the trained infection control specialist nurses who were guided by and entered data into the tool. It was reported that the tool was found to be easy to use and captured all necessary data, although new data collection fields were suggested during the outbreak including detailed recording of issues and negative impacts of management interventions.The ICE prototype is the first digital tool which centralises key data related to the management of outbreaks in the context of inpatient hospital settings. The data captured within the ICE tool has the potential to improve our understanding of the efficacy of currently employed outbreak management interventions, the economic impacts of outbreaks and provides a cognitive support and targeted educational tool for teams responsible for the management of outbreaks. By ensuring that all aspects of the ICE seven questions are addressed within the tool, teams can be more confident that all key elements of the outbreak control process have been considered and all potential interventions have been implemented and evaluated. Future development of the tool will include the addition of greater communication functionality in addition to automated analysis of data captured within the tool to help better understand the efficacy of interventions and facilitate rapid and effective dissemination, and archiving, of the outbreak management plans to relevant clinical staff and leaders. Whilst this initial pilot testing of the tool during an outbreak has demonstrated the basic usability of the tool, wider robust testing is required to evaluate the value and potential impact of the ICE on outbreak management processes."} +{"text": "Multifamily interventions have shown to reduce the risk of relapse of psychotic symptoms in first episodes of psychosis (FEPs) but are not frequently implemented in specific treatment programs. 
We have developed a pilot study for the implementation of interfamily therapy in FEPs within a Mental Health Centre in the Community of Madrid. The aims were to examine: relapses, duration of re-hospitalizations, and voluntary versus involuntary re-hospitalizations during participation in the MFG compared with the previous year. 21 subjects participated in an MFG over 12 months, 11 participants with a diagnosis of psychosis and 10 family members. Interfamily therapy works as a new model of interactive psychoeducation among families in which they share their own experiences and look for understanding and solutions together. Our clinical experience with an interfamily therapy intervention over 12 months has led us to identify a high degree of participation and acceptance by users and their families, and we have observed a lower relapse rate, with fewer psychiatric admissions of shorter duration among patients during the year of participation in the MFG compared to the year before treatment. MFG has been well accepted by both patients and their families, with a high degree of participation. The results observed in our experience of MFG treatment are consistent with the findings of previous studies that support the reduction of the relapse rate, the number of hospitalizations and their duration when family interventions are incorporated into treatment in recent-onset psychosis, especially in a multi-family group format. No significant relationships."} +{"text": "The Editorial Office has been made aware of potential issues surrounding the scientific validity of this paper and has therefore issued an expression of concern to notify readers whilst the Editorial Office investigates. There appears to be a duplicated image, with some possible signs of alteration, between the first two panels of Figure 4E."} +{"text": "The antenatal diagnosis of an unruptured true aneurysm of the uterine artery is extremely rare and has never been reported, whereas pseudoaneurysms associated with previous trauma or cesarean section have been reported several times. True aneurysms occur when the artery or vessel weakens and bulges, sometimes forming a blood-filled sac. Nearly all cases of pelvic true aneurysms involved ovarian arteries which ruptured during the peripartum period. The case presented here is unique in terms of being an unruptured true aneurysm of the uterine artery with a first diagnosis during pregnancy at 32 weeks of gestation and the spontaneous development of thrombosis in the aneurysm in late pregnancy, documented at 37 weeks of gestation. The diagnosis of a true aneurysm of the uterine artery was based on: (1) a demonstration of the cystic mass located in proximity to the lower segment of the uterus with ultrasound characteristics of arterial flow in the mass, and (2) the occurrence in a woman who had no history of trauma or surgery in the pelvis. The finding during cesarean section confirmed the prenatal sonographic finding. The pregnancy ended with a successful outcome."} +{"text": "The authors discuss functional characterization of Mousterian tools on the basis of their use-wear and residue analysis of five lithic tools from Mezmaiskaya cave and Saradj-Chuko grotto in the North Caucasus. The results represent the first comprehensive use-wear and residue analysis carried out on Mousterian stone artefacts in the Caucasus. 
This study unequivocally confirms the use of bitumen for hafting stone tools in two different Middle Paleolithic cultural contexts defined in the Caucasus, Eastern Micoquian and Zagros Mousterian. Homo . Our understanding of the use of composite tools by the Middle Palaeolithic (MP) Neanderthals in Eurasia relies on evidence of hafting and adhesives4. Most ideas on the development of Palaeolithic composite tool technologies are based on microscopic use-wear, including diagnostic impact fractures (DIFs) and other traces of use15, and diagnostic characteristics of hafting traces10 , and the morphology of tools . However, the exact hafting significance of use-wear traces and morphological features is not always clear16, and this evidence alone are not an exhausted indication of the presence of hafting technology. Also, some studies indicate that the interpretative potential of some impact fractures proposed as having diagnostic value for the identification of projectiles is still unclear18.The development of composite technology using adhesive materials is often seen as a hallmark of cognitive sophistication that played an important role in the social and technological development of the genus 20, two lumps of birch tar that were probably attached to a bifacial knife from K\u00f6nigsaue (Germany)21, and nine tools and flakes with pine resin, and one scraper with pine resin and beeswax from Fossellone and Sant\u2019Agostino caves 22 in Europe, as well as 14 tools and flakes with bitumen from the sites of Umm El Tlel and Hummal (Syria) in the Levant26. These studies document that adhesive technology was used in both Europe and south-west Asia by varied Neandertal populations and the MP production of adhesives was complex. Neandertals mixed pine resin with beeswax22 and bitumen with quartz and gypsum24, and distilled tar from birch bark20.The lithic residue analysis provides direct information that the lithic artefacts were hafted, as well as allows precise identifying the adhesive materials involved in the manufacture of these composite tools. The currently known unambiguous evidence of the securely dated, and chemically and spectrometrically identified MP hafting adhesives includes three flakes with birch tar from Campitello Quarry and Zandmotor (Netherlands)20], the level of adhesive technology applied for manufacturing composite tools among different Neanderthal groups is problematic given the lack of relevant data from the majority of MP regional contexts. This state of research demonstrates the need for detailed modern studies about the role of adhesives in hafting and the level of hafting technology in various MP regions.Despite the MP adhesive evidence is being increasingly documented in Europe and Asia established the methodological framework for examination of archaeological residues in our study. The absorption bands corresponding to specific bitumen bands were defined after75. Band assignments for Raman peaks were made based on36.The good preservation of the analyzed lithic artefacts allowed us to identify use-wear traces and residues on all archaeological samples. Various articles focused on the characterization and identification of common materials found in various applications"} +{"text": "The direct and indirect striatal pathways form a cornerstone of the circuits of the basal ganglia. 
Dopamine has opponent affects on the function of these pathways due to the segregation of the D1- and D2-dopamine receptors in the spiny projection neurons giving rise to the direct and indirect pathways. An historical perspective is provided on the discovery of dopamine receptor segregation leading to models of how the direct and indirect affect motor behavior. Prevailing models of basal ganglia function are based on two main pathways originating from separate populations of the spiny projection neurons (SPNs) in the striatum that either directly or indirectly connect to output circuits that affect motor behavior. These direct and indirect striatal pathways were proposed to differentially promote and suppress actions in hyperkinetic and hypokinetic clinical disorders . The disFrom a historical perspective research on the basal ganglia pioneered neuroanatomical approaches for understanding the organization of brain circuits underlying brain disorders. Seminal work by Arvid Carlsson and Oleh Hornykiewicz in the late 1950s-early 1960s established that Parkinson\u2019s disease results from the degeneration of dopamine systems in the basal ganglia. This led to the development of a precursor of dopamine, L-DOPA as an effective treatment that reversed the bradykinesia symptomatic of the disease .in situ hybridization histochemistry (ISHH) to localize genes expressed in neurons, retrograde and anterograde axonal tracing methods and intracellular labeling of the axonal projections of individual striatal neurons. These techniques were used to describe the direct and indirect striatal pathways that form the conceptual backbone of the functional organization of the basal ganglia and indirect (iSPN) output pathways was provided by a technique developed by in situ hybridization labeling of mRNA transcripts (ISHH) added a powerful technique that advanced characterization of striatal neurons giving rise to the direct and indirect pathways. Using this approach, ISHH localization of neurons expressing substance P or enkephalin mRNAs combined with fluorescent retrograde axonal tracers confirmed that dSPNs express substance P mRNA whereas iSPNs express enkephalin mRNA that activate transcription factors to regulate gene expression of IEGs and others to modify neuronal plasticity and physiology . In addi2+/calmodulin signaling systems that activate mitogen-activated protein kinase kinase (MEK) responsible for phosphorylation of ERK1/2 . In thesas-GRF1) , 2008, sas-GRF1) , 2021. Bas-GRF1) . Cocaineas-GRF1) . These sas-GRF1) . Direct + influx or Drd1 + influx and indi+ influx . ActivatA major area of research is to determine the functional cellular effects of dopamine mediated activation of signal transduction pathways . An examChanges in dendritic morphology and physiologic properties of dSPNs and iSPNs were studied following striatal dopamine denervation with treatments with L-DOPA that produced dyskinesias by Surmeier, Cenci and their colleagues . DopaminOriginal concepts of the role of the basal ganglia in the generation of voluntary movements focused on activity from the cerebral cortex through the direct striatal pathway inhibiting the output of the GPi and SNr to disinhibit thalamic and brainstem circuits that generate movements . 
The ide2+ influx as a measure of neuronal activity of the GPe to the output nuclei of the basal ganglia are diagrammed in To further study the opponent effects of activity in the direct and indirect pathway during normal behavior Chan and his colleagues used an These studies expand the conceptual model that the direct and indirect pathways function to promote selected actions and suppress alternatives . Rather The effect of the direct and indirect basal ganglia pathways on behavior is determined by the organization of cortical input to the striatum. In the prevailing model of the organization of the basal ganglia, parallel circuits originate from functionally defined cortical regions that project through striatal projections to the output targets in the thalamus, which project back to the cortex . SubregiActivity in the direct and indirect pathways is initiated by excitatory inputs from the cortex and thalamus. A critical question is whether individual cortical and thalamic neurons that provide inputs selectively target dSPNs and iSPNs or provide inputs to both. To address this question modified rabies virus was used to label cortical neurons that provide inputs to dSPNs or iSPNs that express Cre . ResultsA core feature of the basal ganglia are the opponent functions of parallel pathways that process cortical input through the direct and indirect pathways to affect action selection. These pathways originate from dSPNs and iSPNs in the striatum. Opponent effects of dopamine on these neurons is a consequence of its stimulation of dSPNs through the Drd1 receptor and inhibition of iSPNs through the Drd2 receptor . DopaminThe author confirms being the sole contributor of this work and has approved it for publication."} +{"text": "Different hip pathologies can cause geometric variation of the acetabulum and femoral head. These variations have been considered as an underlying mechanism that affects the tribology of the natural hip joint and changes the stress distribution on the articular surface, potentially leading to joint degradation. To improve understanding of the damage mechanisms and abnormal mechanics of the hip joint, a reliable in-vitro methodology that represents the in vivo mechanical environment is needed where the position of the joint, the congruency of the bones and the loading and motion conditions are clinically relevant and can be modified in a controlled environment. An in vitro simulation methodology was developed and used to assess the effect of loading on a natural hip joint. Porcine hips were dissected and mounted in a single station hip simulator and tested under different loading scenarios. The loading and motion cycle consisted of a simplified gait cycle and three peak axial loading conditions were assessed . Joints were lubricated with Ringer\u2019s solution and tests were conducted for 4 hours. Photographs were taken and compared to characterise cartilage surface and labral tissue pre, during and post simulation. The results showed no evidence of damage to samples tested under normal loading conditions, whereas the samples tested under overload and overload plus conditions exhibited different severities of tears and detachment of the labrum at the antero-superior region. The location and severity of damage was consistent for samples tested under the same conditions; supporting the use of this methodology to investigate further effects of altered loading and motion on natural tissue. 
Hip osteoarthritis causes debilitating pain and loss of function and affects over 300 million people worldwide . The cauDespite evidence from clinical and in silico studies , 5 demonWhile the in vitro assessment of the mechanical environment on the natural hip is limited, total hip replacements have been widely studied using experimental hip simulators. These in vitro simulators are able to apply clinically relevant loading and motion cycles to hip replacements in a lubricant that mimics synovial fluid. The effects of parameters such as the material the hip replacement is made from, its diameter, the relative position of the components; as well as load and motion cycles applied have been assessed \u20138.In order to develop further experimental simulation of the natural hip, a reliable in-vitro methodology where the position of the sample, the congruency of the bones and the demand of load and motion are clinically relevant is needed. Such a methodology can develop our understanding of abnormal mechanics of the joint and possible damage mechanisms. The aim of this study was to develop and validate a methodology for experimental simulations of the natural hip joints using an electro-mechanical simulator and use the developed methodology to demonstrate the effects of altering the mechanical environment (increased loading). We hypothesised that the developed method would enable us to differentiate between loading environments thought observation of damage to the natural hip cartilage and labrum.Porcine hip joints were selected for use because of their availability and lack of variation compared to cadaveric human tissue. The materials section describes the preparation of the porcine samples for testing; and the methods section describes the experimental simulator that was used, the loading and motion protocols adopted for testing and the post-test analysis.Twelve right hind legs from 6-month-old pigs were obtained within 24-48hrs of slaughter. Animals were slaughtered at the local abattoir and taken from the food chain.All soft tissue surrounding the hip joint was removed by dissection. Prior to disarticulating the joint , the neutral position between the femur and acetabulum was assumed to be when the centre of the transverse acetabular ligament (TAL) aligned with the inferior apex of the growth plate in the head in the direction of the lesser trochanter. At this position, the capsule also exhibited a neutral visible tension, indicating the resting position of the joint . The resOn the pelvic side, bone was resected so approximately 5 mm of peri-acetabular bone around the acetabulum was maintained and the laburm remained intact. The posterior periacetabular bone was resected parallel to the acetabular rim in order to avoid any interference with the acetabular fossae soft tissue that could have led to cement flowing into the articular surface .The diameters of the head and acetabulum were recorded. Head diameters were measured using circular templates at the centre of the head in a parallel plane to the epiphyseal line . PositioThe method of sample mounting was developed to ensure three main objectives: 1) the COR of the hip was coincident with the COR of the simulator, 2) the neutral position of the joint and the relative position between bones were anatomically correct, and 3) there was no impingement of the fixtures within the RoM used in the experimental testing. 
An overview of the fixtures used are shown in In order to mount the acetabulae in an anatomically correct position, the acetabulum was placed with the TAL positioned towards the raised portion of the acetabular cup holder . The aceThe (inverted) femur was placed in anatomical alignment in the acetabulum and fixed using a custom potting fixture . The samAn electro-mechanical Single Station Hip Simulator (SSHS); Simsol Simulation Solutions Ltd., Stockport, UK, was used to perform the in-vitro experimental simulations . The SSHThe porcine hip joints, mounted in holders as previously described Figs and 6, wFour porcine hips were tested under conditions representative of normal peak load (NOR). The load and motion profile selected was based on the ISO standard for total hip replacement (THR) wear testing (ISO 14242). The axial load was scaled due to the reduced load expected in a quadruped and the average mass of the animals \u201315. The The effects of mechanical loading on the porcine hip was considered in further testing with increased loading, as follows: the overload condition used the load profile described above with an increase in peak load to 1,130 N corresponding to an approximate increase of 25% of the NOR loading and the overload plus condition with peak load of 1340 N which represented an approximate increase of 50%. In all loading profiles the loadThe experimental simulator applied simultaneous motions representative of human gait to the femoral head, as follows: \u00b1 20 Flexion-Extension (FE), 8.8 to -4.8 degrees of Abduction-Adduction (AA) and 2 to -10 degrees of Internal-External Rotation (IER), . The medPhotogrammetry was used to characterise and catalogue the tissue condition and record the location/extent of damage labrum and/or articular cartilage pre, during , and post simulation for all the samples. Multiple photographs were taken (Canon 750D DSLR with wide lens EF-S 28-70mm F3.5\u20135.6 and EF 100mm f 2.8 USM Macro) of both articular surfaces and the labrum following a standard protocol to ensure photos were captured in a consistent manner .Output simulator data was monitored to ensure the input and output profiles were consistent. Input profiles, data generated in terms of input and output profiles and photographs for each test are openly available through the University of Leeds data repository .A methodology was developed that enabled the in vitro simulation of porcine hip with a standard walking cycle. It was possible to set up the simulation to ensure the COR of the femoral head and acetabular were consistently coincident with the COR of the simulator. During method development two technical failures (results excluded) occurred, one whereby there was cement ingress between the articular surfaces, and another where the femoral neck fractured.Loading conditions were derived for \u201cnormal walking\u201d (NOR) and two \u201coverload\u201d (OL and OL+) scenarios. Following testing under the normal (NOR) conditions (n = 4) for 14400 cycles there was no detectable damage to the articular surface or acetabular labrum. Representative images of the samples pre- and post- test are shown in Following testing under the overload (OL) conditions, damage to the cartilage labral junction was observed. Tears and detachment of the labrum at the antero-superior region were observed (n = 4) at the end of the simulation (14400 cycles). 
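Before turning to the damage observations, a minimal sketch of how the loading and motion demand described above could be parameterised is given below. The waveform shapes are illustrative sinusoids rather than the actual ISO 14242-derived profiles programmed into the simulator; the 1 Hz gait frequency is inferred from 14,400 cycles over 4 hours, and the normal (NOR) peak and swing-phase baseline are inferred or assumed values rather than figures quoted in the text. Only the OL (1,130 N) and OL+ (1,340 N) peaks and the FE/AA/IER ranges are taken directly from the study.

```python
import numpy as np

# Illustrative sketch only: sinusoidal stand-ins for the simplified gait cycle.
CYCLE_HZ = 1.0         # inferred: 14,400 cycles in 4 h is roughly 1 cycle per second
SWING_LOAD_N = 100.0   # assumed swing-phase baseline load (not given in the text)

PEAK_LOADS_N = {
    "NOR": 1130.0 / 1.25,  # ~904 N, inferred from "OL is ~25% above NOR"
    "OL": 1130.0,          # overload peak (from the text)
    "OL+": 1340.0,         # overload-plus peak (from the text)
}

def axial_load(t, condition="NOR"):
    """Twin-peak stance load followed by a low swing-phase load (illustrative shape)."""
    peak = PEAK_LOADS_N[condition]
    phase = (t * CYCLE_HZ) % 1.0
    if phase < 0.6:  # stance phase: two load peaks approximated by |sin| bumps
        return SWING_LOAD_N + (peak - SWING_LOAD_N) * abs(np.sin(2 * np.pi * phase / 0.6))
    return SWING_LOAD_N  # swing phase

def motions_deg(t):
    """FE +/-20 deg, AA 8.8 to -4.8 deg, IER 2 to -10 deg as simple sinusoids."""
    phase = 2 * np.pi * CYCLE_HZ * t
    fe = 20.0 * np.sin(phase)
    aa = 2.0 + 6.8 * np.sin(phase)    # midpoint 2.0, amplitude 6.8 -> range [-4.8, 8.8]
    ier = -4.0 + 6.0 * np.sin(phase)  # midpoint -4.0, amplitude 6.0 -> range [-10, 2]
    return fe, aa, ier

t = np.linspace(0.0, 1.0, 101)  # one gait cycle
print(f"OL peak in this sketch: {max(axial_load(ti, 'OL') for ti in t):.0f} N")
```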
However, no damage was detected when samples were reviewed at 7,200 cycles.In the case of all of the Overload Plus (OL+) scenarios, similar damage to that in the Overload (OL) conditions were observed, however, this was also larger in size and occurred earlier in the testing , some signs of tearing were also observed after 2 hours of testing. Overload Plus conditions also showed cartilage bruising on the articular surface of the acetabulum. The type of damage to the articular surface and labral junction during the experiments at different time points are shown in Figs Different hip pathologies can lead to a change in bony shape cause abnormal loading. This change in loading effects the tribology of the hip joint, and the increased stress distribution on the articular surface leads to mechanical failure and accelerates degeneration. Hip joint simulators have been widely used to perform tribology studies to evaluate the wear of total hip replacement components under different conditions, such as increased loading and RoM from daily activities and also the effects of changing the position of the THR components \u20138. Such In the methodology presented in this study, a protocol was developed whereby the natural hip could be accurately and reproducibly set up in a simulator, accounting for sample variations. The bespoke fixtures ensured the COR was reliably set in all tests and was coincident with the COR of the simulator. The required disarticulation and the removal of the greater trochanter in sample preparation had the potential to compromise the stability of the joint, however, the method provided accurate positioning of the sample into the simulator and a very quick and reliable method for the macro evaluation of the articular surface (due to the disarticulation). The hip remained stable throughout testing.Surface assessment and photogrammetry provided an accurate and fast approach to evaluate the cartilage and labral surfaces during and after simulation. The wide lens photographs provided a clear visualisation of the overall condition of the samples every time they were taken to identify the location of possible tissue damage. The macro photographs allowed observations of the detailed condition of a particular region of interest and to identify the type and severity of damage occurred. Photogrammetry took place in the same location as the experiment, allowing quick and minimal manipulation of the sample.Following testing under the normal load condition, the porcine hip samples showed no evidence of damage. This provides evidence that this is an appropriate methodology and loading regime for in vitro studies on porcine hips and provides a benchmark to compare the effect of different loading conditions. The overload and overload plus conditions did cause increasing amounts of damage. This was is in agreement with previous in vitro testing of natural cartilage samples, that have demonstrated increasing damage with increasing load , 18, 19.The developed methodology has some limitations. The removal of the great trochanter in the porcine samples was necessary to allow an appropriate RoM and avoid impingement between bones. This could affect the stability of the joint . The joint was disarticulated, which was required to fix the sample and to evaluate the articular surfaces through the test. It would have been ideal to maintain the capsule and teres ligament to preserve their function during the motion. 
The number of samples per loading condition was limited (n = 4), however development of the alignment and mounting methodologies used more samples (n = 10), therefore we were able to refine this process and reduce variability from joint position and ensure the damage observed was consequence of the loading and not of malpositioning of the sample. Finally, testing used cadaveric tissue and consideration to the differences this makes compared to living tissue should be considered. This may include the lubrication mechanism of the cartilage and recovery of the tissue from loading, and the contributions from viable cells in the cartilage. The altered lubrication will increase, stresses in the articular surfaces and we postulate damage occurs faster than in living tissue.Future work will further assess the effects of altered loading and motion of the natural hip, the effects of misaligning the joint will also be assessed and can be used to gain insight into surgical processes used to reshape the hip (for example in the case of femoroacetabular impingement). The developed methods will be translated to human cadaveric tissue."} +{"text": "Many genetic disorders are a result of single or multiple genome abnormalities. A possible approach to circumvent genetic disorders is to use gene editing agents to correct these mistakes, but a major challenge remains in the mode of delivery of gene editing agents to different regions of the body. Banskota et al. present the use of engineered DNA-free virus-like particles (eVLPs) to deliver base editors to different organs in a mice model for improved outcomes, highlighting the potential of eVLPS to deliver base editors and as an efficient delivery mechanism, leveraging the advantages of viral and nonviral delivery methods. Abnormalities in the genome can bring about various genetic disorders. A possible approach for the treatment of genetic disorders is by correcting these harmful mutations. Gene editing agents have provided an avenue to manipulate genomic DNA in living organisms with great precision. This approach provides the possibility to edit the genome to alleviate the origin of many genetic diseases. A major challenge to carry out this repair is the delivery mechanism of the editing agents to different regions of the human body. Current approaches commonly use adeno-associated viruses (AAVs) to deliver the DNA that encodes for specific base editors. However, concerns surrounding the use of virus delivery systems includes prolonged expression causing off-site mutations and possible oncogenesis from viral vector integration.1 recently described the design of virus like particles (VLPs), which is in essence a hollow protein shell of the virus devoid of any viral genetic material, allowing for the delivery of ribonucleoproteins (RNPs) in place of DNA, to reduce the chance of off-target editing thanks to the shorter lifetime of RNPs in cells. Banskota et al. utilized these engineered VLPs (eVLPs) to deliver therapeutic RNPs, including base editors and Cas9 nuclease. The new design allows eVLPs to package 16-fold more base editor RNP compared to earlier versions of VLPs. The authors successfully applied the eVLPs for delivery to different sites in the animal host. 
The system was able to reduce serum Pcsk9 levels in the liver and restore visual function in a mouse model of genetic blindness.Banskota et al.The published system signifies the possibility of utilizing eVLPs to deliver gene editing therapies to circumvent genetic disorders at different locations in the human body with a minimized risk of off-target mutations. More importantly, the system can also be applied for the in vivo delivery of other therapeutic proteins and RNPs. The nature of the production of eVLPs means there is a possibility that cellular proteins and RNAs from the originating cell could be packaged alongside the eVLPs. Therefore, in depth protein profile analysis of the eVLPs with an optimized production system would have to be established to reduce immunogenicity issues. The pharmacokinetics of the eVLPs need to be completed to determine the half-life of eVLPS, the cargoes and dosing requirements, and although there are still some aspects of eVLPs that have yet to be clarified, the current developments bode well for the potential of eVLPs for the treatment of genetic disorders in the future."} +{"text": "Here, we report a case of an 80-year-old man with a history of hypertension and recurrent episodes of syncope and chest pain. First, he underwent a coronarography with revascularization of the right coronary artery and stent placement. The exploration of the left main coronary artery (LMCA) was not performed because of catheterization difficulties. A coroscanner was performed for suspicion of an anomalous of the left coronary artery. The coroscanner showed an uncommon case of left main coronary artery (LMCA) arising from the right sinus of Valsalva. The proximal part of the interventricular artery has an intra-arterial course between the aorta and the pulmonary outflow tract, with narrowing of this artery at this level (2.2 mm in diameter). Left main coronary artery (LMCA) or left anterior descending coronary artery (LAD) arising from the right sinus of Valsalva or right coronary artery (RCA) is referred to as an anomalous aortic origin of a coronary artery (AAOCA). The subsequent course is mostly between the aorta and the pulmonary artery on its way to the left ventricle. Different theories have been postulated as a cause of sudden death in these patients. The most accepted theory is the higher incidence of occlusion of the osmium secondary to a more slit-like orifice and occlusion during physical activity due to compression between the major arteries."} +{"text": "Recurrent Unipolar and Bipolar affective disorders are considered paradigms of biological entities in psychiatry. However recent theories have underlined the role that environment plays in the genesis of these disorders in interaction with genetic diatheses.This study examined the relationship between stressful life events (SLE) and recurrent major depressive disorders.Three groups of 50 subjects were assessed: Patients with recurrent major depressive disorder with melancholic features; patients with borderline personality disorder; and healthy controls. Interviews for DSM-V Disorders were used for diagnosis. Beck Depression Inventory, The Israel Psychiatric Research Interview Life Event Scale and the Coddington Events Schedule were used to measure life events and depression and were confirmed with an interview.The proportions of loss-related events in childhood and in the year preceding the first episode was higher in the depressed group than in the control groups during the same time period. 
Proportions of SLE, uncontrolled and independent events were also more common in the depressed patients in the year preceding the first episode.The study\u2019s conclusion is that SLE plays an important role in the onset of depressive disorders. There are specific kinds of SLE that occur in childhood and in the year preceding the first episode. SLE has a less significant role in the maintenance of this illness.No significant relationships."} +{"text": "Workplace social capital is the relational network, created by respectful interactions among members of a workforce, can contribute to the formation of a wholesome psychological work environment in an organization. Nurses' workplace social capital is a derivative of the workplace social capital, formed because of the complex interactions among the nursing and between the other healthcare professionals. Transformational leadership is a style of leadership that addresses the emotional wellbeing of its workforce and inspires shared group ethics, norms, and goals. The philosophy of transformational leadership is grounded on the premise of workforce as human beings with specific needs. Transformational leadership has been confirmed as a strong predictor of nurses' workplace social capital. Meanwhile, it is of an academic and/or healthcare industry operational value to scholarly assess and discern the theoretical influence of transformational leadership on nurses' workplace social capital. In this paper, we have attempted to explore the associations between transformational leadership and nurses' workplace social capital from a theoretical perspective. We have discussed the importance of each sub-dimension of transformational leadership in building up the social capital relational network. Finally, we have proposed a graphic framework of our analysis to facilitate understanding of the associations between the transformational leadership and nurses' workplace social capital, in formation of a healthy work environment which is the foundation for efficiency and productivity of the workforce. Nursing professionals account as the largest segment of the healthcare workforce. Public ranks nursing as the most trusted profession . The terWorkplace social capital is an intangible but a formidable source that can improve effectivity and productivity of a workforce . Nurses'Transformational leadership can strongly influence the development of nurses' workplace social capital . This stThe concept of social capital first was used by Hanifan in 1916 to depict the relationships and interactions within a community . The semThe pivotal role of social capital at workplace has been pronounced and palpated across the globe due to intra and inter connectivity within and among the workforce and the increase in demand for productivity . The colThe concept of \u201csocial capital\u201d was introduced into the field of academic nursing in the mid-1990s . E.A. Rerelational network configured by respectful interactions among nursing professionals and between the other healthcare professionals. These interactions are characterized by the norms of trust, reciprocity, shared understanding, and social cohesion\u201d suggests the direction of the relationships in the workplace social capital network. The current presentation of nurses' social capital displays Type as a standalone pillar. We would like to propose to reposition of placing the three segments of Type under the pillar of Component, within the classification of structural social capital. 
We rest our proposition based on the rational that these three segments describe the structural directions of workplace social capital relational network. We have allocated the rest of this manuscript to logically assert our proposal.The importance of leadership in human interactions and relationships cannot be denied. Since the dawn of civilization various models and philosophies of leadership have been proposed, implemented, and practiced. The most current model, transformational leadership, introduced in the late 1970 s, has been put into practice by many organizations and has become one of the predominant relationally focused leadership styles \u201310. Tran1) Modeling the Way; (2) Inspiring a Shared Vision; (3) Challenging the Process; (4) Enabling Others to Act and (5) Encouraging the Heart , which are the two main attributes of nurses' workplace social capital . The cognitive social capital embedded in this strong and healthy vertical social capital network will be incubated and flourished.The nurse manager is the closest leader to the other nursing professionals at any ward. If we make the analogy of clinic workplace as a ship sailing on the open sea, the nurse manager with transformational leadership style would be the sagacious navigator. She/he sets up the shared standards of excellence (shared understanding) in the ward and clarifies uncertainties so that the other nursing professionals can reach to a higher level of professional destinations. Nurse managers with transformational leadership style can be trusted and create mutual trust among nursing professionals that can lead to the affirmation of a strong vertical relational network is also an important attribute of cognitive social capital. In some healthcare settings, nurse professionals are still considered as the subordinate group and are prone to workplace discrimination can be strengthened with increased enthusiasm for a shared future blueprint is raised simultaneously along with these positive interactions.The second component of transformational leadership is bilities ; shared mination . Nursinglueprint ; team cochallenge the process always seek out opportunities for changing their own status quo and that of others in addressing and overcoming the workplace challenges and obstacles. Challenges in the delivery of healthcare services require cooperation across disciplines in finding practical and plausible answers to the problems on hand and generating valuable cognitive assets e.g., trust, reciprocity, cohesion .The nursing workforce has been faced with tremendous challenges as the results of the rapid changes and responsibilities that have been introduced into the profession since the turn of the 21st century . Transfof others . Transfo on hand . This conabling Others to Act is another important resource for building the workplace social capital. Group accomplishment cannot be fulfilled by the leaders alone. Transformational nurse leaders always strive to build good relationships with people at workplace through tolerance and consideration. The transformational nurse manager seeks opportunities for staff's personal and professional growth and performance extraordinaire between the transformational nurse managers and the other nursing professionals because the authenticity of their leaders gets exuded through sincere actions.The practice of Eorkplace . Nursingorkplace . Transfordinaire , 39. SucEncouraging the Heart of the transformational nurse managers contributes to the formation of nurses' workplace social capital. 
They acknowledge and celebrate group and individual victories in inspiring the team spirit and stru) in the team; furthermore, the transformational nurse manager can apply the accomplishments of the nursing professionals in developing and/or strengthening professional interactions and camaraderie between the nursing professionals and the other healthcare providers .Finally, the practice of m spirit . They heIn summary, transformational leaders can develop and establish intimate and yet professionally appropriate emotional ties, high-level trust, team interaction and shared visions with the members of their teams , 40. TheNursing professionals are the backbone of the healthcare industry and a strong force to improve the cost effectiveness of the delivery of healthcare services. Nursing professionals communicate and cooperate with each other and the other healthcare professionals within the complex social capital network. A well-woven nurses' workplace social capital network is the tenet for a healthy work environment and ensures the efficiency and effectiveness of delivery of healthcare services. Measures to enhance nurses' workplace social capital should be put on the agenda by institutional administrations to assert flourishing of nurses' workplace social capital; furthermore, other workplace measures should be addressed through legislations and implementation of healthcare and public policies to ascertain preparedness of the healthcare industry for future challenges. Empirical research has shown that the relational-oriented transformational leadership provides an effective way to strengthening the workplace social capital for nursing professionals.In our paper, we have proposed possible mechanisms of transformational leadership on nurses' workplace social capital from a theoretical perspective. The detailed analysis triangulating with the empirical results should formulate constructive suggestions for nurse administrators and policymakers when striving to develop workplace social capital. Additionally, our analysis should shed light on clues for researchers in other scientific fields when attempting to explore the interactions between leadership and social capital in their domains. The main strength of our study is its proposed theoretical perspective mechanism of influence of the transformation leadership on nurses' workplace social capital. Our perspective analysis has its limitation. Our theoretical work was designed to address the theoretical deficiency of the interaction between transformational leadership and nurses' workplace social capital. In consequence, the influence of variables such as task structure, or leader position power, or impact of situation type were not evaluated. Meanwhile, our work is a prelude for future theories for each specific nursing context, pillared by transformational leadership and nurses' workplace social capital. Future research should evaluate influence of elements such as leader and subordinates' relationship and/or task structure and their interaction with these two concepts. Despite its limitation, our work shed light on constructive suggestions for nurse administrators and policymakers when striving to develop workplace social capital for nursing professionals.The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author/s.J-MX and AS conceptualized the framework of the manuscript. 
All authors have contributed to the development and approval of the manuscript.This work was supported by Public Welfare Technology Application Research Project of Lishui Science & Technology Bureau, China (Project No. 2022GYX65).The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher."} +{"text": "Neurological and neuropsychiatric disorders pose a significant burden on human health and society throughout the world. In the past 30 years, the absolute numbers of deaths and people with disabilities owing to neurological diseases have risen substantially and adiponectin . Ge and Dai report about the effects of 3-week treadmill exercise on the electrophysiological and channel properties of serotonergic neurons located in the dorsal raphe nucleus. In the paper by Li\u015bkiewicz et al. it is analyzed whether stimulation of autophagy is one of the mediators of ketogenic diet induced neuroprotection in the hippocampus.Two articles report about further molecules that are involved in regulating physical exercise stimulated adult hippocampal neurogenesis, namely the cyclin-dependent kinase inhibitor p16Ink4a (This Research Topic aimed to bring about new findings on the cellular and molecular processes that impact brain function in health and disease. The data presented here mainly focus on the beneficial role of ketonic diet and physical exercise on brain function, including the effects on neuronal circuitries and neuronal plasticity. Taken together, the articles support the view that the balance between nutrition and exercise is of paramount importance for cognitive health and the maintenance of brain structures.The author confirms being the sole contributor of this work and has approved it for publication."} +{"text": "Prevention of the First Episode of Psychosis: What Have we Reached by 2021? The first episode of psychosis is usually preceded by a prodromal period or stage of psychosis, where early signs of symptoms indicating onset of first episode psychosis (FEP) occur. In the last twenty years, enormous progress was made in the tretment of FEP and subsequently schizophrenia, as the focus of treatment of FEP shifted to this prodromal period with the aim of preventing the first episode of psychosis in people at risk.Treatment for the prodromal stage of psychosis is provided within specialized early intervention services, which are somtimes part of the services for the treatment of FEP. Early intervention services, which have been gradually developed in many countries worldwide, usually incorporate multimodal treatment approaches . However, there are still many differences in the treatment of prodromes across early intervention services, even within one country, leaving open the questions on what kind or combinations of treatments really work in the prevention of FEP. The methods of studies in the scientific psychiatric literature do not allow easy translation of scientific data to clinical practice. 
In the presentation, an up-to-date overview of the available treatments offered witin early intervention services for prevention of FEP is given.No significant relationships."} +{"text": "Bioengineered porous bone tissue materials based on additive manufacturing technology have gradually become a research hotspot in bone tissue-related bioengineering. Research on structural design, preparation and processing processes, and performance optimization has been carried out for this material, and further industrial translation and clinical applications have been implemented. However, based on previous studies, there is controversy in the academic community about characterizing the pore structure dimensions of porous materials, with problems in the definition logic and measurement method for specific parameters. In addition, there are significant differences in the specific morphological and functional concepts for the pore structure due to differences in defining the dimensional characterization parameters of the pore structure, leading to some conflicts in perceptions and discussions among researchers. To further clarify the definitions, measurements, and dimensional parameters of porous structures in bioengineered bone materials, this literature review analyzes different dimensional characterization parameters of pore structures of porous materials to provide a theoretical basis for unified definitions and the standardized use of parameters. Consideaterials . Howeveraterials . NeverthZ axis through the individual superposition of 2D point structures in the Z axis. This results in a complete 3D space morphology in the porous structures; thus, most printed porous structures have controlled, regular, and connected 3D space shapes The pore throat size is the maximum cross-sectional diameter of the penetration channel of the cells into the interior portion of the pore structure. It is the maximum internal tangent circle diameter in the 2D plane at the surface of the pore structure. It can be calculated using SEM or Micro-CT or other methods by directly measuring the internal tangent circle diameter or cross-sectional equivalent circle diameter under 2D conditions by selecting a specific plane or cross-section.2)The pore diameter is the maximum space diameter that can allow the cells to grow after entering the interior portion of the pore structure. It is the maximum internal tangential sphere diameter in the 3D spatial environment within the pore structure and can be measured by the rod or wall spacing equivalent to the pore diameter within the pore structure using SEM or Micro-CT or based on the reconstructed 3D model after Micro-CT scanning. The software simulation can be used to obtain its internal pore diameter distribution data. However, it is worth noting that the pore size distribution curve obtained in this way includes the pore throat size.We analyzed the origin and internal logic of different definition methods for the evolution of pore structure dimension characterization in bioengineered porous bone materials and proposed that it is more practical to characterize pore structure dimension by pore throat size and pore size together.1)The pore throat size, which is the size of the channel that characterizes the internal access of cells to the pore structure, directly determines whether the cells can enter the pore structure smoothly. 
It also determines the specific state of the circulation of body fluids between the internal and external shelf tissues of the pore structure, which influences the specific process of osteogenesis within the pore structure in the form of nutrient supply. At the same time, the morphology and size of the pore throat, as a direct morphological structure perceived by cells adhering to the surface of the pore structure, can also affect the specific functions of cell proliferation and differentiation by changing the cell adhesion status. However, this conclusion is limited to the cellular level and has not been confirmed in animal experiments.in vivo, mainly because the cellular experiments lack the complex physiological environment and the mechanical stimuli in vivo. The function of the pore size in vivo is to co-intervene with parameters such as pore shape and rod diameter in the elastic modulus of the material to change the distribution of stress stimuli within the porous scaffold and influence the osteogenic state within the pore structure.2)As a characterization of the size of the space in which cells can grow within the pore structure, the pore size also represents the pore structure dimensions. The function of pore size at the cellular level is similar to that of pore throat size in that it changes the cellular adhesion state through morphology and size, affecting the specific functions of cell proliferation and differentiation. However, the conclusions of such cellular-level studies are not fully consistent with the results of actual porous scaffold implantation At the same time, based on the joint definition of pore throat size and pore size, the specific functions and mechanisms of their respective roles in bioengineered porous bone materials were analyzed. The results showed that both pore throat size and pore size could affect the cell growth state and the final osteogenesis in porous scaffolds in different ways.etc. The root cause of these problems is the lack of in-depth research on the pore structure and the inability to clarify the specific mechanisms of osteogenesis, vascularization, and fibrogenesis within the pore structure of porous materials, which cannot be precisely controlled and regulated. This paper proposed the characterization of pore structure dimension by pore size and pore throat size by reviewing, summarizing, and unifying the specific definition and measurement methods of pore size and pore throat size. On this basis, the possible roles and mechanisms of specific parameters of pore structure dimension that influence osteogenesis within porous materials were proposed to provide further theoretical references for the subsequent in-depth studies of pore structure.Currently, bioengineered porous bone tissue materials and their related products are initially applied in the first line of clinical practice. However, there are still various problems, such as intraoperative sinking, non-fusion of bone graft, pseudo-joint formation, postoperative implant infection,"} +{"text": "Our coarse-grained molecular dynamics (dissipative particle dynamics) simulations confirm the tentative explanation of the authors of the experimental study that the potent antimicrobial activity is a result of the entropically driven release of divalent ions into bulk solution upon the electrostatic binding of \u03b2-peptides to the bacterial membrane. 
The study shows that in solutions containing cations Na+, Ca2+ and Mg2+, and anions Cl\u2212, the divalent cations preferentially concentrate close to the membrane and neutralize the negative charge. Upon the addition of positively charged oligomer chains , the oligomers electrostatically bind to the membrane replacing divalent ions, which are released into bulk solvent. Our simulations indicate that the entropy of small ions (which controls the behavior of synthetic polyelectrolyte solutions) plays an important role in this and also in other similar biologically important systems.This computer study was inspired by the experimental observation of Y. Qian et al. published in ACS Applied Materials and Interfaces, 2018 that the short positively charged Even though the phospholipid bilayers are relatively simple self-assemblies as compared with a number of other functional biological structures, they belong to the most important constructs in nature. They separate living cells from their surroundings and from each other, and simultaneously they intermediate the communication of cells with each other and with the outside world, which is essential for their life cycles and function, including proliferation and apoptosis.Phospholipid membranes are inevitable parts of the cells of all living creatures ranging from bacteria to mammals. Individual membranes slightly differ in their mission, function and structure, and the chemical composition and the architecture of the phospholipid molecules that form them also varies. The membranes are often decorated by specific substituents and oligomeric motifs. They host a number of important receptors and channels for passive and active transport of various compounds which secure the proper function of cells. Due to the importance of membranes for life on the Earth, they have been amply studied by biologists, chemists and physicists and the most important features of their behavior have been described in detail in a number of textbooks .Despite a broad variety of functional membranes, their self-assembly and structure are controlled by the same physicochemical principles. All of them are results of a spontaneous self-assembly of amphiphilic compounds in aqueous media, i.e., results of an enthalpically driven process minimizing the number of unfavorable contacts of hydrophobic groups with water. As explained by Nagarajan and others more than three decades ago ,3,4, entEscherichia coli (E. coli) which triggers the disruption of the inner membrane and kills the bacteria [E. coli and methicillin-resistant Staphylococcus aureus (MRSA) on this surface-modified substrate. They found that the undesirable medical problems were considerably suppressed which they attributed to the generic (unspecific) effect of the entropically favorable release of fairly mobile calcium and magnesium ions in bulk solvent upon the electrostatic binding of cells to the oppositely charged brush of surface-tethered A few years ago, an interesting example of the effect of double charged calcium and magnesium ions on the function of cell membranes of gram-negative bacteria was published by Qian et al. . Inspireeptides, ,34,35,36bacteria but the As the smart polymer-modified surfaces offer the advantageous treatment of the bacteria-generated medical problems, the paper by Qian attracted the interest of a number of research groups. 
In the last two years, it has been cited more then eighty times and it is futile to try to cite here all important studies; hence we mention only two relevant recent papers ,39. UnfoMotivated by the observations by Qian et al. and by their tentative explanation, we performed an extensive simulation study aimed at the reported behavior and at the proof of their working hypothesis. The goal of the study is not the accurate emulation of their experiments and reproduction of their results. They were obtained in biological systems under complex conditions and we cannot preclude that they were partly affected by specific effects. We aim at general principles of the behavior and investigate how much the electrostatic binding of multiply charged oligomers to the slightly charged membranes affects the distribution of monovalent and divalent ions in their immediate vicinity, which, in the case of positive results, supports the explanation proposed by Qian et al. While the effect of monovalent ions on the electrostatic binding of peptides at the phospholipid membrane has already been theoretically studied , to the In this study, we used the dissipative particle dynamics, DPD (a coarse-grained variant of the generic molecular dynamics method), developed by Hoogerbrugge and Koelman and furtd Warren addressei and j are \u201csoft\u201d and do not diverge at short distances i and j, Because the DPD approach employs effective forces acting between larger parts of the system (coarse-grained beads), the pair interactions between beads studied . The sofk is the spring constant and i and T is the thermodynamic temperature) between 2 and 4, together with k and non-zero i and Flexible polymer chains are usually modeled as strings of coarse-grained particles connected by elastic springs emulating the covalent bonds. Most often the harmonic spring potential is used to describe the bond strength and elasticity:on force , the sofotential . In the otential . An inteotential consistsGroot and Warren mapped tIf some components of the studied system are electrically charged, the electrostatic interactions represent the third type of conservative forces that must be taken into account in DPD simulations. Explicit electrostatics was incorporated in the DPD method by Groot et al. and by oq is the charge fraction, e is the electron charge and i and j is then given as:Several types of smearing, e.g., the linearly or exponentially decreasing charge density from the bead center, Gaussian density profile, etc., have been proposed and tested . We use formula :(7)uijelAnalogously to our earlier studies ,56,57, tThe membrane-forming phospholipids are modeled as the amphiphilic double-tail surfactants containing two chains formed by six strongly hydrophobic beads connected via one medium hydrophobic bead to a hydrophilic bead see . The int tension preventiThe analogues of The solvated monovalent positive ions secures the formation of a compact continuous membrane of required physical properties (see the previous part) across the whole box oriented approximately perpendicular to four walls and parallel to the two remaining walls. Nevertheless, the membrane does not usually divide the box into two parts of the same volume. To simplify the evaluation of concentration profiles and other characteristics, the position of the basic simulation box was re-adjusted to receive almost equal volumes at both sides of the membrane.Second, we performed simulations in systems with added small cations and anions. 
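Because the description of the DPD force field above is partly truncated by extraction, a minimal sketch of the standard Groot-Warren pair forces may help: a soft conservative repulsion F_C = a_ij (1 - r/r_c) that stays finite at contact, a dissipative force weighted by (1 - r/r_c)^2, a random force whose amplitude obeys the fluctuation-dissipation relation sigma^2 = 2*gamma*k_B*T, and a harmonic spring between bonded beads of a coarse-grained chain. The parameter values below (a_ij = 25, gamma = 4.5, k, r0) are generic reduced-unit textbook choices for illustration, not the values used in this study.

```python
import numpy as np

# Minimal sketch of the standard Groot-Warren DPD pair forces (reduced units).
rc = 1.0        # cutoff radius
gamma = 4.5     # dissipative (friction) coefficient
kBT = 1.0       # thermal energy
dt = 0.04       # time step
sigma = np.sqrt(2.0 * gamma * kBT)   # fluctuation-dissipation relation

def dpd_pair_force(r_i, r_j, v_i, v_j, a_ij=25.0, rng=np.random.default_rng()):
    """Conservative + dissipative + random DPD force acting on bead i from bead j."""
    r_vec = r_i - r_j
    r = np.linalg.norm(r_vec)
    if r >= rc or r == 0.0:
        return np.zeros(3)
    e = r_vec / r                      # unit vector from j to i
    w = 1.0 - r / rc                   # linear weight function
    f_c = a_ij * w * e                 # soft repulsion, finite even as r -> 0
    f_d = -gamma * w**2 * np.dot(e, v_i - v_j) * e
    f_r = sigma * w * rng.standard_normal() / np.sqrt(dt) * e
    return f_c + f_d + f_r

def bond_force(r_i, r_j, k=100.0, r0=0.5):
    """Harmonic spring between bonded beads: F = -k (r - r0) along the bond."""
    r_vec = r_i - r_j
    r = np.linalg.norm(r_vec)
    return -k * (r - r0) * r_vec / r
```

In a full simulation these pair forces are summed over all neighbours within r_c and integrated with a velocity-Verlet-type scheme; the smeared-charge electrostatics mentioned in the text would enter as an additional conservative contribution.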
We observed that the addition of ions to the membrane formed in the first step yields a virtually identical equilibrium membrane to that obtained by the simulation starting with a random mixture of all components. The former variant shortens the necessary re-equilibration period and improves the quality of simulation data. Therefore, we repeatedly added increasing amounts of uni-univalent and di-univalent salts at random at both sides of the equilibrated membrane and performed new simulation runs at several ion concentrations. We always added the same amounts of ions at each side of the membrane. Even though the periodic boundary conditions secure the motion of ions within the whole simulation box, the addition of equal amounts of ions at each side of the membrane non-negligibly accelerates the equilibration.Third, we performed the simulations with added positively charged oligomers (either 16 or 32). Based on the results of simulations of the membrane with added ions, we added the positively charged shorts chains into the box containing the equilibrium membrane and ions .The simulations of the ensemble of 1708 double-tailed lipids in the In r) is very low. In the next part of the study, we gradually added the mixtures containing (i) 50 NaThe next part addresses the competitive binding of univalent and divalent ions to the negatively charged membrane. The most important part of the study concerns the effect of positively charged soluble oligomer chains negatively charged phospholipid bilayer membranes in solutions containing the mixtures of monovalent and divalent ions not only because they are double charged and their interaction at short distances is strong but because their mass is higher and their mobility and consequently their translational entropy are lower and hence their localization close to the membrane does not decrease the overall entropy of the system as much as that of the mobile monovalent ions.The ions compensating the membrane charge can by efficiently liberated and replaced by multiply charged oligomeric chains or by short polyelectrolytes, e.g., by short positively charged by Qian . The binThe charged oligomer chains sticking firmly to the membrane cause a partial depletion of the concentration of inorganic ions in the water-membrane interfacial region, but they do not prevent the approach of some ions to the membrane surface at their elevated concentrations as witnessed by peaks in ion concentration close to the membrane surface, which partially diminish, but do not disappear. Our simulation results qualitatively agree with all experimental observations published by Qian et al. and corroborate their explanatory hypothesis. They contribute to studies aimed at the biological impact of bacteria ,34,35,36From the viewpoint of the DPD simulation method, the results generally confirm the numeric values of interaction parameters proposed by Shillcock et al. for the"} +{"text": "The diagnosis of intellectual disability (ID) alone does not predict the level of required care, functional outcomes or limitations in social and occupational participation. The International Classification of Functioning, Disability and Health (ICF) is a taxonomy of health and health-related domains. It provides a common language and framework for describing the level of functioning of a person within their unique environment. 
Furthermore, it helps to describe health problems of a person in line with the International Classification of Diseases (ICD-10).Introducing the ICF taxonomy exemplary in the care of individuals with ID and mental health problems in Germany.Comparison of the ICF\u2019s comprehensive multidisciplinary approach to assess an individual\u2019s level of functioning and care in relation to assessing the needs of persons with ID based on clinical experience.The ICF provides a standardised assessment instrument to determine individual functional needs for the care, rehabilitation and societal integration of individuals with disabilities, which is a statutory requirement in many European countries.Using the ICF for the assessment and management of patients with chronic health conditions, mental disorders and ID can help to accurately define individual therapeutic goals and monitor functional outcomes. A comprehensive narrative description of the patient\u2019s functional status and clinical needs is comparatively time-consuming, requires greater effort by the assessing clinician and carries a higher risk of omission of pertinent functional domains; furthermore, a single ICF item confers little additional benefit to the patient in terms of the treatment or care they subsequently receive.No significant relationships."} +{"text": "MBRP has become an established treatment in the field of addiction, but implementing the program in an outpatient setting remains a challenge.We investigated the feasibility of MBRP in an naturalistic outpatient setting and the effect of mindfulness on underlying factors of addiction.All patients treated between 2015 and 2019 in the MBRP program at Brugmann University Hospital and Addiction Center Enaden were eligible to participate. Patients were asked to fill in a questionnaire about underlying factors of SUD in the domains of pleasure, emotion regulation, stress, relationship with others and relationship with oneself as well as the effect of the completed training on these factors.Of the 147(74 F) recruited patients; 32 patients completed the questionnaire. The study population differed in terms of substance as well in their aims towards the substance . Participation of at least 4 of the 8 sessions was 63 % and overall satisfaction of patients was high. We found a positive effect of mindfulness on all of the underlying factors for SUD. Underlying factors of SUD, as well as the effect of mindfulness on these factors showed strong individual variation. The most frequently observed negative effect was acute craving; 1 patient became acute suicidal.MBRP is feasible and has a clinical relevant impact on underlying factors of SUD. Negative effects were also observed and should be carefully monitored.No significant relationships."} +{"text": "This work presents an overview of the latest results and new data on the optical response from spherical CdSe nanocrystals (NCs) obtained using surface-enhanced Raman scattering (SERS) and tip-enhanced Raman scattering (TERS). SERS is based on the enhancement of the phonon response from nanoobjects such as molecules or inorganic nanostructures placed on metal nanostructured substrates with a localized surface plasmon resonance (LSPR). A drastic SERS enhancement for optical phonons in semiconductor nanostructures can be achieved by a proper choice of the plasmonic substrate, for which the LSPR energy coincides with the laser excitation energy. 
The resonant enhancement of the optical response makes it possible to detect mono- and submonolayer coatings of CdSe NCs. The combination of Raman scattering with atomic force microscopy (AFM) using a metallized probe represents the basis of TERS from semiconductor nanostructures and makes it possible to investigate their phonon properties with nanoscale spatial resolution. Gap-mode TERS provides further enhancement of Raman scattering by optical phonon modes of CdSe NCs with nanometer spatial resolution due to the highly localized electric field in the gap between the metal AFM tip and a plasmonic substrate and opens new pathways for the optical characterization of single semiconductor nanostructures and for revealing details of their phonon spectrum at the nanometer scale. Th. Th\u22121 ismers TERS tip. Gap-mode TERS images of CdSe NCs on the Au nanodisk array were obtained for an array period of 150 nm and a size of the Au nanodisks of 100 nm as determined from the AFM image a. The coThe computational electrodynamic model made it possible to tune the nanodisk plasmon energy as close as possible to the energies of the CdSe NC exciton and the laser excitation. This approach allows performing TERS mapping and achieving a resonant enhancement of the gap-mode TERS response.2 monolayer between the NC and the cantilevers. The monolayer coating protects the NCs from possible mechanical shear and allows a TERS imaging of a single CdSe NC to be performed.However, there is a significant challenge for TERS mapping in the semicontact mode. During TERS mapping, the cantilever can randomly capture CdSe NC from the sample surface, making the TERS map blurry. The solution to this problem was the use of an intermediate protective layer of a MoS2 monolayer TEM and SEM experiments. SERS by optical phonons in CdSe NCs on the Au nanodisk arrays was found to be resonantly dependent on the Au nanodisk size and the laser excitation energy. The correlation between size dependence of the LSPR energy of Au nanodisk arrays and SERS enhancement maximum was established. This correlation together with SERS anisotropy observed for Au dimers evidences the electromagnetic enhancement mechanism of the SERS effect. SERS by optical phonons in CdSe NCs deposited on single Au dimers reveals a variation of the phonon peak frequency from one dimer to another that indicates that quasi-single NC phonon spectra are obtained.SERS and TERS are becoming powerful methods of studying optical phonons of CdSe semiconductor NCs. By adjusting the energy of the laser radiation, the plasmon energy, and the electronic transition in the NC, it is possible to achieve resonance conditions for the SERS and TERS experiments. The methods make it possible to study phonons and determine the NC size of single NCs. Using single dimers, we experimentally demonstrate the shift of the LO CdSe mode due to the size selective Raman scattering for NCs with different sizes. Gap-mode TERS imaging enable us to visualize a single CdSe NC located in the vicinity of Au nanocluster and to deliver information on the NC phonon spectrum. Consequently, SERS and TERS methods allow the detection of low concentrations of material on a nanometer scale, down to a single NC."} +{"text": "To the Editor:The authors reported no conflicts of interest.Journal policy requires editors and reviewers to disclose conflicts of interest and to decline handling or reviewing manuscripts for which they may have a conflict of interest. 
The editors and reviewers of this article have no conflicts of interest.The ,With great interest we read the study by Schaefer and colleauges,,In addition to VA-ECMO itself, the type of surgery performed is also important when discussing the cause of stroke; left-side valve surgery using prosthesis or patients with reduced ejection fraction would have a greater chance of developing an intracardiac embolic source."} +{"text": "Following the publication of this article , similarSimilarities included the following figures which appear to partially overlap, despite being published in different articles and representing different conditions:Lanes 2\u20133 of the GAPDH panel in Fig 3D of , and lanPLOS ONE did not receive responses to the queries regarding these concerns by the end of the original deadline and extension.Although the corresponding author initially replied to acknowledge receipt of our message, PLOS ONE Editors retract this article [The unresolved concerns call into question the validity and provenance of the reported results and the adherence of this article to the PLOS Authorship policy. Therefore, the article .All authors did not comment on the retraction decision, did not respond directly or could not be reached."} +{"text": "People with borderline personality disorder are at higher risk of repeating suicidal behavior. At the same time, numerous publications have demonstrated the relationship between cocaine dependence and suicide attempts of repetition.Review the relationship between cocaine addiction, borderline personality disorder and repeated suicide attempts. Present through a clinical case the effectiveness of a comprehensive and multidisciplinary therapeutic plan with different mental health devices.To review the psychopathological evolution of a patient with a diagnosis of borderline personality disorder; dependence to the cocaine; Harmful alcohol consumption and suicidal behavior from the beginning of follow-up in mental health services to the present. Review the existing scientific evidence on the relationship between cocaine addiction and repeated suicide attempts. Analyze the eficacy of the different treatments available.This is a longitudinal and retrospective study of the psychiatric history and evolution of a clinical case since the implementation of an individualized therapeutic program and the favorable results obtained. Intensive outpatient follow-up was carried out for high suicide risk and hospitalization in a psychiatric hospitalization unit, day care centre and therapeutic community.At present, the patient remains in abstinence with remission of suicidal ideation. The literature has shown the usefulness of intensive mental health follow-up programs to achieve remission of suicidal ideation and maintain abstinence from illegal substances.No significant relationships."} +{"text": "Remote sensing image fusion is a fundamental issue in the field of remote sensing. In this paper, we propose a remote sensing image fusion method based on optimal scale morphological convolutional neural networks (CNN) using the principle of entropy from information theory. We use an attentional CNN to fuse the optimal cartoon and texture components of the original images to obtain a high-resolution multispectral image. We obtain the cartoon and texture components using sparse decomposition-morphological component analysis (MCA) with an optimal threshold value determined by calculating the information entropy of the fused image. 
In the sparse decomposition process, the local discrete cosine transform dictionary and the curvelet transform dictionary compose the MCA dictionary. We sparsely decompose the original remote sensing images into a texture component and a cartoon component at an optimal scale using the information entropy to control the dictionary parameter. Experimental results show that the remote sensing image fusion method proposed in this paper can effectively retain the information of the original image, improve the spatial resolution and spectral fidelity, and provide a new idea for image fusion from the perspective of multi-morphological deep learning. Due to the limitations of satellite technology, most remote sensing images can only be panchromatic (PAN) images and low-resolution multispectral (LRMS) images of the same area. The goal of remote sensing image fusion is to fuse the spectral information of LRMS images and the spatial information of PAN images to generate a remote sensing image with both high spatial resolution and high spectral resolution . ClassicThe popular convolutional neural networks (CNN) method can learn the correlation between PAN images and LRMS images because of its excellent nonlinear expression and achieves better fusion results than traditional remote sensing image fusion methods ,6. ThereMorphological component analysis (MCA), proposed by J. Starck et al. ,11, has Therefore, in this paper, we propose a method combining the sparse decomposition-multi-scale MCA method and CNN for remote sensing image fusion, with optimal scale determined by information entropy. We use MCA to sparsely decompose the original images and acquire the texture components and cartoon components at multi-scale. Considering the variability of the different components of the image, we use information entropy to calculate the threshold of the decomposition parameters. This facilitates the extraction of the different components at the optimal scale and effectively acquires more detail from the image. We use the spectral and spatial information of the LRMS and PAN images, respectively, to input the cartoon component of the LRMS remote sensing image and the texture component of the PAN image into an attentional CNN for fusion. The remainder of this paper is organized as follows. We represent an image as Assuming that the remote sensing image contains only the texture component Similarly, for a remote sensing image According to the above model, for any remote sensing image To better retain fused image information, we analysis the morphological components of the PAN image with a single channel and the MS image with three channels, obtaining the texture components of the PAN image and cartoon components of the MS image. Equations (4) and (5) show the sparse decomposition of the PAN image and MS image, respectively:The existing MCA method uses a single scale , while hDifferent MCA decomposition parameters represent different scales, and different scales also represent different resolutions. As shown in As shown in Information entropy reflects the amount of information contained in an image at a certain position ,17. The In our previous work , we assution (6) . The relThe ideal fusion goal of image Based on the above analysis, assuming that The information entropy Selective visual attention enables humans to quickly locate salient objects in complex visual scenes, inspiring the development of algorithms based on human attention mechanisms . 
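As a minimal sketch of the entropy-driven scale selection described above: the Shannon entropy of the fused image is computed from its grey-level histogram, and the MCA decomposition parameter that maximises it is retained. The functions `mca_decompose` and `fuse` are placeholders standing in for the paper's LDCT/curvelet MCA and the attentional CNN, which are not implemented here; only the selection loop is sketched.

```python
import numpy as np

def image_entropy(img, levels=256):
    """Shannon entropy (bits) of an 8-bit image, estimated from its grey-level histogram."""
    hist, _ = np.histogram(img.ravel(), bins=levels, range=(0, levels))
    p = hist.astype(float) / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def select_optimal_scale(pan, ms, candidate_scales, mca_decompose, fuse):
    """Pick the MCA decomposition parameter whose fused result has maximal entropy.

    `mca_decompose(img, scale)` returns (cartoon, texture); `fuse(cartoon_ms,
    texture_pan)` returns the fused image. Both are placeholders for the
    paper's dictionaries and network.
    """
    best_scale, best_h = None, -np.inf
    for s in candidate_scales:
        _, texture_pan = mca_decompose(pan, s)   # keep the PAN texture component
        cartoon_ms, _ = mca_decompose(ms, s)     # keep the MS cartoon component
        h = image_entropy(fuse(cartoon_ms, texture_pan))
        if h > best_h:
            best_scale, best_h = s, h
    return best_scale, best_h
```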
In the The proposed method is mainly composed of three parts, including MCA, feature extraction and feature fusion respectively. Firstly, the PAN image and the MS image are decomposed by MCA, the multi-scale texture components of PAN image and the multi-scale cartoon components of LRMS image are obtained. The spectral and spatial information are preserved while the redundancy and noise are removed. As shown in A The joint LDCT dictionary The threshold values of the parameters are calculated using the information entropy of the fused image from Step 3 to select the best extraction scale for the cartoon component The optimal-scale cartoon component In the fusion network, Let In the fusion network, the fusion results of the previous layer are referred to in the convolution operation of each layer. For each pixel in the final fused image, we can choose to increase the size of its convolution kernel or use a deeper network model to expand the area of its corresponding pixel in the original image to improve the fusion ability of the network model.n. To solve the fusion function We use a regression model to train the fusion function: Adam\u2019s algorithm , an adapTo assess the effectiveness of the proposed method, we conducted experiments on four sets of remote sensing images with different topographical areas. The first set of experimental data a,b is obWe use CC reflects the correlation between two images, and a larger correlation parameter indicates more similarity between two images.Among them, RMSE is the difference between the pixel values of the fused image and the reference image. The ideal value of RMSE is 0.h and l represent the resolution of PAN image and MS image respectively. L is the number of bands. The spectral and spatial quality of the fused image is evaluated using the ERGAS algorithm.PSNR reflects the degree of noise and distortion level of the image.The high value of PSNR indicates that the fused image is closer to the reference image and therefor of higher quality.C represents the number of bands. P indicates the PAN image. Q denotes the Q-index.For the third and fourth groups of experiments, we use the following three common objective evaluation indexes to evaluate the experimental results: quality without reference (QNR) index , and twoectively .(18)D\u03bb=The experimental results compare our proposed approach with Brovey , GS 28]28], IHS In this paper, we propose a remote sensing image fusion method using morphological convolutional neural networks with information entropy for optimal scale. Our method extracts the texture and cartoon components of remote sensing images at multi-scale using MCA and selects the best scale using information entropy theory. The spectral and spatial information of the input image is fully utilized while avoiding information loss. In the network design stage, we obtain the final fusion result using an attentional convolutional neural network to retain source image information while enhancing the extraction of the input image details. We provide an experimental analysis on different types of data acquired from different satellites to demonstrate that our method better maintains the spectral information and obtains richer spatial details than existing fusion methods.In future work, we will keep using the idea of MCA combined with deep learning to apply this work not only to MS image and PAN image fusion. 
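As a reference for the evaluation indexes named above, the following sketch computes CC, RMSE, PSNR and ERGAS for a fused image against a reference image. These are the standard textbook definitions rather than the paper's exact equations; the no-reference QNR family also needs the PAN/MS inputs and is omitted, and the resolution-ratio argument (h/l, as defined in the text) is passed in by the caller.

```python
import numpy as np

def rmse(ref, fused):
    """Root mean squared error between reference and fused images."""
    return float(np.sqrt(np.mean((ref.astype(float) - fused.astype(float)) ** 2)))

def psnr(ref, fused, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means less distortion."""
    e = rmse(ref, fused)
    return float("inf") if e == 0 else 20.0 * np.log10(peak / e)

def cc(ref, fused):
    """Correlation coefficient between the flattened reference and fused images."""
    return float(np.corrcoef(ref.astype(float).ravel(), fused.astype(float).ravel())[0, 1])

def ergas(ref, fused, ratio):
    """ERGAS for (H, W, L) multiband arrays; ratio = h/l (PAN/MS resolution ratio)."""
    bands = ref.shape[-1]
    acc = sum((rmse(ref[..., k], fused[..., k]) / ref[..., k].mean()) ** 2
              for k in range(bands))
    return 100.0 * ratio * float(np.sqrt(acc / bands))
```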
Our scheme can be improved by continuing to refine the network structure to apply hyperspectral image and MS image fusion or hyperspectral image and PAN image fusion."} +{"text": "TOSPEAK had been disrupted in the family. TOSPEAK emerged de novo in an ancestor of extant primates across a 540 kb region of the genome with a pre-existing highly conserved long-range laryngeal enhancer for a neighbouring bone morphogenetic protein gene GDF6. We used transgenic mouse modelling to identify two additional GDF6 long-range enhancers within TOSPEAK that regulate GDF6 expression in the wrist. Disruption of TOSPEAK in the affected family blocked the transcription of TOSPEAK across the 3 GDF6 enhancers in association with a reduction in GDF6 expression and retrograde development of the larynx and wrist. Furthermore, we describe how TOSPEAK developed a human-specific promoter through the expansion of a penta-nucleotide direct repeat that first emerged de novo in the promoter of TOSPEAK in gibbon. This repeat subsequently expanded incrementally in higher hominids to form an overlapping series of Sp1/KLF transcription factor consensus binding sites in human that correlated with incremental increases in the promoter strength of TOSPEAK with human having the strongest promoter. Our research indicates a dual evolutionary role for the incremental increases in TOSPEAK transcriptional interference of GDF6 enhancers in the incremental evolutionary development of the wrist and larynx in hominids and the human capacity to speak and their retrogression with the reduction of TOSPEAK transcription in the affected family.The human capacity to speak is fundamental to our advanced intellectual, technological and social development. Yet so very little is known regarding the evolutionary genetics of speech or its relationship with the broader aspects of evolutionary development in primates. In this study, we describe a large family with evolutionary retrograde development of the larynx and wrist. The family presented with severe speech impairment and incremental retrograde elongations of the pisiform in the wrist that limited wrist rotation from 180\u00b0 to 90\u00b0 as in primitive primates. To our surprise, we found that a previously unknown primate-specific gene The breath-taking utility of the human vocal apparatus allows for thoughts and information encoded by language to be communicated through speech. The larynx and tongue are the primary organs of speech. In early infancy the human larynx and tongue have a more superior position as in non-hominoid mammals, from where they gradually descend during postnatal development starting from ~3\u20134 months of age ,3,4,5,6 Concurrent with the phylogenetic descent and reconfiguration of the larynx in hominoids was another pivotal evolutionary development of the skeleton in hominoids that involved reconfiguration of the radio-ulnar joint of the wrist dramatically increasing the angle of wrist rotation enabling brachiation ,8. The eTOSPEAK which evolved a human-specific promoter. We report a speech impaired family with disruption of TOSPEAK associated with the retrograde descent, growth, morphology and flexibility of the larynx concordant with retrograde increases in the length of the pisiform that severely limit wrist rotation. 
Affected family members displayed an amazing incremental series of elongations of the pisiform that represent the inverse (retrogression) of those incremental reductions in the length of the pisiform that had occurred during the progressive evolution of the wrist in hominoids and compared with either 1 ug of the parent pGL3-basic vector or with the Firefly luciferase reporter gene. Expression constructs were cotransfected with 10 ng of pHRG-B Renilla luciferase control plasmid to normalise transfection efficiency as described elsewhere . N. NTOSPEAprimates and no Gd family and GDF6 reduced A 12,13],13TOSPEATOSPEAK that blocked its transcription across long-range enhancers for the neighbouring GDF6 bone morphogenetic protein gene which regulate the transcription of GDF6 in the developing larynx and wrist .TOSPEAK was found to be a tale of two parts with one part highly variable (discussed above) and the other part perfectly conserved in all of the primates tested. With regard to the latter, that part of the proximal core promoter adjoining the transcription start site of TOSPEAK (+2 to \u221218) was perfectly conserved between all of the primates tested and as such was the only fully conserved 20 nucleotide string found in the entire promoter and mature transcript(s) of TOSPEAK combined. The exclusive conservation of only this 20-nucleotide string indicated evolutionary pressure to maintain TOSPEAK transcription. Indeed, we found TOSPEAK transcription was conserved in all primates tested. In contrast, the transcripts of TOSPEAK were not conserved between primates having been derived from a primate-specific long non-coding transcription unit (lncRNA gene) with weak conservation of exon structure and sequence. Moreover, TOSPEAK transcripts were all short and enriched with stop codons with no evidence of a protein coding domain. Therefore, in contrast to the perfect conservation of the transcription start site and proximal core promoter of TOSPEAK, and the conservation of TOSPEAK transcription in primates, the short and poorly conserved transcripts of TOSPEAK were not conserved in any meaningful way and were therefore judged highly unlikely to have an important role in the evolution of the primates. This conservation of TOSPEAK transcription but not the sequence or structure of the TOSPEAK transcripts ultimately led us to question regarding the function of TOSPEAK transcription in primates and how this might be related to the reduction in GDF6 expression in the speech impaired family?The de novo emergence and evolution of the core promoter of BMP genes like GDF6 during development are known to be regulated by tissue-specific enhancers and that these patterns of expression correlate with BMP regulation of skeletal morphogenesis .,29.28,29Increased flexibility of the hyoid thyroid ligament permitted phylogenetic descent of the larynx thus removing hyoid cartilage constraints on the flexibility and utility of the larynx and the tongue in hominids .Increased flexibility of the larynx and tongue increased their utility and the capacity to speak.TOSPEAK/GDF6 gene complex coupled the incremental molecular and structural evolution of both the larynx and wrist with the capacity for speech and wrist rotation (brachiation), respectively which ultimately led to a reduction in GDF6 expression at those sites which in turn caused an increase in the ossification and reduced flexibility of those structures regulated by the GDF6 enhancers. 
This in turn retarded the postnatal descent of the thyroid cartilage which reduced the flexibility and utility of the larynx causing severe speech impairment in the affected family [The findings of this study indicate an important role for connexus . Interesne 2022) ,35,36,37"} +{"text": "The journal retracts the 17 June 2022 article cited above.Following publication, undisclosed competing interests were brought to our attention, which undermined the objective editorial assessment of the article during the peer review process.Frontiers conducted a post-publication assessment of the article, including consulting with independent expertise, which concluded that the article does not meet the standards of publication of Frontiers in Psychology.This retraction was approved by the Field Chief Editor of Frontiers in Psychology and the Chief Executive Editor of Frontiers. The authors did not agree with this retraction."} +{"text": "Communication is fundamental to integrate individual functions into complex systems, whether it be in communities, organisms, or cellular related interactions. Consistent with this, communication has provided the bases for the progress of civilizations as well as for the increasing complexity observed through the evolutionary process . In highJannaway and Scallan).The circulatory system is a complex network in which the arterial and venous circulations are directly connected through the capillaries. Coordination of the complementary work of the arterial and venous systems is essential for the long-term function of the cardiovascular system ; howeverMussbacher et al.).In addition of the complementary work of arterial, venous and lymphatic vasculature, vascular function also relies on the communication among cells circulating in the blood stream in dynamic coordination with the changes in the vessel wall observed at vascular microenvironments\u2019 level . In this2+ concentration ([Ca2+]i) and Ca2+ sensitivity of the contractile apparatus and a signaling pathway that is initiated by the opening of Ca2+-activated K+ channels (KCa) of small (SKCa) and intermediate (IKCa) conductance in endothelial cells and leads to smooth muscle cell hyperpolarization . However, ion channels can also be involved in the progress of the endothelial dysfunction observed in cardiovascular-related pathophysiological conditions, such as hypertension, obesity, diabetes mellitus and ageing; as explained by Goto and Kitazono (Goto and Kitazano), who highlight the participation of endothelial transient receptor potential vanilloid 4 (TRPV4) ion channel in the endothelial dysfunction associated with cardiovascular disease risk factors.Homeostasis of each cell of the organism relies on the fine regulation of blood flow supply according to the changes in the metabolic demand of the tissues. Therefore, variations in cell activity must be paralleled by coordinated modifications in the diameter of resistance arteries controlling the distribution of local blood flow to the tissues . The magpparatus . The levtor tone . Therebyrization . TherefoVillar-Fincheira et al.). Likewise, the integrity of the endothelial layer lining the luminal surface of the vessels must be preserved to keep a proper vascular function .The arterial system is a complex network in which, at least, two functionally different vascular segments that must work in concert can be recognized: the conduit and resistance arteries . 
We understand that these fine articles will provide the reader with an appealing Volume II of the Cell Communication in Vascular Biology."} +{"text": "Family caregivers are the glue holding together the delivery and financing of long-term care. Replacing family care with paid care would cost roughly $470 billion each year. But family caregivers are struggling. They face many challenges \u2013 most notably financial stress and the need for services and supports. Other challenges include lack of respite care, need for caregiver training, and lack of access to a quality paid workforce. In order to address these challenges, Congress authorized the RAISE Family Caregiving Advisory Council. The RAISE Family Caregivers Act directs the Secretary of Health and Human Services to develop a national family caregiver strategy. This session presents the findings of two years of focus groups and interviews with family caregivers and hundreds of stakeholder organizations that support them, providing concrete input to the Biden administration on how to deliver on the broad objectives of the RAISE Act."} +{"text": "The prevalence of children with overweight and obesity is increasing. General practitioners in Denmark follow children throughout early childhood via the preventive child health examinations. These examinations are offered to all children from birth to the age of five. Thus, general practitioners have a unique opportunity for early tracing and identification of overweight and obesity, but the impact of these examinations has not been examined. Therefore, the aim of this study was to examine the association between attending preventive child health examinations and the risk of overweight and obesity at the age of six, both for the total pediatric population and within groups of vulnerable children, such as children of parents with a low educational level or a low household income.A population-based birth cohort study was conducted including all Danish children born from 2000 to 2012, using the Danish nationwide registers. Data included information on child participation in preventive health examinations at general practice, height and weight at the age of six, and parental information on socioeconomic factors.The analyses included 801,444 children. Attending preventive child health examinations was not associated with a lower risk of overweight at the age of six. A lower risk of obesity was seen in children attending the examinations, both in the general population and within vulnerable groups (low parental educational level , low household income ). The risk of obesity was greater in the vulnerable groups than in the not-vulnerable groups.Attending preventive child health examinations was associated with a lower risk of obesity at the age of six, but not of overweight. This was seen for both the general pediatric population and within vulnerable groups. 
The lowest risk of obesity was seen in the not-vulnerable groups.\u2022\u2002The results indicated that attending preventive child health examinations in general practice reduced the risk of obesity at the age of six, but not the risk of overweight.\u2022\u2002The lowest risk of obesity was seen in the not-vulnerable groups attending the preventive child health examinations in general practice."} +{"text": "The study aims to determine the wear intensity of selected milling chuck assembly surfaces covered with a protective DLC (Diamond Like Carbon) coating, used on the production line for elements of selected lockstitch machines, and to analyze the stress distributions in the object fixed with such a chuck for the characteristic load systems of this object during its processing. A model of the workpiece was developed using the finite element method. The boundary conditions, including the load and the method of clamping the workpiece, resulted from the parameters of the milling process and the geometric configuration of the milling chuck. Stress distributions in the workpiece for specific milling parameters and for various configurations of the milling chuck holding the workpiece are included in the article. The model experimental studies of wear were conducted in the contact zone between two surfaces covered with DLC: one on the element of the milling chuck pressing the workpiece and the other on the eccentric cams of this holder. The obtained wear values and shapes for the worn surfaces are also shown. The wear intensities for the steel plunger fins modelling swivel arm of the holder were by an order higher than those of corresponding steel shaft shoulders modelling eccentric cam of the holder. The linear wear intensities for these mating components may be expressed in terms of a function of average contact pressure and sliding speed in a corresponding contact zone. The indentation of eccentric cam into mating surface of the swivel arm of the holder increased nonlinearly with the enhancement of number of cycles of the eccentric cam. The sewing process of various materials in the form of fabrics or knitted fabrics is used in many industries, including fashion (clothes) ,2,3, medThe aim of the present study is to investigate the intensity of wear of selected milling chuck assembly surfaces covered with a protective diamond-like coating (DLC), used on the production line for elements of selected lockstitch machines, and to analyze the stress distributions in the object fixed with such a chuck for the characteristic load systems of this object during its processing. This workpiece was the bobbin case of the first of the possible types of lockstitches including one needle (straight) and double needle (zigzag) . Figure During operation, the sewing machines can be noisy. Beran et al. experimeThe structure of the proper stitch obtained by lockstitch was presented in .The manufacturing process of sewing machines with various degrees of shape complication requires the use of various technologies, including casting and machining. The latter is realized with the use of 3 and 5-axis CNC milling machines.Such a design of the machined surfaces has a positive effect on the repeatability of the positions of the actuators attached to them, the bases of sensors of the sewing machine mechanisms, both in relation to the reference databases of the machine tool, and such attached elements in relation to each other. 
Lockstitch machines with complex structures equipped with electronic devices are used to produce automotive airbags.During processing, the bobbin case is pressed to the bottom fixing surfaces of the milling chuck by various components of such a chuck. Realization of such components has evolved with the design on the chuck. The latter version utilized the eccentricity, pressing a hinged plate holding the workpiece. The eccentricity and hinged plate were the integral components of the chuck. All surfaces of such components were covered by DLC layers. Multiple pressing and releasing of the eccentric with this plate took place under conditions of changing load and slip values, leading to wear of their mating surfaces. The value of this wear and the shape of the worn zones were determined because of the experimental tests conducted in the model of such contact under load and slip conditions close to reality. The obtained wear results in the model were used to estimate the contact wear of the mating surfaces of the actual milling chuck, covered with a DLC protective layer.During the machining process of the bobbin case positioned in the milling chuck, the stress distribution occurs, affected by the milling parameters and geometric parameters of the milling chuck and the method of positioning the workpiece in it. Using the finite element method (FEM), a model of the bobbin case was elaborated. The boundary conditions, including the load and the method of clamping the workpiece, resulted from the parameters of the milling process and the geometric configuration of the milling chuck. The analysis of the stress distribution was assumed to be in steady state conditions, under mean values of cutting forces in the contact of the cutter and the machined surface along whole machined surfaces.Nowadays, the application of DLC coatings is very wide, particularly for various elements made of steel, from which components of the milling chuck analyzed were made. Rajak et al. noticed Using ion beam technology, Aisenberg and Chabot obtainedThe DLC layers are widely applied for various surface-protective solutions ,27,28,29DLC coatings can be either doped: W-DLC ,31,32,33Tuszynski et al. noticed The doped-DLC coatings exhibited higher resistance to wear, good adhesion with the substrate, increased electrical conductivity, and weakened compressive internal stresses during deposition ,41,42,43Interestingly, the DLC layers were found chemically inert ,44,45. The mechanical and tribological properties of DLC films are highly influenced by the ratio of sp2 to sp3 hybridized carbon bonds and hydrogen content ,47. The According to , the DLCAccording to , dependi\u22128 mm3N\u22121m\u22121, while keeping low friction coefficient (0.005\u20130.5) The feed per tooth tion (1) .(1)fz=VThe determined values of Specific cutting resistance2N/mm][The specific cutting resistance tion (2) .(2)kc=F2].Peripheral componentN];2N\u22121].Watrin et al. proposedIn the present study, it was assumed that the linear wear intensity k1 and k2\u2014the wear intensity factor for the plunger fin and the shaft shoulder, respectively [Pa\u22122];Moreover, it was assumed that the linear wear intensity may depend on the slip velocity, and the function describing this relationship has the form described by Equation (17). 
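Equations (1) and (2), introduced earlier in this section, are garbled in the extracted text; the following sketch restates the standard milling relations they appear to correspond to — feed per tooth from table feed, spindle speed and tooth count, and specific cutting resistance as the cutting force over the undeformed chip cross-section. Treating the chip cross-section as depth of cut times feed per tooth is an assumption made only for this illustration.

```python
def feed_per_tooth(v_f, n, z):
    """Feed per tooth f_z [mm/tooth]: table feed v_f [mm/min] divided by
    spindle speed n [rev/min] times the number of cutter teeth z."""
    return v_f / (n * z)

def specific_cutting_resistance(F_c, a_p, f_z):
    """Specific cutting resistance k_c [N/mm^2]: main cutting force F_c [N]
    over an assumed chip cross-section A = a_p * f_z [mm^2]
    (a_p = depth of cut [mm], f_z = feed per tooth [mm])."""
    return F_c / (a_p * f_z)

# illustrative values only: v_f = 200 mm/min, n = 4000 rev/min, z = 4 teeth
fz = feed_per_tooth(200.0, 4000.0, 4)            # 0.0125 mm/tooth
kc = specific_cutting_resistance(15.0, 0.5, fz)  # 2400 N/mm^2
```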
When selecting the form of the function \u22122];\u22122s2];\u22121s].The wear intensity factor 3\u22c5s\u22121];\u22121];3];2];Considering Equations (16)\u2013(18), the dependence between the factor The mass wear P profile on the unit for roughness measurement. The obtained profiles were registered and the position of a front plane of the P profile. The axial position of the P profile. The area between the assumed position of the top plane of P profile line recorded at a length corresponding to the The volumetric wear Based on the obtained wear intensity factors for DLC coated steel of both the shaft shoulder and the steel plunger, the number of cycles of opening and closing the milling chunk can be estimated until decrease of the maximal vertical distance It was assumed that the change The wear intensity factors This section contains results of loading of the bobbin case and of swivel arm. The results of wear of elements of the model of the eccentric cam mating with the swivel arm are also presented. Additionally, the calculated number of cycles of opening and closing of the milling chunk until reaching the limit value of the force pressing the bobbin case during its milling was estimated.The obtained values of normal contact pressure in the contact zone between the swivel arm and the bobbin case as a function of average finite element size is presented in The resulting values of normal contact pressure in two contact zones between the swivel arm and the bobbin case for the average FE relative size equal to 0.015 and two values of the The resulting values of von Mises stresses in the bobbin case loading by cutting torque for two cases of cutters and relative to them, two values of the normal force The resulting values of normal contact pressure in the contact zone between eccentric and the segment of swivel arm as a function of average finite element size are presented in The resulting values of normal contact pressure in two contact zones between eccentric and the segment of swivel arm for the average FE relative size equal to 0.02 and two values of the The corresponding values of the total indentation depth The obtained views of the sample worn initially DLC covered surfaces of steel shaft shoulders after the mating with the front planes of steel plunger fins are shown in P profile registered. The red area represents the corresponding cross-section of a removed material.With the step decreasing radius of shaft shoulders, the mean contact pressure values in the relating contact zones between a shaft shoulder and the mating lunger fin enhanced in a step manner. It was due to the same force generated by the spring and loading such contact zones. That was accompanied by the step decrease of slide velocity, as the regime of the angle rotating speed of the shaft was the same. Additionally, the abrasive wear distance decreased in a step manner, respectively. The level of the abrasive removal of the fragments of the DLC protective coatings deposited on the front plane of the steel piston fin and the surface of the steel shaft shoulder was more clearly visible for the higher values of the mean contact pressure in relating contact zones. For the shaft shoulders with two consecutively lowest diameters, the whole DLC protective coatings were practically removed. Interestingly, the accompanying decrease in sliding velocity seemed to less influence the intensity of abrasive removal of the mentioned DLC protective coatings than the increase in contact pressures. 
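The exact forms of Equations (16)–(19) above are not recoverable from the extracted text. As a stand-in, the sketch below fits a generic surrogate for the linear wear intensity that is quadratic in mean contact pressure and quadratic in sliding speed, consistent with the dependence described in this section; the specific basis functions and any coefficient values are assumptions for illustration only.

```python
import numpy as np

def fit_wear_intensity(p, v, I):
    """Least-squares fit of measured linear wear intensity I against mean contact
    pressure p and sliding speed v, using the surrogate
        I ~ c0 + c1*p + c2*p**2 + c3*v + c4*v**2.
    Returns the coefficient vector c."""
    p, v, I = (np.asarray(a, dtype=float) for a in (p, v, I))
    A = np.column_stack([np.ones_like(p), p, p**2, v, v**2])
    c, *_ = np.linalg.lstsq(A, I, rcond=None)
    return c

def wear_intensity(c, p, v):
    """Evaluate the fitted surrogate at pressure p and sliding speed v."""
    return c[0] + c[1] * p + c[2] * p**2 + c[3] * v + c[4] * v**2
```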
Along the axis of the shaft, the profiles of the worn cylindrical surfaces were measured for consecutive shaft shoulders A, B, C, and D, respectively. The obtained courses of such a profile are shown in The average volumetric wear values for the shaft shoulders are presented in The masses of initial and worn plungers, their mass wear, and volumetric wear are shown in From the obtained values of volumetric wear of shaft shoulders, their linear wear intensity average values were estimated and are presented in Such values were expressed as a function of sliding speed for shaft shoulders with four cuts and are presented in From the obtained values of volumetric wear of plunger fins, the linear wear intensity average values were estimated and are presented in The corresponding values of wear of plunger fin expressed as a function of sliding speed for plunger fins are shown in P profiles only by about 3% for the case of the shaft shoulder with two cuts. Simultaneously, the average linear wear intensity of the plunger fin material approximated by the function of sliding speed differed from the average linear wear intensity of the plunger fin material determined based on measured values of mass wear only by about 6% for the case of the plunger fin mating with the shaft shoulder with two cuts. The loading force, number of cycles, and sliding speed for the case of this shaft shoulder significantly differed from those for the shaft shoulder with four cuts made. Therefore, to some extent, such observation confirms the correctness of the obtained mathematical model for the linear wear intensity characterizing the abrasive wear process of the rotating shaft initially covered with a DLC layer mating with the plunger fin front plane also initially covered with a DLC layer. Such an abrasive wear process proceeded under the conditions of additional impact wear resulting from small dynamic deflections of the tensioned spring affecting the load of the plunger fin front plane.It is interesting that the average linear wear intensity of shaft shoulder approximated by the function of sliding speed differed from the average linear wear intensity of the shaft shoulder determined based on measured values of wear from registered 2 N\u22121] described by Ejtehadi et al. [The values of the wear coefficient i et al. , as citei et al. , and of n et al. for shafi et al. , as citei et al. , were eqi et al. for the The indentation The intensity of the abrasive removal of the fragments of the DLC protective coatings deposited on the front plane of the steel plunger fin and the surface of the steel shaft shoulder of cooperating friction with each other under the conditions of non-lubricated contact was higher in the case of higher mean contact pressures. The accompanying reduction in sliding velocity had a lesser effect on the intensity of abrasive wear than the increase in contact pressures.2 for the sliding speed in contact zone below 0.03 m/s.The wear intensities for the plunger fins analyzed were by an order higher than those of corresponding shaft shoulders being below 0.00025 1/PaThe linear wear intensities for the plunger fins and the corresponding shaft shoulder may be expressed in terms of a quadratic function of average contact pressure and the polynomial sliding speed in a corresponding contact zone. 
The forms of the functions of sliding speed were presented in . The indentation of the eccentric cam into the mating surface of the swivel arm increased nonlinearly with the number of cycles of the eccentric cam. The use of a DLC layer on both the cylindrical cams of the eccentric and the mating surface of the swivel arm of the milling chuck provides the needed value of the force pressing the bobbin case against the milling chuck support for only about fifty opening-and-closing cycles of the milling chuck. It is necessary to apply a harder protective layer, more resistant to abrasive wear than the DLC one, to the surface of the milling chuck. Such a layer can be obtained using, for example, a plasma nitriding process. The tests were conducted under dry conditions, at low sliding speeds (below 0.03 m/s) and low contact pressures. Therefore, further investigations can be carried out for lubrication with various lubricants and for higher values of sliding speed and contact pressure in the contact zone between the mating steel surfaces. From the obtained results, some conclusions can be drawn."} +{"text": "We aim to determine the proportion of infants entering care in Wales via the two primary legal routes (section 76 of the Social Services and Well-being (Wales) Act 2014, and section 31 of the Children Act 1989), and associations between mode of entry and infant characteristics and outcomes.This is a longitudinal cohort study using routinely collected data held in the Secure Anonymised Information Linkage (SAIL) Databank. We will link the Looked After Children dataset with family justice (Cafcass Cymru) data on section 31 proceedings, to explore pathways through the care and family justice systems for infants aged <1 year entering care in Wales between 2012 and 2019. We will follow up each child for two years from the date of their entry into care to track legal outcomes and placement outcomes.Descriptive statistics will include frequencies and proportions of infants who initially enter the care system via voluntary arrangements (section 76) and care proceedings (section 31), by age, year, local authority, and category of need. We will describe the proportion and characteristics of those with voluntary arrangements who later become the subject of care proceedings, and the distribution of final legal orders and placement types by initial route of entry to care. We will use funnel plots to investigate variation between local authorities. We will use linear regression to test for statistically significant differences in the proportions of infants entering care via the two different routes over time, and chi-square tests to investigate associations between mode of entry and infant characteristics and outcomes.There is limited information on the care journeys of children in Wales at the individual level. 
This study will help us to understand the patterns of use of voluntary arrangements for infants over time, the proportion subsequently involved in section 31 applications and the impact on outcomes for children."} +{"text": "The long-lasting effects of trauma on mental health, and their cumulative effect over the lifetime, are of great interest in research and applied psychology. However, the effect of cumulative trauma in combination with cognitive biases, such as cognitive rigidity, on the severity of depression has not been tested yet.The aim of this study was to analyse these variables while accounting for differential gender effects in a sample of patients with a diagnosis of depressive disorder.A total sample of 177 patients (137 women) was assessed using the Cumulative Trauma Scale. Cognitive rigidity was measured with the Repertory Grid Technique and severity of depressive symptoms with the Beck Depression Inventory.The results indicated that high levels of cognitive rigidity and a high frequency of perceived negative cumulative trauma predicted depressive symptoms, while a high frequency of perceived positive trauma did not predict depressive symptoms. Moreover, gender did not explain variability in depression, and its interaction with the frequency of perceived trauma was not significant.Overall, cumulative trauma frequency and its negative appraisal are key to understanding the severity of depression, but cognitive rigidity also seemed to be a relevant factor to consider. Thus, these results highlight the need to focus on traumatic and cognitive aspects to increase the efficacy of psychological interventions in depression.No significant relationships."} +{"text": "Over the last decades, the prevalence of sleep disorders has been reported to have substantially increased globally . In this Research Topic, \u201cNutrition and sleep medicine\u201d, various research groups provided evidence of the relation between dietary factors and sleep features from observational studies, further explored by additional studies on genetic biomarkers, while a mechanistic overview of the scientific literature contributed to summarize and better understand the retrieved associations. In the study of Sutanto et al., the authors aimed to investigate the relationship between protein intake and sleep quality in 104 healthy subjects between 50 and 75 years of age; the results showed that sleep duration was positively associated with the dietary tryptophan to large neutral amino acid ratio, with significant results specific to plant tryptophan. Two other studies investigated dietary parameters and sleep quality in school children; the study of Shih et al. analyzed data from the Nutrition and Health Surveys in Taiwan involving 2,628 participants, showing that those consuming more high-sugar sweetened beverages exhibited shorter sleep durations on school days and >2 h of sleep debt than those reporting low intake; the study from Ramirez-Contreras et al. was instead conducted on 588 children aged 5\u201312 years, revealing an association of regular fish and vegetable consumption with an advance of the midpoint of sleep, and of fish and daily fruit consumption with fewer sleep disturbances, while the daily consumption of sweets and candy and having pasta or rice >5 times/week were significantly associated with a decrease in sleep duration. Two studies used genetic biomarkers from the UK Biobank to investigate the relation between dietary factors and sleep outcomes: the study of Zou et al. 
showed that genetically predicted short sleep duration is a potential causal risk factor for hyperuricemia for women but has little effect onmen; the study of Tang et al. reported that genetically increased triglyceride levels have independent causal effects on risk of sleep apnea. Finally, the study of Benton et al. summarized the evidence from the scientific literature on the role of carbohydrates on sleep features, emphasizing two major hypotheses for their potential benefits, including the increase in the uptake of tryptophan by the brain, where it is metabolized into serotonin and melatonin (hence resulting in improved sleep), but also the emerging role of glucose-sensing neurons associated with the sleep-wake cycle in the hypothalamus, which may be affected by the carbohydrate-induced changes in the level of blood glucose.In this Research Topic entitled \u201cIn conclusion, this Research Topic provided an interesting update of current evidence on the relationship between diet and sleep features. Nonetheless, the topic investigated needs further attention in future research while more preclinical studies are necessary to understand the mechanisms underlying the observed relation.All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication."} +{"text": "According to this manuscript, the median period from hernia repair to diagnosis of mesh migration was 60\u2009months,Even if the mesh has not migrated into the bladder, adhesion of the repair site of the hernia and the bladder occasionally makes it quite difficult to expand the Retzius cavity and perform laparoscopic pelvic surgeries , and vice versa. Hence, it is important for urologists and general surgeons to share the possible negative effects of inguinal and pelvic surgeries and develop joint strategies.The author declares no conflict of interest."} +{"text": "The number of refugees has reached unprecedented levels with approximately 24.6 million people fleeing persecution, including war and other forms of organized violence . RefugeeOn February 24, 2022, Russia invaded Ukraine, leading the United Nations High Commissioner\u2019s Refugee Council (UNHRC) to declare a Level 3 state of emergency (highest level) . The conThe war in Ukraine has shed light on multiple forms of racism\u2014the belief that certain people are inferior because of their skin color . First, Citizenship and belonging remain debated concepts. The racism experienced by refugees of color during the ongoing Russian invasion of Ukraine further complicates POC\u2019s conceptualizations of belonging to Western societies. The racist treatment experienced by refugees of color at different borders constitutes a crack in the Western unity against the Russian regime. Racial tensions at European borders highlighted once again the unfairness of certain Western ideologies and policies. If the West can achieve unity in the face of violence, why then is that same compassion not extended to non-White refugees? The treatment of refugees of color has impacted the remaining faith of POC in Western democracies . The morSome may think that discussions on the inhumane treatment of refugees of color can wait until after the war in Ukraine has ended. However, human lives are at risk every second and no refugee should be denied basic rights because of interactions between their identities and others\u2019 ideological beliefs. 
In addition, experiences of racism within Ukrainian borders not only impacts the health of thousands of refugees of color, but also POC in Western countries. Solidarity movements are occurring in Canada, the United Kingdom, France, and the United States among POC communities in direct support of refugees of color . We shou"} +{"text": "Despite the importance of healthcare acceptability, the public health community has yet to agree on its explicit definition and conceptual framework. We explored different definitions and conceptual frameworks of healthcare acceptability, and identified commonalities in order to develop an integrated definition and conceptual framework of healthcare acceptability.We applied qualitative thematic content analysis on research articles that attempted to define healthcare acceptability. We searched online databases and purposefully selected relevant articles that we imported into ATLAS.ti 8.4 for deductive and inductive analysis which continued until there were no new information emerging from selected documents (data saturation).Our analysis of the literature affirmed that healthcare acceptability remains poorly defined; limiting its application in public health. We proposed a practical definition attempting to fill identified gaps. We defined acceptability as a \u201cmulti-construct concept describing the nonlinear cumulative combination in parts or in whole of the fit between the expected and experienced healthcare from the patient, provider or healthcare systems and policy perspectives in a given context.\u201dWe presented and described a workable definition and framework of healthcare acceptability that can be applied to different actors including patients, healthcare providers, researchers, managers or policy makers. Acceptability of healthcare is gaining momentum in the literature and is evolving as an emerging discipline of public healthAcceptability of healthcare is a complex and many-sided concept describing appropriateness of healthcareGiven the broad meaning of terms associated with human interactions and perceptions, the concept of acceptability in healthcare remains poorly definedWe conducted a qualitative thematic content analysis on identified articles on acceptability of healthcareFollowing retrieval of purposively selected research articles in line with the developed search strategy, we imported them into ATLAS.ti 8.4 for analysis. We deductively and inductively coded and categorised the themes related to definitions and frameworks of healthcare acceptabilityTo ensure the validity, the researchers discussed the preliminary coding system developed by the principal investigator; they revised it until a final coding system was adopted10. The researchers also assessed the intra-coding reliability for the first ten coded documents and there was a perfect agreement (100%) in length and location for the relevant codesThe results of this study were based on a selection of 174 published articles after which the analysis did not show any new information (data saturation). 
There were six main themes emerging from the documents analysed including: (1) acceptability within the context of access to healthcare, (2) complexity of acceptability,, (3) context of acceptability, (4) semantic domains of acceptability, (5) definition of acceptability and (6) conceptual framework of acceptability.The concept of access to healthcare was introduced in healthcare literature around the early 1970sIt was beyond the scope of this paper to discuss the definition of access to healthcare and different conceptual frameworks, but it was important to note that many authors widely recognised acceptability as one of the dimensions of accessSome authors have referred to acceptability of healthcare as a unitary construct2 without clearly integrating the different elements or constructs of acceptability1. Other authors have used terms such as acceptance, satisfaction, feasibility, enjoyment and uptake as proxies for acceptabilityInitial publication proposed acceptability as a complex concept describing the best fit between the healthcare expectations of the patient and the healthcare systemMost of the articles we analysed emphasised that acceptability of healthcare could only be interpreted effectively if the context was considered1\u20136. However, the context of acceptability was not clearly described in most of the analysed studies. The analysed articles showed that the context of acceptability goes beyond the setting and population, embracing the content, scope and focus of acceptability. Nevertheless, most of the studies grappled to define the scope and focus of healthcare acceptability. Researchers used one of two theories to define the scope of acceptability of healthcare; acceptability was either referred to as a unitary or a multi-construct conceptThe concept of acceptability of healthcare is broad5 and encompasses components with overlapping meaningsWe analysed multiple definitions of acceptability in the literature and they all appeared to describe different aspects within the continuum of acceptability Many authors have agreed that one of the best ways of approaching acceptability is from patients, healthcare providers or healthcare system managers or policy makers' perspectivesBuilding on existing literature and having explored the context as well as the basic theories helping to unpack the complexity and semantic domains of acceptability, we proposed a more practical and comprehensive definition of acceptability. We defined acceptability as \u2018a multi-construct concept describing nonlinear cumulative combination in parts or in whole of expected and experienced degree of healthcare from patient, provider or healthcare systems and policy perspectives in a given context.'The articles in this analysis did not offer a shared framework of acceptabilityThe suggested approach to our conceptual framework is based on five essential features: (1) context, (2) basic theories, (3) dependent variables, (4) independent variables and (5) applications of acceptability in public health. Context of acceptability consists of the setting, population, content, scope and focus. The basic theories that can be used to generate a shared understanding of acceptability include demand-supply sides, best-fit, mutual exclusivity, complex phenomenon, stakeholder analysis and actor-network. It should be noted that the full description of these theories does not fall within the scope of this article.Dependent variables include a set of components that define acceptability of healthcare. 
The focus of acceptability either from patient, provider or healthcare system manager or policy maker viewpoints should guide the selection of relevant components. At the level of dependent variables, the researchers can only conduct descriptive analysisIndependent variables include factors that are not part of the definition of acceptability but can or have proved to have significant impact on it either positively or negatively7. Independent variables consist of predictor variables associated with acceptability of healthcareWith regard to applications of acceptability in public health, we designed a flexible and adaptable framework to various contexts. The essential added value of this framework is to clarify the description of acceptability of healthcare from the perspectives of the patient, provider or healthcare system manager and policy maker. In addition, the proposed framework clarifies the notion of dependent and independent variables that can guide and help wih the current confusion in literature. Furthermore, this framework provides practical and targeted application for assessing acceptability from component (micro) or unitary, construct (meso) or multi-component or dimension (macro) or multi-construct levels.In this paper, we presented a coherent definition of healthcare acceptability, which we converted into a conceptual framework. We considered acceptability within the context of access and, as a multi-construct, complex concept. Our analysis confirmed that imprecise definitions of acceptability and lack of a coherent conceptual framework have hindered the application of healthcare acceptability in healthcare systems and policyOur findings agreed with other publications describing acceptability as a dimension of access to healthcareThe findings from this analysis supported the claim of acceptability of healthcare as a multi-construct conceptThis paper was aligned with the description of acceptability as a multi-level complex concept5. Usually there are too little data describing the levels of complexity or acceptability leading to inconsistent definitions. This article added to existing literature in describing the semantic domains of acceptability corresponding to their level of complexity. The semantic domains include \u2018dimension' corresponding to the highest or macro level of acceptability, 'construct\u2019 corresponding to medium or meso level of acceptability and \u2018component\u2019 corresponding the lowest or micro level of acceptability. The results obtained concur with previous publication advocating for the mutual exclusivity of the healthcare acceptability constructs where no component is categorised under more than one constructOur findings agreed with other studies which declared a lack of clear-cut definition of acceptabilityThe results from this study confirmed the lack of shared interpretation of acceptability frameworks reported in the published literatureThis paper proposes a workable definition of healthcare acceptability considering perspectives from patients, healthcare providers and healthcare system managers or policymakers. 
Drawing on existing literature, we suggested a definition of acceptability as \u2018a multi-construct concept describing nonlinear cumulative combination in parts or in whole of expected and experienced degree of healthcare from patient, provider or healthcare systems and policy perspectives in a given context.\u2019 The paper also describes a new and comprehensive conceptual framework applicable to healthcare acceptability through quantitative, qualitative and mixed methods in public health research and practice. We recommend further studies in order to validate and widely adopt the suggested definition and conceptual framework. Nevertheless, we believe this paper provides substantial information contributing toward forging consensus on the concept of acceptability definition and its framework among public health researchers and practitioners."} +{"text": "Damping performance of the plates with constrained layer damping (CLD) treatment mainly depends on the layout of CLD material and the material physical properties of the viscoelastic damping layer. This paper develops a concurrent topology optimization methodology for maximizing the modal loss factor (MLF) of plates with CLD treatment. At the macro scale, the damping layer is composed of 3D periodic unit cells (PUC) of cellular viscoelastic damping materials. At the micro scale, due to the deformation of viscoelastic damping material affected by the base and constrained layers, the representative volume element (RVE) considering a rigid skin effect is used to improve the accuracy of the effective constitutive matrix of the viscoelastic damping material. Maximizing the MLFs of CLD plates is employed as the design objectives in optimization procedure. The sensitivities with respect to macrodesign variables are formulated using the adjoint vector method while considering the contribution of eigenvectors, while the influence of macroeigenvectors is ignored to improve the computational efficiency in the mesosensitivity analysis. The macro and meso scales design variables are simultaneously updated using the Method of Moving Asymptotes (MMA) to find concurrently optimal configurations of constrained and viscoelastic damping layers at the macro scale and viscoelastic damping materials at the micro scale. Two rectangular plates with different boundary conditions are presented to validate the optimization procedure and demonstrate the effectiveness of the proposed concurrent topology optimization approach. The effects of optimization objectives and volume fractions on the design results are investigated. The results indicate that the optimized layouts of the macrostructure are dependent on the objective mode and the volume fraction on the meso scale. The optimized designs on the meso scale are mainly related to the objective mode. By varying the volume fraction on the macro scale, the optimized designs on the meso scale are different only in their detailed size, which is reflected in the values of the equivalent constitutive matrices. Viscoelastic damping materials are often used to reduce the vibration and noise radiation of plate and shell structures. In particular, constrained layer damping (CLD) treatment has the advantages of simple implementation, low cost and high damping capability, and it has been widely used in the automobile, aviation, aerospace and naval industries . To desiThe topology optimization method was originally developed to find the optimized structural layout under given constraints . 
The modMeanwhile, optimizing the distribution of viscoelastic damping materials to minimize vibration response and sound radiation has received attention from many scholars. Zhang and Kang optimizeSince the physical properties of the viscoelastic damping layer have a great influence on the damping performance, there is a great desire to optimize the microstructures of the damping layer with desirable properties . SigmundHowever, the above works concerning the topology optimization of viscoelastic damping material are concentrated on a one-scale design problem. With the development of optimization algorithms dealing with large-scale optimization problems , the ideAt present, the concurrent topology optimization of viscoelastic damping structures is still limited. Zhang et al. presenteThe purpose of this work is to develop a concurrent topology optimization method for maximizing MLF of plates with CLD treatment. The plates with CLD treatment dissipate vibration energy through transverse shear strains induced in the viscoelastic damping layer, and the effective transverse shear moduli are the main focus. Therefore, it is assumed that the macrostructure of the damping layer is composed of the 3D periodic unit cells (PUC). The representative volume element (RVE) considering a rigid skin effect is used to improve the accuracy of the effective constitutive matrix of the viscoelastic damping material. A mathematical optimization model is established while maximizing MLF as the design objective. The sensitivities with respect to macrodesign variables are formulated using the adjoint vector method while considering the contribution of eigenvectors, while the influence of macroeigenvectors is ignored to improve the computational efficiency in the mesosensitivity analysis. The macro and meso scales design variables are simultaneously updated using the Method of Moving Asymptotes (MMA) to find concurrently optimal configurations of constrained and viscoelastic damping layers at the macro scale and viscoelastic damping materials at the micro scale. Two numerical examples are given to demonstrate the effectiveness of the proposed approach.The diagrammatic drawing in As illustrated in An eight-node hexahedron element was used to establish the finite element model of the RVE. The nine components of the equivalent elasticity matrix were obtained by solving nine different static analyses for the finite element model of the RVE. Boundary conditions of the RVE can influence the effective constitutive matrix . AccordiThe arbitrarily imposed averaged strains are expressed as follows:i-th load case can be obtained as follows:e-th element, respectively; i-th load case, and The total strain density energy corresponding to the From the first six load cases in From the last three load cases in e-th element.The effective density of the RVE was evaluated through the following relationshipIt was assumed that the damping characteristic of the viscoelastic damping material is expressed as follows:In the analysis of a structure with the CLD treatment by using the finite element method, the momentum equation for the free vibration of the structure is written as follows:i-th element stiffness matrices; i-th element mass matrices; the superscripts The global mass and stiffness matrices can be expressed as followsAccording to the Modal Strain Energy (MSE) method, the MLF The objective of the CLD treatment is to dissipate the vibrational energy, which can be improved by maximizing the MLF. 
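The Modal Strain Energy estimate referred to above can be sketched as follows. This is a generic illustration rather than the paper's exact formulation: the modal loss factor is taken as the ratio of the dissipative (imaginary) to the elastic (real) modal strain energy, and small synthetic matrices stand in for the assembled finite element matrices.

```python
import numpy as np
from scipy.linalg import eigh

def modal_loss_factor(K_re, K_im, phi):
    """MSE estimate of the modal loss factor for a real mode shape phi:
    eta_r = (phi^T K_im phi) / (phi^T K_re phi),
    where K_re + 1j*K_im is the complex (storage + loss) stiffness matrix."""
    return float(phi @ K_im @ phi) / float(phi @ K_re @ phi)

# toy usage with synthetic matrices standing in for the assembled FE system
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
K_re = A @ A.T + 6.0 * np.eye(6)      # storage stiffness (symmetric positive definite)
K_im = 0.3 * np.diag(rng.random(6))   # loss stiffness contributed by the viscoelastic layer
M = np.eye(6)                         # mass matrix
_, modes = eigh(K_re, M)              # real eigenmodes of the undamped problem
etas = [modal_loss_factor(K_re, K_im, modes[:, r]) for r in range(3)]
```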
The concurrent optimization problem regarding the macro and meso scales can be formulated as follows:i-th element in the RVE can be written as follows:j-th element in the RVE. In order to achieve a clear design layout, the penalization method was applied. Based on the Rational Approximation of Material Properties (RAMP) model , the masOn the macro scale, the design region is composed of a constrained layer and a viscoelastic damping layer. Using the same interpolation scheme, the mass and stiffness matrices can be interpolated as follows:The sensitivity of the objective function in Equation (11) with respect to the design variables in the macrostructure can be expressed as follows:r-th mode.The sensitivity of the objective function with respect to the design variables e design . The adjThe sensitivity of To remove the implicit derivatives of the eigenvectors and eigenvalues, the adjoint variables should satisfy the following equationsThe adjoint variables The derivatives of the global stiffness and mass matrices with respect to the design variables For sensitivity analysis on the meso scale, the sensitivity of the eigenvectors with respect to design variables The sensitivities of the global stiffness matrices with respect to the design variables Considering the components of the equivalent constitutive matrix expressed as Equations (4) and (5), the total strain density energy defined in Equation (3) and the material interpolation proposed in Equation (12), the sensitivity of the components of the equivalent constitutive matrix with respect to the design variables on the meso scale can be formulated as follows(1)Define the design domain and initialize the design variables on both scales;(2)Establish the finite element model of the RVE and calculate the effective complex matrix and density of the viscoelastic damping materials by using the RVE with a rigid skin effect;(3)Establish the finite element model on the macro scale and obtain the MLFs based on the Modal Strain Energy Method;(4)Calculate the sensitivities of the objective function with respect to the design variables on both the macro and meso scales. To circumvent the checkerboard and mesh-dependency problems, a mesh-independence filter scheme was empl(5)Update the design variables on both scales by using MMA; and(6)Check the convergence of the result. If the change in the objective function of twenty successive iterations is less than Once the sensitivity information was obtained, the MMA was used to update the design variables on both the macro and meso scales. The proposed concurrent topology optimization process for maximizing the MLF of PCLD structures is shown in The unit cell of the viscoelastic damping material is shown in ormation . Hence, A rectangular plate with four edges clamped is shown in The initial designs for RVE had a uniform distribution with a given volume fraction The penalty factor Two other optimization objectives are discussed here, namely, maximizing the second MLF (Objective 2) and maximizing the sum of the first two MLFs (Objective 3). The volume fractions were also A harmonic excitation force In this section, the effect of the volume fraction on the optimization results is investigated. Two optimization objectives were maximizing the first MLF and the second MLF, respectively. For each optimization objective, three combinations of k-th MLF, the k-th MLF is maximum and the resonance peak at the k-th eigenmode was the minimum. 
In the case of maximizing the sum of the first two MLFs, though the MLF and resonance peak in the k-th eigenmode showed inferior results which maximized the k-th MLF, the optimized design obtained a better equilibrium in the first two modes.In order to further verify the effectiveness of the proposed concurrent topology optimization method, another rectangular plate with a different boundary condition is studied in this section. The rectangular plate with two short edges clamped is shown in In this section, the optimization objectives and volume fractions were the same as the cases in A concurrent topology optimization approach is proposed here for the multi-scale design of a CLD plate to maximize the MLF. The equivalent constitutive matrix of viscoelastic damping material was calculated using the RVE with a rigid skin effect and was taken into account in the finite element analysis of the macrostructure of the CLD plate. The sensitivity calculation was performed on both macro and meso scales. The MMA was used to update the design variables on two scales and the optimized design was obtained. The numerical examples are presented using the proposed concurrent topology optimization approach to multiscale systems.k-th MLF, the k-th MLF is maximum and the resonance peak at the k-th eigenmode was the minimum. In the case of maximizing the sum of the first two MLFs, the optimized design obtained a better equilibrium in the first two modes. The proposed concurrent topology optimization method can provide an effective means to optimize the structural damping of the CLD structure and produce optimal layouts on both the macro and meso scales. The proposed concurrent topology optimization method is a good choice for the optimization of the structural damping of CLD structure and producing optimal layout on both scales.By analyzing the influence of the penalty factor in the RAMP model on optimization results, an appropriate penalty factor was chosen. The effects of optimization objectives and volume fractions on the design results were investigated. The results indicated that the optimized layouts of the macrostructure were dependent on the objective mode and volume fraction on the meso scale. The optimized designs on the meso scale were mainly related to the objective mode. When varying the volume fraction on the macro scale, the optimized designs on the meso scale were different only in their detailed size, which were reflected in the values of the equivalent constitutive matrices. When maximizing the"} +{"text": "The Hox genes have attracted the interest of scientist for decades. Their organization in genetic clusters, their ordered chromosomal alignment and the correlation of this disposition with the evolutionary conserved gene expression along the anteroposterior axis have excited the curiosity of researchers and prompted countless investigations. Hox proteins have also been studied as examples of transcription factors that activate particular genes at certain positions. The present Research Topic Mechanisms of Hox-Driven Patterning and Morphogenesis gathers a series of reviews and original articles on Hox expression and function in different model organisms.Drosophila cuticle due to mutations in Bithorax Complex (BX-C) genes. Subsequent studies revealed that Hox genes are expressed and required also in muscles or nervous system. Two reviews focus on the role of Hox genes in these tissues. The manuscript by Poliacikova et al. 
reviews the role of Hox genes in the different steps of muscle specification in Drosophila and vertebrates. The authors describe Hox activity in somatic/skeletal, cardiac and visceral muscle development, explain the cooperation of Hox proteins with muscle-specific proteins and explain the role of these genes in neuromuscular circuits. These neuromuscular networks are detailed in the work of Joshi et al. They report muscle-motoneuron circuits in Drosophila and how these impact on development and behavior. The authors first summarize Hox role in specification of the central nervous system and then review recent advances in Hox determination of motoneuron morphology and physiology, focusing in two larval behaviors, locomotion and feeding, and two adult ones, egg-laying and self-righting.The seminal work of Ed Lewis characterized transformations in the Li et al. demonstrates a role for Hox5 in adults. In a conditional triple mutant Hoxa5/Hoxb5/Hoxc5, in which Hox5 function is eliminated in the mesenchyme of adult mice, the investigators observed elimination of the elastin matrix, change in alveolar structures and an emphysema-like phenotype.Although it is well established that Hox genes are required during development, recent studies have uncovered a Hox function in adults, as described in the above review. The original article by Mitchel et al.,) the activity of Hoxa5 in the development of the mouse sternum is analyzed. The authors first characterize in detail the origin and development of this structure. Then, they show Hoxa5 expression and requirement in the presternum, demonstrate the coordinated activity of other Hox genes in the formation of this structure and discuss the evolutionary implications of such function. In the second original article, by Howard et al., it is demonstrated that high levels of Hoxb5b in a restricted time window expand zebrafish neural crest cell number, increase the expression of the vagal neural crest cell markers foxd3 and phox2bb, and extend the number of enteric neural progenitors; however, these progenitors do not expand later on to make enteric neurons along the gut. To explain the early expansion of neural crest cells but the later reduction of neurons, the authors argue that it can be due to the dynamic expression of Hox cofactors of the TALE family or to insufficiently timed signals needed for the continuous expansion and differentiation.Two other research articles also unveil Hox5 function. In one of them deals with the role of Hox gene duplication and divergence in the development of morphology and in the emergence of new features in vertebrates. The authors discuss Hox gene duplication in evolution, showing examples of conservation and divergence of gene function, and explaining the role of the different domains of Hox proteins in the acquisition of specificity. They conclude that new specificity of Hox function can be achieved with just a few aminoacid changes in conserved regions and through interactions with proteins of the PBC class.The relationship between Hox gene function and evolution, originally proposed by Ed Lewis, is addressed in two reviews. In the first one, by Cain and Gebelein discuss how the activity of different Hox proteins can determine the development of distinct structures by specific DNA binding and activation of particular genes. They summarize recent genomic and interactome data revealing how Hox proteins differ in the way they can bind closed chromatin, showing that some of them need PBC cofactors as pioneer factors. 
Then, they explain how the interaction of Hox proteins with cofactors and collaborators impinges on the way Hox proteins activate or repress gene expression, illustrating this differential Hox activity with the Drosophila Abdominal-A protein.Hajirnis and Mishra discuss Hox organization and function. They first describe dispersion and clustering of Hox genes in different species, and then review the ordered disposition of cis-regulatory modules in the BX-C, and the opening of the BX-C chromatin, with emphasis on the organization of the Abdominal-B gene. They also review Hox function away from their traditional role of specifying particular structures and finally stress the importance of controlling Hox levels of expression by presenting examples of how increasing Hox protein levels can lead to cancer.Finally, the comprehensive review by"} +{"text": "Younger schoolchildren with psychological development disorders have low cognitive activity, insufficient development of basic school skills, and a low level of educational motivation. In accordance with the requirements of the educational program for students with psychological development disorders, it is important to develop the ability to predict the results of their actions.The study of predictive competence in primary schoolchildren with psychological development disorders.The study involved 60 children aged 8-10 years with a psychological development disorder. To study predictive competence, the methodology \u201cThe ability to predict in situations of potential or real violation of social norms\u201d was used.The study revealed a low level of the cognitive and speech-communicative spheres of prognostic competence development in primary schoolchildren with psychological development disorders, as well as a deficit in prediction in the field of learning, which includes educational cooperation and educational communication of the child. Generalized statements, a passive position in future situations and pessimistic attitudes prevailed in the predictions of schoolchildren when constructing an image of the future. For schoolchildren with psychological development disorders, the prognosis is presented by monosyllabic answers, with the observable poverty of speech utterances.The features of prognostic competence revealed in the study make it possible to develop individual programs for the development of the prognostic abilities of schoolchildren with psychological development disorders. This paper has been supported by the Kazan Federal University Strategic Academic Leadership Program.No significant relationships."} +{"text": "Sleep plays a key role in the pathogenesis and clinic of mood disorders. However, few studies have investigated electroencephalographic sleep parameters during the manic phases of Bipolar Disorder (BD).Sleep management is a priority objective in the treatment of the manic phases of BD and the polysomnographic investigation can be a valid tool both in the diagnostic phase and in monitoring clinical progress.Twenty-one patients affected by BD, manic phase, were subjected to sleep monitoring via PSG in the acute phase (at the entrance to the ward) and in the resolution phase (near discharge). 
All participants were also clinically evaluated using Young Manic Rating Scale (YMRS) Pittsburgh Sleep Quality Index (PSQI), Morningness-eveningness Questionnaire (MEQ) at different timepoints.Over the hospitalization time frame there was an increase in quantity and an improvement in the quality and effectiveness of sleep (Sleep Efficiency). In addition, from the point of view of the EEG structure, clinical improvement was accompanied by an increase in the percentage of REM sleep.Sleep monitoring by PSG can be a valuable tool in the clinical setting both in the diagnostic phase, \u201cobjectively\u201d ascertaining the amount of sleep, and in the prognostic phase, identifying electroencephalographic characteristics that can predict the patient\u2019s progress and response to drug therapy. The improvement in effectiveness and continuity of sleep and the change in its structure that accompanies the resolution of manic symptoms also testifies how the regularization of the sleep-wake rhythm is to be considered a priority in treating manic phases.No significant relationships."} +{"text": "Younger schoolchildren with psychological development disorders have low cognitive activity, insufficient development of basic school skills, and a low level of educational motivation. In accordance with the requirements of the educational program for students it is important to develop the ability to predict the results of their actions and deeds.The study of predictive competence in primary schoolchildren with psychological development disorders.The study involved 60 children aged 8-10 years with a psychological development disorder. To study predictive competence, the methodology \u201cThe ability to predict in situations of potential or real violation of social norms\u201d was used.The study revealed a low level of the cognitive and speech-communicative spheres of prognostic competence development in primary schoolchildren with psychological development disorders, as well as a deficit in prediction in the field of learning, which includes educational cooperation and educational communication of the child. Generalized statements, a passive position in future situations and pessimistic attitudes prevailed in the predictions of schoolchildren when constructing an image of the future. For schoolchildren the prognosis is presented by monosyllabic answers, with the observable poverty of speech utterances.The features of prognostic competence revealed in the study make it possible to develop individual programs for the development of the prognostic abilities of schoolchildren with psychological development disorders, to teach how to predict the development of events in educational activities, to recognize the emotions of the participants in the events. This paper has been supported by the Kazan Federal University Strategic Academic Leadership Program.No significant relationships."} +{"text": "Restriction to the analysis of births that survive past a specified gestational age can lead to biased exposure-outcome associations. The objective is to estimate the influence of bias resulting from using a left truncated dataset to ascertain exposure-outcome associations in perinatal studies.We simulated the magnitude of bias under a collider-stratification mechanism for the association between the exposure of advancing maternal age (\u2265 35 years) and the outcome of stillbirth. 
This bias occurs when the cause of restriction (early pregnancy loss) is influenced by both the exposure and unmeasured factors that also affect the outcome. Simulation parameters were based on an original birth cohort from Western Australia and a range of plausible values for the prevalence of early pregnancy loss , an unmeasured factor U and the odds ratios for the selection effects. Selection effects included the effects of maternal age on early pregnancy loss, U on early pregnancy loss, and U on stillbirth. We then compared the simulated scenarios with the results from the original cohort in which bias was unadjusted.We found the overall magnitude of bias to be minimal in the association between advancing maternal age and stillbirth. The findings indicate that the stronger the effect of the unmeasured U on early pregnancy loss and stillbirth, the greater the departure from the null. When we compared the simulated model with the results of the original cohort, we found evidence of marginal downward bias which was most prominent for women aged 40+ years.Our simulations demonstrated a marginal downward bias in the association between advancing maternal age and stillbirth. We recommend that future studies should quantify the extent of such bias when using left truncated birth datasets to determine exposure-outcome associations."} +{"text": "This is the editorial to the special edition \u201cCardiac optogenetics: using light to observe and excite the heart.in vitro . While leading also to depolarization in cardiomyocytes .The application of fluorescent voltage sensitive dyes to study excitable cells was established 50 years ago but onlyin vitro . Ever siin vitro , defibriin vitro and cardin vitro . In thismyocytes , the mucScalco et al.), emphasizing the heterocellular, increasingly complex composition and functions of specific cardiac tissues, and further raising the importance of optogenetic strategies to explore these.One big advantage of optogenetic stimulation is the cell type-specific expression providing not only the chance for pain-free stimulation but also to characterize the specific role of different cell types by cell type-specific (e.g. ventricular cardiomyocytes versus Purkinje fibers) stimulation as well Mullenbroich et al.). The review examines many of the novel techniques that optical physics have provided to extend the use of optical probes and actuators while also posing the next set of challenges to be addressed to extend further the applicability of these techniques. In this content, Jan Lebert and Jan Christoph present new algorithms for the analysis of voltage imaging with motion tracking stabilization to avoid the alterations of cardiac electrophysiology by contraction inhibitors with significant side effects . Finally, Philipp Sasse\u2019s group expanded the optogenetic toolbox for cardiac research demonstrating that the human coneopsin allows to control Gi signaling in embryonic stem cell derived cardiomyocytes . 
Thus, the three canonical G-protein pathways of the heart and the groups of Christina Sch\u00fcler and Leonardo Sacconi developed new methods and platforms for cardiac toxicity screening which is one of the evolving cardiac research fields in which the use of optogenetic stimulation is becoming more and more standard , whereas intact hearts from mice and even bigger animals have to be cleared for imaging of the cell composition and structure .Daniel Pijnappel\u2019s group characterized potential long term effects of optogenetic stimulation via channelrhodopsins (standard . NotablyIn conclusion, this special issue is covering the broad range of dye-based imaging and optogenetic applications in the heart and the advances made in each branch of the subject by new technical improvements and comprehensive reviews. We hope that we and all contributors are able to trigger further interest in and advance the use of optogenetic stimulation and imaging within the field of cardiac research."} +{"text": "Editorial on the Research TopicSurgery and COVID-19 in oncologic patients: What does the recent coronavirus pandemic taught us?The Coronavirus outbreak has recently shocked the world and the health systems as well, overwhelming most hospitals and departments. The need to give universal recommendations has become mandatory during this health emergency, however many countries have reacted without an agreement or consensus flow-charts on the strategies to be adopted. This helped the infection spread into a pandemic given Sars-Cov-2 high transmission rate. Two main critical issues seemed to hit hard most of the health systems: the number of healthcare personnel who contracted the infection and the availability of hospital beds, mainly in ICU. The need for beds for COVID-19 patients and the lack of medical staff has severely impacted on surgical departments causing the delay or even the cancellation of many operations during the different waves of the pandemic, including in the oncologic field.\u2022the impact of Covid-19 on tumor behavior;\u2022the effect of the delayed surgery on perioperative and long-term outcomes for most common tumors;\u2022optimal patients triage based on tumor stage;\u2022alternative therapeutic treatments involving oncologists and radiotherapists;\u2022optimization of patients\u2019 pathway during the hospital stay to prevent any contamination;\u2022resources redistribution.Although International Societies such as the Society of Surgical Oncology have published guidelines for non-emergency surgical procedures, the effect of the acceptable waiting time between the diagnosis and surgery has not yet been well defined and a possibility of a worsened clinical outcome is still unclear. It is known that due to immunosuppressive responses and pro-inflammatory cytokines release, surgery could lead to a high risk of COVID-19 during hospital stay . Patients with diagnostic breast cancer at different times of the pandemic were therefore forced to increase waiting times, as expected, in the peak phase and shorter delays in the plateau phase.In their manuscript In their research Carissimi and the surgical team of the S. Gerardo Hospital retrospectively analyzed data of their Hepato-Bilio-Pancreatic (HBP) mid-volume surgery center to look for any differences caused by the reorganization of the hospital system. In terms of length of hospital stay, morbidity and mortality, there were no differences between the groups of patients operated on in the period prior to the pandemic and during that. 
This was possible thanks to a careful separation of departments able to accommodate negative Covid patients and to keep the ward as such thanks to serial controls with protocol of testing spread all over the world. Despite the need to provide health personnel to cover shifts in wards of patients affected by Covid-19, the Monza Hospital has been able to remain a third-level center for oncological HPB pathology, thus creating safe departments for patients. However, the analysis inevitably showed an increase in tumor burden in patients operated on during the pandemic period, demonstrating a significant effect of the delay between the period of diagnosis, the surgical indication and the operation itself. In the same way, the follow-up and checkup times have obviously increased. Therefore, the significance of ordering clear separation to create safe pathways for these patients in need of surgery seems to be consistent.Xiahoao Zheng et al. the prolongation of the waiting time does not appear to have adversely affected the number and severity of short-term complications of patients treated with total gastrectomy in the National Cancer Center. Fragile patients such as those undergoing total gastrectomy could benefit from early surgery given the advanced degree of the disease, although an increase in waiting time does not appear to have affected short-term complications.As shown in the manuscript by Aramini et al. instead went to study the effect of Covid-19 infection on the tumor microenvironment in patients affected by the virus. As is now known, the Coronavirus infection prefers in its most severe forms an involvement of the cardio-pulmonary district with consequent multi-organ failure. The colleagues\u2019 study analyzed the effects of activating the macrophage-neutrophil cascade at the level of the tumor microenvironment in lung cancer. Many studies have already shown how a pro-cytokine cascade can influence a reactivation of the immune system at the level of tumor foci with consequent spreading of micrometastases at a local level and the activation of extracellular neutrophil traps has been shown to create a spread of premetastatic cells in the lung. The deeper analysis of these phenomena will help in the development of further targeted therapies, both for cancer prevention and for the treatment of patients with COVID-19 (Dr. COVID-19 .To conclude, the management of cancer patients in a pandemic context can be very challenging but thanks to the amount of data collected in the last 2 years, new strategies can be adopted to make the best use of resources. The road travelled during this pandemic has certainly put us to the test, but the efforts made seem to have paid off to get out of the dark period. First, a reorganization and the establishment of Covid-free hubs has been a successful strategy. As Anna M. Perrone and colleagues at the University of Bologna Hospital have shown, prioritization of oncological surgical care and the allocation of resources during a pandemic in COVID-19 free surgical hubs is an appropriate choice to guarantee oncological protocols and to obtain high-standard results. It must also be admitted that the pandemic has unmasked the weak side of many Health Systems, exposing gaps already present in the years. 
However, the pandemic and its teachings could represent the opportunity for an advantageous restart at the same time. A more careful organization and an effective and wise use of resources will certainly guarantee better treatment for this class of fragile patients and will allow us to bring more precise attitudes into clinical practice. As the main coauthor of the COVID-Surg Collaborative stated, \u201cguaranteeing the possibility of safely carrying out elective cancer surgery should be part of the health plans of each country to protect the health of the entire population\u201d. Indeed,"} +{"text": "Aim: the aim of this study is to assess and locate the Foramen of Huschke. Study design: anatomical. Material and Method: contrast materials (gutta-percha and barium sulfate) were used together with extraoral radiographs, namely panoramic, submental vertex and corrected sagittal linear temporomandibular joint tomograms, in four skulls in which the existence of the foramen of Huschke had been checked clinically. Results: the foramen of Huschke can be observed, using these radiographic techniques, in skulls prepared with contrast material. Known as the foramen of Huschke, the opening of the developing tympanic ring was first described by Emil Huschke, in 1889, as a structure present during embryological development in the tympanic portion of the temporal bone that normally closes by the age of 5 years. Its persistence in adult subjects is regarded as an anatomical anomaly and may be associated with conditions such as herniation of the temporomandibular joint (TMJ), as well as otological problems in the external acoustic canal. The foramen of Huschke is located on the anterior wall of the external acoustic canal, on the tympanic portion of the temporal bone, forming a communication between the external acoustic canal and the mandibular fossa. The purpose of the present study was to identify and locate the foramen of Huschke through three different extraoral radiographic techniques in four dry skulls and to assess which radiological techniques may be used to identify its persistence. To assess the presence of the foramen of Huschke in radiological exams, we used 4 dry human skulls: 3 from adult subjects with bilateral persistence of the foramen of Huschke and one from a subject aged about 4 years, in which the presence of the foramen was within the normal range.
The skulls were submitted to extraoral techniques of panoramic x-ray with OrtAfter the execution of the exams, we compared the images with and without contrast material.The results found after the corrected lateral linear tomography without contrast material showed the presence of radiolucent area of elliptical format .Figure 3Even using barium sulfate, we observed radiolucent image of elliptical format with radiopaque contour , and witThese images were located on the anterior wall of the external acoustic meatus on the tympanic portion of the temporal bone, presenting a communication between the external acoustic canal of the temporal bone tympanic portion, with the mandibular fossa of the temporal bone squamous portion.We observed that in the panoramic technique performed with barium sulfate, two radiopaque points presented communication between the external acoustic canal of the temporal bone tympanic portion, with the mandibular fossa of the temporal bone squamous portion .Figure 4In the submental vertex technique, performed without contrast material , we obseAs to execution of images with gutta-percha, we observed elliptical radiopaque image and the The characteristics of the obtained radiographic images were the same for the four studied skulls.After the study of radiological images of four dry skulls we found the same results in the four skulls, observing the presence of Foramen of Huschke in the linear tomography technique. In submental vertex and panoramic techniques it was less evident as a result of the overlapping of bone structures.Rosemberg and Graczik (1986) stated that the best radiological technique capable of identifying the structures that comprise the TMJ is corrected axial lateral tomogram, even though this radiological technique does not completely eliminate the overlapping of the TMJ region, because the structure that is located in the fulcrum of the rotation of the system will appear in more details and those that are below or above this point will seem to be blurred .Owing to the overlapping of bone structures in the TMJ region, Holmlund et al. (1986) reported difficulty in examining TMJ through conventional radiograms. Therefore, they suggested that CT scan was the preferred method to avoid overlapping in this region.Ali and Rubinstein (2000) detected in temporal bone CT scan a bone defect in the anterior aspect of the auditory canal, the defect known as Foramen of Huschke.As to comparison of images initially obtained, with and without contrast (gutta-percha and barium sulfate), we observed that images described in the three radiological techniques are compatible with the anatomical location of Foramen of Huschke.Foramen of Huschke could be observed after its evidence with contrast material in radiological images of 4 dry skulls using extraoral, panoramic, submental vertex (inverted Hirtz) and TMJ corrected lateral linear tomogram techniques.We concluded that among the used techniques, linear tomography proved to provide the best results and the literature recommends that CT scan is the best technique to visualize the Foramen of Huschke."} +{"text": "Wind turbines usually operate in harsh environments. The gearbox, the key component of the transmission chain in wind turbines, can easily be affected by multiple factors during the operation process and develop compound faults. Different types of faults can occur, coupled with each other and staggered interference. 
Thus, a challenge is to extract the fault characteristics from the composite fault signal to improve the reliability and the accuracy of compound fault diagnosis. To address the above problems, we propose a compound fault diagnosis method for wind turbine gearboxes based on multipoint optimal minimum entropy deconvolution adjusted (MOMEDA) and parallel parameter optimized resonant sparse decomposition (RSSD). Firstly, the MOMEDA is applied to the preprocess, setting the deconvolution period with different fault frequency types to eliminate the interference of the transmission path and environmental noise, while decoupling and separating the different types of single faults. Then, the RSSD method with parallel parameter optimization is applied for decomposing the preprocessed signal to obtain the low resonance components, further suppressing the interference components and enhancing the periodic fault characteristics. Finally, envelope demodulation of the enhanced signal is applied to extract the fault features and identify the different fault types. The effectiveness of the proposed method was verified using the actual data from the wind turbine gearbox. In addition, a comparison with some existing methods demonstrates the superiority of this method for decoupling composite fault characteristics. Wind turbines, gas turbines and other advanced equipment are used widely in modern industry. The gearbox, a key component in these devices, is prone to failure when running under severe operating conditions such as heavy loads, large temperature differences, corrosive media, and alternating loads ,2. As thFeature extraction aims at extracting feature information from the vibration signal to describe the operational status of the mechanical equipment. Over the past two decades, many scholars have explored the rotating machinery fault diagnosis field and introduced many diagnostic theories and methods. For example, empirical mode decomposition (EMD) ,4, wavelThe vibration signal collected by the sensor is regarded as a convolutional mixture of different excitation sources, fault sources and transmission channels, so the recovery of the fault signal can be considered a deconvolution process. In 2007, Endo et al. successfIn 2011, Selesnick proposed resonance-based sparse signal decomposition (RSSD) . It is dm (TQWT) accordinis (MCA) is emploelection ,26 make Motivated by the above discussion, a novel approach to wind turbine gearbox composite fault diagnosis based on MOMEDA and parallel parameter optimized RSSD is proposed in this article. Firstly, MOMEDA is used to deconvolute the signal to extract multiple periodic faults in the composite fault vibration signal. While achieving decoupling separation of the multi-fault, it effectively eliminates the influence of the transmission channels and external excitation sources. Secondly, the RSSD with parallel parameter optimized constructs the wavelet basis function bank to match the fault characteristics, which enables the interference components to be efficiently suppressed in the decoupled fault signal and the weak fault pulse to be enhanced. The superiority and effectiveness of the proposed approach are verified using the measured signals of the wind turbine gearbox.The main structure of this article is composed as follows: MOMEDA utilizes a target vector to define the position and weight of the pulse sequence to be solved and applies multi-point kurtosis values to determine the period of fault occurrence. 
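Because the deconvolution period is taken directly from a known fault characteristic frequency, that preprocessing choice reduces to simple arithmetic; the short sketch below uses an assumed sampling rate and assumed fault frequencies purely for illustration, not the test-rig values reported later.

```python
# Convert characteristic fault frequencies into MOMEDA deconvolution periods (in samples).
fs = 40_000.0                                   # sampling rate [Hz] - assumed for illustration
fault_freqs_hz = {
    "gear fault (HSS pinion)":   540.0,         # assumed
    "bearing inner-race fault":  295.0,         # assumed
    "bearing roller fault":      190.0,         # assumed
}

for name, f in fault_freqs_hz.items():
    period_samples = fs / f                     # one impulse expected every fs/f samples
    print(f"{name}: f = {f:6.1f} Hz -> deconvolution period T = {period_samples:6.1f} samples")
```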
Multiple pulse target identification and deconvolution algorithms are implemented at determined locations to obtain continuous periodic pulse components.When a rotating machine fails, the impulse signal x is recovered from the vibration signal y by an optimal filter. The MOMEDA solving process for the optimal filter can transform into a search for the maximum value of the multipoint D-parameter, using the multipoint D-parameter to reflect the shock characteristics of the filtered signal, and the related expressions are defined as follows:The impulse signal The extreme value of Equation (3) is obtained by taking the derivative of the filter coefficient to zero, we can obtain:Owing to Since the multiples of When performing target location and fault detection with MOMEDA, a chain of impulses with fault period MOMEDA introduces Multipoint Kurtosis (MK) as a measure of fault characteristics based on kurtosis. When the output result TQWT breaks through the disadvantage of the constant quality factor of the traditional wavelet transform and makes the selection of basis function more flexible by selecting quality factor To achieve perfect reconstruction, the frequency response function The high resonance components Achieving sparse representation of different components of the signal by MCA. The objective function of extracting the different components is expressed as follows:However, the vibration signal inevitably has interference by background noise in actual works, and the signal separation will be transformed from Equation (19) as follow:Whale Optimization Algorithm (WOA) is a novel intelligent optimization algorithm proposed by Mirjalili in 2016 . The opt(1) Surrounding the preyAssuming that the population size is (2) Bubbling net attackShrinkage encircling: The convergence factor Spiral updating position: whales spit out bubbles of different sizes for feeding while swimming towards the best position in a spiral posture. The mathematical model isDuring the process of prey encirclement, the whales shrink to surround and spiral forward simultaneously, and the probability of occurrence is 50% for either mode of travel.(3) Searching for preyThe whales will stop approaching the best whale individual in this stage and instead update their position by randomly searching a large area to approach any whale individual. In this case, the value of Kurtosis is a 4th order statistic that reflects the sharpness of the waveform for the random variable and is sensitive to the impulse component of the signal, which is defined as:Information entropy represents the uncertainty of the source information and the randomness of the event occurrence, and its value is only related to the probability distribution of the variables. Suppose a source entropy . The fauIn this article, we combine kurtosis and envelope spectral entropy, kurtosis as an indicator of time-domain feature can describe the impulsiveness of the signal, and envelope spectral entropy as an indicator of frequency-domain feature can represent the strength of periodic pulses. A composite indicator is constructed to reflect the time-frequency domain properties and the expression is as follows:This indicator possesses the advantages of kurtosis and envelope spectral entropy, and it can measure the impulsivity and periodicity of the signal at the same time. 
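The exact form of the composite indicator is not given here, so the sketch below uses an assumed stand-in for demonstration only: kurtosis (impulsiveness) divided by envelope spectral entropy (disorder of the envelope spectrum), which moves in the direction the text describes.

```python
import numpy as np
from scipy.signal import hilbert
from scipy.stats import kurtosis

def envelope_spectral_entropy(x, eps=1e-12):
    """Shannon entropy of the normalized envelope spectrum of x."""
    envelope = np.abs(hilbert(x))                         # Hilbert envelope
    spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
    p = spectrum / (spectrum.sum() + eps)                 # treat the spectrum as a distribution
    return float(-np.sum(p * np.log(p + eps)))

def composite_indicator(x):
    """Illustrative time-frequency indicator: large when the signal contains strong,
    periodic impulses (high kurtosis, low envelope spectral entropy). Assumed form."""
    return kurtosis(x, fisher=False) / envelope_spectral_entropy(x)

# Toy check: periodic impulses buried in noise score higher than pure noise.
rng = np.random.default_rng(1)
fs = 20_000
t = np.arange(0, 1.0, 1.0 / fs)
noise = 0.5 * rng.standard_normal(t.size)
impulses = np.zeros_like(t)
impulses[::2000] = 5.0                                    # a "fault" impulse every 0.1 s
faulty = noise + np.convolve(impulses, np.exp(-200.0 * t[:200]), mode="same")
print(composite_indicator(noise), composite_indicator(faulty))
```

In the actual procedure this score would be evaluated on the low resonance component returned by RSSD for each candidate pair of quality factors.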
The more prominent the impulsivity and periodicity of the signal, the larger the value of the indicator.(1)The parameters of the algorithm are determined: population size (2)Population initialization: The optimal parameters should be bounded, and the correlation between quality factors should be as low as possible. The value range of takes as ,15, the takes as , and the(3)The objective function value is calculated: the composite index constructed by kurtosis and envelope spectral entropy serves as the objective function. The objective function value of an individual is calculated and the current optimal individual is determined.(4)The main loop of the algorithm is entered: if (5)Evaluating the whole whale population and iterative optimization until the algorithm converges, it obtains the optimal objective function value Overall, the parallel parameter optimized RSSD based on WOA as proposed in this article implements the process as follows:The method proposed in this article is suitable to separate and extract the compound faults of the gear faults and the bearing faults from the wind turbine gearbox. Firstly, the input vibration signal is pre-processed, the deconvolution period is set according to the fault frequency of the damaged part, and the vibration signal is decoupled into a single fault by MOMEDA. Secondly, the low resonance component is decomposed from the pre-processed signal with optimized RSSD. Finally, the envelope analysis of the low-resonance components is applied to extract the fault characteristic frequency. The flowchart of the method is shown in In this paper, 750 kW wind turbine gearbox data provided by The National Renewable Energy Laboratory (NREL) were useThe gearbox experienced two oil loss events during the actual test, where it damaged its internal bearings and gear components. The damaged gearbox was disassembled for fault analysis, and it was found that the large and small gears of the high-speed gear pair in the gearbox were seriously scuffed, and the inner ring races of the bearings and the two ends of the rolling bodies were overheated. The failure of each component is shown in The configuration parameters of the gearbox are listed in The deconvolution period The deconvolution period The deconvolution period The deconvolution period In summary, the analysis results show that the proposed method can not only successfully decouple and separate various types of fault characteristics from the composite fault signal, but also has a remarkable suppression effect for interferences, further highlighting the periodicity and impulsiveness of fault characteristics, which helps to improve the accuracy and reliability of wind turbine gearbox composite fault diagnosis.The superiority and effectiveness of the proposed method are verified through a comparative analysis using the two methods. The MCKD algorithm in Reference and the The two methods mentioned above are used to decouple the HSS pinion fault from the composite fault signal and analyze and process it. 
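The numbered steps above compress into a short search loop. The sketch below is a generic whale-optimization-style search over the two quality factors; the bounds, population size and the dummy objective are placeholders (in the real procedure the objective would run RSSD and score the low resonance component with the composite indicator).

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(q):
    """Placeholder for the composite indicator of the low-resonance component obtained
    with quality factors q = (Q1, Q2); replace with the real RSSD-based score."""
    return -((q[0] - 4.0) ** 2 + (q[1] - 1.5) ** 2)       # dummy peak at (4, 1.5)

lb, ub = np.array([2.0, 1.0]), np.array([8.0, 3.0])       # assumed bounds on (Q1, Q2)
n_whales, n_iter = 20, 100

X = lb + rng.random((n_whales, 2)) * (ub - lb)            # initialize population
fitness = np.array([objective(x) for x in X])
best = X[np.argmax(fitness)].copy()

for it in range(n_iter):
    a = 2.0 * (1.0 - it / n_iter)                         # convergence factor: 2 -> 0
    for i in range(n_whales):
        r1, r2, p = rng.random(3)
        A, C = 2 * a * r1 - a, 2 * r2
        if p < 0.5:
            if abs(A) < 1:                                # shrinking encirclement of the best whale
                X[i] = best - A * np.abs(C * best - X[i])
            else:                                         # exploration around a random whale
                rand = X[rng.integers(n_whales)]
                X[i] = rand - A * np.abs(C * rand - X[i])
        else:                                             # spiral update towards the best whale
            D = np.abs(best - X[i])
            l = rng.uniform(-1, 1)
            X[i] = D * np.exp(l) * np.cos(2 * np.pi * l) + best
        X[i] = np.clip(X[i], lb, ub)                      # keep quality factors inside bounds
    fitness = np.array([objective(x) for x in X])
    if fitness.max() > objective(best):
        best = X[np.argmax(fitness)].copy()

print("best (Q1, Q2):", best)
```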
The parameters are set as follows: number M = 5 is shifted, length L = 500 is filtered, the above calculation results of the fault frequency are introduced, and the analysis results are shown in The envelope spectrum analysis shown in The same procedure is applied for the IMS gear, and the parameters obtained with the optimized RSSD are: The bearing inner ring fault signals and bearing rollers fault signals are directly processed with the optimized parameters of the method proposed in this paper, so as to imitate the process of artificially selecting parameters in traditional resonance decomposition. The results are shown in To evaluate the performance of extracting fault features quantitatively, the fault feature coefficient (FFC) is introA compound fault diagnosis method for wind turbine gearboxes based on MOMEDA and the parallel parameter optimized RSSD is proposed in this study. MOMEDA obtains the deconvolution period based on the fault frequency and obtains periodic continuous pulses in a non-iterative deconvolution manner, thus decoupling and separating the compound fault vibration signals. However, some weak fault features are easily buried in transmission channels and background noise and using MOMEDA alone is not immune to dealing with multiple faults. Therefore, its combination with RSSD for parallel parameter optimization is applied to suppress disturbances and enhance the relevant fault characteristics. The parallel parameter optimized RSSD takes the composite indicator with low resonance components as the objective function and adaptively obtains the best quality factor"} +{"text": "This Quality improvement project will look into the data collected over the same period in 2020 and 2021 to highlight patterns and changes as a result of the COVID-19 pandemic, and how to improve the quality of service provided by the team.The record of a total of 349 patients was accessed from the Alliance team spreadsheet and patient electronic records (Rio) between September and November in 2020 and 2021.All patients referred to the teamAll patients managed by the teamPatients referred between September and November 2020Patients referred between September and November 2021The inclusion criteria include:Presenting complaintDemographics- gender and raceSource of referralOutcome of referralTimeline of first contact after referralData collected include:The overall number of referrals between September and November 2020 was more than referrals over the same time period in 2021; 188 patients in 2020 and 161 in 2021Of the 188 referred in the 2020 audited period, 55%(102) were from minority ethnic groups compared to 50%(80) in the 2021 audited period. So the number and proportion of minorities requiring mental health support rose due to the impact of COVID pandemic infections, restrictions, and lockdowns.In 2020, the proportion of male patients was 26%(49) compared to 18%(30) in 2021. This is important because the majority of our patients are females which implies that the COVID pandemic had a significant effect on the entire population leading to more male patient referrals.The overall number of patients that presented with self-harm was greater in 2020 than in the 2021 period of audit.The overall number of patients that presented with anxiety was also greater in 2020 than in the 2021 period of audit.Of the 188 patients referred between September to November 2020, 58% (109) of them were seen within 24 hours of referral compared to 61% (99) in 2021. 
In the 2021 period, the restrictions have stopped and it has become far easier to carry out assessments at home and school while using the necessary protective gear.It was noticed that there was a lot of telephone support in 2020 but none in 2021. The majority of these patients were those who were already known to the service and were being supported but deteriorated mentally during the peak of the pandemic.There was a lot of referral from the single point of access (SPA) in between September and November 2020 while there was none over the same time period in 2021. This could have resulted from another impact of the pandemic when a lot of service providers were off sick and their patients could not reach them directly so they opted to go through SPA. Some new referrals also came this way.It is also noteworthy that 59% (112) of patients seen in the 2020 audited period were already known to the service while 54%(88) seen in 2021 were known. This implies that a lot of our patients deteriorated due to the pandemicWe also had more new referrals in 2021 than in 2020 for the same audited period.Six percent of the 188 patients seen 2020 audited period had telephone support while none did in 2021. Since all restrictions were lifted in July 2021, the service has opted for a more conventional approach of patient assessment which is face to face especially when expedient.Fifty-two percent (85) of 161 patients seen in the 2021 audited period were signposted to another service while 44% (72) of 188 seen in 2020 were signposted.This audit has proven that not only did the pandemic affect the overall volume of patients seen, but it also increased the proportion of male patients seen and the relative proportion of minority ethnic groups that used the service.The pandemic and government policies also influenced how patients were assessed seeing how 2020 had a lot of telephone support.It's impressive to know that the team managed to cope in these challenging periods without compromising the quality and standard of care as well as leaving behind an up to date medical records making this audit possible and easyCompleting annual audits on the pattern of clinical activitiesContinued review of quality and consistency of data collectionTo consider an alternative method for data collection to minimize the risk of human error.Regular training sessions for mental health crisis team in keeping with changes to mental health presentations during the COVID Pandemic.To review data collected and expand on the information collected to include gender and ethnicityImportant Recommendations includes:"} +{"text": "Neuronal Ceroid Lipofuscinosis: a Multidisciplinary Update. Both clinical and research issues have been addressed in this collection of articles. The first paper provides a broad introduction and subsequent articles cover epidemiology and genetics, diagnosis, natural history studies, treatment and ethical implications of novel therapies, cardiac involvement in the later stages of disease and the underlying pathological mechanisms.Eleven papers and fifty-one authors from seven countries have contributed to the Research Topic Simonati and Williams). Following a brief historical survey, a clinically-oriented approach was used to describe how the early symptoms and signs represent topographical signatures of the underlying brain dysfunction and may provide clues helping clinicians to reach a conclusive NCL diagnosis rapidly. 
The paper goes on to document advances in NCL research and the contributions of different experimental models to enhance knowledge of the pathogenic mechanisms underlying cellular pathology in this group of diseases. Lastly, translation of experimental data into novel therapeutic approaches and the importance of symptomatic treatments, which remain the main available therapeutic approaches, were outlined.The state-of-art in the field of childhood NCLs was described from a number of perspectives in the first review paper of the series . The authors have stressed that synergy between health providers, parent support organizations and the pharmaceutical industry have accelerated the use of modern diagnostic procedures.The world-wide distribution of NCL was emphasized by the retrospective epidemiological study from South America and the Caribbean region, in which CLN2, CLN6, and CLN3 disorders were identified as the most common NCL types in those regions (Gardner and Mole). Since the discovery of the first causative genes, more than 530 mutations have been identified across 13 NCL genes. Increasing numbers of variant disease phenotypes are being described. Identification of phenotypic heterogeneity combined with the genetic background of each patient is necessary in order to facilitate individually tailored precision medicine in order to modify disease progression in the approaching genomic medicine era.The significance of the advances in genetic studies in NCL was discussed in the review article by Gardner and Mole which focuses on the genetic basis of phenotypic heterogeneity and with ocular enzyme replacement therapies (CLN2 and CLN10 mouse models), has led to a clinical trial enrolling CLN2 patients to test the efficacy of intravitreal ERT. The long-term effects of these therapeutic interventions remain to be evaluated.The focus of a mini review by Kohlsch\u00fctter). He identifies two main topics, the first relates to the care of individual patients affected with dementia at a young age, the use of life-prolonging measures and the planning for the end of life, the second refers to new experimental treatments and the awareness that such approaches carry the risk of prolonged survival with poor quality of life. The paper gives examples experienced by the author which offer insights for the \u201ccritical thinking\u201d of readers. The issues encountered in caring for patients affected by NCL, but may well be common to other rare neurodegenerative diseases of childhood.Ethical issues in care and treatment are the topic of a paper which reflects the long personal clinical experience in the field of this author . Authors outline how EEG and (to a less extent) evoked potentials can promptclinicians to obtain a molecular diagnosis in the early phase of any NCL form, which will help to direct patients to appropriate targeted treatments (when available) efficiently.The importance of neurophysiological tools to describe disease evolution and supporting early diagnosis of NCL patients was reviewed through a careful analysis of their characteristics in several NCL types . It used an enzymatic assay of TPP1 activity in dry blood spots, carried out through pediatricians. 
Authors describe the test as easy to perform, inexpensive and reliable and conclude that such a test may contribute to early delivery of ERT in this condition.Reaching an early diagnosis was the aim of a nationwide screening project in Spain amongst children whose early clinical features were consistent with CLN2 .The next two articles concern cardiac involvement in CLN3 disease. In their report describing a case series of six, Takahashi et al., describe the current knowledge on the role of the different glia components in brain homeostasis. They go on to focus on the most up-to-date understanding of glial pathologies and their contribution to the pathogenesis of NCLs: they highlight some of the associated challenges that require further research as obtained by their studies using genetically modified mouse models. The emerging evidence of glial dysfunction questions the traditional \u201cneuron-centric\u201d view of NCLs, and would suggest that directly targeting glia in addition to neurons could lead to better therapeutic outcomes.In the only review article related to experimental models of NCLs, In summary, this series of articles is drawn from world experts in NCL. It brings together basic science and new clinical knowledge, while considering the ethical implications of recent progress on individual patients, families and their physicians and clinicians.All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.This work was funded by the Italian Ministry for Universities and Research (MUR) and Fondazione Mariani to ASi is acknowledged.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher."} +{"text": "In the process of top coal caving, coal gangue particles may impact on various parts of the hydraulic support. However, at present, the contact mechanism between coal gangue and hydraulic support is not entirely clear. Therefore, this paper first constructed the accurate mathematical model of the hydraulic cylinder equivalent spring stiffness forming by the equivalent series of different parts of emulsion and hydraulic cylinder, and then built the mesh model of the coal gangue particles and the support\u2019s force transmission components; on this basis, the rigid\u2013flexible coupling impact contact dynamic model between coal gangue and hydraulic support was established. After deducing contact parameters and setting impact mode, contact simulations were carried out for coal particles impacting at the different parts of the support and coal/gangue particles impacting at the same component of the support, and the contact response difference in the support induced by the difference in impacted component and coal/gangue properties was compared and studied. The results show that the number of collisions, contact force, velocity and acceleration of impacted part are different when the same single coal particle impact different parts of the support. 
Various contact responses during gangue impact are more than 40% larger than that of coal, and the difference ratio can even reach 190%. Top coal caving mining is the important mining method for thick and extra-thick coal seams: in the coal dropping stage of the top coal caving, the hydraulic support is parceled in the floor rock and coal gangue granule space body ,4,5,6,7.In the early stage, many studies have been carried out on the working characteristics of the hydraulic support and coal gangue recognition. Zhang et al. studied In top coal caving, the large number of coal gangue particles and the distribution characteristics of drawing space lead to the contact between coal gangue and various parts of the hydraulic support. Contact behavior between coal gangue particles and the hydraulic support involves the evolution of contact state and the real-time transmission of force. Overall, due to the lack of research method that can accurately describe the whole contact process between particles and hydraulic support, the precise contact characteristics and contact difference characteristics between coal gangue and the hydraulic support are not completely clear, in particular, the contact characteristics and contact difference between coal gangue and the different parts of the hydraulic support have not been studied. As a result, the selection of coal gangue recognition media lacks the theoretical basis in the research of coal gangue recognition technology. In view of the existing problems and deficiencies in the present studies and in order to further clarify the contact response characteristics and the contact response differences law between coal gangue and the hydraulic support, this paper proposed the idea of quantitative research, and the impact contact behavior between single particle coal gangue and the hydraulic support are taken as the research target. A rigid\u2013flexible coupling impact contact simulation model between coal gangue and the hydraulic support is proposed to study the system dynamic response. For this purpose, an accurate model of the hydraulic cylinder liquid stiffness series by five different parts stiffness is firstly established. The rigid\u2013flexible coupling impact contact simulation model between the coal gangue particle and the hydraulic support is established by combining the mesh model of the particle and the main force transmission components of the hydraulic support as well as the multi-body rigid dynamics simulation model of the hydraulic support. On the basis of determining the contact parameters and impact modes, the contact dynamics simulation analysis will be carried out when coal gangue impacts the different components of the hydraulic support with the same height and when the same size coal/gangue impacts the same position of the hydraulic support, respectively. Impact contact response characteristics when coal gangue impacts the different parts of the hydraulic support will be studied. 
Through comparative analysis, the difference rule of system impact contact responses caused by the difference in the impacted component and the particle material property will be determined, so as to explore the theoretical basis for the selection of coal gangue recognition media.Our contributions in this paper are fourfold. (1) Considering the compression elasticity of the emulsified liquid, the piston rod, the tail of the piston rod and the bottom of the cylinder, together with the circumferential extension stiffness of the cylinder, a more accurate equivalent spring stiffness mathematical model of the hydraulic cylinder is established. (2) The traditional scheme of studying the interaction between coal rock and the hydraulic support by replacing the contact process between coal gangue and the hydraulic support with a force load is abandoned, and a quantitative research method is put forward. Through the establishment of the rigid\u2013flexible coupling impact contact dynamics simulation model between coal gangue and the hydraulic support, the contact response between particles and the hydraulic support is studied, which provides a direct and effective research method for the interaction between coal gangue or surrounding rock and the hydraulic support. (3) Through the contact characteristics analysis between coal particles and the different parts of the hydraulic support, the variation rule of the system contact response caused by the change in the impact position is obtained. (4) Through the contact response study of the directly contacted parts, the indirectly related parts and the part connection units after coal gangue impact, the contact difference characteristics between coal gangue and the hydraulic support are clarified, and the available parameters for coal gangue recognition are determined accordingly.In the hydraulic system of the hydraulic support, the props and the tail beam jack (collectively referred to as the hydraulic cylinder) are solid\u2013liquid coupled compressible devices consisting of a steel structure and high-pressure emulsion, and the hydraulic fluid in the hydraulic cylinder cavity is compressible. According to previous studies, when dynamic software is used to analyze the dynamic characteristics of the hydraulic support, an equivalent spring damping module is usually used to replace the hydraulic cylinder.In order to obtain an accurate impact contact dynamic response between coal gangue and the hydraulic support, the equivalent stiffness of the hydraulic cylinder in the working process should first be determined accurately when using spring damping to analyze the dynamic characteristics of the hydraulic support. The prop and the tail beam jack in this paper are all single telescopic hydraulic cylinders. Taking the tail beam jack as an example, the following stiffness components are considered: the equivalent compression stiffness of the emulsified liquid; the circumferential extension stiffness of the cylinder body; the axial compressive stiffness of the cylinder bottom; the equivalent compression stiffness of the piston rod; and the equivalent compression stiffness of the piston rod tail. After each part is connected in series, the equivalent spring stiffness of the hydraulic cylinder is obtained.The purpose of this paper is to determine the contact response differences between coal gangue and the hydraulic support through the impact contact behavior analysis of coal gangue and the hydraulic support, and to reveal the impact contact characteristics and the response difference law when coal gangue impacts the different parts of the support.
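To make the series combination of cylinder stiffnesses concrete, the following minimal Python sketch combines assumed component stiffnesses into one equivalent value; the function name and the numerical inputs are illustrative placeholders, not values taken from the paper.

```python
def equivalent_series_stiffness(stiffnesses):
    """Combine component stiffnesses connected in series.

    For springs in series the compliances (1/k) add, so
    k_eq = 1 / sum(1/k_i).
    """
    if any(k <= 0 for k in stiffnesses):
        raise ValueError("all stiffnesses must be positive")
    return 1.0 / sum(1.0 / k for k in stiffnesses)

# Illustrative (not measured) component stiffnesses in N/m:
# emulsified liquid column, cylinder wall (circumferential),
# cylinder bottom, piston rod, piston rod tail.
components = [9.6e7, 5.0e8, 8.0e8, 6.0e8, 7.0e8]
print(f"equivalent cylinder stiffness: {equivalent_series_stiffness(components):.3e} N/m")
```

Because the liquid column is by far the most compliant element, the equivalent stiffness is dominated by it, which is why an accurate model of the emulsion compressibility matters most.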
Due to the large number and complex shape of the underground coal gangue particles, as well as the existence of anisotropy, direct theoretical or simulation modeling is difficult to achieve. Moreover, the contact position of irregularly shaped particles will cause changes in the equivalent contact radius and the contact responses, so it is impossible to study the influence parameters and the changing law of the contact characteristics qualitatively by using irregular shapes. In order to conduct a qualitative study and reveal the coal gangue contact difference characteristics caused by their own attribute differences, this paper applied a shape regularization to coal and gangue, uniformly treating coal and gangue particles as spheres, ignoring the plastic deformation and brittle damage of particles, and ignoring the influence of rock micro-cracks on contact behavior.In order to improve the accuracy of the simulation results, the rigid\u2013flexible coupling impact contact dynamic model between coal gangue and the hydraulic support is established to study the impact behavior. The particles and the main impacted parts of the top coal caving hydraulic support, such as the top beam, the shield beam and the tail beam, are meshed first. After the 3D models of the coal gangue particles and the top coal caving hydraulic support are introduced into Adams, the meshed particle, top beam, shield beam and tail beam files are imported, respectively, to replace the corresponding solid files. The rigid areas of the pin holes in the top beam, the shield beam, the tail beam, the front and rear connecting rods and the base are connected by revolute pairs. The props and the tail beam jack are equivalently replaced by spring damping modules. The rigid insert plate and the insert plate jack are fixed on the tail beam, and the rigid base is fixed in the space coordinate system. The impact position of the coal gangue particles can be adjusted according to requirements, so as to realize the impact of coal gangue on different parts or positions of the support. The completed rigid\u2013flexible impact dynamic model of coal gangue and the hydraulic support is thereby obtained.The radius of the coal gangue particles is 2.5 \u00d7 10^\u22122 m. The contact stiffness between coal gangue and the hydraulic support can be calculated by the contact stiffness calculation formula [47,48,49]; taking the equivalent elastic modulus into account, the contact stiffness values obtained according to Equation (11) are on the order of 9.6 \u00d7 10^7 N/m and 1.1 \u00d7 10^8 N/m.The props and the tail beam jacks are single telescopic hydraulic cylinders, and the liquid column height in each cylinder is associated with the position of the hydraulic support.
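As a rough illustration of the kind of contact law referred to above (a Hertz-type elastic term plus a damping term), the sketch below computes a sphere-on-plate contact stiffness and a nonlinear spring-damper normal force. This is a generic textbook formulation with placeholder material data; it is not the paper's Equation (11) or Equation (13).

```python
import math

def hertz_stiffness(radius, e1, nu1, e2, nu2):
    """Generalized Hertz stiffness K for a sphere pressed on a flat plate:
    1/E* = (1 - nu1^2)/E1 + (1 - nu2^2)/E2,  K = (4/3) * E* * sqrt(R)."""
    e_star = 1.0 / ((1.0 - nu1 ** 2) / e1 + (1.0 - nu2 ** 2) / e2)
    return 4.0 / 3.0 * e_star * math.sqrt(radius)

def normal_contact_force(delta, delta_dot, k, c, exponent=1.5):
    """Nonlinear spring-damper contact force: elastic Hertz term plus a
    dissipative term, active only while the bodies interpenetrate."""
    if delta <= 0.0:  # no penetration, no force
        return 0.0
    return k * delta ** exponent + c * delta_dot

# Placeholder material data (not taken from the paper): a 0.025 m particle
# against a steel plate; coal is assumed much softer than gangue.
k_coal = hertz_stiffness(0.025, 3.0e9, 0.30, 2.1e11, 0.28)
k_gangue = hertz_stiffness(0.025, 2.0e10, 0.25, 2.1e11, 0.28)
print(f"coal-steel stiffness:   {k_coal:.2e}")
print(f"gangue-steel stiffness: {k_gangue:.2e}")
print(f"force at 0.1 mm penetration: {normal_contact_force(1e-4, 0.5, k_gangue, 80.0):.1f} N")
```

In multibody codes such as Adams, the damping contribution is normally ramped in over a small penetration depth (a step function) to avoid a force jump at first contact; the constant damping coefficient used here is a simplification.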
When the working height of the hydraulic support is 2.5 m and the coal dropping angle of the tail beam is 45\u00b0, the sizes of the props and the tail beam jack are determined accordingly. The essence of the impact contact behavior between the coal gangue particle and the hydraulic support is the nonlinear contact between the particle and a metal plate plane, so the contact model based on nonlinear spring damping in Adams is applied to its definition; the normal contact force can be described as the combination of the elastic contact force based on Hertz contact theory and the dissipative force of system damping, as shown in Equation (13) [51,52]. (1) When the same coal particle impacts the top beam, the shield beam and the tail beam, respectively, from the same height of 0.8 m in a free-falling way, the coal particle collides with the top beam continuously more than 10 times, collides with the shield beam twice and then separates, eventually coming into contact with the tail beam through a rebound impact, and collides with the tail beam only once before separating. It follows that a change in the impacted component will lead to a change in the number of collisions between the particle and the hydraulic support. (2) When the same coal particle impacts the top beam, the shield beam and the tail beam, respectively, from the same height of 0.8 m in a free-falling way, the change in the impacted component will lead to a change in the contact response values, and the relationships between the contact responses of the coal particle and the impacted component are F(top beam) > F(shield beam) > F(tail beam), v(top beam) < v(shield beam) < v(tail beam), and a(top beam) < a(tail beam) < a(shield beam). It can be seen that the size relationships between the contact responses produced by the same particle impacting different parts of the same support are not the same. (3) When coal and gangue particles with the same radius impact the same component of the hydraulic support from the same height, all the contact response amplitudes of the directly contacted component, the indirectly associated components and the force transmission hinge points during gangue impact are larger than those of coal. (4) When coal and gangue particles with the same radius impact the top beam from the same height, the contact response difference ratios of contact force, velocity and acceleration are above 0.8, and the difference ratios of contact force and acceleration are even above 1.7. When a single particle impacts the shield beam, the contact response difference ratios of contact force, velocity and acceleration are above 1.2, and the difference ratios of contact force and acceleration are even above 1.9. When the particles impact the tail beam, the contact response difference ratios of contact force, velocity and acceleration are above 0.8, and the difference ratios of contact force and acceleration are even above 1.3. Therefore, when coal or gangue particles with the same size impact the same part of the hydraulic support, the contact responses caused by coal and gangue are obviously different. (5) When coal and gangue particles with the same radius impact the same component of the hydraulic support from the same height, the associated responses of the non-directly contacting parts, such as the velocity, acceleration and spring force, are also significantly different, with difference ratios above 0.8.
The active forces at the hinge points when gangue impacts the hydraulic support are greater than those of coal, with a difference ratio above 0.7; these differences are also significant. (6) When an impact occurs on the hydraulic support, the direct contact response, the indirect associated contact response and the contact response of the force transmission hinge points caused by coal or gangue are all significantly different. Therefore, it is feasible to identify coal and gangue based on the impact contact response. (7) When coal and gangue particles with the same radius impact the tail beam from the same height, not only are the contact force, velocity and acceleration of the tail beam obviously different, but the response force difference ratio of the tail beam jacks supporting the tail beam and the force difference ratio of the hinge point between the shield beam and the tail beam are both more than 0.8. Therefore, when conducting coal gangue recognition research, the vibration responses of the tail beam, together with the contact responses of the tail beam jack and the connecting pin of the tail beam, can be used as coal gangue recognition parameters.In order to further clarify the contact action law and the contact difference characteristics between the coal or gangue particle and the hydraulic support, and to further lay a foundation for studying the interaction characteristics between coal gangue particles and the hydraulic support in the drawing stage of top coal caving mining, the impact contact behavior between a single coal/gangue particle and the different parts of the hydraulic support is taken as the research object in this paper. Based on the construction of an accurate equivalent spring stiffness mathematical model of the hydraulic cylinder, the rigid\u2013flexible coupling impact contact dynamic simulation model between a single coal gangue particle and the hydraulic support was established by gridding the coal gangue particles and the main force transmission parts of the hydraulic support, and the simulation study on the impact contact behavior between coal gangue and the hydraulic support was carried out; the conclusions listed above were drawn.The research content of this paper reveals the difference rule of the contact response induced by the difference in impacted location and in the impacting material properties during the contact between coal/gangue particles and the hydraulic support, thereby providing theoretical support for coal gangue recognition in top coal caving based on the contact response difference."} +{"text": "We are delighted to announce the launch of Fibrosis, a new section of the Journal of Translational Medicine, with the purpose of gathering current high-quality research to better understand the process of normal tissue repair as well as the pathogenetic mechanisms responsible for the onset and progression of tissue fibrosis that leads to organ dysfunction and failure. Fibrosis is the common denominator in a variety of chronic diseases including idiopathic pulmonary fibrosis, liver cirrhosis, and ulcerative colitis, among others. These fibrotic diseases affect a vast number of people across the world, significantly impacting the quality of their life and increasing health care costs.Regardless of the organ affected, the dysfunction occurs following the excessive production and deposition of collagen and extracellular matrix by activated myofibroblasts, altering the architecture and function of the organ.
The pathogenesis of fibrosis is complex and not fully understood; therefore, the development of more effective therapeutic options has been challenging. There are several cell types and signaling pathways responsible for the development of lung fibrosis following repetitive exposure of the alveolar epithelial cells to a variety of injurious stimuli, in combination with individual genetic, epigenetic, and immunological characteristics or predisposition.Liver cirrhosis is an organ-specific fibrosis that compromises important metabolic functions, ultimately leading to multisystemic complications and, historically, death.The goal is early diagnosis and prevention; however, most of these diseases manifest clinically when the affected organ is significantly damaged by fibrotic tissue. The time between initiation of fibrogenesis and symptom onset varies; during the earliest stages of illness, it is difficult to prognosticate the disease course.Fibrosis has also been further implicated in the proliferation and migration of cancer cells while creating conditions that compromise anti-tumor immunity and treatment response. The collective scientific effort is focused on understanding the interaction of profibrotic molecules and cells with a variety of cancer types, aiming to develop anti-fibrotic agents that can also prevent and treat malignancies.Fibrosis will be focused on high-quality research from basic science to clinical trials. The expert members of our Editorial Board are committed to ensuring a productive scientific discussion through the rapid publication of internationally competitive and high-level peer-reviewed articles. We look forward to receiving your thought-provoking contributions to Fibrosis.The improved understanding of how fibrosis develops, causes morbidity, and promotes cancer is the foundation for making advances in diagnosis, treatment, and ultimately prevention."} +{"text": "This is a short notice announcing the publication of an e-book (\"Drosophila melanogaster Mutations\") by the Brazilian Society of Genetics (SBG) and reviewing it. The publication is a direct descendent of the computer program \"Drosophila Viewer\". The FlyBase site, sponsored by the University of Indiana at Bloomington, Indiana, houses an incredible number of files dedicated to the detailed, in-depth biology of Drosophila, including the excellent collection of photographs of classical phenotypic mutants. At approximately the same time the program \"Drosophila Viewer\" was produced, the e-book was prepared by working on all the original drawings used in the program and adding material obtained with permission from other sources, especially from Professors Thomas C. Kaufman and others, as well as their availability in reference Drosophila research and teaching centers. Each presentation consists of a box containing illustrations and photos of mutants. Each mutant is explained by a short text taken from the public domain Drosophila \"red book\". In order to keep alive the graphic material from the \"Drosophila Viewer\" program and organize everything as an elementary Atlas on phenotypic mutations of Drosophila melanogaster, a few more original color diagrams were produced to represent some new variants from some photographs not included in the computer program or from the photos by the Drosophila research group who described the mutation.
The boxes contain also original color photographs of whole-mounts of Drosophila mutants and/or photos of living specimens obtained from the FlyBase repository of Each box contains a diagrammatic but detailed color representation of the corresponding phenotypic mutation and some extra material. The color illustrations, taken from the archives of the program \"Drosophila Viewer\" had the brown color of the flies reworked to bring it to the true color of the wild-type singed. The box contains, upper row, left: an original color illustration taken and repainted from the \"Drosophila Viewer\" program, showing the bristles in the mutant and in the wild-type fly; upper row, right: a black & white drawing of a mutant fly by yellow white female (yw/yw) crosses. The F2 progeny of such crosses is formed by non-recombinant and by recombinant flies, due to gene recombination that occurred during gametogenesis in their female parent. The different F2 phenotypes depicted occur with expected frequencies that depend on the physical distance between the involved linked genes. Formal permission for all extra material included in the Atlas was obtained directly from the respective corresponding authors and editors listed in the Atlas and in the program \"Drosophila Viewer\", or in the publications describing it a,b."} +{"text": "The situation triggered by the war initiated by Russia in Ukraine, in addition to the widely known impacts that it has directly on the affected populations, has consequences that affect to almost the entire world population. The effects are verified either through increases in the reference prices of commodities, or by occasional ruptures in the supply chains, all of which contribute to the worsening of the economic and social conditions of most of the people.One of the sectors with predictably the greatest impact is food, through multiple mechanisms that end up having the same end result: the worsening of the quantity of food produced on the planet and potentially the decrease of diets quality.Russia and Ukraine are two of the biggest food exporters at a worldwide level, as illustrated by the WPF , accountTherefore, there are strong reasons to believe that in addition to potential food shortages in the war zone, there may be a worsening of the food condition in the near future in regions where it was already fragile or perilous. Even for countries with sufficient economic conditions to continue to obtain supplies at rising prices of food raw materials, it must be considered that many vulnerable groups already faced difficulties before the war in maintaining a sufficient, healthy and balanced diet.In addition to the above-mentioned situation, Russia is the world\u2019s largest exporter of fertilizers \u201312, partThe months of March and April are critical for the volume and quality of European crops and plantations when many sowings are fertilized. The failure at this point ends up having repercussions later, since not sowing or not fertilizing now will prevent or reduce the expected harvests in the coming months of July\u2013September. 
This means that even assuming a rapid resolution of the conflict, problems in the world food supply chain are expected until August 2023.The lack of food in the global supply chain always ends up triggering an increase in demand for new agricultural land, reinforcing deforestation, the use of less suitable soils for agriculture, an increase in the consumption of wild species, overfishing, and other protein- and carbohydrate-seeking behaviors.This global picture of the future of human and animal nutrition poses complex challenges to Public Health, but also opens the door to developments that would have been unthinkable until recently.We should now promote food education programs among populations around the world to reduce the amount of food commodities needed to provide the same final amount of food, reduce costs per final kilogram and improve the diet provided to each person, thereby promoting better health. Therefore, the Association of Schools of Public Health in the European Region (ASPHER) calls for reinforcement of two main pillars of action: Education and Public Policies.In the field of Education: (1) Taking advantage of the food crisis to move towards healthier diets and thus combat the metabolic diseases that are highly prevalent in the wealthiest countries.(2) Combating food waste, with education interventions focused on reducing food losses along the supply chain (currently in immense amounts).(3) Regaining appreciation for local agricultural production and products with less commercial marketing, but with great food value. History shows that this already happened in previous conflicts.(4) Considering options that integrate what were habitually considered wasted parts of animals and plants, thus reducing the pressure in the search for other food sources.(5) Fostering the search for alternatives that replace animal proteins requiring a high consumption of cereals per kilogram of final product with others of lower consumption.In the field of Public Policies: (1) Balancing the production of biofuels using missing food commodities with their allocation for food purposes.(2) Suppressing bans on food based on its \u201cbeauty,\u201d such as the \u201cugly fruit,\u201d non-standard size vegetables and other similar measures aimed at reducing food waste.(3) Protecting wild areas from the rampant expansion of agricultural land, including protecting forests and controlling overfishing to temporarily compensate for the lack of food.
There are many technological resources that can be used in agriculture and livestock farming that are capable of optimizing the production, transport, and use of food, provided that the immediate search for profit is not the main driver of production, but rather the search for the best possible food for humans.(4) Valuing foods that are currently undervalued in the world food supply chain for commercial reasons, but which have a high nutritional value.(5) Adding to the current diet resources that are currently only marginal in human and animal nutrition, such as edible algae and herbs still considered harmful in most countries.These measures aim to: (1) Prevent the aggravation or appearance of famines resulting from the war between Russia and Ukraine outside the conflict area, which could themselves cause or potentiate conflicts in other regions of the planet.(2) Mitigate food shortages in the conflict zone and world-wide.(3) Enhance the quality of the diet available to each citizen.(4) Improve food education of each citizen.(5) Reduce the impacts on the planet, and therefore on climate change, resulting from the production of human food."} +{"text": "MSc Psychiatry at Cardiff University is an established postgraduate programme offering students a sound theoretical basis in psychiatry as a medical science and specialty. The programme currently offers six taught modules, as well as a dissertation module that students complete towards the end of the programme. In catering for the professional needs of clinical students and students pursuing careers in academia, two additional taught modules have been proposed exploring Leadership and Management in Psychiatry and Advances in Psychiatric Research. Feedback on the proposed introduction of the new modules was collated from the current full-time and part-time student cohorts.A total of 57 students currently enrolled on the programme were surveyed in relation to the proposed additional taught modules. The survey was created using Microsoft Forms and deployed via the programme's virtual learning environment. A mixed methods design was employed, with both Likert scale and open-ended items included in the survey. Students were informed that future cohorts would be offered a choice between the existing Forensic Psychiatry & Substance Misuse module and the proposed Leadership and Management in Psychiatry module, as well as a choice between the existing Child and Adolescent Disorders module and the proposed Advances in Psychiatric Research module.Responses from the current student body were collated and analysed. A total of 29 (51%) students surveyed were medical professionals, with the remaining 28 (49%) students being science graduates or other clinical professionals. Descriptive analysis of the quantitative data revealed that an overwhelming majority of students viewed the introduction of the new modules as a positive development that would further enhance the student learning experience and continuing professional development. Content analysis of the qualitative data revealed further insights on the nature of the proposed modules and preferences on how these should be included within the existing programme schedule.Students currently enrolled on the MSc Psychiatry favour the introduction of the proposed modules tailored to support professional development.
Specifically, students view the proposed module on Leadership and Management in Psychiatry as catering to the needs of clinicians working in a variety of healthcare settings, whilst the proposed module exploring Advances in Psychiatric Research was considered to supplement existing course content on evidence-based medicine and to cater for students with an interest in pursuing a career in academia."} +{"text": "Little is known regarding the intervening variables between pathological narcissism and sadistic personality.
Specifically, envy is a psychoanalytical construct that appears especially promising in illuminating such relationships.To extend the knowledge regarding the nomological network of pathological narcissism.We administered to a sample of Italian adults a battery of self-report questionnaires including the Italian version of the Benign and Malicious Envy Scale, the Assessment of Narcissistic Personality, The Narcissistic Admiration and Rivalry Questionnaire and the Pathological Narcissism Inventory.First, the Italian version of the Benign and Malicious Envy Scale showed good fit indexes confirming the original factorial structure as well as configural invariance. We found that only the grandiosity facet of the Pathological Narcissism Inventory, the Rivalry subscale of the Narcissistic Admiration and Rivalry Questionnaire and the Malicious subscale of the Benign and Malicious Envy Scale positively and significantly predicted Assessment of Narcissistic Personality scores. Moreover, throughout a structural equation modeling approach, the hypothesis that rivalry and malicious envy both mediate the relationship between grandiosity and sadism was empirically supported.The use of the Benign and Malicious Envy Scale resulted to be promising in the investigation of the nomological network of pathological narcissism. Limitations and future directions are discussed.No significant relationships."} +{"text": "Suicide is one of the most common causes of death among young people worldwide. Adolescence is an important developmental period of life due to the increased risk of suicide and the prevalence of psychiatric disorders.To explore the suicidal ideation, intentions and risk factors of adolescents.A clinical case study presentation will be performed.An adolescent female, aged of 16 years old, was admitted to the Department of Psychiatry for Children and Adolescents of a General Hospital, diagnosed with behavioral and emotional disorder and active suicidal ideation on ground of sexual abuse. During her hospitalization, she exhibited self-destructive behaviour by swallowing objects or causing extensive skin scarring as well as serious suicide attempts by hanging. Her emotional and behavioral status was unstable and unpredictable. The adolescent had repeatedly expressed her will to escape from an unbearable life.The results of the presentation of our clinical case could contribute to the improvement of awareness regarding suicidal behavior in adolescence, which might have a significant effect on the prevention and treatment of this potentially lethal condition.No significant relationships."} +{"text": "This review study examines the cases of improving the therapeutic skills of therapists and areas of counseling and the important cases that midwives have to provide services and manage conditions if Diagnosis of an abnormal fetus requires attention.We aim to find the best ways of counseling for helping parents with diagnosed abnormal fetusesA search conducted by using the keywords congenital anomalies, psychological counseling, prenatal counseling in PubMed, science direct, clinical key and Google scholar search engine. after screening, the complete data of 20 articles were included in this review article.The results showed that pregnancy counseling with abnormal fetuses includes medical and psychological counseling. 
In medical counseling, knowledge of the types of tests and their interpretation is important, and prenatal screening training programs for health care providers should be revised based on their educational needs. In psychological counseling, to meet the needs of a changing population of clients Midwives in the context of the wider healthcare system need accurate knowledge of religious beliefs and cultural contexts of their clients in order to take the best approach to relevant care. The occurrence of a diagnosis of congenital anomaly during transmission to parents adds to the accumulation of stress-related events that may increase the risk of developing psychological symptoms in the early stages after diagnosis.Considering the different cultures of different countries of the world, midwifery counseling skills play an important part in the diagnostic and therapeutic process. Therefore, creating extraordinary educational programs on university education is needed for midwives.No significant relationships."} +{"text": "The production of rolling bearings is a complicated process that requires the use of many operations. The manufactured elements of rolling bearings should be of high quality while minimizing production costs. Despite many research studies related to the analysis of technological processes, there is still a lack of research and tools allowing us to satisfactorily assess the relationships between individual operations of the rolling bearing ring process of production and the quality. To perform such an assessment, one can use the concept of technological heredity phenomenon analysis. As the surface waviness of the bearing race is of key importance, the present paper aims at evaluating how the individual technological operations of the rolling bearing ring production process affect the formation of their surface waviness. The surface waviness of the bearing race was measured in both directions (two sections), i.e., along the circumference using the Talyrond 365 measurement system and across the circumference of the race using Talysurf PGI. The production of 6308-2z rolling bearings made of AISI (American Iron and Steel Institute) 52100 bearing steel was analyzed. The occurrence of the phenomenon of technological heredity in the production of rolling bearings was observed. The research results indicate that the turning operation reduces the surface waviness of the bearing rings obtained after forging, while the heat treatment causes a slight increase in surface waviness. On the other hand, grinding operation significantly reduces the waviness, with this reduction being greater for the outer ring. Furthermore, the research has shown that the waviness of the surface is an inheritance factor caused by individual operations of the rolling bearing rings manufacturing process. Rolling bearings are essential components of technical devices. Rolling bearings in addition to the obvious applications, e.g., in machines and cars , are alsIt may seem that due to the prevalence and standardized character of rolling bearings, the process of their production has been fully analyzed. However, the standardization does not prevent manufacturers of rolling bearings from trying to improve their production processes so that their products are of high quality . 
In mechanical engineering, in addition to determining the value of deviations in the product, it is important to determine at what stage of the technological process a given deviation (error) occurs and how it is formed during the implementation of the process. Historical information about the magnitude of deviations (errors) from the previous process is important, as it enables the determination of how the magnitude of deviations affects the final product. Therefore, the magnitude of the occurring error and its impact on the subsequent part of the technological process should be controlled at each stage of the technological process. We refer to these procedures as the concept of technological inheritance analysis. The term Technological Inheritance (TI) refers to the phenomenon of transferring certain features of an object, e.g., deviation of roundness or waviness of a surface, between successive parts of technological processes. If these properties remain constant in the final product, then the phenomenon of Technological Heredity (TH) occurs.The analysis of the technological heredity phenomenon makes it possible to increase the reliability of the technological process. This is especially important in the case of complex mechanisms with multi-bolted connections.The concept of technological heredity analysis is the subject of several scientific studies.The research presented in the present paper is intended to show how the various stages of rolling bearing ring production affect the formation of surface waviness. The heredity coefficient was used to assess these changes. It should be added that the surface waviness is a critical feature of the rolling bearing race surface [19,20].For the research on the evaluation of technological heredity, the outer and inner rings of 6308-2z bearings taken directly from the production process were used. Bearings of this type are commonly used in the mechanical industry and can be installed in mechanisms with high rotational speeds. Moreover, they can support axial and radial loads. The rolling bearing rings are made of AISI 52100 bearing steel, whose chemical composition and properties are given in the referenced table.The production process of type 6308-2z rolling bearing rings includes several key operations. The most important of them were selected to assess the transformation of surface waviness. The first operation consisted of making forgings from a pipe used as a blank. Then, the forgings were turned, including turning of the face as well as the outer and inner surfaces of the ring. Another operation in the manufacturing process of the rolling bearing rings is annealing. The final step is grinding the front, internal, and external ring surfaces. The races are additionally super finished. For metrological inspection, three inner and outer rings of 6308-2z bearings taken after specific operations of the technological process were used.The surface waviness measurements on the circumference of the race were made using a Talyrond 365 roundness measuring instrument manufactured by Taylor Hobson. It is a high-precision measurement system using the radius change method with a rotating table. The parameter used to describe the waviness measured along the circumference of the race is the roundness deviation, RONt, determined based on the least-squares circle, LC. The filtration is performed using a Gaussian filter to separate the roundness and roughness components and leave only the waviness components.
This is one of the most popular parameters used to describe the shape quality of cylindrical elements. It is defined as the maximum deviation inside and outside the reference circle. As the deviation was determined for the waviness profile, in the remaining part of the paper the waviness deviation is understood as the RONt parameter.The measurements of the waviness of the race surface across the circumference of the race were carried out using a Talysurf PGI contact profilometer. Three parameters were used to assess the waviness of the race surface measured transversely, i.e., Wa, Wq, and Wt. Parameter Wa is the arithmetic mean height of the waviness and is the average of the absolute values along the sampling length. This is one of the most popular waviness parameters applied in industry, but it gives only general information on the waviness without specifying the spatial structure. Another important parameter is the root-mean-square waviness deviation, described as the Wq parameter. It should be added that the Wa and Wq parameters are used to assess surfaces that are lubricated and sealed, which is especially important in rolling bearings. Parameter Wt is defined as the total height of the waviness profile, i.e., the vertical distance between the maximum profile peak height and the maximum profile valley depth along the evaluation length. The Wt parameter should be used to assess surfaces involved in the rotational motion of mechanisms.Additionally, for selected rings of rolling bearings, the surface topographies were assessed using a Talysurf CCI optical profilometer. Such tests allow for a complete analysis of the surface waviness and the assessment of its change and transfer as a result of individual operations of the bearing ring production process.To quantify how individual operations of the production process affect the shaping and transmission of surface waviness, the technological heredity factor THF_y should be determined (see Equation (1)), where x_o is the analyzed technological operation parameter value, x_po is the previous technological operation parameter value, and y is the analyzed technological operation index.The technological heredity factor developed in this way allows determination of the percentage of change and the transfer of the bearing race waviness as a result of individual stages of technological operations in bearing ring production.The examination effects are presented in graphs and tables, broken down into the results for the races of the inner and outer rings.In the transfer (inheritance) of the surface waviness parameters between the four operations of the rolling bearing ring production process, three heredity coefficients were determined: THF_t for turning, THF_a for heat treatment, and THF_g for grinding. The values of these coefficients were determined for the waviness deviation RONt and the surface waviness parameters Wa, Wq, and Wt, and are presented in the corresponding tables.By analyzing the test results, it can be seen that extreme values of the heredity coefficient occur, reaching THF_t = \u22123117.22%. Moreover, the values of surface waviness inherited as a result of individual technological operations affect further stages of the process of production. Excessive waviness values also adversely affect subsequent machining operations because they can cause additional vibrations in the entire machine system (machine tool\u2013chuck\u2013cutting tool\u2013workpiece) and the propagation of additional shape errors.
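As a rough illustration of how such a heredity factor can be evaluated from measured waviness data, the Python sketch below applies a plain relative-change definition between consecutive operations. The exact normalization of Equation (1) is not reproduced here and evidently differs (the reported coefficients exceed 100% in magnitude), and the RONt values used are invented for demonstration.

```python
def technological_heredity_factor(x_o, x_po):
    """Relative change of a waviness parameter between two consecutive
    operations, in percent.

    x_o  -- parameter value after the analyzed operation
    x_po -- parameter value after the previous operation

    Note: an assumed illustrative definition; the paper's Equation (1)
    may normalize differently.
    """
    return (x_o - x_po) / x_po * 100.0

# Invented RONt values (micrometres) after consecutive operations.
ront = {"forging": 38.0, "turning": 12.0, "annealing": 13.5, "grinding": 1.8}

stages = list(ront)
for prev, curr in zip(stages, stages[1:]):
    thf = technological_heredity_factor(ront[curr], ront[prev])
    print(f"THF({curr} vs {prev}): {thf:+.1f}%")
```

The excessive waviness discussed above, and the vibrations it can excite in the machine system, are what the grinding allowance considered next is meant to counteract.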
This can be eliminated by providing an additional allowance for grinding, but this is very energy-consuming, which is also disadvantageous.The research presented in the article is preliminary. In further studies, the author will analyze changes in other parameters as a result of other technological operations. The developed technological heredity coefficient will also be used to link the technological parameters of a given manufacturing process with the operational parameters of the manufactured part of the machine. The examination of other types of rolling bearings is planned. Furthermore, in future research, the author will examine changes in the surface texture described by waviness 3D as a result of technological operation parameters."} +{"text": "Decision support systems can seriously help medical doctors in the diagnosis of different diseases, especially in complicated cases. This article is devoted to recognizing and diagnosing heart disease based on automatic computer processing of the electrocardiograms (ECG) of patients. In the general case, the change of the ECG parameters can be presented as a random sequence of the signals under processing. Developing new computational methods for such signal processing is an important research problem in creating efficient medical decision support systems. Authors consider the possibility of increasing the diagnostic accuracy of cardiovascular diseases by implementing of the new proposed computational method of information processing. This method is based on the generalized nonlinear canonical decomposition of a random sequence of the change of cardiogram parameters. The use of a nonlinear canonical model makes it possible to significantly simplify the maximum likelihood criterion for classifying diseases. This simplification is provided by the transition from a multi-dimensional distribution density of cardiogram parameters to a product of one-dimensional distribution densities of independent random coefficients of a nonlinear canonical decomposition. The absence of any restrictions on the class of random sequences under study makes it possible to achieve maximum accuracy in diagnosing cardiovascular diseases. Functional diagrams for implementing the proposed method reflecting the features of its application are presented. The quantitative parameters of the core of the computational diagnostic procedure can be determined in advance based on the preliminary statistical data of the ECGs for different heart diseases. That is why the developed method is quite simple in terms of computation and can be implemented in medical computer decision systems for monitoring cardiovascular diseases and for their diagnosis in real time. The results of the numerical experiment confirm the high accuracy of the developed method for classifying cardiovascular diseases. Therefore, timely high-precision diagnostics of heart diseases, prevention and treatment at an early stage of the development of the disease acquire exceptional relevance. To improve the accuracy of diagnosing the state of the heart in recent decades, computer systems for automatic analysis6 of electrocardiographic data obtained during the processing of an electrocardiographic signal have been widely used. Automatic analysis of electrocardiograms is a rather complex theoretical problem. 
First of all, this is due to the physiological origin of the signal9, which is the reason for its indeterminacy, diversity, variability, unpredictability, non-stationarity and susceptibility to numerous types of interference.Medical statistics showAt present, the analysis process, as a rule, is a study of isopotential and other maps generated from the data obtained using the software supplied with the signal recording apparatus.In medical practice, the conclusions of cardiologists about patients\u2019 diagnoses have, as usual, qualitative or verbal character and are not always confirmed by enough number of quantitative data. In special or difficult situations with disease recognition, diagnosis errors by young or insufficiently experienced medical doctors are possible, and the real diagnostic process may be significantly extended until a final correct decision about the truth diagnosis.Decision support systems can seriously help medical doctors in the decision-making processes about the diagnosis of different heart diseases, especially in complicated cases. The most perspective approach is based on the recognizing and diagnosing heart disease using automatic computer processing of the electrocardiograms (ECG) of patients.In the general case, the change of the ECG parameters can be presented as a random sequence of the signals under processing. Developing new computational methods for such signal processing is an important research problem in creating efficient medical decision support systems.In this article, the authors consider the possibility of increasing the diagnostic accuracy of cardiovascular diseases by implementing the computational method, which is based on the generalized nonlinear canonical decomposition of a random sequence of the change of cardiogram parameters. The absence of any restrictions on the class of random sequences under study makes it possible to achieve maximum accuracy in diagnosing cardiovascular diseases. The main advantage is that quantitative parameters of the computational diagnostic procedure can be determined in advance based on the preliminary statistical data of the ECGs for different heart diseases. That is why the developed method is quite simple in terms of computation and can be implemented in medical computer decision systems for monitoring cardiovascular diseases and for their diagnosis in real time.Thus, the development of efficient mathematical models and computation methods for identifying the high-accuracy individual characteristics of an electrocardiogram (with subsequent classification), as well as the creation of an automated computer diagnostic support system, is an urgent and important task in \u201cmedicine\u2013computer science\u201d multidisciplinary research.The rest of the article covers multiple aspects related to the topic discussion. \u201cDiagnostics of electrocardiograms consist of three successive stages: (a) preliminary processing, (b) feature extraction , and (c) classification. Let us analyze all these stages consequently.Fist stage\u2014Preprocessing reduces signal measurement noise by smoothing the electrocardiogram signal, reducing drift suppression and baseline deviation. The most common existing methods used to reduce signal noise are (a) second order low pass and (b) high pass Butterworth filters10, (c) Daubechies wavelet11 and (d) orthogonal wavelet-filter12. 
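As an illustration of this first, preprocessing stage, the sketch below applies a zero-phase Butterworth band-pass filter to a synthetic trace using SciPy; the cut-off frequencies, filter order and test signal are arbitrary choices for demonstration, not values from the article.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_ecg(signal, fs, low_hz=0.5, high_hz=40.0, order=2):
    """Suppress baseline wander (< low_hz) and high-frequency noise
    (> high_hz) with a zero-phase Butterworth band-pass filter."""
    b, a = butter(order, [low_hz, high_hz], btype="band", fs=fs)
    return filtfilt(b, a, signal)

# Synthetic test trace: a 1.2 Hz "heartbeat" plus baseline drift and noise.
fs = 360.0
t = np.arange(0, 10, 1 / fs)
raw = (np.sin(2 * np.pi * 1.2 * t)
       + 0.5 * np.sin(2 * np.pi * 0.1 * t)
       + 0.05 * np.random.randn(t.size))
clean = bandpass_ecg(raw, fs)
print(clean[:5])
```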
Besides, for baseline adjustment, such techniques as median filter, linear phase high pass filter, mean median filter and others are used also.Second stage\u2014Feature extraction is an interactive process that includes a series of automatic data transformation procedures. In cases with a large number of measurements-features that describe the characteristics of the input signal, the correlation and factor analysis of data can be used to reduce the dimension of the problem. According to the extraction and analysis methods, the features can be divided into the following categories:13 ;Temporary features15 ;Spectral features17 ;Time-frequency/wavelet features18 .Signs of the complexity of geometric distortions14; direct feature selection19; genetic algorithms20; (b) filter methods (correlation)21; Chi-Square15; analysis of variance (ANOVA)22; ReliefF23; (c) built-in methods24.The stage of feature extraction ends with the optimization of their number, which allows reducing the set of redundant functions, reducing computational costs and improving the overall performance of the system. This step uses the following three main categories of feature selection methods: (a) wrapper methods 30; Decision Trees (DT)31.32, a combination of statistical, geometric, and nonlinear heart rate variability features32, a semantic web ontology and heart failure expert system33, signal averaging method, multivariate analysis34, RPCA\u2014recursive principal component analysis35, SPSA\u2014simultaneous perturbation stochastic approximation method36, ABT\u2014Amplitude Based Technique FDBT\u2014First Derivative Based Technique, SDBT\u2014Second Derivative Based Technique37, Hilbert transform38 and others.Several different approaches for ECG analysis are based on a chaos theoryAt the same time, each of the above-mentioned methods has its drawbacks and limitations. That is why the need to develop new effective methods of medical diagnostics has not lost its relevance.Thus, the change in the values of the electrocardiogram has a stochastic character; therefore, for the diagnosis of cardiovascular diseases, it is necessary to use methods for recognizing random functions and random sequences.39) is applied in conditions when the stochastic properties of classes of random sequences are fully known. If the prior probabilities of the classes of random sequences are not known, then equal values are assigned to them and the decision rule is modified into the maximum likelihood criterion. The criterion is especially important when solving problems of recognition, in which unlikely events cannot be excluded from consideration . However, for the maximum likelihood method, as well as for the Bayes rule, the problem of approximating the multivariate distribution density for a random sequence with a large number of sampling points remains unresolved.The main method for recognition of the realizations of random sequences is the Bayes decision rule, according to which a decision about the belonging of the realization to a certain class, for which the posterior probability is maximum, is made. The method is theoretically accurate, however, as well as many of its modifications is significantly simplified, however, the transition from the vector To eliminate all existing probabilistic relationships C are determined by the expressionsThe parameters of the matrix C. 
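To make the classification idea concrete: once the expansion coefficients of a cardiogram are treated as independent, the multivariate likelihood of each disease class factorizes into a product of one-dimensional densities, and the class with the largest (log-)likelihood is selected. The sketch below assumes Gaussian one-dimensional densities purely for illustration; the article itself imposes no such restriction on the distributions, and all numbers are invented.

```python
import numpy as np
from scipy.stats import norm

def log_likelihood(coeffs, means, stds):
    """Sum of one-dimensional log-densities of independent coefficients,
    i.e. the log of their product."""
    return float(np.sum(norm.logpdf(coeffs, loc=means, scale=stds)))

def classify(coeffs, class_params):
    """Maximum-likelihood decision over per-class 1-D density parameters."""
    scores = {name: log_likelihood(coeffs, m, s)
              for name, (m, s) in class_params.items()}
    return max(scores, key=scores.get), scores

# Toy per-class parameters (illustrative only): mean/std of each coefficient.
classes = {
    "norm":        (np.array([0.0, 0.0, 0.0]), np.array([1.0, 1.0, 1.0])),
    "hypertrophy": (np.array([1.2, -0.4, 0.7]), np.array([0.8, 1.1, 0.9])),
}
decision, scores = classify(np.array([1.0, -0.2, 0.5]), classes)
print(decision, scores)
```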
Consider this array as a vector random sequence A priori information onent of .8\\documeThe coordinate functions position are deteThe values The block diagram of the algorithm for calculating the parameters of the canonical expansion is shownExpression is a non51.The absence of assumptions about the form of the distribution density of random variables The computational method for diagnosing cardiovascular diseases consists in the realization of the following stages:Estimation of moment functions Formation of canonical decompositions for variSynthesis of one-dimensional distribution densities of independent random coefficients Calculation of the values Determination of the belonging of the cardiogram ion rule .The diagram of the functioning of the system of diagnostics of cardiovascular diseases based on the developed method is shown in Fig.\u00a0The proposed method was tested on the basis of statistical data of nine cardiovascular diseases: (a) mild neurocirculatory dystonia, (e) hypertrophy of myocardium, 52.For the numerical experiment two hundred different cardiograms for each disease 53 and the Bloom criterion54, showed the truth of the hypothesis about the independence of coefficients at 40 of the studied random sequences 55; (c) fuzzy logic method 58; (d) neural network 60 based on the Daubechies wavelet function of the fourth order and the Levenberg\u2013Marquardt algorithm for learning; (e) generalized non-linear criterion Neural network59.Daubechies wavelet function of the fourth order and the Levenberg\u2013Marquardt algorithm for learning were used Expressions for the determination of approximation coefficients and detailing of discrete wavelet transform are presented in the form:Output signal of each of separate neuron of output layer was forming asContinuous sigmoid bipolar function Tables\u00a0The data in Tables\u00a0riterion makes itmentclasspt{minimariterion by maximA computational method for computer systems for automated diagnosis of cardiovascular diseases based on a generalized nonlinear canonical decomposition of a random sequence of change of cardiograms has been obtained. The use of the canonical model made it possible to form the decision rule for the maximum distribution density in the form of a product of one-dimensional distribution densities of random coefficients. The canonical decomposition does not impose any significant restrictions on the class of random sequences under study, which makes it possible to maximally take into account the stochastic characteristics of sequences related to various cardiovascular diseases.Taking into account the recurrent regularity of calculations, the diagnostic method is quite simple in terms of computation and allows using an arbitrary number of input parameters. A significant advantage of the method is the ability to use characteristics not directly related to the cardiogram .During the operation of the diagnostic system based on the proposed computational method, new diseases unknown to medicine can be identified in the case of a significant difference in the values of the likelihood function for the investigated cardiogram and the classified cardiograms of known diseases.The results of the numerical experiment indicate a high reliability of the diagnostics of cardiovascular diseases based on the proposed method."} +{"text": "Epidemic wavefront models predict the spread of medieval pandemics such as the plague well. 
Our aim was to explore whether they contribute to understanding the spread of COVID-19, the first truly global pandemic of the 21st century with its fast and frequent international travel links.We analysed the spatial spread of reaching a threshold of very high incidence of new daily infections of the virus across European countries in the autumn of 2021 in which the Delta variant was dominant, as well as an even higher threshold of incidence in the subsequent spread of infections across the same set of countries during the winter of 2021/2022 when the Omicron variant of the virus became dominant.We found patterns that are consistent with wavefront models for both periods of the pandemic in Europe.Modern means of transportation strongly accelerated the spread of the virus and typically generated diffusion patterns along bidirectional constrained mobility networks in addition to stochastic diffusion processes. However, since the majority of mobility, including mobility across international borders, is over short distances, wavefront patterns in the spread of a pandemic are still to be expected. Tobler\u2019s well-known first law of geography claims that \u2018everything is related to everything else, but near things are more related than distant things\u2019 . Local nWavefront models predict the diffusion patterns of the plague, cholera and Ebola in Sierra Leone ,8 well, Our analysis differs from classical models explaining the spatial spread of infectious diseases, since we do not consider the wavefront of the virus itself . After tFew European countries have the capacity and willingness for extensive genome sequencing, with all but six of them sequencing <2.5% of tests during December 2021 In the UTo account for the higher infection rate of Omicron relative to Delta , we emplAn understanding of epidemic diffusion patterns can help public health officials to predict and therefore prepare for when and from where a significant increase in infections is likely to occur. To our knowledge, this is the first paper providing evidence for the existence of wavefront diffusion patterns during the Delta and the Omicron waves of the COVID-19 pandemic and that wavefront models still contribute to understanding the spread of infectious diseases such as SARS-CoV-2. Naturally, spread also occurs stochastically and along bidirectional constrained mobility network patterns, including seeded infections imported from far-away places facilitated by international air travel such that non-proximate spread occurs simultaneously to proximate wavefront-type spread . There aTobler\u2019s law is not the only possible explanation for the observable empirical patterns in"} +{"text": "In schizophrenia, there are disorders in all sensory modalities, but the regularities of their occurrence, their pathogenesis and attitude towards cognitive functions are not sufficiently studied.Examine the interrelation between the dysfunctions in different analysers and their dependence on the duration of the disease and the severity of psychotic symptoms and cognitive deficit in schizophrenic patients (F20 according to ICD 10 criteria).All subjects were determined the threshold of olfactory sensitivity to n-butanol, the ability to discriminate against odors and the amount of error in comparing the same sections. 
Cognitive functions were evaluated using the BACS scale. An inverse correlation between the value of the visual assessment error and the reduction of the threshold of olfactory sensitivity and an inverse correlation between the value of the visual assessment error and the ability to discriminate smells were revealed. There are no significant correlations between the duration of the disease and sensory disturbances. Olfactory and visual disturbances in schizophrenic patients were connected with cognitive functions. The data confirm that sensory impairments have a common pathogenesis and are closely related to cognitive deficits. Sensory and cognitive deficits in schizophrenia may be the result of top-down regulation failure. No significant relationships."} +{"text": "In the original publication of the article, the author noticed an error in the Section \"Classification of Edematous States\": in the second paragraph, the sentence should read, \"It was Domenico Cotugno (1736\u20131822) who the first described the association of edema and proteinuria in his 1770 De ischiade nervosa commentaries \" instead of \"It was Domenico Cotugno (1736\u20131822) who the first described the association of edema and proteinuria in his 1770 De ischiade nervosa commentaries . but did not link it to kidney disease\"."} +{"text": "Brain functions rely on the communication network formed by axonal fibers. However, the number of axons connecting different brain regions is unknown. A study in PLoS Biology addresses this question and finds that most areas of the human cerebral cortex are linked by an astoundingly small number of fibers. Activity patterns and complex functions of the brain rely on the characteristic communication network formed by axonal fiber networks, but how many axons actually connect different brain regions? This Primer explores a study in PLOS Biology which finds that most areas of the human cerebral cortex are linked by an astoundingly small number of fibers. In recent years, network neuroscience has provided a new perspective on how brain function emerges from the communication among distributed, specialized regions in the brain. This framework explains patterns of neural activity and brain functions on the basis of the structural connections among brain regions. The structural connection scaffold has been uncovered systematically by histological approaches in non-human primate models as well as by diffusion imaging approaches in the human brain. Amazingly, however, while the general existence or the relative strength of pathways connecting regions of the human brain can be well estimated, the actual numbers of neurons linking one region to others are still uncertain. This uncertainty is due to methodological limitations. For example, the number of streamlines inferred in diffusion tractography varies with the voxel resolution of the imaging data, and the number of stained projection neurons in histological studies depends on the amount of injected tracer, so that current approaches only deliver estimates of relative, but not absolute connectivity. This problem was addressed by Rosen and Halgren, who estimated the absolute number of axons linking areas of the human cortex by converting tractography streamline counts into axon counts. The small number of projections can be seen against the background of the number of neurons under one square millimeter of cortical surface, which is on the order of 60,000 neurons. Rosen and Halgren also illustrate the implications with functional examples. Despite the generally very sparse connectivity, there do exist some substantial connections, highlighting the factors that underlie the existence of such highways among a majority of byways.
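The streamline-to-axon conversion discussed in this Primer can be illustrated with a deliberately simple calculation; every number below is a hypothetical assumption, not a value reported by Rosen and Halgren, and the sketch only shows the general idea of scaling relative streamline counts by a calibration factor derived from a tract with an independently known axon count.

```python
# Illustrative arithmetic only: converting relative tractography streamline counts
# into absolute axon estimates via a calibration factor. All numbers are hypothetical.
total_reference_axons = 2.0e8        # assumed histological axon count for a reference tract
total_reference_streamlines = 1.0e6  # assumed streamline count for the same tract
axons_per_streamline = total_reference_axons / total_reference_streamlines  # calibration factor

streamlines_between_two_areas = 50   # assumed streamline count for a sparse long-range pathway
estimated_axons = streamlines_between_two_areas * axons_per_streamline
print(f"~{estimated_axons:,.0f} axons estimated for this pathway")
```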
Distance has been indicated as one of these factors, while another is the cytoarchitectonic similarity of connected regions, according to the structural model of connections. But do these principles imply that cortical signals only propagate between strongly connected adjacent or architectonically similar areas? What then is the functional role and significance of the majority of sparse long-distance projections in the human cortex? More specifically, what are the mechanisms by which signals may become amplified even though they are traveling along just a few projection axons? Models by which signals simply diffuse across the global network do not appear plausible given the sparsity of most long-distance projections relative to the massive number of local connections. By contrast, potential mechanisms through which long-range projection signals could be amplified include particularly large projection neurons. The finding of sparse connectivity crucially hinges on the inferred conversion factor of streamlines to axons, which therefore needs to be carefully validated. Future comparative investigations may also address if the connection sparsity found here is particularly characteristic of the human brain, or if large brains of other animals possess similar sparsity. This question is relevant in the context of functional interpretations of sparsity as a basis for the segregation and stabilization of signals in the human brain. The skillful integration of cortical scales demonstrated by the present study may shape future multi-scale studies of brain connectivity."} +{"text": "Autism spectrum disorders (ASD) are one of the most urgent problems of psychiatry because of their high prevalence, diagnostic difficulties as well as insufficient knowledge of the pathogenetic mechanisms. To determine the levels of inflammation markers in patients with various forms of ASD in relation to features of the clinical condition, in order to create diagnostic criteria for differential diagnosis and improve its reliability. The clinical examination of patients (135 children with various ASD forms) was carried out by using psychometric scales. The activity of inflammation markers (LE and \u03b11-PI) and the level of autoantibodies to S-100b and MBP were measured in plasma. Complex evaluation of immune system activation was also conducted, taking into consideration interactions of innate and adaptive immunity. Non-psychotic ASD forms (Asperger\u2019s syndrome and Kanner\u2019s syndrome) were not accompanied by a change of the immunological indices in comparison with control. In psychotic ASD forms, a significant increase of the studied indices was revealed (\u0440<0.05). A correlation between the complex evaluation of immune system activation and the stage of the disease was demonstrated. Significant correlations were also observed between the severity of autistic disorders according to CARS, catatonic disorders by BFCRS, and the assessment by CGI. The immune markers as well as their complex evaluation may be used as additional diagnostic criteria in the clinical examination for differential ASD diagnostics and assessment of the quality of remission, and also for monitoring of the patient condition. No significant relationships."} +{"text": "Jafar Mazumder. The author regrets that the funding information was incorrectly shown in the acknowledgements section of the original manuscript.
The corrected funding acknowledgement is as shown below. The author gratefully acknowledges the research facilities provided by King Fahd University of Petroleum and Minerals (KFUPM) and the financial assistance of the Deanship of Scientific Research, KFUPM, Saudi Arabia through Internal project (# IN131047). The Royal Society of Chemistry apologises for these errors and any consequent inconvenience to authors and readers."} +{"text": "The Reframing Aging initiative began in 2012 with the aim of changing the way in which the public views aging, and one of the key tenets of the initiative centers around language that perpetuates negative views of aging. Despite widespread knowledge about the initiative, recent publications in high-tier journals point to a gap in adopting the Reframing Aging initiative outside of aging journals. Terms that project a negative view of older adults are still used in manuscript titles and within abstracts and bodies of research publications in non-aging journals. As researchers who publish aging-related work, members of organizations such as the Gerontological Society of America are often solicited as reviewers for their expertise in the field of aging. While many researchers in aging may review for aging-related journals, such expertise is often needed in non-aging journals. As such, it is critical for aging researchers to continue to advance the Reframing Aging initiative when conducting reviews of manuscripts that do not adhere to the guidelines. This presentation will provide explicit examples of such publications and review specific steps that reviewers can take in addressing the Reframing Aging initiative in future reviews."} +{"text": "For 40 years the Federal Centre for Health Education in Germany has been analysing the contraception behaviour of young people. The current survey is the ninth iteration, carried out in 2019. This continuous monitoring generates insights on the sexual and reproductive health of young people in Germany. The survey provides an important basis for the development of sexuality education and family planning measures. A total of N = 6032 adolescents and young adults participated in the survey. Data collection was conducted by computer-assisted personal interviewing (CAPI). The current sexual and contraceptive behaviour of adolescents and young adults will be summarized using descriptive results. In addition, the association between contraception non-use and sociodemographic factors, characteristics of sexuality education and situated factors of first sexual intercourse is analysed by multivariate logistic regressions. A key finding of this iteration is that, with regard to the age at first sexual intercourse, the proportion of adolescents younger than 17 years has been declining for several years. For contraception, adolescents most frequently used condoms, and use of the pill has decreased. 9% of the participants reported non-use of contraception at first sexual intercourse. This is significantly associated (p < .01) with unexpected and only unilaterally desired sexual intercourse and the absence of sexuality education in school. In addition, the younger the adolescents were at first sexual intercourse, the greater the risk of contraception non-use. The data from the current iteration indicate safe and responsible contraceptive behaviour among young people in Germany. It is important to maintain the commitment in the field of sexual health promotion and expand prevention measures for young people.
This is the only way to ensure sexual and reproductive health in the next generation as well. Data from the Youth Sexuality Study indicate safe and responsible contraceptive behaviour among young people in Germany. Commitment in the field of sexual health promotion needs to be continued."} +{"text": "Schizoaffective disorder is a psychotic disorder of controversial nosological status. Affective symptomatology and psychotic features of varying intensity coexist simultaneously throughout its evolution. The lack of consensus on the existence of this entity leads to diagnostic delay and the absence of specific treatment guidelines. To review the diagnostic criteria for schizoaffective disorder and the published scientific evidence on the efficacy and safety of the different therapeutic options available. To analyze the efficacy of a multidisciplinary treatment plan implemented in an intensive follow-up program, presenting the evolution of a clinical case. To review the psychiatric history and psychopathological evolution of a patient diagnosed with schizoaffective disorder from the beginning of an intensive follow-up program in a day center to the present. To review the existing scientific evidence on the usefulness of the treatments used in this nosological entity. This is a longitudinal and retrospective study of a clinical case in which the areas for improvement before implementing a multidisciplinary therapeutic program and the favorable results obtained today are analyzed. Currently, the patient is euthymic, and attenuated, chronic positive and negative symptoms persist that do not interfere with his functionality. From the implementation of an individualized, personalized and multidisciplinary maintenance treatment plan, an overall improvement in psychopathological stability and functional recovery is observed. Among the psychopharmacological options in this patient, Paliperidone Long Acting Injection (PLAI) stands out for its long-term efficacy and safety."} +{"text": "In this paper, we present an extensive image dataset produced during the detailed micropaleontological analysis of 146 samples of bottom sediments collected by multicorer and gravity corer at station AMK-5656 in the Westray Basin on the north-western Scotland shelf (North Atlantic Ocean). In total, 106 taxa (at species and genus level) of benthic foraminifera were identified and photographed using a high-resolution microscope camera. This dataset can serve as a guide for the identification of benthic foraminiferal taxa in paleoecological studies, stratigraphic works and interregional paleoceanographic correlation in the North Atlantic Ocean. Geological studies in the seas and oceans often give preference to the study of benthic foraminifera. The North Atlantic (NA) is one of the key areas of the thermohaline circulation system of currents that transfer heat, salt, dissolved elements and gases and sedimentary matter to the Subarctic and Arctic regions. This circulation, being a part of the large-scale circulation of the World Ocean, affects the warming and cooling of the global climate and regional oceanography in the NA and Arctic. The studied material covers the latest Pleistocene to Holocene sediments in the Westray Basin on the north-western Scotland shelf (NWSS). The sediment material was collected in summer 2018 during the 71st cruise of the research vessel Akademic Mstislav Keldysh. Laboratory analyses were carried out on the benthic foraminifera in the North Atlantic bottom sediments from the multicorer and gravity corer (GC) samples of the AMK-5656 station obtained during that cruise.
The NWSS is a shallow-water region of the western North Atlantic Ocean dominated by the warm saline surface water of the North Atlantic Current branch crossing the NWSS from west to east between Orkney and Shetland. Bio-monitoring studies, based on the living fauna, indicate that taxonomically diverse benthic foraminifers densely populate the high-latitude shelf areas. A marked change in the benthic foraminiferal assemblage is documented at the GC core level of 295 cm (see the corresponding table). Below this level, the content of terrigenous matter increases, the CaCO3 content is between 25 and 40%, the total abundance of benthic foraminifers is low, and the infaunal shelf/slope species Bulimina marginata and Fursenkoina fusiformis have increased concentrations, indicating high fluxes of total organic carbon and oxygen-depleted conditions. Of the 146 analysed samples, 127 came from the GC (every 5 cm). All samples were freeze-dried, weighed, and washed in distilled water through a sieve with a mesh size of 63 \u00b5m, following the recommended procedure. The microphotograph tables with images of benthic foraminifers can be used in practical micropaleontological work with modern and Quaternary sediment samples from the high-latitude areas of the North Atlantic. They will help the species identification, the description of the foraminiferal assemblages and the interpretation of the micropaleontological data for biostratigraphy and paleoecology."} +{"text": "The ecological dimension is expressed, among other things, in the matter of movement and the process of appropriation of local spaces. The creation of public space is oriented towards centralising and bringing exchanges closer together. It is a recognition of the ways of life of the individual who has become aware of the other essentials for human well-being. How does the proximity of multimodality and culture strengthen urbanity? And how does it influence urban intensity, livability, health and the salutogenic approach to public space? The study investigates the quality of public mobility spaces through design, multimodality and sustainable planning by surveying the case of Bourse-Grand-Place station in Brussels. This transformation project is the subject of an empirical method using the material of recent research on urban design and professional practice. Falling within the scope of the \u201cCities for People\u201d vision of the future, the design of this project integrates socio-cultural activities around the idea of a \u201cStation for People\u201d. A concept based on universal accessibility ensures that all individuals can access it. Thereafter, an evolving social economy programme promoted cycling through equipment, maintenance, recycling, training, innovation and the encouragement of cycling culture. The breakthrough of the innovative multimodal design process based on multidisciplinarity could become a helpful urban strategy, oriented toward making proximate neighbourhoods both residentially and practically attractive. The present article carries out an enquiry into how design and urban activities take part in strategies to improve the quality of public spaces. It reveals some hints that could help urban practitioners when making decisions regarding the quality of an urban place and \u2018living together\u2019 oriented developments.
With a contribution to climate change issues, this article demonstrates how urban design can contribute to the quality of life of users and citizens."} +{"text": "Knowing how to train the next generation of gerontological leaders involves understanding where we are now and where we want to be in the coming decades. We outline the results of a survey of the membership of Directors of Aging Centers. The Directors of Aging Centers interest group in GSA has representation from the USA, Canada, and Europe. A survey was sent to the membership in late December with reminders in January and had 31 respondents. We discuss the results of the survey, highlighting the demographics of the current leadership (Neil Charness), perceived need for training by current leaders (Peter Lichtenberg), and consensus content of leadership training programs (Patricia Heyn). Patricia D\u2019Antonio provides a perspective on GSA\u2019s approach to professional development programs and avenues for soliciting funding for leadership training. Our discussant will reflect on the need for training in the context of building the next generation of gerontological leadership. We plan to devote significant symposium time to solicit audience input on next steps for supporting the effort to improve the quantity, quality, and diversity of the gerontological leadership workforce."} +{"text": "The aim of this paper is to present a novel case of the formation, operation and evaluation of a community advisory board (CAB) comprised of Muslims residing in the San Francisco Bay Area, California, that utilised a community based participatory approach to address local Muslim mental health needs. The CAB was recruited in partnership with the Muslim Community Association (MCA), one of the largest Islamic centres in the San Francisco Bay Area. In addition to describing the development of the CAB, the authors present the findings of the evaluation and a synthesis of best processes based on CAB members' feedback. To evaluate community advisory board members' perceptions of their roles and elicit feedback on how to enhance the relationship between the university team and the CAB, an evaluation was conducted by an independent team that was not part of the research process. Data were collected using anonymous individual surveys and small group open discussions that were conducted over three evaluation meetings. The evaluation utilised mixed-method data collection strategies using questions from Schulz et al. Results of the evaluation within the sphere of CAB operation indicated that CAB members found the greatest satisfaction from their contributions through direct participation in the research activities that were conducted by the university-CAB team. The collective responses indicated that most CAB members were satisfied with the trust built between the university team and the CAB and with the diversity represented among the members of the board. However, given that the Bay Area is home to a very diverse Muslim community, challenges in recruiting representatives who account for all possible self-identifying groups were reported by the CAB, with recommendations to recruit religious leaders.
Recommendations also included eliciting funds for potential financial compensation for CAB members. The Stanford-San Francisco Bay Area CAB demonstrated that empowering community members through direct participation and creating channels and safe spaces for feedback help create community-rooted research that carries the true voices of marginalised communities and reflects their evolving needs. In addition to describing the development of the CAB, the authors present the findings of its evaluation and the synthesis of best processes based on CAB members' feedback. Our approach to the development of the CBPR partnership with Muslim communities in the San Francisco Bay Area, California was guided by an adaptation of Wallerstein and Duran. A summary of the conceptual logic model is illustrated in the corresponding figure. The Stanford Muslim Mental Health Lab & Islamic Psychology (MMHIP) lab was established in 2014 as an academic home for the study of mental health in the context of the Islamic faith and Muslim populations. The Lab aims to provide intellectual resources to clinicians, researchers, trainees, educators, community and religious leaders working with or studying Muslims. In 2016, the Center for Clinical and Translational Research and Education (SPECTRUM) at Stanford University awarded a pilot grant to the MMHIP lab in an effort to enhance an emerging research partnership with the Muslim Community Association (MCA), the largest Muslim community centre in the SFBA. Utilising a CBPR approach, one of the major goals of the project was the establishment of a CAB made up of Muslim representatives residing in the SFBA to lead the research partnership and guide the design and implementation of research projects. The primary goal of the CAB was to help the research team design and conduct focus groups to explore barriers and facilitators to utilisation of mental health services among Muslims residing in the SFBA. It is estimated that over 250\u00a0000 Muslims reside in the SFBA, California, making it one of the largest Muslim populations in the United States. Representatives drawn from this community served on the CAB during the year of 2016. The CAB members agreed to serve a renewable, one-year term and participated in monthly 2-h meetings. The participation was entirely voluntary with no monetary compensation. The primary goal of the CAB was to guide the design and implementation of community-sensitive focus groups to explore the barriers and facilitators of their fellow community members' utilisation of mental health services. To help the CAB carry on this role, the research team and the CAB were engaged in collaborative monthly meetings with ongoing mutual learning about the research process and what best fit the community. The CAB was introduced to the principles of CBPR and to a research process framework, which included a brief introduction to the concept of the IRB, ethical research standards and confidentiality, and the concept and steps needed to conduct focus groups. Details on the scope of the meetings are included in online Supplementary appendix 1. CAB participation spanned various domains that included providing feedback on the makeup and the needs of the SFBA community.
With increasing dedication and motivation of the CAB, members contributed across the entire research design and implementation process, from participating in the development of the focus group manual and formulating the case scenarios that were used to facilitate focus group discussions to eventually leading the recruitment process, including the development of the recruitment matrix, the advertisement of the focus groups and the recruitment of participants. CAB activities were all conducted during the monthly meetings except for the day of the implementation of the focus groups. It is important to note that the establishment of the CAB took place in 2016, a year in which a spike of Islamophobia was observed in light of the presidential election cycle. Commenting on the composition of the board, members also wished for greater diversity, including \u2018more men.\u2019 Suggested strategies to increase diversity were: (1) physically rotating the site of the meeting to create accessibility for all; (2) making announcements at centres where marginalised communities congregate, using social media and creating third spaces; and (3) encouraging current CAB members to recruit individuals who are not represented. When presented with a multiple-choice question developed by the evaluation team to elicit understanding of the purpose of the group as perceived by community members, the consensus amongst participants regarding the applicable statements was as follows: to assess, recommend and evaluate the availability and accessibility of mental health services to the Muslim community (43%), to evaluate and analyse the mental health care needs of the Muslim community (29%), to understand mental health services barriers/stigma (14%), to raise awareness of mental illness within the Bay Area Muslim community (7%) and to gather a group of educated Muslim professionals to get together and discuss the matter going on in their worlds (7%). Quantitative assessment of group dynamics also indicated overall satisfaction with the process of running the CAB: 86% of participants found the group meetings useful, 79% reported that they enjoyed attending the group meetings, and 71% reported that the meetings were well organised. In addition to overall group satisfaction and understanding, two main themes were discussed amongst participants during the open-forum discussion regarding CAB operation: successful strategies of CAB operation and recommendations for further enhancements. Successful strategies were identified based on expressed satisfaction from CAB members regarding the following topics: leadership, the positive culture that prevailed, the group dynamics, meeting workflows and direct participation and engagement in research design and implementation (see Table 1). Recommendations for operation enhancement included further refinement of logistical operations with sustainability in mind, iterative development of the CAB training and educational curriculum and the possibility of compensation of CAB members for their participation. One member suggested future utilisation of strategies to create opportunities to enhance the creativity and heterogeneity of CAB feedback. A few CAB members also reported interest in incorporating spirituality into meetings, stating that \u2018[they] would like to see a spiritual warm up or prayer scheduled into the meeting.\u2019 The CAB members expressed interest in maintaining the work beyond the focus groups and crafting short- and long-term outcomes. The goals suggested included more research activities, mental health interventions, building collaborations and mental health advocacy.
One participant expressed that he would like the goals to include \u2018raising mental health awareness, eliminating stigma, and easier access to services and providers.\u2019 Another member advocated for the development of more \u2018support groups and discussions, focus groups, and more meetings with key stakeholders in our community.\u2019 Some participants stressed the importance of building collaborations with other Muslim and faith-based mental health organisations. One participant offered insight into the nature of these collaborations and the types of institutions that should be involved: \u2018think tanks, meetings, and continuous dialogue to identify other mental health or social welfare organizations, particularly faith-based organizations; and collaborate, exchange ideas, and work on projects, all of which should start at the county level.\u2019 This case study reviewed the formation, operation and maintenance of a CAB using best processes. The formation and operation of the CAB were founded on CBPR principles that included a strength-based approach that leverages existing resources and networks from within the community, models respect and values mutual learning and power sharing. The long-standing relationship between the Stanford Muslim Mental Health and Islamic Psychology Lab and the MCA provided an invaluable opportunity to build upon this tenet of CBPR by providing a strong foundation for the formation of the Stanford-SFBA CAB. Identifying key stakeholders who have credibility in the community is a vital step in building trust with a partnering community in any CBPR model. In our case, the MCA was the trustworthy entity that facilitated the inception of the partnership and provided a safe physical space rooted in shared values and morals that allowed the partnership to flourish. In addition, the research team's lead investigator is from the Bay Area Muslim community and was well known and respected for both her scientific and religious backgrounds, both of which provided credibility and comfort to research participants that this project would be spiritually and culturally congruent. As part of the maintenance phase described by Newman et al., although feedback from CAB members and its evaluation was largely positive, key challenges were highlighted that were largely related to recruitment diversity, retention strategies and increased support of CAB member participation through monetary compensation. Given that the SFBA is home to very diverse Muslim communities, challenges in recruiting representatives who account for all possible self-identifying groups were expected. However, through an iterative and social-justice-integrated recruitment paradigm, CAB members are already improving community representation through the continuous recruitment of underrepresented ethnicities, religious sectors and organisations. Retention continues to be a challenge in maintaining a CAB with the active participation of 8\u201314 diverse community members in meetings. Although the Stanford Muslim Mental Health and Islamic Psychology Lab team continues to apply for grants to secure monetary compensation, evaluation of CAB feedback only further highlights another of the five key tenets of CBPR according to McAllister et al.: sufficient resources. The Stanford-SFBA partnership, which was guided by a commitment to community-rooted research, brought religious and cultural sensitivity and contributed towards meaningful translational research in the field of Muslim mental health.
The development of the Stanford-SFBA CAB and its subsequent outcomes, both intended and unintended, have exceeded our expectations as a research team. While our primary aim was to include community voices in the implementation of religiously and culturally sensitive focus groups, the CAB was empowered to become a platform for the inception of many mental health projects. Despite the need for public health funding, the CAB continues to function at a high momentum, fuelled by self-efficacy, dedication of its members and support from the SFBA Muslim communities."} +{"text": "Personality disorders are frequently encountered by all healthcare professionals and can often pose a diagnostic dilemma due to the crossover of different traits amongst the various subtypes. The ICD 10 classification comprised of succinct parameters of the 10 subtypes of personality disorders but lacked a global approach to address the complexity of the disease. The ICD 11 classification provides a more structural approach to aid in clinical diagnosis.A literature review of the diagnostic applicability of ICD 11 classification of personality disorders is presented in comparison with the ICD 10 classification.A retrospective analysis of the literature outlining the ICD 10 and 11 classifications of personality disorders, exploring the differences in evidence-based applications of both.The ICD 11 classification of personality disorders supersedes the ICD 10 classification in describing the severity of the personality dysfunction in conjunction with a wide range of trait domain qualifiers, thus enabling the clinician to portray the disease dynamically. The current evidence available on the utility of the ICD 11 classification gives a promising outlook for its application in clinical settings.The ICD 11 has transformed the classification of personality disorders by projecting a dimensional description of personality functioning, aiming to overcome the diagnostic deficiencies in the ICD 10 classification. The versatility offered by the application of the ICD 11 classification can be pivotal in reshaping the focus and intensity of clinical management of the disease.No significant relationships."} +{"text": "The problem of homosexuality is constantly in the spotlight of the mass media, social media and politicians. At the same time, the cultural and national specificity of attitudes towards the phenomenon of homosexuality seems obvious, as well as a significant polarization of opinions within Russian society itself. With significant attention to this issue, there are not many attempts to analyze the socio-psychological basis of representations about homosexuality. At the same time, in a number of foreign studies it was revealed that the modern Z Gen is distinguished by greater tolerance and freedom of views in terms of attitude towards traditionally segregated social groups.The purpose of this study was to identify representations about homosexuality among different generations of modern Russians.The methodological basis of the research was the study of the structure of social representations (Vergesse methodique). The research methods implied the author\u2019s questionnaire aimed at identifying representations about homosexuality and a modified version of the RAHI questionnaire. The sample was N = 444 .There was shown a significant difference between the Z Gen in terms of tolerance of representations about homosexuality. 
So called \u2018double standards\u2019 were identified in terms of attitudes towards male and female homosexuality. The rooted concept of homosexuality as a relationship based, rather, on a sexual rather than a romantic-spiritual level of relationships, was stated.Main hypothesis was confirmed: an inverse relationship between age and perceptions of homosexuality as normative was revealed.No significant relationships."} +{"text": "Background and Objectives: Basal carcinoma of the skin (BCC) is part of the nonmelanoma skin cancer (NMSC) family and is the most frequently occurring type of skin cancer in humans. A combination of clinical and histopathological approaches is necessary in order to establish the best treatment regime for patients who have been diagnosed with this type of cancer. The objective of the present study was to establish the statistical value of prediction for certain sociodemographic characteristics (age category and environment of origin) and histopathological parameters of the subjects that could be related to the incidence of diagnosis with certain histopathological subtypes of BCC. Materials and Methods: In order to verify the veracity of the established research hypotheses, we conducted a retrospective study based on the histopathological reports of 216 patients who were treated at the Pathology Department of Mure\u0219 Clinical County Hospital. Results: Cystic BCC is higher in patients who are older than 71 years of age, and the superficial multicentric and keratotic subtypes are more frequently diagnosed in urban areas. Patients who have been diagnosed with the superficial multicentric BCC subtype are not usually very old in contrast to the patients who tend to be diagnosed with the cystic BCC subtype. The nodular BCC subtype is positively associated with ulceration (p = 0.004); the superficial multicentric BCC subtype is positively associated with intra- and peritumoral inflammatory infiltrate and negatively associated with ulceration . The infiltrative BCC subtype is positively associated with ulceration (p = 0.021), and the keratotic BCC subtype is positively associated with peritumoral inflammatory infiltrate (p = 0.02). Conclusions: Depending on each patient\u2019s epidemiological and sociodemographic data, a pattern can be established regarding the appropriate clinical and treatment approaches for that patient, which can be supported based on the implications of the histopathological diagnostic. This can lead to an improvement in the patient\u2019s quality of life and increased satisfaction with the provided medical services. Basal cell carcinoma of the skin can be defined as a group of malignant tumors that are part of the nonmelanoma skin cancers (NMSC) that include basal cell carcinoma (BCC), squamous cell carcinoma (SCC), and metatypical (basosquamous) carcinoma ,2,3. 
Risk factors for BCC include different categories of various associated parameters such as UV radiation, which includes sun exposure due to specific professions as well as exposure that occurs due to leisure activities, but other risk factors also include age, gender, a family history of skin cancer, and individual skin phenotype, among others [9,10]. Histopathological diagnostics remains one of the most important tools for the management of BCC, and it continues to be important due to the fact that the quality of life of patients who have been diagnosed with skin cancer must be improved, even though the rate of mortality due to BCC is not very significant [12]. The main objective of our study is to analyze the characteristics of skin BCC, which can be subdivided into six additional target objectives: (1) the collection and analysis of data from histopathological bulletins from tissue samples obtained through surgical excision from subjects who have been diagnosed with BCC within the period of 2016\u20132020; (2) the establishment of the incidence of the histopathological subtypes of BCC that have been diagnosed and their comparison according to the years of diagnosis; (3) the analysis of the degree of occurrence of histological and histopathological parameters in each subtype of diagnosed BCC; (4) the establishment of the statistical value of prediction of certain sociodemographic characteristics of the subjects (age category and environment of origin) on the incidence of diagnosis with certain histopathological subtypes of BCC; (5) the establishment of a correlation between the age of the subjects and the incidence of diagnosis with certain histopathological subtypes of BCC; and (6) the establishment of correlations between certain histological and histopathological parameters of the analyzed tumors and the different histopathological subtypes of diagnosed BCC. Based on these six target objectives, we aim to determine the veracity of three general research hypotheses, the first of which comprises the two following hypotheses: Hypothesis 1 (H1).\u00a0There are significant differences in the incidence of certain histopathological subtypes of BCC, and these differences depend on certain sociodemographic characteristics. Specific hypotheses: Hypothesis 1.1 (H1.1).\u00a0The incidence of certain histopathological subtypes of BCC differs significantly depending on the age category of the subjects. Hypothesis 1.2 (H1.2).\u00a0The incidence of certain histopathological subtypes of BCC differs significantly depending on the subject\u2019s environment of origin. Hypothesis 2 (H2).\u00a0There are statistically significant relationships between the age of the subjects and the incidence of certain histopathological subtypes of BCC that are diagnosed. Hypothesis 3 (H3).\u00a0There are statistically significant relationships between certain histological and histopathological parameters of the analyzed tumor formations and different histopathological subtypes of BCC that are diagnosed. In order to determine the veracity of the established research hypotheses, a retrospective study was performed between 2016 and 2020 that involved the analysis of data from histopathological bulletins of skin tissue samples that were obtained through surgical excision and that were received from the departments of Dermatology, Plastic Surgery, or General Surgery in the Pathology Department of Mure\u0219 County Clinical Hospital.
Tissue samples came from outpatient visits, day hospitalizations, or continuous hospitalizations and were obtained from a total of 216 patients. The inclusion criteria for the patients to be enrolled in the study were the diagnosis of skin BCC established by our pathology department and the period when the diagnosis was established. The exclusion criteria were other histopathological diagnoses and a BCC diagnosis that was established outside of the study period. All of the histopathological diagnostics that met the inclusion criteria were included in the study. The limitations of the study were represented by the fact that the classification of BCC presented in this study is based on the data from the histopathological reports in our Pathology Department on a retrospective basis. Sample analysis was conducted by embedding the tissues in paraffin, and the tissues were stained with hematoxylin\u2013eosin; through this procedure, histopathological diagnoses of basal cell carcinoma (BCC) of the skin were established. All tissue samples came from complete excision of the lesion. The following factors and parameters were analyzed: (1) year of diagnosis; (2) age of the patient at the time of diagnosis; (3) area where the patient lived at the time of diagnosis; (4) gender of the patient; (5) tumor ulceration and the presence or absence of a hemorrhage or inflammation area on the surface of the tumor; (6) excision within surgical safety limits and whether or not the tumor resection margins were invaded by the tumor, in order to determine whether it had been completely excised and whether any tumoral tissue remained inside the patient; (7) intratumoral inflammatory infiltrate and whether or not any inflammatory infiltrate was present inside the tumor; (8) peritumoral inflammatory infiltrate and whether there was any inflammatory infiltrate around the tumor; and (9) perineural/perivascular invasion of tumor cells. After the data were collected, they were coded with their nominal and ordinal values so that they could be entered into SPSS.
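The coding step described above can be illustrated with a minimal, hypothetical sketch (shown here in Python rather than SPSS); the column names, example records and file name are assumptions for illustration, with presence/absence findings mapped to the 1 = absent / 2 = present convention reported later in the statistical analysis.

```python
# Hedged illustration only: encode presence/absence findings as ordinal codes
# (1 = absent, 2 = present) so they can be exported for statistical software.
import pandas as pd

raw = pd.DataFrame({
    "patient_id": [1, 2, 3],
    "age": [68, 74, 59],                      # hypothetical values
    "environment": ["urban", "rural", "urban"],
    "nodular_subtype": [True, False, True],   # hypothetical findings
    "ulceration": [False, True, True],
})

coded = raw.copy()
for col in ["nodular_subtype", "ulceration"]:
    # map boolean findings to the 1 = absent / 2 = present coding scheme
    coded[col] = raw[col].map({False: 1, True: 2})

# categorical sociodemographic variables can be coded numerically as well
coded["environment"] = raw["environment"].map({"urban": 1, "rural": 2})

coded.to_csv("bcc_coded.csv", index=False)  # file name is illustrative
print(coded)
```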
The collected data were coded in accordance with each type of data analyzed. t-tests for independent samples and Pearson bivariate correlations were conducted to determine the veracity of the research hypotheses. The statistical analysis consisted of the use of frequency and comparative tables and graphs for each of the six histopathological subtypes of basal cell carcinoma that were diagnosed and for the five histological and histopathological parameters of the analyzed tumors. To verify the first research hypothesis and to measure significant differences in the incidence of certain histopathological subtypes of basal cell carcinoma, t-tests were used for independent samples to analyze the differences between the following subgroups: subjects under the age of 70 and those over the age of 71; and subjects from urban and rural areas. To verify the second research hypothesis, Pearson bivariate correlations were used to analyze the existence of statistically significant relationships between the age of the subjects and the incidence of certain histopathological subtypes of BCC that had been diagnosed. To verify the third research hypothesis, Pearson bivariate correlations were also used to analyze the existence of statistically significant relationships between each of the five histological and histopathological parameters of the analyzed tumors and the six histopathological subtypes of BCC that had been diagnosed. To determine the veracity of the research hypotheses, the data from the histopathological bulletins of the tissue samples that had been obtained by surgical excision from 216 subjects who had been diagnosed with basal cell carcinoma between 2016 and 2020 were analyzed. There are significant differences in the incidence of certain histopathological subtypes of BCC that are dependent on certain sociodemographic characteristics of the subjects. This hypothesis refers to the diagnostic incidence of subjects with the six histopathological basal cell carcinoma subtypes, depending on their sociodemographic characteristics. We chose the age group to which each subject belonged and their environment of origin as two sociodemographic characteristics representing differentiating criteria, and thus, we divided this general hypothesis into two specific hypotheses. The two specific hypotheses refer to the sociodemographic characteristics according to which we assumed that there would be significant differences in terms of the diagnostic incidence of subjects with the six histopathological BCC subtypes. The incidence of certain histopathological subtypes of basal cell carcinoma differs significantly depending on the age of the subject. In order to verify this hypothesis, a t-test was used for independent samples in order to measure the existence of significant differences in the incidence of diagnosis with the six histopathological basal cell carcinoma subtypes between subjects under 70 years of age (n = 112) and those over 71 years of age (n = 104).
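A minimal sketch of the independent-samples t-test described above, using simulated data rather than the study's dataset; group sizes follow the reported n = 112 and n = 104, but all values, proportions and variable names are hypothetical assumptions.

```python
# Hedged sketch of the independent-samples t-test described above, applied to the
# 1 = absent / 2 = present incidence coding; data are simulated, not the study's.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# simulate incidence coding (1 = absent, 2 = present) for one BCC subtype
under_70 = rng.choice([1, 2], size=112, p=[0.9, 0.1])   # n = 112, as reported
over_71 = rng.choice([1, 2], size=104, p=[0.75, 0.25])  # n = 104, as reported

t_stat, p_value = stats.ttest_ind(under_70, over_71, equal_var=True)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")

# the same coded variables can be fed to Pearson bivariate correlations, e.g.
# between age and subtype incidence, as done for the later hypotheses
ages = np.concatenate([rng.integers(30, 70, 112), rng.integers(71, 95, 104)])
incidence = np.concatenate([under_70, over_71])
r, p = stats.pearsonr(ages, incidence)
print(f"Pearson r = {r:.3f}, p = {p:.4f}")
```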
The values, significance thresholds of t, and the average incidence of the six histopathological basal cell carcinoma subtypes with which subjects under 70 years of age and those over 71 years of age were diagnosed are shown in the corresponding table. The incidence of certain histopathological BCC subtypes differs significantly depending on the environment of origin of the subjects. In order to verify this hypothesis, a t-test was used for independent samples in order to measure the existence of significant differences in the incidence of diagnosis with the six histopathological basal cell carcinoma subtypes between urban subjects (n = 110) and those living in a rural environment (n = 106). The values, significance thresholds of t, and the average incidence of the six histopathological BCC subtypes with which urban and rural subjects were diagnosed are shown in the corresponding table. There are statistically significant relationships between the age of the subjects and the incidence of certain histopathological subtypes of BCC that were diagnosed. In order to verify the veracity of this hypothesis, we used Pearson bivariate correlations to determine the existence of statistically significant relationships between the age of the subjects and the incidence of their diagnosis with the six histopathological BCC subtypes: nodular, superficial multicentric, adenoid, cystic, infiltrative, and keratotic. The values and significance thresholds of the correlation coefficients are shown in the corresponding table. Correlations were calculated based on the values related to the coding of each histopathological BCC subtype, with 1 = absent and 2 = present. There are statistically significant relationships between certain histological and histopathological parameters of the analyzed tumor formations and the different histopathological subtypes of BCC that were diagnosed. In order to determine the veracity of this hypothesis, we used Pearson bivariate correlations to determine the existence of statistically significant relationships between each of the five histological and histopathological parameters of the analyzed tumors (ulceration, excision within surgical safety limits, intra- and peritumoral inflammatory infiltrate, and perineural and perivascular invasion) and the incidence of diagnosis with each of the histopathological BCC subtypes. The correlations between each parameter of the analyzed tumor formations and the incidence of diagnosis with the six histopathological basal cell carcinoma subtypes mentioned above are presented in the corresponding table. Nonmelanoma skin cancers (NMSC) are divided into the categories of basal cell carcinoma (BCC), squamous cell carcinoma (SCC), and metatypical (basosquamous) carcinoma. From these categories, BCC is the most frequent type of NMSC found worldwide, making it the most frequently observed skin cancer type. The first part of our study presented some demographic characteristics that may be attributed to the incidence of BCC among the Romanian population, and especially in those members of the population who visit the Pathology Department of the Mure\u0219 Clinical County Hospital, which is where the analyzed samples were collected. The COVID-19 pandemic strongly influenced the way that patients were treated in terms of surgery, especially oncological patients. The expectation is that the number of skin cancers that will be diagnosed will increase.
Specifically, there will not only be an increase in the number of skin BCC diagnosed, but there will also be an increase observed in the other NMSC categories as well as in the number of melanoma cases. Unfortunately, we are prepared to see cases of skin cancer that are in more advanced stages due to the fact that patients did not receive adequate treatment on time, as well as because skin cancer management during the diagnosis and treatment phases was affected by the transformation of hospitals in to hospitals that were meant for the exclusive treatment of COVID-19. The Omicron variant of the SARS-CoV-2 virus may further the diagnosis of NMSCs by overloading healthcare systems worldwide ,16.The gender distribution of the entire group of subjects is approximately equivalent, with 54% of the cases being male patients and 46% being female patients. In terms of the gender distribution of the subjects, depending on the year when the patient was diagnosed with BCC, most of the male subjects were diagnosed in 2019 (39%), and most of the female participants were also diagnosed in 2019 (34%). In their research, Fidelis et al. mentioned that that there was a higher incidence of BCC in men than there was in women in Brazil in 2021 ; this isRegarding the age distribution of the subjects depending on the year in which they were diagnosed with BCC, most of the participants who were under 70 years old were diagnosed in 2019 (37%), and most of the patients over the age of 71 were diagnosed in 2019 (37%), 2018 (24%), and 2020 (17%). Based on Lang et al., most cases of BCC are diagnosed in individuals who are 71 years of age and over. This is also in accordance with the results that were obtained in our study .The distribution of BCC cases based on the origin of the patients at the level of the study group was approximately equivalent, with 51% of the individuals coming from urban areas, and 49% of the included individuals coming from rural areas. Regarding the environmental distribution of the subjects based on the year in which they were diagnosed with BCC, most of the people who were living in urban areas were diagnosed in 2019 (38%), and most of the people who were living in rural areas were also diagnosed in 2019 (35%). In their study, Carcin et al. observed a higher incidence of BCC in urban areas such as Dublin, Cork, Galway, and Waterford in Ireland, which can also be seen in our study area of T\u00e2rgu Mure\u0219 . Targu MThere are various BCC subtypes, with many lesions only presenting one BCC subtype; however, there have been cases where more than one BCC subtype has been found in an individual lesion. Our 216 subjects were divided into two main categories: those demonstrating only one BCC subtype in their histopathological diagnostic, and those with a mixed BCC type. A total of 60% of the 216 subjects were diagnosed with mixed BCC, presenting several histopathological subtypes of this neoplastic disease, while 40% were diagnosed with simple BCC, presenting a single histopathological subtype. One third of the subjects (35%) presented with two subtypes, and 20% presented with three histopathological BCC subtypes. Of the subjects with a mixed diagnosis, almost half (47%) represented the nodular BCC subtype, which was the most frequent one, a fact that was also mentioned by Chung in his research, but this type occurred less frequently in our study . ApproxiThe second part of our study was targeted on proving our three established research hypotheses. 
Taking the hypothesis that there are significant differences between subjects under 70 years of age and those over 71 years of age into consideration, we found that there were significant differences in terms of the incidence of the cystic BCC subtype. Ulceration occurs in more than half of the cases of the infiltrative BCC subtype alone, which is in direct accordance with Tanase, who states that the infiltrative subtype of BCC is the most common subtype that is able to present with ulcerations. Ulceration showed a highly significant and positive correlation with the incidence of the diagnosis of subjects with the nodular BCC subtype and with the infiltrative BCC subtype. The higher the incidence at which subjects were diagnosed with the nodular and infiltrative BCC subtypes, the higher the probability of ulceration in these histopathological subtypes of BCC. It can be said that most diagnoses of the nodular and infiltrative subtypes of BCC are associated with ulceration. Ulceration showed a significant and negative correlation with the incidence of the diagnosis of subjects with superficial multicentric BCC, which means that the higher the incidence at which patients are diagnosed with superficial multicentric BCC, the lower the probability of ulceration. Most diagnoses of the superficial multicentric BCC subtype are not associated with ulceration, as proven by our results. Regarding excision within surgical safety limits, it can be seen that excision within surgical safety limits was possible in more than half of the cases in all of the six subtypes of BCC that were diagnosed. In their systematic review from 2020, Quazi et al. observed that excision within surgical safety limits depends on the subtype of BCC.
This means that most diagnoses of superficial multicentric and keratotic BCC subtypes are associated with peritumoral inflammatory infiltrate.Neither perineural nor perivascular invasion have an incidence in more than half of the cases of any of the diagnosed BCC subtypes.The data that were analyzed here showed the existence of statistically significant relationships between five of the histological and histopathological parameters of the analyzed tumors, specifically ulceration, excision within surgical safety limits, and intra- and peritumoral inflammatory infiltrate, and the different histopathological subtypes of basal cell carcinoma that were diagnosed, leading to the confirmation of the third general research hypothesis.The present study confirmed the research hypotheses that were established and emphasized the importance of taking the specific particularities of each individual patient starting from their age into consideration before arriving at an adequate diagnosis. The incidence of the cystic subtype of BCC is statistically higher in patients who are older than 71 years of age. In terms of environment, the superficial multicentric and keratotic BCC subtypes are found more often in patients from urban areas. The patients who had been diagnosed with the superficial multicentric BCC subtype tended not to be very old, which was in contrast with the patients who were diagnosed with the cystic BCC subtype, who tended to be older. The relationships between the statistically significant association between the diagnosed basal cell carcinoma subtypes and the histological and histopathological parameters that are present have shown that the nodular BCC subtype is positively associated with ulceration, the superficial multicentric BCC subtype is positively associated with intra- and peritumoral inflammatory infiltrate; the cystic BCC subtype is positively associated with excision within surgical safety limits; the infiltrative BCC subtype is positively associated with ulceration; and the keratotic BCC subtype is positively associated with peritumoral inflammatory infiltrate. In conclusion, using the epidemiological and sociodemographic data of a patient can create a pattern for clinical and treatment approaches. This can lead to improvements in patient quality of life and increased satisfaction regarding the medical services provided."} +{"text": "More than 92% of people in Bangladesh are deprived from any sort of mental health care due to severe scarcity of mental health professionals, widespread stigma, lack of awareness, the inability to travel from remote area to Dhaka and maintaining the cost of travel and clinics. Moreover, the COVID-19 crisis made the scenario worse. To solve this problem we designed, developed and implemented \u201cMonerdaktar\u201d.The process development Monerdaktar- website and mobile application started with the initial idea and concept by TRS followed by extensive literature review and naturalistic observation of the mental health care service delivery from two tertiary hospitals in Bangladesh. We conducted 3 focus group discussion with the patients, their care givers, mental health professions. Based on the user feedback and technical suggestion of the mental health professional and IT professionals we developed the prototype of the Monerdaktar mobile application and website. 
After piloting for two months, the final version of the mobile application and website was finalized, incorporating the feedback of the patients and experts. Monerdaktar created a unique opportunity to connect online with highly reputed mental health professionals, both psychiatrists and clinical psychologists. Moreover, Monerdaktar delivered the service free of cost to more than 700 clients during the peak of the COVID crisis in Bangladesh. The COVID-19 crisis has potentiated the acceptance and adoption of the platform. Monerdaktar solved the long-standing crisis of access to mental health care in Bangladesh and ensured evidence-based care from anywhere. The Monerdaktar website and mobile application was designed and developed under the leadership of Dr. Tanjir Rashid Soron. Though the initial support was delivered free of cost, the consulting expert psychiatrists and clinical psychologists may take their"} +{"text": "The field of child and adolescent psychiatry is receiving growing attention, although a number of local differences still exist in terms of academic curricula, board certifications and even definitions of what is to be considered part of this field or not. An Italian study showed that approximately 1 out of 10 children showed significant psychopathological problems. The main goal of this Special Issue is therefore to provide cutting-edge data regarding all aspects of child and adolescent psychiatry, including (but not limited to) the etiopathogenesis and presentation of the different disorders, treatment options and management of comorbidities. This is important because the peculiarities of the psychopathological manifestations in different age groups are increasingly recognized in classification systems, when a developmental perspective is applied. Two papers in this Special Issue are case reports. Colizzi et al. provide a detailed description of a case of mosaic trisomy 20, analyzing the neuropsychiatric aspects in the light of a careful analysis of the existing literature. Esposito et al. present the second case report. Three more papers focused on different psychopathological sequelae of traumatic experiences. Calvano et al. provide evidence of the utility of a highly specialized approach in the form of an outpatient trauma clinic to start the therapeutic process and increase victims\u2019 motivation towards a more prolonged intervention. Forresi et al. also contribute to this group of papers. Two papers focused on the consequences of the COVID-19 pandemic. De Pasquale et al. studied the prevalence of online videogaming during the so-called \u201clockdown\u201d in Italy, with anxiety being a relevant predictor of videogame use and addiction. The other papers testify to the variety of subjects and approaches in child and adolescent psychiatry. Thun-Hohenstein et al.
applied a naturalistic approach to assess the outcome of a group of adolescents consecutively admitted to inpatient and day-clinic treatment. These contributions, with their variety in terms of explored topics and used methods, provide a mirror of the complexity of the topic of this Special Issue; hopefully, they will be thought- and action-stimulating reading for all those willing to pay comprehensive attention to children and adolescents."} +{"text": "The latest developments in the field of road asphalt materials and pavement construction/maintenance technologies, as well as the spread of life-cycle-based sustainability assessment techniques, have posed issues for the continuous and efficient management of data and for the related decision-making process for the selection of appropriate road pavement design and maintenance solutions; Infrastructure Building Information Modeling (IBIM) tools may help in facing such challenges due to their data management and analysis capabilities. The present work aims to develop a road pavement life cycle sustainability assessment framework and to integrate such a framework into the IBIM of a road pavement project through visual scripting, in order to automatically provide the informatization of an appropriate pavement information model and to evaluate sustainability criteria already in the design stage through life cycle assessment and life cycle cost analysis methods. The application of the proposed BIM-based tool to a real case study allowed us (a) to draw considerations about the long-term environmental and economic sustainability of alternative road construction materials and (b) to draft a maintenance plan for a specific road section that represents the best compromise solution among the analyzed ones. The IBIM tool represents a practical and dynamic way to integrate environmental considerations into road pavement design, encouraging the use of digital tools in the road industry and ultimately supporting a pavement maintenance decision-making process oriented toward a circular economy.
Road pavement management is a complex process that involves continuous monitoring and design of maintenance actions to keep the pavement in the desired structural and functional conditions and minimize life cycle costs ,2.Therefore, proper pavement maintenance planning can be regarded as a strategic approach to achieve capital spending rationalization, risk control, performance preservation, stakeholders\u2019 satisfaction and conservation of natural resources; all these challenges should be faced efficiently at all stages of the pavement life cycle by collecting and running analytics .Considering the increasing importance of delivering more sustainable and durable road pavements, the life cycle thinking approach has been regarded as an effective methodology to measure the overall sustainability of a project, both from an economic and environmental point of view, with a view on the whole life cycle from conception and design throughout the service life of the infrastructure up to dismission, demolition, disposal or recycling ,5.Some life cycle techniques are well established in the construction industry and especially in the field of maintenance planning and management, i.e., life cycle cost analysis (LCCA), while others, such as life cycle assessment (LCA), are still on the way to be fully integrated into common infrastructure sustainability rating systems applied to road pavement management.In detail, with the aim of lowering the consumption of nonrenewable raw materials and fossil fuels in the construction and maintenance of asphalt road pavements, researchers are trying to quantify, through internationally consolidated and standardized LCA methodology, the potential environmental impacts and the relative benefits of alternative bituminous materials containing secondary raw materials in substitution of natural ones ,7, bitumA common issue when applying LCA to support decision making in the field of road pavement management is that it often requires large time expenditure and data; therefore, LCAs are often performed at the end of the design phase, when most of the design configuration has been already defined, leaving no other time to incorporate the environmental sustainability rating into decision making .As new digital technologies are emerging in the construction sector, new tools are available to researchers to ease information management and consequent decision making by running analyses in an automated context before setting up the pavement design configuration.As a matter of fact, the digitalization of road infrastructure projects can be supported by workflows that improve not only the quality of the delivered project but also the efficiency of their development, improving communication and data flow between project participants. More and more digital tools and technologies are supporting construction and maintenance processes of road infrastructures, such as, in the case of linear infrastructures, infrastructure building information modeling (IBIM); IBIM has been already regarded as a tool that can help in the early detection of omissions and errors, improved productivity, structure simulation and analysis and improvement of communication between the actors of the process through more informed participation and data sharing. 
However, the adoption of IBIM in the infrastructure field and the issues related to linear asset management still significantly hinder further developments and adaptations to fully make the most of the listed benefits .Therefore, IBIM tools have recently begun to spread across infrastructure engineers to efficiently archive, store, manage and analyze large amounts of data generated by different actors and analytic tools involved in the project, ultimately aiming to support decision making in road asphalt pavement design and management ,15. For Looking more specifically at the leverage of IBIM tools to integrate pavement structural and performance data, Tang et al. developeGradually, the need to control sustainability indicators and generate continuously relevant information related to life cycle environmental sustainability throughout the infrastructure project life cycle has found the potential application of IBIM tools, moving forward from just geometrical detail, 3D visualization ,21 and iTherefore, the full development of IBIM potential delivers a digital and smart representation of data-enriched objects created through the collaboration between the involved parties to provide feedback at the earliest possible time, improving decision-making processes, and prompt project efficiency at all stages of the life cycle .Little effort has been dedicated to the full integration of IBIM potentialities with life-cycle-based methodologies to assess the environmental and cost sustainability of a road project in light of more efficient decision making.Looking at what has been achieved in other engineering fields, Ant\u00f3n and D\u00edaz introducLooking at how researchers leveraged the information exchange to implement basic environmental sustainability assessments in the IBIM environment, Liu et al. proposedAlthough the BIM/IBIM research has shown efficient possibilities to customize projects with detailed and automated analysis tools to integrate sustainability rating and criteria already in the design stage , a comprehensive effort has not been dedicated yet to (a) the depiction of a life-cycle-based sustainability framework specifically designed for IBIM of road pavement projects and (b) the integration of both the aspects of sustainability assessment/rating and planning of future maintenance interventions according to rational decision-making criteria within the IBIM of a road pavement.Definition of a decision support system oriented to reactive and predictive maintenance, which considers the variables related to economic and financial aspects up to the environmental and technical\u2013operational ones related to the decay of specific status indicators; the framework must be compliant with ISO 19650 information management protocols see and is aThe introduction of laboratory results related to road construction materials into a BIM workflow.Digitization of the road pavement management process through the definition of a pavement information model aimed at IBIM-based sustainable maintenance management. 
Integration is intended not only for the automation of data management but also for the definition of specific information exchange management protocols related to the life cycle of a road pavement.For these reasons, the present research aims to fill the following gaps:The novelty of the study mainly consists of filling the knowledge gap on the use of IBIM as a decision-making tool for the selection of pavement design configurations to be applied during the construction/maintenance process according to multiple sustainability criteria; to reach this goal, a theoretical framework has been developed and subsequently coded as an IBIM plugin.The main practical\u2013applicative feedback and relevant deliverable of the present work is the IBIM plugin in support of the designers and the road management bodies themselves, which interacts with the informative content of an IBIM pavement digital model, supported by continuous and updated flows of data related to the monitoring of the pavement in situ, to provide instant, automated and continuous prediction of the performance, costs and environmental impacts of the life cycle.The present section focuses on the main methodologies applied to design the methodological framework on which the actual IBIM-based analytical tool relies, also in light of the definition of a proper pavement information model.First, the condition indicators, their thresholds and consequent maintenance actions were set up.During the service life of a pavement, its conditions should be accurately evaluated to identify the severity of pavement damages and types of pavement distress. Therefore, monitoring systems are considered a significant step in the maintenance processes.A typical document based on which visual assessments of pavement distresses are made is the Distress Identification Manual for the long-term pavement performance program .The type of pavement distresses and their degree of severity and extension concur to a single global indicator of pavement condition, i.e., the PCI, a widely used index derived from individual distress deduct values developed in the late 1970s by the U.S. Army Corp of Engineers . It provThe reactive maintenance approach involves the continuous definition of the maintenance activities as a result of the distress surveys carried out through time; corrective measures are only initiated after clear pavement distress or other deficiencies in road condition have been identified . InsteadDisregarding any rational approach to road pavement maintenance will result in what is often addressed as \u201curgent maintenance\u201d, in which deep repairs are carried out after an event occurs that cannot be foreseen but requires immediate attention due to user safety concerns . The afoAiming to set up an automated reactive maintenance algorithm, a rigid set of rules and actions was originally developed in the present work based on the type, severity and density of each distress identified according to the Distress Identification Manual for the long-term pavement performance program . 
In detail, three main maintenance interventions were triggered by the PCI values: Surface rehabilitation: it implies the milling and reconstruction of the wearing course and the assessment of the condition of the binder layer; Deep rehabilitation: it consists of the milling and reconstruction of the wearing course and binder layer; Reconstruction: it involves the full reconstruction of the asphalt layers and the subsequent control of the subbase bearing capacity. PCI values above 85 imply no maintenance intervention due to the good quality of the pavement surface. Once the type of maintenance action is defined, a proper pavement library must be set up to choose from alternative design configurations. Road flexible pavement design is a fundamental step to ensure the compatibility of the materials, pavement geometry and subgrade conditions with the loads expected during the useful life of the pavement. Ultimately, pavement design aims to define the optimum thickness of the pavement layers, given the structural model as an elastic, homogeneous and isotropic multilayer and several input data, such as the mechanical performance of the materials used for each layer and the design service life, to prevent excessive extension of fatigue cracking damage and rut depth. The target of pavement design is to ensure that specific distress indicators, such as fatigue cracking and rutting, are contained below certain predetermined thresholds. Fatigue cracking is a typical distress mechanism that occurs in flexible pavements; it starts at the bottom of the asphalt base layer and propagates to the surface as one or more interconnected cracks. An excessive extension of the area of the pavement affected by fatigue cracking may lead not only to reduced ease and safety in driving but also to the percolation of hazardous substances from the bonded layers to the soil and underground waters, especially when recycled materials are added to the asphalt mixtures. The fatigue damage (FD), which must be kept below 1 to avoid excessive deterioration of the surface quality, is determined as the sum of the seasonal shares of relative damage produced by the ESAL passages during the i-th season, according to the Miner law reported in Equation (1), FD = \u2211i (ni/Ni), where ni is the number of ESAL passages in the i-th season, computed through the AASHTO design method, and Ni is the corresponding number of admissible passages to fatigue failure. Since all the criteria reported in the decision matrix have different units of measurement, normalization was carried out, comparing the values assumed by each criterion among all maintenance solutions, to obtain the normalized decision matrix. Each i-th maintenance alternative is then assigned a synthetic score based on the utility method, equal to the weighted sum of the normalized values of the indicators multiplied by the weight assigned to each j-th indicator (see Equation (10)), i.e., Ui = \u2211j wj rij, where rij is the normalized value of the j-th indicator for the i-th maintenance alternative. Before the actual coding of the analysis tool, a careful pavement information model was defined considering the expected life cycle management results. Then, the main explanatory variables of the maintenance alternatives required careful and broad data collection.
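To make the decision logic described above easier to follow, a minimal sketch is given below. It is an illustration only: the PCI thresholds below 85, the example criteria values, the even weight vector and the benefit/cost directions are assumptions for demonstration, not the figures of the present work nor its Dynamo/Civil 3D implementation.

    # Illustrative sketch: PCI-triggered maintenance rule and utility-method
    # scoring of a decision matrix. Thresholds below 85, example values and
    # weights are hypothetical.

    def maintenance_action(pci):
        """Map a PCI value to a maintenance action (hypothetical thresholds below 85)."""
        if pci > 85:
            return "no intervention"
        if pci > 70:   # assumed threshold
            return "surface rehabilitation"
        if pci > 55:   # assumed threshold
            return "deep rehabilitation"
        return "reconstruction"

    def normalize(matrix, benefit):
        """Min-max normalize each criterion (column) across all alternatives.
        benefit[j] is True when higher values are better for criterion j."""
        cols = list(zip(*matrix))
        norm = []
        for row in matrix:
            norm_row = []
            for j, x in enumerate(row):
                lo, hi = min(cols[j]), max(cols[j])
                r = (x - lo) / (hi - lo) if hi > lo else 1.0
                norm_row.append(r if benefit[j] else 1.0 - r)
            norm.append(norm_row)
        return norm

    def utility_scores(matrix, weights, benefit):
        """Weighted-sum (utility-method) score for each maintenance alternative."""
        return [sum(w * r for w, r in zip(weights, row))
                for row in normalize(matrix, benefit)]

    # Rows: maintenance alternatives; columns: service life [y], life cycle cost, GWP.
    decision_matrix = [
        [20, 100, 80],   # alternative A (hypothetical values)
        [25, 130, 60],   # alternative B
        [18, 90, 95],    # alternative C
    ]
    weights = [1 / 3, 1 / 3, 1 / 3]      # even weight distribution
    benefit = [True, False, False]       # service life is a benefit, cost and GWP are costs
    print(maintenance_action(62))        # -> "deep rehabilitation" under the assumed thresholds
    print(utility_scores(decision_matrix, weights, benefit))

In the actual framework, equivalent steps are executed by the visual-scripting tool inside the IBIM environment and are fed by the property sets described below.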
All the data related to the application of the present methodology to a case study are collected in the Set up data templates to import the needed information in the programming interface and speed up the informatization of the pavement information model;Informatize the IBIM of a road pavement through property sets definition;Run calculations and update the property sets with the outcome of the maintenance algorithm and decision-making framework.The IBIM framework encompassed three main steps, as follows:Taking into account the engineered maintenance algorithm, a visual programming tool was lev\u201cVisual Programming Language\u201d is a concept that provides designers with the necessary means to construct unique relationships between digital objects using a simple graphical user interface. Rather than coding from scratch, the user can assemble existing custom relationships by connecting prepackaged nodes together to make a custom algorithm. The main consequence is that designers, who do not usually have developed coding skills, can implement computational concepts and enrich their projects with targeted calculations.Dynamo allows designers to automate processes, perform data manipulation, implement relational structures and analytic capabilities and control Vasari Families and Parameters, which would not be usually possible without a conventional modeling interface. Last but not least, Dynamo offers the designer the opportunity of doing so within the context of an IBIM environment.At this point, the IBIM of the road pavement (Civil 3D ) was fulInput property: its value is assigned directly from the data template imported by the user and does not require additional calculations;Output property: its value is the result of the calculations of the analytic tools supporting the pavement IBIM.Each property, belonging to a specific property set, can be regarded as:Pset_Pavement: the property set includes the current features of the asphalt pavement, such as the asphalt mixture identifiers of each layer and the coefficients of the PCI, FD and R decay curves;Pset_WearingCourse, Pset_BinderLayer, Pset_BaseLayer: the three input property sets, each one attached to the respective asphalt layer of the pavement structure, include all the necessary information that should be uploaded in the BIM environment before performing the LCA in absence of a specific EPD;Pset_MADM: the property set includes the input parameters that set up the boundary conditions for MADM ;Pset_Maintenance: the property set includes both input and output parameters referred to the current pavement configuration;Pset_LCAindicators: the output property set includes the environmental impact categories that will be filled once the analysis tool is run on the current pavement configuration. 
In particular, hierarchical recipe midpoint impact assessment methodology was chosPset_LCCAindicators: the property set includes both input (discount rate) and output parameters used to characterize the life cycle cost dimension of the current pavement configuration.The following property sets were implemented into the IBIM environment see :Pset_PavSince the creation of property sets is time-consuming and could easily lead to errors in structuring the data, the command block embedded into the Dynamo programming interface was leveraged to automate the creation and upload of the parameters into the BIM environment.From the point of view of the integration of IBIM and life cycle analyses, Dynamo, a Civil3D extension that creates a dynamic link between the BIM environment and an open-source visual programming environment, was leveraged to equip the pavement BIM with additional analytic tools that run calculations and update the values of the object properties with the outputs of the calculations.A schematic overview of the designed analytic tools is reported in The PMS tool gathers the information and calculates the type and timing of the maintenance interventions according to different maintenance approaches;The LCA/LCCA must be executed in series to produce the desired decision-making result and are as follows:Each module of the algorithm also includes the production of reports, exported as spreadsheets, such as the time evolution of the condition indicators for each unit sample and the timing, type, cost and environmental impacts of each maintenance strategy for each unit.The elaborated procedure was applied, and its effectiveness was tested in a real case study, in which IBIM was used as a decision support system to choose the optimum maintenance intervention and draft future maintenance plans based on the available life cycle and performance data at the time of the analysis. The reference analysis period of the case study is equal to 50 y, and all the collected data are reported in the The case study under analysis refers to an existing section of a rural road in the Campania region for which visual surveys of the type, extension and severity of distresses (and consequent PCI determination) have been carried out every 2 years for the past 6 years. The paper-based documentation regarding the geometry of the 1 km length existing road pavement section was collected and digitized to obtain the 3D digital model that was later enriched with the results of the decision-making framework.Additional data available to the managing road administration, such as the AADT and the predicted number of ESALs in 20 y, have been collected and reported in The current section briefly shows the results as they are obtained by feeding the information (collected into predetermined data templates) into the IBIM environment and running the BIM analysis tools.The final decision matrix obtained by running the algorithm in the IBIM environment is reported in 2, CH4, and NMVOC) from both industrial facilities and transportation of virgin raw materials to the asphalt plant, and the satisfactory performance in terms of FD and R accumulation, as well as the service life of the designed pavement. 
On the other hand, the traditional rehabilitation strategy with HMA both in the binder and wearing course does not provide satisfactory results in terms of GWP ; the underlying reason is the high frequency of maintenance interventions to keep the R condition index under the predetermined threshold of 20 mm depth, which entails a high consumption of virgin resources, fossil fuels and energy.Looking in detail at LCA results , which is able to dynamically interact and exchange information regarding, among others, the chosen design solution, in terms of layer thicknesses and the main features of the asphalt materials and the volumes of materials needed for the specific road works.The data collection for the calculation of the unit prices of the asphalt materials was carried out by analyzing the market prices of several local companies producing limestone and basaltic aggregates and asphalt mixtures, as well as companies carrying out road works that have already experimented with the cold in-place recycling technology procedures, other than the consultation of up-to-date price lists of regional public works , make up the overall pavement condition (PC);The LCC indicator alone represents the costs incurred by the road managing agency in the analysis period (LCCA);The 18 indicators obtained by LCA analysis were collected to constitute the environmental and human health performance of the asphalt mixtures (EHP).Before defining the weight vector for MADM, the 22 criteria were collected into three subgroups:The final weight vector was defined by distributing the weight evenly among each group.The best alternative that makes the most of the three groups of evaluation criteria is the reconstruction strategy with HMAC as the binder layer and CMRA as the base layer. In contrast, the alternatives with the traditional HMA as the base layer often qualify as the least performing, cost-effective and sustainable alternatives in the analysis period.A large number of examples are available in the scientific literature validating the use of MADM in pavement management problems, i.e., investment planning , prioritOne of the expected results in terms of IBIM information enrichment is the visualization of the results of the algorithm for each element of the pavement structure in an automated and dynamic way. As shown in With the concepts of sustainability and life cycle thinking becoming more important in the field of asphalt pavements and road pavement maintenance, the present work provided an effective framework to approach the use of IBIM platforms to provide additional analysis tools oriented towards sustainability in the field of road asphalt pavements, helping to assess the potential environmental impact and costs borne by the managing bodies to perform construction/maintenance treatments, expressed through multiple environmental and cost indicators calculated according to LCA and LCCA methodologies. 
The proposed tool expands the current knowledge on IBIM-LCA-LCCA integration oriented towards the promotion of sustainable pavement maintenance practices through tailored decision-making processes.Once the IBIM-based analysis tool was run, the environmental and cost assessment returned by the IBIM analysis tool allowed the in-depth analysis and comparison of several alternative road construction materials: the main source of variability of the environmental impact category indicators was the adoption of the cold in-place recycling technology for the reconstruction of the base layer, which lowered all the impact category indicators: on average, \u221222% for CMRA versus the HMA base layer. In particular, the substitution of natural aggregates with RAP lowered the emissions in water in terms of nitrogen and phosphorous compounds emitted during natural aggregates\u2019 production and supply to the asphalt plant; furthermore, the asphalt materials that showed the best synergy between the minimization of environmental impacts and costs and maximization of the service life of the pavement solutions were the PMA combined with the base layer HMAJ, increasing the service life of a traditional HMA stratigraphy by 11 years.From the point of view of the potential impacts of the present work in the road industry, decision-makers and road managing bodies could benefit from the use of the developed methodology and application; the results of the work represent a necessary innovation to comply with the future needs of both local legislative frameworks regarding the mandatory application of building information modeling to public bidding processes and the adoption of minimum environmental criteria to spread sustainability across the conception, design and operational stages of the life cycle of infrastructure.Further efforts will be dedicated to the inclusion of an uncertainty analysis, which investigates the uncertainty of the input variables and its effect on the outcomes of the decision-making problem, and to the integration of social aspects through the application of a social LCA to complement the LCA and LCCA results and the extension of the problem of life-cycle-based road maintenance management to the network level."} +{"text": "To report the findings of fluorescein angiography (FA) and optical coherence tomography angiography (OCTA) in a patient with malignant hypertensive retinopathy.A 41year-old male was referred to our clinic with sudden visual loss in both of his eyes after an acute rise of blood pressure (200/150 mmHg). Optic disc swelling, flame shape hemorrhages especially around the optic disc, arterial narrowing, vessel tortuosity, cotton wool spots, hard exudate deposition, and multiple deep orange spots (Elschnig spots) were visible in both eyes. In the OCTA, disruption in the normal tapering patterns of the superficial and deep capillary plexuses was observed. Elschnig spots were observed as hypointense spots in the choriocapillaris slab. Leakage of the optic nerve head was seen in the FA.When compared with the FA, the OCTA can illustrate the ischemic areas and the Elschnig spots with greater detail. The evaluation of hypertensive chorioretinopathy is possible by using both fluorescein angiography (FA) for retinal circulation and indocyanine green angiography (ICGA) for choroidal circulation. 
Recently developed optical coherence tomography angiography (OCTA) can show all retinal vascular structures in association with the choroidal circulation by detecting erythrocyte movements in the vascular structures without any contrast injection. An acute rise of blood pressure may affect the retinal and choroidal vessels with multiple clinical features, including serous retinal detachment and choroidal ischemia in the form of Elschnig spots. The aim of this study was to evaluate the findings of OCTA in a patient with malignant hypertensive retinopathy and to compare them with the findings of FA. A 41-year-old male who experienced sudden visual loss in both of his eyes five days prior to presentation was referred to our clinic. He had a history of hypertension and had experienced an acute rise of blood pressure (200/150 mmHg) five days prior after forgetting to take his antihypertensive drugs. The best-corrected visual acuity using the Snellen chart and the manifest refraction were, respectively, 20/50 and \u20131.25. Irregularity and wrinkling of the retinal surface, slight retinal thickening, subretinal fluid, hyperreflective deposits with posterior shadowing compatible with hard exudates in the inner retinal layers, and some irregularity at the level of the retinal pigment epithelial (RPE) cells and the choriocapillaris layer were observed in the SD-OCT of both eyes. The hypo-autofluorescent area identified around the optic nerve head, some scattered hypo-autofluorescent images (coincident with flame-shaped hemorrhages), and multiple hyper-autofluorescent images, particularly in the temporal portion of the macula, were all illustrated in both eyes in the fundus autofluorescence imaging [Figure 1]. Optic disc leakage, multiple hypo-fluorescent images in the early phase continuing with central hypo-fluorescence and marginal hyper-fluorescence in the late phase (coincident with Elschnig spots), and some points of blockage at the hemorrhage spots were seen in the FA [Figure 1]. In the FA, only the leakage of the optic nerve head was identified. In the foveal imaging, the disruption of the normal tapering pattern of the superficial and deep capillary plexuses, especially in the inferior nasal and superior temporal portions of the fovea, compatible with ischemic areas, was prominently defined. These findings were marginally revealed in the FA [Figures 2 & 3]. Similar findings with less severity were observed in his left eye [Figure 4]. The Elschnig spots were also revealed at the choriocapillaris level (30 \u03bcm below the RPE) in the form of hypointense spots in both eyes [Figure 2]. Hypertensive retinopathy is defined by a wide variety of alterations in the retinal vessels, comprising diffuse or focal arterial narrowing, vessel tortuosity, hemorrhages and hard exudate deposits due to the hyperpermeability of vessels, and optic disc swelling. OCTA is a new amplitude- and phase-based imaging technique that can separately illustrate the capillary vessels of the retina together with large vascular structures by sensing the transverse and axial movement of blood cells without any intravenous contrast injection. FA cannot show the majority of the capillary networks because of light scattering; in spite of that, it is the standard method in the assessment of retinal circulation. The ischemic areas in both the superficial and deep capillary plexuses, in addition to the enlargement of the foveal avascular zone (FAZ), were clearly visible in the OCTA; however, these findings were minimally revealed in the FA of our patient.
On the other hand, there were no significant differences between the macular areas of both eyes in the FA, but the best-corrected visual acuity of the two eyes showed a considerable difference. The extent of the ischemic areas and the enlargement of the FAZ were obviously larger in both the superficial and deep capillary networks of the right eye as compared to the left eye in the OCTA. This might be approximately proportional to visual acuity. It can be concluded that OCTA may have a prognostic role in the assessment of the visual outcome in vascular diseases. Consequently, confirmation of this hypothesis requires further studies with larger sample sizes and long-term follow-up. The Elschnig spots in the choriocapillaris layer were prominently revealed as hypointense lesions in the OCTA in a greater quantity when compared with the FA. Elschnig spots are clearly visible in the ICGA as hypofluorescent spots. Kawashima et al reported that more Elschnig spots were visible in the ICGA as compared to the FA. A greater number of Elschnig spots were revealed and were prominently visible in the OCTA in comparison to the FA. It seems that the OCTA may be of more valuable assistance than the FA in the illustration of hypofluorescent spots in the choriocapillaris layer. Rotsos et al reported the findings of multimodal imaging in two patients with hypertensive chorioretinopathy. They observed hyperreflective lesions in the OCTA of their patients without detection of the focal ischemic areas at the choriocapillaris level (Elschnig spots). They proposed that these hyperreflective lesions may be related to fibrin deposits developing on top of the RPE. They mentioned that the lack of display of the Elschnig spots on the OCTA might be explained by the resolution of the imaging. The inability of the OCTA to illustrate the Elschnig spots may be partly related to an artifact produced by hyperreflective deposits overlying the RPE. In contrast to their findings, we discovered that OCTA can detect Elschnig spots very well. Consequently, the OCTA is deemed more accurate in detection and better in the visualization of the microvascular structure of the retina and choroid than the FA in diagnosing hypertensive retinopathy. Focal necrosis in the choroidal arterioles results in ischemia of the overlying choriocapillaris layer and resultant RPE damage, which is known as Elschnig spots. It may be concluded that the OCTA can illustrate the microvascular changes within the ischemic area, in addition to identifying Elschnig spots, with greater detail than FA in malignant hypertensive retinopathy. None. There are no conflicts of interest."} +{"text": "The impact of bitumen components on soil and groundwater resources is of environmental importance. Contaminants\u2019 influx into the environment from bitumen components through anthropogenic activities such as exploration, mining, transportation, and usage of bitumen in all its forms has been reported globally. However, gaps exist regarding the geogenic occurrence of bitumen in the shallow subsurface, such as in southwest Nigeria, where it contaminates soil and groundwater resources. This review presents in situ bitumen seeps as a source of geogenic soil and groundwater contaminants in southwestern Nigeria. We conducted a systematic review of the literature based on defined selection criteria. We derived information on the state of knowledge about bitumen seep occurrences and distribution in southwestern Nigeria.
Also, the processes that exacerbate bitumen contaminants\u2019 influx into soil and groundwater were enunciated. At the same time, case examples highlighted areas for possible in situ bitumen contamination studies in Nigeria. The results of this review showed that a multidisciplinary approach has been employed to assess and monitor the contaminants resulting from the various activities involving the exploitation and application of bitumen in Nigeria. These studies emphasize bitumen contaminants as emanating from anthropogenic sources. The results also suggested that bitumen studies have been mainly exploratory to improve the understanding of the economic potential of the hydrocarbon reserve. Also, recent advances in bitumen contaminants studies accounted for the heterogeneous nature of the bitumen. This allows for the optimized categorization of the mechanism and processes undergone by the different bitumen components when released as environmental contaminants. However, a knowledge gap exists in characterizing and understanding the effects of in situ bitumen seeps as a geogenic source of soil and groundwater contamination. This review identifies the possibility of geogenic soil and groundwater contamination by in situ bitumen seeps in the coastal plain sand of the Dahomey basin in southwestern Nigeria. The impact of the bitumen contaminants on the environment was discussed, while methods for accessing the occurrence and distribution of the bitumen contaminants were highlighted. Bitumen is a dense mixture of heterogeneous hydrocarbon compounds produced from the temporal degradation of lighter crude oil statement 2009 Geology of the study areaThis describes the state of knowledge on the history and mode of occurrence of bitumen deposits and seepages within the basin of interest. Stratigraphic and structural controls on bitumen seep formation were discussed.(B)Environmental importance of seeping bitumenHere, the possibility of bitumen components acting as a source of dense non-aqueous phase liquid (DNAPL) contaminants in the environment is discussed. This was described from the chemical composition of seeping bitumen from the study area with references to areas where bitumen contaminations have been reported globally.(C)Bitumen contamination processesBitumen within the surface or near-surface environment has been proven to be a source of soil and groundwater contaminant. This is made possible by various processes undergone by the bitumen components in the environment. This section describes the types of processes resulting in the contamination of soil and groundwater in the environment and the corresponding results of such contamination from in situ bitumen contamination observed within the study area and with references to similar scenarios in other regions with known bitumen contaminations.(D)Case historyThis aspect highlights and discusses key findings from relevant literature on the origin, occurrence, and composition of bitumen seeps within the study area, while also pointing to several works done so far in understanding the environmental impact of the seeping bitumen components on soil and groundwater resources.(E)ConclusionsThe review concluded by identifying the different methods employed within the study area to assess the effect of bitumen-sourced contaminants on soil and groundwater resources while drawing a comparison with work done globally. The article identifies knowledge gaps in previous studies on bitumen contamination within the southwestern regions of Nigeria. 
This review, therefore, presents a unique opportunity for further studies.The content of the review is grouped into sections as follows:Bitumen typically consists mainly of hydrocarbons categorized as saturates, aromatics, resins, and asphaltenes , and other oxygenated hydrocarbon molecules. D\u2019Auria et al., , \u201cA review of the occurrence, distribution and impact of bitumen seeps on soil and groundwater in parts of southwestern, Nigeria 2022\u201d, Mendeley Data, V1,\u00a010.17632/t27pdtbdnc.1"} +{"text": "Mental health disorders are considered a priority in health policies around the world. It is estimated that more than 900 million people worldwide have a mental disorder, in which stress-related disorders account for a high number of emergency department visits. The scientific literature has pointed out the importance of considering how gender and sex differences influence the clinical outcomes of people with mental illness, playing an important role in the clinical management of these patients.The aim of this report is to investigate the presence of gender differences in the care of psychiatric patients attending the emergency department (ED), taking into account the clinical characteristics, reasons for consultation and practices.The study considered all episodes of patients who visited the ED during 2017 and who were assessed by the psychiatric department. Statistical analyses were performed using IBM SPSS Statistic software.During the 12 months period, a total of 3180 episodes were evaluated by the psychiatric department in the ED. Of them, 1723 were female and 1457 male. Regarding clinical data, there were found statistically significant differences with respect to the pharmacological prescription in the ED, specifically in the prescription of benzodiazepines, psychiatric diagnoses after discharge and the indication of hospital admission between women and men.This study emphasizes the importance of considering the existence of gender differences in both the clinical presentation as well as in the care of psychiatric patients attending the ED. The analysis of these variables would help to improve the health care of psychiatric patients."} +{"text": "In this second presentation, the representatives of each of the four JADECARE oGPs, Jon Txarramendieta , Josep Roca , Manfred Zahorka and Kuno Strand Kudajewski will describe the main traits of the Core Features of their practices and the Next adopters that will transfer and adapt them highlighting the transfer challenges."} +{"text": "Enterprise performance is a critical component of any organizational success that is directly affected by its employees and the culture prevailing in the organizations. In order to gain strategic advantage from the employee brand equity it is important that organizations make efforts in retaining such employees that benefit the organizations. Therefore, this research examines the impact of employee brand equity and knowledge sharing culture on the enterprise performance with the mediating role of innovative capabilities. A self-administered survey was conducted among the 323 employees of information technology sector working in the software houses in China. Smart PLS has been used to analyze the data through partial least square structural equation modeling. Results of the study have demonstrated that knowledge sharing culture plays a significant role in the enterprise performance while employee brand equity could not find statistically significant impact on enterprise performance. 
In addition, the SEM analysis further showed that employee brand equity and knowledge sharing culture play a significant role in the innovative capabilities. Results also revealed that innovative capabilities mediate the effect of employee brand equity and knowledge sharing culture variables on the enterprise performance. This research enriches the literature by examining the role of knowledge sharing culture in enterprise performance and innovative capabilities. This research further offers certain implications for the human resource department in developing their human resources. This can be achieved by availing the maximum skills of the branded employees by creating learning opportunities for the other employees through training sessions where they help and share their experiences. Employee engagement is recognized as a persistent, positive, and pervasive work-related psychological phenomenon related to the dedication and vigor efforts taken by the employees toward the organization . This tyEnterprise performance is a concept of public management practice and research that mainly deals with aspects band factors that boost the performance . HoweverEmployee brand engagement is activity related to the employees\u2019 involvement in the organization\u2019s brand . This enOrganizations develop strategies to engage the employee to induce employee brand engagement. Branding itself is a complex phenomenon that needs the attention of researchers . ScholarThe literature has shown the importance of knowledge sharing that includes experiences, knowledge, and skills of the organizational members . Both prKnowledge sharing is comprised of three significant aspects i.e., the process of knowledge sharing, the type of knowledge shared within the firm, and the approaches of knowledge sharing . The twoSwift technological advancement and a competitive business environment foster the organizations to utilize their resources like human capital to enhance innovative capability . The innScientific developments and technologies in the era of globalization and a fast-changing environment benefit the firms to become flexible, efficient, and responsive . ConsequStudies, such as and Sak, have beThe present study developed a few objectives based on the literature, which are to: examine the impact of employee brand engagement on enterprise performance, investigate the influence of knowledge sharing culture on enterprise performance, analyze the role of employee brand engagement on innovative capabilities, and determine the role of knowledge sharing culture on innovative capabilities. The study also established the objectives related to the mediating role of innovative capabilities, which are to: investigate the mediating role of innovative capabilities in the relationship between employee brand engagement and enterprise performance and analyze the mediating role of innovative capabilities in the relationship between knowledge sharing culture and enterprise performance. The research questions have also been developed which are: what is the impact of employee brand engagement on enterprise performance? What is the influence of knowledge-sharing culture on enterprise performance? What is the role of employee brand engagement on innovative capabilities? What is the role of knowledge-sharing culture on innovative capabilities? 
The study also established the research questions related to the mediating role of innovative capabilities, and the questions are: do innovative capabilities mediate the relationship between employee brand engagement and enterprise performance? And do innovative capabilities mediate the relationship between knowledge sharing culture and enterprise performance?The study intends to analyze the role of organizational-related and employee-related factors that affect enterprise performance. In this regard, the study examined the role of employee brand engagement and knowledge sharing culture on enterprise performance in China. The study also determined the mediating effect of innovative capabilities in the relationship between employee brand engagement and enterprise performance. The framework of the study has been based on social exchange theory (SET) and employee stewardship theory (SET).The social exchange theory (SET) developed by Based on the conceptual framework of the study, the hypotheses proposed that a knowledge-sharing culture affects innovative capabilities and enterprise performance. In the light of the social exchange theory, the theory explicitly states that the benefits obtained by the individual during the social exchange are beneficial for the organization. Knowledge-sharing culture promotes social relationships and interactions within the organization because employees communicate with one another while sharing knowledge. Consequently, the social relationship built through knowledge sharing influences the innovative capabilities of the employees and enterprise performance.Employee stewardship theory (EST) developed by The framework of the study demonstrated that employee brand engagement impacts innovative capabilities and enterprise performance. This is based on employee stewardship theory which also states that employees\u2019 intrinsic motivation allows them to work effectively for the organization. Thus, intrinsic motivation (employee brand engagement) positively develops innovation capabilities of the employees to meet the goals and objectives of the organization and enhance enterprise performance. The innovative capabilities are significant for the employees because the present era requires employees who can think innovatively and creatively for the organization.Employee brand engagement is activity related to the employees\u2019 involvement in the organization\u2019s brand . It is oThe performance of the organization or enterprise is based on articulated, well-planned, and effective employee brand engagement policies and strategies. Studies have attempted to establish different techniques for developing employee brand engagement, refining employee brand engagement, and understanding this complex phenomenon . For exavia employee engagement. Moreover, The relationship between employee engagement and organizational performance among information technology (IT) employees have been found by H1.Employee Brand Engagement has an effect on Enterprise PerformanceIn this present era of a knowledge-based economy, the role of knowledge is crucial in enhancing the overall value of the organization . IndividOrganizational culture dynamically influences the system of learning in which employees can exchange and share ideas or job experiences through social interactions and communication . The cogThe knowledge-sharing ability of the employees develops the capability to attain a competitive advantage . 
ResearcH2.Knowledge Sharing Culture has an effect on Enterprise PerformanceEmployee innovative capabilities rely on employee engagement . The empThe engagement of the top management toward their organization impacts the employee engagement level . In brieEmployee engagement leads to innovation through the innovative capabilities of the employees, and due to these capabilities, employees do beyond their roles by collaborating with their peers, making improvements for the organization, and working for the organization to stand in the business market . EmployeH3.Employee Brand Engagement has an effect on Innovative CapabilitiesOne of the most crucial factors that encourage innovation in the employees is a knowledge-sharing culture . InnovatThere are mainly two types of knowledge i.e., tacit knowledge and explicit knowledge. H4.Knowledge Sharing Culture has an effect on Innovative CapabilitiesOrganizations are developing strategies and methods to maximize the innovative capabilities of the employees for sustaining competitive advantage . InnovatBoth motivation and employee engagement are the biggest drivers of innovative work behavior . EmployeInnovation is regarded as an important indicator of organizational success which is the key to attaining competitive advantage . AccordiAlthough studies have been conducted to examine the relationship between employee engagements, knowledge sharing culture, enterprise performance, and innovative capabilities, limited studies focused on the mediating role of innovative capabilities in the relationship between employee brand engagement and enterprise performance and between knowledge sharing culture and enterprise performance. Therefore, this study proposed that:5.H Innovative capabilities mediate the relationship between Employee Brand Engagement and Enterprise Performance6.H Innovative capabilities mediate the relationship between Knowledge Sharing Culture and Enterprise PerformanceThe effects of employee brand engagement and knowledge sharing culture on enterprise performance, in addition to the mediating role of innovative capabilities were checked and analyzed in this study. This research has used a quantitative research design along with deductive approach to check the above-mentioned effects and mediating relationships. In particular, the hypotheses have been developed through rigorous review of literature in the study for checking the effect of employee brand engagement and knowledge sharing culture on the enterprise performance. Quantitative research design has been used in this study to make sure that biases do not affect the analysis and results of this research. The data collection process for this research was aided through the questionnaire survey that was administered by the researcher themselves. The respondents were encouraged to stay neutral since there is no correct or false response to the questions given. The population selected in this study is the employees of information technology sector working in the software houses in China. The sampling technique used in this research was the convenience sampling as it gives the liberty to the researcher to collect data from the respondents based on the availability of the respondents and comfort of the researcher . The uniStructural equation modeling (SEM) analysis has been employed on the data to reach the concluding behavior of the data. It has followed the partial least square approach using the software Smart PLS. 
This helps the researcher to analyze data with the help of path modeling in a short time . ThroughA five-point Liker scale-based questionnaire has been used by the researchers to collect data from the respondents. Description of the scales used for each variable has been given in below.Scale for the independent variable of employee brand engagement consisting of nine items has been borrowed from Results obtained through the Cronbach alpha and composite reliabilities have been used in this research to check the construct reliability. In order to show the internal consistency in the data, The present research has also checked the discriminant validity of the scales using Fornell and Larcker criterion and HTMT (Heterotrait-Monotrait) ratio. Results of these tests are reported in the The coefficient of determination (r-square) explains the sustainability of the model fit indicating how much obtained values fit the regression line . A coeffp-values. According to p-value it should be less than 0.05.p-value reported for this relationship are , hence rejecting H1. The second hypothesis of the research states knowledge sharing culture has an effect on enterprise performance. The t-statistics and p-value reported for this relationship are , hence accepting H2. The third hypothesis of the research states that employee brand engagement has an effect on innovative capabilities. The t-statistics and p-value reported for this relationship are , hence accepting H3. The fourth hypothesis of the research states that employee brand engagement has an effect on innovative capabilities. The t-statistics and p-value reported for this relationship are , hence accepting H4.There are total six hypotheses in this study. Four hypotheses are checking the direct effects of the independent variables on the dependent variables. p-value reported for this relationship is , hence accepting H5. The second indirect effect of this research is about innovative capabilities mediate the relationship of knowledge sharing culture and enterprise performance. The t-statistic and p-value reported for this relationship is , hence accepting H6.The gap in the literature was addressed by examining the impact of employee brand engagement and knowledge sharing culture on enterprise performance with the mediation of innovative capabilities. The direct relationships between the variables were examined i.e., the effect of employee brand engagement on enterprise performance, the effect of knowledge sharing culture on enterprise performance, the effect of employee brand engagement on innovative capabilities, and knowledge sharing culture on innovative capabilities. The indirect or mediating effect of innovative capabilities was also analyzed in the present study which states that innovative capabilities mediate the relationship between employee brand engagement and enterprise performance and innovative capabilities mediate the relationship between knowledge sharing culture and enterprise performance. The study from the results obtained presented some important insights into the organizational dynamics.The first hypothesis posited that employee brand engagement has an effect on enterprise performance. The results of this hypothesis revealed that the relationship between employee brand engagement and enterprise performance is insignificant. These findings were contrary to the findings of The third hypothesis posited that employee brand engagement has an effect on innovative capabilities. 
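As a rough, self-contained illustration of the reliability and discriminant-validity checks mentioned above, the short sketch below computes Cronbach's alpha for one block of scale items and an HTMT ratio between two item blocks on synthetic data. The construct names, the toy scores and the 0.7/0.85 rules of thumb are assumptions for demonstration only and do not reproduce the study's Smart PLS output.

    # Illustrative reliability/validity checks on synthetic data (not the study's results).
    import numpy as np

    def cronbach_alpha(items):
        """items: 2D array, rows = respondents, columns = items of one scale."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_var = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_var / total_var)

    def htmt(block_a, block_b):
        """Heterotrait-Monotrait ratio between two item blocks (simplified)."""
        a = np.asarray(block_a, dtype=float)
        b = np.asarray(block_b, dtype=float)
        corr = np.corrcoef(np.hstack([a, b]).T)
        na, nb = a.shape[1], b.shape[1]
        hetero = corr[:na, na:].mean()                              # between-construct correlations
        mono_a = corr[:na, :na][~np.eye(na, dtype=bool)].mean()     # within-construct, block A
        mono_b = corr[na:, na:][~np.eye(nb, dtype=bool)].mean()     # within-construct, block B
        return hetero / np.sqrt(mono_a * mono_b)

    rng = np.random.default_rng(0)
    latent = rng.normal(size=(50, 1))
    ebe_items = latent + 0.5 * rng.normal(size=(50, 3))     # hypothetical employee brand engagement items
    ksc_items = 0.4 * latent + rng.normal(size=(50, 3))     # hypothetical knowledge sharing culture items
    print("Cronbach alpha (EBE):", round(cronbach_alpha(ebe_items), 3))   # rule of thumb: > 0.7
    print("HTMT (EBE vs KSC):", round(htmt(ebe_items, ksc_items), 3))     # rule of thumb: < 0.85

In the study itself these statistics are taken from the Smart PLS output; the sketch only shows what the indices represent.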
The results of this hypothesis revealed that the relationship between employee brand engagement and innovative capabilities is significant. These results are in harmony with the findings of The fifth hypothesis got accepted which posited that innovative capabilities mediate the relationship between employee brand engagement and enterprise performance. These results are synchronous with the findings obtained by This research contributes significantly to the theory of organizational behavior and performance management. Firstly, the role of employee brand equity in the enterprise performance and innovative capabilities has been examined. Such setting of variables has not been examined before; hence, this study has significantly contributed to the management literature. Secondly, this research has found that employee brand equity does not play any role in the enterprise performance; rather it significantly impacts the innovative capabilities of the organizations. The literature has also been enriched by examining the role of knowledge sharing culture in enterprise performance and innovative capabilities. Thirdly, it has been found that knowledge sharing culture plays significant role in improving the enterprise performance through encouraging the innovative capabilities. This research has also explored the mediating role of innovative capabilities between the knowledge sharing culture, employee brand equity and the enterprise performance. Consequently, this research is vital as it gives insight into the role of knowledge sharing culture, innovative capabilities and employee brand equity in overall enterprise\u2019s performance.This research gives some solid practical implications for the organizations and corporate sector based on the findings of the study. Firstly, it is important for the organizations in corporate sector and software houses in information technology sector to encourage the knowledge sharing culture to enhance their overall enterprise performance. The study has shown that when the knowledge sharing culture prevails in the organizations, performance of the enterprise\u2019s flourishes. Secondly, when the employees owning brand equity work for organizations, it helps them nourish the innovative capabilities of the organizations through empowering the coworkers by engaging them with productive processes. The human resource department can then decide whether the branded employee should be paid extra for this extra role behavior and to retain them. Similarly, there lies a responsibility with the organizations and the human resource department to avail the maximum skills of the employee brand equity by creating the opportunities for the other employees through training sessions where the elite employees share their experiences. Furthermore, this study will help the top management of the organizations to invest in the employee brand equity by hiring them for the betterment of the organizational overall environment because it supports in the enhancement of the innovative capabilities which ultimately contribute to the performance of the enterprise.This research has been carried out in the information technology sector, considering the role of employee brand equity and knowledge sharing culture in the enterprise performance with mediation of innovative capabilities. The current study has used the sample of employees working in the software houses which limits the findings of the study. 
This research is based on the conceptual understanding of the researcher which opens new avenues for exploring other dimensions in different dynamics therefore, it can be checked in other working setups like construction industry, manufacturing industry or assembling industry can also be considered to get more insight into the conceptual model. This study can yield more interesting results if conducted in the European setting where wages and the labor laws are different from China that how employee brand equity plays its part in enterprise performance and innovative capabilities. Therefore, it is encouraged to explore more aspects to discuss the framework and findings of this research that will enlighten and validate the aforementioned relationships. Hence, empirical testing of the proposed framework is encouraged by introducing other important mediating factors like emotional intelligence, employee absorptive capacity or moral disengagement or the moderating variables like gender, employee well being. It will also be interesting to check the impact of employee brand equity and knowledge sharing culture in producing the employee ambassadorship in organizational setting.Enterprise performance is the most critical element for the organizations. Therefore, organizations keep on finding new ways and devise means to enhance their performance. In this study, certain determinants have been found that are important for the organizations in enhancing their performance. In present study, the role of employee brand equity and knowledge sharing culture has been examined in enterprise performance. Further, the mediating role of innovative capability has also been checked. Results of this research have described that employee brand equity does not play a role in the enterprise performance; however, knowledge sharing culture is an important determinant of the enterprise performance. The structural equation modeling also revealed that innovative capability significantly mediates the relationship of employee brand equity with enterprise performance; and the relationship of knowledge sharing culture and enterprise performance. It also offers a significant contribution to the literature by testing a comprehensive model on the employee brand equity and enterprise performance in the organizational setting in China. There are some implications as well for the human resource departments of the organizations in catering the branded employees by offering extra compensations for their extra role behaviors they exhibit in the organization in the form of innovative capabilities.The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author.The author confirms being the sole contributor of this work and has approved it for publication.The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher."} +{"text": "The 2020 year was the first year of Covid19 pandemy in Serbia. 
Epidemiological measures introduced to prevent the spread of the infection have shaped both the everyday life of citizens and the way the health system of our country functions. A large number of infected people required the redistribution of health personnel to work in COVID zones, and the care of non-COVID patients therefore suffered.The aim of the study is to analyze and present the epidemiological characteristics of patients hospitalized at the Clinic for Psychiatry of the Clinical Center of Vojvodina in Novi Sad in 2020.A retrospective analytical study of the epidemiological type was conducted.During 2020, a total of 1345 patients were hospitalized at our Clinic, which is over 30% fewer than during the previous year. More males than females, aged 19 to 45 and with a predominant diagnosis of psychosis, were hospitalized. Hospitalizations were significantly shorter than during the previous year. The number of relapses was significantly lower. Patients with other diagnoses of mental disorders were hospitalized significantly less often, except for those with addiction diseases, whose hospitalizations decreased to a lesser extent.Restrictive epidemiological measures led to a significant reduction in the number of hospitalizations at our Clinic, primarily because patients were prevented from exercising their right to health care, but also because of the mobilization of all healthy defense mechanisms in a collective crisis situation and a consequent reduction in psychopathological manifestations.No significant relationships."} +{"text": "Particle packing plays an essential role in industry and chemical engineering. In this work, the discrete element method is used to generate cylindrical particles and densify binary cylindrical particle mixtures under poured packing conditions. The influences of the aspect ratio and volume fraction of particles on the packing structure are measured by the planar packing fraction. The Voronoi tessellation is used to quantify the porous structure of the packing. The cumulative distribution functions of local packing fractions and the probability distributions of the reduced free volume of Voronoi cells are calculated to describe the local packing characteristics of binary mixtures with different volume fractions. As a result, it is observed that particles with larger aspect ratios in the binary mixture tend to orient randomly, and the particles with smaller aspect ratios have a preferentially horizontal orientation. Results also show that less dense packings are obtained for mixtures with particles of higher aspect ratios and for mixtures with a larger fraction of elongated cylindrical particles. Particle packing is widely used in industry and nature. It is a significant subject that has been studied for a long time. Previous studies of particle packing have primarily focused on the packing of spherical particles due to their simple shape. However, non-spherical particles are more commonly encountered in industry and everyday life. In chemical engineering, cylindrical particles have been used for heat transfer and catalytic reactions. Researchers have mainly focused on the study of binary mixtures in order to improve the packing density of non-spherical particles.
Previous studies concentrated on the packing densification of binary spherocylinders and related binary mixtures of non-spherical particles. Recently, the discrete element method (DEM) developed by Cundall and Strack has been widely used to simulate such packings. Investigating the behavior of the particle assembly as a whole, as well as the average packing fraction over the domain, would be beneficial. However, particle-centered techniques, such as the Voronoi tessellation (VT), have been widely used to define local cells for spherical packings. Therefore, the aim of the study is to analyze thoroughly the porous structure of binary mixtures of cylindrical particles of different aspect ratios, focusing on the local porosity structure. For this purpose, cylindrical particles are modeled by the SQ approach, packing densification of binary mixtures of the cylindrical particles is simulated using the DEM, the planar packing fraction is calculated by applying a voxelization code, and finally, the microstructure of the packing is analyzed by the VT method for the poured packing of binary mixtures.The main contribution of this paper is to expand our understanding of the packing structure of binary mixtures of cylindrical particles with various aspect ratios. The authors compare the local packing structure of compacts using the Voronoi tessellation approach and analyze the planar packing fraction distributions along all axes. They also contribute to the study of the orientation of cylindrical particles in the compacts and the spatial distribution of different aspect ratio particles in the compact cross-section.The present work follows our previous work on microstructure analysis of compacts of equal-size cylindrical particles by DEM and focuses on the local structure of poured packings of binary mixtures. The cylindrical particles were generated using the SQ approach. The particle shape and size are determined by the geometric parameters of the superquadric, such as half-lengths and sharpness indices. The so-called inside-outside function defining the superquadric for a three-dimensional particle has the following form (Equation (1)): f(x, y, z) = (|x/a|^n2 + |y/b|^n2)^(n1/n2) + |z/c|^n1 - 1, where a, b, and c are the superquadric half-lengths along the x, y, and z axes, respectively, the shape sharpness parameter n1 controls the particle shape in the y-z and x-z planes, and the parameter n2 controls the shape of the cross-section in the x-y plane. The inside-outside function of Equation (1) is specified in superquadric-centered local coordinates. The translational and rotational motion of each particle is governed by Newton's equations of motion, and the forces and torques acting on the particle are calculated from the particle-particle and particle-wall contacts; the Hertz\u2013Mindlin contact model is used in the present study. The mechanical and physical properties of the particles and the DEM simulation parameters are listed in the corresponding table. Planar packing fraction is a measure of the packing density of a particle assembly on a certain plane. In order to obtain the planar packing fraction by the voxelization method, an open-source voxelization program was applied. The local packing fraction and the reduced free volume of a Voronoi cell are defined as Vp/Vc and Vc/Vp - 1, respectively, where Vc is the Voronoi cell volume and Vp is the cylindrical particle volume. The open-source program PySetVoronoi was used to construct the Voronoi cells. The projected positions of particles in the horizontal plane are shown in the corresponding figure. The orientation of cylindrical particles is demonstrated in the corresponding figure. The planar packing fraction oscillates near the container walls in the x and y directions, indicating the poor mating of the curved surfaces of the particles against the flat surface of the wall. Significant oscillations in bed porosity near the wall were also reported by Zhang et al. The planar packing fraction in the z direction quickly decreases at the end, demonstrating that the particles at the top of the container are not evenly distributed.
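As a check on Equation (1), a short sketch of the superquadric inside-outside test is given below. It assumes the common DEM superquadric form with half-lengths a, b, c and blockiness exponents n1, n2 (the exact notation in the original article may differ), and the sample values for a cylinder-like particle are illustrative only.

# Illustrative sketch of the superquadric inside-outside function, assuming the common
# DEM form f = (|x/a|^n2 + |y/b|^n2)^(n1/n2) + |z/c|^n1 - 1. Parameter values are hypothetical.
import numpy as np

def inside_outside(p, a, b, c, n1, n2):
    """f < 0: point inside the particle; f = 0: on the surface; f > 0: outside."""
    x, y, z = p
    lateral = (abs(x / a) ** n2 + abs(y / b) ** n2) ** (n1 / n2)
    return lateral + abs(z / c) ** n1 - 1.0

# Cylinder-like particle: circular cross-section (n2 = 2), nearly flat ends (large n1).
a = b = 0.5e-3      # radius, m
c = 0.5e-3          # half-length, m (aspect ratio 1)
n1, n2 = 8.0, 2.0

print(inside_outside((0.0, 0.0, 0.0), a, b, c, n1, n2))      # negative: centre is inside
print(inside_outside((0.5e-3, 0.0, 0.0), a, b, c, n1, n2))   # ~0: on the lateral surface
print(inside_outside((0.0, 0.0, 0.6e-3), a, b, c, n1, n2))   # positive: beyond the end face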
A similar trend in the axial distribution of local porosity of polydisperse cylindrical particle packed-bed was described by Zhang et al. [A weak wall effect can be observed from the analysis of the packing fraction variation plots for packings of cylindrical particles . Two prog et al. for the g et al. . The effThe cumulative distribution functions (CDFs) of the local packing fraction are shown in The probability distribution functions (PDFs) of the reduced free volume of Voronoi cells are represented in This study has focused on the microstructural analysis of poured packings of binary cylindrical particle mixtures with the same volume but different aspect ratios and volume fractions. The particles of approximately cylindrical shape were generated using superquadrics. By applying the DEM approach, binary mixtures of the cylindrical particles were simulated, and their packing microstructure was thoroughly analyzed via voxelization and Voronoi tessellation. The simulation results demonstrate that the particles with the larger aspect ratio in the binary mixture tend to orient randomly, and the particles with the smaller aspect ratio are preferentially horizontally oriented. The planar packing fraction curves show that 75% of elongated particles mixed with the 25% cylinders with AR = 1 demonstrate the lowest packing fraction. The cumulative distribution functions of local packing fractions are more uniform and have smaller median values corresponding to the looser packing for mixtures with more elongated cylindrical particles. The probability distribution functions of the reduced free volume of Voronoi cells become wider and shift to larger cell volumes with the increase of particle aspect ratio and volume fraction of cylinders with high aspect ratio. The microstructural characterization of the packing can be used in understanding the local packing structure. The limitation of the present research is that the cylindrical particles generated using superquadrics are geometrically symmetrical and convex with smooth boundaries. The packing structure of such particles could differ from the packing structure of the cylinders with planar faces. Therefore, the present research will be extended in the future by considering the packing of polydisperse cylinders with planar faces."} +{"text": "Aim: To study the sphenopalatine foramen in terms of its numeric variation and its location on the lateral nasal wall in relation to the bony ethmoidal crest of the palatine bone. Materials and methods: The anatomical studies were carried out in 54 hemifaces. Results: the sphenopalatine foramen presented the following numeric variation: single , double , and triple (1.9% or one specimen); it was located at the superior nasal meatus in 81.5%, or 44 specimens; 14.8% (8 specimens) between the middle and superior nasal meatus and in the middle nasal meatus in only one case (1.9%). Conclusion: We have been able to show a numeric variation of the SPF, its relation with the bony ethmoidal crest and its location in the superior meatus, middle meatus, and in both.Anatomical variations of the sphenopalatine foramen may correspond to alterations at the arterial nasal irrigation input, which is a relevant condition to treat severe epistaxis through ligation of the sphenopalatine artery. 
Nasal hemorrhage is one of the most common problems otorhinolaryngologists face in their specialty; severe cases may be characterized as a medical emergency.The sphenopalatine foramen consists of a notch on the superior border of the palatine bone, between the orbital and sphenoid processes; the notch becomes a foramen at the point in which the palatine bone articulates with the sphenoid bone in the lateral nasal wall. It may be a complete orifice or traversed by one or more bony spiculae, suggesting more than one orifice.Nikolic (1967)13According to some authors, the sphenopalatine foramen may be found in the superior nasal meatus in the nasal cavity.It is essential that surgeons possess ample knowledge of the anatomy, physiology, surgical techniques and complications,The Faculdade de Odontologia de Bauru, Universidade de Sao Paulo, exempted this study from requiring an approval protocol number from the Research Ethics Committee since it was conducted on anatomical specimens belonging to the Anatomy Department of Biological Sciences of that institution. We conducted anatomical studies of the nasal cavity in 54 female and male Caucasian and non-Caucasian adult half skulls. Nasofibroscopy was done initially in all half skulls to screen for anatomical variants or previous surgery. Half skulls consisted of median sagittally cut bone specimens. We aimed to identify the sphenopalatine foramen and its number variation by visually observing and photographing the half skulls. Anatomical observations to locate the sphenopalatine foramen relative to adjacent nasal cavity structures were done; this involved identifying the position of the bony crest of the middle nasal turbinate (COM) in relation to the foramen. The following criteria were used: the sphenopalatine foramen was considered as located in the superior nasal meatus (MS) when the posterior tip of the bony crest of the middle nasal turbinate pointed to the anterior and inferior border of the sphenopalatine foramen; it was considered as located in the middle nasal meatus (MM) when the bony crest of the middle nasal turbinate pointed to the anterior and superior border of the sphenopalatine foramen; and it was considered as located between the middle and superior nasal meatuses (MS-MM) when the bony crest of the middle nasal turbinate pointed to the median line of the sphenopalatine foramen. The parts that contained the bony crest of the superior nasal turbinate (COS) relative to the sphenopalatine foramen were also identified. A Nikon Coolpix 900 digital camera was used for photography. A 5-millimeter scale was placed next to each specimen as a size reference for each picture. The 54 color pictures were recorded on a CD-ROM, and are currently part of the author\u2019s personal collection.In this study we found that 47 specimens contained single orifices (87%) , six speA study of the location of the sphenopalatine foramen on the lateral nasal wall relative to the bony crest middle nasal turbinate revealed that in 44 specimens the bony crest of the middle nasal turbinate pointed to the inferior border of the sphenopalatine foramen, placing it in the superior nasal meatus (MS) . In eighWe also found that in four specimens with double sphenopalatine foramens, these foramens were placed one above the other and had different sizes; the superior orifice was larger and the inferior orifice was the smallest. 
In these specimens, the bony crest of the middle nasal turbinate pointed to the sphenopalatine foramen in such a way as to indicate a superior and an inferior orifice . In two In this study we were able to identify the presence of the bony crest of the superior nasal turbinate (COS) pointing to the sphenopalatine foramen in 30 specimens (55.6%) and 3.Our results support data in the literature about the number variation of the sphenopalatine foramenIt is recognized that the number variation of the sphenopalatine foramen is probably the main element explaining the failure of surgery when ligating the branches of the sphenopalatine artery in the treatment of nasal bleeding.Given the importance of identifying the bony ethmoidal crest of the palatine bone, onto which the middle nasal turbinate is linked, and which is an anatomical and surgical landmark for locating the sphenopalatine foramen in an endonasal access, that structure was also observed. Our results revealed that in 44 specimens (81.5%) the distal tip of the bony crest of the middle nasal turbinate pointed towards the inferior margin of the sphenopalatine foramen, locating it in the superior nasal meatus; the sphenopalatine foramen was located between the superior and middle nasal meatuses in eight specimens 14%); the foramen was fully located in the middle meatus in only one specimen (1.9%). These results disagree with those that place the sphenopalatine foramen only in the superior nasal meatus,%; the foAside from the comments about the bony crest of the middle nasal turbinate, we also noted that in 30 specimens (55.6%), the bony crest of the superior nasal turbinate pointing towards the superior border of the sphenopalatine foramen. This finding has also been often reported.Our study showed a single sphenopalatine foramen in most of the study specimens, although there were double and tripe foramens. We noted the number variation of the sphenopalatine foramen, and consequently of the branches of the sphenopalatine artery, supporting the surgical treatment of severe epistaxis with fewer failures."} +{"text": "Sensory and sensorimotor gating provide the early processing of information under conditions of rapid presentation of multiple stimuli. Gating deficiency is observed in various psychopathologies, in particular, in schizophrenia. However, there is also a significant proportion of people in the general population with low filtration rates who do not show any noticeable cognitive decline. The review article presents a comparative analysis of existing data on the peculiarities of cholinergic and dopaminergic mechanisms associated with lowering gating in healthy individuals and in patients with schizophrenia. The differences in gating mechanisms in cohorts of healthy individuals and those with schizophrenia are discussed. The mechanisms of sensory and sensorimotor gating provide adaptive patterns of responses to multiple stimuli presented with rapid succession in a condition\u2013test paradigm is considered the putative measure of sensory gating. A deficit of sensory gating may result in sensory overflood and inappropriate assessment of stimuli salience of the startle response is applied Graham, . In thisThe individual characteristics of sensory and sensorimotor gating are assumed to contribute to higher cognitive functions , which are deficient under psychopathological conditions and sensorimotor (PPI) gating in populations of mentally healthy people and patients with schizophrenia. 
The most attention was paid to the results of genetic and pharmacological studies.The startle reaction is a generalized defensive reaction evoked by a sudden intensive stimulus. In humans, startle is mainly estimated by the blink component. The most often used model is acoustic startle reaction (ASR) evoked by sudden sounds with intensity >95 dB. Prepulse inhibition (PPI) is the phenomenon of suppression of the amplitude of response to the intensive (main) stimulus by weaker (<90 dB) sounds (prepulse) presented with a short interval before the main signal. In different studies, the intervals between prepulse and the main stimulus (stimulus onset asynchrony\u2014SOA) varied within the range from 10 to 300 ms with the Met allele and also showed the stabilizing effect of the Met allele under conditions of menstrual-cycle-dependent PPI variation prepulse trials and enhanced sensorimotor gating in persons with low baseline PPI under conditions of low (75 dB) prepulse intensity and increased it to a level comparable to that of control values. The increase in PPI after amphetamine administration was more pronounced in carriers of the methionine allele of rs4680 and displayed a correlation with positive symptoms , while a significant quadratic trend was observed in patients with schizophrenia with regard to the maximum values of PPI in CT/GA heterozygotes and decrease it in carriers of the C allele. These effects of nicotine were observed at SOAs of 120 and 240 ms but not at an SOA of 30 or 60 ms subunit display high affinity for nicotine and are expressed in the cortex, hippocampus, basal ganglia, and cerebellum , increases the PPI in both healthy individuals and in patients with schizophrenia \u03b2(2) and \u03b1(3)-containing cholinergic receptors, which is in agreement with the results of previous genetic studies.The available results of pharmacological studies show that the dopaminergic mechanisms of sensorimotor gating in humans are almost as mysterious as the dopaminergic mechanisms of schizophrenia.The effects of dopamine receptor agonists, such as apomorphine, bromocriptine, and amantadine, have not been studied in patients with schizophrenia, which makes comparative analysis impossible. However, data obtained in a population of mentally healthy individuals deserve analysis. Of great interest is the pronounced dependence of the effects of these drugs on SOA length. Surprisingly, both agonists and antagonists of D2 receptors reduce PPI in healthy subjects. The most frequent decrease in PPI after D2 agonist administration was observed at an SOA of 120 ms.in vitro). Thus, amphetamine, which elevates dopamine synthesis in presynaptic endings, significantly decreased PPI at an SOAs of 10 ms and 20 ms, while bromocriptine, which suppresses the synthesis of dopamine in synaptosomes, increased PPI at an SOA of 20 ms can also disturb mental functions and behavioral processes. Indeed, while Bitsios et al. found poThese data raise questions surrounding the existence of optimal level of gating, and comprehensive comparison of healthy individuals and patients with schizophrenia with high and low PPI levels should be provided. 
Such a systematic study may include the analysis of relevant genetic polymorphisms, as well as psychophysiological and cognitive tests.The assumption of the participation of the dopaminergic system in maintenance of the optimal level of sensorimotor gating is also supported by the observed effects of haloperidol, which increase the PPI in healthy individuals with initially low levels and reduce in healthy individuals with initially high PPI levels.Auditory sensory gating is usually assessed in a paired-click paradigm that involves the presentation of two identical clicks (S1 and S2). Estimation of the auditory event-related potential at around 50 ms post-click (mid-latency P50 component) reveals a reduced amplitude in response to S2 (testing stimulus) relative to the amplitude in response to S1 (conditioning stimulus). The optimal interval between S1 and S2 is 500 ms.per se are mainly associated with hippocampal and prefrontal activity enhances its inhibitory activity was found to enhance sensory gating in healthy individuals with baseline low rates of P50 suppression and in patients with schizophrenia, but not in bipolar disorder (Whitton et al., In an attempt to determine the types of receptors mediating the effect of nicotine on sensory filtration, Hong et al. studied The existence of different mechanisms of the deficit of P50 suppression that are specific in healthy and schizophrenic individuals is also supported by the fact that only patients with schizophrenia who do acute cigarette smoking increase the correlation between prefrontal executive cognitive functioning and P50 suppression (Rabin et al., Although the results of genetic and pharmacological studies are not numerous in the literature, they indicate the relative specificity in the patterns of dopaminergic and cholinergic activities associated with the level of information gating in healthy and schizophrenic individuals. Some data also point to the expediency of developing the concept of the optimal level of sensory and sensorimotor activity.This can help to identify specific patterns of attention disorders and executive functions in numerous psychopathologies.An important condition for the success of such studies is the use of unified protocols in cohorts of healthy individuals and those with mental disorders.AP: formulation of the idea of the review, search and analysis of literature, writing, and design of articles.The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. The handling editor ZS declared a past co-authorship with the author.All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher."} +{"text": "Mental health is mostly affected by numerous socioeconomic factors that need to be addressed through comprehensive strategies. The aftermath of armed conflict and natural disasters such as Ebola disease virus (EVD) outbreaks is frequently associated with poor access to mental healthcare. 
To lay the groundwork for improving mental health services through the integration of mental health into primary health care in the Democratic Republic of Congo (DRC), we conducted a scoping review of the available literature regarding mental illness in armed conflict and EVD outbreak settings.This scoping review of studies conducted in armed conflict and EVD outbreak settings in the DRC synthesizes the findings and suggestions related to improving the provision of mental health services. We used the extension of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses to scoping studies. Evidence related to mental disorders in the eastern part of the DRC was mapped from studies identified through searches of electronic databases. Screening and extraction of data were conducted by two reviewers independently.This review identified seven papers and described the findings in a narrative approach. It reveals that the burden of mental illness is consistent, although mental healthcare is not integrated into primary health care. Access to mental healthcare requires the involvement of affected communities in their problem-solving process. This review highlights the basis for the implementation of comprehensive mental health care through the application of the Mental Health Gap Action Programme (mhGAP) at the community level. Lastly, it calls for further implementation research on the integration of mental healthcare into the health system of areas affected by civil instability and natural disasters.This paper acknowledges the poor integration of community mental health services into primary health care in regions affected by armed conflict and natural disasters. All relevant stakeholders involved in the provision of mental health services need to rethink the implementation of mhGAP in the emergency response against outbreaks and natural disasters. Mental health is usually affected by a range of socioeconomic factors that need to be addressed through comprehensive strategies. Public health emergencies are disproportionately linked to an increasing burden of mental illness among communities, especially those with existing psychological vulnerabilities. Over time, the integration of mental healthcare into PHC has been affected by a lack of funds, a scarcity of specialized workers, a lack of mental health legislation, and impaired mental health information at different levels of the health system. To date, the DRC lacks sufficient trained health workers able to implement mental healthcare services based on the recommendations of mhGAP in several provinces. Most persons with mental disorders receive treatment in mental health facilities that not only belong to the private sector but are also staffed by non-specialist workers. Furthermore, poor access to mental health services in public facilities is due to the low coverage of mental health providers per 100,000 population in the DRC.Therefore, to lay the groundwork for improving mental health services through the integration of mental health into PHC in the eastern part of the DRC, we conducted a scoping review of the available literature from this region in order to better analyze the burden of mental health problems.The scoping review approach was chosen for this study, given that it is well established as the first step in research evidence development. 
We usedLiterature search used a grey literature approach conducted by two independent experts (BMNV and MMV) who collected and extracted data using Medline, Scopus, Psych Info, Google Scholar, and CINAHL, using the following keywords: \u201cmental illness in DRC\u201d, mental illness in EVD outbreak settings\u201d, \u201cthe collision of EVD outbreak and armed conflict\u201d, \u201cMental illness during EVD outbreak and armed conflict in DRC\u201d. Search results were uploaded into Endnote software; duplications were removed through the control quality analysis.This scoping review of studies involved the screening of the abstract and full text of each article to check whether it included findings regarding the availability of mental healthcare services as well as the burden of mental disorders in the eastern part of DRC. Abstracts and full texts were analyzed for published studies including short communication, commentaries, original studies, and reviews. A pretested template was approved by all the authors regarding the focus of the study. We only included studies that including participants who were direct or indirect victims of armed conflict and EVD outbreaks.A narrative synthesis and discussion of the findings were performed in order to achieve the objective of this study. We collected data on study design, content and scope of the study, the main findings, and the suggestive measures to improve the mental healthcare services in the Eastern DRC. Relevant studies were identified using citation mapping. We selected published and peer-reviewed articles from January 2013 to August 2021.Grey literature searches yield a total of 49 records, among 11 duplicated papers that were removed from the database. From the 38 studies selected, the assessment of titles and abstracts set up 16 papers recruited. We included papers that summarize the implementation of mental healthcare, those that highlight the burden of mental illness, and those who suggested reforms for the provision of mental health services. The careful review of inclusion criteria and the revision of the full text bring out the sample of 6 studies that are included in this review. Ten articles were not included, given that they did not provide important information targeted by the objective of this review. In DRC, mental healthcare is characterized by a lack of infrastructure and trained mental workers .The first study included by this review concerned adults receiving healthcare at mental health units in Butembo city (North-Kivu province) and revealed that 60% of study participants reported lacking needed mental healthcare services prior to admissions. Also it found that predictors of affective and psychotic disorders were death of a loved one, history of sexual abuse, history of childhood trauma, and being kidnapping. This study suggested that both the relatives and community health workers should be involved in close monitoring of people with psychological distress during civil unrest and outbreaks . The secFourthly, a study pre-testing the integration of mental health services in rural zone reported that the average utilization rate of primary health centres for mental health problems was 7 new cases per 1000 inhabitants per year. The majority of patients were treated on an outpatient basis. This study indicates that the success of integration mental health into PHC depends on the quality of existing health system and the involvement of and non-health actors, including community leaders. 
Furthermore, this study showed that the major problems in terms of access and use of basic care indicate that the successful integration of mental health depends on the involvement of health and non-health actors . In thesResults of the available studies in DRC evidenced the mental health challenges and their contribution to the burden of mental illness among individuals living in a region affected by natural disaster/armed conflict settings. Also, they highlighted the need of involving mental health non-specialist and specialist in treating patients with mental health problems at daily or weekly basis .This paper reviews the existing literature on burden attributed to mental health in the Eastern DRC and summarizes the suggestive means of amelioration of integration of mental health into PHC. Although developed countries have implemented guidelines to cope with the increasing health challenges during armed conflict or natural disasters, few efforts have been done in developing countries to cover the burden of psychological distress and mental disorder during pandemics .We found that mental health services are not integrated into PHC across the eastern part of the country, despite the wide recognition of its contribution to the health system. In fact, there is less attention regarding the application of mental health legislation during public health emergencies. To date, less than 10% of individuals with mental illness have access to needed healthcare services in DRC . AdditioThere is strong evidence that outbreaks and armed conflict impeded the quality of life. A recent study demonstrated that 28.0% of the global population experienced depression; 26.9% of cases showed anxiety; 24.1% of cases presented the symptoms of post-traumatic stress disorder; and sleep-related problems were seen in 27.6% of cases during COVID-19 . Mental The eastern part of DRC has a large range of risk factors that can result in an explosion of mental disorders. While mental health is not fully integrated into health care among people living in armed conflict and outbreaks, except for the survivors of gender-based violence; mental illness is a major concern of public health . New polAs a result of the analysis of available evidence; a model based on\u00a0the implementation of\u00a0mhGAP at community level of the health system\u00a0is actually proposed by this review. This model should\u00a0aim to ameliorate mental healthcare services via the community engagement . Active To ensure that access to mental health services has improved, social workers and community health workers should be skilled and supported to reach all households at least weekly for collecting and addressing mental health problems . Also imFirst, raising the awareness of mhGAP programs during outbreaks and armed conflict needs important reforms of mental health legislation . SecondlThirdly, the implementation of mental health services at PHC requires the development of standardized approaches to use in outbreak and conflict zones settings. Furthermore, this implementation highlights the change of support provided to PHC workers and health communication regarding community mental health and psychosocial support. 
Mental health could be included in the national country\u2019s health communication systems, and a need for developing specific screening psychological tools useful by health care providers and policymakers at the provincial, operational, and community levels to strengthen mental capacity building.To ensure the integration of the mental health model into PHC in the aftermath of armed conflict and outbreaks; there is a need of implementation research study related to the mhGAP in three provinces of eastern DRC, namely South Kivu, North Kivu, and Ituri. This research will aim to identify the barriers and facilitators to scaling up mhGAP interventions and the integration of mental health services into PHC. This study will target to improve the uptake of the findings research for effective development of new policy in DRC. Furthermore, this study will address the following objectives: i) to identify operational strategies, implementation challenges, and gaps of the mhGAP interventions in conflict settings, and propose solutions with a potential influence of policies and practices of mental healthcare services in the three above-mentioned provinces; ii) to explore the factors influencing the proposed model to contribute to the promotion of mental health and well-being of individuals living in armed conflict concerned with EVD outbreaks; iii) to identify lessons about the implementation of a mhGAP regarding the prevention and management of mental illness as well as the promotion of mental health; iv) to propose feasible solutions able to determine the sustainability of the community mental health and psychosocial support model proposed by this paper.A multilevel strategy will be performed for an in-depth understanding of the process of integration of mental health services into PHC. A mixed approach using qualitative and quantitative will be used to collect data. Measurement will concern the evolution of common mental illnesses over time; the access to mental health facilities, as well as the involvement of all the relevant stakeholders in the provision of mental healthcare into PHC. Desk review, in-depth interviews and focus group discussions associated with consultation and brainstorming will be used to collect data that will have the potential to influence the control and promotion of mental health and well-being at the community level. Recommendations of this study will offer insight to all the relevant stakeholders including the Ministry of Health on the barriers and facilitators to scaling up mhGAP interventions and the integration of mental health services into PHC. However, these recommendations have to be read in DRC context.We have tried to provide insight into a safe space where communities can access mental health services during and in the aftermath of public health emergencies and armed conflict. The review of literature existing in DRC revealed that EVD outbreaks and armed conflict are important factors of mental disorders in both survivors and caregivers. All relevant stakeholders involved in the provision of mental health services should need to rethink to implementation of mhGAP into the emergency response against outbreaks and natural disasters."} +{"text": "The purpose of this project was to study how older learners think a university campus currently meets the 10 Age-Friendly University principles and what they see as potential steps to create an age-friendly campus community. 
Online focus group interviews were conducted with 17 members of Osher Lifelong Learning Institute at San Francisco State University in 2020. The participants were 60 years or older, and the majority were female and non-Hispanic white. The study participants received information about the 10 Age-Friendly University principles presented by the Age-Friendly University Global Network and were asked to discuss their thoughts about how the university satisfies these principles. The interview recordings were transcribed for the thematic analysis of qualitative data. The analysis yielded three themes. The first theme described the diversity of older adults\u2019 learning needs and desires that the university must recognize and accommodate. The second theme represented older adults\u2019 sense of optimism and anxiety about intergenerational learning. The third theme highlighted the challenges older adults tend to experience in accessing information, educational programs, and/or university facilities. The interviews with older learners were found valuable and indispensable in the work of Age-Friendly University assessments. The presentation is focused on the discussion regarding how older adults\u2019 voices can be incorporated in the assessments and ways in which higher education institutions should combat ageism on and off campus as part of work to address the issues identified in this study."} +{"text": "The sub-discipline of gerontologic biostatistics (GBS) was introduced in 2010 to emphasize the special challenges encountered in the design and analysis of research studies of older persons. These challenges center on the multifactorial nature of human aging, characterized by the parallel and progressive deterioration of diverse organ and cellular systems that eventually results in death. Ten years after the introduction of GBS, which initially focused on important aspects of design and analysis that ensure their statistical validity, we update how GBS has been enriched by evolving practices. We present individual sessions on three seminal developments in the practice of GBS: integration of data science and multiple streams of data, including those automated and/or multidisciplinary in nature; enhanced methods of addressing the heterogeneity of treatment effects from health-related interventions for older patients; and how interactive visualization can help specific patients locate themselves along the continuum of individualized treatment effects. We conclude our presentation with a session that reviews three prominent trends in the validation of the heterogeneity inherent to the assessment of health among older adults. Reflecting this era of big gerontological data, we discuss several established modeling approaches for validation, the proliferation of signal-intensive behavior phenotypes, and the deep characterization of phenotypes through OMICS studies and multimodal approaches. All talks discuss pitfalls and areas of future development and draw from published studies. We are submitting as an interest group collaborative panel submission between two interest groups: Epidemiology of Aging and Measurement, Statistics and Design."} +{"text": "This presentation outlines the development of a post-membership masterclass programme in Perinatal Psychiatry, funded by Health Education England and delivered through the Royal College of Psychiatrists. The masterclass programme ranges from 5 to 15 days and there are separate programmes for consultants, SAS doctors and senior trainees in psychiatry. 
The course is delivered by experts in the area and contains a mix of didactic teaching and small group work. The programme was developed to meet the workforce needs of rapidly expanding perinatal mental services throughout England. The programme also helps facilitate the needs of perinatal psychiatrists from Ireland and from the devolved nations of the UK .No significant relationships."} +{"text": "The COVID-19 pandemic has not impacted everyone equitably, including children . The objective of this study was to explore the association between school neighbourhood composition and kindergarten educator-reported barriers and concerns regarding children\u2019s learning during the first wave of COVID-19 related school closures in Ontario, Canada.In the spring of 2020, we collected data from Ontario kindergarten educators in an online survey on their experiences and challenges with online learning during the first round of school closures. We asked educators whether they experienced a number of barriers to learning and concerns about returning to school in the Fall. We linked the educator responses to 2016 Canadian Census variables based on the school postal code. Poisson regression analyses were used to determine if there was an association between neighbourhood composition and the number of barriers and concerns reported by kindergarten educators.Educators who taught at schools in neighbourhoods with lower median income reported a greater number of barriers to online learning and concerns regarding the return to school in the fall of 2020 . Educators also reported a greater number of concerns regarding the return to the classroom in neighbourhoods with a greater proportion of single-parent families.Our study confirms that the educational impacts of the COVID-19 pandemic may not have been felt equitably even by kindergarten children, as educators teaching in schools in lower SES neighbourhoods reported both more barriers to online learning, and more concerns about returning to the classroom in September 2020."} +{"text": "Following the publication of the original article the authThe correct author name, \"David Lapidus\", is included in the author list of this Correction, and has already been updated in the original article."} +{"text": "NBTXR3 nanoparticle injection is a relatively novel radioenhancer for treatment of various cancers. CT scans following NBTXR3 injection of metastatic lymph nodes from head and neck squamous cell carcinoma were reviewed in a small series of patients. The radioenhancing appears as hyperattenuating, with a mean attenuation of the injected material of 1516 HU. The material was found to leak beyond the margins of the tumor in some cases. NBTXR3 is a first-in-class radioenhancer composed of crystalline hafnium oxide nanoparticles functionalized by a negatively charged surface coating with a size centered on 50 nm. The nanoparticles augment the absorption of ionizing radiation, resulting in increased tumor cell death. Intratumoral injection of NBTXR3 followed by radiation therapy has been found to be feasible and safe for a variety of tumors, such as locally advanced squamous cell carcinoma of the oral cavity or oropharynx and soft tissue sarcomas ,2,3. HowThis retrospective imaging review was performed under a prospective trial approved by the institutional review board. The patients included in this study had a diagnosis of head and neck squamous cell carcinoma that was recurrent in a previously irradiated field with limited therapeutic options. 
The patients were enrolled in a clinical trial of radiation therapy to the injected lymph node followed by initiation of anti-PD1 immunotherapy after completion of radiation (NCT03589339). NBTXR3 was supplied by Nanobiotix S.A. as a sterile aqueous suspension of nanoparticles composed of functionalized hafnium oxide crystallites. The injection volume of NBTXR3 was pre-specified per protocol and was determined by volume of the target lesion as assessed on diagnostic imaging and the injection was performed under ultrasound guidance. The initial soft tissue neck CT scans obtained following NBTXR3 injection were reviewed by a board-certified head and neck radiologist with about 10 years of experience for the distribution of the injected material within or adjacent to the lesion. The extent of injected material with respect to the tumor volume was estimated visually based on the CT scans and expressed in quartile ranges. In addition, the attenuation of the injected material was measured using manually drawn circular regions of interest on two representative slices and reported as the mean of the two measurements.Four patients were identified that underwent NBTXR3 injection of metastatic lymph nodes from head and neck squamous cell carcinoma with available post-injection CT scans . The CT This study shows that the injected NBTXR3 nanoparticle solutions are readily identifiable on CT at least up to several weeks following the injection. The material appears as markedly hyperattenuating deposits without significant beam-hardening artifact, which facilitates delineating the dispersion (coverage of the tumor volume) of the injected material. The hyperattenuating appearance is attributable to the hafnium oxide component, which has the desirable property of high X-ray attenuation for treatment purposes as a radioenhancer . This prBased on this series, the injected material typically displays irregular margins due to how it permeates the tissues. The presence of the radioenhancer material beyond the margins of the tumor is attributable to leakage from the tumor, presumably due to elevated pressures from the injection, which results in infiltration of the adjacent fat planes. Furthermore, the consistency of the tumor could impact the dispersion of the injected material. For example, the solid lesion demonstrated the most prominent leakage of the injected nanoparticles as compared to the necrotic tumors. Thus, this characteristic of the tumor should be considered when injecting the nanoparticles. It is possible that other features of the injected tumor, such as size and extracapsular spread, can lead to variations in the dispersion of NBTXR3, but this can be further studied via a larger cohort. It is also conceivable that the injected radioenhancers could spread through the regional lymphatic channels. Previous clinical studies have not commented on the presence of injected material beyond the tumor margins consistent with the reported safety profile. The presence of injected material beyond the tumor margins observed in this report did not have a negative impact on the safety or toxicity of the normal surrounding tissues. 
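The attenuation measurement described above (mean Hounsfield units within manually drawn circular regions of interest on two representative slices) can be sketched as follows. This is only an illustration, not the software used in the study; the slice array, pixel spacing, ROI centres, and radius are hypothetical values.

# Minimal sketch of a mean-HU measurement inside a circular ROI on one CT slice.
# The slice, pixel spacing, ROI centres, and radius below are hypothetical.
import numpy as np

def mean_hu_circular_roi(slice_hu, center_rc, radius_mm, pixel_spacing_mm):
    """slice_hu: 2D array of Hounsfield units; center_rc: (row, col) of the ROI centre."""
    rows, cols = np.indices(slice_hu.shape)
    dr = (rows - center_rc[0]) * pixel_spacing_mm[0]
    dc = (cols - center_rc[1]) * pixel_spacing_mm[1]
    mask = dr**2 + dc**2 <= radius_mm**2
    return float(slice_hu[mask].mean())

# Synthetic slice: background soft tissue ~40 HU, a patch of injected material ~1500 HU.
slice_hu = np.full((512, 512), 40.0)
slice_hu[250:270, 250:270] = 1500.0
roi_1 = mean_hu_circular_roi(slice_hu, (260, 260), radius_mm=3.0, pixel_spacing_mm=(0.5, 0.5))
roi_2 = mean_hu_circular_roi(slice_hu, (258, 262), radius_mm=3.0, pixel_spacing_mm=(0.5, 0.5))
print((roi_1 + roi_2) / 2)  # reported value = mean of the two ROI measurements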
It has been suggested that as little as 10% of tumor volume injection in conjunction with radiotherapy is adequate for patients with soft tissue sarcomas [It is suspected that the attenuation of the injected nanoparticle solution remains elevated for a prolonged period given that the case imaged at 18 days after injection displayed similar Hounsfield units to the other cases imaged immediately after injection. However, longer-term studies can address the longevity of the imaging results for the nanoparticles. Further studies to assess the significance of variations in degree of tumor volume injected and dispersion of the injected material in terms of efficacy and complications for head and neck squamous cell carcinoma are currently ongoing.Intralesional NBTXR3 nanoparticle injections used as radioenhancers for the treatment of head and neck cancer lymph node metastases can be readily delineated on CT scans as hyperattenuating material. This property helps verify adequate delivery of the radioenhancers for subsequent therapy."} +{"text": "UNESCO, less than 30% of researchers worldwide are women. The field of translational pharmacology is also impacted in several other ways, including the underrepresentation of female experimental animals in preclinical research as the therapeutic target for Raynaud\u2019s phenomenon (RP), a pathological condition caused by the hyperactive cold-induced vasoconstriction that is more prevalent in females than males. The authors proposed a pathway in how activation of GPER upregulates the expression of vascular alpha 2C-adrenoceptors (\u03b12C-AR).The contribution from Yang et al. discussed the role of melatonin in the development and the treatment of postmenopausal osteoporosis. The authors provided an overview of research findings regarding the correlation between the level of serum melatonin and bone mass, and the effect of melatonin on bone remodeling for promoting osteogenesis and suppressing osteoclastogenesis. As bone mass loss is one of the chief health concerns for the menopause population, this review brought an insightful perspective on utilizing melatonin as an alternative treatment strategy for postmenopausal osteoporosis.The literature review by van de Vyver et al., the researchers established an experimental protocol to investigate pharmacokinetic processes of brain uptake of a 195\u00a0kDa monoclonal antibody EGFRvIII-TCB in healthy rats after intravenous (IV) or intracerebroventricular (ICV) administration. The findings of this study may facilitate cross compound comparison, which the authors demonstrated by comparing EGFRvIII-TCB with two other tool compounds.In the original article from Arip et al. provided an oversight of the known resistances in bacteria, viruses, fungi and parasites and the remaining traditional treatment options including a discussion of the limitation of the current therapeutic approaches. The authors also presented an overview of several plant-based compounds, their mechanism of action and the potential in tackling antimicrobial resistance (AMR). We hope that readers of this article can benefit from the authors\u2019 knowledge in understanding the feasibility of utilizing plant-based metabolites and generate new ideas for research to tackle this global concern.The literature review by Huang et al. investigated the effect of two Chinese medicines, chitosan and danshen, on obstructed fallopian tubes, which is one of the main causes for reduced fertility among women. 
Chitosan is a non-toxic extract from the shells of marine creatures and danshen comes from the dried roots and rhizomes of Salvia miltiorrhiza Bge. This study showed that the effective constituent of chitosan and danshen injection was stable, effective at preventing tubal re-obstruction and increased pregnancy rates. We, as an editorial team, sincerely appreciate the authors\u2019 willingness and enthusiasm to contribute their research stories to this topic. It is our hope that the number of female researchers will continue to grow in the future and that the research community will invest more effort in conditions impacting women."} +{"text": "The significance of tumor microenvironment (TME) heterogeneity is increasingly being recognized as playing an essential role in tumorigenesis and malignant biological behaviors, as Dr. Hanahan presents. Of note, inter-patient molecular heterogeneity has hampered the clinical practice of an expanding variety of targeted therapies and the personalization of their prescriptions. An inter-patient molecular heterogeneity investigation using genomic and transcriptomic data for 4890 tumors from The Cancer Genome Atlas database showed that the repertoires of molecular targets of the clinical recommendations for accepted drugs were not congruent with the gene mutation patterns of different cancer types. This work was supported in part by the Science and Health Joint Research Project of Chongqing Municipality (2020GDRC013). The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher."} +{"text": "The last two decades have seen the timeliness of studying the connection between suicides and drunkenness. To evaluate the significance of suicidal risk factors in patients who had committed suicide while under the effect of alcohol, so as to be able to forecast suicidal risks and prevent suicides within this group, the authors have carried out an analysis of medical documentation of suicides committed in the Sverdlovsk region. The data on suicides have been taken from forensic expertise acts. The following factors have been taken into account: age, gender, social status of the suicide victim, supplementary somatic pathology, and concentration of alcohol in the victim\u2019s blood. Alcoholic addiction is a behavioral indicator of suicidal risk. The level of suicidal activity in persons with the syndrome of alcoholic addiction is much higher than within the general populace. The age of 25-49 is the peak of suicidal attempts among patients with chronic alcoholism. Genuine suicides prevail during the first stage of chronic alcoholism. The patients are inclined to demonstrate pathological suicidal reactions to social misplacement that show themselves in the form of conflicts within the family and at work.
In addition to genuine suicidal attempts made by males in the state of abstinenceThe results received confirm the role of the alcoholic factor in the formation of suicidal behavior and have the aim of elaborating new forms and methods to help prevent suicides committed in the state of alcoholic drunkennessNo significant relationships."} +{"text": "The classic approach for cochlear implant surgery includes mastoidectomy and posterior tympanotomy. The middle cranial fossa approach is a proven alternative, but it has been used only sporadically and inconsistently in cochlear implantation.To describe a new approach to expose the basal turn of the cochlea in cochlear implant surgery through the middle cranial fossa.Fifty temporal bones were dissected in this anatomic study of the temporal bone. Cochleostomies were performed through the middle cranial fossa approach in the most superficial portion of the basal turn of the cochlea, using the meatal plane and the superior petrous sinus as landmarks. The lateral wall of the internal acoustic canal was dissected after the petrous apex had been drilled and stripped. The dissected wall of the inner acoustic canal was followed longitudinally to the cochleostomy.Only the superficial portion of the basal turn of the cochlea was opened in the fifty temporal bones included in this study. The exposure of the basal turn of the cochlea allowed the visualization of the scala tympani and the scala vestibuli, which enabled the array to be easily inserted through the scala tympani.The proposed approach is simple to use and provides sufficient exposure of the basal turn of the cochlea. The classic approach for cochlear implant (CI) surgery includes mastoidectomy and posterior tympanotomyThe middle cranial fossa (MCF) approach is a proven, valuable approach, although it has been used only sporadically in CI surgery without much procedural standardization to handle cases with ossified cochleae, chronic suppurative otitis media, or inner ear dysplasiaThe anatomy of the human temporal bone is regarded as highly complex, with nerve and vascular structures closely intertwined and often separated by a few millimeters. The literature on alternative surgical approaches to the cochlea is extremely limited, while the exact tridimensional topography of the cochlea inside the petrous bone has been scarcely studied. Although variants to CI surgery have been described, challenges pertaining to the anatomy of the site still abound when the classic transmastoid approach cannot be elected.This study aimed to produce a detailed description of a new approach to cochlear implant surgery via the MCF which allows for the precise location of the basal turn of the cochlea.This exploratory anatomy study was held at the Surgical Skills in Otorhinolaryngology Lab of the Medical School of the University of S\u00e3o Paulo (FMUSP). It was approved by the Ethics in Research Committee of the FMUSP, under research protocol # 309/11.Fifty temporal bones of adult cadavers of both genders preserved in formaldehyde were used in this study. The included bone specimens had adequate squamous and petrous portions, as well as the dura mater of the middle cranial fossa.The anatomic landmarks used were the superior petrosal sinus, the stripped petrous apex, the lateral surface of the meatal plane followed on the petrous apex from its more proximal portion (in reference to the projection of the acoustic pore), and the greater superficial petrosal nerve and 2. 
The temporal bones were placed in the position in which they would be seen during surgery using the MCF approach. Surgery was performed in accordance with the steps described below: (1) exposure of the lateral-superior petrous portion of the temporal bone by detaching the dura mater until the middle meningeal artery was identified; (2) visualization of the MCF floor and identification of the greater superficial petrosal nerve, arcuate eminence, and superior petrosal sinus; (3) medial drilling of the petrous apex toward the meatal plane area, adjacent to the superior petrosal sinus and anterior to the acoustic pore; (4) identification of the dura mater of the internal acoustic meatus (IAM) by transparency; (5) drilling along the greater axis of the IAM until its lateral extremity is identified and, right in front of it, until the more superficial portion of the basal turn of the cochlea is found and opened; (6) cochleostomy with a 1 mm diamond tip drill; (7) visualization of the osseous spiral lamina separating the scala tympani and scala vestibuli; (8) placement of a dummy array through the scala tympani, oriented in the direction of the arcuate eminence. The superficial part of the basal turn of the cochlea was easily found through this approach in all 50 temporal bones. The exposure of the basal turn of the cochlea allowed the visualization of the scala tympani and scala vestibuli. Thus, the array could be easily placed through the scala tympani. The placement of the dummy CI array was documented through temporal bone computerized tomography scans. Many authors have reported variations in the anatomy of the MCF likely to be related to differences in aeration of the temporal bone.
The basal turn of the cochlea is located immediately below the MCF floor and can be easily accessed by drilling the bone lateral to the meatal plane, without posing harm to vital structures, once in this path there is only aerated bone. It is also possible to visualize the osseous spiral lamina and place the CI array through the scala tympani, reaching almost the entire length of the organ of Corti.The approach described in this paper simplifies the cochleostomy procedure and the placement of the array. When performed through the MCF approach, cochlear implant surgery takes less time, reduces the occurrence of surgical trauma, and mitigates postoperative complications. Additionally, facial nerve damage is avoided, as this approach does not require the stripping of any portion of the facial nerve, as seen in other popular procedures."} +{"text": "In order to improve the accuracy and adaptability of the Angle control of the aircraft platform automatic lifting and boarding synchronous motors, the high precision Angle adaptive control method of the aircraft platform automatic lifting and boarding synchronous motors is studied. The structure and function of lifting mechanism in automatic lifting and boarding device of aircraft platform are analyzed. The mathematical equation of synchronous motor in automatic lifting and boarding device is established in a coordinate system, the ideal transmission ratio of synchronous motor angle is calculated, and the PID control law is designed according to the transmission ratio. Finally, the high precision Angle adaptive control of the synchronous motor of the aircraft platform automatic lifting and boarding device is realized by using the control rate. The simulation results show that the proposed method can quickly and accurately realize the angular position control of the research object, and the control error is within\u2009\u00b1\u20090.15rd, which has high adaptability. The lifting system in the automatic lifting and boarding equipment of aircraft platform is related to the safety of passengers and automatic lifting and boarding equipment of aircraft platform2. It is a key part of automatic lifting and boarding equipment of aircraft platform. It is an equipment with high requirements for safety and practicability3. Once the automatic lifting and boarding equipment of the aircraft platform has problems, it will cause the elevator and adjustment plate to appear undesired vibration, and even cause the aircraft to lose control.. The lifting system in the automatic lifting and boarding equipment of aircraft platform is controlled by synchronous motor4. The lifting system in the automatic lifting and boarding equipment of aircraft platform is controlled by controlling the angle of synchronous motor to realize the application function of automatic lifting and boarding equipment of aircraft platform.The automatic landing and boarding equipment of aircraft platform is an airport facility connecting the aircraft and the waiting room to facilitate passengers to enter and leave the cabin. It plays an important role in fast, improving passengers' boarding experience and reducing airport labor costs5. The angle positioning can quickly determine the speed and position of rotor6. The high-precision angle positioning of synchronous motor has the advantages of high efficiency and fast dynamic response. 
With the development of synchronous motor, synchronous motor is widely used, which can be used in some blowers, high-power compressors and other devices, which can save mechanical speed-up devices. High-precision angle positioning of synchronous motor can not only improve operation efficiency, but also save energy7, so the high-precision angle adaptive positioning method of synchronous motor has become a research hotspot.In the mechanical field, synchronous motor has been widely used in industry, national defense, manufacturing and other fields due to its advantages of high power density, small volume and high operation efficiency8. The controller was designed based on the emotion learning and decision-making mechanism in the brain through emotion clues and sensory input, which can accurately track the speed and d-axis stator current reference, but this control method leads to poor operation stability of synchronous reluctance motor. The error of reverse operation is large.Petkar and Kumar et al. studied the predictive control method driven by three-level open winding permanent magnet synchronous motor based on computational effective model9. For the open winding permanent magnet synchronous motor powered by three-level inverter, it will produce small torque and stator flux fluctuation. However, the calculation time required for predictive control variables is very long. The maximum four voltage vectors are used to replace the 19 voltage vectors in the traditional three-level model to predict the current control, which reduces the number of predictions and the calculation time of predictions. However, this method has poor connectivity between electronic signals and mechanical models, so it is difficult to ensure that the computer simulation results match the actual data. Tornello and Scarcella studied the combined method of rotor position estimation and temperature monitoring in sensorless synchronous reluctance motor drive10, estimated the rotor position of synchronous motor by processing the voltage induced in thermistor box, and directly used this method to build sensorless controller or improve the performance of traditional model-based sensorless control system. The disadvantage of this method is that the calculation intensity is very large, high computing equipment needs to be configured, and the operation speed and the response speed of the algorithm are very slow.Daryabeigi and Mirzaei studied the enhanced emotion and speed deviation control method of synchronous reluctance motor driverIn view of the problems existing in the above literature, a high-precision Angle adaptive control simulation method was proposed for the synchronous motor of aircraft platform automatic lifting and boarding equipment, the structure of aircraft platform automatic lifting and boarding equipment was designed, the shaft components of the synchronous motor in the automatic lifting and boarding equipment of aircraft platform were analyzed, and the Angle modeling of the synchronous motor in the automatic lifting and boarding equipment of aircraft platform was completed. In this paper, the ideal transmission ratio of synchronous motor angle of automatic platform lifting and boarding device is calculated, and the association rule algorithm and PID technology are introduced into it. 
The PID control law is used to control the synchronous motor of aircraft platform automatic lifting and boarding equipment.In the structure of automatic lifting boarding equipment for aircraft platform, there is a lifting mechanism in the form of gantry structure, and the structural diagram of the lifting mechanism is shown in Fig.\u00a011; the lifting mechanism also applies more than one ton of pressure to the workpiece. At this time, it is still necessary to ensure that the moving position of the lifting mechanism remains synchronous, so as to achieve the purpose of uniform loading. In order to meet the needs of working position initialization and equipment calibration of automatic lifting and boarding equipment of aircraft platform, the lifting mechanism also needs to have synchronous zero return function.The two lifting rods in the lifting mechanism of the automatic lifting and boarding equipment of aircraft platform are driven by two synchronous motors of the same model. When the automatic lifting and boarding equipment of aircraft platform works, the positions of two lifting rods are required to keep real-time synchronization12. The magnetic rotation of synchronous motor is calculated by using the position Angle of synchronous motor in automatic lifting and boarding equipment of aircraft platform. The range of position angle of synchronous motor is analyzed when it rotates clockwise and counterclockwise in automatic lifting and boarding equipment of aircraft platform13, to apply the current vector to the stator of the synchronous motor in the automatic lifting and boarding equipment of aircraft platform, analyze the shaft component of the synchronous motor in the automatic lifting and boarding equipment of aircraft platform, and complete the modeling of the angle of the synchronous motor in the automatic lifting and boarding equipment of aircraft platform.The mathematical equation of the motor under the specified coordinates according to the current and inductance of the synchronous motor in the coordinate axis of the automatic lifting and boarding equipment of aircraft platform is calculated. The current of the synchronous motor in the automatic lifting and boarding equipment of the aircraft platform is analyzed by applying the size of the synchronous motor stator in the automatic lifting and boarding equipment of the aircraft platformThe mathematical equation of synchronous motor in automatic lifting and boarding equipment of aircraft platform in In Eq.\u00a0, \\documeWhen the inductance value is applied to the stator of synchronous motor in the automatic lifting and boarding equipment of aircraft platform, the corresponding direction current is, then 14.Since 15. When Assuming that the rotor magnetic pole of the synchronous motor in the automatic lifting and boarding equipment of aircraft platform is N pole, and the current vector Using Eq.\u00a0, the mod16, and can realize the real-time monitoring of the specific use of the synchronous motor in the automatic lifting and boarding equipment of aircraft platform under the application mode of this technology, so as to adjust the frequency converter in real time according to the current requirements17. It provides basic conditions for the implementation of energy-saving standards and the improvement of overall work efficiency. 
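For orientation, the coordinate-system model referred to above is, in the standard dq-frame form commonly used for permanent magnet synchronous motors, written as follows; this is the textbook form and is not necessarily the exact notation or numbering of the original derivation.

\[
u_d = R_s i_d + L_d \frac{\mathrm{d}i_d}{\mathrm{d}t} - \omega_e L_q i_q, \qquad
u_q = R_s i_q + L_q \frac{\mathrm{d}i_q}{\mathrm{d}t} + \omega_e \left(L_d i_d + \psi_f\right),
\]
\[
T_e = \frac{3}{2}\, p \left[\psi_f i_q + \left(L_d - L_q\right) i_d i_q\right],
\]

where \(u_d, u_q\) and \(i_d, i_q\) are the stator voltages and currents in the rotating d-q frame, \(R_s\) is the stator resistance, \(L_d\) and \(L_q\) are the d- and q-axis inductances, \(\psi_f\) is the permanent magnet flux linkage, \(\omega_e\) is the electrical angular velocity and \(p\) is the number of pole pairs.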
Based on the modeling of the synchronous motor angle in the automatic lifting and boarding equipment of the aircraft platform, and with the gain angular velocity set accordingly, the ideal transmission ratio of the synchronous motor angle is calculated, which makes it possible to bring the association rule algorithm and PID technology into the method. The reasonable application of PID technology is the key factor in properly adjusting the synchronous motor and its circuit in the automatic lifting and boarding equipment of the aircraft platform. By weighting the ratio, the speed relationship can be obtained, and it can be seen from the preceding equations that, based on the above transmission ratio, a PID control law can be adopted for high-precision angle adaptive control of the synchronous motor in the automatic lifting and boarding equipment of the aircraft platform. Its structure diagram is shown in Fig.\u00a018. Considering the synchronous motor model represented by the equations above, a PID controller is designed for it. PID control is one of the earliest developed control strategies. Because of its simple algorithm, good robustness and high reliability, it is widely used in industrial process control, especially for deterministic control systems for which accurate mathematical models can be established. The PID controller has a history of nearly 70\u00a0years. It has a simple structure, good stability, reliable operation and easy adjustment, and it has become one of the main technologies of industrial control. When the structure and parameters of the controlled object cannot be fully mastered, or an accurate mathematical model cannot be obtained, and other techniques of control theory are difficult to apply, the structure and parameters of the system controller must be determined by experience and on-site debugging. In such cases, the application of PID control technology is the most convenient; that is, when the system and the controlled object are not fully understood, or the system parameters cannot be obtained through effective measurement means, PID control is the most suitable choice. The high precision angle adaptive control method of the synchronous motor for the automatic lifting and boarding device of the aircraft platform is studied. Based on the mathematical model of the synchronous motor in the automatic lifting and boarding device of the aircraft platform, the PID control law is designed to realize the angle control of the synchronous motor. The simulation results show that the maximum and minimum values of the adaptive coefficient of the proposed method are higher than those of the two comparison methods. The proposed method can realize high precision angle control of the synchronous motor in the automatic lifting and boarding equipment of the aircraft platform. This method is expected to lay a foundation for further research on permanent magnet synchronous motors. It provides an effective means and tool for analyzing and designing real-time, high-performance synchronous motor control strategies. This control method can improve the anti-interference performance of the permanent magnet synchronous motor and has a good application prospect in the field of aircraft platform automatic lifting and boarding equipment.
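To make the PID control law discussed above concrete, a minimal discrete-time angle (position) PID loop is sketched below in Python; the gains, sampling time, saturation limit and the toy plant model are illustrative assumptions and are not taken from the paper.

class AnglePID:
    """Minimal discrete PID controller for a motor angle loop (illustrative sketch)."""
    def __init__(self, kp, ki, kd, dt, u_limit):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.dt = dt                # sampling period in seconds
        self.u_limit = u_limit      # actuator saturation limit
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, angle_ref, angle_meas):
        error = angle_ref - angle_meas
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        u = self.kp * error + self.ki * self.integral + self.kd * derivative
        # clamp the command to the assumed actuator range
        return max(-self.u_limit, min(self.u_limit, u))

# Hypothetical use: drive a crude second-order plant (inertia with viscous damping)
# toward a 1.0 rad angle reference with a 1 ms sampling period.
pid = AnglePID(kp=8.0, ki=2.0, kd=4.0, dt=0.001, u_limit=10.0)
angle, velocity = 0.0, 0.0
for _ in range(5000):                       # 5 s of simulated closed-loop operation
    u = pid.update(1.0, angle)
    velocity += (u - 0.5 * velocity) * 0.001
    angle += velocity * 0.001
print(round(angle, 3))                      # expected to settle near the 1.0 rad reference

A production implementation would normally add anti-windup and derivative filtering; the sketch keeps only the core proportional, integral and derivative terms named in the text.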
It can meet the high precision and high reliability requirements of aircraft platform automatic lifting and boarding equipment."} +{"text": "TePercutaneous nephrolithotomy training using simulation is very important to the young urologists and to all surgeons who can have multiple attempts and opportunity for trial-and-error learning. In the present paper the authors evaluate the impact of preoperative high-fidelity patient-specific percutaneous nephrolithotomy hydrogel simulations on surgical and patient outcomes using amazing figures.This paper shows the importance of the translational research and anatomy for urological practice and for the training of urologists. The authors conclude that patient-specific procedural rehearsal is effective reducing the experience curve for a complex endourological procedure, resulting in improved surgical performance and patient outcomes.The paper concludes that penile allotransplantation represents a revolutionary technique in the management of penile loss. The inclusion of external pudendal artery anastomoses appears to have prevented any form of penile skin necrosis and anastomosis of the corpora cavernosa appears sufficient for restoration of erectile function independent of the cavernous artery."} +{"text": "The quality of the oocytes is pivotal for assisted reproductive efficiency and the maturation of the oocyte represents the first key limiting step of the in vitro embryo production system. At the time of removal from the antral follicles, the oocyte is still completing the final growth and differentiation steps, needed to provide the so-called developmental competence, i.e. the machinery required to sustain fertilization and embryo development. In mono-ovular species only one oocyte per cycle is available for procreation, therefore the current assisted reproduction techniques strive to overcome this natural boundary. However, the success is still limited and overall the effectiveness does not exceed the efficiency achieved in millions of years of mammalian evolution. One of the problems lies in the intrinsic heterogeneity of the oocytes that are subjected to in vitro maturation and in the lack of dedicated in vitro approaches to finalize the differentiation process. In this review we will try to overview some of the salient aspects of current practices by emphasizing the most critical and fundamental features in oocyte differentiation that should be carefully considered for improving current techniques.The efficiency of However, when oocytes are collected in pools from antral follicles, the processes necessary to confer full meiotic and developmental competence must be completed in a considerably high proportion of them. As a result, the oocytes ability to be fertilized or develop into embryos or to term might be compromised.Development of IVM techniques was made possible starting from 1935\u2019 Pincus and Enzmann observation that oocytes removed from antral follicles before natural ovulation spontaneously resume meiosis . Thus, iin vivo during the following follicular growth and dominance phase until ovulation , a heterodimer consisting of a kinase, cdk1 and its regulatory partner, cyclin B (cdk1-cyclin B), which is involved in the regulation of G2/M cell cycle transition of all eukaryotic cells. Cyclic AMP-mediated Protein Kinase A (PKA) activity inhibits Cdk1 hence contributing to oocyte meiotic arrest . The actin vitro. 
Nonetheless, the use of butyrolactone and roscovitine have been reported to induce some modifications in the oocytes at ultrastructural level of oocyte health status and final differentiation can provide useful tools for the selection of good quality oocyte, time of prematuration (culture length) as well as the specific environment for optimizing prematuration culture systems. At the same time, the definition of customized culture system can be associated with stimulation strategies to synchronize the growth of ovarian follicles in the donor in order to obtain oocytes specifically suitable for tailored prematuration protocols.The authors declare that there is no conflict of interest that could be perceived as prejudicing the impartiality their presentation of the research findings mentioned in this work."} +{"text": "The lens is a transparent organ located near the front of the eye whose major function is to project an undistorted image of incoming light onto the retina. To achieve this the lens must maintain its transparency and refractive properties over many decades of life and in the process it needs to overcome some unique physiological challenges that are not experienced by other tissues. Being a large avascular organ that is suspended by the zonular fibers between the aqueous and the vitreous humors, the lens must exchange its nutrients and waste products without the assistance of a blood supply. The major bulk of the lens is formed by fiber cells that lack organelles; they are covered by an anterior layer of epithelial cells which differentiate at the lens equator to form the fiber cells. The degradation of nuclei and organelles that occur during normal epithelial cell-to-lens fiber cell differentiation contribute to the minimization of light scattering (reviewed in ). To comThis Research Topic covers several different areas of lens biology. These articles and reviews consider various lens ion channels and transporters and their regulation, post-translational modifications, alterations by mutations in other protein genes, and complex inter-relationships. Together, these papers help to elucidate the normal and pathological state of the lens microcirculation, lens cell homeostasis and maintenance of lens transparency.Giannone et al. provide a concise update on the lens circulation model, giving some consideration to the effects of oxidation and aging on the lens circulation and their impact on vision. Beyer et al. review the alterations in components of the lens microcirculation reported in various studies of different mouse cataract models. From these studies, they conclude that disruption of intercellular communication between fiber cells is a common feature to many of these models, even in the absence of mutations in the connexin genes. Retamal and Altenberg focus their review on gap junction channels and hemi-channels composed of connexin46 and how their properties and regulation are affected by different post-translational modifications. Some of these modifications may contribute to the changes in lens intercellular communication associated with aging and cataracts.Giannone et al. also highlight recent studies showing that forces transmitted through the zonules can lead to changes in the hydrostatic pressure gradient. Interestingly, Ebihara et al. considered the possibility that fluid flow in and out of individual lens cells (as it would occur with shape changes during accommodation) are modulated by pressure-activated channels. 
The results of their patch-clamp studies implicate activation of calcium-activated chloride channels by mechanical stimulation, a process that may involve influx of extracellular calcium through TRPV4 channels. Delamere and Shahidullah review the recent findings regarding the roles of TRPV1 and TRPV4 channels in the activation of different signaling pathways in the lens. The TRPV4 feedback loop senses lens swelling and leads to an increase in Na+, K+-ATPase activity, while the TRPV1 feedback loop senses shrinkage and leads to an increase in the activity of the Na+/K+/2Cl\u2212 cotransporter, NKCC1.Zahraei et al. They use stable isotope labeling and mass spectrometry to examine the patterns of glucose uptake and subsequent metabolism in bovine lenses. They conclude that the major site of glucose uptake is at the lens equator, and they correlate their findings with the distributions of different glucose transporters. Water channels are also differentially distributed in the lens. In their review, Schey et al. describe how the differential distribution, water permeability and regulation of three water channels or aquaporins and the changes in their subcellular localization in the different lens regions contribute to lens water transport. The authors incorporate these recent findings to propose an updated model of the lens microcirculation system.The survival of the organ in the absence of a blood supply and of its cells devoid of organelles continue to be intriguing issues. Further insights into the handling of glucose in the lens are reported in the article by https://cat-map.wustl.edu/) to human cataracts (tl.edu/) . AlthougOverall, this Research Topic has brought together contributions that address some contemporary issues in lens biology and physiopathology and provide a critical appraisal of significant historical advances in this research area."} +{"text": "The COVID-19 pandemic has accelerated the uptake and use of technology in hosting virtual and hybrid meetings and events. In this presentation, Dr. Falzarano will describe the ways in which the 2021 Conference on Engaging Family and Other Unpaid Caregivers of Persons with Dementia in Healthcare Delivery leveraged videoconferencing systems and other web-based platforms to enact a hybrid event. Specifically, she will discuss the various components involved in the event\u2019s planning and execution, including the appointment of a virtual liaison and audiovisual technicians to enable seamless integration and participation of both in-person and virtual attendees. She will also discuss how videoconferencing technology was used to facilitate the delivery of virtual panel presentations; strategies for immersing virtual attendees in both large group discussions and small group breakout sessions; and the process by which virtual attendees participated in the priority vote."} +{"text": "The clinical evidence and cost-effectiveness of digitalised prevention and treatment of mental disorders such as depression, anxiety and alcohol misuse have been steadily growing over the last two decades. However, bridging the gap between evidence-based eMental-health interventions and their actual delivery, evaluation and implementation in routine care has proven to be more difficult and a longer process than previously expected thereby reaching the estimated forecast of Roger\u2019s innovation cycle of 20 years. In contrast, during the appearance of COVID-19 in 2020 for many patients and therapists digitalized treatment was the only option. 
Meanwhile from a scientific and policy perspective the implementation and upscaling of digital mental health care innovations in routine care have gained momentum in terms of theoretical perspectives on organizational change, empirical research into how to effectively implement digital innovations from the perspective of a variety of stakeholders and organizational levels . In this presentation an overview of these issues will be presented, and it will be discussed whether COVID-19 might act as a turning point for the provision of large scale access to and implementation of digitalized mental health care in the near future.No significant relationships."} +{"text": "The quality of care provided in nursing facilities has long been a societal concern and an important focus of policy at the federal and state level. The COVID-19 pandemic has drawn increased attention to nursing facilities due to high infection and mortality rates as well as increased media attention on staffing shortages that have occurred in the wake of the pandemic. The Office of the Assistant Secretary for Planning and Evaluation (ASPE) and RTI International have recently completed a series of studies focused on nursing facility quality and staffing. This symposium will present the results of four of those studies. First, we will report the results of a study exploring states\u2019 use of value-based payment (VBP) programs as part of their nursing facility Medicaid payment systems. Second, we will present the results of a study examining inappropriate discharges from nursing facilities. Third, we will report findings from a study that examined the factors associated with nursing facility closures. Finally, we will present findings from a study that explored the effect of the pandemic on nursing facility staffing. Together these studies provide important information about the state of nursing facilities in the wake of the COVID-19 pandemic and suggest key areas of focus for policymakers."} +{"text": "The authors regret that the names of two co-authors (Jianwei Lu and Li Guo) were spelled incorrectly in the original article. The correct author names are given here.The Royal Society of Chemistry apologises for these errors and any consequent inconvenience to authors and readers."} +{"text": "Despite of the heightened risks and burdens of physical comorbidities across the entire schizophrenia spectrum disorders (SSD), relatively little is known about physical multimorbidity (CPM) in this population. The study\u2019s main objective was to explore the differences in the CPM prevalence between SSD patients and the general population (GEP).The primary outcome was to explore the difference in CPM prevalence in the younger SSD and GEP groups (<35 years).The secondary outcome was the number of psychiatric readmissions.This nested cross-sectional study enrolled 343 SSD patients and 620 GEP participants.Younger SSD patients had more than three-fold higher odds for CPM than GEP. We also demonstrated an association between the presence of CPM and the number of psychiatric admissions in the SSD population independently of possible confounders. We did not observe significant interaction of CPM and age in the prediction of clozapine use. Younger women with SSD had statistically significant, almost four-fold higher odds of CPM than women from GEP.This study suggests that women with SSD are at increased physical comorbidity risk compared to men, particularly early in the course of psychiatric illness. 
Our results highlight the importance of addressing physical health from the first contact with a mental health service to preserve general health and provide the best possible treatment outcome. Treatment of SSD must be customized to meet the needs of patients with different physical multimorbidity patterns. No significant relationships."} +{"text": "Understanding the factors that play a role in the initiation of alcohol use and the subsequent transition to later alcohol abuse in adolescence is of paramount importance in the context of developing better-targeted types of secondary (\u201cpro-active\u201d) prevention interventions. Peer and family influences together with temperament traits have been suggested to be of cardinal importance regarding the initiation of alcohol use. In addition to these factors, neurobiological and genetic factors play a major role in the risk of developing alcohol abuse upon initiation. The presentation will highlight the different psychological, neurobiological, and social factors underlying the risk of the transition to abuse and dependence in adolescence. In addition, examples of targeted prevention interventions will be highlighted. No significant relationships."} +{"text": "The recent outbreak of cases of paediatric liver failure raises numerous questions regarding the potential causal agent. Paediatric liver failure is a rare disease. Are we facing a new disease here? First, we need additional epidemiological studies to confirm a true outbreak. Second, the presence, in these five cases and in all cases reported so far, of an adenovirus and of a current or past SARS-CoV-2 infection is troubling. The fact that this epidemic of hepatitis occurred during the SARS-CoV-2 pandemic should of course be kept in mind. However, the role of SARS-CoV-2, of adenovirus and of their association as a direct necrotic agent or as a trigger of an inflammatory and/or immune reaction remains to be proven. At this stage, there are more questions than answers. We must confirm the outbreak and understand the pathogenesis of acute liver failure and the respective roles of viruses, immune and genetic factors in such a devastating disease."} +{"text": "Neuroinflammation is implicated in the pathophysiology of several neurological diseases. The key role of neuroinflammation in a wide range of neurological disorders has made it highly attractive for diagnostic examinations and therapeutic interventions in recent years. Both experimental and clinical investigations suggest the essential role of neuroinflammation in medically intractable epilepsy. Multiple lines of evidence suggest that neuroinflammation plays a pivotal role in the pathogenesis of CVD and it has been considered a potential target for therapeutic intervention. Toll-like receptors (TLR), a family of evolutionarily conserved transmembrane proteins and a pivotal part of the innate immune system, are key components of inflammatory processes in different neurological disorders, including CVD and neurodegenerative diseases. Regulation of TLR can also modulate the impact of spreading depolarization (SD). Furthermore, the clinical and experimental findings support the pathogenic role of neuroinflammation in various psychiatric disorders. The cyclic GMP\u2013AMP synthase\u2013stimulator of interferon genes (STING) pathway plays a pivotal role in coupling the sensing of DNA to the induction of strong innate immune responses.
MultiplMore efforts are required to elucidate the complex interplay among different factors and pathways that contribute to inflammatory processes in diverse central nervous system disorders. The modulation of the immune system in specific manners can provide an opportunity to treat several of these diseases with similar approaches and improve the outcomes ,39. Ther"} +{"text": "An analysis of output quantities showed that grinding wheels made of B181 cBN grains are most favorable for shaping planar technical blades of X39Cr13 steel in the grinding process.The most widely used method for shaping technical blades is grinding with abrasive tools made of cubic boron nitride (cBN) grains and vitrified bond. The goal of this work was to determine the effect of grinding wheel grain size , kinematics and feed rate ( The characteristic of production operations carried out in the modern food industry, especially in its areas related to fish processing, as described in detail in the work of Sen , Boziarifactors conducive to strong corrosive interactions ;factors that promote mechanical wear, mainly the change in the geometry of the cutting edges of blades because of the machining process ;factors that promote a change in the properties of the material, leading to an increase in its susceptibility to deformation.The cutting blades used in the food processing industry are mainly made of carbon and alloy tool steels, high-speed steels as well as stainless steels, as characterized in the work of Col\u00e1s and Totten . The latThe most widely used method for shaping technical blades is abrasive machining , includiThe superhard abrasive cBN was discovered in 1957 as a by-product of diamond synthesis research. The first commercial product appeared in 1968, when General Electric Co. introduced cBN under the name Borazon. Today, cBN is produced by many companies around the world, and its share in the group of all abrasive tools is steadily increasing [The cBN grains are characterized by sharp vertices and cutting edges and have a more developed surface than diamond. Cubic boron nitride is used in the shaping and finishing grinding processes of steel, cast iron, stainless and alloy steels; non-ferrous metals; and for machining hard-to-cut high-speed steels with high carbide content. Unlike diamond, it shows resistance to the chemical effects of iron, cobalt and nickel at high temperatures. All these advantages make it used, despite its high price, for machining those groups of materials that cannot be satisfactorily ground with conventional abrasives (due to their hardness) or diamond ,11. TablG values compared to conventional grinding [The development of grinding technology with cBN abrasive tools made it possible to significantly increase the achievable process efficiency by uRecently, there has been a noticeable trend toward the use of cBN abrasive tools with a vitrified bond ,14,15. TThere is a lack of research results in the directional literature on the use of vitrified grinding wheels with cBN grains in the grinding process of technical blades. It can be assumed that this knowledge is the know-how of tool manufacturers who do not make it public. This limits the development of abrasive machining techniques applied to shape technical blades used in the food processing industry. 
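The G value referred to in the efficiency comparison above presumably denotes the grinding ratio commonly used to characterize wheel wear; its standard definition is given below for reference (this is the conventional form, not a formula quoted from the cited works).

\[
G = \frac{V_w}{V_s},
\]

where \(V_w\) is the volume of workpiece material removed and \(V_s\) is the volume of the grinding wheel worn away during the same operation; a higher \(G\) therefore indicates a more wear-resistant and, in this sense, more efficient abrasive tool.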
The goal of this work was to expand the knowledge in this area to include the issue of selecting the abrasive grain size of cBN tools used for grinding planar technical blades.grinding wheel grain size ,process kinematics ,fv = 100; 150; 200 mm/min), on the results of the grinding evaluated by the cutting force of the blade after machining F, blade surface texture parameters as well as blade surface morphology indicated by microscopic observation.and feed rate :In the experimental research, a special stand equipped with a 5-axis computerized numerical control (CNC) grinder was used a. This gFor the study, type 5A1 grinding wheels from INTER-DIAMENT made of cBN abrasive grains and vitrified bond with the technical parameters shown in The grinding wheels used were characterized by the same features of construction in addition to the dimensions of the cBN grains. Grinding wheels with cBN grains numbered (according to FEPA standards) B126, B181 and B251 were selected for the study. The grinding process was resized in three variants of the kinematic system, in which the machining was carried out, respectively, with the circumference a, face b and speThe angular position of the grinding wheel relative to the workpiece changed in each of the listed kinematic variations of the grinding process. After the tests, the knives\u2019 cutting forces were determined after grinding on a special test stand shown in n = 3. The study used an experimental planning methodology, which resulted in a research plan and the number of repetitions of each point in the plan F recorded during the test of cutting through the test specimen with the knife after sharpening on a special test stand , followed by one cutting force measurement for each of them on a special measuring station described in more detail in the work [For each combination of variable grinding conditions, three blades were shaped : The calculated values of the amplitude parameters Smvr parameter, describing the mean void volume ratio indicated favorable surface characteristics of blades shaped with B181 and B251 grinding wheels . In addition, a few surface defects in the form of cracks pointing in a direction different from the dominant one were observed on microscopic images. They could have been formed during the procedure of removing the rewind of the blade, in which the leather tool was moved along the blade, and it is possible that the removed chips of the workpiece material as well as fragments of abrasive grains or bond remaining on the machined surface accumulated on it.Microscopic images also revealed the presence of a few irregular pits in the shaped surface. It seems that these defects occurred most frequently on the surface of blades ground with the B251 grinding wheel .F required to separate the material with their use. An analysis of the surface texture of the blade and its morphology allowed us to determine that this may have been caused by the way the blade was shaped resulting from grinding the two side surfaces of the blade. When very fine grains are used, the number of contacts of the active cutting vertices with the workpiece surface increases, with many of these grains having a negative rake angle. This results in an increase in the share of friction in machining (compared to grinding with grinding wheels with larger grain sizes). 
This leads to an increase in the heat flux generated during machining while hindering its dissipation through coolant, which can reach the grinding wheel\u2013workpiece contact zone in relatively smaller intergranular spaces. The heat penetrates quite easily into the workpiece material due to its small thickness in the machining area, and this can result in an increase in the share of the plastic deformation phenomenon of the shaped edge. As a result, the shaped blade geometry imposes the highest force in the cutting process.The size of the cBN micrograins determined the number of active abrasive grains present on the grinding wheel active surface . As the size of the grain increased, this number decreased, which, with unchanged machining parameters, translates into an increase in the cross-sectional area of the machined layer attributable to a single abrasive grain. The results obtained show that when the cBN grains of the smallest size (B126) were used, the shaped surface of the blade was characterized by features resulting in the relatively highest value of the cutting force Sbi parameter). The study shows that it was this type of grain that made it possible to obtain a favorable compromise between the described factors determining the achieved performance properties of the ground blades.Increasing the size of abrasive grains allows the coolant to reach the grinding zone more easily, while reducing the number of abrasive grains directly involved in material removal. This, in turn, increases the cross-sectional area of the machined layer by a single grain and can result in an increase in the roughness of the machined surface as well as an increased intensity of grain wear phenomena (dulling and vertices chipping). The results obtained indicate grinding wheels with cBN grains of B181 as tools that allow shaping the surface of blades with the highest load capacity was about 19% higher with respect to the total cutting force values obtained for blades ground with grinding wheels made of cBN grains numbered B181 (15.67 N) and B251 (15.72 N).The microtopographs of the blade surfaces clearly showed regular machining traces oriented according to the direction of movement of active abrasive grains on the grinding wheel surface and resulting from the combination of the rotational and feed motion of the tool.Surface texture microtopographs also revealed local defects (single vertices) and scratches, occurring mainly on the surface of blades shaped with an abrasive wheel with B126 (blade #1\u20133) and B251 (blade #7\u20139) grains.Sbi, a grinding wheel with B181 grains was selected as the wheel that allowed the shaping of a blade surface with the most favorable functional characteristics.The differences between the values of surface texture parameters calculated for the surfaces shaped by the three types of grinding wheels compared were relatively small. However, considering the values of the bearing index An analysis of microscopic images of the blade surface of planar knives confirmed the characteristic features of the blades\u2019 surfaces, previously found based on the analysis of their microtopography. Few surface defects in the form of cracks oriented in a direction different from the dominant one were observed on the microscopic images, which may be the result of the procedure of removing the rewind of the blade. 
Microscopic images also presented a few irregular pits in the shaped surface, occurring most abundantly on the surface of blades ground with the B251 grinding wheel. Based on the cutting force F value measurements, surface texture analysis and microscopic observations, it was found that grinding wheels made of cBN grains of B181 (among the grain sizes included in the study) were most favorable for shaping the planar technical blades of X39Cr13 steel in the grinding process."} +{"text": "The purpose of the present study was to use the combined mesiodistal crown widths of the mandibular and maxillary incisors as predictors for the combined mesiodistal crown widths of mandibular and maxillary canines and premolars. One hundred and twenty pairs of study models belonging to 120 Iraqi adult subjects with normal dental and skeletal relations were included in the study. The crown widths of the mandibular and maxillary incisors, canines, and premolars were assessed at the maximum mesiodistal dimension on the dental casts using a digital electronic caliper with 0.01\u2009mm sensitivity. The correlation between the combined mesiodistal crown widths of the mandibular and the maxillary incisors and the combined mesiodistal crown widths of mandibular and maxillary premolars and canines was determined using Pearson's correlation coefficient test for each arch and gender. Using simple regression analysis, the equations predicting the widths of the mandibular and maxillary premolars and canines were established. The predicted and the actual mesiodistal crown width values were compared with the use of a paired sample t-test. According to the findings of the present study, males had significantly wider teeth compared to females. Correlations between the measured parameters ranged from moderate to strong. A nonsignificant difference between actual and predicted mesiodistal crown widths was discovered. With a high degree of accuracy, the combined mesiodistal widths of the maxillary and the mandibular incisors could be utilized for predicting the combined mesiodistal crown widths of the mandibular and maxillary canines and premolars. Diagnosing dental arch length insufficiency throughout the mixed dentition stage has been considered critical for preventing malocclusion. The mandibular and maxillary anterior teeth and first molars erupt during this stage along with the primary molars and canines. Following normal exfoliation of the primary molars and canines, crowding might arise. Despite the fact that these teeth have larger widths compared to their successors along with the primary spaces, crowding might develop as a result of the large size of the teeth or a lack of size in the dental arches, as a result of an early loss of primary teeth or an increase in the trend toward soft foods, causing dental arches to not develop adequately enough to accommodate the entire set of the teeth. Dental caries and consequent early extraction of primary teeth will lead to space loss for the erupting cuspids and bicuspids and the development of malocclusion. Various approaches for predicting the width values of the permanent canines and the premolars have been devised to aid in the management of dental arch space. The teeth's average widths were initially published by Black.
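The prediction workflow summarized in the abstract above, a simple linear regression of the combined canine and premolar width on the combined incisor width followed by a paired t-test of predicted against actual values, can be illustrated with the short Python sketch below; the numerical measurements are invented for illustration and do not come from the study.

import numpy as np
from scipy import stats

# Hypothetical combined widths (mm): incisors (X) and canines + premolars (Y).
incisors = np.array([30.1, 31.4, 29.5, 32.0, 30.8, 31.1])
canines_premolars = np.array([42.3, 43.6, 41.8, 44.1, 43.0, 43.2])

# Fit the prediction equation Y = a + b * X by simple linear regression.
fit = stats.linregress(incisors, canines_premolars)
predicted = fit.intercept + fit.slope * incisors

# Paired t-test comparing predicted with actual widths; a non-significant
# difference indicates the equation reproduces the measured values adequately.
t_stat, p_value = stats.ttest_rel(predicted, canines_premolars)

print(f"Y = {fit.intercept:.2f} + {fit.slope:.2f} * X, r = {fit.rvalue:.2f}")
print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")

In the study itself the regression equation would be established on the measured sample and then checked against the actual widths; this snippet only demonstrates the two statistical steps named in the methods.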
As a reTanaka and Johnston had predMoyers , howeverDifferent approaches \u201325 used Various studies were conducted in Iraq aiming at predicting the width values of premolars and canines with the use of many approaches \u201332. ThisThis retrospective study was authorized by the University of Baghdad's College of Dentistry's scientific and ethical committees. One hundred and twenty pairs of study models belong to 120 Iraqi Arabs aged between 17 and 25, with a full complement of the permanent teeth (excluding third molars), class I skeletal and dental relationships [Stone models of the chosen sample were acquired from the archives of the orthodontic department at the University of Baghdad's School of Dentistry. The mandibular and maxillary teeth (except for the first and second molars) have been measured at their maximum mesiodistal dimension using electronic digital calipers with a 0.01\u2009mm sensitivity .(1)Descriptive statistics .(2)Shapiro\u2013Wilk test was used to test the normality of data distribution.To test intra- and interobserver reliability, the intraclass coefficient of correlation was used.r).The correlation between combined mesiodistal widths of the mandibular and the maxillary premolars and canines and combined mesiodistal widths of the maxillary and mandibular incisors was determined using Pearson's coefficient test of correlation , the data were subjected to automated statistical analyses. Among the statistical analyses were:A significance level of more than 0.05 indicates a nonsignificant difference or correlation.p > 0.05).The Shapiro\u2013Wilk test was used to determine the normality of data distribution, and the findings indicated that the data were normally distributed .The descriptive statistics and gender differences for the combined mandibular and maxillary four incisors, premolars, and canines crown widths have been listed in In Predictions of the combined widths of the mandibular and maxillary premolars and canines must be undertaken throughout the mixed dentition period to prevent the development of crowding in dental arches. Tanaka and Johnston and MoyeVarious published articles attempted to predict the width of premolars and canines using a prediction approach based on the first permanent incisors and molars, which erupted early in life \u201325. CariVarious studies \u201332 have For verifying the gender difference, the first stage in statistics, as shown in The relationship between the combined mesiodistal crown dimensions of the mandibular and maxillary incisors and the combined mesiodistal crown dimensions of the mandibular and maxillary canines and premolars was tested in the second step. Y\u2009=\u2009a\u2009+\u2009bX, in which \u201cY\u201d represents the combined mesiodistal crown widths of the maxillary and the mandibular permanent canines and premolars , \u201cX\u201d represents the combined mesiodistal crown of maxillary and mandibular incisors, \u201ca\u201d represented the constant, and \u201cb\u201d represented the coefficient of the regression. The development of regression equations was the third phase. The equation is t-test for comparing the predicted and actual measurements. 
The findings revealed no statistically significant differences between actual and expected mesiodistal crown dimensions regarding both mandibular and maxillary premolars and canines (After computing the predicted widths, the fourth step was to use the paired sample canines , which iPrediction techniques are not always 100% accurate, and they could underestimate or overestimate the size of unerupted teeth. Overestimation appears to be the best way to avoid a lack of space; however, this method may mean tooth extractions for specific individuals. An overestimation of just 1\u2009mm beyond actual width values regarding permanent canines and premolars on every one of the sides of the arch would have no significant impact on the choice to extract or not extract , 19, 21.The sample size was a crucial drawback in this research because getting a sample with normal occlusion and sound teeth is so difficult. With the development in technology and presence of special analyzing software, intraoral scanner devices, digital camera, and CBCT, it became easier to measure teeth and dental arch dimensions in three planes of space . This reThe differences between the actual and predicted mesiodistal crown width values were nonsignificant. As a result, because no radiographs are needed, and it is based upon eight permanent teeth which emerge early in life, the summation of the mesiodistal widths of maxillary and mandibular incisors could be utilized for predicting the combined mesiodistal widths of the mandibular and maxillary canines and premolars with high reliability."} +{"text": "Regularly observed and sufficient physical activity (PA) of young people depends on the creation of conditions for success in the preferred PA. Therefore, we consider the diagnostics of PA preferences to be an irreplaceable part of PA diagnostics.The aim of this study is thus to (a) detect the state and trends in the preferences of individually oriented PA of young people in different education and sports environments in the context of weekly PA; (b) to detect the associations among developing preferences of track and field and the fulfilment of recommendations within a weekly PA.In the research conducted from 2007 to 2017 participated in total 16116 participants aged from 14 to 26. We have realized a sports preferences questionnaire and weekly PA questionnaire IPAQ-long in order to detect the preferences in the individually oriented types of PA.The biggest long-term stability among the Czech and Polish boys and the Czech girls showed swimming and cycling and among Polish girls swimming and skating. The most significant increase of preferences was detected in track and field, especially among the Czech girls and boys. The girls and boys who prefer track and field meet weekly PA recommendations significantly more than those who do not prefer it. Both Czech and Polish boys and girls showed that those who prefer athletic/running activities fulfil significantly more recommendations to a weekly PA; specifically at least 5 times a week for a minimum of 60 minutes of MVPA and simultaneously at least 3 times a week for a minimum of 20 minutes of vigorous PA. Preferences of athletic/running activities also increase the chance of fulfilment of above-mentioned recommendations to a weekly PA with both girls and boys . 
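The two weekly criteria used above — at least 5 days with a minimum of 60 minutes of MVPA and, simultaneously, at least 3 days with a minimum of 20 minutes of vigorous PA — and the reported increase in the chance of fulfilment can be expressed compactly. The diary values and the 2×2 counts below are hypothetical, not the study's data.

```python
# Hypothetical weekly diary: minutes of MVPA and vigorous PA per day.
week = [
    {"mvpa": 70, "vigorous": 25},
    {"mvpa": 60, "vigorous": 0},
    {"mvpa": 45, "vigorous": 20},
    {"mvpa": 90, "vigorous": 30},
    {"mvpa": 65, "vigorous": 10},
    {"mvpa": 60, "vigorous": 20},
    {"mvpa": 30, "vigorous": 0},
]

def meets_weekly_recommendation(days):
    """At least 5 days with >= 60 min MVPA and at least 3 days with >= 20 min vigorous PA."""
    mvpa_days = sum(1 for d in days if d["mvpa"] >= 60)
    vigorous_days = sum(1 for d in days if d["vigorous"] >= 20)
    return mvpa_days >= 5 and vigorous_days >= 3

print("meets weekly recommendation:", meets_weekly_recommendation(week))

# Odds ratio of fulfilment for those who prefer track and field versus those
# who do not, computed from a hypothetical 2x2 table of counts.
a, b = 120, 80    # prefer track and field: fulfil / do not fulfil
c, d = 300, 500   # do not prefer:          fulfil / do not fulfil
odds_ratio = (a / b) / (c / d)
print(f"odds ratio = {odds_ratio:.2f}")
```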
These preferences are also important predictors for fulfilment of PA recommendations.The knowledge of trends in preferred types of PA has a predictive meaning for supporting physically active lifestyle of young people and for creation of optimal conditions to pursue popular types of PA."} +{"text": "Obsessive compulsive disorder (OCD) is a disabling condition that affects the quality of life of both the patient and the caregivers. Similarly, in patients with physical medical illness, caregivers face a significant amount of stress.This study aimed to assess and compare the caregiver strain index between patients of OCD and medical illness. Moreover, this study will also compare the care giver strain index in the patients of OCD and physical medical illness depending on the severity and duration of the illness.Study was done at Department of psychiatry, Teerthanker Mahaveer University, Moradabad. In this Cross-sectional study 2 groups of caregivers were included. The group 1 included 30 caregivers of obsessive compulsive disorder patients and group 2 included 30 caregivers for physical medical illness. The Yale-Brown Obsessive Compulsive Scale was used for measuring the severity of OCD and the stress in caregivers were drawn from Caregiver strain index.This study reported a high objective burden among caregivers of OCD compared with the physical medical illness . The age of the caregivers also showed to be significantly associated with the stress in both the groups. The severity of the OCD was shown to be correlated well with the stress of the caregivers . In contrast, in physical medical illness the duration of the disease showed no significant association with the caregiver\u2019s stress.This study showed that in patients with OCD caregivers face a higher strain compared with the physical medical illness.No significant relationships."} +{"text": "Aging populations, worsening burden of chronic disease and recent pandemic has accelerated awareness and the importance of telemedicine in providing continuity of healthcare.AGENAS is the public body responsible for the implementation of telemedicine investment (\u20ac1 billion) in the context of the NextGenerationEU plan. AGENAS has built up a working group expert panel to define the technical and informatics features of the investment. The project consists of the realization of the national telemedicine platform and the regional telemedicine services. Italian regions will implement telemedicine services based on the national guidelines defined by AGENAS, that will also monitor it through key performance indicators outlined on the basis of best practices and scientific evidence of multidimensional evaluation.National telemedicine platform will improve, optimise and standardise telemedicine services throughout the Country, considering what may already be available in regional and local healthcare contexts. Regarding telemedicine services in regional context, that will be implemented within the NextGenerationEU, they will be focused on the telemonitoring of high prevalence conditions as well as other services such as televisit, teleconsultation and teleassistance. 
Connecting patient's home with healthcare system provide benefits for patients and their families, who will be able to interact with healthcare professionals, obtaining consultation and monitoring of their health.The implementation of the investment, aiming at improving equity and integration of care, will contribute to provide real world evidence about usage, benefits and potential risk of the telemedicine in primary care for the management of chronic diseases.\u2022\u2002The investment under the Next Generation EU plan it is the lifetime chance to transform Italian healthcare service and draw a new framework to cope with the high demand in telemedicine.\u2022\u2002Improving telemedicine services will determine a breakthrough in management of patient with chronic diseases in the Italian primary care sector."} +{"text": "With the refugee movement in 2015, also forced migrated female and male medical professionals have arrived in Germany. The effect of occupation on the subjective health status of these physicians working in the German health care system was investigated on the basis of Antonovsky's sense of coherence (SOC) and the occupational science models of Siegrist and Karasek&Theorell.Using a semi-structured interview guide, nine forced migrated physicians were interviewed before and nine forced migrated physicians were interviewed during their professional medical activity. Both interview groups had an Arabic cultural background. The transcribed interviews were analyzed according to Kuckartz's content structuring qualitative content analysis using the MAXQDA 2020 software tool.The SOC of migrated physicians is favorably influenced by meaningful occupational activity and the newly gained manageability of life. Positive influences are seen in professional appreciation and collegial support at all hierarchical levels. Negative effects are perceived in experiences of discrimination, insecurity and experienced injustice in the recognition of foreign qualifications. Physical stress results from occupational overload, unfamiliar work and time pressure.The salutogenic effect of the work, the recognition in the profession and the collegial support are essential contributions to the promotion of especially mental health among forced migrated doctors. This speaks in favor of rapid and stringent integration into professional life. However, organizational barriers inherent in the German health care system should not be disregarded, which is why both legal and structural improvements should be made to the existing integration procedure before and during professional activity.Expediting the integration of migrant doctors back into their professions is of salutogenic importance.Therefore, coordinated and corrective measures should be taken to this end by those responsible for this process."} +{"text": "The submandibular ganglion is a small fusiform-shaped cluster of cell bodies of the parasympathetic nervous system. Parasympathetic innervation of the submandibular gland is not only responsible for the secretion of saliva, but it also plays a main role in the development and regeneration of the gland. The parasympathetic root of the submandibular ganglion or the posterior branch of the lingual nerve to the submandibular ganglion is one of three roots of the submandibular ganglion. Using standard search engines , papers in English discussing the anatomy, embryology, variations, and clinical significance of the parasympathetic root of the submandibular ganglion were reviewed. 
AnatomyThe submandibular ganglion is a small fusiform-shaped cluster of cell bodies of the parasympathetic nervous system Figure 1]. The. The1]. The parasympathetic root of the submandibular ganglion or the posterior branch of the lingual nerve to the submandibular ganglion is one of three roots of the submandibular ganglion. Parasympathetic fibers originate from the superior salivatory nucleus (SSN) in the pons and are conveyed by the nervus intermedius carrying both sensory and parasympathetic preganglionic fibers of the facial nerve (CN VII). The facial nerve carries these fibers via the facial canal in the middle ear and just before exiting the skull (approximately 5 mm above the stylomastoid foramen), gives off the chorda tympani ,3. PregaThe sympathetic and sensory roots run through the submandibular ganglion, while parasympathetic root fibers are the only fibers to synapse within the submandibular ganglion. The parasympathetic fibers, after synapsing in the submandibular ganglion, extend on to the submandibular and sublingual glands directly Figure . It is aNeural regulation of the submandibular and sublingual salivary glandsThe composition and volume of saliva secreted are controlled by the autonomic nervous system. The parasympathetic root of the submandibular ganglion conveys parasympathetic stimulation to induce the secretion of saliva and contraction of myoepithelial cells of the submandibular and sublingual salivary glands . CholineParasympathetic stimulation results in an increase in the volume of saliva and watery (serous) ion-rich, protein-poor saliva secreted from the salivary glands . ConversParasympathetic innervation is more abundant than sympathetic innervation in the salivary glands and exerts more control over the secretion of saliva throughout the day . The salStimulation of the parasympathetic root of the submandibular ganglion results in the secretion of saliva from the submandibular glands that produce 60% of saliva volume at rest. The sublingual glands contribute less at 7-8% of resting saliva volume . CombineEmbryologyAutonomic ganglia, including the submandibular ganglion, are derived from neural crest cells. Neural crest cells migrate and differentiate into Schwann cell precursors prior to becoming cranial parasympathetic ganglia ,24. ParaVariationsFew variations of the parasympathetic root of the submandibular ganglion have been reported. Siessere et al. reported on variations in the morphology of the four cranial parasympathetic ganglia in forty adult cadavers and noted the variation in the number and volume of parasympathetic nerve fiber bundles attached to the submandibular ganglion and their proximity to the lingual nerve ranging from 2mm to 6mm . While pClinical significanceThe parasympathetic root of the submandibular ganglion conveys the majority of salivary secretions during resting conditions. Saliva lubricates the oral cavity and is essential for mastication, digestion, and swallowing. Other components of saliva include buffers such as bicarbonate that buffer acids from dietary intake and microbial metabolism, mucins that protect the underlying epithelium, and antimicrobial proteins such as defensins and IgA that protect the body from pathogens -7,10. 
DiHyposalivation may be caused by a number of conditions that have broad etiologies including neurotransmitter receptor dysfunction, salivary gland parenchymal destruction, fluid and electrolyte imbalances, irradiation treatment for head and neck cancers, and systemic inflammatory diseases including Sjogren\u2019s syndrome, diabetes mellitus, and amyloidosis . HyposalRecently, Kawashima et al. found thAcknowledgementsThe authors sincerely thank those who donated their bodies to science so that anatomical research could be performed. Results from such research can potentially increase mankind\u2019s overall knowledge which can then improve patient care. Therefore, these donors and their families deserve our highest gratitude . The cadThe parasympathetic root of the submandibular ganglion supplies two of the three major salivary glands. Neuromuscular disorders, systemic diseases, and drugs can affect the parasympathetic root and acinar cells in the glands it innervates, leading to hypersalivation or hyposalivation and a decrease in the quality of life of affected individuals."} +{"text": "Symbiotic nitrogen fixing bacteria comprise of diverse species associated with the root nodules of leguminous plants. Using an appropriate taxonomic method to confirm the identity of superior and elite strains to fix nitrogen in legume crops can improve sustainable global food and nutrition security. The current review describes taxonomic methods preferred and commonly used to characterize symbiotic bacteria in the rhizosphere. Peer reviewed, published and unpublished articles on techniques used for detection, classification and identification of symbiotic bacteria were evaluated by exploring their advantages and limitations. The findings showed that phenotypic and cultural techniques are still affordable and remain the primary basis of species classification despite their challenges. Development of new, robust and informative taxonomic techniques has really improved characterization and identification of symbiotic bacteria and discovery of novel and new species that are effective in biological nitrogen fixation (BNF) in diverse conditions and environments. The enzyme consists of two proteins, an iron protein and a molybdenum-iron protein. The entire process uses 16\u00a0mol of ATP and a supply of electrons and protons and occurs optimally between legumes and bacteria is catalyzed by a two-component nitrogenase complex media genes. These genes are unique to symbiotic bacteria and the phylogenies of nodA, nodB, nodC and nodD resemble each other but vary considerably from the phylogeny of 16S rRNA is a common technique for analyzing genomic similarity to determine bacterial taxonomy is a technique that analyses the entire chromosomal DNA of an organism and the DNA of mitochondria, chloroplast or plasmids at a single time. WGS is the most informative and comprehensive method of characterizing genomes.WGS allows the inference of the phylogenetic relationship between a set of bacterial strains. The technique is very appealing and enables the identification of additional classes of mutation that are refractory to detection by exome sequencing. WGS offers the opportunity to interrogate noncoding regions of DNA and identify functionally important sequence variants that influence gene expression.Rhizobium, Sinorhizobium, Mesorhizobium, Bradyrhizobium and Azorhizobium among others have been sequenced and available for public use (Molina-S\u00e1nchez et al. 
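For reference, the nitrogenase-catalysed reduction mentioned earlier — consuming 16 mol of ATP per mole of dinitrogen together with a supply of electrons and protons — is commonly written with the stoichiometry below. This is the standard textbook form rather than an equation reproduced from the review.

```latex
\mathrm{N_2} + 8\,\mathrm{H}^{+} + 8\,e^{-} + 16\,\mathrm{ATP}
\;\longrightarrow\;
2\,\mathrm{NH_3} + \mathrm{H_2} + 16\,\mathrm{ADP} + 16\,\mathrm{P_i}
```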
Currently, researchers have sequenced a large number of bacterial genome and the data is easily accessed from public nucleotide databases such as the Genebank (Land et al. Due to the rapid drop in the price of technology, it is projected that many more symbiotic bacteria complete genomes will be sequenced. Sequencing whole genome is still expensive as it requires specialized laboratories and skilled expertise to analyze the sequence data. Researchers still use nucleotide sequences of different genes and genetic fingerprints for phylogenetic and diversity studies despite the markers having limited molecular information. As the cost of sequencing continues to decrease and experience is gained in data analysis and interpretation, it is anticipated that WGS will be the method of choice for future research.The technique involves genomic analysis of microorganisms by direct extraction and cloning of DNA from an assemblage of microorganisms. Development of metagenomics stems from the inevitable evidence that uncultured microorganisms represent the vast majority of organisms in most environments. The evidence arise from analyses of 16S rRNA gene sequences amplified directly from the environment, this approach avoided the bias caused by culturing and eventually led to the identification of new microbial lineages (Bowers et al. . The use of metagenomics has significantly enhanced understanding on metabolic, physiological and ecological roles of environmental microorganisms (Strazzulli et al. Metagenomics is also considered the primary technique for studying phylogeny and taxonomy of complex microbiomes (Berg et al. Proteomics is a high-throughput technology that has been adopted to investigate a wide range of biological aspects including phylogenetic and molecular divergence studies.In the recent past, considerable attempts have been made to characterize the diversity of proteins expressed in different tissues under a variety of conditions (Faize et al. Rhizobiaceae (Ashfaq et al. Recently, MS technique for rapid identification and classification of microorganisms has attracted great interests from microbiologists for use in symbiotic bacteria research (Vitorino and Bessa Polyphasic taxonomic approach puts emphasis the use of classical methods in combination with modern genetic/molecular techniques for bacterial delineation (Chan et al. There has been an increase in the number of tools for determining the identity and diversity of microbial samples in the last decades. This review has demonstrated that methods used in taxonomy have their own discriminating power varying from the individual or species levels to the genus, family and higher levels. The techniques further depend on the field of application, particular conditions, the number and the type of strains. The degree of discrimination of a technique may vary and depends on the target bacterial taxon. It is therefore important to adopt the use of a technique with minimal contradictions that emphasizes fast and reliable features for identification.However, phenotypic and cultural techniques remain the preferred presumptive methods of classifying symbiotic bacteria despite their limitations and challenges. Development of new molecular tools has really improved the identification of new legume bacteria and discovery of elite species that are effective in biological nitrogen fixation. Using an appropriate and informative technique, it is possible to correctly identify novel bacterial species with superior nitrogen fixing abilities. 
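Whether the marker is the 16S rRNA gene, a set of housekeeping genes or whole-genome data, many of the comparisons discussed above reduce to pairwise nucleotide identity. The minimal sketch below computes percent identity for two already-aligned sequences; the fragments are invented placeholders, and real analyses would rely on proper alignment tools and curated databases.

```python
def percent_identity(seq1, seq2):
    """Percent identity of two aligned, equal-length sequences (gaps written as '-')."""
    if len(seq1) != len(seq2):
        raise ValueError("sequences must be aligned to the same length")
    compared = matches = 0
    for a, b in zip(seq1.upper(), seq2.upper()):
        if a == "-" or b == "-":
            continue  # positions opposite a gap are not counted
        compared += 1
        matches += (a == b)
    return 100.0 * matches / compared

# Invented fragments standing in for an aligned region of the 16S rRNA gene.
reference = "AGAGTTTGATCCTGGCTCAG-GACGAACGCTGGCGGC"
query     = "AGAGTTTGATCATGGCTCAGAGACGAACGCTGGCGGC"
print(f"identity: {percent_identity(reference, query):.1f} %")
```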
These strains would be vital in developing inoculation programs and boost legume production especially in developing countries facing food and nutrition insecurities under changing climatic conditions."} +{"text": "The COVID-19 pandemic has a huge impact on the provision of mental health care. Particularly the limitations of face-to-face contacts and the access to treatment facilities can be expected to have significant negative effects on the practice of psychiatric treatment and outcomes. To date the extent and the severity of these effects in people with severe mental illnesses are rarely investigated in Germany.We investigated the impact of the COVID-19 pandemic on mental health and service use of people with severe mental illness in Germany.As part of a pragmatic randomized trial on the effectiveness of an integrated community mental health care program that started immediately after the first COVID-19 wave in June 2020, 1000 people with severe mental illness from different regions in Germany have been asked for the effects of the COVID-19 pandemic on their mental health care and on their general living conditions. Multivariate regression models were computed to estimate the effects of the patients\u2019 COVID-19 experiences on the outcome parameters empowerment (EPAS), psychosocial impairment (HoNOS) and unmet needs (CAN).Using prospective data in a large sample of people with mental illness, we will be able to examine the extent to which the pandemic has affected participants\u2019 mental health, their social lives, but also the use of mental health care services.The data will help to examine the impact of the pandemic on people with severe mental illness in a comprehensive way and will provide evidence where immediate action is needed to reduce further burdens and inequities.No significant relationships."} +{"text": "Pesticide residues are monitored in many countries around the world. The main aims of the programs are to provide data for dietary exposure assessment of consumers to pesticide residues and for verifying the compliance of the residue concentrations in food with the national or international maximum residue limits. Accurate residue data are required to reach valid conclusions in both cases. The validity of the analytical results can be achieved by the implementation of suitable quality control protocols during sampling and determination of pesticide residues. To enable the evaluation of the reliability of the results, it is not sufficient to test and report the recovery, linearity of calibration, the limit of detection/quantification, and MS detection conditions. The analysts should also pay attention to and possibly report the selection of the portion of sample material extracted and the residue components according to the purpose of the work, quality of calibration, accuracy of standard solutions, and reproducibility of the entire laboratory phase of the determination of pesticide residues. The sources of errors potentially affecting the measured residue values and the methods for controlling them are considered in this article. A sufficient amount of safe food cannot be provided for the continuously growing population of the world without the use of pesticides at the current technological level. The global demand for, and the production as well as the use of pesticides have increased steadily during the past decades and are projected to continue growing ,2. 
PestiTo protect consumers and the environment, the national authorities authorize the use of pesticides only after the critical evaluation of their toxicity, biological efficacy and residues remaining in/on food as well as in the environment ,9,10,11.To control the safe and efficient use of pesticides, their residues are regularly monitored in food and environmental samples in many countries according to risk-based sampling plans ,28,29,30Most publications referenced above ,48,49,50Drawing realistic conclusions and making appropriate corrective actions can only be done if the monitoring results are accurate and derived from the analyses of samples taken according to the specific objectives of the program. That can only be achieved by implementing rigorous internal quality control of the whole process of the determination of pesticide residues. The basic quality requirements for the monitoring results are defined in five major guidance documents ,64,70,71R) is influenced by four main factors (Equation (1)): sampling (S), laboratory sample handling including subsampling of large crops (CVSS), comminution (CVSp), test portion selection and analyses of sample extracts (CVA) of the measured residues (R) is equal to \u2018n\u2019. The two estimates of CVL with Equations (15) (0.1283) and (17) (0.1608) are slightly, but statistically not significantly, different. We recommend using the larger CVL to avoid underestimating the long term within-laboratory reproducibility of the residue determination process.The degree of freedom for the corresponding standard deviations [sd = CVThe gas and liquid chromatographic separation and MS detection conditions are generally well described in the publications often following the guidance given by SANTE/11312/2021, SANTE/2020/12830, USFDA documents ,63,64. HThe reported LOD values or reporting limits should always be checked at the beginning and at the end of the analytical batch of sample extracts for all targeted analytes preferably in blank sample extract, because loading the column with coextracted materials may change the resolution of the column and or shift the retention times as illustrated in The inertness and satisfactory operating conditions of gas chromatographic columns can be improved by applying the so-called analyte protectants ,113 A cr2, Log, 1/sd2). The reported results can be quite different depending on which integration options are selected. Attention is also required to assess the number of disabled points and the reported confidence limits of the slope and intercept of the regression equations. For instance, where three out of six calibration points are disabled the predicted analyte concentration should be critically considered, and possibly additional calibration injections should be made.The data analyses reports provided by the software should not be viewed as a \u2018black box\u2019 and accept it without verification of its correctness. The modern data analyzers usually offer six different curve fit types , four possible choices for the origin , and seven for weighing acquisition includes the whole peek(s) and their integration is correct .For multi-level calibration the standard concentrations should be equidistantly distributed over the calibrated range. Such a calibration program type is only justified where analytes potentially present at low concentrations are looked for in screening analyses.2) provides information only on the linearity of the calibration but does not characterize the quality of the calibration. 
It can be assessed based on confidence intervals, calculated by those of the data processing software for the slope and intercept of the regression line or from the standard deviation of the relative residuals. The latter parameters should also be reported together with \u2018r\u2019 or R2.It should be recognized that the correlation coefficient (r) or the coefficient of determination indicating the scatter of the responses around the regression line as well as the width of the confidence and tolerance intervals are substantially different. The confidence intervals around the regression line are strongly influenced by the number of standard injections (not shown in the figure).The regression residual The standard deviation of the relative residuals is calculated as:k times, the number of degrees of freedom is (nk\u20132).When each reference material is measured 2 values are practically the same, the Sdrr values indicating the large difference in the confidence/tolerance intervals in 2 is not a proper indicator of the accuracy of the calibration [rr should be <0.1 (10%). The Codex quality control guidelines suggest accepting a maximum of 20% relative residuals (30% near the instrument LOQ) [Nonetheless the Ribration . Our expent LOQ) .dietary exposure assessment of consumers;evaluating the residue levels and their compliance with national or international maximum residue limits or guidance values;assessing the contamination of the environment;providing the basis for the necessary corrective actions if the residues exceed the reasonably expectable levels in the treated crops.The monitoring programs are conducted around the world including large number of samples to provide data for carrying out:Each analysis may have significant consequences. Therefore, the results should be representative and defendable even in legal proceedings. Analysts must be aware of their responsibilities and the fact that their credibility could be at stake. They should be able to verify the correctness of their measurements with documented evidence.The international standards and guidelines provide the frame and acceptable performance criteria for performing the pesticide residue analytical measurements. They would facilitate obtaining accurate, defendable results only if the laboratory operations are performed by staff members who are aware of their own responsibility and are working in coordination with each other.It is not sufficient to validate our methods or test the performance of already validated methods once. The laboratories should establish their own internal quality control programs to be used daily for ensuring that their methods satisfy the specified performance characteristics when applied for instance to screen over several hundreds of analytes in samples of unknown origin or to test the residues in commodities before export.The provisions of guidance documents should be fulfilled bearing in mind that the priorities of internal quality control are in order: (1) good analytical practice; (2) good science; (3) minimum bureaucracy; (4) facilitating reliability and (5) efficiency. The quality assurance/quality control (QA/QC)should only be an appropriate proportion of the activities related to the analyses of samples and reporting of the results.Keeping in mind the above priorities, we emphasize that it is not sufficient to report the recoveries obtained with spiked test portions, the linearity of calibration, detection conditions, and confirmation of the identity of substances. 
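Two quantitative points made in the preceding passages are easier to follow when written out. Equation (1), referred to above but not reproduced legibly in this excerpt, combines the sampling, sample-processing and analysis contributions to the variability of the measured residue; a plausible reconstruction, assuming the usual propagation of independent relative uncertainties, is given below together with the standard deviation of the relative residuals computed with the (nk − 2) degrees of freedom mentioned in the text (n calibration levels injected k times).

```latex
CV_{R} = \sqrt{CV_{S}^{2} + CV_{SS}^{2} + CV_{Sp}^{2} + CV_{A}^{2}},
\qquad
S_{drr} = \sqrt{\frac{\sum_{i=1}^{nk} rr_{i}^{2}}{nk - 2}},
\qquad
rr_{i} = \frac{y_{i} - \hat{y}_{i}}{\hat{y}_{i}}
```

A short sketch of the corresponding calibration check follows; the data are invented, a 1/x weighting is assumed for the least-squares fit, and the relative residuals are taken relative to the predicted response, which is one common convention.

```python
import numpy as np

# Invented calibration data: n = 5 levels, k = 2 injections each (nk = 10).
conc = np.array([0.01, 0.02, 0.05, 0.10, 0.20,
                 0.01, 0.02, 0.05, 0.10, 0.20])     # mg/kg
resp = np.array([105., 210., 498., 1010., 1985.,
                 98., 205., 510., 990., 2030.])     # detector response

# Weighted (1/x) linear calibration: minimises sum((resp - fit)**2 / conc).
slope, intercept = np.polyfit(conc, resp, 1, w=1.0 / np.sqrt(conc))

predicted = intercept + slope * conc
rel_resid = (resp - predicted) / predicted

dof = resp.size - 2                       # nk - 2
sdrr = np.sqrt(np.sum(rel_resid ** 2) / dof)

print(f"slope = {slope:.1f}, intercept = {intercept:.1f}")
print(f"S_drr = {sdrr:.3f}   (aim for < 0.10)")
print("levels with |relative residual| > 20 %:",
      conc[np.abs(rel_resid) > 0.20])
```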
In addition, we propose checking and preferably briefly reporting, for instance, the validity of samples considering the parameters that can be verified in the laboratory, accuracy of analytical standards, stability of analytes during the laboratory operations, quality of calibration characterized with the relative residuals or their standard deviation, and the reproducibility relative standard deviation of the measured residues.Moreover, the selection of the parts of samples and the composition of the residues to be determined should always be matched with the objectives of the work.It is advisable to take part regularly in proficiency tests that provide a means of objectively evaluating and demonstrating the accuracy and reliability of our measurements. Critical review of the Z-scores and identification of the sources of the potential errors can help to improve the technical operation standard of the laboratory. However, participating in proficiency tests does not replace the regular and rigorous internal quality control actions.Finally, reliable results on which regulatory decisions are based can be expected only from well-trained analysts whose knowledge should be regularly updated to fully utilize the advantages of the high-performance instruments and benefit from the rapidly expanding methodical experience gained by other laboratories through the analyses of a great variety of samples."} +{"text": "Double-sided self-pierce riveting (DSSPR) has been presenting itself as a proper alternative to self-pierce riveting (SPR) with many advantages for joining geometries of different thicknesses and cross-sections. To ensure its successful future industrial application, this paper presents a detailed comparison between different strategies to produce mechanical joints by means of the DSSPR process and discusses its performance and feasibility. Results show that the use of flat-bottom holes in both sheets provide interesting results, since they allow for a precise positioning of the tubular rivet in specific pre-defined locations, thus avoiding an incorrect joining procedure. This strategy tightens the tolerances of the process, while keeping a suitable level of destructive performance as demonstrated by the lap shear tests. Pre-riveting of the sheet has also been shown to produce suitable results in combination with or without a flat-bottom hole in the opposite sheet. This strategy comes at a cost of a slightly lower performance than that obtained with flat-bottom holes in both sheets, although the requirements of force and energy to complete the joining process are smaller. The conclusions of this research work are essential for selecting the joining strategy with DSSPR according to the requirements of the intended application. For the production of mechanical joints by means of the technology of joining by forming with auxiliary joining elements , self-piLightweight construction demands for multi-material design, which in turn demand the utilization of versatile joining processes that circumvent the metallurgical incompatibilities, provide a low heat input and can provide an adequate level of flexibility . At the The application of SPR to aluminium sheets of different alloys have been extensively investigated. From those studies, the effect of rivet and die shapes on the joint properties have been studied ,7,8. 
TheAn innovative self-pierce riveting process consists of double-sided self-pierce riveting (DSSPR) which makes use of a tubular rivet with chamfered ends that are capable of producing hidden joints between sheets placed over each other b. The tuThe validation of the DSSPR joining technology has been performed with tubular rivets of stainless steel AISI 304 and aluminium AA5754 sheets, to investigate the working principle and the geometric scalability of the tubular rivets. Recently, the chamfered ends of the rivets have been optimized to improve the rivet penetration and final morphology, and therefore increase the overall joint strength of the mechanical connection .Since there is no upper thickness limit in DSSPR contrary to what it is observed for SPR, sheets with a thickness of 5 mm were originally tested , and morFrom an industrial point of view or when demanded from the material combination, some strategies may need to be developed. For instance, the authors have shown that while keeping all the other parameters constant, the chamfered angle of the tubular rivet can be modified at each end of the rivet, to account for the different resistances to penetration of the materials from the two sheets . HoweverAnother possible solution is the introduction of flat bottom holes in the strongest sheet by means of machining or forming, in order to correctly position the tubular rivets before piercing. This promotes the penetration in harder materials and their subsequent assembly to softer materials by means of the opposite end of the tubular rivet . At the Therefore, the objective of this work is to combine the different previous strategies and compare their performance when subjected to static loads. To support the investigation, numerical predictions are employed to analyse the mechanics of the deformation and the stress\u2013strain levels for the different strategies and modifications introduced. Along with the numerical analysis, different specimens are produced and subjected to lap shear strength tests to determine the performance produced by each modification. This will allow us to define the strategy to be followed in accordance with the load and energy requirements for both the joining process and the intended application, towards a closer industrial implementation of DSSPR. Different compromises are made with each joining strategy, although the introduction of flat-bottom holes in both sheets is able to provide a similar performance to conventional DSSPR joints without holes, that offers the best performance among all strategies. 
The utilization of pre-riveting follows the introduction of flat-bottom holes in terms of performance and its applicability regards the utilization of sheets of very different strengths or thicknesses.The materials chosen and their respective flow curves were retrieved from a previous work of the authors on conventional DSSPR to whichThe main process parameters of the tubular rivet were retrieved from the original work on conventional DSSPR and resuRegarding the sheets, the upper and lower sheet thicknesses tom hole b was evatom hole c, that wtom hole d or withtom hole e, was taConventional DSSPR between two plain sheets;DSSPR between sheets with each one having a flat-bottom hole with a given depth DSSPR with pre-riveting of one of the sheets in combination with a plain sheet;DSSPR with pre-riveting of one of the sheets in combination with a sheet having a flat-bottom hole with a given depth The different strategies consist of:The experimental work was carried out at ambient temperature in the same hydraulic testing machine utilized to previously obtain the material flow curves. The range of values for each parameter is presented on A minimum of five specimens were produced for each strategy with some being halved lengthwise to observe and compare the mechanical interlocking The numerical simulations of DSSPR were performed with the finite element computer program i-form where thThe finite element equations resulting from (1) use a control volume with velocities approach . The symained in .A rotational symmetry was considered for modelling the plastic deformation, with the sheets and tubular rivets being modelled as deformable isotropic objects subjected to axisymmetric loading. Their cross-sections were discretized by means of quadrilateral elements with a larger number of elements at the locations where the riveting process takes place and larger deformations are generated refer to . The tooRegarding the friction conditions, a friction factor of \u22123 was employed after which the geometry was updated based on the calculated velocities. Whenever large local deformations were generated from the rivet being pushed through the sheets which distorted some mesh elements, local repairment of the finite element model was carried out several times by semi-automatic repositioning of nodal points with appropriate transfer of field variables from previous to newer locations. That procedure was in some cases complemented by intermediate global remeshings of the entire deformed objects. Along the simulations, the finite element flow formulation equilibrium is checked by means of an iterative procedure meant to minimise the residual of (1) to within a specified tolerance. A convergence criterion of the residual equal to 10Comparisons between the Different StrategiesThrough the combination of finite element modelling and experimentation, it was possible to identify the mechanics of deformation for the different strategies analysed. Generally, the thickness of the tubular rivet increases along the deformation due to compression and strain hardening of the sheet and rivet, the latter being responsible for promoting a combined piercing and flaring of the tubular rivet, which results in the formation of a mechanical interlocking . 
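Equation (1) of the flow formulation is referenced above but is not reproduced in this excerpt. For orientation only, the irreducible flow-formulation functional commonly used in rigid-plastic/viscoplastic finite element programs of this kind, whose stationarity supplies the equilibrium condition that the iterative procedure drives to a small residual, takes the generic form below; this is the textbook expression, not necessarily the exact one implemented by the authors.

```latex
\Pi \;=\; \int_{V}\bar{\sigma}\,\dot{\bar{\varepsilon}}\;dV
\;+\;\frac{K}{2}\int_{V}\dot{\varepsilon}_{V}^{\,2}\;dV
\;-\;\int_{S_{T}} T_{i}\,u_{i}\;dS
\;+\;\int_{S_{f}}\!\left(\int_{0}^{\left|u_{r}\right|}\tau_{f}\,du_{r}\right)dS
```

Here \bar{\sigma} and \dot{\bar{\varepsilon}} denote the effective stress and effective strain rate, K is a large penalty constant enforcing incompressibility through the volumetric strain rate \dot{\varepsilon}_{V}, T_{i} are the tractions applied on the surface S_{T}, and \tau_{f} is the friction shear stress acting over the contact surface S_{f} with relative sliding velocity u_{r}.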
As the After the tools were removed, a very small protrusion is visible in the top of the sheet surfaces which results from the elastic recovery of the materials from those sheets to which it follows a steep increase of the overall force as the two overlapped sheets are being pressed against each other. The required maximum force to produce the mechanical joints is in the range of 100 kN for all the strategies analysed.Conventional DSSPR demands higher levels of joining energy, as the total rivet height has to penetrate the two plain sheets. In contrast, smaller levels of joining energy are found to the other alternatives, in particular for the alternative of sheets with flat-bottom holes, since the joining energy for the pre-riveted joints is the sum of the energy in each one of the two stages.The differences in displacement for the different joining strategies in comparison with conventional DSSPR are due to the smaller free height of the rivets that is now placed inside the flat-bottom holes, whereas for the case of the pre-riveting strategy, only half of the rivet height is compressed against the sheet in each stage.Regarding the destructive performance evaluation, the lap shear tests in refer to a, as welIn contrast, the increase of the depth of the flat-bottom holes to 2 mm provided a lower performance, justified by the reduced penetration of the rivet that results from the utilization of a deeper flat-bottom hole. This facilitates the detachment of the rivet from the sheets, as seen by the photographs included in For the pre-riveted joints and despite the differences in the amount of mechanical interlocking, the performances are very similar whether a plain sheet or a sheet with a flat-bottom hole with a depth of 1 mm is employed, since the detachment is constrained by the larger strain hardening levels of the sheet material at the pre-riveted side. The photographs in Different strategies were analysed to produce joints in overlapped sheets with DSSPR, making use of the strong advantages of this joining technology while ensuring the conditions for its industrial implementation in a wide range of scenarios.The introduction of flat-bottom holes in the sheets ensures both positioning and alignment of the rivets, while eliminating any protrusions above the sheet surfaces. As a result of the gap created by the holes, the joining forces and energies are lower than for conventional DSSPR, while their performances are very similar. However, the depth of the flat-bottom holes cannot be so high because it can compromise the performance of the mechanical connection or may even not be feasible for thinner sheet thicknesses. In the latter case, pre-riveting of one of the sheets should be employed instead to ensure a proper riveted joint.For the pre-riveting strategy, two iterations can be utilized without any relevant differences in terms of performance, other than the advantages that arise from the simplicity of placement of the opposite rivet end when a localized flat-bottom hole is already present in the opposite sheet. Nevertheless, if the opposite sheet material is much softer than the pre-riveted sheet material, a flat-bottom hole may create a weaker region in the softer sheet that can compromise the joining process. The selection of the pre-riveting strategy comes at a cost of a slightly reduced destructive performance and the need to have an additional stage other than the single stroke in which the other strategies are produced. 
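The force and energy comparison summarised above follows directly from the recorded force–displacement curves: the joining energy is the area under each curve, and the shorter free stroke of the flat-bottom-hole and pre-riveting strategies is what reduces it. The sketch below integrates invented force–displacement points by the trapezoidal rule; it illustrates the bookkeeping only and does not use the measured data.

```python
def joining_energy(displacement_mm, force_kN):
    """Trapezoidal area under a force-displacement record; kN*mm integrates to joules."""
    energy = 0.0
    for i in range(1, len(displacement_mm)):
        dx = displacement_mm[i] - displacement_mm[i - 1]
        energy += 0.5 * (force_kN[i] + force_kN[i - 1]) * dx
    return energy

# Invented records: conventional DSSPR uses the full stroke, the flat-bottom-hole
# strategy a shorter one, with both reaching a maximum force of about 100 kN.
d_conv = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
f_conv = [0.0, 10.0, 25.0, 45.0, 65.0, 85.0, 100.0]

d_hole = [0.0, 0.7, 1.3, 2.0, 2.7, 3.3, 4.0]
f_hole = [0.0, 12.0, 30.0, 52.0, 72.0, 90.0, 100.0]

print(f"conventional DSSPR : {joining_energy(d_conv, f_conv):.0f} J")
print(f"flat-bottom holes  : {joining_energy(d_hole, f_hole):.0f} J")
```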
Therefore, this strategy is more suited for different material and thickness combinations where a greater or lesser penetration of the rivets into the sheets may be desired.In conclusion, the guidelines established during this work help to create the conditions for selecting a suitable joining strategy with DSSPR depending on the material and geometry specifications."} +{"text": "Psychiatry is facing major challenges during times of a pandemic as illustrated by the current COVID-19 pandemic. The challenges involve its actual and perceived role within the medical system, in particular how psychiatric hospitals can maintain their core mission of attending to the mentally ill while at the same time providing relief to general medicine. Although psychiatric disorders are the top leading causes of global burden of disease, we can witness mental health care being de-emphasized in the wake of the massive onslaught of the pandemic: psychiatric wards are being downsized, clinics closed, psychiatric support systems discontinued etc. in order to make room for emergency care. While nobody can deny the need to act decisively and swiftly and ramp up intensive care readiness, we believe that there is no need to do this at the expense of psychiatric care. Using the pandemic COVID-19 contingency plan developed at the Department of Psychiatry and Psychotherapy of the University Hospital of LMU Munich as a case in point, we demonstrate how a psychiatric hospital can share in the acute care of a health care system facing an acute and highly infectious pandemic like COVID-19 and at the same time provide for the mentally ill, with or without a COVID-19 infection, and develop mid and long-term plans for coping with the aftermath of the pandemic.No significant relationships."} +{"text": "Risk assessment is a key component of patient care in forensic psychiatry. This audit aimed to measure the completion of different aspects of the Formulation Information Risk Management (FIRM) risk assessment for patients in the care of Low Secure Services. The FIRM incorporates formulation and a care plan into the risk assessment and should be completed for all inpatients in the trust. It was hoped that this audit would help identify any areas of improvement required in the completion of this risk assessment, and provide recommendations that would contribute to improving standards where required.Data were collected on 23rd December 2021 from the electronic patient records of 37 inpatients at a Low Secure Services Unit in Northern England. 5 audit criteria were devised following review of the trust standards regarding the completion of the FIRM assessment. These criteria included the completion of the Current / Historical Risks section, Formulation and Staying Safe / Staying Well Care Plan aspects of the assessment. It also assessed patient involvement in completion of the assessment and whether the assessment had been updated in the last Care Programme Approach (CPA) period. The findings of the audit were presented at a local academic meeting and were distributed to the relevant staff.100% of patients had the Current / Historical Risks section completed89% of the patients had the Formulation completed73% of patients had the Staying Safe / Staying Well Care Plan completedIn 16% the service user had been involved in the risk assessment completionIn 70% of cases the FIRM had been updated since the last CPA (or in the last year if not applicable)Current / Historical Risks section completion rates matched expected trust standards. 
Significant improvement was seen in completion of the Formulation and Care Plan compared to auditing done in October 2021. There was room for improvement regarding increasing patient involvement in the completion of the risk assessment, often due to it being completed at night leading to the patient being unavailable. It was recommended that the FIRM should be more consistently reviewed and updated as part of each patient's 6 monthly CPA review. A re-audit would prove useful to monitor the progress of these measures, and the scope of a future audit could also be widened to include the timeliness in which the FIRM is completed for new patients."} +{"text": "Leprosy is a neurocutaneous infectious disease with a highest prevalence in African region in the world. With the introduction of newer monitoring strategies of identifying the cases and multidrug therapy the overall global burden of leprosy has reduced. However, early detection and treatment is still important in leprosy due to the rapidly developing nerve damage by the bacteria and the associated disability. Here, presenting a case of a 65-year-old male patient of leprosy in borderline pole, with complaint of hyperpigmentation over back, chest and arm in a ring like pattern from past 4 months. Patient has been on WHO multibacillary multidrug regimen . On starting the treatment patient started noticing resolution in thickness of the lesion along with appearance of reddish-brown color of the lesion. On cutaneous examination, multiple geographic and annular patches with central clearing and shiny coppery reddish brown pigmentation in the peripheral margin of the lesion is seen. This case exemplifies the lesional hyperpigmentation, secondary to accumulation of ceroid lipofuscin pigment and Clofazimine inside the macrophage phagolysosome. Also, only marginal pigmentation with central clearing attributes to the punched-out lesion of the borderline lepromatous leprosy primarily which on histopathology typically shows presence of macrophage granuloma. Clofazimine induced pigmentation is reversible but takes months to years to clear after stopping the drug. As a dermatologist with leprosy moving towards elimination monitoring and managing the adverse effect of treatment and course of the disease is important."} +{"text": "To the Editor:The authors reported no conflicts of interest.Journal policy requires editors and reviewers to disclose conflicts of interest and to decline handling or reviewing manuscripts for which they may have a conflict of interest. The editors and reviewers of this article have no conflicts of interest.The Lehmann and colleaguesFirst, the authors did not present any comparative evidence to support that the Trifecta bioprosthesis is noninferior to the other valves on the market. Instead, they cited published data from elsewhere, which weakens any comparison. They reported that the rate of structural valve deterioration (SVD) in their cohort is comparable with the 10% rate of SVD in the Carpentier Edwards Perimount (CEP) (Edwards Lifesciences) valve at 10\u00a0years.,There is a growing body of evidence in the literature that suggests that Trifecta has an increased risk of early SVDAnother issue raised in the article is the success of valve-in-valve (ViV) transcatheter aortic valve implantation (TAVI) as a treatment for failed Trifecta valves. 
In this regard, the Trifecta is again inferior to alternative valves such as CEP because the metal stent in small sizes of Trifecta valve cannot be fractured for insertion of an adequate size of TAVI prosthesis.Overall, we believe the conclusion reached by Lehmann and colleagues"} +{"text": "Prescribing antidepressants in the treatment of bipolar depression remains highly controversial due to the inconsistency between routine clinical practice and the results of controlled trials.To assess the validity of antidepressants use in bipolar depression from the point of view of evidence-based medicine.Database search (Scopus and MEDLINE) followed by analysis of studies concerning the efficacy and safety of antidepressants in the bipolar depression treatment.The search found 23 studies. There was a high degree of inconsistency in the results, apparently related to the methodology. Only two studies compared the effectiveness of antidepressants in monotherapy with placebo. No differences were found in the study with 740 participants but in the study with 70 participants with type 2 bipolar disorder antidepressants were found to be more effective than placebo. Nevertheless, both studies had significant methodological issues. In 6 studies comparing the effectiveness of the combination of antidepressants with mood stabilizers against the combination of mood stabilizers with placebo, only the effectiveness of fluoxetine in combination with olanzapine was confirmed, other antidepressants were ineffective. At the same time, studies where antidepressants were compared with each other in combination with mood stabilizers revealed a significant clinical response to therapy. Risk of the treatment emergency adverse events were relatively low for SSRI.Despite the contradictory literature data, the use of antidepressants in bipolar depression is justified from the point of view of evidence-based medicine for certain groups of patients with taking into account risk factors."} +{"text": "Both cells and the niche may stimulate and mutually signal each other to maintain functions and regulate responses during development and adulthood.The term \u201ccellular microenvironment\u201d is a generic expression used to describe the complex collection of stimuli that contribute to cell and tissue functions . This \u201cnMoreover, the native cellular niche plays a critical role in dictating, for example, the fate of stem cells . TherefoMany research groups are now directing their efforts towards the design and manufacture of topographically controlled biomaterials that are capable of mimicking different aspects of the native stem cell niche . This isThe latest advancements in bioengineering have allowed for the control of individual aspects of the cellular microenvironment. Therefore, this Special Issue focuses on collecting outstanding contributions covering recent approaches used to recreate and promote the formation of tissue-like structures from cells and materials.Abdul-Al et al. begin by discussing the importance of stem cell niches in the human eye . They reTherefore, given the importance of microfabrication in the generation of artificial cellular niches, Ramos-Rodriguez et al. present microfabrication techniques used in the design and manufacture of cell microenvironments for tissue regeneration . In factIn summary, different fabrication techniques were developed for the successful generation of artificial microenvironments. 
Future tissue engineering scaffolds will also benefit from the identification of the critical elements of a given cell niche since this will influence the complexity and functionality of the construct. This capacity to produce dynamic 3D environments where stem cells are capable of residing and differentiating is particularly important for the recreation of bone marrow and the hematopoietic niche .Bioengineering the essential topics and exciting future innovations related to the research in this area. It is now evident that many research efforts are being directed towards the design and manufacture of topographically controlled biomaterials that can mimic different aspects of the native stem cell niche. Furthermore, this Special Issue provides outstanding insights into relevant issues related to stem cell research and regenerative medicine.In conclusion, this Special Issue, entitled \u201cDesign and Fabrication of Artificial Stem Cell Microenvironments\u201d, offers to the readers of"} +{"text": "Reducing health inequalities is on the agenda of many countries. Despite an increasing concern and awareness on health inequalities a wide gap exists in Europe in terms of political response. The main objective of JAHEE was to strengthen a cooperative approach among participating countries and implement concrete actions to reduce health inequalities. The partnership was composed of 24 countries including many strategically most relevant public health institutions in the European Union, which contributed with different backgrounds, skills and know-how to the achievement of the project objectives. The main results will be presented."} +{"text": "The presentation will focus on two main outcomes of the WHO initiative: a global research agenda to steer future evidence generation on PHSM, and a central monitoring system for PHSM research. In September 2021, a global technical consultation with over 60 global experts was organized to review the existing evidence on PHSM and identify the initiative's priorities. The consultation provided an opportunity to have an initial discussion on potential research priorities. This became the basis for an iterative online consultation process. The draft research agenda includes seven main research themes including effectiveness, unintended consequences, methodological challenges and implementation considerations affecting the uptake of and adherence to PHSM. Workshop participants will be invited to comment on the suggested themes and propose additional priority questions for the research agenda. The central research monitoring system will consist of a global repository of primary studies and reviews investigating the effectiveness and broader multisectoral impact of PHSM. Indexed studies will be mapped against the key themes of the research agenda, facilitating real-time monitoring and evaluation of its progress. An AI-based mechanism for automated updating of systematic reviews will complement the database. This one-stop shop will allow researchers and decision-makers worldwide to access the latest evidence on PHSM and keep track of the synthesized effectiveness and impact of different interventions and combinations. The platform will further provide a protected working interface. This monitoring system for PHSM research enables timely access to and utilization of evidence indecision-making processes during health emergencies and fosters international collaboration on the analysis and interpretation of data. 
Workshop participants will be invited to review the alpha version of the platform."} +{"text": "The present study is a detailed literal survey on the bond behavior of FRP (Fiber Reinforced Polymer) reinforcing bars embedded in concrete. There is an urgent need for the accurate assessment of the parameters affecting the FRP\u2013concrete bond and quantification of these effects. A significant majority of the previous studies could not derive precise and comprehensive conclusions on the effects of each of these parameters. The present study aimed at listing all of the physical parameters affecting the concrete-FRP bond, presenting the effects of each of these parameters based on the common opinions of the previous researchers and giving reasonable justifications on these effects. The studies on each of the parameters are presented in detailed tables. Among all listed parameters, the surface texture was established to have the most pronounced effect on the FRP\u2013concrete bond strength. The bond strength values of the bars with coarse sand-coating exceeded the respective values of the fine sand-coated ones. However, increasing the concrete strength was found to result in a greater improvement in bond behavior of fine sand-coated bars due to the penetration of concrete particles into the fine sand-coating layer. The effects of fiber type, bar diameter and concrete compressive strength on the bar bond strength was shown to primarily originate from the relative slip of fibers inside the resin of the bar, also known as the shear lag effect. Fiber-reinforced polymer (FRP) bars have been increasingly used in the field of structural engineering due to their significant advantages, such as high tensile and fatigue strengths, high corrosion resistance, lightweight, ease of transportation and handling, thermal and electrical insulating (GFRP only) properties and being unresponsive to magnetic fields ,2,3. On The previous research studies and field applications concentrated on four different types of FRP that can be used in the form of concrete reinforcement. BFRP and GFRP bars are the most preferred types as internal reinforcement in concrete members owing to their lower prices and ease of supply than the CFRP and AFRP bars. BFRP bars have slightly higher modulus of elasticity and tensile strength values than GFRP bars, yet the respective values of both BFRP and GFRP are considerably lower than those of CFRP and AFRP ,7. GFRP CFRP has various superiorities over the other three types, including the highest tensile and fatigue strengths and being the least vulnerable FRP type to environmental effects , fatigue and creep rupture. Nevertheless, CFRP also has some major disadvantages, including the electric conductivity, high price, vulnerability to electrochemical corrosion when in contact with metal materials in a humid environment and the highly brittle nature ,4. RecenThe bond between an FRP bar and the surrounding concrete is the governing factor that determines the efficiency and suitability of the utilization of FRP bars as concrete reinforcement. In flexural RC members (slabs and beams), the compression forces in concrete are counterbalanced by the tension forces in the reinforcement and the development of these tension forces entails the adequacy of the reinforcement\u2013concrete bond in the tension zone. The types of bond mechanism of FRP bars in concrete are similar to those of steel bars, which are the mutual adhesion, surface friction and shear interlock. 
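For reference, the pull-out and beam tests discussed throughout this review typically quantify bond as the average bond stress over the embedded bar surface. The relation below is the standard definition used for such tests, not a formula taken from this paper; F denotes the applied pull-out (tensile) force, d_b the bar diameter and l_e the embedment length.

\tau_{avg} = \frac{F}{\pi \, d_b \, l_e}

This definition also helps explain a trend discussed later in the review: because the bond stress distribution along l_e is non-uniform, averaging over a longer embedment length tends to dilute the peak stress, so longer embedment lengths generally report lower average bond strengths.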
Nevertheless, the mechanical properties of FRP bars are completely different from those of steel bars 14,15].,15.14,15In the literature, numerous experimental, analytical and numerical studies were conducted on identifying the prominent factors affecting the concrete-FRP bond; nevertheless, a great majority of these studies concentrated on monitoring the influence of only one or a few of all parameters. In the current study, on the contrary, all main parameters affecting the bond strength of FRP bars in concrete were investigated by conducting an extended literature survey. This survey indicated that there is consensus among the researchers on the effects of certain parameters on the FRP\u2013concrete bond, while the effects of the other parameters have not been clearly unveiled yet. There is a variety of opinions on the effects of these parameters. The present literature review introduces the parameters one by one together with the findings of the previous researchers on this parameter. Besides, the related provisions from the structural FRP RC codes are also discussed throughout the manuscript. Accordingly, the main goal of the present study is to present the recent developments and challenges related to the utilization of FRP bars in concrete by underscoring the most influential factors affecting the FRP\u2013concrete bond based on the previous works on this topic.d);Bar Diameter and bar spacing (cs);Concrete cover (dl) or embedment length (el);Development ;Compressive strength the inherent properties of FRP rebars; (ii) the arrangement and configuration of reinforcement; (iii) the inherent properties of concrete; and (iv) the method of testing. The complete list of parameters is as follows:A detailed section is devoted to each one of these parameters in the following discussion. A detailed table, which compiles the tests and studies used in that section, is given in each section to avoid any confusion and to clearly reveal the effects of each parameter on the FRP\u2013concrete bond. The entire table of each section was discussed and debated in its entirety with additional comments of the authors.The previous researchers did not reach an agreement on the denominations of the surface textures of FRP bars. In other words, the same surface type was termed differently by different researchers. This discrepancy caused significant confusion among researchers. To avoid confusion in the present review, each table contains two columns for the bar surface notations. The first column corresponds to the original notations in the source papers, while the second one refers to the notations suggested and used in the present text. The bar surface notations of the present review are given in The graphs and test results in the majority of the previous studies cannot provide credential comments on the effects of a certain test parameter on bond strength. The previous experimental studies do not possess the merit of focusing on a single parameter by isolating the related experiments from the remaining test parameters. Consequently, the experiments, which are intended to unfold the effects of a certain parameter on bond strength, include the coupled effects of numerous parameters. Moreover, the average values of the test results were adopted in the previous works when explicating the scatter plots. However, the values in these plots are scattered in a broad range and reliable and precise results may not be inferred by using the average values. 
Unlike the other review studies, the coupling of effects of different parameters were taken into account in the present text and the findings were elaborated by avoiding controversial arguments.The effect of a single parameter (dependent variable) on the independent variable (the FRP\u2013concrete bond strength in this case) can only be unfolded if all other dependent variables are kept fixed in the related experiments. Otherwise, the coupling between the effects of several parameters will not allow the researchers to isolate of the effect of a single parameter and set a relationship between the examined dependent variable and the independent variable. In the context of investigating the effects of the FRP material type on the FRP\u2013concrete bond strength, for instance, the surface texture, diameter, clear cover and distance from the adjacent bar of the tested bars need to be kept identical in the related experiments as well as the concrete grade, concrete type and fiber content of the concrete mixture. In that respect, the existing experiments in the literature do not suffice for the development of specific relations between each test parameter and the FRP\u2013concrete bond strength.The test data on FRP\u2013concrete bond strength is extremely scattered. The wide dispersion of this data mainly stems from the coupling between the effects of several parameters in the previous experimental studies, which were designed without paying attention to all parameters affecting the FRP\u2013concrete bond. The mathematical analyses on the data with such a dispersion do not generate meaningful and accurate expressions, since the deviation of the actual data from the mathematical curve remains high, meaning that the mathematical curve does not accurately represent the experimental data.The surface texture types of FRP bars have not been standardized with regulations, standards and previous experimental studies. For instance, the rib dimensions of the ribbed FRP bars and the grain sizes of the coating layer in the sand-coated bars are rather different in different studies. Hence, the surface type with the same notation can be excessively different in the related tests, which exacerbates the broad scatter of the test data and even results in opposing test data in different experimental studies.In this study, as much data as possible was compiled from the literature to prepare tables from which clear and accurate conclusions on the effects of a single parameter could be reached. In this way, the effects of the remaining parameters on the FRP\u2013concrete bond were minimized, if not completely eliminated. Furthermore, each finding or conclusion was justified with sufficient reasoning. The authors did not utilize scatter graphs or curves. Furthermore, the authors avoided using precise statements on certain parameters due to significant discrepancies between the related experiments in the literature. The ambiguous and even sometimes opposing findings in different studies complicates to draw definitive conclusions on these parameters. These discrepancies have been completely ignored in the previous statistical review studies. The present paper leaves it to the readers on these parameters instead of deriving conclusions from the controversial data. Only obvious and well-explained conclusions were drawn according to the existing test results, presented in tables in each section. 
In the present study, the dependent and independent (bond strength) variables needed to be presented in the table format rather than conducting an analysis and presenting them in a mathematical form for three main reasons:The authors could not make separate analyses on each type of FRP bars in each section due to the absence of an adequate number of studies in the literature. For instance, the number of studies on CFRP and AFRP reinforcing bars is so limited that conducting separate analysis and reaching specific conclusions on these two types is not possible at all. Strictly speaking, the majority of the studies in the literature pertain to the bond behavior of GFRP and BFRP reinforcing bars in concrete. Consequently, the authors tried to reach general conclusions on the effects of each parameter on the FRP\u2013concrete bond without diving into special comments on different FRP types. The findings from a single type of FRP were not generalized to all types, but only the common conclusions on all FRP types were given in the manuscript.The present pertains solely to the short-term bonding performances of FRP reinforcing bars in concrete. The FRP\u2013concrete interfacial bond strength is subject to degradation due to environmental factors, including but not limited to the temperature, corrosive environments and humidity. Furthermore, long-term effects, including creep and fatigue, are also responsible for the changes in the adherence of FRP bars to concrete. The long-term bonding performances of FRP bars and the durability issues are planned to be covered in a companion paper.Significant effort has been spent in the literature to determine the effects of FRP bar diameter on bond strength. There are three basic opinions on the influence of bar diameter on bond strength. The first of these opinions relies on the concept of the relative slip between the core and the fibers on the outer surface (shear lag effect) resulting from the low slip resistance within the epoxy resin and at the epoxy\u2013fiber interface under the action of axial tension forces ,19,20,21In this respect, the main reason for the differences between the findings of different researchers are expected to be a result of the shear lag effect. Several inherent properties of an FRP bar, including the fiber density, resin type, resin density and the mechanical properties of the constituents, were found to impinge on the shear lag effect. The degree of this effect also changes with the manufacturing conditions of the bar and the persistence of these conditions. Hence, the effects of bar diameter on bond strength can only be identified by considering all these variables.The literature contains a lot of studies on the influence of fiber type on FRP\u2013concrete bond strength. The statistical, review and research studies on this very topic are presented in In order to determine the effect of fiber properties on bond strength, The only clear outcome from this comparison is that the GFRP bars is the least favorable polymer bars in terms of adherence with concrete among the four types of FRP. However, reaching a crystal-clear conclusion about the contribution of AFRP, BFRP and CFRP fibers to the adherence with concrete is impossible to reach based on the available test results. The improved bonding properties of CFRP, BFRP and AFRP bars originate from two main reasons. The first reason is the shear lag effect, just like the related discussion on the effect of bar diameter. 
The shear lag effect is lower in FRP bars with resin and fibers strong in tension as compared to those with resin and fibers weak in tension. Considering that AFRP, BFRP and CFRP bars generally possess higher tensile stiffness values in comparison to the GFRP rebars ,43, the In the literature, a limited number of studies have been conducted on the effect of the modulus of elasticity of FRP on the concrete\u2013FRP bond. Although various views on the effect of the change in the modulus of elasticity on adherence are presented in the literature ,52,53,54A good number of research studies in the literature are devoted to the influence of surface texture on concrete\u2013FRP bar bond strength. Statistical, review and research studies on this topic are summarized in Accordingly,R1 and R2 is the fine sand-coating (SCf).R3 and R4 shows the coarse sand-coating (SCr).R5 is the standard sand-coating (SC).R6 illustrates the helically wrapped (HW) surface.R7, R8 and R9 are the helically wrapped and sand-coated surface (HWSC).R10 shows the indented (In) surface.R11, R12 and R13 correspond to the ribbed (Rb) surface.As shown in Hence, the adherence behavior alters into the mechanical interlocking with increasing concrete strength in bars with fine coating. Moreover, the surface areas of the fine sand-coated bars are larger due to the presence of indentations on the surface, and therefore, the contribution of improving the concrete quality to the bond strength becomes more considerable in these bars, having greater contact surface with concrete.The other surface types, which have deeper surface deformations, convey the internal forces to the surrounding concrete through both friction and mechanical interlocking. The two-component transfer mechanism is the main reason for the higher bond strengths of these bars. The mechanical interlocking capacity changes with the rib height, rib spacing and rib thickness. From this point of view, with some exceptions, as the rib height increases and the rib spacing decreases, the bond strength increases. This increase stems from the increased surface area for the mechanical interlocking forces to develop. On the other side, significantly narrow and deep ribs might also lead to considerable reductions in the rigidity of ribs, and hence, lower limits for the spacing and upper limits for the rib height need to be established with the help of more detailed studies. In general, the bars with \u201cIn\u201d surface type have larger rib spacings due to the increased rib thickness values and these bars possess lower bond strengths as compared to the bars with \u201cRb\u201d surface texture. The lower bond strength values are caused by the fact that the force transfer in the indented bars rely mostly on the surface friction rather than mechanical interlocking. As in the case of indented bars, the bond strength values of the ribbed bars remain below the respective values of the helically wound bars.In HW bars, the proportion of the forces transferred by the surface friction and mechanical interlocking varies with the height of the ribs. The contribution of mechanical interlocking increases with increasing rib depth, resulting in the increased adherence. Additionally, the bar starts to behave similar to a wedge with increasing rib height. Consequently, the shear forces are transmitted to the HW bars gradually, dissimilar to the sudden transfer of the shear forces in the FRP bars with In and Rb surface types. 
At around the peaks of the ribs, the mechanical interlocking forces turn completely into friction forces. This gradual transfer might also delay the shear failure of the beam by providing the transmission of shear forces along numerous interlaminar shear surfaces on the FRP bar instead of a single surface. The bars with \u201cRb\u201d type of surface have superior bonding strength values when compared to the bars with \u201cHW\u201d type of surface.The effects of concrete cover and bar spacing on FRP\u2013concrete bond strength have also been subject to a variety of studies in the literature. The researchers have sought to identify the changes in the failure modes of FRP bars with changing bar spacing and concrete cover. Albeit an adequate number of studies were devoted to the effect of concrete cover, the bar spacing has caught the attention of only few researchers. d (bar diameter) in all concrete types, i.e., NSC , HSC (high-strength concrete), UHSC (ultra-high-strength concrete) .,63.60,63oncrete) ,65. Whenplitting , while ampletely . ACI 440The previous studies adopted a variety of surface textures and FRP mechanical properties. Furthermore, significant variations in concrete strength and embedment length makes it almost impossible to reach precise conclusions on the need for concrete cover for the complete prevention of concrete splitting. The concrete splitting failure is a result of the transfer of splitting forces to concrete , which iIn summary, the FRP\u2013concrete bond strength generally increases with increasing concrete cover since the confinement around the bar is improved with increasing cover thickness. The general tendency of the change in the FRP\u2013concrete bond strength with increasing cover is complicated to specify as the failure mode also changes with increasing thickness of the concrete layer around the bar. Strictly speaking, the bond strength undergoes sudden changes while increasing the concrete cover, particularly at the transition of the failure mode from concrete splitting to debonding or rupture. Furthermore, concrete cover cannot be considered alone in the evaluation of test results, since this parameter governs the failure mode of a rebar together with the bar embedment length. Hence, the previous studies refrained from focusing on the simultaneous effects of concrete cover and embedment length on the FRP\u2013concrete cover. Instead, only one of these two parameters changed in the related tests while keeping the other parameter fixed.d, the bond strength was found to be unaffected by further increasing the bar spacing [The number of studies on the influence of bar spacing on the FRP\u2013concrete bond is rather limited and these studies showed that increasing the spacing between bars can contribute to the bond strength up to 50% ,50. If t spacing .There has been a great deal of research undertaken in the literature on the effects of embedment or development length on the FRP\u2013concrete bond strength ,76,77,78The studies on the effects of reinforcement location concentrated on two positions, namely the lower and upper portions of the member, according to the direction of concrete cast. 
These studies generally concluded that the bond strengths of the upper bars are smaller than their lower counterparts since the water, air and fine aggregates in the mixture move upwards and accumulate underneath the rebars 65,81,8,881,82.In two of these studies ,81, the A great majority of the previous studies in the literature consisted of pull-out tests for determining the FRP\u2013concrete bond strength, and hence, the number of studies on the effects of transverse reinforcement on the FRP\u2013concrete bond is rather limited. The confining effect of transverse reinforcement on the longitudinal bars can only be reflected with the help of beam tests. There are two common opinions on this confining effect. The first one is the contribution of transverse reinforcement to bond strength through limitation of the crack widths in the member . AccordiThe effect of concrete compressive strength on bond behavior may differ in steel and FRP reinforcing bars. Since steel rebars are homogeneous and isotropic as well as have a wholistic structure, bond failure patterns and stresses are governed by the shear strength of concrete . HoweverThe effects of fiber addition to concrete mixture, which aims at controlling the crack widths by increasing the tensile strength of concrete, on the FRP\u2013concrete bond has been subject to various studies in the literature. Some of these studies are given in This cracking might also be affected by the fibers inside the mixture by improving the tensile strength of concrete and this effect will primarily depend on the density and length of fibers in the vicinity of the rebars . NonetheVarious types of concrete were employed in the previous studies on the FRP\u2013concrete bond . HoweverThe present paper is a detailed literature review on all parameters affecting the bond behavior of FRP reinforcing bars embedded in concrete. The influence of each parameter is discussed in the light of the findings of previous researchers. Precise and clear comments are given throughout the manuscript. The controversial and opposing comments of the previous researchers are not mentioned in the manuscript, since most of these comments originate from the differences between the testing conditions and test methods in different studies and negligence of certain parameters affecting the FRP\u2013concrete bond. With the aim of not listing the inconsistent and ambiguous findings, only the following unquestionable conclusions are given in the present text together with the justifications behind each finding.The bond strength of an FRP bar decreases with increasing bar diameter. This decrease is associated with three possible reasons. First, the slip of fiber layers within the resin, also known as the shear lag effect, is aggravated with increasing bar size and this effect has a negative impact on the FRP\u2013concrete bond. Secondly, the amount of air voids and mixing water, accumulating underneath the bar, increases with increasing bar size and the weakness of the concrete around the bar results in the reduction of the bond strength. Finally, the mechanical interlocking and surface friction forces of a bar decrease as a result of the greater degree of Poisson\u2019s effect on the bar with increasing bar diameter.The bond strength values of GFRP bars are lower than the respective values of their CFRP, AFRP and BFRP counterparts, embedded in a similar concrete mixture. 
The lower adherence of GFRP to concrete stems from the more considerable shear lag effect in the GFRP bars due to the lower axial stiffness than the other three types of FRP. The greater slip of fibers from the core in GFRP results in the reduced bond strength values of these bars. Furthermore, the increase in the radial thermal expansion of the BFRP, AFRP and CFRP bars due to the friction at the bar\u2013concrete interface improves the mechanical interlocking and surface friction of these bars in concrete, as compared to the GFRP bars, which are known to have smaller thermal expansion coefficient.FRP bars with coarse sand-coating layer have higher bond strength values in normal-strength concrete than the bars with fine sand-coating layer. However, the bonding behavior of the fine sand-coated bars is improved to a greater extent with increasing concrete strength as compared to the coarse sand-coated bars. The better compaction and the lower amounts of air voids in high-strength concrete mixtures enable the fine particles of concrete to penetrate into the fine sand-coating layer and improve the bond behavior.The mechanical interlocking mechanism is improved in ribbed bars with increasing rib height and decreasing rib spacing. The increase in the surface area for the development of mechanical interlocking forces results in the FRP\u2013concrete bond strength to increase when using deeper ribs. However, further studies on the topic are needed to determine the minimum spacing and maximum height limits of the ribs since too closely-spaced and/or too deep ribs might reduce the rib rigidity and have adverse effects on the bond strength.The thicker and more widely-spaced ribs in the bars with indented surface enables them to transmit greater surface friction forces as compared to the ribbed bars. Therefore, the bond strength values of the indented bars remain below the respective values of the ribbed and helically wrapped bars.The concrete cast depth underneath an FRP bar influences the bond strength to a significant extent. With increasing cast depths, the amount of air voids and water underneath a bar increases, resulting in the compressive strength of concrete surrounding the bar and the FRP\u2013concrete bond strength to decrease.According to the existing studies in the literature, a clear concrete cover of at least three times the bar diameter is compulsory to avoid concrete splitting failure and to allow the debonding or tensile rupture failures to govern the specimen behavior. Increasing this spacing beyond seven times the bar diameter does not have a considerable effect on the FRP\u2013concrete bond strength.The contribution of increasing the compressive strength of concrete to FRP\u2013concrete bond strength is bounded by upper limits. Increasing this strength contributes to the shear strength of the concrete layers around the bar, yet beyond certain limits of concrete strength, the peeling of the outer bar surface from the core and/or slip of the fibers inside the resin can trigger the bond failure of the bar rather than the shear failure of the surrounding concrete.The bond strength tends to decrease with increasing embedment length of an FRP bar in concrete. The non-uniform stress distributions along the bar length and the reductions in the ability of a bar to convey the internal forces through surface friction are the primary reasons for the reduction in bond strength with increasing embedment length. 
This decrease follows a uniform path with increasing embedment length.The transverse reinforcement definitely affects the FRP-concrete bond strength. However, further studies are needed to unfold the degree of this effect due to wide range of variation of the other test variables in the existing studies.The maximum aggregate size and fiber length controls the initiation and spread of cracks in concrete based on the surface texture. Further studies are needed to uncover the effects of maximum aggregate size on the bond strengths of FRP bars embedded in the concrete mixtures with fibers.The existing studies are not sufficient to specify the concrete cover boundaries for the change in the type of failure of FRP bars in concrete due to wide variations in the surface types and mechanical properties of the tested bars as well as the wide ranges of concrete strength and bar embedment length in the related tests. The concrete splitting failure necessitates the transfer of adequate splitting forces in concrete , which i"} +{"text": "Extending well beyond the energy conversion function, transition metals are also essential for mitochondrial biosynthetic and proteolytic machinery, and antioxidant defenses. Moreover, mitochondria are the sites of synthesis of vital metal-containing cofactors such as heme and iron-sulfur clusters that support a variety of cellular functions inside and outside of mitochondria. Last but not least, metal homeostasis in mitochondria influences the management of transition metals at the cellular level. Therefore, unraveling the mechanisms behind the function, formation, and regulation of mitochondrial metalloproteome is crucial to understanding the involvement of transition metals in the life of eukaryotic cells.Mitochondria are essential metabolic and redox hubs of eukaryotic cells, which apart from their best-known role in energy generation, participate in a plethora of biochemical processes. Many of the critical mitochondrial functions depend on transition metals as cofactors of enzymes that are housed within the organelle. One canonical example is the mitochondrial respiratory chain, which utilizes the redox chemistry of iron-sulfur clusters, copper ions, and heme to transport electrons from reducing equivalents such as NADH and FADHc oxidase. Nonetheless, a number of outstanding questions in the area of mitochondrial transition metal homeostasis remain to be addressed. For example, the regulation of metal homeostasis in mitochondria and coordination of cofactor biosynthesis with other functions of these organelles remains to be elucidated.In recent years, various aspects of the role and regulation of transition metals homeostasis in mitochondria were explored. An important research hotspot was the assembly of metal cofactors into mitochondrial metalloproteins and the machinery that chaperones and secures safe handling of immature cofactors. Another vital development includes the identification of key enzymes that catalyze the synthesis of metal cofactors such as iron-sulfur clusters and heme and proteins involved in the delivery of copper to the respiratory complex IV, also known as cytochrome This Research Topic of Frontiers in Cellular and Developmental Biology features three reviews and one original research report that provide state-of-the-art perspectives on the biogenesis of mitochondrial metalloproteome, its functions, and regulation.Medlock et al. 
for the first time summarizes and discusses an intriguing connection between homeostasis of mitochondrial transition metals and mitochondria contact site and cristae organizing complex that emerged over the last few years. The unique (and still largely uncharted) spatial organization of mitochondrial membranes is inherently intertwined with metabolic and energy transitions in mitochondria. Could it also serve as a coupling mechanism that efficiently distributes organellar resources to secure an adequate supply of metal cofactors? The review by Yien and Perfetto provides an updated overview of mitochondrial heme synthesis with particular emphasis on mechanisms that regulate this pathway. The authors focus equally on the enzymes directly involved in successive steps of heme synthesis as well as on much less explored mitochondrial transporters that control the movement of heme and critical intermediates across the mitochondrial membranes. In their perspective, the authors underscore the importance of tissue-specific mechanisms that tailor heme synthesis to the requirements of a particular cell type. The authors also stress the significance of protein-protein interactions and complex formation in crucial nodes of this vital pathway. An emerging concept of spatiotemporal regulation of mitochondrial processes is also relevant and applicable to metal homeostasis in the organelle. In the era of cryo-EM technology-powered structural studies that are proven to be so effective in deepening our understanding of the dynamic nature of the other mitochondrial supramolecular machinery, the fact that enzymes involved in heme synthesis form such assemblies is particularly intriguing. Related to that note, the article by Obi et al. presents a comprehensive review of the structure of the key heme biosynthetic enzyme ferrochelatase. Based on the structures of substrate-bound and free ferrochelatase from humans and other species, the authors propose an unifying model of the catalytic cycle of this enzyme that describes mechanisms of entry of protoporphyrin IX and iron into the active site as well as the release of the heme molecule. Obi et al. further discuss regulation and posttranslational modifications of ferrochelatases. Interestingly, the authors review the inhibition of human ferrochelatase as a side effect of certain approved drugs, additionally touching upon the curative potential of the evoked photosensitivity in photodynamic therapies. Finally, the original research paper by Brischigliaro et al. investigates the effects of cytochrome c oxidase deficiency in flies and reports on the discovery of new intriguing connections between the fitness of mitochondrial respiratory chain and cellular homeostasis of transition metals such as copper.The review by Altogether, this Research Topic offers a selection of articles providing comprehensive summaries or conveying new findings pertaining to a complex topic of the biogenesis and maintenance of mitochondrial metalloproteome. Both established and new lines of evidence link these processes to diseases in humans for which no effective treatments currently exist. It is therefore important to understand these facets of mitochondrial biology in order to develop potential new avenues for therapies targeting relevant pathways. 
We anticipate that this Research Topic will be of equal interest to both the experts in the field and researchers looking to learn more about metals in mitochondria."} +{"text": "If we were to look back at the history of orthopedics only two generations ago, the intertrochanteric osteotomy was a well-established procedure for the treatment of osteoarthritis of the hip. Proximal femur osteotomies were mainly reserved for selected pediatric orthopedic indications, leaving the majority of modern adult reconstructive surgeons unfamiliar with the technique and its pitfalls. THA thus became the \u201cbread & butter\u201d procedure for the adult reconstruction surgeon. With total hip arthroplasty on the rise and the corresponding huge focus on refining implants and surgical techniques, there was still enough space for the birth of the niche of hip preservation surgery. This field emerged after a publication by Reinhold Ganz and his colleagues in 2003. With the current improved understanding of morphology and pathology, a revival of the periacetabular osteotomy (PAO) and proximal femur osteotomies is on the rise and in demand, more than at any time before. The fact that reduced femoral antetorsion causes an early anterior conflict upon internal rotation of the hip has raised the significance of appreciating torsional correction in cases where cam and pincer correction alone would result in an inadequate restoration of internal rotation. The robust method of tackling such pathologies is by working on correcting the underlying morphological abnormality. This mandates sweeping the dust off the stored instrument sets of angled blade plates that are likely to still be found in the cellars of most orthopedic departments. The coming generation of young adult hip surgeons will have to revive the techniques of osteotomy and possibly initiate implant innovation and precision surgery in the field. On the acetabular side, anterosuperior undercoverage is no longer the only indication for reorientation. The understanding of three-dimensional anatomy underlines the problem of posterolateral undercoverage in a truly retroverted acetabulum as well as focal anterior undercoverage in an anteverted acetabulum with sufficient posterolateral coverage. This allows a wide range of possible variants of pathomorphology that require individualized treatment. With the improvement in safety and efficacy of reorientation procedures, such as the PAO, the threshold for performing surgery is also lowering, allowing many more patients to benefit from the power of its success. Digital technology will aid in this development by driving improvement in the simulation of pathology, surgical planning, and accuracy of concept implementation. The ideal future hip surgeon will encompass a diverse set of surgical skills that are utilized on an individual patient-specific basis after meticulous analysis of individual pathomorphology based on evidence and digital assistance. That said, the future of hip surgery is certainly exciting and bright!"} +{"text": "The racial riots of 2020 in the US, beginning in Minneapolis, had a global impact, inciting protests internationally. We look at the impact of COVID, the social isolation and frustration that therefore existed, and how this affected the instigation of the riots. --To review the history of racism in the United States and the abolition theories, comparing the US and UK.
--To consider the impact of international immigration on the cultural tension in the US; Minnesota accepted a large population of Somalis in 1992 as refugees. --To explore how this progress toward racial equality has stagnated under the leadership of President Donald Trump. --To look at how COVID, in the context of the above historical factors, has served as an unwitting catalyst to racial riots and global protests. Literature research included historical accounts of principles of abolition, post-Civil War Reconstruction political maneuvers, 1950s segregation protests and political support (US and UK), refugee relief efforts made by the US [specifically related to Somalia], and reports regarding the impact of COVID on the 2020 reaction to racial injustice. Evidence suggests that, across time periods, the responses of politicians [US and global] resulted in negative relations internationally with respect to immigration. The unique situation created by COVID resulted in a crucible effect following the death of George Floyd. Previous attempts at creating equality have proven unsuccessful and apathetic on the part of those in power. This has led to a situation where COVID created a perfect storm that ignited racial tensions in the US."} +{"text": "Knowledge of the anatomy of the sphenopalatine artery (SPA) and its branches is fundamental for the success of the endoscopic treatment of posterior epistaxis. However, the complex anatomical variations seen in the irrigation of the nasal cavity pose a significant surgical challenge. Objective: This paper aims to describe the endoscopic anatomy of the SPA in human cadavers. Materials and Methods: This is a contemporary cross-sectional cohort study carried out between April 2010 and August 2011. The presence of the ethmoidal crest on the lamina perpendicular to the palatine bone and the location of the principal sphenopalatine foramen (PSF) and the accessory sphenopalatine foramen (ASF) were analyzed in 28 cadavers, and the branches emerging from the foramens were counted. Results: Fifty-six nasal fossae were analyzed. The ethmoidal crest was present in 96% of the cases and was located anteriorly to the PSF in most cases. The PSF was located in the transition area between the middle and the superior meatus in all cases. The ASF was seen in 12 cases. Most nasal fossae (n = 12) presented a single bilateral arterial trunk emerging from the PSF. In other cases, three (n = 8) or two (n = 5) arterial trunks emerged bilaterally from the PSF. In most cases, the SPA emerged as a single trunk from the ASF. Conclusions: The anatomy of the SPA is highly variable. The success of the treatment for severe epistaxis relies heavily on adequate knowledge of the possible anatomical variations of the sphenopalatine artery. The main source of blood in the nasal cavity is the sphenopalatine artery (SPA), a branch of the external carotid system. The SPA is situated in the posterior region of the nasal cavity, and is involved in most severe epistaxis episodes. The sphenopalatine foramen is a notch on the upper margin of the palatine bone between the orbit and the sphenopalatine process. It turns into a foramen as the palatine bone joins the sphenoid bone on the lateral nasal wall.
Variations in size, shape, location, and number of branches emerging from its orifice have been described scarcely in the literatureCurrent trends dictate that posterior packing should be replaced by the endoscopic ligation of the sphenopalatine artery in cases of posterior bleeding to reduce morbidity and patient discomfort levelsThis study aims to describe the endoscopic anatomy of the sphenopalatine artery in cadavers and the possible anatomic variations, and assess the bone landmarks used to identify the sphenopalatine foramen.A descriptive anatomic study on the sphenopalatine artery was performed at the Pathology Service of a tertiary care hospital between April of 2010 and August of 2011. Twenty-eight cadavers were included in the study. Specimen collection took place between three and 12 hours as of the time of death of the subjects. The data collected on the cadavers included race, gender, age, time of death, time of autopsy, and cause of death as per the autopsy protocol of the hospital's pathology service. Cadavers with nose trauma, previous nose surgery, or nasal diseases preventing dissection were excluded.The analysis of the endoscopic anatomy of the sphenopalatine artery was carried out using a nasal endoscope with a 30-degree rigid scope, 4.0 mm (Karl Storz), fiber optics, and a Komlux light source (250 watts), a Toshiba monitor (model CRT 1030), and a video endoscopy device (model Toshiba IK-CU44A). Images were recorded and stored using a video capturing software .All dissections were done bilaterally, in accordance with the following steps of nose endoscopic surgery: (1) The middle concha was shifted medially to expose, with the aid of an endonasal probe, the transition between the posterior medial maxillary wall and the perpendicular portion of the palatine bone; one vertical incision was made on the mucosa of the lateral nasal wall of 1.5 cm from the beginning of the perpendicular portion of the palatine bone to the upper portion of the inferior concha; (2) A mucoperiosteal flap was made along the posterior area of the nose, from the transition between the middle and posterior meatus, until the sphenopalatine foramen and its vessels were identified; (3) Dissection was done until the anterior wall of the sphenoid sinus to identify other possible arterial branches. Samples of one centimeter starting from the point of emergence of the foramen were collected from the best segments bilaterally. These samples were taken to histopathology for testing to validate the arterial origin of the visualized structure.The anatomic structures were identified and categorized for: (1) Presence of anterior ethmoidal spine on the perpendicular plate of the palatine bone; (2) Location and presence of the sphenopalatine foramen (SPF) and the accessory SPF (ASPF). The SPF was considered as the largest bony orifice on the lateral nasal wall from which the arterial trunks adjacent to the ethmoid crest of the lamina perpendicular to the palatine bone. 
The ASPF was described as a smaller bony orifice beyond the SPF; (3) Number of arterial branches stemming from the identified foramens; and (4) Prevalence and analysis of symmetry .The locations of the sphenopalatine foramen and the accessory sphenopalatine foramen were defined in relation to the insertion point of the middle concha: a) on the superior meatus (SM): the opening of the SPF appears above the middle concha insertion point; b) on the transition between the superior and middle meatus (SM/MM): the opening of the SPF occurs under the ethmoidal spine; c) on the middle meatus (MM): the opening of the SPF is below the line of insertion of the middle concha.This study was approved by the local Ethics Committee and given permitFifty-six nasal fossae of 28 cadavers were analyzed, 13 of which males and 15 females. Mean age was 58.32 \u00b1 17.17 years, and ages ranged between 11 and 91. Fifteen were Caucasians and 13 were of African descent. .Table 1SThe ethmoidal spine was present in 96.4% of the cases . Only onMost nasal fossae had a single bilateral arterial trunk (n = 12) emerging from the SPF. Others had three (n = 8) or two arterial trunks (n = 5) bilaterally . On onlyHistology analysis confirmed that all specimens collected were of arterial origin.DISCUSSIONKnowledge on the anatomy of the sphenopalatine artery and its branches is fundamental in the endoscopic treatment of severe posterior epistaxis. Success rates can be greater than 95% in the hands of experienced surgeons. This procedure is associated with low complication ratesThe methods employed in nasal anatomy studies vary depending on the specimens and approaches used. Herrera Tolosana et al.Age ranges are rarely cited in studies on nasal anatomy. Cranial facial development from childhood to adult age alters the ratio between the surgical landmarks of the skullThe ethmoidal spine is present and close to the sphenopalatine foramen in most human beings. Many authors consider the ethmoidal spine as the main anatomic landmark when facing difficulties ligating the sphenopalatine artery during an episode of epistaxisHerrera Tolosana et al.Simmen et al.The presence of an accessory foramen on the lateral nasal wall has been well established in the literature. However, its occurrence varies significantly. The presence of an accessory foramen may be related to failure of the endoscopic treatment of severe epistaxis\u2022ANTERIOR: the anterior border of the perpendicular portion of the palatine bone is the best landmark to start dissecting the mucoperiosteal flap. The easiest way to find this landmark is by palpating the posterior medial maxillary wall at the level of the hrizontal portion of the middle concha. The posterior medial maxillary wall is a moving soft structure. As you palpate it from the anterior to the posterior nasal fossa, you will soon find a stiff hard structure, the anterior plate of the palatine bone, where the mucoperiosteal detachment is initiated .Figure 4\u2022SUPERIOR: the highest point of the anterior plate of the palatine bone ends where it meets the fat tissue of the pterygopalatine fossa. The dissection of the mucoperiosteal flap up to this level will reveal all possible accessory foramens of the superior portion of the sphenopalatine artery complex .\u2022INFERIOR: the insertion point of the inferior concha is the most inferior point in the extension of the mucoperiosteal flap. 
From this line, the accessory branches of the sphenopalatine artery cannot be identified (Figure 5). \u2022POSTERIOR: as the anterior wall of the sphenoid sinus and the Eustachian tube are identified, the dissection of the mucoperiosteal flap is completed with the certainty that all branches and foramens of the sphenopalatine artery have been identified. In order to ensure the identification of all branches of the sphenopalatine artery, we propose the delimitation of the endoscopic \u201cSphenopalatine Quadrangle\u201d. The anatomy of the sphenopalatine artery is highly variable. The ethmoidal spine is an important anatomic landmark, as it is present in almost all cases in a position anterior to the sphenopalatine foramen. The most frequent location of the sphenopalatine foramen was in the transition between the middle and superior meatus."} +{"text": "The presence of acquired speech disorders of varying severity can cause maladjustment and job loss. Often there is no adequate psychological and psychotherapeutic assistance for these patients, which hinders the process of recovery and reintegration into the social environment. To study the level of anxiety and depression in patients with dysarthria who have undergone various types of cerebrovascular accidents. To give practical recommendations regarding the correction of these conditions. To assess the level of anxiety and depression, the Hospital Anxiety and Depression Scale (HADS) was used as the most convenient for application in clinical practice. The study involved 42 people aged 45-60 years with the consequences of cerebrovascular accident in the form of various types of dysarthria and without severe movement disorders. All participants had a university degree and a confirmed history of stroke. According to the data obtained, 45% of patients had symptoms of depression and 52% had symptoms of anxiety. It should be noted that the proportion of men with manifestations of depression and anxiety was higher. The initiation of active antidepressant therapy in a hospital setting showed a positive subjective effect in 38% of patients. The use of modern methods for assessing the level of anxiety and depression in patients with speech disorders should become an obligatory stage of diagnostic measures. Psychological assistance and pharmacological correction not only help patients adapt to new social conditions, but also help prevent the progression of depressive manifestations. No significant relationships."} +{"text": "The working environment of rotating machines is complex, and their key components are prone to failure. The early fault diagnosis of rolling bearings is of great significance; however, extracting a single-scale fault feature of an early weak fault of rolling bearings is not enough to fully characterize the fault feature information of a weak signal. Therefore, aiming at the problem that the early fault feature information of rolling bearings in a complex environment is weak and the important parameters of Variational Modal Decomposition (VMD) depend on engineering experience, a fault feature extraction method based on the combination of Adaptive Variational Modal Decomposition (AVMD) and optimized Multiscale Fuzzy Entropy (MFE) is proposed in this study. Firstly, the correlation coefficient is used to calculate the correlation between the modal components decomposed by VMD and the original signal, and the threshold of the correlation coefficient is set to optimize the selection of the modal number
Firstly, the correlation coefficient is used to calculate the correlation between the modal components decomposed by VMD and the original signal, and the threshold of the correlation coefficient is set to optimize the selection of the modal number As the rotating support components of most machinery, the fault detection and diagnosis of rotating machinery such as rolling bearings is essential to prevent mechanical failures ,2. A varThe key to fault diagnosis lies in analyzing the original signals from the time-frequency domain and constructing feature sets from different aspects to describe the running state of the rotating machinery. At present, one of the most commonly used time-frequency analysis methods is Empirical Mode Decomposition (EMD) . Bustos Fuzzy Entropy (FE) is a measure of the probability that a time series will generate new patterns when its dimensionality changes . MoreoveMost of the above methods only consider the fault diagnosis in the case of a single feature, which creates a problem of insufficient fault feature representation. In order to describe the weak fault features from multiple perspectives better, this study proposes an early fault feature extraction method that combines AVMD and MFE. The parameters of the MFE are optimized by the Particle Swarm Optimization (PSO) algorithThe second part of the article introduces the principles of AVMD and PSO to optimize MFE. The third part introduces the specific process of constructing a fault feature set based on AVMD combined with optimized MFE. The fourth part uses the DDS test bench for experimental analysis. It is proved that the combination of AVMD and optimized MFE can describe the MFE characteristics of fault signals in multiple frequency bands, which can better characterize more weak fault information, and is more sensitive to the early weak faults of rotating machinery. Additionally, compared with the traditional decomposition method, the method proposed in this study has higher fault diagnosis accuracy.The overall framework of VMD is to solve the variational problem in order to minimize the sum of the estimated bandwidths of each eigenmodal function, where each eigenmodal function is assumed to be a finite bandwidth with different center frequencies. To solve this variational problem, the alternating direction multiplier method is used to continuously update each eigenmode function and its center frequency, which can demodulate each eigenmode function to the corresponding fundamental frequency band and, finally, extract each eigenmode function and its corresponding center frequency.The decomposition steps of the variational modal decomposition are as follows:(1)Initialize (2)(3)update according to the update formula for (4)update (5)Given the precision \u03c9, if the stopping condition Where Compared with EEMD and other adaptive decomposition algorithms, the VMD algorithm has better sparse modal components, but the decomposition result of the VMD algorithm is affected by multiple parameters, among which the modal number This article chooses the method of combining the correlation coefficient with VMD to optimize the value of parameter In the formula, Therefore, due to the smaller The penalty factor is one of the parameters that must be adjusted manually in VMD. Too small a penalty factor VMD and the correlation coefficient are combined to optimize the selection of the modal number. 
The initial modal number ,In order to check the effectiveness of the algorithm, this section takes the simulated signal as an example, and uses the AVMD algorithm proposed in this article to verify the decomposition effect. For periodic simulation signal: It can be seen from the correlation coefficient between each modal component and the original signal that when As shown in According to the sampling frequency of the periodic simulation signal, From the spectrogram of each modal component after VMD under different Therefore, from the above analysis of the simulated signal, it can be seen that the default penalty factor Multiscale Fuzzy Entropy (MFE) is improved on the basis of fuzzy entropy and combined with multiscale entropy to measure the complexity and similarity of time series under different scale factors. MFE can be calculated as follows:(1)Coarse-grained original time series (2)Calculate the fuzzy entropy of coarse-grained sequence under each scale factor, and its calculation formula is:The parameter selection of the MFE has a great influence on the extracted MFE features of the vibration signal. Selecting the default parameter settings cannot adequately characterize the weak fault characteristics of rotating machinery. It will have a greater impact on the diagnostic results.The Particle Swarm Optimization (PSO) algorithm is one of the evolutionary algorithms. It starts from a random solution, finds the optimal solution through iteration, and evaluates the quality of the solution through fitness. However, it is simpler than the genetic algorithm rule, and finds the global optimum by following the currently searched optimal solution. This algorithm has the advantages of easy implementation, high precision and fast convergence. Additionally, it has demonstrated its superiority in solving practical problems. Therefore, this study chooses to optimize the parameters of MFE with a PSO algorithm.The PSO is initialized to a group of random particles (random solutions), and then the optimal solution is found through iteration. At each iteration, the particle updates itself by tracking two \u201cextreme values\u201d Obtain the original signal, initialize the modal number (2)Perform VMD on the vibration signal and calculate the correlation coefficient between each mode and the original signal. When the correlation coefficient satisfies the termination condition, the correlation coefficient threshold less than (3)The optimized VMD is performed on the vibration signal to generate (4)In order to minimize the Ske of the original signal, the optimal MFE parameters are obtained by adaptive optimization using the PSO algorithm.(5)Calculate the MFE of (6)Input the fuzzy entropy feature set obtained in the previous step into the classifier for fault identification.In order to further verify the effectiveness of the method proposed in this article, five different types of bearing and gear faults on the DDS test bench were collected, and the cube method proposed in this article was used for fault diagnosis to prove the effectiveness of the method. Five different categories of bearing and gear failure are: (1) inner ring fault; (2) outer ring fault; (3) rolling body fault; (4) inner ring + gear wear fault; (5) inner ring + broken tooth fault. 
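To make the coarse-graining and fuzzy-entropy steps described above concrete, the following Python/NumPy sketch computes multiscale fuzzy entropy for a single mode. It is a minimal reading of the standard MFE definition rather than the authors' implementation; the embedding dimension m, tolerance r, fuzzy exponent n and scale range used here are illustrative defaults, the time-delay parameter is omitted for brevity, and in the paper the MFE parameters are instead tuned by PSO with skewness (Ske) as the objective.

import numpy as np

def fuzzy_entropy(x, m=2, r=None, n=2):
    # Fuzzy entropy of a 1-D series; m = embedding dimension,
    # r = similarity tolerance, n = fuzzy exponent (illustrative defaults).
    x = np.asarray(x, dtype=float)
    N = len(x)
    if r is None:
        r = 0.15 * np.std(x)  # often fixed from the SD of the scale-1 series instead

    def mean_similarity(dim):
        count = N - m  # use the same number of vectors for dim = m and dim = m + 1
        vecs = np.array([x[i:i + dim] for i in range(count)])
        vecs = vecs - vecs.mean(axis=1, keepdims=True)      # remove local baseline
        d = np.max(np.abs(vecs[:, None, :] - vecs[None, :, :]), axis=2)  # Chebyshev distance
        sim = np.exp(-(d ** n) / r)                          # fuzzy membership degree
        np.fill_diagonal(sim, 0.0)                           # exclude self-matches
        return sim.sum() / (count * (count - 1))

    return np.log(mean_similarity(m)) - np.log(mean_similarity(m + 1))

def multiscale_fuzzy_entropy(x, max_scale=10, m=2, r=None, n=2):
    # Coarse-grain the series at each scale factor and compute fuzzy entropy.
    x = np.asarray(x, dtype=float)
    values = []
    for s in range(1, max_scale + 1):
        usable = (len(x) // s) * s
        coarse = x[:usable].reshape(-1, s).mean(axis=1)      # non-overlapping averages
        values.append(fuzzy_entropy(coarse, m=m, r=r, n=n))
    return np.array(values)

# Example: MFE feature vector of one 1024-point mode obtained from AVMD.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    mode = np.sin(2 * np.pi * 50 * np.arange(1024) / 12000) + 0.3 * rng.standard_normal(1024)
    print(multiscale_fuzzy_entropy(mode, max_scale=8))

In the pipeline described above, a function of this kind would be applied to each of the modes retained by AVMD, and the resulting entropy values concatenated into the feature vector passed to the classifier.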
The DDS fault diagnosis comprehensive test bench and test gear are shown in Using the method proposed in this study, the initial AVMD cross-correlation coefficient threshold was set to After that, taking Ske as the objective function, the parameters of MFE embedding dimension M, scale factor S and time delay T are optimized by PSO algorithm. Set the population size to 15 and the maximum iteration times to 30. Set the value range of optimization parameters according to fault signal characteristics: M is , S is and T is . The MFE parameters of the five types of fault vibration signals are shown in A total of 500 samples was extracted from five different fault vibration signals collected by the DDS test bench, with each sample having 1024 sampling points. It can be seen from In order to further analyze the feature vector set constructed by the method in this study, the T-SNE method was introduced to visualize the eigenvector set, and the distribution of five types of faults in the low-dimensional space was observed. As shown in As shown in the The MFE feature set constructed by the five methods were put into the SVM, and the accuracy of the five methods is shown in As shown in It can be seen from In this study, a method combining AVMD and optimized MFE is proposed to construct a fault feature set, and this method is verified by DDS test bench data. Compared with other decomposition methods and feature extraction methods, the fault diagnosis accuracy rate in the proposed method reaches the highest 98%. The superiority of the method proposed in this study is proved.(1)By analyzing the simulation and experimental results, AVMD optimizes the mode number (2)The early fault feature extraction method based on AVMD and optimized MFE mainly decomposes the fault signal through AVMD. Taking Ske as the objective function, PSO searches for the optimal parameters of MFE and extracts the MFE features in multiple frequency bands. Through this method, the MFE of the modes in different frequency bands is calculated by decomposing the adaptive variational modes, and the MFE in different frequency bands is used to form a feature vector set. In this way, the weak fault information of rotating machinery can be more fully characterized, and it is more conducive to the early weak fault identification. Simultaneously, it can achieve higher fault diagnosis accuracy.(3)The MFE feature extraction method based on AVMD can effectively extract the weak fault information of the fault signal, but the calculation amount of MFE is large. The next work will improve this problem to improve the computational efficiency."} +{"text": "Ireland previously had widespread voluntary fortification but there has been a major decline in the number of food staples fortified with folic acid in Irish supermarkets in the past 15 years. In this research we set out to examine the level of folic acid in food staples in supermarkets with the leading market share in the Republic of Ireland.The food labels of food staples were photographed in supermarkets with the leading market share in the Republic of Ireland between 2017 and 2021.The data was extracted and collated in an excel spreadsheet. The data was analysed to examine the level of folic acid in each product. We compared the levels captured at the current times with the levels previously captured in 2017.Preliminary analysis suggests that folic acid level in food staples in Ireland continues to decline. 
Folic acid was not found in any breads (except a number of gluten free breads), milks, spreads but was found in several cereals marketed mainly at children.This study reports on the declining levels of folic acid in the food chain in Ireland. The number of food staples fortified with folic acid continues to decline demonstrating that voluntary fortification in Ireland is no longer an effective measure for passively augmenting the folic acid levels of consumers. This is of concern due to the incidence of neural tube defects in Ireland largely preventable by folic acid.This study reports on the declining levels of folic acid in the food chain in Ireland.The number of food staples fortified with folic acid continues to decline."} +{"text": "Cholelithiasis is considered to be\u00a0the most common biliary pathology. They have been categorized into three types, which are pigment stones, cholesterol stones, and mixed types of stones with varying incidence. The condition may be asymptomatic for significantly long durations and in most cases, the presence of gall stones is an incidental finding. The patients may present with pain in the abdomen in stages of cholecystitis or advanced stages or cases of gall stones causing the obstruction. Gallbladder stones are formed through a very complex procedure with the contribution of numerous factors, where the main initiating step is supposed to be the development of a state wherein there is supersaturation of the bile, which in turn gives rise to accumulation and stasis of the bile and the development of gall stones. One of the factors is said to be the hypothyroid state. Hypothyroidism itself is a significantly common endocrine disorder that affects almost every nucleated cell in the body. There is decreased efficacy of the thyroid gland. The serum T3 and T4 levels might be found on the lower side whereas thyroid-stimulating hormone (TSH) values are found to be high. In some of the cases, though the T3 and T4 levels are maintained within the normal limits, the TSH shows raised values, which are labeled as subclinical hypothyroidism. The state of hypothyroidism may act upon the amount of bile secretion, the flow of bile into the intestines, cholesterol metabolism, and the action of the sphincter of Oddi. Studies have shown results pointing towards the correlation between these two factors. The basic\u00a0mechanism behind the correlation between cholelithiasis and hypothyroidism is supposed to be due to the action of the hypothyroid state on the functioning of the sphincter of Oddi. The hypothyroid state is supposed to be decreasing the tendency of the sphincter of Oddi to relax, thus causing stasis of the bile, which over time leads to initiation of supersaturation of the bile and formation of gall stones. Both subclinical hypothyroidism and clinical hypothyroidism are found to be significantly common in patients having cholelithiasis. We, in this review article, have taken into consideration various studies which have been performed regarding this topic worldwide. The studies have been performed on individuals who are already diagnosed with either of these diseases and are then screened for the presence of the other disease included in this study. The degree of correlation varies according to the location of the stones and their sizes. Though various studies show varying results to some extent, overall almost all the studies show significant pieces of evidence of the correlation between cholelithiasis and hypothyroidism. 
Cholelithiasis is mentioned as the commonly encountered biliary pathology and gall stones can be pigmented, cholesterol, or mixed.\u00a0The literature has shown that 2.4% of female patients treated for hypothyroidism had cholecystectomy . The derThe hypothyroidism disease itself can either be of\u00a0hypothyroidism (overt) or subclinical type, in which\u00a0the serum level of thyroxine is lesser than the expected normal . In caseThe researchers aim to document the correlation between cholelithiasis and hypothyroidism not only in various age groups but also establish a link between subclinical hypothyroidism and cholelithiasis.\u00a0The process of formation of gall bladder stones is itself a very complex one. There can be numerous factors that can ultimately lead to cholelithiasis in patients having hypothyroidism. The decreased serum thyroxine levels affect the metabolism of cholesterol and that in turn leads to the process of supersaturation.\u00a0It also affects the filling of the gall bladder,\u00a0its motility, and contractility of the gall bladder. This in turn leads to the retention of cholesterol in the gall bladder and provokes the nucleation and maturation of the gallbladder stones.\u00a0The hypothyroid state decreases the secretion of bile, leading to precipitation and formation of the stones. It also decreases the sphincter of Oddi's relaxing tendency, leading to further stasis of the bile as it expresses the thyroid hormone receptors named TR beta 1 and beta 2.\u00a0A prospective study proved that the subclinical hypothyroidism state also is comparatively more among common bile duct (CBD) stone patients . In 2010Amount of bile secretion and hypothyroidismAn observation accomplished on a hypothyroid-affected person after thyroidectomy confirmed the equal uptake of\u00a0the radioactive contrast in the biliary tree depicting that it has no remarkable role in the early stages of hypothyroidism. But research accomplished on rats with the assistance of cannulation of the bile duct displayed a huge impact in the prolonged stages of hypothyroidism; for that reason, it's been taken into consideration that there is supposed to be at least some impact of hypothyroidism in the process of secretion of bile in the long term Hypothyroidism affects the flow of bile into the intestinesThere have been significantly variable findings in radioactive studies which might be because of modifications in the composition of the bile and gall bladder motility and the resistance essentially because of modifications in the sphincter of Oddi contractility.\u00a0The transport time between the hilum and gall bladder is notably extended in hypothyroid cases. Hypothyroidism decreased and hyperthyroidism elevated the flow rate of the bile into the duodenum Hypothyroidism and cholesterol metabolismMost hypothyroid sufferers have raised serum levels of cholesterol and deranged lipid profile\u00a0which is considered responsive to the treatment of hypothyroidism regardless of concomitant hyperlipidemia. The reduced low-density lipoprotein receptor activity is the risk factor behind the deranged serum levels of cholesterol. Reduced regulation of the hydroxymethylglutaryl-coenzyme A (HMG-CoA) reductase enzyme results in reduced cholesterol synthesis. Hypothyroidism causes decreased cholesterol secretion and the treatment of hypothyroidism leads to accelerated cholesterol secretion. 
The hypercholesterolemia state leads to supersaturation of cholesterol, in turn inflicting decreased motility, contractility, and atypical filling of the gall bladder. This directly contributes to the nucleation and formation of gallbladder stones ,12,17Hypothyroidism and sphincter of OddiThe gastrointestinal tract goes into a hypoactive state in a pre-existing hypothyroid state. Thyrogenic hormones have a direct action on the smooth muscles which tends to relax the smooth muscles. Potassium channel blocker glibenclamide attenuates the thyroid hormone effect on smooth muscles and causes further relaxation Numerous studies -22 have As the process of stone formation in the gall bladder or common bile duct is considerably lengthy, there are very high chances that the process of formation of the stones has already been initiated before even detection of hypothyroidism and initiation of thyroxine supplementations. Thus it has often been stated that both subclinical and clinical hypothyroidism is significantly common in patients having gallbladder or even common bile duct stones. It has also been stated in the literature that findings show significant benefits concerning dyslipidemia, cardiac complications, and neuromuscular symptoms if the patients are treated with thyroxine supplementation, especially in subclinical cases.\u00a0Despite all these results from various studies, there are still questions about the mechanism behind the association of hypothyroidism and cholelithiasis though a significant correlation has been proved. The thyroid hormone has numerous effects on metabolism and also on cholesterol metabolism, which is frequently observed with the presence of deranged values of lipid profile in this kind of patient. In cases of hypothyroidism when the metabolism of cholesterol has been affected, the level of serum cholesterol rises leading to supersaturation in the bile ,26. The The serum thyroxine-secreting hormone values in cholelithiasis patients have been studied.\u00a0The patients with gallbladder and CBD stones will have an equal increase in the prevalence of subclinical hypothyroidism if the absence of T4 affects the cholesterol metabolism and hepatic biliary secretion. In some of the studies, it was observed that CBD stone patients have twice the chance of being diagnosed with hypothyroid as compared to patients with gallbladder stones ,29. ThisIn a randomized prospective study, Mulita et al. also found that a total of 18 out of 316 (5.7%) patients who underwent laparoscopic cholecystectomy because of cholelithiasis had hypothyroidism [In summary, research studies suggest that there's a significant association between clinical or subclinical hypothyroidism state and the development of common bile duct stones. Particularly the modifications in the functioning of the sphincter of Oddi underline the association between cholelithiasis and hypothyroidism. Though the prevalence of the stones in the common bile duct in cases of hypothyroidism is more as compared to the ones having isolated gall bladder stones, the correlation of hypothyroidism with gall bladder stones is still significant and needs to be focused upon. There is evidence pointing in the direction of the phenomenon of reduction of bile flow because of the absence or scarcity of thyroid hormone. The thyroid hormone acts on the intranuclear receptors and this has an effect on almost all the nucleated cells in the human body, showing widespread effects. 
The state of raised thyroid stimulating hormone causes numerous effects, ultimately leading to the precipitation of gallstone disease. There are many studies enlightening the need for screening hypothyroid patients for possible gallbladder stones. Apart from the increased cholesterol concentration and reduced bile flow rate, the pro-relaxant action of thyroid hormones at the sphincter of Oddi plays a critical function in the improvement of the disease. The stone formation in particular initiates during the early untreated phase of the hypothyroid state, which continues to the level of maturation even after the initiation of treatment for hypothyroidism. Cardiovascular, neuromuscular, and dyslipidemia pathologies can be prevented through thyroxine supplementations. Most importantly, at the same time as treating sufferers with common bile duct stones or microlithiasis, clinicians need to preserve in thought the possible synchronous existence of hypothyroidism and for that reason, the thyroid workup needs to be done as a routine practice."} +{"text": "I read with interest the article by DiFrancisco-Donoghue et al., a randomized cross-over trial carried out on healthy subjects and with the aim of reducing post-anaerobic metabolites (blood lactate accumulation), through an osteopathic manual approach for the stimulation of lymphatic circulation . I wouldIn the text we can read that great importance is given to lactic acid; the latter is considered to be one of the most important causes for contractile fatigue of the skeletal muscle and for the onset of delayed muscle soreness (DOMS) . During Lactic acid postpones the onset of peripheral muscle fatigue. During muscle contraction in an intense anaerobic regime, there is an increase in the extracellular potassium concentration, in which increase induces a decrease in the excitability of the sarcolemma , 7. LactAnother reflection concerns the venous/lymphatic return. Osteopathy teaches that the diaphragm is an extraordinarily important area for bodily health . During The diaphragm collects all the lymph from the viscera and abdominal muscles, from the lower limbs and in a small portion collects the parietal pleural lymph from the lower lung lobe . The ent"} +{"text": "The PSIC study (Prospective Study of Intensivists and COVID-19) monitored the intensivists working in one of the two COVID-19 hub hospitals in Central Italy over 2 years from April 2020. This study showed how mental health varies in relation to the stressors posed by the different pandemic phases.In 4 surveys corresponding to the 4 pandemic waves, the intensivists were invited to indicate changes in work activity and measure their state of mental health using standardized questionnaires administered via SurveyMonkey.During the pandemic there was a change in occupational stressors that led to insomnia, anxiety, depression, burnout, job dissatisfaction, unhappiness and intention to quit. The predominant stressors in the first wave were fear of unprotected exposure, distrust of safety measures, and compassion fatigue from having to inform relatives of the adverse outcome of treatment. In the second and third waves the workload, the monotony due to always following only one type of patient, the isolation, and the lack of time to meditate were the more relevant factors. The fourth wave added the stress deriving from interacting with anti-vax patientsSpecific prevention strategies have been developed and applied for each of the stress factors identified. 
Excessive workload and lack of time for meditation, which originated from a lack of staff, were remedied with extraordinary temporary hires. The management of compassion fatigue and relations with anti-vax people were addressed with specific policies and training. The monotony and isolation in COVID-19 wards can only be resolved through employee turnover in ordinary departments. Organizational and financial efforts are necessary to protect the health of intensivists during a pandemic.\u2022\u2002Monitoring of critical care workers during the pandemic waves indicated the preventive measures necessary to ensure their mental health and quality of care.\u2022\u2002Protecting healthcare workers is a priority."}
+{"text": "Rape resulting in pregnancy warrants special attention due to the associated psychosocial and physical adversities. There are no guidelines for the management of teenage pregnancies resulting from rape in Sri Lanka. This case series aims to describe the experience of four teenagers who became pregnant as a result of rape in Sri Lanka. This is a case series of 4 pregnant teenagers who became pregnant as a result of rape. This case series highlights the deficiencies in services in Sri Lanka, such as the lack of a legal framework to terminate pregnancy following rape, delays in the legal procedure leading to prolonged institutionalization of pregnant teenagers, not giving the teenage mothers the choice of breastfeeding, and a lack of awareness about the psychological consequences of rape and teenage pregnancy. Formulating a national guideline on managing rape-related pregnancy in teenagers in Sri Lanka, with the involvement of all stakeholders, is a need of the hour. No significant relationships."}
+{"text": "This Research Topic aims to gather the Proceedings of the \u201cProteine 2022\u201d meeting organized by the Protein Group of the Italian Society of Biochemistry (SIB). This Research Topic is focused on protein-protein and protein-ligand interactions in order to obtain new relevant knowledge about the discovery, design, and rational development of drugs for the treatment of particular diseases. The manuscript of Iacobucci et al. focuses on the viral glycoprotein Spike S, which is involved in the recognition and interaction of viral particles with surface host receptors as well as in the fusion of the viral capsid with the host cell membrane. The authors investigated the Spike S1 subunit interactomes, looking for potential additional Spike human receptors that could possibly be involved in other intracellular processes in non-pulmonary systems. Shotgun proteomics approaches were used for protein identification and several bioinformatics approaches were performed for functional data analysis. The proteomic experiments allowed the authors to identify many host protein targets and ascertain the role of the S1 domain in all steps of the interaction between the pathogen and the different host cell lines used in this research. Their data suggest a role for S1 not only in receptor recognition but also in vesicle formation; an interplay between the S1 domain and human proteins associated with energetic cell metabolism in almost all analysed cell lines has been highlighted. In the manuscript of Dos Santos et al., the authors showed a strong correlation between enhanced in vivo proteolysis and mortality in patients with septic shock caused by a bacterial infection as well as by COVID-19-induced bacterial superinfection.
They used a peptidomic-based approach in order to estimate the magnitude of in vivo proteolysis by assessing the abundance and count of peptides in the plasma sample. The analysis of the circulating proteome and peptidome was reinforced by the quantitation of the proteolytic activity of proteases that are activated during the acute form of the disease in order to obtain more information on the role of proteases in the regulation of the balance between coagulation and fibrinolysis in the systemic intravascular proteolysis observed during acute illness. The correlation between proteolytic activity and protease expression patterns in plasma during bacterial superinfection can provide pathophysiologically and clinically valuable information that could be useful to anticipate the signs of possible coagulation disorders and aid in early diagnosis providing useful devices for a timely therapy.In this Research Topic new important data regarding the Coronavirus Disease 2019 (COVID-19) caused by the severe acute respiratory syndrome coronavirus 2 (SARS-Cov-2) are presented. The manuscript of This Research Topic also reports data on protein-protein and protein-ligand interactions related to the search for antibacterial drugs and the understanding of key cellular signaling pathways in cell migration.Kabongo et al., deals with the drug discovery related to the enzyme malate:quinone oxidoreductase (MQO), a peripheral membrane protein essential for the survival of several bacteria and parasites such as C. jejuni which is the most common cause of bacterial foodborne diseases worldwide. The increasing incidence and antibiotic resistance of this bacterial infection require the adoption of new strategies in order to counteract the infection spread. The MQO enzyme catalyses the oxidation of malate to oxaloacetate and is also involved in the reduction of the quinone pool in the electron transport chain thereby contributing to cellular bioenergetics. For these reasons the enzyme is an attractive drug target as it is not conserved in mammals. Authors optimized the overexpression and purification of MQO from Campylobacter jejuni (CjMQO), conducted an optimization of CjMQO assay conditions with a determination of enzyme steady-state kinetic parameters and reaction mechanism and finally, investigated the inhibition mechanism of two CjMQO inhibitors of plant origin, ferulenol and embelin. These molecules strongly inhibit the CjMQO enzymatic activity as well as the growth of C. jejuni, and hence offer promising perspectives as an antibacterial tool.The manuscript of Vacchini et al. applies a quantitative SILAC-based phosphoproteomic analysis coupled to a systems biology approach with network analysis to investigate the signaling pathways downstream to ACKR2, an atypical chemokine receptor which is structurally uncoupled from G proteins and is unable to activate signaling pathways used by conventional chemokine receptors to promote cell migration. ACKR2 regulates inflammatory and immune responses by shaping chemokine gradients in tissues via scavenging inflammatory chemokines. The analysis was carried out on a HEK293 cell model expressing either ACKR2 or its conventional counterpart CCR5. The model was stimulated with the common agonist CCL3L1 for short (3\u00a0min) and long (30\u00a0min) durations. As expected, many of the identified proteins are known to participate in conventional signal transduction pathways and in the regulation of cytoskeleton dynamics. 
However, the analyses revealed unique phosphorylation and network signatures, suggesting roles for ACKR2 other than its scavenger activity, providing an unprecedented level of detail in chemokine receptor signaling and identifying potential targets for the regulation of ACKR2 and CCR5 function. In addition to the contributions to the Research Topic, the Congress itself had good participation, offering an overview of the state of the art of Italian research in the field of proteins. Furthermore, we were particularly pleased to see the significant participation of many younger generation scientists presenting their work both as posters and in oral communications."}
+{"text": "The Scottish Burden of Disease (SBoD) Study monitors the contribution of over 100 diseases and injuries to the population health in Scotland, in the context of disability-adjusted life years (DALYs). Providing robust estimates of burden is the first step in identifying areas of prevention that could have the biggest impact on health, including identification of modifiable risk factors and changes in the underlying risk factor prevalence. Our aim was to estimate DALYs for 2019, to describe the current burden in Scotland and as a baseline for future burden scenarios. The SBoD 2016 study estimated the burden using routine data and patient-level record linkage. For this update, years lived with disability were estimated using 2016 age-sex-deprivation specific rates, assuming no change in disease prevalence from 2016, but taking account of changes to the population structure. Years of life lost were calculated from 2019 observed deaths and the application of the Global Burden of Disease (GBD) aspirational life table. Population attributable fractions (PAFs) were sourced from the GBD 2019 and risk factor prevalence from the Scottish Health Survey. In 2019 the leading causes of burden were ischaemic heart disease (IHD), Alzheimer's/other dementias, lung cancer, drug-use disorders and cerebrovascular disease, representing over a quarter (27%) of the total DALYs in Scotland. Application of PAFs shows that a proportion of the burden for each of these causes can be attributed to modifiable risk factors. IHD continues to be the leading cause of health burden in Scotland in 2019. However, recent years show an increase in the burden of social causes and diseases affecting the ageing population.
Application of PAFs demonstrate the importance of continuing to monitor both the burden of disease in Scotland and the prevalence of risk factors, to provide robust evidence for planning of local and national services.\u2022\u2002The Scottish Burden of Disease continues to monitor the population health landscape of Scotland.\u2022\u2002Ischaemic heart disease continues to be the leading cause of burden in Scotland."} +{"text": "Adaptive robotics achieved tremendous progress during the last few years , see Although the first realizations of the idea , 2020 arDeveloping intelligent robots capable of acquiring their skills autonomously in interaction with the environment is one of the most ambitious objectives of science. The challenges which are still open are substantial but appear feasible in light of the progresses achieved in the last years."} +{"text": "According to the different characteristics of patients and cervical lymph node metastasis of oral and oropharyngeal cancer, the marginal mandibular branches of facial nerves were treated by different surgical procedures, and the safety and protective effects of different surgical procedures were investigated.One hundred ninety-seven patients with oral and oropharyngeal cancer satisfying the inclusion criteria were selected. According to the different characteristics of patients and cervical metastatic lymph nodes, three different surgical procedures were used to treat the marginal mandibular branches of the facial nerve: finding and exposing the marginal mandibular branches of the facial nerves at the mandibular angles of the platysma flaps, finding and exposing the marginal mandibular branches of facial nerves at the intersections of the distal ends of facial arteries and veins with the mandible, and not exposing the marginal mandibular branches of the facial nerves. The anatomical position, injury, and complications of the marginal mandibular branches of the facial nerves were observed.P\u2009=\u20090.0184). The best protective effect was to find and expose the mandibular marginal branch of the facial nerve at the mandibular angle of the platysma muscle flap, and the injury rate was only 2.94%.The marginal mandibular branches of the facial nerves were found and exposed at the mandibular angles of the platysma flaps in 102 patients; the marginal mandibular branches of facial nerves were found and exposed at the intersections of the distal ends of the facial arteries and veins with the mandibles in 64 patients; the marginal mandibular branches of facial nerves were not exposed in 31 patients; among them, four patients had permanent injury of the marginal mandibular branches of the facial nerves, and temporary injury occurred in seven patients. There were statistically significant differences in the protection of the mandibular marginal branch of the facial nerve among the three different surgical methods (The three different surgical procedures were all safe and effective in treating the marginal mandibular branches of the facial nerves, the best protective effect was to find and expose the mandibular marginal branch of the facial nerve at the mandibular angle of the platysma muscle flap. Comprehensive cervical lymphadenectomy includes all the lymphoid and adipose tissues that can be dissected in classical radical cervical lymphadenectomy, and whether the internal jugular veins, sternocleidomastoid muscles, and accessory nerves are preserved does not affect whether it is classified as comprehensive . 
RadicalSurgeries involving the facial nerves have a high risk of iatrogenic injuries. The marginal mandibular branches of the facial nerves are extremely special in nature. Once an injury occurs, the complications will be devastating and functions will be lost, which will involve the patient\u2019s facial aesthetics \u201311, causThe Ethical Committee of the Longgang E.N.T Hospital approved clinical samples for research purposes (NO. 2022\u20130001), and this study conformed to the principles contained in the World Medical Association Declaration of Helsinki. Informed consent was requested as anonymous specimens and was given by all human participants in this study.The inclusion criteria include \u2460 age\u2009>\u200918\u00a0years; \u2461 definitive pathological diagnosis of oral and oropharyngeal cancer requiring cervical lymphadenectomy with submandibular gland resection according to the National Comprehensive Cancer Network guidelines or the results of multidisciplinary care discussion; \u2462 no significant invasion of the marginal mandibular branches of the facial nerves before operation; \u2463 no serious cardiovascular diseases, diabetes, chronic respiratory diseases, cerebrovascular diseases, etc.; \u2464Karnofsky score\u2009\u2265\u200980 points; \u2465 an estimated survival of more than one year; \u2466 and patients who voluntarily participated and signed the informed consent form.The exclusion criteria include \u2460 cervical lymphadenectomy for oral and oropharyngeal cancer without the need for submandibular gland resection, \u2461 significant invasion of the marginal mandibular branches of the facial nerves before operation, \u2462 significant submandibular lesions, \u2463 a history of surgery or radiotherapy of the neck, \u2464 distant metastasis, and \u2465 unable to tolerate the surgery due to severe comorbidities.From January 2014 to June 2021, 197 patients with oral and oropharyngeal cancer, including 131 males and 66 females, aged 29 to 68\u00a0years, and with a median age of 54.6\u00a0years, were selected from the Head and Neck Department of Shenzhen Otolaryngology Research Institute/Shenzhen Longgang Otolaryngology Hospital, Head and Neck Department of Gannan Medical University Affiliated Cancer Hospital, Department of Oral and Maxillofacial Surgery of the First Hospital of Qiqihar in Heilongjiang Province, and Department of Otorhinolaryngology-Head and Neck Surgery of First Affiliated Hospital of Gannan Medical University. There were 78 cases of tongue cancer, 35 cases of gingival cancer, 32 cases of buccal cancer, 28 cases of oral floor cancer, 24 cases of lingual root or oropharyngeal cancer, 16 cases at stage III, 114 cases at stage IVA, and 67 cases at stage IVB had permanent injury of the marginal mandibular branches of the facial nerves. Among them, one patient had an injury of the marginal mandibular branches of the facial nerves in whom the nerve was found and exposed at the mandibular angles of the platysma flaps; one patient had an injury of the marginal mandibular branches of the facial nerves in whom the nerve was found and exposed at the intersections of the distal ends of the facial arteries and veins with the mandible; and two patients had an injury to the nerve in whom the marginal mandibular branches of the facial nerves were not exposed.The temporary injury occurred in seven patients . 
Among them, two patients had an injury of the marginal mandibular branches of the facial nerves in whom the nerve was found and exposed at the mandibular angles of the platysma flaps; two patients had an injury of the marginal mandibular branches of the facial nerves in whom the nerve was found and exposed at the intersections of the distal ends of the facial arteries and veins with the mandible; and three patients had an injury to the nerve in whom the marginal mandibular branches of the facial nerves were not exposed.\u03c72\u2009=\u20097.9875, P\u2009=\u20090.0184). The best protective effect was to find and expose the mandibular marginal branch of the facial nerve at the mandibular angle of the platysma muscle flap, and the injury rate was only 2.94% , suggesting that the mandibular marginal branch of the facial nerve should be dissected as far as possible in the comprehensive neck lymph node dissection for locally advanced oral oropharyngeal carcinoma. The best protection effect was to find and expose the mandibular marginal branch of the facial nerve at the mandibular angle of the platysma muscle flap, and the injury rate was only 2.94%.In our multicenter retrospective study, according to the different characteristics of patients and cervical metastatic lymph nodes, as well as the proficiency of surgeons, we adopted three different surgical procedures to deal with the marginal mandibular branches of the facial nerves. For patients with larger and more lymph nodes in region Ib, the marginal mandibular branches of the facial nerves were found and exposed at the mandibular angles of the platysma flaps. For patients with larger and more lymph nodes in region IIa, the marginal mandibular branches of the facial nerves were found and exposed at the intersections of the distal ends of facial arteries and veins with the mandible. After finding the marginal mandibular branches of facial nerves in these two ways, the facial nerves were completely separated under direct vision, the proximal and distal ends of facial arteries and veins were disconnected, the submandibular glands were resected, and the lymphoid and adipose tissues in region Ib were cleaned. The dissection of the marginal mandibular branches of the facial nerves according to different characteristics of lymph nodes can avoid the inflammation and tissue adhesion caused by lymph nodes and protect the marginal mandibular branches of facial nerves to the greatest extent. For patients without obvious lymph nodes in region Ib or region IIa, the submandibular glands were removed, and the lymphoid and adipose tissues in region Ib were cleaned without exposing the marginal mandibular branches of the facial nerves, When cleaning the perivascular lymph nodes at the distal end of the facial artery and vein and ligating blood vessels to remove the submandibular gland, special attention should be paid to the position of the marginal mandibular branch of the facial nerve to avoid injury. However, there were statistically significant differences in the protection of the mandibular marginal branch of the facial nerve among three different surgical methods (Among the 197 patients with locally advanced oral and oropharyngeal cancer, the incidence of permanent injury of the marginal mandibular branches of the facial nerves was 2.03% and that of temporary injury was 3.55%. The extremely low incidence of injury of the marginal mandibular branches of the facial nerves verified our correct choices. 
We should conduct a sufficient evaluation before the operation, formulate a strict surgical plan, and flexibly choose the measures to protect the marginal mandibular branches of the facial nerves according to the specific situation of the primary lesion and lymph nodes as well as the surgeon\u2019s proficiency. When the marginal mandibular branches of the sectional nerves are not dissected, the glands should be pulled down slightly with instruments to keep a safe distance from the marginal mandibular branches for resection. When dissecting the marginal mandibular branches of the facial nerves, we should pay attention to its branches , 25. WitThe best protective effect was to find and expose the mandibular marginal branch of the facial nerve at the mandibular angle of the platysma muscle flap. Finding and exposing the marginal mandibular branches of the facial nerves at the mandibular angles of the platysma flaps was suitable for those with larger and more lymph nodes in region Ib. Finding and exposing the marginal mandibular branches of the facial nerves at the intersections of the distal ends of facial arteries and veins with the mandible was suitable for those with larger and more lymph nodes in region IIa. Not exposing the marginal mandibular branches of the facial nerves was suitable for those without obvious lymph nodes in region Ib or region IIa.In the comprehensive treatment of locally advanced oral and oropharyngeal cancer, individualized and precise treatment is required, and every detail in the treatment should be finely managed. Protecting the marginal mandibular branches of the facial nerves from injury will play an increasingly important role in the comprehensive treatment of locally advanced oral and oropharyngeal cancer. Intraoperatively, we should choose an appropriate method in exposing the marginal mandibular branches of the facial nerves according to the different characteristics of patients and cervical metastatic lymph nodes as well as the proficiency of surgeons."} +{"text": "The diagnosis of simple schizophrenia remains an unusual and controversial diagnosis today. The presentation of nonspecific symptoms shared by other nosological entities make differential diagnosis a challenge.The main objective of this case report is to review the diagnosis of simple schizophrenia and its differential diagnosis.Case report and literature review. We present the case of a 52-year-old man who was admitted to a medium stay unit for psychosocial rehabilitation with the diagnosis of simple schizophrenia after his debut at 49 years of age with clinical manifestations of progressive self-care abandonment and personality change.Given the psychosocial deterioration observed and lack of response to pharmacological and psychotherapeutic treatments, the possible diagnoses of dementia praecox and simple schizophrenia were considered. Several individual and family interviews, neuropsychological and projective tests were performed in order to define the diagnosis. The results revealed age-appropriate cognitive functioning and the absence of data suggestive of an underlying psychotic disorder. On the other hand, it was observed that the patient was able to establish some social relationships and participate in group activities in the medium stay unit. 
These findings suggest the influence of factors related to the socio-familial environment and cast doubt on the initial diagnostic hypothesis. The diagnosis of simple schizophrenia continues to present itself as a complex diagnosis that requires a careful review of the differential diagnosis. No significant relationships."}
+{"text": "Sessile plants must combat many challenging environments and conditions, of which drought poses one of the greatest threats. Despite noteworthy improvements in crop breeding and modern agricultural management practices, drought continues to pose the most serious challenge to agricultural production. The Research Topic (RT) presented herein aimed to address the gaps in our knowledge of how plants can effectively manage drought conditions and what elements are the most critical in this adaptation process. Wheat (Triticum aestivum L.) is an important cereal crop grown in semi-arid and temperate regions of the world. It supplies 20\u201330% of calories globally. The positive regulation of the nucleoredoxin gene TaNRX1 was found to confer drought tolerance in transgenic bread wheat plants. The results presented for a multi-locus genome-wide association study of grain weight-related traits under rain-fed conditions in wheat can significantly improve our knowledge in this area. The paper published on a genome-wide association study can undoubtedly aid in the identification of novel quantitative trait nucleotides for water-soluble carbohydrate accumulation in wheat plants under drought stress. The transcriptomic analysis of wheat plants revealed a hormone-mediated balance occurring during the rehydration process of plants. One of the most exciting and interesting pieces of research was the association mapping study on drought tolerance in exotic Ethiopian durum wheats.
In contrast, a global meta-analysis was presented on the environmental impact of drought on the yield and protein content of wheat . All papers presented in the current Research Topic were based on a wide and diverse range of modern technologies, scientific approaches and research ideas aimed toward achieving a better understanding of all aspects of plant responses to drought to increase the overall tolerance of bread wheat varieties.The goal of the presented Research Topic was to show the current level of research and progress in the study of plant adaptation and tolerance to drought in wheat. This has encompassed research from a range of scales, from the whole plant to the molecular level, including gene network studies between tolerant and sensitive plants. A study of dynamic regulatory gene and protein networks was carried out in wheat roots under drought using a comparative transcriptomics approach (DP and YS prepared the manuscript. Y-GH and NG along with DP and YS edited the manuscript. All editors have read and agreed with manuscript.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher."} +{"text": "TO THE EDITOR They were expanded in a 1999 publication and updated in two publications by Sweeney et al.,,4 in the same Pediatric Section of APTA, reviewing functions, competencies, theoretical structures, emerging literature database and recommendations of evidencebased practices of neonatal physical therapy. These updates reflect the needs of contemporary neonatal physical therapy practice, respected by the authors in the preparation of the First Brazilian Recommendation of Physiotherapy for Sensory-Motor Stimulation for Newborns and Infants in Neonatal Intensive Care Units. Historically, the roles and skills required for neonatal physical therapists were developed by the Pediatric Section of the American Physical Therapy Association (APTA) and were first published in 1989. The physical therapy approach should include evidence-based interventions and focus on care for the baby and his or her family.,4To work in the neonatal intensive care unit (ICU), the physiotherapist needs specific training and refined skills in the evaluation, interpretation and modification of his or her conduct or continuous resequencing of physical therapy procedures aimed at infants with structural, physiological and behavioral vulnerabilities, which predispose them to instability during routine procedures.-11Other relevant international publications also support the evidence-based practice of physical therapy in the neonatal ICU to provide adequate care for developing infants and families in the neonatal ICU on a continuous basis. aims to describe the methods of sensorimotor stimulation and their levels of scientific evidence, and suction is a positive finding described in some of the included studies. 
The document did not aim to propose or teach physical therapy protocols or any other professional area, and there was no intention to simplify or maximize any intervention or finding, in addition to what was found in the scientific studies included in the recommendation (see inclusion criteria for the study).It is understood that the experts who participated in the development of the document have unquestionable technical and scientific capacity to prepare any document based on scientific evidence in the field of neonatal intensive care. In addition to the questions, the First Brazilian Physical Therapy Recommendation for Sensory-Motor Stimulation for Newborns and Infants in a Neonatal Intensive Care UnitAll authors work in collaboration with speech therapists in routine care in the neonatal ICU and reinforce the esteem of the professional area and colleagues."} +{"text": "The accurate estimation of the mass and center of gravity (CG) position is key to vehicle dynamics modeling. The perturbation of key parameters in vehicle dynamics models can result in a reduction of accurate vehicle control and may even cause serious traffic accidents. A dual robust embedded cubature Kalman filter (RECKF) algorithm, which takes into account unknown measurement noise, is proposed for the joint estimation of mass and CG position. First, the mass parameters are identified based on directly obtained longitudinal forces in the distributed drive electric vehicle tires using the whole vehicle longitudinal dynamics model and the RECKF. Then, the CG is estimated with the RECKF using the mass estimation results and the vertical vehicle model. Finally, different virtual tests show that, compared with the cubature Kalman algorithm, the RECKF reduces the root mean square error of mass and CG by at least 7.4%, and 2.9%, respectively. Traffic accidents cause a large number of casualties every year and precise vehicle motion control can effectively reduce the occurrence of traffic accidents . Active Recursive least squares (RLS) with forgetting factors is a popular methodology employed for mass estimation. Zhang et al. designedSimilar to mass estimation, the estimation of the CG position is also a hot topic of research. Daniel et al. proposedThe above approaches mainly estimate the mass and CG position separately and do not regard the influence of unknown measurement noise. In addition, state-of-the-art Kalman filtering algorithms were demonstrated to enhance the accuracy of vehicle state estimations, such as the cubature Kalman filter (CKF) algorithm . TherefoThis paper aims to propose a fusion estimation scheme to achieve the estimation of mass and CG position. Furthermore, we designed a RECKF estimator to reduce the effects of unknown noise on the performance of the estimation. Then we demonstrate the effectiveness of the proposed method through comparative experiments. 
The remainder of the paper is organized as follows: Considering that the estimation of mass and CG position does not involve the control of four-wheel drive force distribution, the vehicle model is simplified into a longitudinal motion model, and its dynamics model is shown in With a known distributed drive electric vehicle wheel torque and angular speed, the longitudinal tire forces are given byWithout considering the effect of the ramp driving conditions during vehicle driving, the vehicle longitudinal dynamics equations are given byThe meanings of specific vehicle model parameters are shown in The longitudinal force generated on each tire depends on the longitudinal slip and the normal force applied to the tire. In the low slip region, the longitudinal force generated by a single tire is proportional to its longitudinal slip or the linear part of the friction curve of the normal force. For all-wheel drive vehicles, the linear relationship between front and rear wheel slip and longitudinal forces can be expressed aslowed in .In order to perform iterative estimations using discrete measurement signals and the RECKF, we need to transform Equations (1)\u2013(7) into the form of a discrete state space.For mass estimation, As shown in (1)Initialization:E means to perform the mathematical expectation calculation, and The conventional CKF method enhances the accuracy of the state estimation but does not account for the impact of unknown statistical properties of the noise. To further improve the nonlinear fit of the CKF, an embedded CKF is used first for the vehicle state estimation and thisi\u03d5 are given by.n is the dimension of The embedded cubature sampling points given in .(2)Time prediction:S represents a diagonal matrix, and Singular value decomposition of Evaluate the embedded cubature pointsEvaluate the propagated embedded cubature pointsEvaluate (3)Measurement prediction:Singular value decomposition of Evaluate the embedded cubature pointsCalculate the propagated embedded cubature points of the measurement vectorEvaluate The gain matrix According to the relevant conclusions in the literature , the errTo verify the effectiveness of the RECKF, a co-simulation platform of Carsim and Matlab/Simulink was established to conduct simulation experiments under two different conditions of acceleration and deceleration. The superiority of the RECKF is further verified by comparing it with the traditional CKF algorithm. The vehicle model parameters are listed in The initial vehicle velocity is set to 1 km/h, the throttle opening is 40%, the process noise covariance is known, and the measurement noise is unknown. During the whole estimation process, the vehicle speed and longitudinal acceleration are shown in It can be seen that the vehicle speed varies with the magnitude of acceleration, and the real measurement of the sensor is simulated by adding Gaussian white noise to the acceleration. The longitudinal force curve of the vehicle is shown in The results of the different methods used to estimate the mass are presented in The initial vehicle speed is 80 km/h and the braking operation is applied to the vehicle after 1 s. The process noise covariance statistics are known, and the measurement noise statistics are unknown. 
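The explicit formulas for the filter steps listed earlier in this record were lost in extraction, so the following Python sketch shows one generic third-degree cubature Kalman filter predict-update cycle of the kind the mass estimator builds on, together with a toy two-state (speed, mass) longitudinal model. The vehicle constants, the treatment of mass as a random-walk state, and the plain (non-embedded, non-robust) cubature rule are simplifying assumptions for illustration; they are not the paper's RECKF equations.

```python
import numpy as np

# Illustrative vehicle constants (not the paper's values)
RHO, CD, AF, F_ROLL, G, RW, TS = 1.206, 0.30, 2.2, 0.015, 9.81, 0.325, 0.01

def f_process(x, total_wheel_torque):
    """State x = [speed v (m/s), mass m (kg)]; mass evolves as a random walk."""
    v, m = x
    fx = total_wheel_torque / RW                        # summed tyre longitudinal force
    ax = (fx - 0.5 * RHO * CD * AF * v**2) / m - G * F_ROLL
    return np.array([v + TS * ax, m])

def h_measure(x, total_wheel_torque):
    """Measurement model: measured speed and longitudinal acceleration."""
    v, m = x
    fx = total_wheel_torque / RW
    ax = (fx - 0.5 * RHO * CD * AF * v**2) / m - G * F_ROLL
    return np.array([v, ax])

def ckf_step(x, P, z, f, h, Q, R):
    """One predict-update cycle of a third-degree cubature Kalman filter."""
    n = x.size
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])   # cubature directions
    w = 1.0 / (2 * n)                                      # equal weights

    # --- time update -------------------------------------------------------
    S = np.linalg.cholesky(P)
    pts = x[:, None] + S @ xi
    prop = np.column_stack([f(p) for p in pts.T])
    x_pred = w * prop.sum(axis=1)
    dX = prop - x_pred[:, None]
    P_pred = w * dX @ dX.T + Q

    # --- measurement update ------------------------------------------------
    S = np.linalg.cholesky(P_pred)
    pts = x_pred[:, None] + S @ xi
    zpts = np.column_stack([h(p) for p in pts.T])
    z_pred = w * zpts.sum(axis=1)
    dZ = zpts - z_pred[:, None]
    dX = pts - x_pred[:, None]
    Pzz = w * dZ @ dZ.T + R
    Pxz = w * dX @ dZ.T
    K = Pxz @ np.linalg.inv(Pzz)                           # filter gain
    return x_pred + K @ (z - z_pred), P_pred - K @ Pzz @ K.T
```

In use, the models would be wrapped for the current torque input, e.g. `ckf_step(x, P, z, lambda s: f_process(s, tq), lambda s: h_measure(s, tq), Q, R)`; the embedded cubature rule and the robust weighting described in the record would then replace the plain rule and the fixed R.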
The estimated process vehicle speed and longitudinal acceleration are shown in The results of the different methods used to estimate the mass are presented in To further demonstrate the joint estimation effect, the CG position estimation curve under acceleration conditions is shown in In this paper, a novel joint estimation scheme is proposed to achieve the estimation of mass and CG position. This framework contains two RECKF estimators to identify mass and CG respectively, where the RECKF is a new estimator that combines robust filtering and an ECKF to suppress the influence of unknown noise. The experimental results of the virtual tests show that the proposed estimation scheme can achieve a simultaneous estimation of multiple parameters with high estimation accuracy. On the other hand, the RECKF can suppress the influence of unknown noise on the estimation accuracy. The proposed method can be used not only for passenger vehicles but also for commercial vehicles or intelligent vehicles. In our study, the effect of road slope was not considered, and the fusion estimation of mass, CG position, and slope will be carried out in the next step to further improve the identification accuracy of the parameters. Due to limited resources in some of the objective conditions, we have not conducted real vehicle experiments. We will conduct real vehicle experiments in the future when equipment and space are available."} +{"text": "In this paper, we implement an automatic modeling method for narrow vein type ore bodies based on Boolean combination constraints. Different from the direct interpolation approach, we construct the implicit functions of the hanging wall and foot wall surfaces, respectively. And then the combined implicit function is formed to represent the complete ore body model using the Boolean combination constraints. Finally, the complete ore body is obtained by Boolean operation of the hanging wall and foot wall surfaces. To model complex vein surfaces, some modeling rules are developed to allow the geological engineers to specify vein thickness constraints and vein boundary constraints. The method works for narrow vein type ore bodies which are large in two dimensions and narrow in the third. Taking the implicit function of radial basis functions interpolation as an example, several experiments are carried out by using the real geological sampling data of the mines. The experimental results show that the method is suitable for the modeling of narrow vein type ore bodies. Although most spatial interpolation methods can be applied to implicit modeling, considering the superiority of interpolation extrapolation and performance, the spatial interpolation methods based on kriging and radial basis functions are widely used in geological modeling.The implicit modeling methodAt present, for the geological modeling with complex geometry shape features, the modeling effect of the existing implicit modeling methods often cannot satisfy the actual application requirements of mines. One of the most important problems is the lack of constraint rules of implicit modeling for different types of geological bodies. Although the 3D model can be controlled by adding more interpolation constraints, it will greatly affect the automation of the implicit modeling method. In addition, it is often difficult to construct the manual interpolation constraints in three-dimensional space.16. Actually, in some cases, the 2.5D vein modeling is analogous to the coal seam modeling17. 
However, most existing modeling methods for narrow vein type surfaces rely on two-dimensional methods to estimate the thickness of the model. The interpolation constraints constructed from sparse and uneven geological data often cannot represent complex geological conditions, resulting in significant differences between the implicit modeling results and realistic geological expectations. This greatly limits the wide application of implicit modeling software in actual mines. As a classic example, the modeling and grade estimation of vein type ore bodies with small thickness is an important challenge in practical applications. The narrow vein type geological body has the characteristics of thin thickness and layered distribution, can be regarded as a thin stratified model composed of hanging wall and foot wall surfaces, and its corresponding geological sampling data have obviously sparse and uneven characteristics. As vein type ore bodies are complex and narrow in one dimension, it is difficult and time consuming to interpolate valid and faithful models by constructing manual interpolation constraints using the traditional interpolation methods. For the implicit function interpolation, if the hanging wall sampling data and the foot wall sampling data are not interpolated separately, the interpolation result will be extremely poor. For the implicit surface reconstruction, the average thickness of vein type ore bodies is generally far less than the surface reconstruction accuracy (the size of the cube); if the classical marching cubes methods are used to extract the isosurface directly, the reconstruction result may not recover the realistic geometry shape of the target implicit function. Therefore, in the process of implicit modeling for vein type ore bodies, the specific shape features should be considered to guide both the interpolation and reconstruction processes. In general, implicit modeling methods control the geometry shape of the model by constructing interpolation constraints. However, for narrow vein type geological bodies, if the traditional modeling method is used to construct interpolation constraints directly, the extrapolated interpolation result may be quite different from the realistic shape of the model. The small thickness of narrow vein type geological bodies makes both implicit function interpolation and implicit surface reconstruction very difficult, but the details of the relevant research have not been published for commercial reasons. We try to construct modeling rules based on interpolation constraints to deal with the problems of geological modeling with special geometry shape features. From the perspective of the model constraint effect, the interpolation constraints are used to control the local geometry shape features of the model, while the modeling rules are used to control the global geometry shape features of the model. For example, although the thickness of vein type ore bodies is very thin, this type of model can be regarded as the combination of a hanging wall surface (HW surface) and a foot wall surface (FW surface). Therefore, to avoid the potential issues of direct interpolation, a more realistic outcome will be achieved if the hanging wall and foot wall surfaces are interpolated separately and these interpolations are combined to form the vein model. 
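To make the separate-interpolation idea concrete before continuing, the sketch below shows one simple way to fit a radial basis function implicit interpolant to signed constraint values for a single wall; the Gaussian kernel, the omission of the usual polynomial drift term, and the dense solve are simplifying assumptions of mine, not the interpolation scheme actually used in the paper.

```python
import numpy as np

def fit_rbf_implicit(points, values, eps=25.0):
    """Fit s(x) = sum_i w_i * phi(||x - p_i||) to scattered constraints.

    points : (N, 3) constraint locations, e.g. hanging-wall samples plus
             points offset above/below them along local normals
    values : (N,) signed values (0 on the surface, +/- offset distance off it)
    eps    : kernel width; an arbitrary, data-scale-dependent choice here
    """
    def phi(r):
        # Gaussian kernel keeps the interpolation matrix positive definite;
        # production implicit modelling often uses biharmonic/thin-plate
        # kernels plus a low-degree polynomial, omitted here for brevity.
        return np.exp(-(r / eps) ** 2)

    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    w = np.linalg.solve(phi(dists), values)   # dense interpolation system

    def s(x):
        """Evaluate the implicit function at a single query point x of shape (3,)."""
        return phi(np.linalg.norm(points - x, axis=1)) @ w

    return s
```

The same routine would simply be run twice, once on the hanging wall constraints and once on the foot wall constraints, before the two resulting functions are combined as discussed next.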
It is worth noting that, as early researchers pointed out, mature commercial software has integrated vein surface modeling methods based on a similar idea for a long time. In this paper, we implement and verify the feasibility of a vein surface modeling method based on Boolean combination constraints, and we further analyze and discuss the geological rule constraints involved in vein surface modeling. The method no longer constructs a single implicit function directly from the geological sampling data. Instead, we construct the implicit function of the hanging wall surface and the implicit function of the foot wall surface separately by distinguishing the hanging wall and foot wall sampling data. The combined implicit function is then formed to represent the complete ore body model using the Boolean combination constraints. Moreover, for the reconstruction of the implicit surface, we no longer extract the isosurface of the combined implicit function directly. Instead, we reconstruct the hanging wall surface and the foot wall surface from the corresponding implicit functions, respectively. Finally, the complete ore body is obtained by a Boolean operation on the hanging wall and foot wall surfaces. The method works for narrow vein type ore bodies which are large in two dimensions and narrow in the third. Besides the interpolation constraints, some additional modeling rules should be developed to make the interpolated implicit function fit the unknown surface well. In this paper, we construct modeling rules that allow geological engineers to specify vein thickness constraints and vein boundary constraints, which is useful for modeling complex vein surfaces. For example, a minimum thickness can be specified to avoid potential surface mesh cross-overs. Based on the same idea, more novel modeling rules can be developed to improve the reliability and efficiency of the modeling results. Here, the sign of the implicit function values represents the internal and external position relationships of the ore body model in the mineralization domain. Without loss of generality, we agree that the function values of points outside the orebody are positive and the function values of points inside the orebody are negative. Note that this sign convention is different from the convention used by Cowan et al. The potential field function, and the relationship between the mineralization domain of the ore body model and the sign of the implicit function, are defined according to this convention. Several types of surfaces are defined in the process of modeling vein surfaces, including the hanging wall surface, the foot wall surface, and the mean surface. The hanging wall surface (HW surface) represents the surface on the upper side of the vein object, which is always above the mean surface. The foot wall surface (FW surface) represents the surface on the lower side of the vein object, which is always below the mean surface. The hanging wall and foot wall surfaces make up the whole model of the vein object. The mean surface is an intermediate surface obtained by interpolating the geological sampling data of the hanging wall and foot wall surfaces; it is used as a reference surface of the hanging wall and foot wall surfaces to construct combination constraints. 
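Under this sign convention (positive outside, negative inside), Boolean combinations of implicit fields are commonly realized as pointwise minima and maxima of the signed values. The sketch below illustrates that generic signed-distance-field idea only; whether the paper uses exactly these operators or a smoothed variant is not stated here, so the formulation should be read as an assumption.

```python
import numpy as np

def combine_intersection(f_a, f_b):
    """Boolean intersection under a negative-inside convention: a point is
    inside the combined body only if it is inside both inputs, which is the
    pointwise maximum of the two signed values."""
    return lambda x: np.maximum(f_a(x), f_b(x))

def combine_union(f_a, f_b):
    """Boolean union under a negative-inside convention (pointwise minimum)."""
    return lambda x: np.minimum(f_a(x), f_b(x))

# Hypothetical usage: if f_hw is negative on the ore side of the hanging wall
# and f_fw is negative on the ore side of the foot wall, then
# combine_intersection(f_hw, f_fw) is negative exactly in the slab between the
# two walls, i.e. inside the vein.
```

A minimum-thickness rule of the kind mentioned above could, for example, be imposed by offsetting one of the fields by the required thickness before combining them; again, this is an illustration rather than the paper's exact constraint.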
In this paper, the basic strategy of automatic modeling method for narrow vein ore bodies is to sample , interpolate (implicit function) and reconstruct (mesh model) the hanging wall surface and the foot wall surface of the vein object, respectively. The method is mainly composed of the following steps, and the overall flow chart is shown in Fig.\u00a0The basic idea of the vein surface modeling is to combine the hanging wall implicit function and the foot wall implicit function based on Boolean combination constraintsStep 1: According to the geometry shape features of geological sampling data, construct the sampling points of the hanging wall surface and the sampling points of the foot wall surface, respectively.Step 2: The implicit modeling method based on radial basis function interpolation is used to obtain the interpolation constraints of the hanging wall and foot wall surfaces.Step 3: By solving the interpolation equations separately, the hanging wall implicit function and the foot wall implicit function are obtained. The implicit functions will be combined to construct modeling rules satisfying geometry shape features of vein object.Step 4: According to the Boolean combination constraints based on the signed distance field, the vein thickness constraints and the vein boundary constraints can be constructed, so that the geological engineers can adjust the modeling results according to the prior geological rules.Step 5: The implicit surface reconstruction method is used to extract the hanging wall isosurface and the foot wall isosurface, respectively.Step 6: After combining the hanging wall and foot wall surfaces using the polygon Boolean operation method, the complete ore body model is obtained.The combined implicit function The combined implicit function The combined implicit function The combined implicit function The combination constraints of implicit function fields represent the constraints constructed by combining the implicit function fields according to the idea of Boolean combination operations between fields. Based on the idea of signed distance field, the combination of implicit functions can be regarded as the combination of signed distance fields. Taking two signed distance implicit functions To obtain the implicit functions representing the hanging wall and foot wall surfaces, it is necessary to obtain the sampling data of the hanging wall and foot wall. According to the geometry shape features of geological sampling data, extract the hanging wall sampling point set Taking the drillhole data as an example, to extract the sampling points of hanging wall and foot wall respectively, the approximate trend surface should be fitted according to all the sampling points. And then the sampling points are divided into hanging wall and foot wall points roughly according the signed distance values corresponding to the approximate trend surface. The signed distance values of hanging wall points are positive and the signed distance values of foot wall points are negative corresponding to the approximate trend surface, as shown in Fig.\u00a0Considering the superior extrapolation of the RBF interpolation method, the implicit modeling method based on RBF interpolation is used to convert the geological sampling data into interpolation constraints through the idea of signed distance field. Firstly, the sampling points in HW and FW surfaces28. 
Obtain the implicit function 32, which will not be repeated here.Solve the interpolation equations composed of the interpolation constraints of the hanging wall and foot wall surfaces respectively using the multilevel domain decomposition methodMean surfaceThe mean surface is a reference surface that fits the medial trend of the hanging wall surface and the foot wall surface. Without loss of generality, we agree that the interior in the process of reconstruction should be further studied."} +{"text": "Advancements in RNA sequencing technology in past decade have underlined its power for elucidating the brain gene networks responsible for various stressful factors, as well as the pathologies associated with both genetically determined neurodegenerative diseases and those acquired during the lifespan. As these exciting studies have continued to grow in recent years, we present a series of research papers reporting on progress and new findings based on the analysis of brain transcriptional activity. The brain transcriptome is the most evolved among tissues in terms of transcriptome plasticity and responses to various stimuli. In this Special Issue, we address a spectra of studies, including, but not limited to, animal models of social stress response and various brain-disease-related data/models, in which the identification of features in gene transcription profiles aids our understanding of the molecular mechanisms of socially significant disease development.Publicly available databases allow researchers to test hypotheses and uncover additional developmental mechanisms of a variety of neuronal degenerative disorders that are actively studied, but remain poorly understood and undefeated. A group of bioinformaticians from Lomonosov Moscow State University, using RNA sequencing data freely available from the Sequence Read Archive, tested a hypothesis about the role of adenosine-to-inosine mRNA-editing patterns in the development of Parkinson\u2019s disease (PD) . They anELF3 and the interaction between STAT1 and lncRNA ELF3-AS 1 in ASD development.Autism spectrum disorder (ASD) is a neurodevelopmental pathology that impedes patients\u2019 cognition, social skills, speech, and communication. In recent years, the prevalence of ASD has been steadily increasing; therefore, the identification of the molecular mechanisms underlying ASD occurrence and development is a socially important task. Since ASD is characterized by high heterogeneity and\u2014accordingly\u2014diverse etiology and clinical manifestations, the molecular mechanisms of its development have not yet been fully studied. Lee et al. present PRKCG gene encoding protein kinase C gamma (PKCgamma) [Spinocerebellar ataxias (SCAs) are a heterogeneous group of dominantly inherited ataxias characterized by progressive cerebellar atrophy mainly due to the dysfunction in and loss of Purkinje cells. Szilvia E. Mezey and colleagues used a SCA14 mouse model to investigate Spinocerebellar ataxia type 14 (SCA14), which is a rare variant of SCAs caused by missense mutations or deletions in KCgamma) . CompariIt is well known that human lifespans are full of different social conflicts, which can lead to the development of various neurological disorders. An animal model of chronic social confrontation shows that daily agonistic interactions lead male mice to form alternative patterns of social behavior depending on victories and defeats. 
This model of chronic social confrontation has proven to be an effective experimental approach in elucidating differences in the molecular mechanisms underlying the excitation of the brain neurons in chronically winning mice (winners) and chronically defeated mice (losers) compared with intact controls.This Special Issue presents the results of a comparative analysis of gene expression profiles in the midbrain raphe nuclei (MRNs) of control male mice compared with winners and losers. It is known that MRNs contain a large number of serotonergic neurons associated with the regulation of numerous types of psycho-emotional states and physiological processes. The article by Redina and colleagues focuses on an analysis of the co-expression of DEGs and Tph2-encoding tryptophan hydroxylase 2, the rate-limiting enzyme of the serotonin synthesis pathway . The resAnother article in this Special Issue discusses the contribution of the altered expression profiles of genes encoding the solute carrier (SLC) transporters, which may serve as markers of altered brain metabolic processes and neurotransmitter activities in psychoneurological disorders . The artAs a continuation of these studies, we include a report by the same group of authors on changes in the level of gene expression in the dorsal striatum in aggressive (winners) and aggression-deprived (AD) male mice . EarlierA study on the impact of social stress on brain plasticity is presented by a team of authors from the University of Illinois at Urbana-Champaign . Using aThis Special Issue presents the results of another experiment by the same group of authors, which aimed to study the impact of proinflammatory challenges caused by maternal immune activation (MIA) and postnatal exposure to drugs of abuse (morphine) on the prefrontal cortex molecular pathways . The ranAnother article in this Special Issue is devoted to the study of transcription profiles in the brain of Takifugu rubripes fish under hypoxia and normoxia to identify pathways involved in regulating brain metabolism under chronic hypoxic stress. The results of the work also provided new insights into the adaptive molecular mechanisms that arise when the brain responds to hypoxic stress .Long noncoding RNAs (lncRNAs) play an important role in the control of many physiological and pathophysiological processes. Differential hypothalamic expression of several lncRNAs was found to be associated with hypertension and the behavior/neurological phenotypes of ISIAH rats; this was characterized as a rat model for a stress-sensitive form of hypertension .In summary, the current Special Issue \u201cAdvances of Brain Transcriptomics\u201d uncovers the molecular mechanisms related to transcriptional changes that accompany the development of neurodegenerative diseases or those involved in response to various social or environmental stressors. This knowledge paves the way for the next steps of further research in this field, which will expand the boundaries of our understanding of the processes occurring in the brain under conditions of pathological changes or under the influence of adverse factors."} +{"text": "The blast-induced damage of a high rock slope is directly related to construction safety and the operation performance of the slope. Approaches currently used to measure and predict the blast-induced damage are time-consuming and costly. A Bayesian approach was proposed to predict the blast-induced damage of high rock slopes using vibration and sonic data. 
The relationship between the blast-induced damage and the natural frequency of the rock mass was firstly developed. Based on the developed relationship, specific procedures of the Bayesian approach were then illustrated. Finally, the proposed approach was used to predict the blast-induced damage of the rock slope at the Baihetan Hydropower Station. The results showed that the damage depth representing the blast-induced damage is proportional to the change in the natural frequency. The first step of the approach is establishing a predictive model by undertaking Bayesian linear regression, and the second step is predicting the damage depth for the next bench blasting by inputting the change rate in the natural frequency into the predictive model. Probabilities of predicted results being below corresponding observations are all above 0.85. The approach can make the best of observations and includes uncertainty in predicted results. Excavation of high rock slopes in many fields, such as transportation, hydraulic and hydropower, and mining engineering, usually involves blasting due to the high efficiency, reliable effectiveness, and low costs of blasting operations ,2,3. DurMeasurement techniques currently used to detect the blast-induced damage of high rock slopes can be naturally divided into two categories, direct measurements and non-direct measurements . As regaThe blast-induced damage correlates well with the peak particle velocity (PPV), and considerable effects have been made on predicting the blast-induced damage by developing a relationship between rock damage and the corresponding PPV , which hUnlike blasting vibration velocities that are easily influenced by external conditions, natural frequencies of rock masses are intrinsic characteristics and relatively simple to obtain without knowing near-field vibration data. Based on commonly recorded blasting vibration data, natural frequencies of rock masses can be extracted by diverse methods, such as Fourier spectra ,38, the In the present study, a Bayesian approach to predict the blast-induced damage of high rock slopes using vibration and sonic data was proposed. A relationship between the blast-induced damage and the natural frequency was firstly developed. The blast-induced damage was obtained through sonic tests and the natural frequency was extracted by picking PSD peaks of blasting vibration monitoring data. Based on the relationship and available vibration and sonic data, a predictive model of the blast-induced damage was established by undertaking a Bayesian linear regression. By inputting the change rate in the natural frequency into the predictive model, the blast-induced damage for the next bench blasting can be predicted. The proposed Bayesian approach was finally adopted in the right bank rock slope at the Baihetan Hydropower Station. 
The results demonstrated that the proposed approach is feasible and efficient.We can define the change rate in the longitudinal wave velocity of rock masses before and after blasting as:According to construction technical specifications on rock foundation excavating engineering of hydraulic structures, rock masses are considered to be severely damaged when As regards blasting of the cylindrical charge with infinite length in infinite rock masses, the cylindrical blasting source can be treated as a cylindrical cavity with a radius of Assuming that rock masses are homogeneous, isotropic, and linear elastic media, the radial displacement of rock masses at distance K can be written as:Based on Equation (2), the equivalent radial stiffness The mass per unit length Knowing the mass When Though Equation (6) was developed based on several idealized assumptions, the assumptions can be proved valid in the vicinity of the blastholes , where tBased on Equation (7) and available data of Excavation of high rock slopes follows the construction sequence from top to bottom. As the excavation advances, more and more on-site data from sonic tests and the blasting vibration monitoring at lower benches are progressively accumulated. In order to make the best of those accumulated data and update the predictive model in real time as new data are continually added, the Bayesian linear regression that can make full use of the prior knowledge and include the uncertainty of posterior parameters in predicted results was adopN is the number of data samples, ith element of the input variable ith element of the weight vector For a given dataset Equation (8) can be also written as The relationship between the predicted value For an input dataset In order to avoid overfitting in the maximum-likelihood estimation and control the model complexity, a prior distribution is defined as:According to the Bayesian theorem, the posterior distribution of the weight vector For a given test point If The predicted mean n-th bench blasting are divided into two major steps, namely establishing the predictive model and producing predicted results. For establishing the predictive model, blasting vibration monitoring and sonic test data from the first (n\u22121) bench blasting operations are firstly collected, among which the former data are used to identify natural frequencies of rock masses and the latter data are used to determine the damage depth. Then, the calculated natural frequencies n-th bench blasting are hence obtained just by inputting the change rate in the natural frequency n-th bench blasting into the predictive model. The above two major steps can be repeatedly conducted as the excavation of rock slopes advances. The procedures of the Bayesian approach have the advantages of introducing the prior information, considering the uncertainty, and improving the estimation as more data are collected.Procedures of the Bayesian approach to predict the blast-induced damage of high rock slopes using vibration and sonic data are illustrated in The Baihetan Hydropower Station lies in an asymmetrical V-shaped canyon which is between Ningnan County in Sichuan Province and Qiaojia County in Yunnan Province, located in the lower course of the Jinsha River, southwest China. The station has a total installed capacity of 16,000 MW and the dam is a double-curvature arch dam with a maximum height of 289 m, as shown in The excavation of the right bank high rock slope in the dam abutment was chose for study in this paper. 
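Before turning to the details of the case study, it may help to spell out the conjugate closed form behind the Bayesian linear regression step described above. The sketch below assumes a zero-mean isotropic Gaussian prior on the weights and a fixed Gaussian noise precision; these assumptions, the hyper-parameter values, and the hypothetical usage are mine and are not taken from the paper.

```python
import numpy as np

def bayesian_linear_regression(X, y, alpha=1.0, beta=25.0):
    """Closed-form Bayesian linear regression (conjugate Gaussian case).

    X     : (N, d) design matrix, e.g. columns [1, change rate of natural frequency]
    y     : (N,) observed damage depths
    alpha : prior precision of the weights (illustrative value)
    beta  : observation noise precision (illustrative value)
    """
    d = X.shape[1]
    S = np.linalg.inv(alpha * np.eye(d) + beta * X.T @ X)   # posterior covariance
    m = beta * S @ X.T @ y                                  # posterior mean

    def predict(x_new):
        """Predictive mean, standard deviation and a 90% interval at x_new."""
        mu = x_new @ m
        sd = np.sqrt(1.0 / beta + x_new @ S @ x_new)
        return mu, sd, (mu - 1.645 * sd, mu + 1.645 * sd)

    return m, S, predict

# Hypothetical usage with a two-column design matrix (not the Baihetan data):
# m, S, predict = bayesian_linear_regression(X, y)
# mu, sd, interval = predict(np.array([1.0, 0.12]))
```

The interval returned by predict corresponds to the kind of predictive uncertainty band expressed later by the variability of the Bayesian regression lines; as new bench-blasting data arrive, refitting with the enlarged dataset plays the role of the progressive Bayesian updating described above.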
Each bench height of the studied slope was designed to be 10 m. Blasting parameters used during the excavation of the slope were carefully determined based on a series of on-site experiments, and the detailed blasting parameters are summarized in Sonic tests before and after bench blasting were carried out to acquire the damage depth of each slope bench. The HX-SY04A sonic test system as presented in In cross-hole sonic tests, the longitudinal wave velocity of rock masses is obtained as:In single-hole sonic tests, the longitudinal wave velocity of rock masses is obtained as:In order to guarantee the accuracy of the sonic tests, two groups of sonic test holes with the diameter of 90 mm were bored. According to the Chinese code for blasting safety monitoring of hydropower and water resources engineering and design requirement, all sonic test holes extended about 6 m from the contour surface. Each group of sonic test holes were arranged in form of an equilateral triangle, whose edge lengths in the berm surface and the contour surface are about 1.8 m and 1.0 m, respectively. The typical layout of the sonic test holes before bench blasting is depicted in Typical results of the sonic tests before and after bench blasting are plotted in The blasting vibration monitoring during the bench blasting was implemented to further extract the natural frequencies of rock masses. The TC-4850 blasting vibration monitoring system as shown in Considering the reliability of the recorded vibration data and the safety of the monitoring system, the blasting vibration monitoring system was arranged at the toe of the upper slope bench, as shown in The power spectral density (PSD), which describes how the power of a signal or time series is distributed over the frequency, of the recorded blasting vibration data was used to extract the natural frequencies of rock masses before and after bench blasting. The PSD of a signal Considering a window of According to the previous derivation of the relationship between the damage depth and the natural frequency, the radial motion of rock masses are closely related to the blast- induced damage and hence longitudinal blasting vibration velocities were employed to extract the natural frequencies of rock masses. The typical PSD illustrations of the longitudinal velocities for the monitoring point at EL. 804 m are plotted in It is important to clarify the impact of the lower bench blasting on the damage state of the upper remaining rock masses, because the blasting vibration monitoring system for the current bench blasting was arranged at the toe of the upper slope bench. Therefore, repeated sonic tests of the same remaining rock masses were separately conducted after the adjacent and lower bench blasting. The typical results of the repeated sonic tests and their differences are presented in R of the linear fitting equation calculated through the longitudinal velocities is the largest, which verifies the reliability and superiority of using longitudinal vibration data instead of transverse and vertical vibration data.Data of sonic tests and blasting vibration monitoring were collected from total 19 bench blasting operations, and total 52 sets of data pairs comprising the damage depth and the change in the natural frequency were obtained through the collected data, Equation (1) and Equation (21). 
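As a rough illustration of how the natural-frequency term in these data pairs can be extracted from a recorded longitudinal velocity signal, the sketch below picks the peak of a Welch power spectral density estimate; the segment length is an arbitrary choice of mine, and neither the paper's exact windowing nor the change-rate definition of Equation (21) is reproduced here.

```python
import numpy as np
from scipy.signal import welch

def dominant_frequency(velocity, fs, nperseg=1024):
    """Peak frequency of the Welch PSD of a longitudinal vibration record.

    velocity : 1-D array of longitudinal particle velocities
    fs       : sampling rate of the blasting vibration monitor in Hz
    """
    f, pxx = welch(velocity, fs=fs, nperseg=min(nperseg, len(velocity)))
    return f[np.argmax(pxx)]

# Hypothetical usage: with records taken before and after a bench blast, one
# plausible form of the change rate is (f_before - f_after) / f_before, but the
# paper's own definition in Equation (21) is not shown in this excerpt.
```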
The scatter plots shown in As the excavation of high rock slopes advances, more and more on-site data coming from sonic tests and the blasting vibration monitoring were collected, and those increasing data were successively used in the progressive procedures presented in For the first two bench blasting operations, there are total 9 sets of data pairs relating the damage depth to the change rate in the natural frequency, and the corresponding scatter plot and the fitting line derived by the least square (LS) method are both shown in The slope and intercept of the fitting line obtained by LS method were selected as the initial values for the parameters The posterior predictive regression lines marked as Bayesian fits in Since the predictive model of the damage depth was determined through the Bayesian approach, the damage depth for the third bench blasting could be predicted by inputting the change rate in the natural frequency of rock masses for the third bench blasting. The complete procedures of the Bayesian approach to predict the damage depth for the third bench blasting are illustrated in As shown in The proposed Bayesian approach uses only parts of the blasting vibration monitoring and sonic test data that are originally required for controlling the vibration and damage of the remaining rock masses, and no additional data are further required. Developing the predictive model in the Bayesian approach using the natural frequency instead of the PPV helps reduce the measurement work and the prediction deviation, because at least five blasting vibration monitoring points along a line are demanded in exploring the blasting vibration attenuation law that is used to predict the PPV while just one blasting vibration monitoring point at the upper bench toe is enough for extracting the natural frequency of rock masses. By using the Bayesian linear regression, the new blasting vibration monitoring and sonic test data can be added into the input dataset to update the predictive model. Furthermore, the Bayesian predicted result is not the distinct point but a distribution containing some statistical characteristics that describe the damage state more appropriately and scientifically. Further studies will be considered to integrate the statistical characteristics into the current description and codes of the damage and vibration control.Considering the benefits that the natural frequencies of rock masses are intrinsic characteristics and relatively simple to obtain without knowing the near-field vibration data, a relationship between the blast-induced damage and natural frequency of rock masses was firstly developed. The damage depth representing the blast-induced damage is proportional to the change in the natural frequency. The blast-induced damage was obtained through sonic tests and the natural frequencies were extracted by picking PSD peaks of blasting vibration monitoring data. Based on the developed relationship and available vibration and sonic data, a Bayesian approach was then proposed to predict the blast-induced damage of high rock slopes using vibration and sonic data. The procedures of the Bayesian approach are divided into two major steps, namely establishing the predictive model and producing the predicted results. The Bayesian predictive models of the damage depth were firstly developed by undertaking the Bayesian linear regression. There exists uncertainty in the Bayesian estimates that was expressed by the variability of the regression lines of the Bayesian fits. 
The damage depth for the next bench blasting could be predicted by inputting the change rate in the natural frequency of rock masses to the predictive models. Finally, the Bayesian approach was applied in the Baihetan Hydropower Station, and the probabilities of predicted results being below corresponding observations are all above 0.85. The proposed Bayesian approach that makes the best of numerous monitoring data and includes the uncertainty in the predicted results is practical and efficient. This study focuses on predicting the blast-induced damage of high rock slopes and the presented case study at Baihetan Hydropower Station provides a reference for similar projects."} +{"text": "There have been a few attempts to develop prediction models of splitting tensile strength and reinforcement-concrete bond strength of FAGC , however, no model can be used as a design equation. Therefore, this paper aimed to provide practical prediction models. Using 115 test results for splitting tensile strength and 147 test results for bond strength from experiments and previous literature, considering the effect of size and shape on strength and structural factors on bond strength, this paper developed and verified updated prediction models and the 90% prediction intervals by regression analysis. The models can be used as design equations and applied for estimating the cracking behaviors and calculating the design anchorage length of reinforced FAGC beams. The strength models of PCC (Portland cement concrete) overestimate the splitting tensile strength and reinforcement-concrete bond strength of FAGC, so PCC\u2019s models are not recommended as the design equations. AntA = entire effective tension area. Zhang\u2019s model :(39)ZhacrS and test/prediction ratios can be predicted by Equation (40), as shown in crS is 1.11, with a standard deviation of 0.17. This indicates that Equation (40) can predict crS of reinforced FAGC beams well.To validate the prediction model, the experimental results on the crack spacing of reinforced FAGC beams were collected from previous literature ,40, as lw can be obtained from the crack spacing, as it is the tensile extension difference between the reinforcement and the concrete within the crack spacing [s\u03c3 = stress in the reinforcement at a cracked section, sE = modulus of elasticity of the reinforcement.The crack width spacing . ConsideBased on Equations (39) and (41), it is concluded that from a qualitative perspective, compared with PCC, the higher bond strength and the lower tensile strength of FAGC leads to the narrower crack width of reinforced concrete beams, which is beneficial to the durability of reinforced concrete beams.This study has established the databases of splitting tensile strength and bond strength of FAGC, developed and verified the prediction models and the corresponding prediction intervals of splitting tensile strength and bond strength of FAGC, and used the strength models to calculate the design anchorage length and estimate the cracking moment, crack spacing and width of reinforced FAGC beams.Compared with the previous strength models of FAGC, the tensile strength model in this study considers the effect of shape and size of tested specimens on strength, and the bond strength model in this study considers the cover to diameter ratio and the diameter to development length ratio. 
Therefore, the models in this study can be used as the design equations for estimating the tensile strength and reinforcement-concrete bond strength of FAGC. The strength models provide the corresponding 90% prediction intervals; the lower limit of the prediction intervals is the characteristic value of the strength. The splitting tensile strength of FAGC is slightly lower than that of PCC with the same compressive strength, while the scatter of the splitting tensile strength of FAGC is close to that of PCC. The scatter of the bond strength of FAGC is larger than that of PCC. As a result, although the estimated mean bond strength of FAGC is higher than that of PCC in the same case, the characteristic value may be lower than that of PCC in the case of a small bar spacing. The strength prediction models of PCC cannot be used for FAGC. To ensure adequate anchorage and suitable design anchorage lengths of reinforced FAGC beams, the minimum bar spacing needs to be restricted in the design code for FAGC. Incorporating the models into the prediction models of the cracking behaviors for PCC gives good predictions of the cracking moment and crack spacing of reinforced FAGC beams. Based on this study, the above conclusions are obtained."} +{"text": "The introduction of socially restrictive measures due to COVID-19 forced students to adapt to new living conditions and take active action. As a consequence of limited movement, there was a danger of increased mental health problems. This can become worrying if poor mental health and sedentary lifestyles persist during the pandemic, with the further risk of the resulting condition worsening due to COVID-19"} +{"text": "Urogenital tuberculosis remains a frequent disease in our country. It is the most common extra-pulmonary localization. Prostatic involvement is extremely rare. We report the observation of a prostatic lithiasis complicating a granulomatous prostatitis of tuberculous origin, revealed essentially by obstructive and storage lower urinary tract symptoms. The diagnosis was suspected on imaging and clinical findings and confirmed by histology. Treatment consisted of endoscopic resection of the prostate associated with endoscopic ballistic lithotripsy. Prostatic lithiasis is a rare condition with a poorly elucidated etiopathogeny. A tuberculous origin should be considered in this condition. Urogenital tuberculosis is the most frequent extra-pulmonary localization, and prostatic involvement is extremely rare, as shown by the scarcity of observations published in the literature. We propose to review this pathology in the light of the observation of a prostatic lithiasis complicating a prostatic granulomatosis of tuberculous origin revealed by lower urinary tract symptoms. A 48-year-old man, with a medical history of treated cerebral tuberculosis, presented to our consultation unit with a symptomatology that had been evolving for 3 months, consisting of storage and obstructive lower urinary tract symptoms such as urgency, frequency, nocturia three times per night, and poor stream, without any notion of hematuria. Clinical examination found a patient in good general condition with an indurated nodule in the tail of the right epididymis. The rectal examination revealed a multinodular prostatic hypertrophy of hard consistency and irregular contours, estimated at 50 g. The cytobacteriological analysis of the urine was normal, as was his renal function (testing for Koch bacillus in the urine on three consecutive days was negative). The PSA value was normal. 
KUB radiography showed the presence of a bilobed calcification projecting on the prostatic cavity associated with two small centimetric lithiasis projecting on the path of the prostatic urethra . TransabThe patient was operated on after a correct pre anesthetic workup. He had a urethrocystoscopy which showed a pre-bulbar urethral stricture not very tight and unifocal, a prostate swollen by multiple stones (highlighted after endoscopic resection of the prostatic lobes). The treatment consisted of an internal urethrotomy for the urethral stricture, an endoscopic resection of the prostate associated with endoscopic ballistic lithotripsy. The patient had almost complete fragmentation of the large lithiasis fragments in the prostatic compartment. The stones were left in intimate contact with the prostatic capsule. The macroscopic aspect of the stones is in favor of brushite stones. The anatomopathological study of the resection shavings was in favor of prostatic tuberculosis (epithelio-giganto-cellular granuloma without caseous necrosis).The patient was put on antituberculosis treatment for 6 months combining streptomycin, rifampicin, isoniazid and pyrazinamide.The evolution was marked by the recurrence of irritative lower urinary tract symptoms. Radiological explorations (Retrograde urethrogram) concluded to a sclerosis of the bladder neck and the patient underwent an endoscopic resection of the bladder neck. After the endoscopic intervention the patient became asymptomatic. One year later, the patient is satisfied with his urination with a correct flow meter curve.3Urogenital tuberculosis is still common in our climate and its incidence is increasing worldwide. It ranks fifth after pulmonary, lymph node, osteoarticular and digestive localization. The prostatic localization is rare. This rarity is underlined by the majority of authors.The transurethral endoscopic approach is simple to perform. It is less invasive and reproducible in case of recurrence. It is more suitable for small and multiple prostatic calculi.4Prostatic lithiasis is a rare pathology of poorly elucidated etiology. The origin of tuberculosis should be evoked in front of this affection in spite of the rarity of the localization of this disease. The treatment consists of endoscopic resection of the prostate generally associated with ballistic lithotripsy.The authors declare that there are no conflicts of interest regarding the publication of this article."} +{"text": "Carpal tunnel is an important anatomical passage that carries the flexor tendons into the hand. As there is still no consensus about its contents among the anatomy textbooks, the main purpose of this study was to identify the relations of the flexor carpi radialis tendon in the carpal tunnel.This retrospective study was completed in April 2018 at authors\u2019 university\u2019s hospital. Seventy-four female and 44 male patients\u2019 wrists without any pathology were examined by using magnetic resonance images. The series of axial sections where the pisiform exist were evaluated by using T1 sequence and the structures in the carpal tunnel were identified.Results of this study showed that the tendon of the flexor carpi radialis was found above the flexor retinaculum within its own septal compartment in all patients.According to the results, tendon of flexor carpi radialis crosses the wrist region superficial to the carpal tunnel. Thus, tendon of flexor carpi radialis doesn\u2019t have any effect on the carpal tunnel syndrome. 
Further cadaveric studies would be useful for identifying the contents of the carpal tunnel and morphological organization of the wrist. The antebrachial fascia, anteriorly, is continuous superficially as palmar carpal ligament and distally and deeply as a strong fibrous band; the flexor retinaculum . At the medial aspect of the wrist, ulnar artery and nerve pass through a tunnel between these ligaments, which is termed as the ulnar tunnel [3]. The roof and floor of the canal is bordered by palmar carpal ligament and transvers carpal ligament, respectively [1\u20133]. Thick flexor retinaculum also encloses the carpal groove superiorly and forms a passageway on the palmar side of the wrist . The carpal tunnel is an osseo-fibrous tunnel which is bordered superiorly by transverse carpal ligament and the base consist of the hook of hamate, triquetrum, and pisiform bones medially; scaphoid and the trapezium bones laterally . The tendons of the flexor muscles of the fingers and the median nerve pass through the tunnel . Most of the textbooks mentioned that 10 structures traverse the tunnel including the 9 tendons which are flexor digitorum superficialis and profundus, and palmaris longus and the median nerve . On the anterior surface of the wrist, the flexor retinaculum and palmar carpal ligament held the tendons of the flexor muscles in the tunnel [9]. The tendon of the flexor carpi radialis (FCRt) passes through a vertical groove on the trapezium within its own tunnel at the wrist region, which is separated from the carpal tunnel by the deep portion of the transverse carpal ligament and inserts on the palmar surface of the base of the second and third metacarpal . In addition to this well-known information, it is also mentioned that FCRt found within the carpal tunnel in its own tunnel [8]. Previous studies have focused on describing the carpal tunnel\u2019s borders and related structures by using different methods to identify the safe zones for carpal tunnel release [12\u201314]. MRI and ultrasound methods are used for identification of the course of FCRt and diagnosing the wrist pathologies [15\u201318]. Carpal tunnel syndrome is the most common entrapment neuropathy characterized by tingling, burning, pain, and paresthesia in the first 3 radial digits and radial half of the forth digit due to the compression of the median nerve in the carpal tunnel [19\u201322]. Although it has been widely studied, while mentioning the contents of the carpal tunnel, differences could be seen among the textbooks . There is no conflict about the fact that the tendons of flexor digitorum superficialis, flexor digitorum profundus, flexor pollicis longus, and the median nerve are located inside the carpal tunnel. The presence of the tendon of flexor carpi radialis in the tunnel could be confusing. The aim of this study is to reveal the relationship between FCRt and carpal tunnel. Thus, the borders and contents of the carpal tunnel could be described easily. Results of this study could make an important contribution to the literature and support the diagnostic studies about carpal tunnel syndrome.This study was carried out in pursuit of receiving the ethical approval from the local ethics committee (GO 17/887-15) and completed according to the principles of the Helsinki Declaration.Retrospectively, 118 wrist regions MRI scans were evaluated. None of the patients had surgery or trauma history. Patients with wrist pathology were excluded. The mean age of the patients was 35.7 (range: 9\u201374). 
In addition, 3 fresh-frozen upper limb cadavers were dissected to demonstrate the course of the flexor carpi radialis tendon.Wrist MRI evaluations were completed using any of three 1.5 Tesla scanners in the radiology department . MRI protocols were performed while patients were in the supine position. The radiology department\u2019s routine MRI protocol consisted of coronal, sagittal, and transverse T2-weighted spin-echo and sagittal and transverse T1-weighted spin-echo with 3.0\u20133.5 mm slice thickness. All patients\u2019 MRI series were obtained from the Picture Archiving and Communication System (PACS) at authors\u2019 university hospital. All evaluations were completed by a 20 year experienced anatomy professor, a 15-year experienced anatomy specialist, a 6-year experienced anatomy specialist and a 20-year experienced orthopedic associated professor by using Osirix-Lite version 9 .To standardize the evaluation, the series of axial sections from the level of pisiform bone were examined. First location of the FCRt, proximally to the carpal tunnel was detected. Afterward the course of the tendon was followed till it passed through the carpal tunnel and its location according to the flexor retinaculum was observed. While dissecting the fresh-frozen cadavers, first incision was applied vertically from the anterior surface of the middle of the forearm to the proximal part of the palmar surface of the hand. The skin and superficial connective tissues were deviated carefully to view the muscle groups of the anterior compartment of forearm and the flexor retinaculum. The entire course of tendons of the flexor muscles were evaluated.In all MRI series the flexor tendons of the digits and median nerve were located below the flexor retinaculum while the FCRt was located above the flexor retinaculum within its own septal compartment . Likewise, the cadaveric dissections also executed that the flexor tendons of digits and median nerve were covered by flexor retinaculum within the carpal tunnel and, a completely different and separated compartment of the FCRt was found above the flexor retinaculum . At wrist region distal continuation of the antebrachial fascia forms superficially the palmar carpal ligament and deeply the flexor retinaculum . Between palmar carpal ligament and flexor retinaculum guyon canal is found which includes ulnar artery and nerve [3]. Strong fibrous flexor retinaculum forms the roof of the carpal tunnel . It is important to define the flexor retinaculum\u2019s boundaries precisely to identify the contents of carpal tunnel. Furthermore, this will lead to creating common terminology between the anatomy and clinical sciences. The FCR muscle is found at the anterior compartment of the forearm and mainly flexes and to a lesser extend radially rotates the wrist [16]. Although its morphological and clinical importance has been widely studied, there is still no consensus about its relationship with the carpal tunnel . Most of the textbooks mentioned that 10 structures traverse the tunnel including the 9 tendons including flexor digitorum superficialis and profundus, and palmaris longus and the median nerve . However, some other textbooks mention that the FCRt found in the tunnel [8]. Morphological studies have demonstrated that the FCRt was not included in the contents of the carpal tunnel . Chamas et al. described as FCRt forms the lateral border of the carpal tunnel in a morphological study that focused on the explaining the boundaries of carpal tunnel [7]. 
The FCRt is covered by a fibro-osseous tunnel which is adjacent to the flexor retinaculum of the carpal tunnel . This tunnel is formed proximally by the scaphoid, the flexor retinaculum, and a vertical retinacular septum and, distally, by trapezium, again the flexor retinaculum, and the vertical retinacular septum [10]. This vertical retinacular septum separates the FCRt from the carpal tunnel . Also, it is usually used as donor in tendon transfers and reconstructive surgeries of the forearm . Due to this proximity, any disorder like tenosynovitis of the FCRt may imitate carpal tunnel syndrome [10]. Therefore, FCR tenosynovitis should be considered in the differential diagnosis of carpal tunnel syndrome and a detailed examination should be performed.According to the results of this study, the FCRt crosses the wrist superficial to the carpal tunnel. Thus, the FCRt may not be included as a factor in the etiology of the carpal tunnel syndrome but the close relationship must be in mind that the pathologies may imitate the carpal tunnel syndrome. Further multidisciplinary studies would be useful for identifying the contents of the carpal tunnel, the morphological organization of the wrist, and the biomechanical etiologies and mechanisms of the carpal tunnel syndrome.Although, the well-known theoretical information about the carpal tunnel, there is still no consensus about situation of the FCRt in major anatomy textbooks. Therefore, this study was demonstrated the precise situation of the tendon of the flexor carpi radialis with the carpal tunnel. Results verified that the FCRt does not pass through the carpal tunnel, but it passes through its own compartment that is formed by the flexor retinaculum. The main limitation of the present study was the lack of cadaveric measurements due to the insufficient number of specimens. Moreover, a histological investigation should be performed to understand the course of the flexor retinaculum. This study was carried out in pursuit of receiving the ethical approval from the local ethics committee (GO 17/887-15) and completed according to the principles of the Helsinki Declaration."} +{"text": "However, different surgical table orientations can impact access to different work zones, areas and equipment in the OR, potentially impacting workflow of surgical team members and creating patient safety risks; (2) Methods: This quantitative observational study used a convenience sample of 38 video recordings of the intraoperative phase of pediatric outpatient surgeries to study the impacts of surgical table orientation on flow disruptions (FDs), number of contacts between team members and distance traveled; (3) Results: This study found that the orientation of the surgical table significantly influenced staff workflow and movement in the OR with an angled surgical table orientation being least disruptive to surgical work. The anesthesia provider, scrub nurse and circulating nurse experienced more FDs compared to the surgeon; (4) Conclusions: The orientation of the surgical table matters, and clinicians and architects must consider different design and operational strategies to support optimal table orientation in the OR. Optimal patient positioning is a critical part of any surgical procedure and the wrong position of the patient in the operating room (OR) during surgery can cause patient harm and injury with effects ranging from minor aches, pains and skin abrasions to paralysis and loss of life ,2,3. 
TheOperating rooms for general surgeries are usually designed around a standard position of the surgical table centered and parallel to the walls of the room with the assumption that most surgeries would be conducted in this orientation, with table rotation required for a smaller set of surgery types. In this standard position, the anesthesiologist is usually located at the head of the patient with the surgeon and scrub nurse located on either side of the patient depending on the surgical site location. The organization of fixed and difficult-to-move equipment and storage are usually optimized for this position of the surgical table. However, the surgical table is often rotated to accommodate different surgery types or surgeon preferences, requiring modifications to the position of the anesthesia workspace and moveable equipment associated with anesthesia and surgical activities as well as the surgical team members during surgery. These different orientations of the surgical table may pose workflow challenges and patient safety risks to the patient.In a highly interconnected and complex system like the operating room, a significant change or disruption to one part of the system potentially impacts other parts as well. For example, a particular orientation of the surgical table might result in more workspace or easier access to the surgical site for the surgeon while resulting in crowding for the scrub nurse or increasing the distance between anesthesia monitoring devices and the patient, creating challenges for the anesthesia team and safety risks for the patient. The operating room environment is a highly complex system, and efficient and safe delivery of patient care depends on understanding and supporting the dynamic and changing relationships between the surgical team members, patient, tasks, equipment and the physical environment of the OR. Several studies show that disruptions are frequent during surgical procedures and poor room and equipment ergonomics are major contributing factors . SpecifiA recent study showed that the rate of major FDs in the OR increased as the rate of minor FDs increased, and this was particularly true for disruptions that involved OR equipment . This stCertain orientations of the surgical table may result in some areas and access to certain equipment or workspaces in the OR being obstructed during the intraoperative phase, requiring team members to take alternative and undesirable paths to get to equipment or storage. This may lead to unnecessary travel within the OR and unwanted contacts between team members. For example, non-sterile team members (circulating nurse or anesthesia provider) may need to pass near sterile team members (surgeon and scrub nurse) to get to storage and equipment. Taaffe et al. found thHow do different types of surgical table orientations impact the workflow of surgical team members?How do different types of surgical table orientations impact the overall movement of surgical team members during the intraoperative phase of the surgery?This study utilizes the Systems Engineering Initiative for Patient Safety 2.0 (SEIPS 2.0) framework to studyThis observational study used a convenience sample of 38 video recordings of pediatric outpatient surgeries from four different ORs at the Medical University of South Carolina (MUSC). 
This study complied with the American Psychological Association Code of Ethics and was approved by the Institutional Review Board at MUSC where observations were conducted (study ID: Pro00048787).The video recordings of surgeries were captured using four video cameras located in four corners of the operating room such that all parts of the OR were visible. The video recordings were initiated when the patient entered the room and ended when the patient exited the room. The videos were then coded for surgery phases, surgical table orientation, team member locations, and flow disruptions using Noldus Observer XT V.12. Two researchers with human factors training coded the videos. All coders observed a set of 8 pre-recorded surgeries from a different OR and familiarized themselves with the coding scheme prior to coding. Coders also participated in two training sessions where they received education from human factors and clinical experts on human factors issues in the OR and also the overall goals and protocol related to the current study. The coding was conducted in parallel over three rounds until there was consensus. Percentage agreement over 83% was obtained for flow disruption and location codes using an index of concordance.Each video recording was first coded to mark different surgery phases such that the duration of each phase could be obtained and the analysis could focus on activities within that phase. This study focuses on the intraoperative phase which was defined as the duration between the incision to the surgical site of the patient and the closure of the surgical site. The plan of the operating room was drawn to demarcate different functional zones, which were bounded and defined according to the type of function conducted in them . The locFour different table orientations were identified: (A) the surgical table angled in the room with the head of the table in the anesthesia zone (orientation A), (B) parallel to the long or short walls (depending on shape of room) of the room with the head of the surgical table in the anesthesia zone (orientation B), (C) parallel to the long or short walls of the room (depending on room shape) with the anesthesia zone perpendicular to the head of the patient (orientation C) or (D) angled in the room with the anesthesia zone adjacent to the head of the surgical table (orientation D) . The zonIn this study, the impact of the different surgical table orientations on staff workflow was understood by studying FDs. Flow disruptions were coded for all surgical team members using an existing taxonomy developed by Palmer and colleagues and adapThis study also focused on understanding how different surgical table orientations impact the movement of surgical team members during the intraoperative phase. The data obtained from video coding of the surgeries were used to create a playback simulation using AnyLogic simulation software. Data on staff movement and interaction during the intraoperative phase was obtained from the software platform which was customized to track the location of all team members with the required degree of precision and to obtain measures related to the overall movement of all team members such as the total distance traveled (m) and the total number of contacts per surgery. A contact was calculated by monitoring the distance between any two subjects in the operating room in the playback model. 
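As a hedged illustration of such a playback-based check, the sketch below counts contacts from pairwise distances between tracked positions; the data layout, the common sampling step, and the transition-based counting used to avoid double counting a lingering proximity are assumptions of mine, while the 0.6 m threshold is the value reported in the next paragraph.

```python
import numpy as np

def count_contacts(tracks, threshold=0.6):
    """Count pairwise contacts from tracked positions in a playback model.

    tracks    : dict mapping a team-member role to a (T, 2) array of x/y
                positions sampled at a common time step (an assumed layout,
                not the AnyLogic model's internal representation)
    threshold : contact distance in metres (0.6 m in the study)
    """
    roles = list(tracks)
    contacts = 0
    for i in range(len(roles)):
        for j in range(i + 1, len(roles)):
            d = np.linalg.norm(tracks[roles[i]] - tracks[roles[j]], axis=1)
            close = d < threshold
            # Count each crossing from "apart" to "within threshold" once.
            contacts += int(close[0]) + int(np.sum(close[1:] & ~close[:-1]))
    return contacts
```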
When another subject passed or was within a prespecified threshold of (0.6 m) of another, a contact was recorded.The event-based data around FDs obtained from the Noldus Observer XT 12 software were converted into time-based data with 1 s intervals to facilitate statistical analysis. Data on distance traveled and number of contacts was obtained from the AnyLogic simulation software program. Descriptive statistics were used to report characteristics of FDs and movement patterns (distance traveled and number of contacts) associated with different types of surgical table orientations across the intraoperative phases of the observed pediatric surgeries.The analysis was conducted at the surgical team member level with data counting the number of FDs, the distance traveled, and the number of contacts for each person in the operating room during the intraoperative phase. Each person in the room was coded according to their role in the surgical team and there could be more than one provider for each type during the surgical case.Quasi-Poisson regression models were used to examine the impact that surgical table orientation had on the number of FDs. Quasi-Poisson regression is a generalization of the Poisson regression model and estimates the over-dispersion and does not assume that the mean and variance are equal. These models were used to predict the count of all FDs, the count of major FDs, and the count of minor FDs during the intraoperative phase of the surgery.In addition to the overall FD analysis, binary indicator variables were created to identify if during the intraoperative phase each person in the OR experienced any flow disruption of a specific type. Specifically, the indicator variables were created to identify if a healthcare worker was involved in a layout FD, an environmental hazard FD, an interruption FD, an equipment FD, or a usability FD. Binary logistic regression models were used to evaluate the likelihood of an individual being involved in any of the specific FD types. Explanatory variables considered in the analysis included binary indicator variables for the surgical table orientation, the provider types, and the specific operating room where the surgery was conducted. The position of the surgical site was also accounted for since this might impact the position of the surgical team around the patient. The surgeries were categorized as upper body and lower body . Additionally, the number of people in the operating room and the duration of the intraoperative phase were included as explanatory variables. All statistical analysis was conducted in R Studio version 1.4.1103 . Stepwise deletion was used to remove insignificant parameters from the models.Thirty-eight pediatric surgeries conducted in four different ORs were video recorded and analyzed. The quasi-Poisson regression model predicting all flow disruptions during the intraoperative surgical phase is shown in The quasi-Poisson regression model predicting minor flow disruptions during the intraoperative surgical phase is shown in The quasi-Poisson regression model predicting major FDs during the intraoperative surgical phase is shown in The results of several binary logistic regression models suggest that individuals working in ORs with table orientation A were less likely to experience a layout FD , controlling for the duration of the surgery. Individuals working in surgeries with table orientation D were less likely to experience a usability FD with circulating nurses also less likely to experience a usability FD . 
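As a rough illustration of the modelling approach described above (the study's own analysis was carried out in R), the sketch below fits a quasi-Poisson count model and a binary logistic model on a hypothetical provider-level table; all column names (fd_count, usability_fd, orientation, role, or_id, site, n_people, duration_min) are assumptions made for the example, not the study's variables.

```python
# Illustrative sketch only (the paper's analysis was run in R); it shows the
# two model families described above on a hypothetical provider-level table.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("provider_level_data.csv")  # one row per team member per case

# Quasi-Poisson model of flow-disruption counts: a Poisson GLM whose dispersion
# is estimated from the Pearson chi-square instead of being fixed at 1, so the
# variance is not forced to equal the mean.
quasi_poisson = smf.glm(
    "fd_count ~ C(orientation) + C(role) + C(or_id) + C(site)"
    " + n_people + duration_min",
    data=df,
    family=sm.families.Poisson(),
).fit(scale="X2")  # "X2" requests the Pearson chi-square dispersion estimate
print(quasi_poisson.summary())

# Binary logistic model of whether an individual experienced a given FD type
# (here a usability FD), using the same kind of explanatory variables.
usability_model = smf.glm(
    "usability_fd ~ C(orientation) + C(role) + duration_min",
    data=df,
    family=sm.families.Binomial(),
).fit()
print(usability_model.summary())
```

The quasi-Poisson fit differs from an ordinary Poisson regression only in that the dispersion is estimated rather than fixed at 1, which widens the standard errors when the counts are over-dispersed.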
There were no significant table orientations that predicted the likelihood of individuals experiencing an equipment, environmental hazard, or an interruption FD during the intraoperative phase of the surgery.The total distance traveled by surgical team members during the intraoperative phase across all observed surgeries was 6776 m. Team members traveled an average of 178.3 m/surgery and 8.13 \u00b1 3.9 m/min during the intraoperative phase of a surgery . The cirA logistic regression model predicting the likelihood that an individual was in the 4th quartile of estimated distance moved during the intraoperative surgical phase is shown in The total number of contacts recorded between team members during the intraoperative phase when they passed each other within a prespecified threshold (0.6 m) was 1789. The Quasi-Poisson regression model predicting the number of contacts for each individual in the OR during the intraoperative phase is shown in This is the first quantitative observational study to examine how the work of the surgical team is impacted by different types of surgical table orientations. This study found that the orientation of the surgical table within the operating room significantly influenced staff workflow and movement in the OR during the intraoperative phase of surgery. More specifically, FDs during the intraoperative phase were 1.48 times higher when the surgical case used orientation B and 1.79 times higher with orientation D after controlling for other variables. The findings were similar for minor FDs, though there was no impact of surgical table orientation on the number of major FDs. This study also demonstrated that surgical team members were less likely to experience a layout related FD in surgeries conducted in the angled surgical table orientation A compared to all other orientations. Team members working in operating rooms with surgical table orientation C with the anesthesia zone perpendicular to the head of the patient) were 2.82 times more likely to experience contacts with other team members. The average number of contacts/min was least for all team members in orientation A. The study also found that individuals working in orientation B were more likely to walk longer distances during the intraoperative phase.A key insight from this study is that surgical table orientation matters with the angled orientation A impacting fewer flow disruptions, specifically the occurrence of layout related disruptions. A previous simulation-based study found that the angled surgical table orientation was preferred as it provided space for movement in the room without increasing the number of contacts . This obOn the other hand, orientation B and D resulted in a higher number of FDs during the intraoperative phase. While table orientation B is close in configuration to the angled orientation A, there were significantly more FDs in this orientation. In ORs with narrow space available at the foot of the table , the circulation space at the foot of the table may get blocked with equipment during the intraoperative phase requiring team members to find alternative (and often undesirable) paths to get to the storage and equipment. This may result in unwanted contacts, more disruptions and greater travel within the OR. The angling of the surgical table opens up space at the foot of the table, potentially helping to reduce some of these challenges. 
The higher number of FDs in orientation D can be explained by the spatial constraints of accommodating the surgeon and anesthesia provider and other stored equipment in a limited workspace in the corner of the room.This study also found that surgical team members are impacted differently by different orientations of the surgical table. The surgeon and scrub nurse experienced fewer FD/min in orientation A compared to orientation D. Overall, the anesthesia provider, circulating nurse and scrub nurses experienced more FDs during the intraoperative phase compared to the surgeon. An interesting finding from this study is that the scrub nurse experienced 1.78 times more major FDs compared to other team members. This may suggest that while the position of the table is optimized for the surgeon, the scrub nurse who is assisting the surgeon from the opposite side of the table, may be constrained for space and may experience major disruptions to their work. This study also found that circulating nurses and scrub nurses walked much more than other providers in the OR. The movement of the circulating nurse in the OR is expected, given the requirements of their role to monitor and support the team during surgery. However, the scrub nurse is usually fairly stationary during the intraoperative phase and the movements of this individual (while less than that of the circulating nurse) may be indicative of layout challenges requiring frequent adjustments to accommodate equipment or other staff in the OR. This study also confirms findings from other OR studies that showed the negative impacts of increased number of people in the OR ,22. In tThe findings from this study have significant implications for both OR design and clinical practice. This study suggests that OR designers and administrators should not only reconsider standard surgical table positions and associated locations of different functional zones in the design and layout of ORs, but should also evaluate layout and workflow in the context of different surgical table orientations. Simulation-based evaluations that allow teams to enact surgical procedures in different surgical table orientations in a physical mock-up may help in proactively identifying and mitigating workflow challenges . In thisFrom a clinical standpoint, this study highlights the workflow challenges experienced by surgical team members in different orientations. Some of these surgical table orientations may be required for certain types of surgeries. However, to the extent possible, surgeries should be conducted in standard surgical table orientations which support workflows for all team members. OR managers, administrators and clinicians should also be cognizant of spatial constraints posed in certain orientations such as C and D and prevent accumulation of equipment and clutter in the OR that may further exacerbate workflow challenges in these orientations. While this study did not measure incidence of patient harm or injury associated with different surgical table orientations, other studies have shown that high rates of FDs contribute to surgical errors ,15 and iThis study is arguably the most detailed analysis of the relationship between surgical table orientation and surgical workflow in the OR ever conducted. By using a systems approach, we were able to study the dynamic interactions between surgical team members and their work in the context of different spatial layout conditions (table orientation). 
The type of data obtained from the 38 general pediatric surgeries is very extensive. However, this study has some limitations. While this study included data from all individual team members across these surgeries, it is a relatively small sample. Further, different types of pediatric surgeries were included in this sample. While the surgeries were categorized based on surgical site location given its relevance for positioning of team members, it is possible that specific variations among procedures could potentially confound findings. Given that this is the first extensive study of its kind on surgical table positioning and orientation in surgery and that table rotations are common in all types of ORs, there is a critical need to expand this work to other types of surgical environments.The layout of the operating room, especially the position of the surgical table, significantly impacts the work of all surgical team members by impacting flow disruptions as well as movement in the operating room. Utilizing a systems approach, this study found that an angled surgical table orientation is optimal for supporting the work of all team members in general pediatric surgeries while other orientations may cause challenges to workflow and movement. There are key areas of improvement identified in this study that are relevant for architects as well as clinicians and administrators."} +{"text": "Although the exact pathogenetic mechanisms leading to age-related macular degeneration (AMD) have not been clearly identified, oxidative damage in the retina and choroid due to an imbalance between local oxidants/anti-oxidant systems leading to chronic inflammation could represent the trigger event. Different in vitro and in vivo models have demonstrated the involvement of reactive oxygen species generated in a highly oxidative environment in the development of drusen and retinal pigment epithelium (RPE) changes in the initial pathologic processes of AMD; moreover, recent evidence has highlighted the possible association of oxidative stress and neovascular AMD. Nitric oxide (NO), which is known to play a key role in retinal physiological processes and in the regulation of choroidal blood flow, under pathologic conditions could lead to RPE/photoreceptor degeneration due to the generation of peroxynitrite, a potentially cytotoxic tyrosine-nitrating molecule. Furthermore, the altered expression of the different isoforms of NO synthases could be involved in choroidal microvascular changes leading to neovascularization. The purpose of this review was to investigate the different pathways activated by oxidative/nitrosative stress in the pathogenesis of AMD, focusing on the mechanisms leading to neovascularization and on the possible protective role of anti-vascular endothelial growth factor agents in this context. Age-related macular degeneration (AMD) is a complex multifactorial retinal degenerative disease primarily affecting the macula and progressively leading to irreversible central vision loss. It is the first cause of blindness in the Western world, with an estimation of up to 18.6 million people being affected by the blinding stages of the disease by 2040 worldwide . 
The ConThe exact pathogenetic mechanisms leading to AMD are still not fully understood; however, it is well known that AMD is a multifactorial disease with multiple genetic and environmental factors contributing to its onset and progression ,9,10,11.The present review will concentrate on the role of oxidative and nitrosative stress in the initiation and progression of AMD, with particular attention on the mechanisms leading to the formation of neovascularization in the context of AMD and the possible protective role of anti-vascular endothelial growth factor (VEGF) agents in this context.The maintenance of a physiologic relationship between the different components of the CC/BrM/RPE/photoreceptor complex is of primary importance to preserve its correct functioning and the breakdown of this equilibrium is involved in the changes occurring in AMD .2), to photoreceptors from serum ; h; h204]; 2O2 [2O2, by modulating the activity or the expression of the eNOS and iNOS isoforms [In a recent study, our group examined the effects of Aflibercept and Ranibizumab on oxidative stress in vitro on cultured ARPE-19 cells and showed that both agents were positively involved in the modulation of cell viability and mitochondrial function in RPE cells, acting on mechanisms mediated by NO release and the regulation of autophagy both under physiologic conditions and after exposure to H2O2 . In partisoforms . For the2O2 [2O2 and modulated the activation and expression of eNOS and iNOS isoforms in RPE cells; moreover, the presence of a NOS inhibitor decreased the protective effects elicited by anti-VEGFs on RPE cells. Those findings further supported the hypothesis that NO might play an important role as a mediator of the protective effect of the anti-VEGFs on cellular survival [Contradictory evidence exists in the literature on the effects of anti-VEGFs on mitochondrial function; in fact, while Malik et al. did not find any beneficial effect on mitochondria at clinical doses , Sheu et2O2 . The ressurvival .In the present review, we have highlighted the different pathways involved in the initiation of AMD and in the progression to the late stages of disease in the context of oxidative stress. A growing body of scientific evidence supports the hypothesis that oxidative damage plays a key role in the initiation of AMD pathologic processes, such as drusen formation and RPE changes, with secondary activation of the inflammatory cascade and of mechanisms of cell death. Moreover, the idea that oxidative stress is also directly involved in the process of neovascularization in the late stages of the disease is increasingly gaining ground between researchers. NO contributes to the regulation of choroidal perfusion and the imbalance between NO and RNS observed under pathologic conditions is thought to induce microcirculatory changes in the choroid, both in non-neovascular and nAMD. Anti-VEGF agents have been shown to act against oxidative damage and to have a protective effect on cellular survival by preserving mitochondrial function, the activation of the autophagy defense system and mechanisms mediated by the release of NO. Further research is needed to study new treatments or implement current treatments, with the purpose of better counteracting the damaging effect of oxidative and nitrosative stress in AMD."} +{"text": "This clear trend of ocean warming and acidification was documented in the fifth Assessment Report (AR5) by Intergovernmental Panel on Climate Change (IPCC). 
The warming of the surface ocean could increase stratification, suppress transport of nutrients into the upper photic zone and alter hydrographic properties or patterns of ocean circulation. The effects of OA include the changes of seawater carbonate chemistry such as an increase in partial pressure of seawater CO2 (pCO2) concentration and a decrease in calcium carbonate (CaCO3) saturation state relative to pre-industrial times (1870\u20131899) . They discuss how these different compositions may highlight the different strategies of these two phytoplankton communities to cope with ongoing environmental changes in the Antarctic Ocean. Kang et al. demonstrate the differences in carbon uptake rates and intracellular biochemical compositions between two different size fractions of phytoplankton to understand the ecological roles of the small phytoplankton in terms of food quantity and quality in the East/Japan Sea where the water temperature has rapidly increased. Their findings show that the increase of small phytoplankton under the warming ocean conditions could negatively affect the primary productivity and caloric content, with further consequences on the marine food webs. The photosynthetic responses to oceanic physio-chemical conditions and phytoplankton communities in the oligotrophic Western Pacific Ocean were presented by Wei et al.. Their study found that the important biotic variables influencing Fv/Fm are diatoms, Prochlorococcus, and picoeukaryotes, whilst the maximum of primary production is closely related to cyanobacteria, dinoflagellates, and Synechococcus. The detailed investigation of a chromophytic phytoplankton community using high-throughput sequencing of rbcL genes in the Western Pacific Ocean was presented by Pujari et al.. The authors found that the diversity of RuBisCO encoding rbcL gene varies with depth and across latitudes in the Western Pacific Ocean. The variation observed in chromophytic phytoplankton suggests the strong influence of environmental variables on biological production induced by oceanographic features. Thangaraj et al. investigated comprehensive proteomic profiling of diatom Skeletonema dohrnii with a change of temperature and silicate deprivation based on the iTRAQ proteomic approach, to understand the effect of the temperature and nutrient on the physiology of marine diatoms growth and photosynthesis. Their study shows that the proteome analysis for environmental stress-response of diatoms could extend our understanding for the potential impacts of climate change on the physiological adjustment to the metabolic process of phytoplankton. Sow et al. present a clear and concise description of the biogeography of Phaeocystis along a transect from the ice edge to the equator in the South Pacific Ocean, by way of high-throughput 18S rRNA gene sequencing. Their study shows that Phaeocystis could be occasionally highly abundant and diverse in the South Pacific Ocean, whereas the oceanic fronts could be the driving force for the distribution and structure of Phaeocystis assemblages in the ocean. Their work greatly expands our knowledge about the biodiversity patterns and abundances of Phaeocystis as a globally important nano-eukaryote.Out of the 10 works published, seven of them are focused on the phytoplankton communities, or single species of microalgae, or environmental drivers. Malits et al.. Their study demonstrates that the effect of OA on viral dynamics and viral-mediated mortality varies depending on the nutrient regime of the studied systems. 
It helps us to understand how viral-mediated mortality of microbes (VMMM) can be modified with environmental forcing. Another work looks into the contribution of grazers and viruses in controlling ecologically distinct prokaryotic sub-groups and low nucleic acid (LNA) cells) along a cross-shore nutrient gradient in the northern South China Sea . Their study shows how the nutrient regime influences the fate of ecologically relevant prokaryotic groups in the actual context of global warming and the anticipated oligotrophication of the future ocean. S\u00f6renson et al. address the resilience of a marine microbial community, cultivated in an outdoor photobioreactor, when exposed to a naturally occurring seasonal stress. Differential gene expression analyses suggest that community function at warm temperatures is based on concomitant utilization of inorganic and organic carbon assigned to autotrophs and heterotrophs, while at colder temperatures, the uptake of organic carbon was performed primarily by autotrophs. Overall, the microbial community maintains a similar level of diversity and function within and across autotrophic and heterotrophic levels, confirming the cross-scale resilience theory.Three of the 10 published works have focused on the response of viruses, prokaryotes or microbial communities to environmental stress. The response of viruses to two anthropogenic stressors (OA and eutrophication) was presented by The topics of the papers published in this Research Topic range from viruses, prokaryotes to phytoplankton and cover microbial communities from the various oceans. The studies confirm that the changes already occurring in ocean environments affect the metabolism and physiology of microbial communities, and further suggest that future changes will impact the physiological and ecological function or strategy of the microbial community in the marine ecosystem. Since most of these studies focus on the response of a single taxonomic population of microbes to environmental changes, our special issue highlights the need for studies to understand how ecological interactions occurring within and among the microbial community in the changing ocean will affect ecosystem structure and function. Finally, we hope that the group of papers that we have drawn together here will be a valuable addition to the accumulating observational evidence of how microbial communities are responding to the climate-related changes and consequently useful for evaluating and predicting the ongoing and future responses of marine ecosystems associated with the global climate change.MSY wrote the text with input from CL. All other authors commented on and approved the text.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher."} +{"text": "EmONC integration was reportedly high and significantly associated with EmONC training and availability of guidelines. 
However, the congruence of reported and actual extent of integration of EmONC at the three levels of healthcare delivery still need validation as such would account for the implementation success and maternal-neonatal outcomes.The integration of emergency obstetric and newborn care (EmONC) into maternal and newborn care is essential for its effectiveness to avert preventable maternal and newborn deaths in healthcare facilities. This study used a theory-oriented quantitative approach to document the reported extent of EmONC integration, and its relationship with EmONC training, guidelines availability and level of healthcare facility. A descriptive cross-sectional study was conducted among five hundred and five (505) healthcare providers and facility managers across the three levels of healthcare delivery. An adapted questionnaire from NoMad instrument was used to collect data on the integration of EmONC from the study participants. Ethical approval was obtained and informed consents taken from the participants. Both descriptive and inferential analyses were done with statistical significance level of p<0.05 using STATA 14. The mean age of respondents was 38.68\u00b18.27. The results showed that the EmONC integration median score at the three levels of healthcare delivery was high (77 (IQR = 83\u201371)). The EmONC integration median score were 76 (IQR = 84\u201370), 76 (IQR = 80\u201368) and 78 (IQR = 84\u201374) in the primary, secondary and tertiary healthcare facilities respectively. Integration of EmONC was highest (83 (IQR = 87\u201378)) among healthcare providers who had EmONC training and also had EmONC guidelines made available to them. There were significant differences in EmONC integration at the three levels of healthcare delivery ( Maternal and newborn morbidity and mortality is a worldwide health challenge with burden disproportionately distributed and highest in developing countries. The 2015 estimates of global maternal and neonatal mortality documented 303,000 maternal deaths, 2.6 million stillbirths and 2.7 million newborn deaths, most of which happened in developing nations especially Sub-Saharan Africa [Obstetric complications such as haemorrhage, sepsis, eclampsia, obstructed labour and fetal distress remain the leading cause of deaths in women of reproductive age and neonates in low and middle-income countries. These deaths are preventable with implementation of emergency obstetric and neonatal care (EmONC) to treat and manage the obstetric complications in healthcare facilities \u20138. The NNigeria as a nation has adopted EmONC as an evidence-based practice to be implemented in healthcare facilities to reduce maternal and neonatal morbidity and mortality, yet only 1.2% and 3.9% of public healthcare facilities fulfil the criteria for BEmONC and CEmONC respectively . NtambueIntegration of EmONC is necessary for EmONC effectiveness to avert maternal and newborn deaths that result from obstetric complications. Integration is an implementation outcome that focuses on the extent to which an intervention has truly and in reality become part of work in services or organization . EvidencWhile EmONC is not a newly adopted intervention in Nigeria, the anticipated result has not been realised. This is evident in the country\u2019s maternal and mortality ratio of 814 /100,000 deaths and neonatal mortality ratio of 34/1,000 births . 
There iAssessing integration on an established framework of normalization process theory offers the opportunity for comparability and ensures generalizability of knowledge from various local context . To a laThis is a descriptive cross-sectional study conducted at the three levels of healthcare delivery system in Osun State, Nigeria. Osun State is one of the 36 states in Nigeria. The state has three senatorial districts, six administrative zones and thirty Local Government Areas (LGAs). A purposive sampling technique was used to select nine of the thirty LGAs . Also, four primary healthcare facilities in each of the six LGAs where State hospitals are located were selected. Therefore, a total of 33 health facilities were selected based on World Health Organisation framework of linking one CEmONC healthcare facility to four BEmONC healthcare facilities . All 505In Integration was highest at the tertiary healthcare facilities with a median score of 78 and narrowest total variation. The extent of EmONC integration at primary and secondary healthcare facilities was the same (76) although primary healthcare facilities had a wider total variation.In Implementation research assessing the integration of EmONC is important in bridging the gap between the evidence of EmONC effectiveness and its real-world practice. The integration of evidence-based practice is dependent on the collective and coordinated behaviour of health professionals within the complex healthcare system . IntegraThe study was carried out among five hundred and five (505) healthcare providers and facility managers in primary, secondary and tertiary healthcare facilities in Osun State. The categories of healthcare providers in this study were the nursing staff (nurses/midwives), medical practitioners, and community health extension workers (CHEWs). More than half of the respondents were working in primary healthcare facilities. The World Health Organisation recommendation to link four basic EmONC healthcare facilities to one comprehensive EmONC healthcare facilities necessitA review of the minimum standards for primary healthcare shows basic emergency obstetric care should be provided for women with obstetric complications by a medical officer (where available) and nurse/midwife, while newborn resuscitation should be provided by a medical officer, nurses/midwife, CHO and CHEW . The staA careful comparison of the mechanisms of integration across the three levels of healthcare delivery shows there was a difference in the mechanism of integration between primary, secondary and tertiary healthcare facilities. The findings of the study reveal that there was a difference in coherence and cognitive participation across the three levels of healthcare delivery. This finding implies that the stakeholders\u2019 understanding of the tasks that the implementation of EmONC requires of them and involvement in EmONC differs across the primary, secondary and tertiary healthcare facilities. This finding is expected as healthcare providers in primary healthcare facilities provide BEmONC while those in secondary and tertiary healthcare facilities provide both BEmONC and CEmONC . On the The findings established that EmONC has not been fully integrated into maternal and newborn care at all levels of healthcare delivery in the state. The results show there was a difference in the level of integration across primary, secondary and tertiary healthcare facilities. 
Integration of EmONC is higher in the tertiary healthcare facilities than in primary and secondary healthcare facilities. The tertiary healthcare facilities are referral centres with multidisciplinary experts in obstetrics care and management of complications. However, there is minimal collective action for EmONC integration at all levels of health care delivery as the findings of the study suggest sufficient training is not provided to staff for the implementation of EmONC, sufficient resources are not available and management does not adequately support EmONC implementation at all levels of healthcare delivery. Previous studies in Nigeria affirmed that the quality of EmONC is poor ,18 and eThe integration of EmONC at the healthcare facilities was associated with EmONC training and availability of EmONC guidelines in the maternity units. Majority of respondents in the primary and secondary healthcare facilities did not have training in EmONC nor have guidelines available in the maternity unit where they work. Healthcare providers who have been trained on the implementation of EmONC and also have access to EmONC guidelines are more likely to higher integration than those without training and guidelines. Lack of training and unavailability of guidelines may impact negatively on the knowledge, skills and confidence of healthcare providers to implement EmONC. Previous studies conducted in Nigeria documented poor knowledge and skills of health care providers ,36 whichThough EmONC integration seems to be high in all the healthcare facilities, it may be related to the fact that the study was done in public healthcare facilities where full MNC such as antenatal, intranatal and postnatal care are implemented. This was necessary to ensure that respondents have practical experience of care for obstetric complications as this study was part of a larger study on implementation of EmONC. So, exclusion of public healthcare facilities that only offer partial MNC might have contributed to the high EmONC integration among healthcare providers in this study. Also, this was a reported integration, and may not have reflected the actual integration of EmONC in the healthcare facilities. A complementary qualitative approach would be necessary to validate the high integration reported in this study.The state of maternal and newborn health as well as the high maternal and newborn mortality in Nigeria posed a query on the high EmONC integration and the authenticity of the information given by the respondents. Evidence has shown that the inability to implement effective interventions in healthcare facilities occurs because such interventions are not integrated into existing programmes . The impConsidering the items in the mechanism of integration, many of the healthcare providers concluded the management of the hospitals were not supportive enough with regards to EmONC implementation. The lack of training and guidelines as widely reported in this study as well as insufficient resources to implement EmONC is a clog in the wheels of successful implementation of EmONC in healthcare facilities. Diffin et al. noted that interventions are likely to be successfully implemented if there is an opportunity to implement them . 
Lack ofIt needs to be mentioned that the assumption that services at the primary care levels do not have to be provided by practitioners with desirable competencies that can assure better access to quality EmONC and MNC as operational in many public healthcare settings in Nigeria has negative consequences for healthcare access to clients and undesirable congestions at the secondary and tertiary levels of care. One critical observation by the investigators in this study is the lack of monitoring, evaluation and communication strategies in the implementation of EmONC. This would also need to be given due attention and further investigations.Integration of emergency obstetric and newborn care in maternal and child care has been sparsely studied. This study is one of the pioneering studies assessing the implementation process of EmONC in the context of its integration on an established framework of normalization process theory. Although the quantitative approach may not give the full picture of the reality of integration of EmONC in healthcare facilities, it provided valuable information on the dynamics of implementing complex interventions as it relates to mechanisms for cohesiveness and coordination of individuals involved in getting the interventions integrated in practice. The Normad instrument used has provided some pieces of information on the causes of poor quality of EmONC and where priority for improvement lies. This study had thus provided baseline information on the integration of EmONC using theory oriented quantitative approach with which future studies on EmONC could compare.The adoption of a theoretical approach for the integration of EmONC as shown in this study, may fail to capture the actual integration required in reality for the effectiveness of EmONC to reduce maternal and newborn mortality. Research on implementation of interventions and programmes in maternal and child health would benefit more from mixed methods research.There is a dearth of studies for comparison with the findings of this study. Efforts have been made by researchers to assess the integration of evidence-based interventions in well-established programs, though not in Nigeria ,25, but"} +{"text": "The main difficulty of radiotherapy is to destroy cancer cells without depletion of healthy tissue. Stem cells and cancers are tightly interrelated. On the one hand, radiosensitivity/radioresistance of cancer stem cells affects the radiocurability of tumors, on the other hand, radiosensitivity is responsible for the stem cell depletion of organs at risk exposed to irradiation. Efficient solid cancer destruction is limited by the preservation of organ homeostasis. For this reason, targeted irradiation is an effective cancer therapy, however, damage inflicted to normal tissues surrounding the tumor may cause severe complications. The consequences of stem cell depletion of healthy tissue irradiated are acute and chronic radiation diseases. The depletion of endogenous stem cells can be compensated by a supply of exogenous stem cells. For this reason, cell therapy is a therapeutic approach that offers a therapeutic alternative to patients who have failed conventional treatment. 
Progress in this domain should make it possible to combine optimal radiocurability with long-term quality of life for patients.This special issue covers research on the radiosensitivity of cancer stem cells and of adult stem cells relevant to tissue regenerative medicine.Integrating the cross talk between these two types of stem cells is essential. Nagle and colleagues studied the role of organoids as models for understanding the relationship between normal tissue and tumor responses in radiobiological studies. The consequence of restoring the homeostasis of healthy tissue is first to allow tissue regeneration and then to sustain organ functionality on a permanent basis. A supply of mesenchymal stromal cells (MSCs) ensures this functionality, mainly through a trophic effect. Two research articles dealt with MSCs in the treatment of radiological burns. Brunchukov and colleagues studied the effect of human MSCs derived from the placenta, and of their conditioned medium concentrate, on skin-regenerative processes. The use of conditioned MSCs in severe local radiation injuries accelerates the transition of the healing process to the stage of regeneration and epithelization. Each pathology related to radiotherapy is complex, which is why it is necessary to grasp all the elements of a pathology and its treatment and also to understand the associated mechanisms in detail. Helissey and colleagues reviewed radiation cystitis and its treatments. The authors investigated the role of immunity with a special focus on macrophages. They concluded that MSCs seem to be an excellent therapeutic option for the treatment of fibrosis in chronic radiation cystitis. To go further into the abovementioned issue and end on an optimistic note, it is interesting to combine the beneficial effects of irradiation with cell therapy in order to propose novel treatments for new pathologies. Tovar and colleagues tested a therapeutic approach, based on MSCs stimulated with radiation, to improve pneumonia caused by SARS-CoV-2. The activation of the immune system by the irradiated tumor to trigger the beneficial abscopal effect is decisively improving radiotherapy applications and their outcomes."} +{"text": "Tool wear and breakage detection technologies are of vital importance for the development of automatic machining systems and for improving machining quality and efficiency. The monitoring of integral spiral end milling cutters, however, has rarely been investigated because of their complex structures. In this paper, an image acquisition system and image processing methods are developed for the wear and breakage detection of milling cutters based on machine vision. The image acquisition system is composed of three light sources and two cameras mounted on a moving frame, which makes the system applicable to cutters of different dimensions and shapes. The images captured by the acquisition system are then preprocessed with denoising and contrast-enhancing operations. The failure regions on the rake face, flank face and tool tip of the cutter are then extracted with the Otsu thresholding method and the Markov Random Field image segmentation method. Eventually, the feasibility of the proposed image acquisition system and image processing methods is demonstrated through an experiment of titanium alloy machining. 
The proposed image acquisition system and image processing methods not only provide high quality detection of the integral spiral end milling cutter but can also be easily converted to detect other cutting systems with complex structures. Tool wear and breakage have a great effect on the quality of machined components and production efficiency. Lacking efficient detection methods of tool failure forms will lead to facture, large tolerance, and even damage of machine tools, resulting in great economic loss. Machine tools equipped with a tool condition detection system are found to have their downtime reduced by 75%, production efficiency enlarged by 10% to 60% and utilization ratio improved by more than 50% ,3. The rThe present tool failure form detection technologies fall into two categories: indirect and direct methods. The indirect methods determine the tool failure forms through the analysis of various signals generated in different cutting conditions \u03b7(k)=The optimal threshold The preprocessed and binarized images of a flank face are compared in It should be mentioned that the reflected light of the wear region cannot be perceived by the cameras and that the wear regions on the cutter are concealed in the background region. The wear region needs to be extracted with the images of the cutter both before and after machining. g status . SimilarThe image of rake face contains the wear region, background and cutter body under the point light source, the thresholding techniques that binarize the images are not applicable, instead, a Markov Random Field (MRF) image segmentation method is employed for the failure extraction of the rake face. The MRF model considers the spatial interaction of neighboring pixels; thus, spatial inhomogeneities in the images of the rake face can be processed with this model ,46,47,48Let us assume an image with pixel sites Since theorem (14)U(\u03a9,rameters ,51. A coIn order to demonstrate the feasibility of the image acquisition system and abovementioned image processing methods, the detection of failure regions of the integral spiral end milling cutters is conducted during the machining process of titanium alloy.The experiment setup for machining test and failure detection is shown in The different images and extracted failure regions of the integral spiral end milling cutters are given in It can be concluded from In this paper, tool wear and breakage detection technologies based on machine vision were proposed for integral spiral end milling cutters. The spatial positions and type of cameras and light sources were designed to develop an image acquisition system which would capture high-quality images of rake face, flank face and tool tip of the milling cutters. Denoising and contrast enhancing were employed for the image preprocessing. Then, the failure regions in images of the flank face and tool tip were extracted after binarization by Otsu thresholding method, while the failure regions in images of the rake face were extracted after segmentation by MRF models. The feasibility of the proposed image acquisition system and image processing methods were eventually examined by the titanium alloy machining process. It was found that the failure regions on the cutters increased with the machining time. The cutters could be regarded as a failure when the failure regions reached their corresponding threshold values. 
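As a rough sketch of the flank-face processing steps described above (denoising, contrast enhancement, Otsu binarization, and wear extraction by comparing images captured before and after machining), the following example uses OpenCV; the file names and the morphological clean-up step are illustrative assumptions, and the MRF segmentation used for the rake face is not reproduced here.

```python
# Illustrative sketch, not the authors' implementation. Assumes two grayscale
# images of the same flank face, aligned and captured under the same lighting.
import cv2
import numpy as np

def binarize_otsu(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = cv2.GaussianBlur(img, (5, 5), 0)          # denoising step
    img = cv2.equalizeHist(img)                     # contrast enhancement
    _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary

before = binarize_otsu("flank_before.png")   # unworn cutter
after = binarize_otsu("flank_after.png")     # same cutter after machining

# The wear region is taken as the area that belonged to the cutting edge
# before machining but no longer reflects light toward the camera afterwards.
wear = cv2.bitwise_and(before, cv2.bitwise_not(after))
wear = cv2.morphologyEx(wear, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))

wear_area_px = int(np.count_nonzero(wear))
print(f"Estimated wear region: {wear_area_px} pixels")
```

A pixel count of this kind can then be compared against a failure threshold for the cutter, in the spirit of the threshold-based failure criterion described in the results.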
The developed image acquisition system and proposed failure extraction methods provide reliable machine monitoring of the integral spiral end milling cutters during machining processes."} +{"text": "In daily inspection, the nonstandard management of sterile articles in clinical departments of hospitals often compromises the sterilization effectiveness of those articles. It is therefore necessary to strengthen governance and improve this situation. This study investigates a mode in which the disinfection supply center participates in the supervision and management of sterile items in clinical departments, with the aim of improving the standardization of sterile article management and ensuring closed-loop management of sterilization effectiveness. Every quarter, the disinfection supply center of our hospital inspects the standardized management of sterile articles in all clinical departments of the hospital, covering the storage environment and facilities for sterile articles, the cleanliness of storage cabinets, placement principles, whether articles are stored by category, and the quality and validity management of sterile articles. The quarterly inspection results were summarized and analyzed to identify existing problems and their causes, and the disinfection supply center supervised the improvement. After the disinfection supply center inspected the standardized management of sterile articles in all clinical departments of the hospital for the first time according to the inspection contents, and under the guidance and assistance of the nursing department and the hospital infection department, it improved the sterile article management system, conducted knowledge training for the whole hospital, and incorporated the standardized management of clinical sterile articles into the quality control inspection of the nursing department. In the later stage, the disinfection supply center was responsible for conducting routine inspection and supervision of the standardized management of sterile articles in all clinical departments every quarter according to the inspection contents, including summarizing, analyzing, and urging the clinical departments to improve their management of sterile articles. The average number of lost packages caused by nonstandard management in the departments was significantly reduced, and the average rate of lost sterile packages during and after the improvement was significantly lower than before the improvement. This also effectively reduced the cost caused by the loss of sterile packages. The standardization of sterile articles after improvement was significantly higher than before and during improvement, and the qualified rates differed significantly (99.4% vs 97.9% vs 89.5%). The disinfection supply center participates in the quality control and management of sterile articles with the nursing department and regularly inspects and supervises the management of sterile articles in clinical departments. It can effectively improve the standardized management of sterile articles in clinical departments, ensure the safety of sterile articles, and form a closed loop of sterilization effectiveness. 
The disinfection supply center is one of the key departments of the hospital, an important department in nosocomial infection management and an eThe Affiliated Hangzhou First People's Hospital, Zhejiang University School of Medicine is a class III class a comprehensive hospital. 65 clinical departments use the sterilized articles of the disinfection supply center. Three senior nurses of the disinfection supply center conduct special inspections on the management of sterile articles in these departments, mainly routine standby sterile bags, special bags, and special sterile articles for emergency rescue, which are uniformly placed in the cabinet of the treatment room.According to the specifications for disinfection supply center and the on-the-job training course for hospital disinfection supply center, the inspection contents mainly include the following: (1) According to the expiration date, place them in an orderly manner from left to right, follow the principle of first-in, first-out, and take out the expired package in time. (2) The storage environment requires the temperature to be lower than 24\u00b0C and the humidity to be lower than 70%, sterile items are stored, the height from the ground is required to be \u226520\u2009cm, the distance from the wall is \u22655\u2009cm, and the distance from the ceiling is \u226550\u2009cm. (3) Sterile items should be classified and stored in separate cabinets, and the storage cabinets should be kept clean and dry. (4) According to the needs of the department and the cost, determine the sterility of each department pack base and record. (5) The department is required to count the number of sterile packs every day and check the quality. If the outer packaging is loose and damaged, and the label is incomplete, take it out in time. From April to June 2018, the staff of the disinfection supply center inspected 65 clinical departments. The inspection mainly adopts the method of on-site assessment and hearing from the staff of the Inquiry Department. The inspection results are summarized and analyzed by the disinfection supply center and the nursing department. The main problems are shown in The nursing department incorporated the standardized management of clinical, sterile articles into the quality control inspection of the nursing department. The disinfection supply center is responsible for conducting routine inspection and supervising improvement according to the inspection contents every quarter. It was analyzed and discussed at each quality control meeting of the nursing department. Some departments carried out continuous quality improvement for the standardized management of sterile articles to urge the department managers to strengthen the standardized supervision of sterile articles.The head nurse of the disinfection and supply center gave standardized teaching and on-site training on the management of sterile articles to the head nurses of clinical departments and senior nurses of the whole hospital. They introduced the standardized operation of the management of sterile articles in the disinfection and supply center and the identification of sterilization effectiveness. Then, let the trainees return to the department for secondary training and popularize it to every staff member. 
The teaching and training contents mainly include the importance of storage and use of sterile articles, the necessity of overall arrangement of the base package, storage environment, facility requirements, placement and taking requirements, influencing factors, and other related knowledge. The relevant training contents were included in the theoretical examination of the nursing department to evaluate the learning effect of nursing staff. This measure enables the clinical department personnel to understand and improve the standardized management of sterile articles in theory and practice.The process started with the staff of the disinfection supply center, which set an example and strictly controlled sterile articles' quality. They ensured that the indication tape outside the sterilization package could be distributed to the clinical department only after it was discoloured and qualified. The process required the special frame to be sealed for transportation, strictly implemented hand hygiene when taking it, paid attention to the clean and dry placement position, and handled it gently to avoid damaging the packaging of the sterile package. In case of any problem, communicate with the clinical department in time and put forward any nonstandard operation. At the same time, the sterilization indication discoloration comparison card made by the disinfection supply center was distributed to the clinical department so that the clinical department staff could know the discoloration requirements of the indication card inside and outside the sterile package and make a good judgment.The disinfection supply center can improve the existing sterile article management system and refine the distribution and receiving process, and the hospital infection control department will assist in the audit. Finally, the nursing department can distribute the sterile article management system to all clinical departments for homogenization and implementation.During the rectification process, for the more prominent problems in each inspection, such as the inconsistency between the base number of sterile bags and the records, the on-site rectification shall be carried out immediately, and the importance of fixing the base number of sterile bags shall be emphasized again during the lecture. The actual number shall be consistent with the base number. If the base number is insufficient, the disinfection supply center shall be informed to increase it in time to facilitate overall arrangement and save hospital costs.During the year from July 2018 to June 2019, after the implementation of the measures, the disinfection supply center conducted routine supervision, summary and rectification on the management of sterile articles in the clinical departments of the whole hospital every quarter, four times in total, and timely put forward the nonstandard management of sterile articles in the clinical departments during the normal distribution process. 
According to the actual situation in the implementation of the inspection, the focus of the inspection content was appropriately adjusted, the inspection of the damage of the sterile package was strengthened, and the results of the four inspections under improvement were summarized so as to evaluate the feasibility of the supervision model of the disinfection supply center.After determining the feasibility of the supervision model of the disinfection supply center in the standardized management of sterile articles in clinical departments, with the assistance of the nursing department and the hospital feeling department, the disinfection supply center will continue to inspect the standardized management of sterile articles in clinical departments of the whole hospital every quarter, supervise and correct problems, and summarize and score at the same time. The special management of sterile articles was included in the quality control inspection of the nursing department. By summarizing the results of 8 inspections from July 2019 to June 2021, we can further evaluate the effect and significance of the supervision model of the disinfection supply center in the standardized management of sterile articles in clinical departments.Composition ratio\u2009=\u2009frequency of problems/total frequency of problems\u2009\u00d7\u2009100%. Qualified rate of inspection results\u2009=\u2009total qualified frequency/total inspection frequency\u2009\u00d7\u200910%. Loss package rate\u2009=\u2009average number of lost packages/total number of single inspection packages\u2009\u00d7\u2009100%. The loss value is the single loss value\u2009=\u2009the average number of lost packages\u2009\u00d7\u2009unit price.\u03c72 test, P < 0.05; the difference was statistically significant.SPSS22.0 was used for statistical analysis. The counting data were expressed in percentage and compared between groups Three training sessions were conducted for the clinical departments of the hospital. The number of participants and clinical departments are shown in During the rectification process from July 2018 to June 2019, the disinfection supply center was responsible for the statistics of the results of four rounds of quarterly inspection on the standardized management of sterile articles in 65 clinical departments of the hospital. The existing problems were less than those before the rectification, as shown in From July 2019 to September 2021, after rectification, the disinfection supply center was responsible for 8 rounds of standardized management of sterile articles in 65 clinical departments, and the quarterly inspection results were counted. The existing problems were less than those before and during the rectification, as shown in P < 0.05 in \u03c72 test, indicating the difference is statistically significant.The results of the standardized inspection of sterile articles in each clinical department were summarized and compared the qualification rates of the three processes, as shown in Table 6With the development of medical technology, the disinfection supply center has become an essential department for the hospital's development and a guarantee for sterile articles in the hospital. It provides cleaning, disinfection, sterilization, and disposable sterile articles for all hospital departments \u20138. ThereThe disinfection supply center is an important service department of clinical departments . 
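The rate definitions given in the methods above reduce to simple proportions; a minimal sketch with made-up numbers (not the study's data) is shown below, assuming that the qualified rate, like the other rates, is expressed as a percentage of the total.

```python
# Minimal sketch of the rate definitions above; all numbers are hypothetical.
def composition_ratio(problem_freq, total_problem_freq):
    return problem_freq / total_problem_freq * 100          # %

def qualified_rate(qualified_freq, total_inspections):
    return qualified_freq / total_inspections * 100         # %

def loss_package_rate(avg_lost_packages, total_packages):
    return avg_lost_packages / total_packages * 100         # %

def loss_value(avg_lost_packages, unit_price):
    return avg_lost_packages * unit_price

print(f"{loss_package_rate(3, 1200):.2f}% of packages lost")    # hypothetical
print(f"Loss value: {loss_value(3, 85):.0f} (currency units)")  # hypothetical
```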
While dTo sum up, the nursing department will incorporate the standardized management of the storage and use of sterile articles in clinical departments into the nursing quality inspection. By adopting the supervision and management mode of disinfection supply center, the disinfection supply center will regularly supervise the sterile articles in clinical departments, provide targeted training and on-site guidance to rectify existing problems, and strengthen the critical concept of standardized management of sterile articles by clinical medical staff. It ensures the closed-loop management of sterilization effectiveness of sterile articles and safe use which improve the standardized management and safe use of sterile articles in clinical departments. At the same time, it can also reduce the cost consumption caused by the loss of sterile packages."} +{"text": "Ablation of sites showing Purkinje activity is antiarrhythmic in some patients with idiopathic ventricular fibrillation (iVF). The mechanism for the therapeutic success of ablation is not fully understood. We propose that deeper penetrance of the Purkinje network allows faster activation of the ventricles and is proarrhythmic in the presence of steep repolarization gradients. Reduction of Purkinje penetrance, or its indirect reducing effect on apparent propagation velocity may be a therapeutic target in patients with iVF. Patients who have survived idiopathic ventricular fibrillation (iVF) are typically difficult to treat and often rely on an implanted Cardioverter Defibrillator (ICD) to restore normal heart rhythm . Single The cardiac Purkinje network is highly variable between species and within species, especially in the extent of the peripheral branches . The ven50, QRS duration at 50% of the QRS amplitude). We tested this in a computer model of a human heart and torso predominantly through Purkinje-muscle conduction, and the other (in Purkinje-void tissue) predominantly through muscle-muscle conduction. Thus, the end of the QRS complex is generated by muscle-muscle conduction only. In the presence of a more penetrating Purkinje network, a relatively larger contribution of (fast) Purkinje-mediated conduction is expected at the beginning of the QRS complex, whereas the end of the QRS complex is relatively unaffected. We therefore reasoned that a Purkinje system that extends slightly more into the walls of the ventricles leads to a shorter QRS \u201cbody\u201d as an expression of relatively late activated tissue. This generates the question whether humans with a short QRS50 are more at risk for idiopathic ventricular re-entrant arrhythmias block and not the maintenance of the arrhythmia.We have demonstrated earlier that conduction may modulate the arrhythmogenic substrate formed by a repolarization heterogeneity and may either promote or suppress the induction of reentry by a premature beat depending on the site of application of sodium channel blockade relative to a repolarization gradient . ConductEarlier studies have shown that fast activation of the heart, and shortening of the QRS duration, relates to the extensiveness of the Purkinje system . Our comWhether the Purkinje system indeed penetrates deeper into the ventricular wall of patients with iVF is yet to be investigated. 
Evidence of the involvement of the Purkinje system with iVF was provided by Indirect evidence of the involvement of the Purkinje system in iVF comes from the observation that Purkinje spikes often precede premature beats that give rise to ventricular fibrillation in iVF patients. Ablation of sites showing Purkinje spikes is often antiarrhythmic . It is tIt cannot be excluded that ablation of sites showing Purkinje spikes reduces the fine network of the Purkinje fibers, which, in turn, reduces the anatomical substrate for reentry. The Purkinje network as substrate for reentry was already recognized by Although the above ideas are speculative and not substantiated by systematic studies, our findings suggest that cardiac conduction velocity has a safe upper limit in the presence of pre-existing repolarization heterogeneities. Moreover, the reduction of apparent propagation velocity by ablating parts of the Purkinje system may underlie the antiarrhythmic efficacy of this therapeutic approach. These contentions need to be tested clinically.The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.RC and BB designed and wrote the manuscript. MP performed computer simulation and edited the manuscript. MHa, ND, MR, VM, MC, and MHo edited the manuscript. All authors contributed to the article and approved the submitted version.MC is part-time employed by Philips Research.The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "The main deficits of the available classifications of radical hysterectomy are the facts that they are based only on the lateral extension of resection, do not depend on the precise anatomy of parametrium and paracolpium and do not correlate with the tumour stage, size or infiltration in the vagina. This new suggested classification depends on the 3-dimentional concept of parametrium and paracolpium and the comprehensive description of the anatomy of parametrium, paracolpium and the pelvic autonomic nerve system. Each type in this classification tailored to the tumour stage according to FIGO- classification from 2018, taking into account the tumour size, localization and infiltration in the vaginal vault, which may make it the most suitable tool for planning and tailoring the surgery of radical hysterectomy.The current understanding of radical hysterectomy more is centered on the uterus and little is being discussed about the resection of the vaginal cuff and the paracolpium as an essential part of this procedure. This is because that the current classifications of radical hysterectomy are based only on the lateral extent of resection. This way is easier to be understood but does not reflect the anatomical and surgical conception of radical hysterectomy and the three-dimensional ways of tumour spreading, neither meet the need of adjusting the radicality according to the different stages of FIGO classification, which depends\u2014at least in the early stages\u2014on the tumour volume and the infiltration in the vagina (but not on the directly spread in the parametrium). The new classification presented in this paper does not base anymore on the lateral extent of resection only but too on the depth of resection in the small pelvic and the extent of the resected vaginal vault without or with its three-dimensional paracolpium. 
This classification takes into account the tumour size, stage, localization and infiltration in the vaginal vault and may offer the optimal tool to adjust and tailor the surgery according to these important variables. Even when the radical hysterectomy has a long tradition as the standard therapy for early cervical cancer, there are a lot of unmet needs regarding the optimal technique and the most effective and practical way of defining and tailoring the radicality according to the tumour size and infiltration in the vaginal vault. The most popular classifications of radical hysterectomy are based on the lateral extent of resection Tumours with a big volume but without macroscopic infiltration in the vaginal vault or obvious infiltration in parametrium/paracolpium : these tumours demand a resection of a longer vaginal vault to be able to close the vaginal cuff beneath from the tumour to avoid any spelling of tumour cells and any contamination of the abdominal cavity. In these cases and because of the curved and anteflexed shape of the upper vagina, tumors with ventral localization demand the resection of longer vagina cuff comparing with tumors from the same size and stage but with dorsal localization .(2)Tumours infiltrating the vaginal wall with no obvious infiltration in parametrium/paracolpium : these tumours demand the resection of a longer vaginal vault with paracolpium (the blood supply and drain and the lymph drain of the upper 1/3 to \u00bd of the vagina) to be able to confirm or deny any tumour spread along with the vaginal vessels/lymph ways.Therefore, it is essential in our opinion to find a more suitable classification for radical hysterectomy, taking into account both situations affecting the radicality and challenging the tailoring of surgery in early-stage (resectable) cervical cancer:In both good operable situations, it is possible\u2014and we have to say mandatory\u2014to spare the pelvic autonomic nervous system to reduce the postoperative complications to the minimum ,5,6.The second problem in the popular classifications of radical hysterectomy is the arbitrary definition of type B (Querleu\u2013Morrow) or class II (Piver-Rutledge) radical hysterectomy as the resection of parametrium at midway/halfway, which does not correlate with tumour spread in parametrium (direct -continuous- tumour infiltration or affected lymph nodes in parametrium) and neglects the lymph nodes and ways lying at the distal part of the lateral parametrium. The continuous parametrial invasion occurs rarely, and the tumour spreading in the adjacent parametrium takes place mainly by tumour cell emboli and lymph node involvement . The assThis new classification of radical hysterectomy depends on the Cibula 3-dimentional concept of parametrium , MuallemThis classification is no longer based only on the lateral extent of resection but also on the depth of resection in the small pelvic and the extent of the resected vaginal vault without or with its three-dimensional paracolpium. The resected length of the vaginal vault has to be adjusted to the tumour size or to the tumour infiltration in the vagina . This is the most crucial point by adjusting and tailoring the radicality during surgery for cervical cancers according to the tumour volume and spread. 
We think that the resection of all length of 3-dimentional parametrium, which it is nothing else than the lymph nodes, lymph vessels and the blood supply and drain of uterus/cervix, is essential to be performed in every FIGO-stage (up IA2) demanding a lymph node staging (Sentinel and/or lymph node dissection). Here, it is worth mentioning the contribution of Girardi and BeneThis classification takes too into account the location of the cervical lesion in the cervix, which plays an important role in the surgical decision about the resected length of the vaginal vault during the radical hysterectomy. The classification distinguishes, therefore, between tumours locating on the ventral (anterior) and tumours locating on the dorsal (posterior) cervical lips in FIGO IB-stage. This is because of the fact that the resection of a longer vaginal wall ventrally, which is, of course, the case with tumours locating at the ventral cervical lip , is anatThe radical hysterectomy has to be performed nerve sparingly in every procedure as long as there is no direct (contiguous) infiltration in the paracolpium and/or the tendinous arch of the pelvic fascia (endopelvic fascia).The three-dimensional anatomic template for the resection of parametrium and paracolpium depends on the precise anatomy of parametrium and paracolpium published before ,10 and bIn this way, every part of the three-dimentional parametrium and paracolpium has a proximal aspect and a distal aspect . The dorsal parametrium in this classification is the sacrouterine ligament, and the dorsal paracolpium is the sacrovaginal ligament which has been previously described as the deep uterosacral ligament by Ramanah et al. and as tThe ventral parametrium is the vesicouterine ligament, and the ventral paracolpium is the vesicovaginal ligament which contains the venal anastomoses between the vaginal vein and the inferior vesical vein consisting from the lateral and medial vesicovaginal veins \u2014Figure The detailed description of parametrium and paracolpium is shown in The new classification of radical hysterectomy describes four types of radical hysterectomy according to the clinical and surgical demands. Each type in this classification is tailored to the tumour stage according to the International Federation of Gynecology and Obstetrics (FIGO)- classification from 2018, taking into account the tumour size, localization and infiltration in the vaginal vault and depends on the precise three-dimensional anatomy of parametrium and paracolpium and their close anatomic relationships to the pelvic autonomic nerve system. The pelvic exentration or the extended resection of pelvic organs (Class V in Piver-classification or Type D in Querleu-Morrow classification) is no more a part of this classification because this kind of radical resection is rarely indicated for primary cases and is not compatible with the concept of radical hysterectomy.This procedure is equivalent to the so-called extra-fascial hysterectomy (Class A in Q-M classification) aiming to remove the entire uterus with parmetrium margins and minimal vaginal vault. 
This tailored procedure is suitable for microscopic cervical cancers FIGO IA and no lymphovascular space invasion (LVSI) .This procedure is equivalent to the Wertheim-Meigs Operation and included the resection of the distal aspects of the new defined ventral , lateral and dorsal parametrium but without paracolpium resection and with minimal (about 2 cm) vaginal vault, which will be the surgery of choice for stages IA with lymphovascular space invasion and IB1 (less than 2 cm) with dorsal localization .This procedure is similar to the class C1 radical hysterectomy in Q-M classification with complete resection of the upper defined ventral, lateral and dorsal parametrium but with resecting of only the proximal aspects 3-dimentional paracolpium allowing the resection of the longer vaginal vault (about 2\u20134 cm). This demands more preparation of the vagina and highlighting the vaginal vessels and the ventral parts of the inferior hypogastric plexus to be able to dissect them carefully, lateralizing and sparing them during the procedure. This type of radicality is the surgery of choice for stage IB1 with ventral localization, IB2 and IB3 with dorsal localization in the cervix. This new type of radical hysterectomy has the advantage of avoiding the difficult preparation and resection of the vesical venous plexus and the vaginal vessels, which reduce the risk of injuring the inferior vesical vessels and the following ischemic injuries of the distal ureter and the bladder trigon. This type offers the opportunity to resect an adapted length of vaginal vault to cover big cervical tumours . It is wThis procedure is equivalent to the nerve-sparing Okabayashi operation (modified from Fujii) and included the radical hysterectomy with resection of the distal aspects of ventral , lateral and dorsal parametrium with the resection of distal aspects of ventral paracolpium , lateral paracolpium and dorsal paracolpium (at the tendinous arch of the pelvic fascia) . This prIn the direct infiltration in paracolpium and/or in the endopelvic fascia, the ipsilateral resection of the inferior hypogastric plexus will be mandatory. This procedure is the surgery of choice for stage IB3 with ventral localization or deep stromal invasion, stage IIA and selected cases of stage IIB.The old classifications of radical hysterectomy depended only on the lateral extension of resection. This misinterpreted the concept of surgery to be reduced only on the resection of lateral parametrium with no or minimal resection of the ventral parametrium. The limited resection of the ventral parametrium and/or paracolpium restricted the length of the resected vaginal vault and made the surgery of tumours with big volumes (>2 cm) or with vaginal invasion pretty difficult.The new classification is based not only on the lateral extension of resection, but also on the depth of resection in the small pelvis, taking into account the three-dimensional parametrium and paracolpium template and the comprehensive description of the anatomy of parametrium, paracolpium and the pelvic autonomic nerve system . 
It takeThe author is aware that this new classification does not result from randomized control studies, but it depends on a clear described anatomical and surgical concept ,6,10 andThis new classification may supply a very good tool for uniting the terminology and definitions of radical hysterectomy and for planning the right tailored radical surgery for cervical cancer according to the tumour size, stage, localization and infiltration in the vaginal vault. All these parameters could be evaluated with clinical examination and with or without additional magnetic resonance imaging.The new suggested classification of radical hysterectomy does not depend only on the lateral extension of resection in lateral parametrium but consider the three-dimensional template of parametrial resection, the three-dimensional template of resection of paracolpium and the comprehensive description of the anatomy of parametrium, paracolpium and the pelvic autonomic nerve system. It may be a good tool for planning and tailoring the surgery according to the tumour size, stage, localization and infiltration in the vaginal vault."} +{"text": "The developments of modern science and technology have significantly promoted the progress of sports science. Advanced technological methods have been widely used in sports training, which has not only improved the scientific level of training but also promoted the continuous growth of sports technology and competition results. Competitive Wushu routine is an important part of Chinese Wushu. The development trend of competitive Wushu routine affects the development of the whole Wushu movement. To improve the training effect of the Wushu routine using artificial intelligence, this paper employed fuzzy information processing and feature extraction technology to analyze the visual features in the process of Wushu competition. The deep neural network-based region segmentation method was employed for implicit feature extraction to examine the shape, texture, and other image features of Wushu routines and improve the recognition performance. The proposed feature extraction model achieved the highest average accuracy of 93.98% accuracy as compared to other contemporary algorithms. Finally, the model was evaluated to validate the superior performance of the proposed method in improving the decision-making ability and effective instruction ability of the martial arts routine competition. Wushu, formed gradually in the course of historical development, is a kind of integration of multiethnic cultures and the common cultural wealth of mankind . It contThe appearance of the competitive Wushu routine , 7 is thAt present, Chinese Wushu has not been included in the Olympic Games , and theThis paper proposed a novel artificial intelligence-aided algorithm for Wushu routine competition decision-making based on feature fusionThe paper proposed a deep neural network region segmentation method for implicit feature extraction to analyze the shape, texture, and other image features of martial arts routines which can effectively improve the recognition performance of martial arts routines.In the context of the continuous development of society, people's ability to control society has been significantly improved, and competitive martial arts and Wushu have a broader space for development. The research on the macro development of competitive Wushu routine is more concentrated than that of Wushu one-way sports , 12. TheThe rest of this paper is organized as follows. 
Since the 1950s, competitive Wushu has gradually formed and developed based on traditional Wushu. Competitive Wushu is a modern competitive sport in China that is based on the rules of competition with routine and free Shushu as the two main activities. The competitive Wushu routine is formed after the integration of Chinese and Western sports culture which belongs to the modernization of Wushu. Scholars define the concept of \u201ccompetitive Wushu\u201d with respect to the formation, evolution, and development of the competitive Wushu routine .To reflect the changes in Wushu routines, athletes should have good strength, speed, agility, and flexibility. The extraction of images for different actions can effectively judge the completion quality of Wushu actions. To study the difficult images of Wushu, it is necessary to extract the prominent features of different Wushu images.With the rapid development of machine image recognition technology, the identification of Wushu actions has moved from an original method of artificial representation to advanced deep learning methods. Recently, with the expansion of video behavior and action recognition algorithms , it can A convolutional neural network (CNN) is a variant of deep learning algorithms and has a good expression of the two-dimensional features of the image. It can effectively combine the features of the previous moment to ensure that the network can understand the timing information. Since the video behavior can be viewed as a set of constant image sequences, a dual-stream network model architecture of CNN \u201318 combiThe first step in the video processing of the WR\u2009=\u2009G\u2009=\u2009B, then color represents a grayscale color with a grayscale range of 0\u2013255. Pixel in grayscale image stores brightness value in bytes, that is, grayscale value, one pixel corresponds to one byte. In the color image, the gray values of three gray images are used to express the brightness of the three components, and one gray image can be selected.fk is the gray value of the converted gray image at . The following equation represents the grayscale image after the weighted average of the three components of RGB:Gray image with basic color is white and black. Each image's brightness ranges from zero to hundred percent. Mostly, the color images are represented in the RGB model. If i, j), routine action image characteristics such as shape, texture, and image gray value of Iswk, the dataset to get the corresponding Wushu movement characteristics of visual components can be represented asc is the column number of the visual feature of martial arts routines and r is the three-dimensional points on the motion model. Combined with the two-dimensional color image reconstruction method, the distribution characteristic quantity of martial arts routine action visual pixel set was obtained. The visual image of martial arts routine action was processed using fuzzy fusion, and the action visual tracking [To achieve the Wushu movement visual image recognition based on feature extraction, the Wushu movement region segmentation method is applied using frame section scanning techniques, and a visual feature reconstruction model is developed. Assuming that Wushu movement visual image grayscale pixel sets are P(tracking and recoTs is the location feature of the edge of the region. 
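To make the grayscale-conversion step above concrete, here is a minimal NumPy sketch of a weighted average of the R, G and B components. The 0.299/0.587/0.114 weights are the standard luminance coefficients and are an assumption for illustration; the exact weights used in the paper are not reproduced in the extracted text.

```python
import numpy as np

def rgb_to_gray(rgb):
    """Weighted average of the R, G, B components of a colour frame.

    rgb: uint8 array of shape (H, W, 3); returns a uint8 grayscale array of shape (H, W).
    The 0.299 / 0.587 / 0.114 weights are assumed standard coefficients, not
    necessarily those used in the paper.
    """
    weights = np.array([0.299, 0.587, 0.114])
    gray = rgb.astype(np.float64) @ weights          # per-pixel weighted sum
    return np.clip(gray, 0, 255).astype(np.uint8)

# If R == G == B everywhere, the result is simply that common value in [0, 255],
# matching the grayscale case described in the text.
frame = np.random.randint(0, 256, size=(256, 256, 3), dtype=np.uint8)  # placeholder frame
print(rgb_to_gray(frame).shape)  # (256, 256)
```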
The watershed image segmentation algorithm is used to reconstruct the action of martial arts routines, and the dynamic feature decomposition model was established as follows:wmk is the edge feature component of the martial arts routine. Using mathematical morphology, the three-dimensional modal output of the visual reconstruction of the martial arts routine is obtained asThe edge feature segmentation method was employed to express and process the visual features of the Wushu Sanda whip movement, and the edge contour feature analysis model was developed. The edge feature extraction is used to isolate or extract the features of an object. Once the edges have been identified, we can analyze the image and identify the object. The visual distribution function of Wushu Sanda whip movement was calculated asThis section establishes an artificial intelligence \u201325 auxilTo test the application performance of this method in realizing the visual image recognition of martial arts routine movements, the simulation experiment analysis was carried out using Matlab 7. The experimental parameters are shown in \u2217256. To test the recognition model, the selected images were divided into 5 groups in which each group was randomly selected to test the proposed model. Taking the abovementioned sample settings as the base for this experiment, examples of the visual images of martial arts routines are shown in The dataset used in this experiment was downloaded from the Internet. The dataset is comprised of a total of 8000 visual pictures of martial arts routines. From the given dataset, the low-quality images were eliminated and the high-quality 1700 images were selected for training the proposed model. All the images were converted into two-dimensional matrix format with area pixel distribution 256We confirmed the feature extraction ability of CNN Wushu routine action recognition through feature visualization. Taking the visual image of Wushu routine action in Figures \u03b1 to avoid zero gradients. The model convergence effect is shown in The activation function is usually used to perform a nonlinear transformation on the output of the hidden layer of deep neural networks to improve the nonlinear expression ability of the whole network model. For several common activation functions, the ReLU function does not have the problem of gradient disappearance compared with the sigmoid function and tanh function, and its calculation is very simple. Therefore, most CNN adopts ReLU as the activation function. However, the way that the ReLU function directly takes zero on the negative interval makes it easy to transmit the zero gradients back in the process of backpropagation training, resulting in the weight not being updated, that is, the phenomenon of \u201cneuron death\u201d occurs. To solve this problem, ReLU's modified Leaky ReLU function multiplies the value of the negative interval by a very small coefficient As can be seen from Competitive Wushu is emerging as a product of the times and is also the artistic carrier of traditional culture China times, and competitive Wushu is a kind of performing arts, through a creative and routine performer to organize routine stylized martial arts action. This paper proposed an artificial intelligence auxiliary algorithm for martial arts routine competition decision-making based on feature fusion. We combined frame segment scanning technology to sample martial arts routine movement visual images and developed a Wushu routine movement feature extraction model. 
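As a small illustration of the activation-function point above, the sketch below contrasts ReLU with Leaky ReLU; the negative-interval coefficient alpha = 0.01 is an assumed illustrative value, not necessarily the one used in the model.

```python
import numpy as np

def relu(x):
    """Standard ReLU: outputs zero on the negative interval, so gradients there are zero."""
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    """Leaky ReLU: the negative interval is multiplied by a small coefficient alpha,
    so a small gradient can still flow and 'dead neurons' are avoided."""
    return np.where(x >= 0, x, alpha * x)

x = np.array([-3.0, -0.5, 0.0, 0.5, 3.0])
print(relu(x))        # negative inputs mapped to 0
print(leaky_relu(x))  # negative inputs scaled by alpha instead of being zeroed
```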
The proposed model achieved the highest recognition accuracy of 93.98% as compared to other feature extraction methods based on deep learning algorithms. The proposed model was evaluated to demonstrate the superior performance of the method in improving the decision-making and action effectiveness instruction ability of Wushu routine competition. In the future, we are planning to incorporate and compare other machine learning algorithms such as RNN and BNN."} +{"text": "In the original version of our article , the autCorrected authors list of this publication: Marya Anne von Wolff and Dietmar Stephan.The authors apologize for any inconvenience caused and state that the scientific conclusions are unaffected."} +{"text": "Understand the most important interactions between inflammation and coagulation.Understand the mechanism(s) and role of NETs formation.Understand the pathophysiology of DIC.Understanding and knowledge of correctional treatment of different coagulopathies.There is now growing evidence for a crosstalk between inflammation and coagulation. In this session, Konstantin Stark report on the current evidence for a crosstalk between platelets and neutrophils. These cells are among the first responders to pathogens and perturbation of vascular integrity and the interplay triggers neutrophil extracellular trap (NET) formation. NETs have many functions including providing a scaffold for platelet adhesion and enhanced platelet activation. Cheng-Hock Toh provides an update on the pathophysiology of disseminated intravascular coagulation (DIC), a condition characterized by severe and uncontrolled activation of coagulation that may lead to multiorgan failure secondary to thrombosis in the microvasculature and in medium sized vessels and also due to consumptive coagulopathy leading to bleeding. Novel aspects of pathophysiology include mechanisms of thrombin generation, cellular dysfunction in the microcirculation and the contribution of innate immunity and inflammation with NETs formation. Finally, Riitta Lassila discusses management of coagulopathies in a selection of severe inflammatory disorders."} +{"text": "Biological and engineering strategies for neural repair and recovery from neurotrauma continue to emerge at a rapid pace. Until recently, studies of the impact of neurotrauma and repair strategies on the reorganization of the central nervous system have focused on broadly defined circuits and pathways. Optogenetic modulation and recording methods now enable the interrogation of precisely defined neuronal populations in the brain and spinal cord, allowing unprecedented precision in electrophysiological and behavioral experiments. This mini-review summarizes the spectrum of light-based tools that are currently available to probe the properties and functions of well-defined neuronal subpopulations in the context of neurotrauma. In particular, we highlight the challenges to implement these tools in damaged and reorganizing tissues, and we discuss best practices to overcome these obstacles. Neurotrauma such as brain injuries, stroke, and spinal cord injury (SCI) scatter the finely organized network of circuits that produce behaviors, causing devastating cognitive and/or sensorimotor impairments. Many biological and engineering strategies seek to repair these circuits to enhance functional recovery from neurotrauma. 
Understanding the consequences of neurotrauma and the mechanisms underlying repair strategies requires tools to visualize the neuroanatomical and functional properties of circuits in the brain and spinal cord. Optogenetics has triggered a paradigm shift in the resolution of these evaluations. Traditionally, electrical stimulation and recording methods have been the main tools for studying broadly defined circuits and pathways.However, the results of these experiments are difficult to interpret, since these methods cannot distinguish between the various subpopulations of neurons that constitute the brain and spinal cord. Optogenetics leverages the expression of genetically engineered proteins to target specific neuronal subpopulations. These methods enable activation, silencing, and recording of neural activity with light in precisely-defined neurons, which is not possible with electricity-based methods . Using photostimulation of cortical neurons located in the vicinity of regions receiving repeated TBIs, it was possible to confirm that functional deficits correlate with reduced cerebral blood flow and the presence of astrogliosis whose expression can be driven by distinct promoters (Madisen et al., Optogenetics has transformed the field of neuroscience. These methodologies have enabled unprecedented precision to interrogate the role and function of neurons within the realm of neurotrauma, albeit the implementation of these tools comes with a unique set of challenges. Harnessing the full power of these methods requires a careful design of experiments in a way that takes advantage of the specificity of the employed tools while controlling for potential side effects of both their expression and activation in the presence of existing tissue damage. While the current toolbox for optogenetic modulation and recording has expanded widely over the past decade, it continues to undergo rapid developments that promise the availability of more precise and less invasive methods for optical circuit interrogation.SC conceptualized and wrote the original draft and prepared the figure and table. SC and GC edited and revised the manuscript. GC supervised the work. All authors contributed to the article and approved the submitted version.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. The handling editor declared a past co-authorship with one of the authors GC.All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher."} +{"text": "The Journal and Authors retract the 12 March 2021 article cited above for the following reasons provided by the Authors:Following publication, concerns were raised regarding the integrity of the images in the published figures. The authors failed to provide a satisfactory explanation during the investigation, which was conducted in accordance with Frontiers\u2019 policies.This retraction was approved by the Chief Editors of Frontiers in Oncology and the Chief Executive Editor of Frontiers. The authors agree to this retraction."} +{"text": "Antioxidant active food packaging can extend the shelf life of foods by retarding the rate of oxidation reactions of food components. 
Although significant advances in the design and development of polymeric packaging films loaded with antioxidants have been achieved over the last several decades, few of these films have successfully been translated from the laboratory to commercial applications. This article presents a snapshot of the latest advances in the design and applications of polymeric films for antioxidant active food packaging. It is hoped that this article will offer insights into the optimisation of the performance of polymeric films for food packaging purposes and will facilitate the translation of those polymeric films from the laboratory to commercial applications in the food industry. An. AnCaricWhile most of the active ingredients used in film development for antioxidant food packaging come from botanical sources, antioxidants obtained from animal sources have also been adopted. A good example is melanin, a ubiquitous biological pigment found widely in the eyes, hair, skin, and brain of living animals ,66. NatuInorganic agents have been used as antioxidants, too. ZnO nanoparticles are a good example of such antioxidants and have been adopted in food packaging as antimicrobial agents and as UV light absorbers ,69. The Food additives approved by the Food and Drug Administration (FDA) in the US for use as antioxidants have also been incorporated into the film matrix for antioxidant active food packaging. Representative examples include butylated hydroxyanisole (BHA), butylated hydroxytoluene (BHT), and tertiary butylated hydroquinone (TBHQ). These additives have been incorporated into polypropylene-based films and have been exploited for possible use in food protection . An incrMultiple steps are involved from the time a film is first designed to the time when the film is manufactured and used by end-users. Failure can occur at one of these many steps, which can compromise technology transfer ,74. For In addition, when the film is used in the food industry, it is expected to experience potentially remarkable variations in temperature and humidity during food processing and storage, or even tear and wear during rough handling. The extent of such variations is sometimes much larger than what the laboratory can imitate or predict. This is compounded by the fact that most of the studies on antioxidant active food packaging have only tested the antioxidant capacity of the film using chemical tests such as the DPPH assay and the 2,2\u2032-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) radical scavenging assay. Even in studies that adopt food models to evaluate the performance of the film, the variety of food models used is limited. This hinders the technology transfer and industrial application of the reported film because the composition of the food, in fact, affects the rate of release of the antioxidant from the film, leading to variations in the efficiency of food protection based on antioxidant food packaging. This has been confirmed by a recent study , which eAntioxidant active food packaging can prolong the shelf life of foods by retarding the rate of oxidation reactions experienced by food components. As discussed in the sections above, significant advances in the design and development of packaging films loaded with a large diversity of antioxidants have been achieved over the last several decades, yet few challenges remain to be solved before antioxidant active food packaging films can be effectively translated from the laboratory into commercial applications. 
One challenge is the poor understanding of the interactions between antioxidants and other film components during film fabrication. The bulk properties of the film may change when antioxidants are added to the film. If the generation of the film involves the use of other additives such as plasticisers and colouring agents, the release kinetics of these additives may be changed, leading to the migration of these agents to food products and causing safety concern. For this, more studies are required to decipher and predict the interactions among different additives in a film so that films with better properties can be designed.The development of effective antioxidant active food packaging films is compounded by the difficulty of determining and optimising the amount and concentration distribution of the antioxidant in the packaging film. Uneven distribution or a suboptimal concentration of the antioxidant can hamper the efficiency of the film in food protection. Furthermore, the polymeric matrix of the film needs to be properly designed so that the release kinetics of the antioxidant match the kinetics of oxidation of food components. Over the years, various studies have been performed to apply mathematical models of mass transfer to examine the release kinetics of bioactive agents from the polymeric matrix; however, most of these studies have been performed by placing the matrix in a liquid environment ,78. The"} +{"text": "Elucidation of the human genome has increased understanding of human body responses to drug administration . LikewisThe Coronavirus disease 2019 (COVID-19) pandemic has reinforced the urgent need to study the genetic differences among people with mild symptoms and those with complex responses to the disease . During Examples that illustrate the importance of understanding therapeutic effectiveness responses in target groups include the premature administration of hydroxychloroquine based on affect glycosylation of angiotensin converting enzyme-2, without information on genetic variability and remdThe reservoirs of genetic material in biobanks in the United Kingdom and evenBiobanks can hold genetic data for a significant percentage of an entire population . In EstoThere are ethical dilemmas involved with asking a donor to provide unique informed consent. This has, however, been improved with the development of the model of dynamic consent . Ethics Due to high population diversity, Latin America faces the challenge of addressing genetic variability in studies to improve pharmacological responses to therapeutics for diseases. The creation of biobanks, their strengthening, and collaboration among them, would be a fundamental contribution to obtain pharmacogenetic information and efficient therapeutic responses in Latin America."} +{"text": "The Journal retracts the 26 March 2021 article cited above for the following reasons provided by the Authors:Following publication, concerns were raised regarding the integrity of the images in the published figures. The authors failed to provide a satisfactory explanation during the investigation, which was conducted in accordance with Frontiers\u2019 policies.This retraction was approved by the Chief Editors of Frontiers in Oncology and the Chief Executive Editor of Frontiers. 
The authors agreed to this retraction."} +{"text": "While observations in neurobiology provide inspiration for methods in artificial intelligence and machine learning\u2014most famously, in the development of artificial neural networks \u2014the recFeltgen and Daunizeau. Their focus is on refinement of the estimation procedure for drift-diffusion models inference; through the perspective of Bayesian filtering (prospective) and smoothing (prospective and retrospective). The authors propose a middle ground between the two by limiting the number of past time-steps over which retrospective inference is performed\u2014curtailing the computational cost accrued in modeling long sequences\u2014and demonstrate the success of the resulting scheme on a probabilistic reversal learning task.Temporal sequences of this sort are central to two other contributions to this Research Topic. Safron provides a broad overview of active inference and its relationship to other influential theories of brain and consciousness, including the global neuronal workspace theory (Gershman adds an interesting novel perspective to this through proposing a generative adversarial theory of brain function. This is based upon the widely used deep learning networks of the same name (Gershman highlights how human brain architectures could support the generative and discriminative parts of such networks.At a more conceptual level, e theory and intee theory . Gershmaame name . GeneratLeptourgos and Corlett and Mehltretter et al. The former set out a theory for the distortions in the sense of agency experienced by some people with schizophrenia. They do so through assuming the brain makes use of two distinct predictive hierarchies that deal with the feeling of, and the judgment of, agency, respectively. This dual hierarchy allows them to incorporate features of prominent theories of passivity phenomena (Mehltretter et al. take a different perspective on computational psychiatry and make use of deep learning methods in feature selection to predict remission of symptoms in patients taking antidepressants. Their focus is on the important challenge of interpretability for such analyses.A key area of application for theoretical neurobiology is in computational psychiatry . This inhenomena . MehltreThe papers outlined above offer a snapshot of the exciting work at the interface of neuroscience and probabilistic reasoning and the enduring symbiotic relationship between the two fields."} +{"text": "This paper reviews recent advances regarding land\u2013atmosphere\u2013ocean coupling associatedwith the Tibetan Plateau (TP) and its climatic impacts. Thermal forcing over the TPinteracts strongly with that over the Iranian Plateau, forming a coupled heating systemthat elevates the tropopause, generates a monsoonal meridional circulation over South Asiaand creates conditions of large-scale ascent favorable for Asian summer monsoondevelopment. TP heating leads to intensification and westward extension (northwardmovement) of the South Asian High , and exertsstrong impacts on upstream climate variations from North Atlantic to West Asia. It alsoaffects oceanic circulation and buoyancy fields via atmospheric stationary wave trains andair\u2013sea interaction processes, contributing to formation of the Atlantic MeridionalOverturning Circulation. The TP thermal state and atmospheric\u2013oceanic conditions arehighly interactive and Asian summer monsoon variability is controlled synergistically byinternal TP variability and external forcing factors. 
Itsdynamic blocking effect in winter leads to division of the impinging westerly flow intonorthern and southern branches, which merge on the lee side of the plateau to form thestrong East Asian jet stream downstream )o et al. found th )et al. found thIt is noteworthy that the relationship between TP thermal forcing and global and regionalSSTAs exhibits strong seasonality. In early spring, the general circulation over the TP andAsian monsoon regions is still in the winter state and the AHS over the TP is influencedmainly by the mid\u2013high-latitude circulation in the Northern Hemisphere. After the onset ofthe Southeast Asian monsoon in early May, accompanied by weakening and northward retreat ofthe westerly jet stream, the relationship between the tropical oceans, especially the IndianOcean, and the TP diabatic heating shows a more intimate interconnection. Specifically, theIndian Ocean Basin Mode (IOBM) can significantly affect the heating condition of the TP byaltering the local meridional circulation and suppressing or enhancing ascent over the TP. In termThe interannual variability of the East Asian Summer Monsoon (EASM) is controlled byatmospheric internal variability and external forcing. Through data diagnosis and numericalsimulations, the relative importance of the thermal forcing of the TP and the first leadingmode of the Indian Ocean SSTA, that is the IOBM, with regard to the interannual variabilityof the EASM circulation was investigated .ResultsDespite the recent advances in studies of the thermal status of the TP and of its climateimpacts, many unknowns and challenges remain.It has been demonstrated that the elevated thermal status of the TIPS exerts significantimpact on atmospheric circulations and the global climate, particularly the maintenance ofthe ASM. However, mechanisms for generation and maintenance of the monsoon can be differentfrom that responsible for its variation on different timescales. Given the existence of boththe land\u2013sea thermal contrast and the thermal forcing of large-scale orography in thecurrent climate system, the relative contributions to monsoon variability of the mechanicalforcing of the TP versus its thermal forcing and of the remote effects versus the localthermal impacts remain unclear and require further study.Improved understanding of the thermal status of the TP and of its climate impact is helpfulfor enhancing the skill of climate prediction. Knowledge of clouds and their role in TPclimate variations remains limited because of the sparse observations before the 1970s.Since 2006, with the launch of the CloudSat and CALIPSO satellites, the vertical structureof clouds and their radiative effects over the TP have been the focus of considerableresearch interest ,131. HowThe previously sparse distribution of meteorological observation stations over the TIP areameant information for quantification of the land\u2013air coupling processes over the TP wasinsufficient. The Chinese Meteorological Administration, together with the provinces nearthe plateau, has now provided 2 billion Yuan (RMB) to construct an improved observationnetwork over the TP comprising more than 6000 automatic weather stations before 2023. 
Thisnetwork is expected to lead to marked improvement in our understanding of the coupledland\u2013air processes and the thermal features of the TP, as well as their variations.Correlation diagnosis, statistical analysis and numerical modeling have all been usedwidely to reveal the impacts of TP forcing on downstream circulation and climate, whilstdynamic approaches are helpful for improved understanding of these impacts. From thepotential vorticity\u2013diabatic heating (PV-Q) perspective, it is known that the strong diurnalchange in surface heating of the TP in summer significantly influences the land\u2013air couplingand provides favorable background conditions for the genesis of a plateau vortex during thenight . In wintMany African and European regions are highly populated and the regional ecosystems andenvironments are susceptible to global climate change; therefore, better understanding ofthe impact of the TP on its upstream climate is very important because of the strong linkbetween the TP and its upstream regions. Although previous studies have demonstratednumerous changes in the upstream climate that appear linked to the conditions of the TP, thephysical processes through which the plateau might exert its influence remain unclear. Forexample, in the mid\u2013high latitudes, the rotational portion of atmospheric motion is dominantand forcing, such as that derived from the TP, usually generates wave-train patterns.Therefore, it is important to consider whether the changes in TP conditions directly \u2018block\u2019the eastward propagating signals causing variations of the upstream climate, or generatesignals that propagate eastward across the North Pacific and NorthAmerica to affect regions to the west of the plateau. Future investigations are needed toprovide answers to the many related important questions.Understanding the modulation of the TP on air\u2013sea interaction is challenging in the studyof climate dynamics. Despite encouraging progress in exploration of the interactions betweenthe TP and global oceans, various important issues remain unaddressed. For example, it willbe important to determine how such modulation influences the variability of the ASM and theENSO\u2013monsoon relationship. Moreover, the question of whether the melting of Arctic sea iceis related to continuous weakening of the TP heat source is of fundamental importance. Inaddition, further investigation is required to establish the optimal method for measuringthe relative importance of external forcing to the variability of the TP heat source against its self-sustainedvariability."} +{"text": "In recent years, the sanitization of environments, devices, and objects has become mandatory to improve human and environmental safety, in addition to individual protection and prevention measures. International studies considered ozone one of the most useful and easy sanitization methods for indoor environments, especially hospital environments that require adequate levels of disinfection. The purpose of this work was to evaluate the microclimate influence on sanitizing procedure for indoor settings with ozone, to prevent infections and ensure the safe use of the environments. The concentration of ozone was measured during sanitization treatment and estimation of microorganisms\u2019 survival on the air and different contaminated plates after the sanitization operations were performed. 
The results demonstrated a significant reduction in the microbial count that always fell below the threshold value in different conditions of distance, temperature, and relative humidity. In the last year, a new worldwide emergency introduced the requirement of new disinfection and sanitation procedures to optimize the quality of care and work safety in professional environments for the sanification of the environments. These data can be used for reducing ozone concentration although assuring safe disinfection under different conditions."} +{"text": "Thermal drift of nano-computed tomography (CT) adversely affects the accurate reconstruction of objects. However, feature-based reference scan correction methods are sometimes unstable for images with similar texture and low contrast. In this study, based on the geometric position of features and the structural similarity (SSIM) of projections, a rough-to-refined rigid alignment method is proposed to align the projection. Using the proposed method, the thermal drift artifacts in reconstructed slices are reduced. Firstly, the initial features are obtained by speeded up robust features (SURF). Then, the outliers are roughly eliminated by the geometric position of global features. The features are refined by the SSIM between the main and reference projections. Subsequently, the SSIM between the neighborhood images of features are used to relocate the features. Finally, the new features are used to align the projections. The two-dimensional (2D) transmission imaging experiments reveal that the proposed method provides more accurate and robust results than the random sample consensus (RANSAC) and locality preserving matching (LPM) methods. For three-dimensional (3D) imaging correction, the proposed method is compared with the commonly used enhanced correlation coefficient (ECC) method and single-step discrete Fourier transform (DFT) algorithm. The results reveal that proposed method can retain the details more faithfully. Computed tomography (CT), which is a nondestructive technique to obtain structural information inside objects, is widely used in cultural relic detection, life sciences, and other industrial applications . HoweverThe projection alignment method based on short reference scan was proposed by Sasov in 2008.At present, there are three common methods to align the main projection with respect to the reference projection: intensity based method, frequency domain based method, and feature based method. The intensity based method constructs the similarity measure through the gray scale of projection and calculates the extreme value of the similarity measure by using the optimization algorithm to get the translation parameters. Evangelidis et al. proposedThe feature based method uses the similarity constraint of position and direction of the feature points extracted by local descriptors to align the projections, such as scale invariant feature transform (SIFT) , speededIn this study, a method based on outlier elimination and position adjustment is proposed to align the main projection by the reference projection for reducing or even eliminating the thermal drift artifact of nano-CT. The proposed method can effectively eliminate outliers with high accuracy. Further, an elimination model of outliers based on the geometric position of features and structural similarity (SSIM) of projeThe rest of this paper is organized as follows. The reference projections are acquired with a larger rotation step after obtaining the main projections. 
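For orientation, the snippet below shows how a purely frequency-domain, single-step DFT style sub-pixel shift estimate between a main and a reference projection can be obtained with scikit-image's phase_cross_correlation. This is offered only as a sketch of the kind of baseline the paper compares against, not as the paper's own implementation; the synthetic images and the upsample factor are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import shift as subpixel_shift
from skimage.registration import phase_cross_correlation

rng = np.random.default_rng(0)
reference = rng.random((256, 256))                    # stand-in reference projection
main = subpixel_shift(reference, shift=(3.4, -1.7))   # stand-in drifted main projection

# Sub-pixel translation estimate via the cross-power spectrum (DFT-based baseline).
estimated_shift, error, _ = phase_cross_correlation(reference, main, upsample_factor=20)
print(estimated_shift)  # close to the applied (3.4, -1.7) translation, up to sign convention
```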
The proposed method is used to calculate the drifts between the main and the reference projections. The drift of main projection without reference projection is estimated by cubic spline interpolation of adjacent drift.The workflow of the proposed method is shown in The alignment model of main projection and reference projection is considered as a rigid transformation model since drift is a slowly varying translation process .(1)[xmaSURF is a fast and robust feature description method which consists of two main parts: feature generation and matching. The purpose of feature generation is to determine feature point locations and descriptors. Matching is achieved by the Euclidean distance of the feature points.It is assumed that the matching feature point sets have been extracted by SURF. The feature point sets of the main and reference projections are denoted by ightness due to tightness . The outThe proposed elimination strategy includes rough elimination and refined elimination. The purpose of rough elimination is to identify and eliminate obvious outliers for accelerating the operational efficiency of refined elimination.First, the feature points Then, the feature angle is considered to identify the outliers. If the matching relationship of the feature points is accurate, the feature angles However, it is difficult to satisfy Equation (3) due to the difference in brightness and noise distribution between the main projection and reference projection. The difference between the feature angle of To obtain the optimal solution of Equation (4), the feature angle similarity function It is extremely difficult to directly set the upper and lower limits of the feature angle, so the feature point offset limit Then, the rough elimination strategy of outliers can be expressed asIn The SSIM is used to further refine the features. Each pair of feature points provides a guide to align the projection through the alignment vector To improve the robustness of refined elimination, the SSIM threshold ASIM is used to eliminate the outliers in the SURF initial feature points. The complete procedure is summarized in The location of feature points may not be calculated accurately by SURF due to the influence of brightness and gray distribution of projected image. The position adjustment method based on SSIM provides accurate position of feature points. In the main projection and reference projection, image blocks SSIM is used for feature point relocation. The relocation process is summarized in The features directly affect the calculation of drift. There are three parameters in the proposed algorithm: the feature point offset limit In the nano-CT scanning experiment, the time difference between the main and reference projections is long. Three 2D transmission imaging experiments are set up to evaluate the influence of brightness, noise level, and initial feature number on the alignment accuracy. To evaluate the effectiveness and robustness of the proposed method for high-precision matching between the main projection and the reference projection, the SURF, RANSAC, and LPM are used as comparison methods.All the data used in the experiment are obtained from the nano-CT in Henan Imaging and Intelligent Information Processing Laboratory. To consider the difference in correction accuracy caused by different gray distributions and shapes, four groups of actual scan data are selected for testing. The samples and exposure time are listed in Firstly, the alignment effect under different lighting conditions is tested. 
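A condensed sketch of the two-stage idea described above follows: a rough pass that discards matches whose displacement is inconsistent with the bulk of the matches (standing in for the feature-angle and offset-limit test), and a refined pass that keeps a match only if aligning the reference projection by that match's displacement gives a sufficiently high SSIM. The thresholds, the use of the median displacement, and the whole-image integer shift are simplifying assumptions; the paper additionally relocates the surviving features by maximising SSIM between small neighbourhood blocks.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def eliminate_outliers(pts_main, pts_ref, main_img, ref_img,
                       offset_limit=20.0, asim=0.7):
    """Simplified rough-to-refined outlier elimination for matched feature points.

    pts_main, pts_ref : (N, 2) arrays of matched (x, y) coordinates from SURF.
    offset_limit      : rough stage; matches whose displacement deviates from the
                        median displacement by more than this many pixels are dropped.
    asim              : refined stage; a match is kept only if shifting the reference
                        projection by its displacement yields SSIM >= asim.
    Threshold values here are illustrative assumptions, not the paper's settings.
    """
    disp = np.asarray(pts_main, float) - np.asarray(pts_ref, float)
    med = np.median(disp, axis=0)
    rough_keep = np.linalg.norm(disp - med, axis=1) <= offset_limit

    data_range = float(main_img.max()) - float(main_img.min())
    refined = []
    for i in np.where(rough_keep)[0]:
        dx, dy = np.round(disp[i]).astype(int)
        aligned = np.roll(ref_img, shift=(dy, dx), axis=(0, 1))  # crude integer alignment
        if ssim(main_img, aligned, data_range=data_range) >= asim:
            refined.append(i)
    return np.asarray(refined, dtype=int)
```

The translation finally applied to the main projection would then be estimated from the refined (and relocated) matches, for example as their mean displacement, consistent with the rigid translation model above.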
The interval time of image pairs is shown in Secondly, the number of initial features is considered to test the effect on accuracy. The samples in Finally, the robustness of ASIM is tested. The sample projection pairs of the first and second rows in To evaluate the accuracy of the proposed method in 3D reconstruction, the method is applied to the 3D imaging experiment of an electronic component and cabbage seed. The relevant scanning parameters are listed in the RMSE), drift calculation error, precision, and accuracy are used to evaluate the effect of different methods on the elimination of feature points.The root mean square error of image is too low, the original feature matching fails, so the feature elimination and point adjustment can also fail. In the future, we hope to further optimize the feature matching strategy to maintain the robustness of the algorithm."} +{"text": "In this review article we will discuss the acute hypertensive response in the context of acute ischemic stroke and present the latest evidence-based concepts of the significance and management of the hemodynamic response in acute ischemic stroke.Acute hypertensive response is considered a common hemodynamic physiologic response in the early setting of an acute ischemic stroke. The significance of the acute hypertensive response is not entirely well understood. However, in certain types of acute ischemic strokes, the systemic elevation of the blood pressure helps to maintain the collateral blood flow in the penumbral ischemic tissue. The magnitude of the elevation of the systemic blood pressure that contributes to the maintenance of the collateral flow is not well established. The overcorrection of this physiologic hemodynamic response before an effective vessel recanalization takes place can carry a negative impact in the final clinical outcome. The significance of the persistence of the acute hypertensive response after an effective vessel recanalization is poorly understood, and it may negatively affect the final outcome due to reperfusion injury.Acute hypertensive response is considered a common hemodynamic reaction of the cardiovascular system in the context of an acute ischemic stroke. The reaction is particularly common in acute brain embolic occlusion of large intracranial vessels. Its early management before, during, and immediately after arterial reperfusion has a repercussion in the final fate of the ischemic tissue and the clinical outcome. Blood pressure is usually elevated in the acute phase of all types of hemorrhagic and ischemic strokes. This acute hypertensive (presumably physiologic) response is common in the early phase of acute ischemic stroke. Approximately two-third of the ischemic strokes present with elevated systolic and diastolic blood pressure . The sigAfter an acute occlusion of a major intracranial artery, the cerebral tissue is able to sense a decrease in the interstitial oxygen tension . The resThe acute hypertensive response observed in the acute phase of ischemic stroke is self-limiting, and it tends to decline over the course of the next several days and return to the premorbid baseline levels. In embolic strokes, this tends to happen at the same time when spontaneous recanalization occurs. Mechanical thrombectomy for large vessel occlusion can shorten this period, and it is not uncommon to observe a dramatic reduction of the systolic blood pressure after the vessel is recanalized. 
On the contrary, in cases of ineffective vessel recanalization, the acute hypertensive response may persist for several days.Sudden and aggressive reduction of the blood pressure is deleterious across all types of ischemic strokes. This is also true in intracerebral hemorrhage. In the case of hypertensive intracerebral hemorrhage, two different randomized clinical trials showed a tendency for reduction in the hematoma expansion with early and moderate reduction of the hypertensive response , 6. HoweLess than one-third of acute ischemic strokes can present without elevation of the blood pressure, and some of them can present with low blood pressure levels. Subjects presenting with acute ischemic stroke and lack of an acute hypertensive response can harbor other cardiovascular comorbidities, including concomitant congestive heart failure and severe aortic and mitral valvular disease. These patients are well known to face worse outcomes in spite of successful acute interventions. In a recent secondary analysis of the Head Positioning in acute Stroke Trial (HeadPoST), patients with acute ischemic stroke presenting initially with low blood pressure, defined as a systolic blood pressure less than 120 mmHg and a diastolic blood pressure less than 70 mmHg, had an increased risk of death or dependency compared with patients presenting with an acute hypertensive physiologic response [Acute ischemic stroke patients presenting with an acute physiologic hypertensive response represent the majority of the cases. For the subset of patients arriving within the first 3\u20134\u00bd h after symptom onset, guidelines recommend maintaining the systolic blood pressure below 185 mmHg and the diastolic blood pressure below 105 mmHg. The original studies that tested the effectiveness of intravenous r-tPA for acute ischemic stroke used these blood pressure thresholds to reduce the risk of symptomatic hemorrhagic transformation that could potentially offset the benefit of the thrombolytic treatment. However, these trials did not assess a specific blood pressure target for the lower limit of the goal. A U-shaped association between the initial blood pressure and the final unfavorable outcome in acute ischemic stroke was demonstrated in several observational studies. The extremes of the range (low blood pressure and excessively elevated blood pressure) are associated with worse outcomes, and the best outcomes appear to be present with modest initial hypertension. However, none of these observational studies were able to prove causality. Two large registries of intravenous thrombolysis in acute ischemic stroke reported the association between hypertension and the risk of the symptomatic hemorrhagic transformation . In bothLarge vessel occlusion is perhaps the subtype of acute ischemic stroke in which the relevance of the acute hypertensive response and its adequate management are of great significance for the final outcome. The acute hypertensive response is noticeable immediately after the embolic occlusion of a proximal intracranial artery. This instantaneous systemic hemodynamic physiologic response is key for the maintenance of the retrograde leptomeningeal collateral circulation. It is also responsible for the initially minimal neurological deficit present in patients with acute large vessel occlusion presenting with a low NIHSS score or even with a complete resolution of the clinical symptoms (transient ischemic attack). 
However, this could be a precarious situation, and it is calculated that approximately 20 to 40% of these subjects will experience deterioration of their neurologic condition during the subsequent 24\u201372 h. In this context, arterial hypotension (spontaneous or induced) is well known to be deleterious for the final fate of the penumbral tissue and, ultimately, the clinical outcome. Spontaneous hypotension can be seen in subjects with large vessel occlusion and concomitant cardiac disease that impairs the stroke volume. Iatrogenic systemic hypotension can be seen in patients with large vessel occlusion who undergo mechanical thrombectomy under general anesthesia. During the induction of general anesthesia, the inhaled volatile anesthetics can cause inappropriate lowering of the systolic and mean arterial blood pressure that can overcorrect the acute hypertensive physiologic response. In a retrospective study of 371 patients who underwent mechanical thrombectomy under general anesthesia, a linear association between arterial hypotension and worse outcome was demonstrated . Even a Successful endovascular recanalization in large vessel occlusion is the most important predictor of subsequent successful clinical outcome at 90 days. A successful recanalization implies an almost complete or complete reperfusion of the occluded vessel, defined as a TICI score of 2b or higher (Fig. ). A successful recanalization is commonly associated with spontaneous regression and disappearance of the acute hypertensive physiologic response. The lack of spontaneous resolution of the acute hypertensive response after a successful recanalization has been correlated with worse neurologic outcome in several observational studies. The reason behind the persistence of arterial hypertension in spite of successful recanalization is probably multifactorial. Uncontrolled premorbid hypertension might explain some cases, but the fast progression to ischemia in spite of early and effective vessel recanalization is also suspected. In this last circumstance, the ischemic gliovascular tissue might continue sending a signal that keeps the sympathetic outflow from the central nervous system into the cardiovascular system. Post-operative arterial hypertension in patients who underwent a successful recanalization has been correlated with higher chances of hemorrhagic transformation.The impact of blood pressure levels within the first 24 h after mechanical thrombectomy on the clinical outcome was reported in a recent retrospective study that included 700 patients with large vessel occlusion who underwent mechanical thrombectomy . The stuAreas of hyperattenuation in the brain parenchyma represent regions of the brain tissue with a broken blood-brain barrier due to established ischemia after mechanical thrombectomy. This area of hyperattenuation is more prone to hemorrhagic transformation after effective recanalization. In a retrospective study of a prospectively collected cohort of consecutive acute ischemic stroke patients with large vessel occlusion who underwent successful mechanical thrombectomy, 50% of the subjects exhibited areas of hyperattenuation in the post-procedure non-contrast CT . In thisUntil further evidence from randomized clinical trials is available, it appears prudent to try to correct the persistence of an acute hypertensive response after a successful mechanical thrombectomy to avoid hemorrhagic reperfusion injury. 
This is particularly important in subjects with early signs of established ischemia, as evidenced by areas of hyperattenuation in the immediate post-operative non-contrast CT of the head. Based on the best current evidence, the systolic blood pressure should range between 140 and 160 mmHg and the diastolic blood pressure should remain below 90 mmHg after successful thrombectomy. Caution should be applied in subjects with a history of premorbid arterial hypertension. The control of the blood pressure should start in the Cath lab immediately after the clot is removed. We suggest the placement of an invasive arterial line to facilitate continuous and accurate blood pressure monitoring for at least 24\u201372 h, as well as to facilitate the accurate titration of the doses of the intravenous short-acting vasodilators. If the Cath lab has the capability to perform flat panel detector-computerized tomography of the head, we suggest acquiring information on hyperattenuated areas of the brain parenchyma to better identify subjects at higher risk of reperfusion injury and to implement therapeutic measures (including blood pressure control) to improve the final clinical and neurologic outcome (Fig. Acute hypertensive response is considered a physiologic hemodynamic reaction of the cardiovascular system that is commonly seen in patients with acute ischemic stroke. The absence of this physiologic hemodynamic response, or its iatrogenic overcorrection, has a negative impact on the final neurologic outcome, particularly in subjects with large vessel occlusion before recanalization. The persistence of the hypertensive response after effective recanalization appears to negatively affect the final outcome, and the cautious lowering of the blood pressure to a safer range appears to be reasonable."} +{"text": "The Long-Term Care Insurance (LTCI) Act in South Korea was enacted in 2008 to improve the quality of life of older adults by promoting better health and to mitigate the burden of care on family members. In 2014, the Enforcement Decree for the LTCI Act was revised to broaden the criteria for eligible recipients of LTCI-related services and care. This policy analysis seeks to explore the political circumstances under which the Act was formed and how social environmental factors had evolved to revise the LTCI Act, using a multiple streams policy analysis framework. A combination of factors influenced the status of the LTCI policy agenda, including shifts in the aged demographic structure and increasing medical expenditures. Following the Korean National Dementia Plan, a pilot project of dementia care was conducted to demonstrate the efficiency of dementia care services. While the Korean Senior Citizens Association (KSCA) was less successful in gaining press attention around dementia care, the presidential election and candidates\u2019 election pledges were key factors that suddenly opened the opportunity to extend the pool of recipients for dementia care. The process through which the LTCI Act was revised and expanded showed the importance of the political environment associated with the election. Based on the recognition of the LTCI policy agenda and the already tested efficiency of dementia care services, the election led to the revision of the LTCI Act, which was quickly diffused by the new administration. 
From the revision of LTCI, international policymakers and scholars should recognize how the political events might use the policy for older adutls."} +{"text": "Discovering a composite of measures of executive function/working memory predicted everyday medication adherence among older adults, led to the development of a behavioral intervention, the Multifaceted Prospective Memory Intervention (MPMI) to improve hypertension medication adherence. The intervention resulted in a 35% improvement in adherence compared to an active education and attention control condition. However, adherence slowly declined over an additional five months of adherence monitoring without the presence of interventionists in the home. We proposed that the use of technology might help individuals maintain the prospective memory strategies, resulting in sustained adherence. An interdisciplinary team was formed to translate the behavioral intervention to technology, resulting in the first version of the MEDSReM system. In this presentation we describe the evolution of the project, from the components of the successful MPMI to the design and initial testing of MEDSReM. These efforts provide general insights about translating interventions into technology tools."} +{"text": "The pathophysiological mechanisms involved in the symptoms of overactive bladder syndrome are varied. Thus, despite the current guidelines are organized in steps for treatment , based oThe use of electrical stimulation in the treatment of overactive bladder has been proposed for several years , and hasAlthough proposed for several years, the effectiveness of bladder training on symptoms of overactive bladder is still poorly studied in the literature as the aIn the present prospective randomized trial , the aut"} +{"text": "N-ethylmaleimide sensitive factor) initially purified from an in vitro reconstitution of Golgi membranes needs to be transported around the cell by the help of motor molecules such as kinesins and dyneins. These molecular machines use the energy of ATP hydrolysis to generate processive and efficient distribution of cargo and their carriers along the microtubule network of the cell. A set of comprehensive reviews from Schiavon et al. discuss previous work on mitochondria intracellular transport and provide an example where impaired mobility of mitochondria may be playing a central role in the pathogenesis of Charcot-Marie-Tooth (CMT) disease, a neurodegenerative disorder. Additionally, Sneeggen et al. describe how the role of intracellular trafficking in cell proliferation, epithelial to mesenchymal transition, and invasion contributes to the regulation of energy consumption and metabolism during cancer progression. Lastly, Tavares et al. delineate the capacity of the HIV-1 virus to hijack key machinery of the intracellular pathway in order to ensure efficient viral replication and survival in the host.Finally, this Research Topic includes three insightful articles that put the role of intracellular trafficking in the context of disease. Altogether, these contributions should give the reader an updated view of different aspects of the intracellular traffic field and a glimpse on the future directions of research in each sub-field. We hope the reader can also find the gaps and the many still unanswered questions in the field while navigating through this Research Topic: how is the motor molecules' activity regulated during development and disease progression? 
What is the next technology that will thrust this field into the next century of discovery? At the molecular level, how do we modulate cargo localization through targeting strategies, and in doing so, influence the adaptive requirements of a cell? How can we prevent future public health crises by manipulating the host-microorganism energy supply and demand flux? How can we generate a more comprehensive understanding of the molecular machinery of protein trafficking? The dysregulation of the machinery involved in controlling intracellular traffic is frequently associated with diseases that include cancer, developmental and degenerative diseases, and multiple immunity disorders, highlighting the urgency of unraveling the mechanisms that regulate the movement of cargo inside the cell.All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication.AH was funded by the Spanish Ministry of Economy and Competitiveness (BFU2017-88766-R and PID2020-119132GB-I00). DCG was funded by a Sir Henry Dale Fellowship awarded from the Wellcome Trust/Royal Society (Grant No. 210481).The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher."} +{"text": "The Journal and Authors retract the 5 November 2019 article cited above for the following reasons provided by the Authors:Following publication, concerns were raised regarding the integrity of the images in the published figures. The authors failed to provide a satisfactory explanation during the investigation, which was conducted in accordance with Frontiers\u2019 policies.This retraction was approved by the Chief Editors of Frontiers in Oncology and the Chief Executive Editor of Frontiers. The authors agree to this retraction."} +{"text": "By means of virtual reconstructions obtained from the data provided by the different Multislice Computed Venotomographies (MCVT) performeSixtty-four detector Phillips Multislice tomograph and process the data using IntelliSpace Portal .Discussion and conclusions: The knowledge of the venous anatomy, through virtual representations, allow to understand the collateral circulation and its patterns in cases of obstruction of the Superior Vena Cava [The collateral circulation network chosen depended mainly on whether or not the Azygous vein was compromised and the time of evolution of the obstruction . From thena Cava ."} +{"text": "The training course for the training of highly specialized interviewers for the IV SCAI study was developed within the EU-Menu program of EFSA; the implementation of the innovative training model was carried out with the support of the Determinants of Diet and Physical Activity Knowledge Hub (DEDIPAC) project, as part of the Joint Programming Initiative (JPI) A Healthy Diet for a Healthy Life (HDHL), which for Italy was led by the IRILD consortium Research support infrastructures to promote an active lifestyle and a healthy diet.The authors thank Dr. Gaetana Ferri for her enthusiastic support and Dr. Elisabetta Lupotto (CREA Food and Nutrition) for her supportive commitment. 
Our immense gratitude goes to all participants for their great and competent commitment in completing the IV SCAI 3 months\u22129 years old individuals: Rosaria Amabile, Cristina Baggio, The authors apologize for this error and state that this does not change the scientific conclusions of the article in any way. The original article has been updated."} +{"text": "The current Research Topic focused on the implementation of next-generation sequencing (NGS) applications in the field of genetics, genomics, and transcriptomics of red blood cell defects. This is an important issue in the current scientific scenario either in the clinical/diagnostic or in research settings. Indeed, the widespread use of these technologies has already modified the approaches to diagnosis and research for red blood cell disorders within Karaosmanoglu et al. investigated the transcriptomic profile of three different cell types of bone marrow resident cells isolated from patients affected by DBA related to RPS19 and CECR1 mutations. The authors observed increased expression of genes associated with both cellular stress and immune response. Interestingly, they also suggested that the gene expression profile reflecting cellular stress and cytokine response in proerythroblasts may be associated with increased inflammation in the bone marrow niche. This observation agrees with previous studies suggesting a specific contribution of TNF-\u03b1 in the inhibition of erythroid differentiation and in the pathogenesis of ribosomal stress observed in DBA.From the research perspective, the use of NGS applications allowed both identifying new causative genes associated to red blood cell defects as well as understanding the pathogenic mechanisms underlying these disorders. Within this Research Topic, the study by The importance and the advantages of NGS are easily understandable. However, despite the extensive employment of this technique in routine laboratory practice, some considerations on its limitations and/or disadvantages should be done. A major limitation of NGS genome analysis remains the data processing steps and the assessment of the pathogenicity of identified genetic variants (Roy and Babbs, flox/flox mice with a transgenic mouse strain that expresses the Cre-recombinase under regulation of the endogenous erythropoietin receptor promoter. The resulting CdanEry\u0394 mice die in-utero at mid-gestation from severe anemia with very low numbers of circulating erythroblast. Of note CdanEry\u0394 model displays severe aberrations of primitive erythropoiesis and erythroblasts exhibit the pathognomonic morphological features described in individuals with CDA I, suggesting that CDAN1 is required for primitive erythropoiesis.The study by This collection of articles related to the study of genetics, genomics, and transcriptomics of red blood cell defect highlighted the actual and future direction of the diagnosis and research in this field. Although these conditions are rare, the study of their pathogenetic mechanism could shed light on new mechanisms that can be shared with other conditions affecting larger populations.AI, RR, and IA wrote the manuscript. 
All authors contributed to the article and approved the submitted version.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher."} +{"text": "Older adults from racial/ethnic backgrounds as well as those from rural areas experience a disproportionate burden of physical and mental health risk factors. Given the prevalence of comorbid physical and mental health conditions in later life, the inadequacies of current treatment approaches for averting years living with disability, the disparities in access to the health care delivery system , and the workforce shortages to meet the mental and physical health needs of racial/ethnic and rural populations, development and testing of innovative strategies to address these disparities are of great public health significance and have the potential to change practice. This session will illustrate how four different interventions are being used to address mental and physical health needs in Latino and rural-dwelling older adults with the goal of reducing and ultimately eliminating disparities in these populations. Particular attention will be paid to the use of non-traditional interventions . Results of clinical research studies will be presented alongside clinical case presentations. This integrated focus highlights the importance of adapting research interventions to real-world clinical settings."} +{"text": "This paper investigates the influences of nonperiodic rainbow resonators on the vibration attenuation of two-dimensional metamaterial plates. Rainbow metamaterial plates composed of thin host plates and nonperiodic stepped resonators are considered and compared with periodic metamaterial plates. The metamaterial plates are modelled with the finite element modelling method and verified by the plane wave expansion method. It was found that the rainbow metamaterial plates with spatially varying resonators possess broader vibration attenuation bands than the periodic metamaterial plate with the same host plates and total mass. The extension of attenuation bands was found not to be attributed to the extended bandgaps for the two-dimensional metamaterial plates, as is generally believed for a one-dimensional metamaterial beam. The complete local resonance bandgap of the metamaterial plates is separated to discrete bandgaps by the modes of nonperiodic resonators. Although the additional modes stop the formation of integrated bandgaps, the vibration of the plate is much smaller than that of resonators at these modal frequencies, the rainbow metamaterial plates could have a distinct vibration attenuation at these modal frequencies and achieve broader integrated attenuation bands as a result. The present paper could offer a new idea for the development of plate structures with broadband vibration attenuation by introducing non-periodicity. Metamaterials are artificial structures with extraordinary properties that cannot be found in naturally occurring materials. Metamaterials were first described in the area of electromagnetic wave control ,2,3. 
ManL = n\u03bb/2, where L is the structure periodicity dimension, and \u03bb is the wavelength of the propagating waves. The occurrence of Bragg-type bandgaps requires the structure periodicity dimensions to be comparable to the wavelength of the propagating waves; therefore, it is difficult to achieve Bragg-type bandgaps at low frequencies. By contrast, the local resonance bandgaps are formed by the resonance of oscillators attached to or embedded in host structures. The periodicity dimensions of the structures could thus be much smaller than the wavelength of propagating waves.One of the most important features of elasto-acoustic metamaterials is the existence of bandgaps, within which the propagation of waves is prohibited or greatly attenuated to required levels. The bandgaps can be generated by two mechanisms, Bragg scattering and local resonance. Bragg scattering bandgaps occur with The existence of local resonance bandgaps enables the metamaterials to have a great potential for the vibration attenuation at low frequencies. However, the widths of local resonance bandgaps are limited at low frequencies, which challenges the practical application of metamaterials. The elasto-acoustic metamaterials are originally periodic structures; \u201crainbow\u201d metamaterials with nonperiodic units that can adjust the band structures were found to be capable of enhancing the vibration attenuation in recent research. Zhu et al. and WangThe abovementioned rainbow structures were centered on 1D metamaterial beams or multi-dimensional phononic crystals. To the best of our knowledge, none of the existing papers have investigated the influences of nonperiodic rainbow resonators on 2D metamaterial plates. Plates are the fundamental elements of many engineering structures; metamaterial plates with effective broadband vibration attenuation have potential for applications with a demand for low-frequency vibration control, such as cabin walls, ceilings and floors of airplanes, trains and other vehicles, machines, etc. The development of plate structures with broadband low-frequency vibration attenuation could therefore be critical for both industry and academics.Metamaterial plates with rainbow stepped resonators are first proposed in the present paper for the purpose of obtaining broader vibration attenuation bands. The masses of oscillators are spatially varying in two directions. The dispersion spectrums and frequency response functions (FRFs) of rainbow metamaterial plates are compared with those of a periodic structure of the same host plate and total mass. The mechanisms of the attenuation band extension are also revealed through the analysis of mode shapes. The present paper is structured as follow: Metamaterial plates with stepped resonators are considered in the present paper as shown in sonators a, whereasonators b. It shoNotably, it should be stressed that we focused on the flexural vibration of the metamaterial plate; only anti-symmetric (A mode) Lamb waves existed in the plate due to the thin layer assumption, and the resonators also oscillated in a normal direction to the plate with only the out-of-plane resonance modes being considered.The dispersion spectrum and FRFs of the metamaterial plates were calculated by FE models. The host plates and stepped oscillators were treated as an integrated solid part modelled by the Solid Mechanics module of COMSOL Multiphysics. 
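The governing equations and the plane wave expansion recalled in the following passage did not survive extraction. For orientation only, a hedged sketch of the standard form such a thin-plate model with attached spring-mass resonators usually takes is given below; the symbols (D bending stiffness, \rho h mass per unit area, k_R and m_R resonator stiffness and mass, \mathbf{G} reciprocal-lattice vectors) and the exact notation of the original Equations (1)\u2013(9) are assumptions, not a reconstruction of the paper's text.

```latex
% Kirchhoff plate carrying spring--mass resonators at lattice sites \mathbf{R}
% (assumed standard form):
D\!\left(\frac{\partial^{2}}{\partial x^{2}}+\frac{\partial^{2}}{\partial y^{2}}\right)^{\!2} w(\mathbf{r},t)
+\rho h\,\frac{\partial^{2}w(\mathbf{r},t)}{\partial t^{2}}
=\sum_{\mathbf{R}}k_{R}\bigl[u_{R}(t)-w(\mathbf{R},t)\bigr]\,\delta(\mathbf{r}-\mathbf{R}),
\qquad
m_{R}\,\ddot{u}_{R}(t)+k_{R}\bigl[u_{R}(t)-w(\mathbf{R},t)\bigr]=0 .

% Bloch (plane wave) expansion over reciprocal-lattice vectors, truncated to a
% finite set when the band structure is solved numerically:
w(\mathbf{r},t)=\sum_{\mathbf{G}}W(\mathbf{G})\,
e^{\,\mathrm{i}\left[(\mathbf{k}+\mathbf{G})\cdot\mathbf{r}-\omega t\right]} .
```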
The plates and resonators were set as the Linear Elastic Material domains in the FE models.The plane wave expansion method that can calculate the dispersion curves of periodic structures was employed in the present study to verify the FE models. The plane wave expansion method has been validated to effectively evaluate the band structures of metamaterials and phononic crystals ,60,61.Given that the thickness of the host plate was much smaller than its width and length, the displacement of the metamaterial plates and oscillating mass are given as (1)D\u22022\u2202xDue to the periodicity of the metamaterial plate, the displacement of the plate can be written as ,(3)w1r=Substitution of Equations (3)\u2013(5) into Equations (1) and (2) yields,SinceSubstitution of Equations (8) and (7) into Equation (6) yields,To solve the above equation, the infinite summation needs to be truncated. Assuming The wavenumbers could be solved with a given angular frequency and azimuth angle. The band structures of the periodic metamaterial plate can thus be obtained with the calculated wavenumbers.The dispersion curves of two periodic metamaterial plates estimated by the FE models are compared with that obtained by the analytical model in It can be seen from In order to exhibit the influences of the rainbow resonators and reveal the underlying mechanisms, the dynamic properties of periodic and rainbow metamaterial plates with the same host plates, springs and total resonator mass are calculated and compared in this section. The materials and unit dimensions of the metamaterial plates were the same as that mentioned in Two rainbow metamaterial plates with linearly and sinusoidally varying resonator masses are considered in this section. Linear and sinusoidal distributions were employed as they are the most common nonperiodic distributions and other more complex distributions can easily be generated based on the piles of these two distributions . It shouThe two rainbow metamaterial plates with linearly varying and sinusoidally varying resonators have resonator mass distributions asSupposing that the finite metamaterial plates are subjected to an excitation force at one point of the host plates, the transmissibility values The transmissibility values and dispersion curves of the periodic metamaterial plate are shown in The influences of resonator numbers on the transmissibility of the plate are explored in As is known, the dynamic properties of plate structures are greatly influenced by their boundary conditions. The influences of plate boundary conditions on the transmissibility of the metamaterial plates are also revealed in this section. The transmissibility values and dispersion curves of the rainbow metamaterial plate with linearly varying resonators are shown in Similar findings could be obtained based on the dispersion spectrum and transmissibility of the rainbow metamaterial plate with sinusoidally varying resonators as shown in The vibration attenuation of 2D rainbow metamaterial plates with spatially varying stepped resonators was investigated in the present paper. By comparing the dispersion spectrum and transmissibility of the two rainbow metamaterial plates with those of the periodic metamaterial plate, it was found that rainbow resonators could lead to wider vibration attenuation bands compared with periodic resonators. 
Although the additional mode shapes of the rainbow resonators could break the complete bandgap of the periodic metamaterial plate in to isolated narrower bandgaps, the vibration amplitude of the host plates was much smaller compared with that of the vibrating resonators at these modal frequencies, i.e., the vibration of the host plates was still largely reduced by the resonators. Extended integrated vibration attenuation bands were therefore formed regardless of the separated narrower bandgaps.The idea of broadening the attenuation band by non-periodicity proposed in the present paper could be instructive for future researchers to design more plate structures with better vibration attenuation. The investigated plane single-layered metamaterial plate could also be easily extended to plane and curved single-layered or multilayered plate structures for wider applications."} +{"text": "In recent years, multiscale modelling approach has begun to receive an overwhelming appreciation as an appropriate technique to characterize the complexity of infectious disease systems. In this study, we develop an embedded multiscale model of paratuberculosis in ruminants at host level that integrates the within-host scale and the between-host. A key feature of embedded multiscale models developed at host level of organization of an infectious disease system is that the within-host scale and the between-host scale influence each other in a reciprocal way through superinfection, that is, through repeated infection before the host recovers from the initial infectious episode. This key feature is demonstrated in this study through a multiscale model of paratuberculosis in ruminants. The results of this study, through numerical analysis of the multiscale model, show that superinfection influences the dynamics of paratuberculosis only at the start of the infection, while the MAP bacteria replication continuously influences paratuberculosis dynamics throughout the infection until the host recovers from the initial infectious episode. This is largely because the replication of MAP bacteria at the within-host scale sustains the dynamics of paratuberculosis at this scale domain. We further use the embedded multiscale model developed in this study to evaluate the comparative effectiveness of paratuberculosis health interventions that influence the disease dynamics at different scales from efficacy data. In the field of mathematical biology, we are beginning to witness an overwhelming appreciation of multiscale modelling in studying infectious disease systems dynamics ) to denote the percentage reductions of R0 when the two health measures are implemented. Therefore, the percentage reductions of the two health measures of PTB disease dynamics are calculated using the following expression:e = v = d = 0.1, comparative effectiveness at medium efficacy level (CEM-eff) which is taken to be e = v = d = 0.4, and comparative effectiveness at high efficacy level (CEH-eff) which is taken to be e = v = d = 0.8 using each of the basic reproductive number of the multiscale model system (R0). The results of the comparative effectiveness of the PTB health intervention strategies and their respective combinations are tabulated in The ranking of the percentage reductions of these two health interventions of PTB disease dynamics ranges from 1 to 8 based on the different combinations of the PTB health interventions. 
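To make the comparative-effectiveness calculation described above concrete, the snippet below ranks intervention combinations by the percentage reduction they produce in R0 at a common efficacy level (e = v = d). The percentage-reduction expression referred to in the text did not survive extraction, so the standard form 100 (R0_baseline - R0_intervention)/R0_baseline is assumed, and the multiplicative R0 used here is only a placeholder for the model's real expression; read the workflow from it, not the numbers.

```python
# Hypothetical sketch: e = environmental-hygiene efficacy, v = vaccine
# efficacy, d = efficacy of a drug killing within-host MAP bacteria.
def r0(e=0.0, v=0.0, d=0.0, r0_baseline=2.5):
    # Placeholder functional form; the real R0 comes from the multiscale model.
    return r0_baseline * (1.0 - e) * (1.0 - v) * (1.0 - d)

def percent_reduction(**eff):
    return 100.0 * (r0() - r0(**eff)) / r0()

level = 0.1                      # CEL-eff; use 0.4 / 0.8 for CEM-eff / CEH-eff
combos = {
    "EHM only":                 dict(e=level),
    "vaccination only":         dict(v=level),
    "MAP-killing drug only":    dict(d=level),
    "EHM + vaccination":        dict(e=level, v=level),
    "EHM + drug":               dict(e=level, d=level),
    "vaccination + drug":       dict(v=level, d=level),
    "EHM + vaccination + drug": dict(e=level, v=level, d=level),
}
ranking = sorted(combos, key=lambda c: percent_reduction(**combos[c]), reverse=True)
for rank, name in enumerate(ranking, start=1):
    print(f"{rank}. {name}: {percent_reduction(**combos[name]):.1f}% reduction of R0")
```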
In the ranking, 1 corresponds to the highest comparative effectiveness, and 8 corresponds to the lowest comparative effectiveness. In this study, we use a notation followed by the combinations of the killing of the within-host MAP bacteria effect and vaccination effect, while the combination of environmental-hygiene management effect and vaccination effect has a much lower comparative effectiveness and ranking the fifth when each individual intervention is assumed to have the same efficacy value. Therefore, a treatment that cures an infected ruminant with PTB infection complement with the good sanitation and hygiene practices of farmers is considerable good for the ruminant population in the herd because they reduce the risk of PTB transmission among the ruminantsR0 with an increase in efficacy from low level of 10% to moderate level of 40% and high level of 80%Finally, we considered the comparative effectiveness of the PTB interventions when all the components of the two PTB interventions (EHM and MBPT) are implemented together. We note from the results that this combination has the highest comparative effectiveness than any of the other combination interventions considered in this study for all efficacy levels . Therefore, a joint introduction of environmentally management, vaccination, and drug therapy can lead to a more considerable percentage reduction of The main innovation of this study is the development of an embedded multiscale modelling framework that enables investigation of the role of superinfection plays in paratuberculosis (PTB) disease dynamics of in ruminants. Unlike hybrid multiscale models (HMSMs), our multiscale model uses pathogen load as a common metric for infectiousness and disease transmission potential across the microscale and the macroscale. In HMSMs such as in , 22, patR0 indicate that the variation of the within-host scale parameters in particular the decay rate of the within-host MAP bacteria population have significant affect disease dynamics in the ruminant population. Therefore, considering that there are no drugs for PTB infection, the results of sensitivity analysis reveal that the development of a drug that kills and restricts replication of MAP bacteria at the within-host scale would have beneficial effect on the reduction of the transmission risk of the disease among the ruminants at between-host scale. We further used the multiscale model to assess the comparative effectiveness of two PTB health interventions which are (i) environmental-hygiene management (EHM) and medical-based preventive and treatment (MBPT). The comparative effectiveness results show that administration of drug treatments that kill MAP bacteria at individual ruminant level has a higher comparative effectiveness than other two PTB health interventions . The results also show that employing all PTB control measures could lead to a more considerable reduction of the disease at the start of infection. The embedded multiscale model developed in this study provides new insight into the role that superinfection plays on the dynamics of environmental disease systems with obligate pathogens that cannot replicate outside the host. The embedded multiscale model in this article also provides a better understanding about the reciprocal influence between the replication of pathogen load at within-host scale and transmission at between-host scale. 
We were also able to identify, through the sensitivity analysis of the basic reproductive number and the numerical simulations of the multiscale model, the key target parameters for eliminating paratuberculosis infection in ruminants. Although this study was focused on PTB infection in ruminants, the multiscale modeling framework itself is general enough and is applicable to guide control and elimination of many other environmentally transmitted diseases with obligate pathogens beyond the specific case study of PTB disease system.In this study, we established that superinfection of the ruminant does not significantly alter the total pathogen load within an infected ruminant. Collectively, the numerical results in this study establish that once the infection has successfully established at the within-host scale the replication of MAP bacteria sustains PTB disease dynamics. Further, the results of sensitivity analysis of"} +{"text": "The Journal retract the 6 April 2021 article cited above for the following reasons:Following publication, concerns were raised regarding the integrity of the images in the published figures. The authors failed to provide a satisfactory explanation during the investigation, which was conducted in accordance with Frontiers\u2019 policies.This retraction was approved by the Chief Editors of Frontiers in Oncology and the Chief Executive Editor of Frontiers. The authors did not agree to this retraction."} +{"text": "ABSTRACT IMPACT: Impaired neuromuscular control could lead to the failure in activation and deactivation of the target muscles in a timely manner, with the concurrence of activities of non-targeted muscles. OBJECTIVES/GOALS: Stroke leads to impaired capacity to manipulate objects with the hand in terms of timing and strength of grasp. The influence of abnormal coupling across more proximal arm muscles post stroke on the failure in normal functioning of finger flexor muscle activity is of interest to investigate. METHODS/STUDY POPULATION: We have recruited 12 participants with stroke hemiparesis in the sub-acute or chronic stage. Motor impairment of the arm was assessed using electromyography (EMG) and the Fugl-Meyer Upper Extremity (FMUE) assessment. Participants were requested to flex and relax the metacarpophalangeal (MCP) joints against motorized resistance in response to audible tones to determine timing and strength during flexion. They were asked to flex maximally, as quickly as possible, in response to the first of a pair of tones, and relax as quickly as possible after the second tone. Delays in initiation and termination were evaluated using EMG responses versus a predefined threshold. RESULTS/ANTICIPATED RESULTS: We anticipate greater delays in grasp initiation as well as in grasp termination in participants with a greater extent of abnormal coupling across more proximal muscles of the upper extremity in comparison to participants with a less extent, using the results of the FMUE assessment. Also it is expected that participants with a greater extent of the flexion synergy produce a less extent of force generation. The EMG results will show that activities of more proximal arm muscles precede the initiation of MCP flexion and their activity termination precedes that of MCP flexion, significantly more in participants with a greater extent of the flexion synergy. DISCUSSION/SIGNIFICANCE OF FINDINGS: The flexion synergy over the arm following stroke affects the timing and strength of hand grasp. 
Impaired neuromuscular control could lead to the failure in activation and deactivation of the target muscles in a timely manner, with the concurrence of activities of non-targeted muscles."} +{"text": "The journal retracts the 13 January 2020 article cited above.Following publication, concerns were raised regarding the integrity of the data in the published figures. The authors failed to provide a satisfactory explanation during the investigation, which was conducted in accordance with Frontiers' policies.This retraction was approved by the Chief Editors of Frontiers in Cellular Neuroscience and the Chief Executive Editor of Frontiers. The authors agree to this retraction."} +{"text": "The lockdowns and stimulus programmes that governments have adopted to fight the COVID-19 pandemic and the associated economic crisis have affected the distribution of income and production within and between countries. Considering both, current evidence indicates that the EU-wide and global inequality of disposable income did not change dramatically in 2020. However, the unequal impact on the wealth and health of people is likely to worsen income inequality in the future."} +{"text": "It supports the assumption that imaging Mueller polarimetry holds promise for the fast and accurate collagen scoring in pregnancy and the assessment of the preterm birth risk.Preterm birth risk is associated with early softening of the uterine cervix in pregnancy due to the accelerated remodeling of collagen extracellular matrix. Studies of mice model of pregnancy were performed with an imaging Mueller polarimeter at different time points of pregnancy to find polarimetric parameters for collagen scoring. Mueller matrix images of the unstained sections of mice uterine cervices were taken at day 6 and day 18 of 19-days gestation period and at different spatial locations through the cervices. The logarithmic decomposition of the recorded Mueller matrices mapped the depolarization, linear retardance, and azimuth of the optical axis of cervical tissue. These images highlighted both the inner structure of cervix and the arrangement of cervical collagen fibers confirmed by the second harmonic generation microscopy. The statistical analysis and two-Gaussians fit of the distributions of linear retardance and linear depolarization in the entire images of cervical tissue quantified the randomization of collagen fibers alignment with gestation time. At day 18 the remodeling of cervical extracellular matrix of collagen was measurable at the external cervical os that is available for the direct optical observations The accurate assessment of PTB risk is critical both to devise new treatment options as well as for the deployment of the available intervention focused on prolongating the pregnancy, such as cervical cerclage, pessaries, or special drug administration.Preterm birth (PTB) is a public health problem worldwide. PTB complications are the most important cause of death in neonatal infants, and many survivors will face long-term health challenges2. 
Several studies suggest that the evolution of mechanical properties of cervical tissue in pregnancy is related to cervix softening because of cervical ECM remodeling4.The two main functions of uterine cervix in pregnancy include: (1) maintain its load bearing capability and integrity in the first phase of pregnancy, thus, letting a fetus to develop properly until delivery time and (2) prepare to labor and delivery by cervical tissue softening through physical and chemical changes that are part of the cervix ripening process and will let the cervix dilate during delivery. The main constituent of the extracellular matrix (ECM) of cervical tissue is fibrillar collagen6 leading to premature delivery. The early detection of the PTB risk may prevent this event and decrease both mortality and morbidity in infants as well as lowering health care system expenditures.All steps of the uterine cervix remodeling process are drastically accelerated in preterm labor8. Despite the extensive preclinical studies11 there are no clinical tools available for the fast and accurate detection of a spontaneous PTB risk.The ultrasonographic examination of cervix and the test of fetal fibronectin may help to avoid unnecessary treatment in case of negative results, but these techniques are not used for the PTB screening in pregnant women because of the low accuracy13 suggests exploring optical polarization for the assessment of the remodeling of cervical ECM in pregnancy. The development of polarization sensitive optical techniques for the accurate, fast, and non-contact diagnosis of the PTB risk in clinical settings represents the real challenge, as these modalities have potential to revolutionize the current medical practice of PTB risk diagnosis. The most promising approach includes using the complete Mueller polarimetry16 combined with the state-of-the-art Mueller matrix decompositions and data processing algorithms20, because (i) Mueller matrix images of a sample contain information on all polarimetric properties 14 of a sample, contrary to the incomplete polarimetric techniques , and (2) imaging Mueller polarimetry does not require sample scanning, all 16 Mueller matrix images of the entire cervix can be imaged in a few seconds an one wavelength22. Using Mueller polarimetric system in a visible wavelength range for the PTB risk assessment is a natural and safe choice, because visible light is harmless for the patients. Preliminary in vivo polarimetric studies of human uterine cervix in reflection configuration22 revealed linear birefringence of healthy cervical tissue; however, the detailed studies are needed to relate the observed optical anisotropy to the state of collagen ECM of uterine cervix during the pregnancy. Hence, we unavoidably face the fundamental questions:Is optical polarization sensitive to the modifications of collagen arrangement in the uterine cervix during pregnancy?Could the polarimetric signature of cervical collagen rearrangement be detected early enough to serve as a predictive marker of PTB risk despite shallow penetration depth of visible light in tissue because of strong scattering?Our case study of mice model of pregnancy address these questions using the custom-built complete imaging Mueller polarimetric system23 operating in a visible wavelength range in transmission configuration.An extreme sensitivity of polarized light to the subtle alterations of the structural components of such complex object as biological tissue14. 
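A rough sketch of how the logarithmic (differential) decomposition mentioned earlier turns a measured Mueller matrix into the maps analysed below (total linear retardance, azimuth of the optical axis, linear depolarization) is given here. Sign and index conventions differ between formulations, so the element choices are assumptions rather than the authors' exact algorithm.

```python
import numpy as np
from scipy.linalg import logm

G = np.diag([1.0, -1.0, -1.0, -1.0])     # metric used to split the logarithm

def lmmd(M):
    """Logarithmic Mueller matrix decomposition of one pixel.

    Returns total linear retardance (rad), azimuth of the optical axis
    (deg) and two linear depolarization coefficients.
    """
    L = np.real(logm(M / M[0, 0]))        # normalise by M00, keep real part
    Lm = 0.5 * (L - G @ L.T @ G)          # polarization (mean) part
    Lu = 0.5 * (L + G @ L.T @ G)          # depolarization (uncertainty) part

    lr_0_90 = Lm[2, 3]                    # linear retardance, 0/90 deg axes
    lr_45 = Lm[1, 3]                      # linear retardance, +/-45 deg axes
    total_linear_retardance = np.hypot(lr_0_90, lr_45)
    azimuth = 0.5 * np.degrees(np.arctan2(lr_45, lr_0_90))
    linear_depolarization = (Lu[1, 1], Lu[2, 2])
    return total_linear_retardance, azimuth, linear_depolarization
```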
We focused our analysis on the images of the total linear retardance Thin unstained sections of the upper and lower parts of the mouse uterine cervix from mice model of normal pregnancy at days 6 and 18 of a 19-days gestation period were imaged with the Mueller matrix polarimeter in transmission configuration. The experimentally recorded Mueller matrices were processed with the logarithmic Mueller matrix decomposition (LMMD) to obtain the images of polarization and depolarization parameters24 are shown in Fig.\u00a0The total linear retardance images of the sections from upper and lower parts of the cervix (day 6) demonstrate circular arrangement of the collagen fibers in a midstroma around cervical canal compared to the signals of ultrasound and MRI modalities36 or pathological zones40 that are not visible in the unpolarized intensity images. An imaging Mueller polarimeter integrates the signal over the volume of the probed tissue. This volume is defined by the thickness of tissue slab when measurements are performed in transmission configuration and by light signal penetration depth for the measurements in backscattering configuration. The latter is the most relevant optical measurement geometry for in vivo medical applications. Cervical ECM remodeling starts from the internal os towards the external os, the latter can be examined by a medical doctor during the colposcopy test41. The question arises \u2013 at which moment of pregnancy the ECM remodeling and consequent softening of cervix can be detected by an optical technique operating within the visible wave band (harmless for a patient) and having shallow penetration depth? Will the risk of PTB be detected early enough to allow for the deployment of necessary treatment to prevent PTB? The illustration of this problem is shown in Fig.\u00a0It was shown that the wide-field imaging Mueller polarimetry in backscattering configuration provides high contrast images and highlights tissue microarchitectureThe images of tissue sections from both upper and lower parts of the mouse uterine cervix at day 18 of pregnancy are shown in the third and fourth columns of Figs.\u00a0It confirms that the remodeling of ECM of collagen induced by cervix ripening can be detected with imaging Mueller polarimetry at least one day prior to delivery at day 19 in mice model of normal pregnancy.For the quantitative cervical collagen scoring during the course of pregnancy we conducted a statistical analysis of the distributions of the polarimetric parameters. The histograms of total linear retardance and linear depolarization in the images of the sections of lower part of mice cervices at days 6 and 18 of pregnancy are shown in Fig.\u00a0The mean values of the total linear retardance are relatively small and analyzed the distributions of all polarimetric parameters over the whole image, because such approach removes any operator-dependent bias in selecting the ROIs. 
In such case the parameters of circular statistics of the azimuth distribution show no significant difference between day 6 and day 18 of pregnancy, because the circularly arranged collagen fibers of midstroma at day 6 are covering the range of azimuth angles from 0 degrees to 360 degrees as well as the randomly distributed collagen fibers at day 18.The kurtosis of distribution of the azimuth of the optical axis was shown to be a promising metrics for the assessment of collagen arrangement in different zones of the cervix at different gestation time42.The polarimetric measurements of tissue in backscattering geometry will inevitably deal with the specular reflection of incident light on the uneven surface of tissue that will affect the backscattered signal. Such image pixels, however, can easily be detected and omitted from the analysis, because of a low depolarization of light specularly reflected by a surfaceThe suggested metrics for collagen scoring by the analysis of mean value, standard deviation, skewness and kurtosis of the distributions of retardance and depolarization in the images of cervical tissue as well as the parameters of fit with two Gaussian distributions do not require any image segmentation and selection of the special regions of interest for data analysis, and contain the valuable information on remodeling of the cervical ECM of collagen that may serve for the prediction of PTB risk.We have demonstrated the capability of a new promising polarimetric imaging technique, namely, imaging Mueller polarimetry operating in a visible wavelength range to detect the cervical ripening in mice model of normal pregnancy. We showed that cervical ripening starts from the internal os and progresses towards the external os of cervix. The state of cervical ECM remodeling is not detectable in the standard intensity images but is highly contrasted in the images of the total linear retardance and the azimuth of the optical axis. The circumferential arrangement of midstroma collagen around the cervical canal and subepithelial stroma zone was seen in the above mentioned images and confirmed by the SHG microscopy measurements.The polarimetric images of cervical tissue sections at day 18 confirmed the remodeling of cervical ECM throughout the entire cervix and, thus, complete ripening of mouse uterine cervix one day before delivery at day 19. As it was mentioned the PTB is characterized by the drastic acceleration of the normal process of cervix ripening. We demonstrated that the remodeling in cervical collagen affected the entire cervix at least one day before delivery in mice model of pregnancy. Hence, this remodeling can be detected with the imaging Mueller polarimetry in reflection geometry as well, despite the fact that the penetration depth of Mueller polarimetry does not exceed couple of millimeters in cervical tissue in a visible wavelength range.17, and suggested collagen scoring metrics can be applied for the distributions of the retardance and depolarization values in the corresponding images of uterine cervix recorded in vivo.The most informative collagen scoring metrics include the statistics on the total linear retardance and the linear depolarization images of cervical tissue that were calculated with the logarithmic decomposition of the recorded Mueller matrices. 
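The whole-image statistics and the two-Gaussian fit used above for collagen scoring can be reproduced along the following lines. The routine is a sketch under the assumption that the retardance (or depolarization) map has already been computed pixel-wise; the bin count and the initial guesses for the fit are placeholders that would need tuning for real data.

```python
import numpy as np
from scipy import stats
from scipy.optimize import curve_fit

def two_gaussians(x, a1, mu1, s1, a2, mu2, s2):
    return (a1 * np.exp(-0.5 * ((x - mu1) / s1) ** 2) +
            a2 * np.exp(-0.5 * ((x - mu2) / s2) ** 2))

def collagen_scoring_metrics(param_map, bins=100):
    """Moments and two-Gaussian fit of a polarimetric parameter map."""
    v = param_map[np.isfinite(param_map)].ravel()
    moments = dict(mean=v.mean(), std=v.std(),
                   skewness=stats.skew(v), kurtosis=stats.kurtosis(v))

    hist, edges = np.histogram(v, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    p0 = [hist.max(), np.percentile(v, 25), v.std() / 2 + 1e-6,
          hist.max() / 2, np.percentile(v, 75), v.std() / 2 + 1e-6]
    popt, _ = curve_fit(two_gaussians, centers, hist, p0=p0, maxfev=10000)
    return moments, popt   # popt = (a1, mu1, s1, a2, mu2, s2)
```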
Various Mueller matrix decompositions are available for the matrices recorded in backscattering configuration. Whereas the mouse reproductive system is different from that of the human, the evolution of the ECM of the cervix during pregnancy in mice is representative of the same process in humans, but with a shorter time scale. It supports our assumption that Mueller polarimetry has the potential to become a technique of choice for fast, accurate and non-contact cervical collagen scoring and quantitative in vivo assessment of the PTB risk in humans during the colposcopy test [22]. The mouse strain C57BL6/129sv was used, and the pregnant females in this study were between 3 and 6 months of age. For timed pregnancies, breeding pairs were set up in the morning for 6 hours. The presence of a vaginal plug at the end of the 6-hour period was considered day 0 of pregnancy. The birth of pups generally occurred in the early morning on day 19. Mice were housed in an approved animal resource facility. All animal procedures were performed in accordance with the standards of humane animal care following the NIH Guide for the Care and Use of Laboratory Animals. The research protocols were reviewed and approved by the Institutional Animal Care and Use Committee at the University of Texas Southwestern Medical Center (registration number: IACUC 2016-101519) and at Florida International University (registration number: IACUC-20-014). All animals were maintained and used in accordance with the ARRIVE guidelines [45]. The mouse uterine cervix consists mainly of connective tissue. It has a cylindrical shape, and there is an inner cervical canal that connects the uterus and the vagina. A cryostat (Leica CM3050) was used for the transverse cryosectioning of the whole length of the cervix. Measurements with the custom-built liquid crystal-based Mueller polarimetric imaging system [47], shown in Figs. 9a and 9b, were performed to calculate the sixteen coefficients of a sample Mueller matrix for each image pixel. Non-linear SHG microscopy images were obtained with the SAMMM system [24]. Several algorithms of nonlinear data compression were proposed for the physical interpretation of Mueller matrix data. Among all algorithms, the logarithmic decomposition (LMMD), developed for transmission measurements, considers all optical properties as continuously distributed through the volume of a medium along the path length of the probing beam. This makes LMMD particularly suitable for studies of biological tissues. The key steps of LMMD are briefly recalled below. Within the framework of the differential matrix formalism of a fluctuating anisotropic medium [51], the transmission Mueller matrix M(z) of a medium of thickness z obeys dM(z)/dz = m M(z) and is associated with a unique differential matrix m, which can be decomposed into a sum of matrices [52]. Spatial averaging is performed in the plane transverse to the direction of light propagation. To obtain the differential matrix m at thickness z, the following relations hold [23], where A and B are coefficients that do not depend directly on the local thickness of the sample. It follows from the Beer-Lambert law that the intensity I of the transmitted light decreases exponentially with the physical thickness z of the sample.
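To make the decomposition step concrete, the sketch below applies a logarithmic decomposition to a single measured 4x4 Mueller matrix: the matrix logarithm is split into its G-antisymmetric part (polarization properties such as linear retardance) and its G-symmetric part (depolarization), with G = diag(1, -1, -1, -1). This is only a minimal illustration of the general formalism; the function name and the specific scalar read-outs at the end follow common conventions chosen by us and are not necessarily the exact expressions of the cited references.

```python
import numpy as np
from scipy.linalg import logm

G = np.diag([1.0, -1.0, -1.0, -1.0])  # Minkowski-like metric used in this formalism

def lmmd(M):
    """Logarithmic decomposition of a single-pixel 4x4 Mueller matrix M.

    Returns the G-antisymmetric part Lm (polarization: retardance, dichroism)
    and the G-symmetric part Lu (depolarization), plus illustrative scalars."""
    L = np.real(logm(M))            # matrix logarithm of the Mueller matrix
    Lm = 0.5 * (L - G @ L.T @ G)    # G-antisymmetric -> polarization properties
    Lu = 0.5 * (L + G @ L.T @ G)    # G-symmetric     -> depolarization properties

    # Commonly used scalar read-outs (illustrative):
    lin_ret_0_90 = Lm[2, 3]         # linear retardance along the 0/90 deg axes
    lin_ret_45 = -Lm[1, 3]          # linear retardance along the +/-45 deg axes
    total_linear_retardance = np.hypot(lin_ret_0_90, lin_ret_45)
    azimuth = 0.5 * np.arctan2(lin_ret_45, lin_ret_0_90)   # optical-axis azimuth
    linear_depolarization = -0.5 * (Lu[1, 1] + Lu[2, 2])    # mean of linear terms

    return Lm, Lu, total_linear_retardance, azimuth, linear_depolarization
```

Applying such a routine pixel by pixel yields the retardance, azimuth and depolarization maps whose distributions are analyzed above; the Beer-Lambert relation just mentioned is what motivates the thickness-independent ratios introduced next.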
Consequently, the ratio of the parameters described below should not depend directly on the local thickness z, even though the A and B parameters may vary across the sample. The optical anisotropy, scattering and absorption properties of tissue, as well as the path length of the detected light beam that travelled through the tissue, will all affect the measured values of the tissue polarization and depolarization parameters. When a thin tissue section is measured in transmission, the path length of the detected light beam is equivalent to the sample thickness. Thus, the thickness of the tissue section will also impact the polarization and depolarization parameters calculated with LMMD from the experimental Mueller matrices. We suggest mitigating the spatial fluctuations of the thickness of a tissue section in the transverse plane by exploring Eq."} +{"text": "To examine the impact of using Community Treatment Orders (CTOs) under the Mental Health Act on the use of inpatient care in an Assertive Outreach team. Currently there is little evidence of the efficacy of community treatment orders (CTOs), in particular with patients who use the Assertive Outreach service. One large randomised controlled study found no impact on the use of inpatient care, while a naturalistic study found a significant impact. Our primary outcome was the number of admissions with and without a CTO, comparing each patient with themselves before the CTO and under the CTO ("mirror-image"). Our secondary outcomes were the number of bed days and the percentage of missed community visits post-discharge. We also looked at the potential cost savings of a reduction in inpatient bed usage. All 63 patients studied over the 6-year period had a severe and enduring mental illness. The use of a CTO was linked to a significant reduction in the number of admissions and bed days. There was no significant difference in the percentage of missed community visits post-discharge. Looking at the costs, the average cost of an inpatient Assertive Outreach bed per day in the local Trust was £250, and there were 8145 bed days saved in total, making a potential saving of just over £2 million during the study period. This study suggests that the implementation of CTOs using clinical judgment and knowledge of patients can significantly reduce the bed usage of Assertive Outreach patients. The financial implications of CTOs need to be reviewed further, but this study does suggest that the implementation of CTOs is a cost-effective intervention and is economically advantageous to the local Trust."} +{"text": "However, we think the proposed approach is still reductionistic in scope, resulting in various problems concerning the development of adaptive and flexible police officers. Koedijk et al. recently proposed a step-by-step methodology for police organizations to develop an observational behavior assessment instrument to assess psychological competencies of police officers in a meaningful way. We strongly appreciate widening the scope of the assessment of psychological competencies beyond the pure measurement of constructs. Koedijk et al. build their assessment approach on the assumption that police officers need to develop a certain level of competence, which can then be evaluated through observable competencies. While, broadly, competence refers to the practitioner's integrated skills, assessing only the observable behavior falls short. Instead, emphasis has to be put on the thinking and decision-making process underlying these behaviors.
In this way, a focus on the thinking process aids the development of declarative over procedural knowledge (Cruickshank et al.). Police organizations neglecting the underlying thinking processes when assessing observed behavior may nurture problematic aspects concerning the training and education of police officers (Koerner and Staller): police officers training for the test instead of for performance in the field, by focusing on the desired output instead of the underlying production process of any behavioral solution; police trainers focusing on the competencies to be shown, thus neglecting the problem-solving processes of their learners; and a neglect of the development and assessment of individual solutions to problems that emerge as a result of the individual problem-solving process. Taken together, we would like to encourage the authors (and police organizations) to refrain from a competence-oriented approach, which underlies the conceptualization of the proposed assessment (Fouad et al.). MS and SK contributed equally to the ideas presented. MS wrote the first draft of the paper. Both authors contributed equally to editing the first draft to its final version. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Although many outstanding research challenges remain to be addressed, the successful development of new treatments to preserve female fertility for a range of clinical indications has largely been underpinned by the conduct of extensive, fundamental research on oocytes and ovarian tissues from a number of laboratory and commercially important farm species. Indeed, the most recent evidence from large animals suggests that it is also possible to cryopreserve intact whole ovaries along with their supporting vasculature for later auto-transplantation and restoration of natural fertility. This review will explore how the methods developed to preserve human oocytes and ovarian tissues can now be used strategically to support the development of conservation strategies aimed at safeguarding the genetic diversity of commercially important domestic animals and also at preserving the female germplasm of wild animals and endangered species. A detailed understanding of the cryobiology of gametes and complex tissues has led to the development of methods that facilitate the successful low-temperature banking of isolated mature human oocytes, or of immature oocytes. In the last few decades, advances in cryobiology have been combined with the development of new assisted reproduction technologies (ARTs) and used as a means to cryopreserve the structural integrity and biological function of key reproductive cells. Translation of these research advances has resulted in the development of the capacity to cryopreserve and long-term store isolated gametes, embryos, complex gonadal tissues and even whole reproductive organs in humans and laboratory species as well as commercially important farm animal breeds and a limited number of exotic or endangered species.
Indeed, the issue of fertility preservation is particularly relevant in animals as, over the last two decades, some 300 of 6000 farm animal breeds have become extinct and a further 1350 domestic breeds are threatened with extinction as a result of aggressive animal breeding strategies using limited genetic stocks of animals of high merit for a range of economically valuable traits. The cryopreservation process comprises several elements: cooling to subzero temperatures, and either removal or conversion of the greater majority of liquid water within the cells into a solid state. The reverse occurs during thawing or warming. Any of these elements can inflict damage on the cells to be preserved, although the level and nature of the damage is dependent on the cryobiological properties of each individual cell type. With regards to the cryopreservation of complex tissues containing multiple cell types, such as the ovary, the success of cryopreservation is dependent on the need to balance the freezing optima for a range of different cell types, which are influenced by cell number and size and, in the case of oocytes, maturational status, as well as on the requirement to preserve the structural integrity of the tissue. Furthermore, the nature of the cryopreservation methodology, whether slow freezing or vitrification, the cooling and warming rates, and the containment vessel used for tissue preservation and storage will all influence the efficacy of the preservation method and hence impact on the subsequent viability of the preserved tissue. Finally, the unique biological properties of oocytes challenge our potential to freeze-store these important cells. Mammalian oocytes are very large cells of approximately 120 µm diameter with a small surface-area-to-volume ratio and high lipid and water content. The latter confers a high sensitivity to chilling injury and intracellular ice formation. In MII oocytes these parameters are further confounded by the presence of a fragile cytoskeleton that is resistant to the volumetric excursions commonly associated with equilibrium freezing and a highly temperature-sensitive meiotic spindle apparatus. Regardless of species, it is clear that the success of oocyte cryopreservation is dependent on oocyte quality, a key parameter that is profoundly and negatively influenced by advancing maternal age. Regardless of species, GV oocyte cryopreservation requires the freeze-storage of both the gamete and its supporting complement of cumulus granulosa cells. The success of GV oocyte cryopreservation is critically dependent on the post-thaw/warming maintenance of the functional integrity of the heterologous gap junctional contacts connecting the cumulus cells to the oocyte. This network of cumulus cells supplies the oocyte with vital nutrients and signaling molecules that are essential to drive the cytoplasmic and nuclear maturation of the oocyte to the MII stage. Furthermore, cumulus cells have distinctly different cryopreservation optima compared to oocytes. The loss of gap junctional contacts between the cumulus cell compartment and the GV oocyte is a common casualty of the cryopreservation process, such that both the capacity of the oocyte to undergo in vitro maturation and its subsequent fertile potential and developmental competence are severely compromised. Finally, GV oocyte preservation must be supported by the provision of robust culture environments that support oocyte maturation in vitro.
Insight into the discrete, species-specific differences in the composition of the culture environment required to drive oocyte maturation will be needed to maximize oocyte quality after cryopreservation. Some of the difficulties associated with the cryopreservation of MII oocytes may potentially be overcome by the preservation of fully grown, but nuclear immature, germinal vesicle (GV) staged oocytes. However, this option has proved inconsistent and, like the MII oocyte preservation detailed earlier, success rates appear to be influenced by species. In addition to rodents and ruminants, successful GV oocyte vitrification has recently been reported in equids and the Mexican gray wolf. When fertility preservation strategies require the removal of the whole ovary and it is age appropriate to do so, ovarian cortex harvest and cryopreservation can be most effectively combined with methods for in vitro maturation and vitrification of MII oocytes to maximize the likelihood of a future successful pregnancy outcome for the patient. Although significant progress has been made in the development and use of ovarian tissue cryopreservation as a means to safeguard the future fertility of girls and young women at risk of POF following exposure to the ovotoxic impact of chemotherapy agents for the treatment of cancer, further optimization of the cryopreservation and transplantation protocols is likely to be beneficial, as the longevity of ovarian autograft function following transplant remains unclear. The latter is likely to be determined by patient age at tissue harvest and by the degree of follicle loss that results from ischaemia and reperfusion injury following ovarian tissue transplantation. Importantly, further research is also needed to define and mitigate against any potential risk of reseeding cancer cells through the transplanted ovarian tissues. Multiphase culture strategies are being developed to support the complete in vitro growth and maturation (IVGM) of oocytes from the primordial follicle stage to maturity in the laboratory. To date, the production of healthy, live offspring from primordial follicles grown and matured in vitro has only been achieved in mice, although such culture systems are also being developed for larger species such as sheep. The subsequent fertile capacity of the in vitro derived oocytes and, ultimately, the efficiency of IVGM strategies for cryobanked tissues will be species specific. Furthermore, it is likely that no single IVGM strategy will fit all species, and IVGM strategies for rare or wild species will likely have to be used in conjunction with xenografting in order to realize the fertile potential of the stored germplasm. The stage is now set to translate these clinical advances for animal conservation and to use them to develop comprehensive strategies that will not only safeguard the future genetic diversity of commercially important domestic species but will also facilitate germplasm preservation for animals at risk of extinction. In conclusion, although many questions remain to be answered, considerable recent progress in cryobiology, reproductive science, and IVGM technology has led to therapeutic advances in clinical ART that have significantly improved our ability to cryopreserve female fertility by banking primordial oocytes in situ within ovarian tissues or by the vitrification of MII oocytes"} +{"text": "Caring is described as the innermost core of nursing, which occurs in a relationship between the patient and the care provider.
Although caring in nursing is associated with maintaining and strengthening of the patient\u2019s sense of dignity and being a person, there seems to be a gap between caring theories in nursing, healthcare policies and caring for patients by professional nurses in primary health care clinics. Developing strategies that will facilitate effective caring for patients by professional nurses in primary health care clinics within an ethical and mindful manner became an area of focus in this study.To develop strategies to facilitate effective caring for patients by professional nurses in primary health care clinics in South Africa.Strategies were developed based on the conceptual framework developed in Phase 2, which was derived from synthesis of the results of Phase 1 of the previously conducted study and supported by literature. The conceptual framework reflects the survey list of Dickoff, James and Wiedenbach\u2019s practice theory.Three strategies were developed: 1) facilitating maintaining of the empowering experiences; 2) facilitating addressing the disempowering experiences by professional nurses, and 3) facilitating addressing of the disempowering primary health care clinic systems.The developed strategies, being the proposed actions, procedures and behaviours, could facilitate effective caring for patients by professional nurses in primary health care clinics. Caring Science is based on the philosophy of Human Caring, a theory articulated by Watson as a foundational covenant to guide nursing as a discipline and profession is considered as a fundamental component of the health transformation process in post-apartheid South Africa. The PHC approach\u2019s strength is based on its response to the local needs of individuals, families and populations through a comprehensive, intersectoral approach that focuses on communities as the unit of intervention Model Empowering experiences in caring for patients, 2) Disempowering experiences in caring for patients and 3) Disempowering experiences with PHC clinic systems. Literature regarding: 1) facilitating maintaining of the empowering experiences, 2) facilitating addressing the disempowering experiences, 3) facilitating addressing of the disempowering PHC clinic systems were the guiding principles in developing the strategies. The strategies in this study entail proposed actions to be taken, behaviours to be adopted and procedures to be performed to facilitate effective caring for patients by professional nurses. The strategies also include a cluster of decisions to be taken to pursue the goal to facilitate effective caring for patients by professional nurses. The conceptual framework was developed from the researcher\u2019s thinking map as the basis for developing strategies, reflecting the practice theory survey list of Dickoff, James and Wiedenbach :415\u2013435.The conceptual framework in this study reflects the activities or the strategies undertaken by the nurse manager and professional nurses to facilitate effective caring for patients by professional nurses in PHC clinics. The objectives to be achieved, proposed actions to be taken and how the available resources will be used to achieve the goal of effective caring for patients by professional nurses were listed. The decisions regarding the procedures to be implemented and the behaviours to be adopted were also made. 
The conceptual framework enabled the researchers to interpret and link the study findings into practice and also served as a tool for connecting concepts to provide a context for interpreting the study findings. The questions of the survey list were as follows: Who is the agent?, Who is the recipient?, What is the context, What is the procedure or process?, What are the dynamics?, and What is the purpose or outcome?A brief outline of the six elements follows:Agent \u2013 Who performed the activity?The nurse manager is the agent who plays an active role in facilitating the process of professional nurses effectively caring for patients. The facilitative role focuses on maintaining empowering experiences and addressing disempowering experiences through a continuous interactive process that ensures engagement, support and empowers professional nurses by guiding them in realising the nursing goals. The nurse manager also plays a significant role in facilitating the implementation of activities that promote the delivery of effective caring for patients by professional nurses in accordance with the patients\u2019 health needs.Recipient \u2013 Who was the recipient of the activity?The professional nurses are the recipients who receive guidance and support from the nurse manager on maintaining empowering experiences and addressing disempowering experiences whilst they actively engage themselves and play an active role in the facilitation and implementation of the process to facilitate effective caring for patients.Context \u2013 In what context was the activity performed?The context for this study is the PHC clinics rendering comprehensive PHC services to patients; the nurse manager and professional nurses caring for patients in PHC clinics. The nurse manager supports and often takes part in caring for patients, especially when the PHC clinics are too full, to off-load the heavy workload from the professional nurse.Procedure \u2013 What was the guiding procedure?The outline of the procedure in this study is to develop strategies to facilitate effective caring for patients by professional nurses in PHC clinics. The nurse manager, being the agent, performs the activities of the facilitation procedure in interaction with professional nurses. The professional nurses, are the recipients of the activities of the facilitation procedure, aimed towards the realisation of the outcome of the study.Dynamics \u2013 What was the energy source for the activity?The dynamics are the driving forces or motivation to provide effective caring for patients. The empowering and disempowering experiences encountered by the professional nurses and the patients in the PHC clinics motivate the nurse manager in facilitating effective caring for patients.Purpose/outcome \u2013 What was the purpose/end product of the activity?The nurse manager facilitates effective caring for patients by professional nurses through engagement, empowerment and support of professional nurses. The professional nurses provide effective caring for patients in PHC clinics.n = 10) to 20 (n = 20) different staff categories, whilst a few also open on Saturdays for half a day. As revealed by the findings in Phase 1 of the main study, these PHC clinics are faced with challenges such as shortage of medicines, staff and functional medical equipment.The setting of this study was the PHC clinics in Ekurhuleni, an area in the eastern part of Gauteng Province, South Africa. 
The PHC clinics provide comprehensive health care services such as maternal, child and reproductive health, human immunodeficiency virus, tuberculosis testing and treatment, screening and care for non-communicable diseases and treatment of common ailments through the DHS maintaining of the empowering experiences by professional nurses, 2) addressing the disempowering experiences of professional nurses as well as 3) addressing of the disempowering PHC clinic systems were the guiding principles in developing the strategies. The developed strategies will guide and assist professional nurses in PHC clinics in utilising effective caring behaviours displayed as kindness, empathy, respect and helpfulness in caring for patients within an ethical, reflective and knowing framework behaviours. The developed strategies will also enable professional nurses to maintain the empowering experiences and address their disempowering experiences in this environment."} +{"text": "In the original version of this article, the given and family names of Christiane Dienhart were interchanged. The original article has been corrected."} +{"text": "In the last 10 years, we have experienced exceptional growth in the development of machine-learning-based (ML) algorithms for the analysis of different medical conditions and for developing clinical decision support systems. In particular, the availability of large datasets and the increasing complexity of both hardware and software systems have enabled the emergence of the new multidisciplinary field of computational neuroscience models are increasingly used by scientific communities due to their higher accuracy and efficiency. Deep learning comprises different classes of algorithms that implements artificial neural networks with deep layers. These models have proved effectiveness in a wide range of applications since they can be used even with non-trivial relationships among the features of a predictive task and between the features and the outcomes. On the other hand, due to the high inner complexity of the algorithms, is often difficult to obtain insights into the workings of the deep learning models. Their \u201cblack-box\u201d nature makes the models less trustworthy to physicians, thus hindering their expansion into real clinical settings has emerged, which aims to provide new methodologies and algorithms to enhance transparency and reliability to both the decisions made by predictive algorithms and the contributions and importance of individual features to the outcome for gender classification. They exploited information provided by graph architecture on functional connectivity networks by means of GNNs which comprise graph operations performed by deep neural networks and demonstrated that the gender classification method is able to effectively extrapolate state-of-the-art results by achieving high accuracy values. At the same time, the authors introduced an important mathematical formalization concerning the relationship between GNN and CNN. Based on this relationship, they used a saliency map visualization technique for CNN, i.e., the gradient-weighted class activation mapping (Grad-CAM) to visualize the important brain regions resulting from the classification task, overcoming the current limitation issue about the interpretability of the GNN architectures.Bu\u010dkov\u00e1 et al.. 
also addressed the gender classification task. The authors tested a deep convolutional neural network trained to identify biological gender from EEG recordings of a healthy cohort on another dataset of EEG data from 134 patients suffering from Major Depressive Disorder. In their work, they developed an explainable analysis to verify the discriminative power of beta-band power and to test its effectiveness before and after antidepressant treatment by highlighting the contribution of each electrode in order to clearly identify the final set of biomarkers. Lopatina et al. used several XAI methods based on attribution maps (heatmaps) in conjunction with a CNN to identify multiple sclerosis patients from 2D susceptibility-weighted imaging scans and to highlight individual heatmaps indicating the contribution of a given voxel to the classification decision. The crucial role of XAI methods for clinically personalized analysis was explored in the work of Lombardi et al., who showed how to use local XAI algorithms to extract personalized information about the importance of several brain morphological descriptors, extracted from the MRI scans of a healthy cohort of subjects, for the prediction of biological age. The authors presented an explainable DL framework to evaluate the accuracy of the models while achieving high interpretability of the contribution of each brain morphological feature to the final predicted age. They introduced two metrics to compare different XAI methods and quantitatively establish their reliability in order to choose the most suitable one for the age prediction task. Varzandian et al. adopted the brain MRI scans of 1901 subjects to train a predictive model based on the apparent brain age and the chronological age to classify Alzheimer's disease patients. The authors developed a workflow to perform the regression and classification tasks while maintaining the morphological semantics of the input space and providing a feature score to assess the specific contribution of each morphological region to the final outcome. Finally, although the concept of interpretability has several implications, one of the most important concerns the ability to understand the errors and pitfalls of ML and DL algorithms. Bae et al. focused their work on revealing misleading points that may arise from the pre-defined feature space. They used a DNN architecture to simulate four different problem scenarios, such as the incorrect assessment of feature selectivity, the use of features that act as confounding variables, the overestimation of the network feature representation, and several misassumptions regarding feature complexity.
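As a minimal illustration of the attribution-map idea running through these works, the sketch below computes an input-gradient saliency map for a generic trained classifier; it assumes a PyTorch model that outputs one score per class and is meant only to show how per-electrode or per-voxel contributions can be obtained, not to reproduce the pipelines of the cited studies.

```python
import torch

def input_gradient_saliency(model, x, target_class):
    """Attribution map: gradient of the target-class score w.r.t. the input.

    model        : a trained torch.nn.Module classifier (e.g. a CNN on EEG or MRI data)
    x            : one input sample of shape (1, channels, ...) -- EEG epoch, image, volume
    target_class : index of the class whose evidence is attributed to the input
    """
    model.eval()
    x = x.clone().detach().requires_grad_(True)
    score = model(x)[0, target_class]   # scalar class score / logit
    score.backward()                    # fills x.grad with d(score)/d(input)
    saliency = x.grad.abs()             # magnitude of contribution per input element
    # Collapse to one value per channel/electrode (average over the remaining axes).
    per_channel = saliency.flatten(start_dim=2).mean(dim=2).squeeze(0)
    return saliency.squeeze(0), per_channel
```

Averaging such maps over correctly classified subjects gives group-level importance maps similar in spirit to the electrode-wise contributions and voxel heatmaps described above.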
In conclusion, all the works included in this Research Topic outline the potential effects of XAI techniques in different diagnostic scenarios and show how empirical studies could draw future directions for boosting XAI in real clinical applications.All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication.This work was supported in part by the research project Biomarcatori di connettivit\u00e0 cerebrale da imaging multimodale per la diagnosi precoce e stadiazione personalizzata di malattie neurodegenerative con metodi avanzati di intelligenza artificiale in ambiente di calcolo distribuito (project code 928A7C98) within the Program Research for Innovation -REFIN funded by Regione Puglia in the framework of the POR Puglia FESR FSE 2014-2020 Asse X - Azione 10.4.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher."} +{"text": "African American caregivers are often confronted with the complexities of caregiving through the lens of race and associated health disparities. The COVID-19 pandemic has both exacerbated the systemic disparities and deeply rooted inequities experienced by African Americans and laid bare their effects on the community of caregivers. The purpose of this project was to explore the experiences of African American dementia caregivers during the COVID-19 pandemic. Nineteen African American caregivers of persons living with dementia were recruited by primary investigators and community partners with purposeful sampling techniques to participate in semi-structured focus groups that were held April 2021. Four overarching themes were constructed during thematic analysis: social isolation, decreased well-being, the good and bad of telehealth, and challenges in fulfilling the caregiver role. Caregivers expressed that they became socially isolated from family and friends, which led to them becoming depressed and mentally strained. Several caregivers felt they could not carry out their caregiver duties due to the constraints surrounding the pandemic. The varying levels of interaction with and the comfort level of physicians utilizing telehealth led to caregivers having mixed reviews on the popularized service. The results of this study will be used to culturally adapt caregiving education courses and programs promoting mastery and competency during a pandemic. In preparations for future public health crises, healthcare professionals will be able to use the results of this study to address the specific needs and improve the experiences of African American dementia caregivers."} +{"text": "Inflammatory bowel diseases (IBDs) and irritable bowel syndrome (IBS) are associated with decreased quality of life and mental health problems. Among various approaches to supportive therapy that aims to improve mental health in affected individuals, vitamin D supplementation is considered to be an effective method which may also be beneficial in alleviating the symptoms during the course of IBDs and IBS. 
The aim of the present study was to conduct a systematic review of the literature presenting the data regarding the influence of vitamin D supplementation on mental health in adults with inflammatory and functional bowel diseases, including IBDs and IBS. This study was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines and registered in the International Prospective Register of Systematic Reviews (PROSPERO) database (Registration number CRD42020155779). A systematic search of the PubMed and Web of Science databases was performed, and the intervention studies published until September 2021 were included. The human studies eligible to be included in the review should have described any intervention involving vitamin D as a supplement in a group of adult patients suffering from IBDs and/or IBS and should have assessed any component of mental health, but studies presenting the effects of combined supplementation of multiple nutrients were excluded. After eliminating the duplicates, a total of 8514 records were screened and assessed independently by two researchers. Further evaluation was carried out on the basis of title, abstract, and full text. Finally, 10 studies (four for IBDs and six for IBS) were selected for the current systematic review, and their quality was assessed using the Newcastle\u2013Ottawa Scale (NOS). The studies analyzed the influence of various doses of vitamin D on bowel diseases, compared the results of vitamin D supplementation with placebo, or administered specific doses of vitamin D to obtain the required level in the blood. Supplementation was performed for at least 6 weeks. The analyzed mental health outcomes mainly included disease-specific quality of life/quality of life, anxiety, and depression. The majority of studies confirmed the positive effect of vitamin D supplementation on the mental health of IBD and IBS patients, which was proven by all research works evaluating anxiety and depression and by the majority of research works evaluating quality of life. Although the studies followed different dosage regimens and supplementation protocols, the positive influence of vitamin D on mental health was found to be consistent. The number of studies on patients suffering from ulcerative colitis and the availability of trials randomized against the placebo group was low in the current review, which is considered to be a limitation of the present study and could also reflect the final outcome of the analysis. The conducted systematic review established the positive effect of vitamin D supplementation on the mental health of IBD and IBS patients, but this result requires further investigation, particularly in relation to other mental health outcomes. Inflammatory bowel diseases (IBDs), including ulcerative colitis (UC) and Crohn\u2019s disease (CD), are idiopathic intestinal disorders that are characterized by repetitive episodes of inflammation of the gastrointestinal tract, usually associated with bloody diarrhea, tenesmus, and abdominal pain, while the location of the disease and the thickness of the affected bowel wall differ in the diseased individuals . 
The diagnostic problems in IBDs are mainly due to the lack of appropriate diagnostic tools to differentiate between UC and CD. As described above, the IBDs and IBS may influence the mental health of patients, and their mental health may further influence the somatic course of the disease, which is observed in both IBD and IBS. Among the various methods of supportive therapy that are known to positively impact the mental health of patients, nutrition is considered to be one of the promising and potential options to reduce the symptoms of depression, as demonstrated in the systematic review and meta-analysis by Firth et al. Of the various dietary factors known to play a key role in alleviating mental health problems, vitamin D is one of the most effective components, as assessed in the meta-analyses by Vellekkatt & Menon and Shaffer et al. Taking into consideration the above-described state of knowledge, the present study aimed to conduct a systematic review of the literature analyzing the influence of vitamin D supplementation on the mental health status of adults with inflammatory and functional bowel diseases, including IBDs and IBS. The systematic review was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines and was registered in the International Prospective Register of Systematic Reviews (PROSPERO) database (Registration number CRD42020155779). The studies eligible to be included in a systematic review of the influence of vitamin D supplementation on mental health in adults with inflammatory and functional bowel diseases were required to describe an intervention involving any dose of supplemented vitamin D and the assessment of any component of mental health. The following inclusion criteria were scheduled: (1) adult population studied; (2) patients with confirmed IBDs and/or IBS studied; (3) applied vitamin D dose defined within the study; (4) any component of mental health assessed within the study (assessed using either subjective or objective measurements). The following exclusion criteria were scheduled: (1) animal model studies; (2) studies conducted in populations with concurrent intellectual disabilities; (3) studies conducted in populations with concurrent eating disorders; (4) studies conducted in populations with concurrent neurological disorders, including Alzheimer's disease, epilepsy, etc.; (5) studies assessing the supplementation of multiple nutrients combined; (6) studies conducted in populations of mothers/children, analyzing the association between maternal vitamin D supplementation and the mental health of their offspring. No other exclusion criteria, associated with the stage or course of disease, studied population, or country, were scheduled. The detailed description of the electronic search strategy is presented in . After conducting the electronic search and identifying potentially eligible studies, the duplicates were removed and the studies were verified based on the previously planned inclusion and exclusion criteria. They were verified in three phases: on the basis of title, abstract, and full text. If the full text was not available, the corresponding author of the study was asked for the full text of their study.
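To illustrate the screening logic just described (deduplication of database hits followed by staged eligibility checks), here is a small, purely illustrative sketch; the record fields and the criterion functions are hypothetical placeholders and are not part of the published review protocol.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Record:
    title: str
    abstract: str
    full_text: Optional[str]  # None when the full text could not be obtained

def deduplicate(records):
    """Remove duplicate records retrieved from several databases (here keyed by title)."""
    seen, unique = set(), []
    for r in records:
        key = r.title.strip().lower()
        if key not in seen:
            seen.add(key)
            unique.append(r)
    return unique

def screen(records, passes_title, passes_abstract, passes_full_text):
    """Three-phase screening: title, then abstract, then full text.

    The passes_* arguments are caller-supplied predicates encoding the
    inclusion/exclusion criteria listed above (hypothetical here)."""
    after_title = [r for r in records if passes_title(r)]
    after_abstract = [r for r in after_title if passes_abstract(r)]
    included = [r for r in after_abstract
                if r.full_text is not None and passes_full_text(r)]
    return included
```

In the actual review, each phase was additionally performed by two independent researchers, as described next.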
The assessment in each phase was conducted independently by two researchers, while the results of the searching were verified by comparison of their lists and if any disagreement appeared, it was consulted by third researcher.The detailed procedure of including studies is presented in (1)the basic characteristics of the studies and of the studied populations ;(2)the detailed characteristics of the patients studied ;(3)the detailed characteristics of the applied vitamin D supplementation intervention and mental health outcome ;(4)the observations and conclusions .The data extraction was conducted based on the standard procedure, while researchers extracted the following data:If any information was not available, the corresponding author of the study was asked about the details of their study (data marked in the systematic review as provided on request). The data were extracted independently by 2 researchers, while the extracted data were verified by comparison of their results and if any disagreement appeared, it was consulted by third researcher.The included studies were assessed in terms of the quality of the study, expressed as the risk of bias, as recommended by Cochrane . The NewThe basic characteristics of the studies and of the studied populations of IBD patients for the studies included to the systematic review ,33,34,35The detailed characteristics of the studied IBD patients for the studies included to the systematic review are presented in The detailed characteristics of the applied vitamin D supplementation intervention and mental health outcomes for the studies of the IBD patients included to the systematic review are presented in The basic characteristics of the studies and of the studied populations of IBS patients for the papers included in the systematic review ,39,40,41The detailed characteristics of the studied IBS patients for the papers included in the systematic review are presented in The detailed characteristics of the applied vitamin D supplementation intervention and mental health outcomes for the studies of the IBS patients included in the systematic review are presented in The summary of the observed association between vitamin D supplementation and mental health outcomes for the studies of the IBD and IBS patients included in the systematic review, accompanied by the NOS total score, is presented in The conducted systematic review confirmed the proposed hypothesis that vitamin D supplementation positively influences the mental health of bowel disease patients. 
Only one study did not support this positive influence , and allAs the majority of studies included in this systematic review assessed the disease-specific quality of life/quality of life, it can be confirmed that this mental health outcome is significantly associated with the disease course and symptoms, as revealed in both IBD and IBS Moreover, it is of particular importance that two studies assessing the influence of vitamin D on anxiety and depression, in IBD and IBS However, except for the mental health outcomes, for which the influence of vitamin D supplementation has been studied so far, there are some reports revealing that bowel diseases are also associated with other mental health outcomes, which include self-esteem , lonelinThe results indicate that vitamin D supplementation may be beneficial to alleviate the symptoms during the course of bowel disease ,26 and mIn spite of the fact that the conducted systematic review highlighted some novel observations for the betterment of IBD and IBS patients, some limitations of the study need to be acknowledged. The most important issue is the inclusion of a small number of studies, especially the small number of studies randomized against placebo, and that no studies randomized against a placebo representing the IBD population were included. Moreover, it should be emphasized that there is a risk of overlap in the results presented within the included studies, since two studies ,39 were The conducted systematic review confirmed the positive influence of vitamin D supplementation on the mental health of bowel disease patients, observed for both IBD and IBS. The majority of the studies supported the beneficial effect of vitamin D supplementation on the studied mental health outcomes, such as disease-specific quality of life/quality of life, anxiety, and depression. Though the studies adopted different vitamin D dosage regimens for varied periods of time, the general observation of positive effect on mental health is consistent. However, the limited number of studies selected for the review, especially for UC, must be considered as a limitation of the present analysis. The effect should be further studied in a larger sample of patients and on other mental health outcomes."} +{"text": "OBJECTIVES/GOALS: We describe here the implementation of a pilot Quality Improvement (QI) program in clinical research processes in order to facilitate translation from bench to community. This presentation will also discuss challenges encountered by the research teams during the implementation of QI activities. METHODS/STUDY POPULATION: Miami CTSI collaborated with University of Kansas\u2019 CTSA to test the implementation of a QI program for clinical research processes. The program has a duration of 1 year and consists of multi-modal training and coaching sessions with different research teams. Six teams comprising of Principal investigators, clinical coordinators, and regulatory specialists participated in the program based in applied clinical microsystem theory science. Team coaches and teams worked together to assess current processes, test new and improved processes, and standardize and disseminate applicable best practices of the QI program. RESULTS/ANTICIPATED RESULTS: The implementation of QI activities in large clinical research settings poses numerous challenges for the research team. We will present survey results from the coaching sessions and follow on feedback from the different teams involved in the program to implement the QI activities. 
We will describe the modifications and adjustments made to the original conceptual framework of QI program in order for it to be applicable and feasible for the settings of the University of Miami. We will provide recommendations for other academic clinical research centers that are considering implementing a QI program. DISCUSSION/SIGNIFICANCE OF IMPACT: The successful adaptation of a QI process to implement in academic clinical research settings relies on early engagement of the institution leadership, careful selection of team members, as well as developing communication skills to enhance team dynamics as a clinical research unit."} +{"text": "The retention force of electronic connectors, in general one of the essential specification requirements, is defined as a maximum force of metallic terminals withdrawn out of the corresponding plastic housing. Accurate prediction of the retention force is an important issue in the connector design stage; however, it is not an easy task to accurately assess the retention force based on the authors\u2019 knowledge. A finite element analysis is performed in conjunction with a self-coded user subroutine accounting for relaxation/creep behaviors of semi-crystalline thermoplastic polymers under various loading conditions in order to appraise the mechanical performance of the plastic base structure. Material parameters adopted in the constitutive model are evaluated by utilizing the automated design exploration and optimization commercial software. Applications of the developed subroutine with several damage criteria to assess retention forces of two electronic connectors were conducted. Retention forces predicted by utilizing the current constitutive model agreed fairly well with the associated experimental measurements. A dramatic improvement of the underestimation of the retention force based on the approach commonly adopted in the industry is also demonstrated here. Thermoplastic polymers are broadly used as engineering materials in modern industry, from automotive systems to electronic devices. An electronic connector, used for electrical signal transmission, is basically comprised of two major components, these being metallic terminals and a polymeric housing. Terminals are secured into the housing via the interference of barbs designed in the terminal and the polymeric structure by using a fixture. The retention force of the connector, defined as the peak force of the terminal withdrawn out of the associated housing, is required to be sufficient to prevent embedded terminals from being detached from the housing under relatively critical conditions such as shock and vibration. Accurate prediction of the retention force is thus an important issue, especially in the connector design stage. Nonetheless, it is generally known that the retention force is not easy to be appropriately estimated due to the complex mechanical responses and fracture mechanisms of thermoplastic polymers.In order to properly illustrate the dependence of time, strain rate, and temperature on stress-strain behaviors of thermoplastic polymers, various constitutive models were developed based on experimental observations. Colak modifiedGearing and Anand used theAcademic studies related to applications of the advanced constitutive model suitable for the thermoplastic materials to industrial products under practical operation conditions are not broadly reported, and there are even fewer research articles focusing on the assessment of the retention force of electronic connectors. 
The constitutive model proposed by Ayoub et al., with modifications, is adopted in the present work. In order to examine the rate-dependent mechanical response of semi-crystalline polymers, a series of uniaxial compression tests for both polyamide 4T (PA4T) and liquid crystalline polymer (LCP) were conducted at different strain rates, including 0.001 1/s. To further assess the appropriateness of the choice of damage criteria presented later, a strength investigation of an injection-molded PA4T latch of the DDR4 connector under multiaxial loading conditions was conducted, as shown in the corresponding figure. Two types of electronic connector, PCIE (with a PA4T housing) and SSD (with an LCP housing), as shown in the figures, were examined to assess their retention forces. The constitutive model proposed by Ayoub et al. is chosen. Deformation gradients of both the intermolecular resistance and the network resistance can be respectively decomposed into elastic and inelastic parts in a multiplicative manner. Furthermore, the stress–strain behavior of the network resistance, T_B, is given by the eight-chain rubber model (Arruda and Boyce) in Eq. (14). The users' subroutine VUMAT, which accounts for the constitutive model mentioned above, is coded, and the ABAQUS/Explicit solver is employed for the simulations. Four developed damage criteria are examined to evaluate the failure of the polymeric materials. (a) Maximum principal strain criterion: a local region of the structure with a maximum principal strain larger than a certain threshold is regarded as having failed. Mechanical behaviors of the latch of the DDR4 connector under the compression conditions described in the previous section are numerically examined. Both the head and the metallic fixture are assumed to be rigid bodies since they are much stiffer than the polymeric latch. Four-node tetrahedron elements with reduced integration are assigned to the latch, while a region enduring large deformation is illustrated in an insert of the corresponding figure. Geometry models of the PCIE and SSD connectors are reasonably simplified in the numerical analysis, as shown in the figures. It should be noted that in the connector industry, based on the authors' understanding, the polymeric housing is usually regarded as a simple isotropic elastic-plastic material for the retention force numerical analysis. Comparisons of the simulation results based on the modified VBO model and the isotropic elastic-plastic model will be presented in the next section. Numerical simulations of the latch of the DDR4 connector under rather complicated loading conditions with the four damage criteria were then performed. For the sake of concise presentation, only the force-displacement responses of the head based on the calculations with the damage accumulation criterion are shown. Insertion/withdrawal force histories with respect to displacement of the PCIE and SSD connectors, based on the simulations with the damage accumulation criterion, are also shown. Mechanical behaviors of the engineering polymers PA4T and LCP were experimentally and numerically examined in the current study. The modified VBO model, which is suitable for rate-dependent semi-crystalline polymeric materials, together with four damage criteria, was coded as the user subroutine implemented into the finite element analysis software. Numerous material parameters of the modified VBO model can be evaluated by fitting the stress–strain relationships of the specimen under uniaxial compressive loading conditions at two distinct strain rates obtained from the simulations to those obtained from the experiments.
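As an illustration of how a pointwise damage check of this kind can be evaluated, the sketch below applies a maximum-principal-strain criterion to element strain tensors; the threshold value and the array layout are hypothetical choices for illustration and the study's actual VUMAT implementation is not reproduced here.

```python
import numpy as np

def max_principal_strain(strain_voigt):
    """Largest principal strain of a small-strain tensor given in Voigt order
    [e_xx, e_yy, e_zz, gamma_xy, gamma_yz, gamma_xz] (engineering shear strains)."""
    e_xx, e_yy, e_zz, g_xy, g_yz, g_xz = strain_voigt
    tensor = np.array([
        [e_xx,       0.5 * g_xy, 0.5 * g_xz],
        [0.5 * g_xy, e_yy,       0.5 * g_yz],
        [0.5 * g_xz, 0.5 * g_yz, e_zz],
    ])
    return np.linalg.eigvalsh(tensor).max()

def flag_failed_elements(strains, threshold=0.08):
    """Return indices of elements whose maximum principal strain exceeds the threshold.

    strains   : array of shape (n_elements, 6), one Voigt strain vector per element
    threshold : illustrative failure strain, not a calibrated material value
    """
    peak = np.array([max_principal_strain(s) for s in strains])
    return np.nonzero(peak > threshold)[0]
```

In the finite element analysis itself, an equivalent check would run per integration point inside the user subroutine, with failed elements deactivated before the retention force is extracted from the withdrawal simulation.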
By utilizing the strength investigation of the latch of the DDR4 connector, simulations can reasonably capture the mechanical response and give fair estimations of crack orientation compared with the experimental observations. Trends of the withdraw force-displacement relationships of PCIE and SSD connectors based on the modified VBO model with the damage accumulation criterion and the corresponding experiments are relatively similar, and simulated retention forces agree well with the measured ones, while the approach commonly adopted in the industry yields substantial underestimations. Applications of the current numerical procedures to evaluate mechanical responses of other polymeric components/products of interest could be promising in the future."} +{"text": "It is with great pleasure that we present this Research Topic dedicated to handling of transient metals and their function in the brain. Iron and copper are important co-factors for a number of enzymes in the brain, which are involved in multiple processes including neurotransmitter synthesis and myelin formation. Both shortage and excess of copper and iron will negatively influence brain function. The transport of copper and iron from peripheral circulation into the brain is strictly regulated and the concordantly protective blood-brain and blood-cerebrospinal fluid barriers have evolved to protect the brain's internal environment from unwanted exposure from the periphery. The sites for uptake and transport of copper and iron into the brain overlap, and the uptake mechanisms of the two metals significantly interact: Both iron deficiency and overload lead to altered copper homeostasis in the brain. Similarly, changes in dietary copper influences the cerebral iron homeostasis.For long the focus of research on copper and iron in the brain has had a mainstay on the significance of molecules in the brain using these metals as co-factors. Moreover, the understanding of transport of the metals through the blood-brain and blood-cerebrospinal fluid barriers and the understanding of the further handling of the metals inside the brain have received prioritizing from many research groups. Naturally, the possibility of nutritional deficiency caused by insufficient dietary supply of copper and iron has also gained interest, and attempts to delineate the impact of dietary insufficient on the brain, not at least the developing brain, are important topics that could aid in understanding periods during development where sufficient metal supply could be a highest importance.In 2015, we launched this Research Topic with the intention to provide a platform for presentation of research on the significance of copper and iron for the brain. This Research Topic mainly covers: (i) the genetics of proteins handling copper and iron in the brain, (ii) the expression and function of copper- and iron proteins in normal and pathological conditions; (iii) the clinic manifestations of dietary deficiencies in copper or iron; (iv) the clinic manifestation of genetic diseases leading to mishandling of copper and iron; (v) therapeutic aspects of handling dietary deficiencies, overloading pathologies, or conditions with genetic mutations in proteins related to copper and iron.M\u00f8ller et al. show that protein variants encoded by ATP7A transcripts missing either exon 10 or exon 15 are not functional and not responsible for the so-called occipital horn syndrome (OHS) phenotype, a mild variant of Menkes disease with mutation in the ATP7A gene. 
Prior studies have shown that only minor copper dependent transcriptional regulation of the ATP7A and ATP7B genes are apparent. In agreement with lack of rapid transcriptional regulation, the study of Lenartowicz et al. shows that a high copper concentration pressure leads to cellular selection. Interestingly, normal cells encoding the ATP7A gene have selective growth advantages at high copper concentrations, whereas cells without functional ATP7A has selective growth advantage toward low copper concentrations. The review by Lenartowicz et al. reports on the Mottle mouse, which closely recapitulates the phenotype of Menkes disease and leads to multi-systemic copper metabolism disorder caused by mutations in the X-linked ATP7A gene. Lessons from studies in the Mottled mouse reveal that the ATP7A protein is expelling copper from certain cells including cells in the kidney, intestine, placenta and testis, and that deficiency in ATP7A function leads to excessive or even toxic amounts of copper in these tissues and at the same time lack of copper in other tissues. Fu et al. show in their study on the role of copper in the developing ventricular system in the rat, that copper levels increase as a function of age, and subventricular zone shows a different expression pattern of Cu-regulatory genes than seen in the choroid plexus. An age-related increase in the copper-binding protein MTs and a simultaneous decrease in Ctr1 may contribute to the high copper level in this neurogenesis active brain region. The review of Skj\u00f8rringe et al. discusses the role of the divalent metal transporter 1 (DMT1) for handling iron transport at the blood-brain barrier (BBB). DMT1 is detectable in endosomes of brain capillary endothelial cells denoting the BBB. Iron uptake at the BBB occurs by means of transferrin-receptor mediated endocytosis followed by detachment of iron from transferrin inside the acidic compartment of the endosome. McCarthy and Kosman discuss the transendothelial trafficking of iron at the BBB by covering mechanisms by which brain endothelial cells take-up iron from the blood and by which they efflux this iron into the abluminal space mediated by ferroportin. They also cover the regulation of iron efflux into the brain by exocrine factors released from adjacent astrocyte-end feet, and how cytokines secreted by the endothelial cells conversely may regulate such glial signaling. Codazzi et al. reviews the mechanisms responsible for non-transferrin-bound iron (NTBI) entry in neurons and astrocytes and on how they can be modulated during synaptic activity, not at least under the influence of calcium permeable channels and DMT1. They also in-depth speculate how NBTI might have relevance for cellular iron homeostasis in both physiological and pathological conditions. Gajowiak et al., studied changes in iron metabolism in a mouse model of amyotrophic lateral sclerosis (ALS). They report that overexpression of mutated SOD1G93A leads to pathological changes in a skeletal muscle with deposits of iron.This research forum hence covers many important characteristics of metal biology in the brain, and the advances thereof included within this Research Topic indeed brings novelty to the understanding of how the neurons and glial cells handles copper and iron to enable the significance of these essential metals for the brain.LM and TM: summary of topic. 
Both authors contributed to the article and approved the submitted version.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher."} +{"text": "This historical summary of the main aspects of the Barlow valve helps readers to understand the complexity of this pathoanatomical and functional disease and the best approach for successful surgery.Mitral valve repair is the gold standard in the surgery of degenerative mitral valve disease, a solidly established option supported by practice guidelines. The Barlow valve still poses technical challenges to surgeons due to its multifaceted pathoanatomical presentation, with bileaflet involvement, abnormally thickened leaflet tissue, gross involvement of the subvalvular apparatus, and annular dilatation.In this issue of the Journal, Barlow and colleagues present a historical review of the Barlow valve. Additional investigations showed that pathoanatomic changes in the form of annular abnormalities, such as disjunction, should not be separated from the disease complex.Surgery of the Barlow valve aims at restoring competence, releasing leaflet tension, and achieving leaflet coaptation. Another issue, as in other repairs, is what may eventually happen with the left ventricle. This elegant and comprehensive historical review by Barlow and colleagues on the Barlow valve addresses these questions."} +{"text": "Preparedness of residents in long-term care (LTC) exposed to disasters continues to warrant concern. Prior work by our research team highlights explicit evidence of the profound vulnerability of Florida nursing home (NH) residents exposed to Hurricane Irma in 2017. This research adds to our knowledge of the profound effect of disasters on long-term care residents. This symposium will utilize mixed methodologies to discuss the varied effects of Hurricane Irma on vulnerable older adults residing in Florida NHs and Assisted Living communities (ALCs). Using a novel methodology for identifying a cohort of ALC residents, the first presentation will present the morbidity and mortality effects of Hurricane Irma on Florida ALC residents and identify high-risk groups by health condition. The second presentation will document the effect of Hurricane Irma on NH residents previously enrolled in hospice and expound on the effect of the disaster on hospice enrollment after the storm. The third presentation will present qualitative results of interviews with ALC administrators highlighting the effect of the storm on both large and small (<25 beds) facilities. The fourth presentation will address the issue of heat exposure in the days after Hurricane Irma and consider the preventative effect of generators on morbidity and mortality. Finally, a fifth presentation will examine NH staffing level variation in the days leading up to the hurricane.
To conclude, this symposium offers a multi-faceted view of a disaster\u2019s effects on LTC residents across Florida, including novel data from the NH environment and lesser-examined ALCs."} +{"text": "This session provides insights into how the pandemic challenged the capabilities and ingenuity of the Older Americans Act (OAA) programs and the aging network and what it means for in-home and community aging services now and in the future. Speakers will include key aging network stakeholders, who will discuss the overnight evolution of programs serving often isolated older adults."} +{"text": "Variations in the upper limb arterial pattern are commonplace and necessitate complete familiarity for successful surgical and interventional procedures. Variance in the vascular tree may involve any part of the axis artery of the upper limb, including the axillary artery and brachial artery or its branches, in the form of radial and ulnar arteries, which eventually supply the hand via anastomosing arches.To study the peculiarities of the arterial pattern of the upper limb and to correlate them with embryological development.The entire arterial branching of forty-two upper limbs of formalin fixed adult human cadavers was examined during routine dissection for educational purposes, conducted over a 3-year period in the Department of Anatomy, Lady Hardinge Medical College, New Delhi.The study found: 1) One case in which a common trunk arose from the third part of the axillary artery, which immediately splayed into four branches (2.4%); 2) High division of the brachial artery into ulnar and radial arteries, in 3 cases (7.1%); 3) Pentafurcation of the brachial artery into ulnar, interosseus, radial, and radial recurrent arteries and a muscular twig to the brachioradialis in 1/42 cases (2.4%); 4) Incomplete Superficial Palmar arch in 3/42 cases (7.1%); and 5) Presence of a median artery in 2/42 case(4.8%)This study observed and described the varied arterial patterns of the upper limb and identified the various anomalous patterns, supplementing the surgeon\u2019s armamentarium in various surgical procedures, thereby helping to prevent complications or failures of reconstructive surgeries, bypass angiography, and many similar procedures. The web of arterial patterns in the body has always been an enigma. Inconsistencies in the arterial pattern of the upper limb in both origin and distribution are the rule rather than an exception. The axillary artery (AA) is a direct continuation of the subclavian artery, extending from the outer border of the first rib to the lower border of the teres major and thereafter continuing as the brachial artery (BA) of the upper limb.This observational study was conducted with donated human cadavers meant for research work and teaching for first-year Bachelor of Medicine and Bachelor of Surgery undergraduates at the medical institution. Eight to nine embalmed cadavers are designated for dissection purpose in the research laboratory for each new academic year and, since the study was conducted over a 3-year time frame, 24 embalmed cadavers and a total of 42 upper limbs were included. Cadavers that were not appropriate were excluded from the study. The Local Government approved the ethical aspects and policy for conducting research studies and for teaching purpose on donated human bodies at the Department of Anatomy in the Government Medical institution. 
Informed written consent was also obtained at the time of body donation from the donors themselves or their relatives.Cunningham\u2019s Manual of Practical AnatomyIn the present study on 42 upper limbs , the artIn one of the 42 cases, an extremely rare type of variation was seen involving the axillary artery . The fiSubscapular artery , supplying the latissimus dorsi muscle along with the thoracodorsal nerve supplying the lateral thoracic wall especially the third and fourth intercostal areas including the respective serratus anterior muscle parts and mammary gland gave rise to the usual anterior and posterior interosseus artery and a persistent median artery, which passed deep to the pronator teres and accompanied the median nerve.The superficial palmar arch (SPA), which is completed by a superficial palmar branch of the radial artery and infrequently by an arteria princeps pollicis or arteria radialis indices, was not evident in three out of 42 limbs. In these cases, there was no contribution from a superficial branch or any other branch from the radial artery and therefore an incomplete SPA was observed that was formed solely by the ulnar artery in the palm . A mediaDeviations of the upper limb arterial pattern from its anatomical norms have been frequently reported, which makes it even more essential to familiarize one with the possible patterns for successful surgical interventions.Several authors have reported an exceptional number of branches from the axillary artery varying from seven to eight branches to as many as five to eleven branches.A very close similarity to our finding of variant AA is presented in a case report on the tetrafurcation from the third part of the AA, where a common trunk gave origin to TDA, CSA, PCHA, and LTA, whereas, in our study the PCHA had a normal origin from the third part of the axillary artery and the common trunk gave rise to an auxiliary LTA and the TDA, in addition to the SSA branch itself, which divided into CSA and TDA.The SSA is known to supply the subscapularis muscle by giving off one collateral branch in 31.1% and two branches in 29.3%.In another study conducted on 40 cadavers, the LTA emerged from the SSAThe TDA defines the posterior boundary of axillary lymph node dissection and any derailment of its normal course can compromise oncology results in surgeries involving lymph node dissections. The presence of multiple TDA seen in our study renders isolation of the vascular pedicle for free flap transfer a perilous task and could result in postoperative complications or failure of the flap itself.There are considerable variations of the brachial artery (BA) documented in the literature. In the majority, the brachial artery, which is a continuation of the axillary artery, begins at the inferior border of the teres major and ends by bifurcating about a centimeter distal to the elbow joint into radial and ulnar arteries. The two main classifications of arterial arm variations include one in which a brachial artery is positioned anterior to the median nerve, described as the superficial brachial artery, while the other main anomaly involves doubling of the brachial artery into superficial and deep brachial arteries. The former is called the superficial brachial artery superior when it occurs at the median nerve roots in the axilla. 
Another version of the superficial brachial artery is positioned posterior to the median nerve, but ultimately travels anterior to the median nerve more distally in the arm.,In cases where the brachial artery is doubled or presents with high bifurcation of the vessel, the superficial and deep brachial artery reunite distally before eventually either dividing into radial and ulnar artery or continuing as only a radial or an ulnar artery, with or without collateral branches.Unlike in our case, trifurcation of the brachial artery has been documented in the literature, dividing into radial, ulnar, and superior collateral arteries;The brachial artery is reported to become compressed by the bicipital aponeurosis in athletes with hypertrophied muscles in the forearm, to the extent that it causes discomfort and even obliterates the radial pulse.In our study, one cadaver had a median artery (MA) that originated from the common interosseous artery and travelled deep to the pronator teres, supplying the brachioradialis and accompanying the median nerve. The median artery did not pass under the carpal tunnel. The RA and the MA ended in the musculature of the forearm, making no significant contribution to the hand. As a result, an incomplete superficial palmar arch with no participation from the radial artery was observed in the hand. It was further noted that the ulnar artery that was solely involved in the superficial palmar arch was highly tortuous. The tortuosity of the ulnar artery is explained due to involvement of the ulnar nerve as an occupational hazard in those who engage in constant hammering/vibrating actions and repetitive traumatic activities; a condition known as hypothenar hammer syndrome.-Unwanted vascular injuries can be circumvented during surgical and other interventional procedures if any vascular anomaly is delineated in advance by Doppler study or MRI with angiography. The thoracodorsal and circumflex scapular vessels are used as recipient sites for autologous microsurgical reconstructions with free flaps such as deep inferior epigastric artery perforator (DIEP) or profunda femoris artery perforator (PAP) flap procedures.,Due to long term patency and survival benefits, the radial artery is a versatile conduit for CAB (coronary artery bypass) and open radial artery harvesting reduces the risk of endothelial damage.Considering the plethora of clinical significance scenarios, it is imperative for clinicians to have a thorough knowledge of the normal and aberrant vessels in the branching pattern of the upper limb.The principal arteries anastomose and periarticular networks of capillaries emerge according to a temporal sequence; some paths that are initially functionally dominant subsequently regress. The anomalous patterns occur as differences in the mode and proximodistal level of branching, aberrant vessels anastomosing with principal vessels, and/or vessels forming unexpected neural, myological, or osteoligamentous relationships.In conclusion, understanding of the normal and variant patterns of the vascular system is imperative for a favorable postsurgical outcome when the surgeon is taking an on-table decision in vessel selection for reconstructive surgery. Rarer variations are encountered more frequently than anticipated. Therefore, having a thorough knowledge about the kinds of variants that exist might help to raise the level of suspicion and thus extend the reach of the safety net for preventing surgical catastrophes. 
The present study is intended to raise awareness in the minds of surgeons and radiologists alike about the variations of the arterial arrangement of the upper limb that could determine the outcomes of surgical and interventional procedures."} +{"text": "A 70-year-old patient with positive anamnesis of hypertension and arthritis, went to our observation for an asymptomatic localized lesion on the hard palate. The lesion has been present for about seven years and initially appeared as a slight swelling at the midline of the hard palate. He reported a slow and constant expansion but remained asymptomatic. The objective examination revealed poor oral hygiene conditions, previous periodontal disease and prosthetic and endodontic treatments. On the anterior third of the hard palate in a central position there were multiple pink rounded neoformations, the bigger of about 1 cm in diameter and of elastic consistency. The patient confirmed the habit of tapping the tip of the pen on the palate. Palpation of the laterocervical lymph nodes was negative. There was no leakage of liquid to compression."} +{"text": "The recent development of magnetic resonance-guided focused ultrasound highlights the clinical significance of the stereotactic surgery of the thalamus including deep brain stimulation (DBS). Essential tremor and parkinsonian tremor respond well to DBS of the ventral intermediate nucleus (Vim) of the thalamus and subthalamus (downstairs).Above the plane containing the anterior commissure (AC) and posterior commissure (PC), the ventral motor nuclei of the thalamus are arrayed as Voa, Vop, and Vim from the rostral to caudal direction . The venft, ft are made up of the ansa lenticularis and fasciculus lenticularis (fl). The al originates from the ventral globus pallidus interna (GPi). The al is ventral and medial of the subthalamic nucleus (STN) and makes a turn (ansa) to go over the STN. The fl originates from the dorsal GPi. The fl goes between the dorsal STN and ventral Zi, the field H2 of Forel (Forel H2). The al and fl go through the prerubral field H of Forel (Forel H) and merge to become the ft. The ft goes through field H1 of Forel (Forel H1) and then enters the Voa/Vop , (ii) rostal Zi, (iii) Forel H2 (including fl), and (iv) STN. The posterior subthalamic area (PSA) consists of the cZi and Raprl for this research from Japan Society for the Promotion of Science.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher."} +{"text": "The analysis of mobile phones data at regional level in the EU reveals varying patterns in mobility trends during the Covid-19 pandemic. These depend on the temporal evolution of the pandemic in each EU Member State, the measures taken at local or national level to limit the growth of the pandemic, as well as the level of urbanization and type of economic activity in each region. During the first phase of the pandemic (March- April 2020) the decrease in mobility was in general uniform among regions in the same Member State, especially in Italy, Spain and France, where national level measures were adopted. 
A relaxation of the measures and a resulting rebound of mobility was evident during the summer period (July- August 2020). At the same time, a shift from urban to rural areas during the summer vacation period is evident, with especially touristic areas increasing the number of movements in the same Member State. The variance in mobility trends during the second wave of the pandemic (October- November 2020) was higher, a result of the predominantly local and regional level measures applied in each Member State. Those insights suggest a certain correlation between the level of mobility and the evolution of the pandemic at regional level. The association with high levels of Covid-19 prevalence is particularly strong in urban regions with high mobility levels. Apart from the obvious health dimension, the pandemic also led to a serious disruption of social and economic activities. Governments in many EU Member States and worldwide adopted measures aiming to decrease the risk of contagion. Such measures differed across countries and regions, as well as during the first and second wave of the pandemic and the period in between. While some Member States opted for a general limitation of activities, especially during the first wave, others opted for approaches based on awareness raising. Teleworking, where possible, was encouraged or even made obligatory. In many areas educational activities shifted to online modes during the first wave. Retail or entertainment activities in several cases had to adapt to timetable or capacity restrictions. Cross-national or \u2013especially during the second wave- cross-regional or even intercity transport was limited by measures or lack of connections.Either as a result of the measures applied or because of changing choices at individual level as regards social distancing, the evolution of the pandemic led to fluctuations in the levels of personal mobility. Most importantly though -since mobility is one of the many factors affecting the speed and spatial pattern of the evolution of the pandemic- the fluctuations in the levels of mobility can be a good predictor for the levels of contagion in the short term future (2\u20133\u00a0weeks). At regional level, the level of urbanization and the economic activity profile are factors that influence how mobility is affected by measures to limit the pandemic. As a result, there is a high variation as regards how each European region reacted in terms of changes in levels of mobility.\u2022\u201cunderstand the spatial dynamics of the epidemics using historical matrices of mobility national and international flows;\u2022quantify the impact of physical distancing measures on mobility, including the phasing out of such measures as relevant;\u2022feed epidemiological models, contributing to the evaluation of the effects of physical distancing measures on the reduction of the rate of virus spread in terms of reproduction number (expected number of secondary cases generated by one case);\u2022feed models to estimate the economic costs of the different interventions, as well as the impact of specific control measures on intra-EU cross border flows due to the epidemic.\u201dThe case study presented here uses aggregate and anonymized movement data from a large number of Mobile Network Operators (MNOs) across EU Member States in order to monitor the evolution of mobility at regional level during 2020. 
The data were provided by European MNOs following a request by the European Commission for anonymized and aggregated mobile positioning data that would serve the purposes listed above in the fight against COVID-19. The contributions of MNOs in 23 European countries were collected on a daily basis and the resulting dataset was further processed through standardization and normalization techniques to ensure its compliance with the General Data Protection Regulation (GDPR) and to allow comparability across countries. The main research question addressed is how the diversity of regional profiles within countries and across Europe influences the trends in mobility observed during the pandemic. The differences in the spatial and temporal patterns between urban and rural areas can reveal some underlying factors that influence the impact of policy measures and explain the role of regional mobility in the propagation of Covid-19. The novel aspect of this work is the combination of a dataset covering 23 countries for the whole of year 2020 with a geographic information analysis technique that allows a deeper insight into the factors that affect mobility.The use of mobile phone data for the analysis of mobility is already a well-established approach. Monitoring mobility at origin\u2013destination level during the pandemic can be relevant for the design and evaluation of policy measures and especially non-pharmaceutical interventions. The type, duration and effectiveness of such measures differ significantly among countries and regions but also vary across time. A classification of the type of interventions applied in each EU Member State and their evaluation in terms of the impact on the number of contacts is available from ECDC (Fig. 1). Work, education, retail and entertainment are the main drivers for mobility. The average trip distance and overall transport demand fell significantly across Europe, although it is difficult to distinguish whether this is a consequence of policies or of voluntary behavioral changes.The primary dataset used here is a combination of the data provided by the Mobile Network Operators (MNO) in 22 EU Member States plus Norway.
No data were made available for Cyprus, the Netherlands, Luxembourg, Malta and Poland. The data definition and spatial resolution vary by operator, as does the time coverage of the data provided (Table 1).The original data provided by each MNO consisted of Origin-Destination matrices corresponding to the number of registered movements between origins i and destinations j during a specific time period. While the structure of the datasets produced by each provider was common, the spatial and temporal granularity varied considerably among operators, even within the same country. Differences include the spatial unit used, ranging from detailed cell tower level data to coarse administrative units, and the use of hourly or daily frequency for the registration of movements. The volume of the data submitted on a daily basis by each operator varied significantly as a result, ranging from less than 50 kilobytes to more than 30 Megabytes, resulting in a total data flow of about 200 Megabytes per day. Regardless of the specificities of each data source, each dataset maintains its internal consistency over time and space, a property that allows the calculation of relative changes in the indicator values. In order to make the indicators comparable across the EU, we aggregated the registered movements at NUTS3 level and calculated their daily totals.In order to convert the data into a common format that allows comparability, we performed a number of data processing actions: \u2022 Aggregation of daily movements registered by each MNO at NUTS3 level: the granularity of the original data ranged from cell to municipality level. \u2022 Normalization of the number of daily movements at NUTS3 level based on the maximum total number of movements in each NUTS3 region: all data providers represent at least a 30% local market share and their data can be considered representative of the population in the area they cover. \u2022 Aggregation of NUTS3 level movements based on origin and destination: internal within the same NUTS3 region, outwards to other NUTS3 regions, inwards from other NUTS3 regions. \u2022 Calculation of 3 mobility indicators (internal, outwards, inwards), indexed based on the maximum aggregate value per region and each of the 3 origin\u2013destination combinations. \u2022 Calculation of 7-day moving averages to account for weekly variance. \u2022 Definition of the urban classification (predominantly urban, intermediate, predominantly rural) using the GISCO NUTS 2016 definition. \u2022 For comparisons at aggregate level between the types of urbanization, the indicators were weighted based on the 2019 population in each NUTS3 region.The mathematical formulation of the three indicators follows a standardized structure for all spatial and temporal scales, e.g. for internal mobility: M_Z,int(t) = 100 x N_Z,int(t) / max_s N_Z,int(s) (1), where N_Z,int(t) denotes the number of registered movements within region Z on day t and the maximum is taken over all days s in the observation period; the outwards (M_Z,out) and inwards (M_Z,in) indicators are defined analogously.In order to explore possible correlations between mobility and the spread of the pandemic at regional level, we used data on reported cases from the Covid-19 European regional tracker.The detailed mobile phone data available allow the calculation of mobility indicators at regional level in a uniform format across the EU, even though the data structure and granularity may differ among operators.
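To make the processing steps and indicator formulation (1) concrete, the pipeline can be sketched in a few lines of code. The sketch below is a minimal illustration only: the long-format input table 'od', its column names ('date', 'origin', 'destination', 'trips') and the pandas-based implementation are our own assumptions, not the processing chain actually used for the dataset.

# Minimal sketch of the indicator pipeline described above. Assumes a hypothetical
# long-format DataFrame `od` with columns 'date', 'origin', 'destination' (NUTS3
# codes) and 'trips' (daily registered movements); names are illustrative.
import pandas as pd

def mobility_indicators(od: pd.DataFrame) -> pd.DataFrame:
    od = od.copy()
    od["date"] = pd.to_datetime(od["date"])

    # Internal movements: origin and destination in the same NUTS3 region.
    internal = (od[od["origin"] == od["destination"]]
                .groupby(["origin", "date"])["trips"].sum()
                .rename("internal"))

    # Outwards: movements leaving the region; inwards: movements entering it.
    cross = od[od["origin"] != od["destination"]]
    outwards = cross.groupby(["origin", "date"])["trips"].sum().rename("outwards")
    inwards = cross.groupby(["destination", "date"])["trips"].sum().rename("inwards")
    inwards.index.names = ["origin", "date"]  # align index names for concatenation

    df = pd.concat([internal, outwards, inwards], axis=1).fillna(0.0)
    df.index.names = ["nuts3", "date"]

    # Index each series on the maximum daily value observed in the region (x100),
    # then smooth with a 7-day moving average to remove the weekly cycle.
    def normalize(group: pd.DataFrame) -> pd.DataFrame:
        indexed = 100 * group / group.max()
        return indexed.rolling(7, min_periods=1).mean()

    return (df.sort_index()
              .groupby(level="nuts3", group_keys=False)
              .apply(normalize))

Indexing on the per-region maximum, rather than on absolute movement counts, is what makes indicators from operators with different market shares comparable across regions and countries, as described in the normalization step above.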
We used the dataset in two specific applications that are examples of how mobile phone data availability can serve in understanding the nexus between policy measures, mobility and the evolution of the pandemic:From the mobility point of view, we explored how activity responded in each phase of the pandemic depending on the epidemiological situation and the policy measures in place;From the epidemiological point of view, we investigated the potential correlation between mobility levels and the spread of Covid-19 at regional level.4.1Summarizing the results at the level of urban typology for the NUTS3 regions for which data are available reveals some visible patterns as regards the differences in mobility levels during the various phases of the Covid-19 pandemic . The defA pronounced reduction of individual mobility can be perceived starting in the second half of March 2020 and reaching the minimum levels of mobility on average about mid-April. While initially the decrease in mobility was practically the same for all three types of urbanization level, at the minimum -and for the entire period afterwards- the reduction in mobility was stronger in NUTS3 regions classified as predominantly urban. This can be probably explained by the difference in the type of economic activities in the three types of urbanization. A larger number of jobs in urban areas would be suitable for a shift to teleworking rather than in less urbanized zones . Retail With the intensity of the pandemic receding after May, accompanied by the relaxing of measures and citizen behavior, the mobility indicators recovered to comparable levels \u2013about 90%- to those before the pandemic for all types of regions. Nevertheless, a reversal of the recovery trend can be observed in urban areas during the summer period. Holidays in educational activities and summer vacation for a large share of the labor force led to a noteworthy reduction of mobility indicators. In contrast, rural and intermediate regions continued their trend of recovery and actually reached \u2013on average- the maximum level of the mobility indicator for year 2020. This indicates the considerable flow of residents in the major urban centers towards the main tourism and holidays destinations within each country.After September 2020, the emergence of a second wave of the pandemic and the subsequent implementation of new policies can be reflected in a new shift of mobility indicators downwards. Measures taken during this period tended to be of a local or regional level, as opposed to country-wide ones, as was the norm during the first wave.The trend was interrupted in the end of December/ beginning of January, with a marked decrease of the mobility indicator in all three groups. This was the result of the combination of the holiday period on one hand and the emergency measures implemented in several Member States, on the other. The latter aimed at limiting the number of social interactions during the period in order to avoid spreading the disease even further . The relative impact for each regional profile differs again, with urban areas registering a stronger decrease, an observation that possibly indicates a certain movement to rural areas during the holiday period. 
The downward trend of the mobility indicator continued in January, without however reaching the minimums of March 2020, even though the epidemiological situation was comparably worse in many EU Member States.The countries in the EU that were hit the hardest during the first wave of the pandemic were Italy, Spain and France. They were also the first Member States that introduced strict measures that limit mobility, such as stay-at-home enforcement and closure of non-essential activities. The combination of those measures with the generalized perception of high risk of contagion led to a steep decrease of mobility across all regions in these countries . It is iDuring the summer period (July- August 2020) the level of mobility recovered in all Member States and Norway . There wA third phase of the pandemic in terms of regional mobility is visible during October and November 2020, when a second wave of the pandemic emerged. In terms of measures adopted, the main difference with the first wave was that restrictions in most Member States were applied at local and regional level (as opposed to country-level). As a result, a high variability at both Member State and regional level can be observed .Fig. 5AvAnother diverging pattern among regional mobility profiles can be observed in the variation across working and non-working days. The outwards and inwards mobility indicators tend to be symmetrical on a weekly basis for most part of the year. The number of movements leaving a NUTS3 region during a weekday is normally compensated by the number of movements entering the same region. This effect can be attributed to the large share of commuting trips between neighbouring regions, for which a return trip usually takes place within the same day. Return trips during the weekends may show a longer lag, since they may involve an overnight (or longer) stay at the destination. The influence of this duration is already reflected in the difference in the mobility patterns between weekdays and weekdays , with moNevertheless, the trend of inwards/ outwards mobility differs from the one of internal mobility. The number of trips outside each NUTS3 zone decreased significantly more than trips within each NUTS3 zone during the first wave of the pandemic. As The measures applied in November and December 2020 to limit the evolution of the second wave of the pandemic included the restriction of mobility between regions in several EU Member States. They had a clear impact on the ratio of outwards mobility, which reached levels comparable of those during the first wave in March-April 2020. Throughout the period March-December 2020, it is evident that fluctuations in the level of mobility were the result of the restrictiveness of policy measures to fight the pandemic.4.2While the effectiveness of non-pharmaceutical measures in limiting mobility is easy to demonstrate, the impact on controlling the evolution of the pandemic is less straightforward. There are several indicators for the evaluation of the epidemiological situation at country and \u2013in some cases- region level, including the measurement of daily and/or accumulated reported cases, hospitalizations, intensive care patients or deaths related to Covid-19. The latter may be the most reliable measure of the true impact of the pandemic in terms of human lives, but is also the result of a combination of numerous confounding variables that limit the ability to identify clear cause and effect relationships. 
For example, the number of Covid-19 related deaths in a specific region given a certain share of contagion in the population probably depends more on the local health system conditions, response mechanisms and demographics rather than on the virus propagation dynamics themselves.Since we are addressing the contribution of mobility on the dynamics of the pandemic, the most suitable indicator for the exploration of causality would be the number of infected individuals, a possible direct effect of social interaction and \u2013by extension- mobility. The disadvantage of this measure is \u2013however- its lower reliability. Especially during the early phases of the pandemic, the number of cases was seriously under-reported since the number of tests was limited and the sampling was biased. The testing strategies and \u2013in consequence- the reporting quality improved by June 2020, even though still at varying degrees across countries. As a specific indicator, we use the 14-day notification rate of new Covid-19 cases per 100 000 inhabitants, one of the most frequently used indicators for reporting the pandemic\u2019s evolution . The datVisualizing the notification rate at regional level over time suggestsIf mobility and urbanization typology are both taken into account, however, the interpretation of the correlations changes. We used a simple observational model to explore a possible link between increased levels of mobility and high levels of Covid-19 incidence. We grouped the regions in the dataset according to their mean mobility indicator during July and August 2020, for all three types of mobility , and defined regions as having had high levels of mobility for any of the three types if the respective mobility indicator was over 90%. In a similar fashion, we grouped the regions in terms of their level of Covid-19 incidence in October 2020. We considered as regions of high incidence the ones that had an average 14-day notification rate in the 80% percentile within their country.This formulation allows us to test whether increased levels of mobility in a region eventually led to high numbers of reported Covid-19 cases, with the region population already accounted for. In addition, we combined the level of mobility and the urban typology of each region and created 6 different region categories (3 urbanization profiles\u00a0\u00d7\u00a02 mobility levels).The main test we performed was the comparison of the odds ratios of the six categories as regards the probability for a region to present a high incidence level in October 2020. We compared the odds of each category to those of the regions with an intermediate profile and low mobility. The mean odds ratios and the width of their confidence intervals provide a quantitative estimate of the relative probabilities. For internal mobility , the oddA similar picture overall can be drawn for inwards and outwards mobility , Fig. 11Notwithstanding the few unexpected results, we may still draw a conclusion that can be generalized at EU level. Urban areas tend to have a higher rate of Covid-19 cases and increased levels of mobility accentuate the urbanization effect. Both aspects reinforce the importance of population density and volume of social interaction as a factor of the disease propagation. In that sense, mobility should be seen as a means to facilitate social interaction rather than a driver itself. Movements between regions do not appear to have significantly accelerated the evolution of the pandemic in non-urban regions. 
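The grouping and odds-ratio comparison described above can be reproduced with a short script. The sketch below is illustrative only: the input table 'regions', its column names, and the use of a Wald confidence interval on the log odds ratio are assumptions on our part; the reference category is intermediate regions with low mobility, as in the text.

# Illustrative odds-ratio comparison for the six region categories. Assumes a
# hypothetical DataFrame `regions` with one row per NUTS3 region and columns
# 'urban_type' ('urban'/'intermediate'/'rural'), 'high_mobility' (bool: mean
# summer indicator > 90%) and 'high_incidence' (bool: October notification rate
# in the top 20% within the country). Assumes no empty cells.
import numpy as np
import pandas as pd

def counts(df: pd.DataFrame) -> tuple[int, int]:
    """Return (# high-incidence regions, # other regions) for a subset."""
    hi = int(df["high_incidence"].sum())
    return hi, len(df) - hi

def odds_ratios(regions: pd.DataFrame, z: float = 1.96) -> pd.DataFrame:
    ref = regions[(regions["urban_type"] == "intermediate") & (~regions["high_mobility"])]
    c, d = counts(ref)
    rows = []
    for (utype, mob), group in regions.groupby(["urban_type", "high_mobility"]):
        a, b = counts(group)
        # Classical odds ratio against the reference category, with a Wald
        # confidence interval computed on the log scale.
        or_ = (a / b) / (c / d)
        se = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
        rows.append({
            "category": f"{utype}, {'high' if mob else 'low'} mobility",
            "odds_ratio": or_,
            "ci_low": np.exp(np.log(or_) - z * se),
            "ci_high": np.exp(np.log(or_) + z * se),
        })
    return pd.DataFrame(rows)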
We can therefore not claim any direct regional spillover effect due to increased mobility. On the other hand, we cannot discard another possible mechanism that could still explain most of the observations. The increase in mobility levels may be primarily an expression of the general increase in social interaction that -combined with the overall relaxation of personal protection measures- led to a new wave when the majority of the residents of urban areas returned after the summer holidays.5Several Mobile Network Operators have been providing aggregate and anonymized mobility data at fine spatial and temporal resolution in order to assist policy makers in monitoring the evolution of mobility during the Covid-19 pandemic. The analysis of Mobile Network Operator data at regional level (NUTS3) reveals varying patterns depending on the temporal evolution of the pandemic in each EU Member State, the measures taken at local or national level to limit the growth of the pandemic, as well as the level of urbanization and type of economic activity in each region.During the first phase of the pandemic (March- April 2020) the decrease in mobility was in general uniform among regions in the same Member State, especially in Italy, Spain and France, where national level measures were adopted. A relaxation of the measures and a resulting rebound of mobility was evident during the summer period (July- August 2020). At the same time, a shift from urban to rural areas during the summer vacation period is evident, with especially touristic areas increasing the number of movements in the same Member State. The variance in mobility trends during the second wave of the pandemic (October- November 2020) was higher, a result of the predominantly local and regional level measures applied in each Member State. Those insights suggest a certain correlation between the level of mobility and the evolution of the pandemic at regional level. The combination of mobile phones and epidemiological data at regional level reveals that urban regions are more prone to reaching high levels of Covid-19 prevalence. If accompanied by high levels of mobility, the epidemiological risk for urban regions becomes even higher. Nevertheless, since intra-zonal and inter-zonal mobility are only two of the numerous factors that affect the geographic distribution of the pandemic , it woulThe results suggest that there is a high degree of variability in the evolution of mobility and prevalence at regional level during the pandemic. The decrease in mobility was more evident in urban areas when country-wide restrictions were in place, presumably due to the higher share of activities that can be replaced with tele-working. Urban areas also show a decline in mobility during weekends and holidays, owing to the significant share of employment and education in their normal levels of activity. Non-pharmaceutical interventions are the primary driver for decreased mobility throughout the EU. Voluntary confinement appears to have played a role during the first wave of the pandemic, but gradually relaxed \u2013as did general compliance to restrictions- until the second wave was evident. 
Mobility levels in non-urban areas tended to recover faster than in urban areas, possibly due to the increased movement of travellers from urban areas.Given the persevering nature of the pandemic and the considerable possibility of further waves in the future, a main lesson drawn from the analysis of dynamics of intra- and inter-regional mobility concerns the question of restricting movements between regions. The new waves in the pandemic in Europe following the holiday periods in summer and New Year appear to be a repercussion of the activity in the destination regions and not of increased mobility per se. Measures aiming to limit future contagions should therefore prioritize limiting unsafe interactions among the population before restricting travel options. Future work, once a better understanding of the Covid-19 transmission mechanism is achieved, can produce improved models of the role of mobility and regional profile and contribute in the identification of more effective prevention policies.Panayotis Christidis: Conceptualization, Data curation, Formal analysis, Methodology, Writing \u2013 original draft, Writing \u2013 review & editing. Biagio Ciuffo: Conceptualization, Formal analysis, Methodology, Writing \u2013 review & editing. Michele Vespe: Conceptualization, Formal analysis, Methodology, Writing \u2013 review & editing."} +{"text": "Family firms are a unique setting to study constructive conflict management due to the influence of family ties of the owning family imprinting a sense of common purpose and shared destiny, and high levels of trust. We study the relationship between shared vision and trust that intervene in the adoption of constructive conflict management. To achieve our purpose, we carried out a systematic indirect observation using a mixed methods approach. We used the narratives of 17 semi-structured interviews, audio-recorded and transcribed, of family and non-family managers or directors from five Spanish family firms in the siblings' partnership stage, combined with documentary data obtained from different sources. Intra- and inter-observer reliability were confirmed. Results show a dynamic relationship between shared vision and specific components of trust (benevolence and ability) at different levels of conflict management. We also provide evidence of specific processes of concurrence-seeking and open-mindedness in family and ownership forums accounting for the relevance of family governance in these type of organizations. Family firms are a sum of several subsystems which exhibit a particular resources configuration. This study sheds light on constructive conflict management in family firms opening interesting avenues for further research and offering practical implications to managers, owners, and advisors. Organizations are fertile ground for conflicts. They respond to the high demands of a highly changing environment, which exerts many pressures on the teams and demands people to solve their dissents, effectively collaborate, and make agile decisions and assuming that novel ideas may fail Johnson, , trust wAlthough there is evidence that trust is present across the multiple levels of family firms Qualitative (QUAL) consisted of collecting narratives, coding by an indirect observation system, and converting it into a matrix of codes. (2) Quantitative (QUAN): we conducted a polar coordinate analysis to quantify the narratives. 
(3) Qualitative (QUAL): integration of the qualitative and quantitative findings.The following sub-section describes the indirect observation methodology employed to conduct the mixed methods study.Observational studies allow psychological phenomena to be observed in their natural context. In the last two decades, Anguera et al. have built a body of robust evidence regarding the use of observation in a variety of contexts, supported by a \u201chighly systematic data collection and analysis, stringent data quality controls, and the merging of qualitative and quantitative methods\u201d. The family dimension of the coding system captured the awareness of the generational stage (generation), the family kinship (family ties), and the references to the milestones of the family history as a group or as individuals. The business dimension allowed coding the narratives regarding the specific attributes (or characteristics) of the company or its environment or group of companies and the role played in the organization (business role). The shared vision dimension is concerned with the perception of a \u201cgroup member's genuine belief that they are working collaboratively toward a common purpose\u201d (Lord, p. 8). The observation system was supported by the Tool for the Observation of Social Interaction in Natural Environments.The polar coordinate analysis is a quantitative analytic technique widely used in observational studies. It builds on the adjusted residuals obtained from lag sequential analysis, each of which is assumed to be normally distributed with \u03bc = 0 and \u03c3 = 1. The calculation of the Zsum parameter, whose formula is Zsum = sum of the z values divided by the square root of n (with n the number of lags), considers the same number of lags for each specific category from the prospective and retrospective perspectives. As a consequence, the same quantity of Zsum values and lags is obtained for every specific category in each of the two prospective and retrospective perspectives.This technique was developed by Sackett and relies on Cochran's Zsum statistic. Zsum may carry a positive or negative sign, which will therefore determine which of the four quadrants will contain the categories corresponding to the conditional behaviors in relation to the focal behavior being displayed. The polar coordinates analysis helps to identify the activation or inhibition relation between the focal behavior and all or some of the categories of the observational instrument, which are the conditional or matching behaviors.Each vector combines the prospective and retrospective Zsum values obtained for a conditional (matching) behavior in relation to the focal (criterion) behavior; its length is the square root of the sum of the squares of the two Zsum values. For a significance level of 0.05 the length of the vector has to be >1.96. Once the length and the angle corresponding to each vector are obtained, the angle must be adjusted taking into account the quadrant where each vector will be located.Vectors represent the relationships graphically; the length parameter indicates the strength of the association, and only relationships coinciding in three or more cases were retained. In the following sub-sections, we will present the results of the polar coordinate analysis, differentiating the focal behaviors and conditional behaviors which were selected for analysis according to our previous theorization.The four subdimensions of family boundaries selected as conditional behaviors showed significant associative relationships with shared vision and trust. In this sense, we found that the presence of an inspiring vision for the future or shared vision was activated in the presence of narratives concerning the family business system, whereas narratives regarding family members and non-family members were inhibitors of narratives of shared vision.
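As a reference for the results that follow, the vector computation just described can be summarized in a short sketch. It assumes that the adjusted residuals (z values) for the prospective and retrospective lags are already available for a given focal-conditional pair; the function names and the example values are purely illustrative and do not come from the study's data or from any specific software package.

# Minimal sketch of the polar coordinate computation described above.
import math

def zsum(z_values):
    """Cochran's Zsum: the sum of z values divided by the square root of their number."""
    return sum(z_values) / math.sqrt(len(z_values))

def polar_vector(z_prosp, z_retro, significance_length=1.96):
    x = zsum(z_prosp)   # prospective perspective (X axis)
    y = zsum(z_retro)   # retrospective perspective (Y axis)
    length = math.hypot(x, y)
    # math.atan2 already returns the quadrant-adjusted angle; convert to 0-360 degrees.
    angle = math.degrees(math.atan2(y, x)) % 360
    quadrant = int(angle // 90) + 1
    return {
        "length": length,
        "angle": angle,
        "quadrant": quadrant,
        "significant": length > significance_length,  # 0.05 significance level
    }

# Example with illustrative residuals: a positive prospective Zsum and a negative
# retrospective Zsum place the vector in quadrant IV.
print(polar_vector([2.1, 1.4, 0.8, 1.9, 2.3], [-1.2, -0.4, -1.8, -0.9, -1.1]))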
The perception of a lack of shared vision and family business system were mutually activated, whereas lack of shared vision and non-family members were mutually suppressed or inhibited.Concerning trust, the family boundaries indicated that narratives regarding the several subsystems activated or inhibited narratives depending if these referred to the perception of trust, lack of trust or the specific components of trust. We found that mentions to the perception of trust did probably not emerge in the presence of narratives regarding the family business system. Indeed, references to a family system and family business system showed this inhibitory role of perception of the component of ability whereas elusions to non-family members and the external environment showed an activation effect of the perception of the capacity or ability to manage the tasks as a trait of trust. For the component of benevolence, we found the opposite direction such that elusions to the family system activated the perception of caring or the affection-based component of trust (benevolence) whereas family business system and non-family members exerted an inhibitory role.Narratives regarding the external environment and perceived conflict were mutually inhibited. The same type of relationship was found between those narratives referred to the external environment and open-mindedness. The external environment seems to be perceived as trustful as long as narratives of ability and external environment are mutually activated.As shown in The indirect observation reported that shared vision narratives were inhibited in presence of mentions to controversies that produced tensions or interference (perceived conflict). Something similar was found between the presence of narratives of shared vision and references to conflict related to the process of transference of power, managerial roles and ownership (succession) in which they were mutually inhibited. We indirectly observed that shared vision was activated in the presence of mentions to the managing committee, ownership , and board of directors (it was coinciding in the five cases but in two of the cases it was in a different quadrant). On the contrary, mentions to family meetings, family council and family constitution inhibited the emergence of narratives regarding future goals or shared vision. Relationships between shared vision and processes of constructive conflict management are not supported by consistent findings across at least three cases. Although we found that a perception of lack of shared vision and perceived conflict exhibited a mutual activation relationship.Findings supported the relationship between shared vision and mentions to the tendency of the firm to engage and support processes oriented to result in new products, services or processes (innovativeness) in the expected direction given that narratives of future goals or shared vision are activated in the presence of these mentions to innovativeness. The same relationship was found with the subdimension of risk-taking orientation (both categories of risk-taking and risk-avoidance). Narratives of lack of shared vision also showed this relationship of activation with references to innovativeness and risk-taking orientation.Narratives regarding the perception of trust inhibited narratives of perceived conflict whereas narratives of trust were activated in presence of narratives of a specific conflict of succession. 
We found evidence about the relationship between narratives of trust and narratives regarding the different levels at the organization where conflict management took place. In this sense, trust was indirectly observed alongside mentions to collaborators and managing committee whereas mentions of trust were inhibited in presence of narratives concerning the board of directors. It is noticeable that an associative relationship between trust and managing committee was coinciding across five cases although not in the same quadrant. In From the scrutiny of the specific components of trust, we found a relationship between mentions to ability and perceived conflict. They were mutually inhibited. We also found that mentions to ability were activated in presence of elusions regarding the levels of collaborators and managing committee and that they were inhibited when participants referred to family meetings, family council, family constitution and ownership. Concerning narratives regarding the affective component of trust or benevolence, they were inhibited in presence of mentions to the board of directors and teams. We also found that narratives of lack of trust and trust were related to open-mindedness where they exerted an inhibitory role. Whereas, in close-mindedness narratives of trust exerted an inhibitory role and lack of trust activated the references to close-mindedness.We found that narratives of trust and innovativeness were related. Perception of trust and their components of benevolence, ability and integrity were associated with innovativeness in the sense that they were mutually inhibited. However, the perception of ability which was activated in the presence of narratives related to engaging innovation in collaboration with external parties or collaborative innovation.Given that we were interested in understanding constructive conflict management we conducted several analyses considering the different subdimensions as focal behaviors. As it is shown in We found evidence about the presence of open-mindedness at different levels of the organization. Precisely, findings indicated that this process is more probably present in family meetings and family council whereas it was not referred to during the elusions of ownership and collaborators. Concurrence-seeking and open-mindedness shared in some sense this structure of relationships, such as that narratives regarding concurrence-seeking and family meetings and ownership were mutually activated though they were inhibited when participants referred to collaborators. In managing committees, it close-mindedness did not seem present as the inhibitory relationship between narratives of both types indicated.As it is shown in Concerning the several business roles played in the company, the results obtained indicated that they conditioned shared vision, trust, constructive conflict management and innovativeness in different senses.For instance, we found that references to the members of the board of directors suppressed the presence of narratives of shared vision and trust. In other words, if the participants narrated experiences regarding the board of directors, these were not followed by mentions to shared vision and trust. It is noticeable that mentions to the general manager activated narratives concerning the component of trust related to personal skill and capacities (ability) whereas references to workers exerted the opposite effect, it meant that if the participants referred to ability they did not refer to trust stemmed on ability. 
Findings supported that perceived conflict and open-mindedness were not referred to managers.In this research, we provide evidence about the use of indirect observation as a rigorous approach to study narratives and extracting conclusions about complex phenomena would hinder the perception of future goals or shared vision. It seems that benevolence stems from family ties and shared vision is referred to business goals according to the results. This finding might explain that shared vision does not seem to be a subject of conversation in family governance forums . A plausible explanation for this, is that shared vision is an antecedent of the creation of a governance bodies, particularly in siblings partnership stage, therefore shared vision is not a topic of conversation in family governance forums. In fact prior research suggests that the more aligned the values and vision of the family, the more they develop governance structures Parada, . This fiper se) which would lead to groupthink (Arteaga and Men\u00e9ndez-Requejo, Our study brings evidence that shared vision would promote innovativeness which contradicts some theorization about the potential harming of a shared culture as a sort of \u201ccollective blindness\u201d that may inhibit the pioneering process and consequently innovation (Carnes and Ireland, The processes of constructive conflict as open-mindedness or concurrence-seeking does not emerge with the same strength in the picture of innovativeness. Although the findings suggest that the perception of conflict or the lack of conflict may hamper innovativeness, which is consistent with the paradoxes of conflict and innovation described in scientific literature: too much conflict hampers innovation but at the same time specific levels of conflict trigger organizational innovation Vollmer, . SomethiThis research reports interesting insights about trust and constructive management in the unique context of family firms (Elgoibar et al., Family governance is a fundamental level of constructive conflict management in family firms (Berent-Braun and Uhlaner, Trust promotion between family and non-family members should be included in the agenda of family firms to boost collaboration and innovativeness through constructive conflict management (Bennedsen and Foss, Despite the strengths of this research, we acknowledge that it is not exempt from limitations. We are aware that this study is an initial step to understand the dynamics of constructive conflict in family firms. It opens interesting avenues to further studies. For instance, the exploration of the relationships between shared vision, bonding and relational climate in the family firm require further exploration. Further research may explore a more heterogeneous group of cases in terms of their levels of innovativeness, size or geographical situation. Moreover, studying cases in different generational stages of ownership is a good avenue to extend this study. Indeed, studying cases which might not have been assisted by family business advisors may be an interesting opportunity. In this study, narratives provided by family members are more prevalent. 
Further studies may explore in-depth the perspective from non-family members.Further studies may investigate the potential of conflict management as a tool for balancing work and family in the context of family firms (Lu et al., Given that conflict is in the area of sensitive issues (Jehn and Jonsen, This research sheds light on the uniqueness of constructive conflict management in family firms. We provided empirical evidence of shared vision and trust as roots of constructive conflict management observed at different levels of the organization. Trust management demonstrates critical importance for obtaining constructive outcomes of conflict in this type of organizations. Conflicts of succession emerges as a critical moment for developing trust and shared vision. Although the role of conflict in innovativeness is confirmed, it is necessary to further explore in-depth open-mindedness and concurrence-seeking in family firms. This study paves the way to further research, which looks into family firms from a psychological perspective.The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.The study has been carried out following the ethical guidelines and procedures of the Doctorate Program of Psychology of Communication and Change of the both Autonomous University of Barcelona and University of Barcelona, to which the first author is a Doctoral candidate and the ethical standards for research established by the American Psychological Association . All parCA-A carried out the data collection, literature review, research design, writing the manuscript, coded the textual material, and trained and supervised the coders who participated in the data quality control procedures. IA and MP made a substantial, direct and intellectual contribution through supervising all the stages of the research process, and reviewing the several drafts of the article. MA supervised the method and conducted the polar coordinate analysis. All authors contributed to the article and approved the submitted version.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. The handling editor declared a shared affiliation with several of the authors CA-A, IA, and MA at time of review."} +{"text": "The supply of transplants or autologous materials for the repair or replacement of defective cardiovascular tissues is largely inadequate and it does not fulfill the increasing demand resulting from the generalized aging of the population. Although the mortality has decreased over time thanks to improvements in prevention and new intervention protocols, the quality of life of the patients is still often poor, due to the limited regenerative capacity of the cardiovascular tissues or the use of implant devices made with non-living tissues not endowed with regeneration ability . In the last decades, the growth of the tissue engineering as a novel interdisciplinary framework to supply living materials able to regenerate lost tissues has made huge steps in refinement. This progress has been promoted by advancements in fabrication and refinement of biomaterials, and the possibility to combine cells with permanent or bio-absorbable scaffolds able to provide the necessary geometrical, chemical, and mechanical information to instruct proper cellular behavior. 
The aim of the present Research Topic was to collect experimental work and review articles illustrating improvements in the state-of-the-art for the production and application of biomaterials for the preparation of cardiac, valve, and vascular tissue replacements.Cathery et al., the Authors combined a commercial porcine-derived extracellular matrix patch with umbilical cord derived pericytes, to generate a vascular graft with increased mechanical resistance thus providing an ideal platform for implantation in pediatric surgery procedures such as the one for Fallot tetralogy. Amadeo et al. seeded valve interstitial cells inside decellularized porcine pericardium, to obtain a repopulated scaffold containing proliferating and mature cells capable of depositing extracellular matrix components. Given that the decellularization process removes major antigens such as \u03b1GAL , which can be obtained by bacterial fermentation. Alongside direct tissue replacement, biomaterials can be used to create 3D ex vivo models to study the processes involved in the development of congenital and acquired cardiovascular diseases, and to test novel therapeutic approaches aimed at reducing the burden of disease. The review by Iop identifies the advantages and limitations of bioengineering 3D tissue models in vitro for the study of common cardiovascular diseases such as arrhythmias, cardiac infections and autoimmunity, cardiovascular fibrosis, atherosclerosis, and aortic valve stenosis. A final example of cardiac disease modeling was provided in the study by Bracco Gartner et al. in which a tunable stiffness hydrogel (gelatin methacryloyl) was combined with human cardiac fibroblasts to create a 3D in vitro model of cardiac fibrosis. Results of this study showed the ability of these cells to modulate the mechanical properties of the extracellular matrix and thus controlling fibrosis not only through the release of paracrine signals, but also modification of tissue biophysical features.Decellularized scaffolds ideally reproduce the complex structure of the tissue of origin, however the use of natural and synthetic polymers provides a level of tunability and functionalization difficult to achieve in decellularized tissues. In particular, natural polymers are more sustainable, biocompatible and present natural ligands promoting cell adhesion. In the article collection, In summary, the publications within this Research Topic illustrate the multi-faceted role of biomaterials in the fight against cardiovascular diseases, which are endemic and increasingly relevant in our society. The spectrum of the applications of these materials indicate an increasing effort toward reducing the long-term symptoms of acute cardiac events, such as valve failure or myocardial infarction, and the push to find new therapeutic options to relieve burden of congenital cardiovascular diseases.PC and MP wrote the editorial. 
All authors contributed to the article and approved the submitted version.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "In recent years, the study of consumer behavior has been marked by significant changes, mainly in decision-making process and consequently in the influences of purchase intention rational processes; (ii) emotional resources; (iii) the cognitive currents arising from the theory of social judgment; (iv) persuasive communication; (v) and the effects of advertising on consumer behavior satisfaction between the act of buying and the results obtained , so purchases made in physical stores tend to be more impulsive than purchases made online. This type of shopping results from the stimulation of the five senses and the internet does not have this capacity, so that online shopping can be less encouraging of impulse purchases than shopping in physical stores (Moreira et al., Researches developed by Aragoncillo and Or\u00fas reveal tFollowing the logic of Platania et al. we consiAll authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "After decades of below replacement fertility, China is now experiencing rapid population aging and the lives of the growing older population are being shaped by massive social and economic change. Of particular importance, is the large-scale migration of working-age adults from rural areas to large cities in search of job opportunities. The departure of migrants from their rural hometowns has resulted in a large population of left-behind older men and women. This distinctively Chinese demographic phenomenon has spurred scholarly interest in the emotional well-being of this older left-behind population, but careful demographic description of aging, migration, grandparenting, and loneliness has yet to be conducted. We bridge this gap by describing the average remaining life spent lonely by older men and women in China. We use Sullivan\u2019s method to calculate lonely life expectancy by urban/rural residence and by the migration status of adult children (as proxied by the presence or absence of coresiding children). We use data from the Harmonized version of the Chinese Health and Retirement Longitudinal Study and focus the analysis on adults aged 55-100. Preliminary results show that, at age 55, women on average spend 9% more of remaining life lonely than men and that rural men and women spend more of their remaining life lonely than their urban counterparts. We will extend these life table analyses by conducting multivariate analyses of the correlates of loneliness in urban and rural China to better understand the role of migration and grandparenting responsibilities."} +{"text": "The lessening of food wastage, specifically among nations where about half of its worldwide quantity is produced, has turned to be a mammoth challenge for environmental, social and economic sustainability, and represents one of the seventeen Sustainable Development Goals (SDG) within the Agenda 2030. The quantity of food being thrown away in spite of being in an edible condition has become alarming in middle and high income countries. 
The COVID-19 lockdown strategy, both at local and international levels, has expressively altered work, life and food consumption behaviors globally, directing to food wastage as a multi sectoral issue. Pakistan has no exception to such manifestations. The main objective of this study is to analyze the perceptions of rural people of Pakistan regarding food wastage during the COVID-19 pandemic. To evaluate whether behavior about food wastage among rural households varied or not during the pandemic, a descriptive survey was carried out using a self-administered questionnaire and 963 responses were selected for further empirical investigations. The findings of the study reveal that food waste actually decreased in spite of an increased amount of purchased food during the lockdown. Our results highlight that the effect of the pandemic has led to reduction in food wastage among rural respondents, an increased consciousness for the morals of food waste, and awareness of environmental impacts of food wastage. The conclusions of this study highlight that rural consumers of Pakistan are emerging with a new level of responsiveness about food wastage with possible positive impact on the environment in terms of decreased greenhouse gas (GHG) emission and other pollutants. The study findings imply that this pandemic time provides a suitable window to raise awareness about food wastage among rural as well as urban households while contemplating effective strategies to overcome the issue of food wastage in the country. The th most populated country in the world, having a population of about 225.2 million. Millions of people in the country don\u2019t have an approach to balanced and healthy food, while food wastage is also massive in the country on a daily basis. The annual food wastage is estimated at around 36 million tons in parties, hotels, weddings, and households [th position out of 107 countries across the globe. World Food Program (WFP) 2020 highlights that about 1/4 of households in the country (49 million people) are facing moderate food insecurity while 10% of the household (21 million people) are facing serious food insecurity. In March 2020, the World Health Organization (WHO) confirmed the outbreak of Corona Virus Disease-2019 as a pandemic . To restuseholds . On the Food and Agriculture Organization\u201d (FAO) of the \u201cUnited Nations\u201d (UN), an economy with high levels of malnutrition costs around three to four percent of GDP. In the case of Pakistan, it is estimated that malnutrition and its consequences cost the country 3% of GDPevery year, so the government set a target to achieve sustainable food security by 2025 [The situation of food security has strong links with the human capital of the country. According to \u201c by 2025 . It is iThe earlier research uncovered the different dimensions of food waste. First, the present literature on food wastage behavior has been significantly led by qualitative explorations investigating various problems relevant to barriers and motives to reduce food wastage. However, these studies were not able to create causality and connections among the concerned variables . Second,This study examines the behavior of food waste in rural households of Pakistan during the pandemic. Rural residents usually bear accessibility restrictions and food purchases may be well scheduled with the opportunity that the nearest food retail outlet is remote and transportation may not be readily available and costly. 
Reduction in food waste not only requires the determination of responsible factors but must recognize the perceptions of rural people during the pandemic with respect to gender, income level, size of households and distance for shopping. The study findings may help policymakers to devise policy in rural areas of Pakistan for prevention of food waste in the country.st -15th March 2021. A set of questions was asked about purchasing behavior of rural consumers, their food expenditures, production of food waste and food relevant behavior during the COVID-19 pandemic. A consent form is obtained from participants willing to participate in the survey, then questionnaire was given or interviewed the questions of the questionnaire. The respondents who filled the questionnaire were asked to submit the consent form. The consent was taken from respondents at first, who were contacted through telephones. The respondents who were not willing to give consent or submit a consent form, are excluded from the sample. The respondents having age more than 18 years and main purchaser of food are allowed to participate in the survey. A set of qualitative questions was asked from respondents about their food purchasing behavior, waste productions, food expenditures and other food related behaviors. The data collected through online forums and mobile phones is treated collectively for empirical analysis as the same questionnaire was conveyed to both groups of respondents. Likert scale is used to quantify the information. The questionnaire is given in the A questionnaire in the local national language (Urdu) was administered in rural areas of Pakistan using the online platform and telephonic interviews. The respondents were contacted through mobile telephones if they do not have an easy access to the internet. A procedure of stratified sampling was adopted that produced a representative sample of the population. Stratified sampling is a method of partitioning a population into sub-populations called stratum. The members of the population are classified into homogenous subgroups. The sample was stratified using age, gender, income. Every member of the population is assigned only a single stratum. Then observations of each stratum are gained through simple random sampling so sampling errors are reduced in this way. There was random contact with the rural people of the country and few screening questions were asked with respect to their food waste behaviors, gender, age, and income level. The quotas were specified with respect to the age, gender, income and number of households. These quotas of different categories made the representative sample of the Pakistani population. The respondents of the survey were allowed to participate until the quotas of different categories were filled. Only those 18 years aged respondents were considered who confirmed their status as main food purchaser at household level. The final sample consisted of 963 rural household consumers of Pakistan due to evaluation of rural behavior of people about food wastage. Among the sample, 678 respondents provided information through online means while information from 285 respondents was collected through telephones. The data is collected through questionnaire and interviews in the period from 1The variables are described in the An ordered logit model is regressed to determine the behavior of rural respondents regarding household food waste during Covid-19. 
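As a rough illustration of the kind of ordered logit (proportional-odds) regression described above, the sketch below fits a Likert-scale food-waste outcome on survey predictors using statsmodels. It is not the authors' code: the file name, column names and category coding are illustrative assumptions based on the variables mentioned in the text.

```python
# Minimal sketch, assuming a survey table with the columns named below (hypothetical).
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("rural_food_waste_survey.csv")  # hypothetical data file

# Ordered outcome: change in household food waste during the pandemic,
# coded 1 (decreased a lot) ... 5 (increased a lot) on a Likert scale.
y = df["food_waste_change"].astype(
    pd.CategoricalDtype(categories=[1, 2, 3, 4, 5], ordered=True))

# Illustrative predictors: changes in purchase quantity/frequency, household size, income category.
X = df[["qty_purchased_decreased", "purchase_freq_decreased",
        "qty_purchased_increased", "purchase_freq_increased",
        "household_size", "income_category"]]

model = OrderedModel(y, X, distr="logit")       # proportional-odds specification
res = model.fit(method="bfgs", disp=False)
print(res.summary())

# Odds ratios for the slope coefficients (the threshold parameters are excluded).
print(np.exp(res.params.iloc[:X.shape[1]]))
```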
The biographic information is a source to determine the behavior of food wastage in different age groups. The income level of respondents is categorized into three categories because the per capita nominal income of Pakistan is about $1544 (Rs. 300000 estimated) in 2020. Food waste (FW) is a dependent variable which measures the change in amount of wasted food during the pandemic. The dependent variable is measured through the ordered nature of Likert scale by using an ordered logit model. The summary statistics of the variables are given in 2 = \u22122641.28).The coefficients of an ordered logit model assume the same relationship among all pairs of the groups and only one coefficient represents each variable . It is the main difference between the ordered logit model and a multinomial logit model. The Brant test is used to verify the assumption of stratification. The Brant test confirms the presence of proportional odds assumption considered by the ordered logit model and FQD are significant at 5% and 1% respectively. These findings reveal that purchasers of household food decreased the quantity of purchased food and also reduced the frequency of food purchasing. These food purchasers are not intended to increase the quantity of wasted food during the pandemic as compared with those participants who remained unaltered with respect to their habits of amount food purchasing and frequency of food purchasing as supported by earlier studies \u201317, 20. In the same way, the odds ratio of the variables FPI and FQI are also significant at 1%, showing that participants who raised the frequency of food shopping and quantity of purchased food are not intended to increase the wastage of food during the pandemic as compared with those who remained altered their behavior with respect to frequency of food shopping and amount of purchased food. The odds ratios of the variables NHH 0.863) and LYY (0.481) are also significant both at 1% depicting that as the size of household decreases there is more probability to increase the amount of food waste. Moreover, decreasing levels of income is also a reason for reduction of food wastage . It can 63 and LYConsumer behaviors in their eating and buying habits are significantly changed due to the Covid-19 pandemic . The finMoreover, the size of the household played an important role with respect to food waste during the pandemic in rural areas of the country . The finIt was found that a small percentage of respondents reportedan increase in food waste due to increased food stocks. The fear of food shortage caused \u201cpanic buying\u201d behavior and purchased excessive food remained unused and spoiled . These wThe outbreak of the COVID-19 pandemic at the beginning of 2020 had altered perceptions and ways of consumers using food products in a significant way. This study explored the effect of COVID-19 pandemic on awareness of rural Pakistani consumers, their behaviors and attitudes towards food waste and consumption. People are spending time at home and outdoor dining is restricted due to the pandemic, so there is a major change in the behaviors and attitudes of people with respect to health and food. People are more conscious to manage their budgets in healthy food and avoid food wastage. This behavior is not limited to urban areas, but rural consumers are also in line with these global changes. There is a need to explore the perceptions of rural people of Pakistan towards food waste in the time of pandemic as it is a less focused area in the literature. 
This paper is an attempt to explore the factors influencing the quantity of wasted food in rural areas of the country. Rural households usually bear accessibility constraints and their behaviors towards food purchase are well planned because grocery stores are remote and transportation means are costly and may not be available readily. So there are clear and vast changes in the way people are interacting, shopping and eating around the food in rural areas also. The prime objective of the study was to determine the food wastage behavior and consumer trends about food purchases in rural areas of Pakistan during the pandemic of COVID-19. The general findings of the study conclude that food consumption and quantity of food purchased increased due to shut down/limited operations of restaurants but food wastage decreased in rural areas of the country. These results suggest that the pandemic increased the level of awareness among rural people to avoid food wastage that creates significant environmental and economic problems in the country. The rural inhabitants have also realized that wasted food is a source of methane gas emissions, which is more harmful to the environment than carbon dioxide. So we can also protect the environment by avoiding food wastage. The earlier studies on the topic focused on the urban areas of the country but this study is a first attempt to explore the food waste and food purchasing behavior in rural areas of Pakistan during Covid-19 pandemic. So the findings of this study has important implications for devising policy about rural areas of the country. The COVID-19 has altered the behavior of people about food wastage, so this behavior may be prolonged in future through proper awareness. This change in behavior will be fruitful for the environment and the economy, both. The food resources will be saved for the future generations and the growing population while the environment will be protected from methane. The earlier studies highlight that food wastage decreased in different regions of the world during the pandemic but there is ample space to investigate this problem in many more dimensions such as behaviors about food waste among different socio demographic groups. It would also be interesting to know the persistence of changed behavior with respect to food waste even after COVID-19 in future studies.S1 Appendix(DOCX)Click here for additional data file.S1 Data(XLSX)Click here for additional data file."} +{"text": "In the Funding statement, the name of the funder is spelled incorrectly. The correct spelling is Deanship of Scientific Research at Princess Nourah bint AbdulRahman University, Riyadh."} +{"text": "Quality assessment of stitched images is an important element of many virtual reality and remote sensing applications where the panoramic images may be used as a background as well as for navigation purposes. The quality of stitched images may be decreased by several factors, including geometric distortions, ghosting, blurring, and color distortions. Nevertheless, the specificity of such distortions is different than those typical for general-purpose image quality assessment. Therefore, the necessity of the development of new objective image quality metrics for such type of emerging applications becomes obvious. The method proposed in the paper is based on the combination of features used in some recently proposed metrics with the results of the local and global image entropy analysis. 
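As a simple illustration of the global and local entropy analysis mentioned above, the following sketch computes the Shannon entropy of a whole 8-bit grayscale image and of non-overlapping blocks, and forms differential features between the constituent and stitched images. The block size and the placeholder arrays are assumptions; the actual feature definitions follow later in the text.

```python
# Illustrative sketch, assuming 8-bit grayscale inputs and non-overlapping 32x32 blocks.
import numpy as np

def shannon_entropy(values, bins=256):
    """Shannon entropy (bits) of an intensity histogram."""
    hist, _ = np.histogram(values, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def entropy_features(img, block=32):
    """Global entropy plus mean and variance of block-wise local entropy."""
    h, w = img.shape
    local = np.array([shannon_entropy(img[r:r + block, c:c + block])
                      for r in range(0, h - block + 1, block)
                      for c in range(0, w - block + 1, block)])
    return shannon_entropy(img), local.mean(), local.var()

# Placeholder images: one stitched panorama and its constituent views.
stitched = np.random.randint(0, 256, (512, 2048), dtype=np.uint8)
constituents = [np.random.randint(0, 256, (512, 640), dtype=np.uint8) for _ in range(4)]

g_s, ml_s, vl_s = entropy_features(stitched)
cons = np.array([entropy_features(c) for c in constituents]).mean(axis=0)
print("delta global:", cons[0] - g_s,
      "delta mean local:", cons[1] - ml_s,
      "delta var local:", cons[2] - vl_s)
```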
The results obtained applying the proposed combined metric have been verified using the ISIQA database, containing 264 stitched images of 26 scenes together with the respective subjective Mean Opinion Scores, leading to a significant increase of its correlation with subjective evaluation results. Panoramic images, constructed as a result of image stitching operation conducted for a series of constituent images with partially overlapping regions, may suffer from various distortions, including blur, ghosting artifacts, and quite well visible geometric and color distortions. The presence of such issues decreases the perceived image quality and in some cases may be unacceptable from an aesthetic point of view. Although modern cameras and smartphones are usually equipped with software functions making it possible to properly register the overlapping areas of individual photos to create panoramic images, some additional requirements should be fulfilled by users during the recording to prevent such problems. Nevertheless, the growing availability of software and hardware solutions causes higher popularity of panoramic images which may be useful, e.g., as wide background images, in virtual reality scenarios, as well as in mobile robotics for the Visual Simultaneous Localization and Mapping (VSLAM) applications.Considering the modern applications of image stitching and image registration algorithms, related to the use of cameras mounted on mobile robots, the quality of obtained panoramic images is very important due to potential errors in vision-based control of their motion. In the case of decreased image quality, such images might be removed from the analysis to prevent their influence on the robot\u2019s control. Another interesting direction of such research in mobile robotics concerns the fusion of images acquired by unmanned aerial vehicles (UAVs) ,2.One of the most relevant factors, influencing the final quality of the panoramic images, is the appropriate choice of distinctive image features used to match the same regions visible on the \u201cneighboring\u201d constituent images. Nevertheless, some additional post-processing operations conducted after the assignment, such as blending and interpolation, may also have a significant impact on the quality of stitched images. Some obvious examples might be related to different lighting conditions and background changes visible on the constituent images which may cause some easily noticeable seams. Another factor, related to geometric distortions, is the influence of lens imperfections and a too low number of detected keypoints used for further image matching, particularly for constituent images with overlapping areas less than 15\u201320% of the image area. Although some corrections, e.g., calibration, chromatic aberration or vignetting corrections, may be conducted using both freeware and commercial software for image stitching, even after the final blending some imperfections may still be visible. Since a synchronous acquisition of constituent images using multiple cameras may be troublesome in many practical applications, some problems may also occur for moving objects, particularly leading to motion blur and ghosting artifacts.Although during several recent years a great progress has been made in general-purpose image quality assessment (IQA), the direct application of those methods proposed by various researches for an automatic objective evaluation of stitched images is troublesome, or even impossible. 
This situation is caused by significant differences between the most common types of distortions and those which may be found in stitched images. Therefore, the development of stitched images quality assessment methods is limited by the availability of the image databases containing panoramic images subject to different types of distortions typical for image stitching together with subjective quality scores. Since the first attempts to such quality metrics have not been verified using such datasets, there is a need of their additional verification, as well as the analysis of their usefulness in the combination with some other approaches.Such experiments are possible with the use of the Indian Institute of Science Stitched Image Quality Assessment (ISIQA) dataset consisting of 264 stitched images and 6600 human quality ratings. One of the methods recently proposed for quality assessment of stitched panoramic images, verified using the ISIQA database, is the Stitched Image Quality Evaluator (SIQE) proposed by the authors of this dataset . This meOne of the methods for the increase of the correlation of objective metrics with subjective quality evaluation results is the application of the combined metrics, successfully applied for general-purpose IQA ,6, multiThe motivation for the combination of the entropy-based features with some existing metrics has been related to the observed increase of the local image entropy for the regions containing some kinds of distortions typical for the stitched images. According to expectation, an increase of the global image entropy for lower quality images may also be observed. Nevertheless, as the image entropy is highly dependent on the image contents, a more useful approach is the comparison of the entropy-based features calculated for the constituent and the stitched images in a similar way as for 36 features extracted in the SIQE framework ,10, leadObjective image quality assessment methods may be generally divided into full-reference (FR) and no-reference (NR) methods. The latter group also referred to as \u201cblind\u201d metrics, seems to be more attractive for many applications since FR metrics require the full knowledge of the reference (undistorted) images. Since such \u201cpristine\u201d images are often unavailable, a partial solution for this problem might be the use of the reduced-reference methods where the knowledge of some characteristic parameters or features of the original image is necessary.Nevertheless, the FR quality assessment of the stitched images should be considered in another way since perfect quality panoramas are usually unavailable, however, there is still a possibility of some comparisons with constituent images that are typically at the disposal. Therefore, the stitched image quality assessment may be considered as an indirect assessment of the quality of the applied stitching method. 
In view of these assumptions, these methods cannot be directly classified as \u201cpurely\u201d FR or NR IQA algorithms, since they use the data from additional (constituent) images but do not utilize any direct comparisons of the distorted panoramas with the \u201cpristine\u201d stitched images.One of the first interesting approaches to stitched IQA is based on the attempt of using the well-known Structural Similarity (SSIM) method examinedColor correction and balancing in the image and video stitching has also been investigated in the paper , whereasThe application of the local variance of optical flow field energy between the distorted and reference images has been combined with the intensity and chrominance gradient calculations in highly-structured patches by Cheung et al. , allowinUnfortunately, regardless of their popularity and good results obtained in some other applications, some data-driven quality assessment methods cannot be successfully applied for the quality assessment of stitched images due to the necessity of training with the use of a great number of images . Some reOne of the methods partially utilized in this paper has been proposed by Solh and AlRegib ,10 who hAs mentioned earlier, one of the most interesting approaches to quality assessment of panoramic stitched images has been recently developed by the inventors of the ISIQA database . The maiThe detection of ghosting artifacts observed as some additional edges or replications of some image regions, caused by imprecise aligning of the overlapping regions of constituent images during the stitching procedure, is based on the use of the multi-scale multi-orientation decomposition. For this purpose, the use of the steerable pyramids has been proposed by the authors of the paper , who havAlthough the Pearson\u2019s Linear Correlation Coefficient (PLCC) for the ISIQA dataset is equal to 0.8395 with Spearman Rank Order Correlation SROCC = 0.8318 reported in .Considering the necessity of the use of a large amount of the ground truth data for training to avoid overfitting of the trained model, there is a limited possibility of application of the deep CNN-based methods, as stated by Hou et al. . For thiAlthough the fundamental element for our research is the SIQE metric, its extension towards a combined metric requires an implementation of some additional metrics and calculation of additional features, as well as their further optimization making it possible to increase the correlation with subjective quality scores.s to 21, which is a trade-off between a reasonable computation time and accuracy. The subscript I denotes the reference image whereas J stands for the distorted image, and C is a constant added to prevent instability in case of the denominator value being close to zero.Two additional sub-measures have been incorporated from for thisI has to be computed using the formula [I. These values have been computed for the same non-overlapping macroblocks as previously. Finally, the texture randomness index has been mapped to the object index in the following wayTo compute the overall quality index a weighted average of luminance and contrast index of each macroblock should be used. The weights\u2019 values are obtained based on the reference image in the following way. First, the texture randomness index at macroblock formula (2)t.. 
ofThe luminance and contrast index for The values of In our study, the reference image has been a region of interest (ROI) selected from each constituent image and the corresponding ROI found in the evaluated stitched image. All the formulas have been implemented as MATLAB functions. Instead of the third MIQM term, namely spatial motion index, partially utilizing the local entropy, we have used the additional global and local entropy-based features, leading to the increase of the proposed metric\u2019s correlation with subjective MOS values.The initial experiments, conducted using 264 stitched images obtained for 26 scenes that are included in the ISIQA dataset and stitched (s) images are subtracted, respectively. Regardless of these two differential features, their equivalents for the constituent and stitched images may be independently analyzed as well.Therefore, the additional entropy-based features are defined as :(7)ent\u00aflAfter the experimental verification of the possible combinations, the initially considered combined metrics, referred to as EntSIQE, have been defined in two variants based on the weighted sum and weighted product :(9)EntSIThe additional extension of these metrics with the use of two indexes, originating from the MIQM approach (Equations and 5)))5)), desFormulas and (5),The additional useful feature, leading to a further increase of the correlation of the proposed metric with subjective scores is the variance of the local entropy that may be calculated subtracting the averaged variances determined for the constituent and stitched images according to:Hence, the final formulas may be expressed as:It is worth to note that the necessity of the use of additional weighting exponents in Formula in compaTo verify the correlation between the objective and subjective quality scores for the 264 images from the ISIQA database, three correlation metrics being the most typical in the IQA research, have been used.Pearson\u2019s Linear Correlation Coefficient (PLCC) between the objective metric Q the Mean Opinion Score (MOS) values, illustrating the prediction accuracy of the image quality, is defined as the ratio of the covariance to the product of the standard deviations:It should be noted that in many IQA related papers, the additional nonlinear regression is applied, usually with the use of the logistic function, according to the recommendations of the Video Quality Experts Group (VQEG). Nevertheless, in the case of the combined metrics, it does not lead to meaningful changes of the correlation coefficients due to the nonlinear combination of various features. This has also been verified experimentally both for the original SIQE metric as well as for all the proposed combinations.n denotes the number of images. The second one is Kendall Rank Order Correlation Coefficient (KROCC) expressed as:To verify the prediction monotonicity, two rank-order correlations may be applied. Spearman\u2019s Rank Order Correlation Coefficient (SROCC) is given as:fminsearch function with additional verification of the local minima.The calculations of all parameters as well as the optimizations have been conducted in MATLAB environment. For the optimization of exponential parameters The obtained results for the original SIQE metric as well as for the initially considered and finally proposed combined metrics are presented in As can be easily noticed much more linear relation between the proposed objective metrics and MOS values can be observed in comparison to the original SIQE metric. 
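For reference, the three correlation measures described above can be computed directly with SciPy, and the exponents of a combined metric can be tuned by maximizing PLCC in the spirit of the MATLAB fminsearch (Nelder-Mead) optimization mentioned in the text. This is only a sketch with placeholder data, not the authors' implementation.

```python
# Sketch of PLCC / SROCC / KROCC evaluation and Nelder-Mead exponent tuning (placeholder data).
import numpy as np
from scipy.stats import pearsonr, spearmanr, kendalltau
from scipy.optimize import minimize

rng = np.random.default_rng(0)
mos = rng.uniform(0, 100, 264)             # subjective scores for 264 images (placeholder)
q = 0.8 * mos + rng.normal(0, 10, 264)     # objective metric values (placeholder)

plcc, _ = pearsonr(q, mos)      # prediction accuracy
srocc, _ = spearmanr(q, mos)    # prediction monotonicity (rank order)
krocc, _ = kendalltau(q, mos)   # prediction monotonicity (pairwise concordance)
print(f"PLCC={plcc:.4f}  SROCC={srocc:.4f}  KROCC={krocc:.4f}")

# Tuning exponents of a weighted-product combination so that PLCC with MOS is maximized.
features = rng.uniform(0.1, 1.0, (264, 3))         # placeholder per-image feature values

def neg_plcc(w):
    combined = np.prod(features ** w, axis=1)
    return -abs(pearsonr(combined, mos)[0])

best = minimize(neg_plcc, x0=np.ones(3), method="Nelder-Mead")
print("optimized exponents:", best.x)
```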
Analyzing the number and location of outliers, most of them are located closer to the linear trend visible on the scatter plots for the proposed metrics. The values of the parameters obtained for the proposed combined metrics are presented in Analyzing the obtained results, a significant increase of the correlation with subjective scores may be observed for the proposed approach. Since the values of the parameters used for all six metrics or features are not close to zero (for the EntSIQE Formula .The extensions of the recently proposed SIQE metric towards the combined metric proposed in the paper make it possible to achieve considerably higher correlation of the designed objective metrics with subjective quality scores of the stitched images delivered in the ISIQA database. The obtained results are promising and confirm the usefulness of the combined metrics also for the automatic quality assessment of the stitched panoramic images. As shown in the ablation study, the application of the additional entropy-based features utilizing the local image entropy and its variance is one of the crucial elements increasing the correlation with the MOS values, since their removal causes the most significant decrease of the PLCC, SROCC and KROCC values, obviously with the exception of the original SIQE metric.One of the potential directions of further research might be the application of the proposed approach for quality assessment of parallax-tolerant image stitching methods as well"} +{"text": "In this issue, we present promising developments in the field of bacterial enteric vaccines. For the last three decades, the development of new enteric vaccines has played Cinderella to the three ugly sisters Malaria, HIV and TB. Public health programs for enteric vaccines historically suffered from underfunding, despite enteric pathogens being major killers of infants in areas with a high burden of disease, despite the World Health Organization (WHO), non-governmental organizations and aid agencies committing resources to preventing cholera, typhoid, invasive non-typhoidal salmonellosis and shigellosis in low-income and middle-income countries (LMICs). A change came in 2010, when vaccination was chosen as the main intervention strategy to combat a major cholera epidemic in Haiti. This put vaccines in the spotlight as a control measure for enteric diseases. In 2013, the WHO established the stockpile for inactivated oral cholera vaccines (OCVs) to meet the global demand for this vaccine [The world received a further boost by the campaigns with newly developed rotavirus vaccines for infants, which have had a tremendous impact on the morbidity and mortality of this diarrheal disease, lowering the cost to public health as well as limiting the economic impact of the disease. The involvement of the Bill and Melinda Gates Foundation (BMGF) provided crucial financial support and guidance for these programs and also expedited the development and licensing of new bacterial vaccines for enteric diseases, exemplified by the incorporation of new typhoid conjugate vaccines (TCVs) in the public health programs of LMICs.The WHO collaborating centers played a role in smoothing regulatory hurdles associated with the quality control of the OCVs and new TCVs. In 2009, the WHO Expert Committee on Biological Standardization approved a major standardization program for new enteric vaccines with the assistance of one vaccine developer, Novartis Vaccines for Global Health . 
This program also benefited from the input of the WHO, the Coalition against Typhoid, PATH, BMGF, the International Vaccine Institute (IVI), the Oxford Vaccine Group (OVG) and others. This network delivered WHO workshops to train local laboratory staff, WHO international standards for batch release assays and \u2018open access\u2019 in vitro diagnostic tests for comparing the performance of vaccines used in different clinical trials or field settings. In addition, written WHO guidelines were produced such as the recently updated guideline on the safety and quality of TCVs [Currently, WHO international standards are being developed for Shigella vaccines, inactivated oral cholera vaccines and vaccines for invasive non-typhoidal salmonellosis. It is hoped that these reference reagents will aid the development of batch release assays and thereby the manufacture and deployment of these vaccines.The impact of the introduction of new TCVs and OCVs will slow the dissemination of antimicrobial resistance and will impact on the spread and evolution of antibiotic resistance organisms amongst humans and animal species used for food production. It will help to reduce outbreaks in areas subject to flooding and with an insecure clean water supply and protect the health of communities from life-threatening and debilitating diarrheal disease and enteric fever.A comprehensive update on the status of more than thirty candidate vaccines to protect against diarrheal disease ;Insight into serum markers of immunogenicity ;Studies highlighting the implementation of assays to improve the safety of vaccines containing outer membrane vesicles ;Studies describing methods which can be used to show the thermal stability of vaccines outside the cold chain ;Escherichia coli infections .In silico engineering of the fimbrial tip adhesin holds promise for new improved subunit vaccines to combat enterotoxigenic In this issue, the contributions from leading authors provide the following:This collection of reports taken together will help to improve our understanding of these new enteric vaccines and ultimately expedite their acceptance and use as public health tools in resource-stretched settings."} +{"text": "National Science Review [1].The two research articles from the Institute of Neuroscience of Chinese Academy of Sciences represent the new format for publishing significant findings as stated in an editorial of The first article describes the generation of a group of gene-edited macaque monkeys by CRISPR/Cas9 editing of monkey embryos that deleted the expression of a core circadian regulatory transcription factor BMAL1. These monkeys exhibited the loss of sleep and psychosis-like phenotypes. The second article describes the cloning of a group of monkeys by somatic cell nuclear transfer (SCNT), using a juvenile gene-edited monkey from the first study that exhibits the most severe circadian-disorder phenotypes. These two pieces together demonstrate that a population of customized gene-edited macaque monkeys with uniform genetic background will soon be available for biomedical research.NSR's philosophy is to avoid unnecessary lengthy delay in publishing significant findings, if the purpose of the delay is simply to wring out the last few percentage points of the results [2]. The timely publication of the main finding may in fact facilitate exhaustive investigations in follow-up work.The two articles went through the review and revision process thoroughly within 4 weeks. 
There is undoubtedly room for improving the two papers. For example, some reviewers inquire whether the five monkeys cloned by SCNT have been examined for the same array of phenotypes as the founder A6. To meet these requirements, the authors will have to delay the publications by several months."} +{"text": "Relational practice is characterised by genuine interaction between families and healthcare professionals that promotes trust and empowerment. Positive clinical outcomes have been associated with relational practice. To assess and examine in-hospital interventions designed to promote relational practice with families in acute care settings of emergency departments, intensive care units and high care units. The preferred reporting Items for Systematic Reviews and Meta-Analyses guidelines informed the design of this scoping review. To identify relevant studies, databases and the search engine Google Scholar were searched using terms for core elements of relational practice and family engagement. Of the 117 articles retrieved, eight interventional studies met the search criteria. The interventions focused on relational practice elements of collaborating with and creating safe environments for families, whilst only one addressed healthcare professionals being respectful of families\u2019 needs and differences. In relation to the nature of engagement of families in interventions, the focus was mainly on improving family functioning. Family engagement in the interventions was focused on involving families in decision-making. The scoping review revealed a limited number of in-hospital interventions designed to promote relational practice with families in acute care settings. Further research is encouraged to develop such interventions.The scoping review has highlighted specific elements of relational practice that have been overlooked in the mapped interventions. This provides guidance on where future interventional research may be focused. Families play an important role in caring for their loved ones in acute healthcare settings, whilst simultaneously assisting healthcare professionals (HCPs) with vital information for the treatment of the patient and closely aligned to the aim of the review. The research questions were as follows: 1) What in-hospital interventions are available to promote relational practice with families in acute care settings of EDs, ICUs and HCUs? 2) What elements of relational practice did the interventions address? 3) What was the nature of family engagement in the interventions?A search strategy detailing search terms see and idenThree authors were involved in the review process. Articles in English which were published between January 2005 and December 2018 were included in the review and this was informed by the interest in relational practice and quality outcomes of complex healthcare contexts in the literature of the studies reported that the interventions had positive outcomes of improved family support and improved family decision-making were retrieved that included interventions for promoting relational practice with families in acute care settings. All the studies in the review were conducted in developed countries, where health resources, cultural and social perspectives of a family\u2019s role during illness and hospitalisation and the family\u2019s experience of illness may be different from that of developing countries (Shields A limited number of studies Shields . 
AccordiThe reviewed studies used different study designs with two studies being randomised controlled trials. Vincent stated tSimilar to other interventional studies by Torke et al. and HeylWhen considering the outcomes of the interventions reported in the included studies, the family members indicated that their perceived expectations and needs were met by the interventions. Torke et al. recounteHCPs respecting families\u2019 needs and honouring family differences in terms of their values systems and practices (Fletcher It was notable, that the interventions in the review, developed for acute care settings did address some elements of relational practice. Previous research has called for strategies to support family collaboration in acute care setting (Mackie, Marshall & Mitchell Fletcher were addThe nature of family engagement in the interventions varied, according to the dimensions proposed by Knafl et al. . AlthougAlthough the authors were rigorous in the review process, by using a recognised methodology it is possible that some studies could have been missed. Publication in English as an inclusion criterion may have led to the omission of important interventional studies published in other languages. Most of the studies identified in this scoping review were conducted in the ICU, thus limiting translation to other acute care settings especially the ED, which is characterised by transient care and focuses on rapid throughput of patients.The findings of this review reiterate the fact that there is a scarcity of interventional studies focusing on genuine connection between families and HCPs in acute care settings. The interventions of the reviewed studies indicated variability regarding inclusion of the elements of relational practice and the nature of family engagement in the interventions. Taking into account the positive outcomes of family and HCP collaboration in the reviewed studies, it is recommended that ongoing training and education to capacitate HCPs relationally should be a major component in future interventions seeking to promote relational practice with families."} +{"text": "To conduct a successful geomechanical characterization of rock masses, an appropriate interpretation of lithological heterogeneity should be attained by considering both the geological and geomechanical data. In order to clarify the reliability and applicability of geological surveys for rock mechanics purposes, a geomechanical characterization study is conducted on the heterogeneous rock mass of Niobec Mine , by considering the characteristics of its various identified lithological units. The results of previous field and laboratory test campaigns were used to quantify the variability associated to intact rock geomechanical parameters for the different present lithological units. The interpretation of geomechanical similarities between the lithological units resulted in determination of three main rock units . Geomechanical parameters of these rock units and their associated variabilities are utilized for stochastic estimation of geomechanical parameters of the heterogeneous rock mass using the Monte Carlo Simulation method. A comparison is also made between the results of probabilistic and deterministic analyses to highlight the presence of intrinsic variability associated with the heterogeneous rock mass properties. 
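As a generic illustration of the Monte Carlo workflow referred to in the abstract, the sketch below draws intact-rock inputs from truncated normal distributions with Latin hypercube sampling and propagates them through the Hoek-Brown (2002) relations for an undisturbed rock mass (D = 0, as assumed in the text). All numerical values are purely illustrative and are not the Niobec Mine data.

```python
# Sketch of stochastic estimation of rock mass strength via Monte Carlo Simulation (illustrative inputs only).
import numpy as np
from scipy.stats import qmc, truncnorm

n = 10_000
D = 0.0  # disturbance factor for an undisturbed in-situ rock mass

# (mean, std, min, max) for sigma_ci [MPa], mi [-] and GSI [-]; illustrative values.
inputs = {"sci": (110.0, 25.0, 50.0, 180.0),
          "mi":  (10.0,  2.0,  5.0,  15.0),
          "gsi": (70.0,  5.0, 55.0,  85.0)}

u = qmc.LatinHypercube(d=len(inputs), seed=1).random(n)   # stratified uniform samples
samples = {}
for j, (name, (m, s, lo, hi)) in enumerate(inputs.items()):
    a, b = (lo - m) / s, (hi - m) / s                     # truncation bounds in std units
    samples[name] = truncnorm.ppf(u[:, j], a, b, loc=m, scale=s)

sci, mi, gsi = samples["sci"], samples["mi"], samples["gsi"]

# Hoek-Brown rock-mass constants (2002 edition).
mb = mi * np.exp((gsi - 100.0) / (28.0 - 14.0 * D))
s_ = np.exp((gsi - 100.0) / (9.0 - 3.0 * D))
a_ = 0.5 + (np.exp(-gsi / 15.0) - np.exp(-20.0 / 3.0)) / 6.0

# Global rock-mass strength (Hoek et al., 2002).
scm = sci * ((mb + 4.0 * s_ - a_ * (mb - 8.0 * s_)) * (mb / 4.0 + s_) ** (a_ - 1.0)) \
      / (2.0 * (1.0 + a_) * (2.0 + a_))

print(f"sigma_cm: mean = {scm.mean():.1f} MPa, std = {scm.std():.1f} MPa")
```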
The results indicated that, for the case of Niobec Mine, the carbonatite-syenite rock unit could be considered as a valid representative of the entire rock mass geology since it offers an appropriate geomechanical approximation of all the present lithological units at the mine site, in terms of both the magnitude and dispersion of the strength and deformability parameters.Evaluating the reliability and applicability of geological survey outcomes for rock mechanics purposes.A geomechanical characterization study is conducted on the heterogeneous rock mass by considering the various identified rock lithotypes.The geomechanical parameters of intact units and their associated variabilities are used to stochastically estimate the geomechanical parameters of the heterogeneous rock mass by employing the Monte Carlo Simulation.A comparison is also made between the results of probabilistic and deterministic geomechanical analyses.The results indicate that, in the case of Niobec Mine, the combined syenite-carbonatite rock unit could be considered as a valid representative of the entire rock mass. Site characterization is an essential preliminary phase for implementing a successful rock mechanics program in any underground mining activity. As part of an underground mining plan, site characterization facilitates the subsequent geomechanical classifications by determining the geological settings and lithological characteristics of the area in which the mining activity is taking place. Aside from determining the geological settings and/or hydrogeological characteristics of the mining environment, site characterization contributes in estimation of the strength and deformability parameters of the numerous lithological units identified within the rock mass at the mine site , 2. In fFrom a geological perspective, various intact lithological units can be distinguished based on their lithological and petrographic differences. In some cases, numerous units could be identified within the rock having slight differences in mineral assemblage or alteration intensity , 6. ThisLithological heterogeneity should be considered when estimating the geomechanical parameters of rock masses. Variable lithological compositions along with the presence of geological structures would result in anisotropy and heterogeneity in rock mass properties. Heterogeneity and anisotropy in rock mass geomechanical characteristics, originate from the presence of multiple rock formations having horizons with different alternations , 5. The Different methods for estimating the strength and deformability parameters have been used in geomechanical characterization of rock masses. However, employment of the empirical Hoek\u2013Brown failure criterion in conjunction with the Geological Strength Index (GSI) classification system has been reported to be the most common method for estimation of rock mass properties \u201311. Convmi values of heterogeneous rock masses. Sari et al. .To eliminate generation of negative and/or false values, the normal PDFs were truncated by using the actual reported minimum and maximum values. Figures D represents the degree of disturbance of the rock mass (ranging from 0 to 1 for undisturbed in-situ rock masses to highly disturbed rock masses) [D is considered equal to 0 for this study.Subsequently, the MCS method was applied to calculate the Hoek\u2013Brown strength and deformability parameters of each rock mass type through Eqs. 
Each output parameter was generated from 10,000 iterations of randomly selected combinations of input parameters in accordance with their assigned PDF using Latin hypercube sampling algorithm (LHS). The advantage of LHS is that it provides smoother resulting PDFs with fewer iterations by using stratified sampling models. For the sake of simplicity, all the input parameters were assumed as independent variables. Although the dependence of the Hoek\u2013Brown parameters could jeopardize the probabilistic estimation of the output parameters, Sari et al., stated tBy the aim of MCS, the variability associated to the output geomechanical parameters were quantified and the Kolmogorov\u2013Smirnov test determined the best-fitted distribution function for each parameter. The mean, standard deviation and best-fitted PDF of strength and deformability parameters for each rock mass type are reported in Table Figures The results of rock mass geomechanical characterization indicated that, similar to the intact properties, the obtained values for syenite and carbonatite rock masses show slight differences and the carbonatite rock mass quality was estimated to be higher than syenite. However, estimated geomechanical parameters for the carbonatite-syenite rock mass were fairly close to the values calculated only for the carbonatite.RocData 5.0 software [D was considered equal to 0 [The software , was useual to 0 . The faiStudying the results of geological and geomechanical characterizations, emphasizes the necessity of adopting an accurate interpretation of data considering both lithological and geomechanical descriptions of a rock mass. In fact, lithological heterogeneity shouldn\u2019t provide a misleading indication on the presence of geomechanical variety throughout that rock mass.In this regard, studying the field and laboratory tests results of different test campaigns at the Niobec Mine, revealed that lithological heterogeneity rather than depth was responsible for the variation in geomechanical properties of the three main identified rock units. Studying the results proved that the largest variations in geomechanical parameters belong to the carbonatitic lithological units either with maximum separation from the syenitic lithological units or with intense alteration to chlorite. In fact, the resulted variability in strength and deformability parameters of intact samples was determined to be significantly defined by the degree of carbonatite alteration. Furthermore, geomechanical characterization of intact samples identified the carbonatitic units with a better quality than syenitic units. Determination of the Mohr\u2013Coulomb and Hoek\u2013Brown failure parameters for each constituent lithological unit and subsequently each rock unit Tables and 8, rHowever, in geomechanical perspective, the observed difference between the calculated strength and deformability parameters of carbonatite and syenite units, wouldn\u2019t be sufficient enough to justify the assumption of considering them as distinguished units within the intact rock; however, the carbonatite-syenite rock unit provided a quite reasonable approximation of geomechanical properties of the two aforementioned units. Besides, findings of local site characterization programs during the deposit exploitation phases, demonstrated that the nature of rock mass at the Niobec Mine is too complex to be discretized geomechanically and the captured irregularities in the quality of intact rock units are considered to occur in local scales , 35. 
The89 conversion method. It should be noted that even though this conversion method has been conventionally used in many similar studies, it can be unreliable, particularly for poor quality rock masses and for rocks with lithological peculiarities that cannot be easily incorporated in the RMR calculations. Therefore, it is recommended to estimate the GSI directly and not from the RMR classification. Moreover, due to the lack of data, a same value of GSI had to be considered for all the three identified rock mass units which imposed a limit upon the accuracy of the obtained results through oversimplifying the calculation of rock mass geomechanical parameters for different units.In the rock mass scale, estimation of geomechanical parameters separately for syenite, carbonatite and carbonatite-syenite rock units also determined the quality of carbonatite rock mass to be slightly higher than syenite but fairly close to the quality of the carbonatite-syenite rock mass. Even the dispersions of the estimated strength and deformability parameters, were obtained to be very similar between carbonatite and carbonatite-syenite rock masses Table . The obtUltimately, comparison between the results of deterministic and probabilistic estimation of geomechanical parameters for the particular case of carbonatite-syenite rock mass, proved the significant presence of variability associated to each parameter and inability of conventional deterministic approaches to address them entirely Table . ConventThe results of previously conducted geomechanical and geological field and laboratory tests at the Niobec Mine were combined to characterize the heterogeneous rock mass by considering the lithological and mechanical properties of identified lithological units\u2019 constituents. The aim of this study was to find a reasonable agreement between the geomechanical parameters in relation to the extensive lithological variability for describing the rock mass properties.The results of intact rock characterization indicated that the carbonatite-syenite rock unit could be considered as an appropriate representative lithology to define the rock mass geomechanical properties. Furthermore, estimated rock mass geomechanical parameters for syenitic, carbonatitic and carbonatitic-syenitic units also indicated that considering the carbonatite-syenite rock mass instead of trying to distinguish the syenite and the carbonatite as separate units provide a reasonable and reliable approximation of the geomechanical quality of rock mass at the Niobec Mine. Moreover, consideration of the carbonatite-syenite rock unit to represent the entire rock mass provides a good agreement between both the geomechanical and geological perspectives. Finally, the use probabilistic approaches instead of conventional deterministic methods in rock mass geomechanical characterization programs are highly recommended since a more realistic portray of the intrinsic nature of rock materials is depicted by considering the inherent variability associated to the geomechanical parameters."} +{"text": "Diagnostics and assessment of the structural performance of collectors and tunnels require multi-criteria as well as comprehensive analyses for improving the safety based on acquired measurement data. This paper presents the basic goals for a structural health monitoring system designed based on distributed fiber optic sensors (DFOS). 
The issue of selecting appropriate sensors enabling correct strain transfer is discussed hereafter, indicating both limitations of layered cables and advantages of sensors with monolithic cross-section design in terms of reliable measurements. The sensor\u2019s design determines the operation of the entire monitoring system and the usefulness of the acquired data for the engineering interpretation. The measurements and results obtained due to monolithic DFOS sensors are described hereafter on the example of real engineering structure\u2014the Burakowski concrete collector in Warsaw during its strengthening with glass-fiber reinforced plastic (GRP) panels. Aging of the existing infrastructure and dynamic development of new infrastructure in the cities are becoming a growing environmental threat to people. The development of large-scale infrastructure, including deep-founded buildings, is causing constant changes in the groundwater regime, which often intensifies their negative impact on susceptible structures. Conversely, urban areas are exposed to a number of additional factors that shorten the operational life of installations, such as stray currents. All these aspects have a negative impact on the condition of water and sewage infrastructure, especially those components of these systems that were built in the previous century, sometimes using low-quality materials. Leaks and failures may result in contamination of the ground around sewage collectors, as well as contamination of underground and inland waters . It is nThe article describes the concrete sewage collector originally constructed in 1964 and now reinforced with Glass\u2013fiber Reinforcement Plastic (GRP) panels. Such structural components are made of glass fiber reinforced polyester resins. This technology allows to obtain any shape, fitting it to an existing pipeline or sewer cross section .The space between the panels mounted inside and the existing inner walls of the collector was filled with a cement grout. Fitting self-supporting GRP panels inside the existing concrete structure is a process that completely cuts off any access to the original collector structure, making it difficult to assess the phenomena occurring in the original structure and at the interface between different materials. To better understand the complex state of stresses occurring during the retrofit and further operation, a decision was made to use comprehensive structural health monitoring (SHM), which allows to obtain measurement data and to assess the structural performance of the sewage collector\u2014without the need for additional inspections.Warsaw\u2019s sewer network is currently nearly 4200 km long, most of which will require renovation in the coming years. The owner of the network is interested in finding modern monitoring solutions which would make it possible to supervise the process of modernizing the network and its future operation. This will allow more effective quality control of the network, reduce the number of potential failures and extend the life of the renovated collectors.DFOS-based structural health monitoring is pretending to meet the above goals and that is why the pilot installation was conducted within the modernized section of the \u201cBurakowski\u201d sewer collector in Warsaw, Poland.3/s .The Burakowski sewage collector is an important element of the Warsaw sewage system. 
It was built in the 1960s using the mining method in a concrete casing, with segments of lengths from 2 to 3 m and an internal diameter of approximately 3 m, buried under the ground surface at a depth of 4.5 to 7.5 m . Its purThere are a number of unfavorable external factors in the collector area. The most important ones include the location near the edge of the high bank of the Vistula River and the vicinity of subway and tram lines and stations. A number of modern, high and deep-founded buildings have also been built in the area over the past 15 years. Moreover, in mid-2015, the construction of a new Burakowski Bis collector was completed in the immediate vicinity, along the existing Burakowski collector, which resulted in a number of changes in the environment of the existing collector compared to when it was originally constructed.In 2019, a decision was made to retrofit a 4.8 km long section of the collector using non-circular GRP modules and relining technology . The intassessment of the structural performance of the retrofitted collector structure;tracking the development of cracks identified during the inspection of the existing concrete casing of the collector;behavior of the original collector structure during the renovation works, in particular behavior of the identified cracks and detecting the new ones that occur at key stages of works, i.e., placement of GRP panels inside the collector, gap grouting process, grout setting process;monitoring the mating of GRP modules with concrete casing after completed renovation and during the operation of the renovated section, e.g., in extreme operating conditions (when completely filled).Because of the importance of the Burakowski collector in Warsaw\u2019s sewage distribution system and the applied method of its renovation using standard and split self-supporting GRP panels, the following issues requiring verification were identified:monitor the collector strains along its entire length );locate and monitor the development of cracks and fractures (large measuring range);fully integrate sensors to the monitored object, i.e., concrete collector casing and grout;ensure a possibly high ease of installation, high resistance to the mounting stress as well as harsh environmental conditions.Identification of the abovementioned issues shows the need of application the measurement technique that allows to:The objectives established for monitoring the collector\u2019s longitudinal strain dictated the need for geometrically continuous, distributed fiber optic measurements using DFOS sensors. Fiber optic-measurement technology allows to use Rayleigh, Brillouin or Raman scattering-based techniques for measurement. As a result, it is possible to measure both temperature changes (Raman scattering) and straThe key task preceding construction of the monitoring system was to select fiber optic elements used to measure strains. A popular and widely used market solution is layered sensing cables. A common feature of all commercially available solutions is their multi layered designs .The need for an operating principle of coatings in sensing cables is explained in a patent application filed by F. Ravet from Omnisens in 2014 (US 2014/0033825 A1\u2014Method And Assembly For Sensing Permanent Deformation Of a Structure). The patent application provides a description and principle of operation for most of the commercially available sensing cable designs. The primary purpose of the coatings in a sensing cable is to protect the optical fiber from rupture. 
When the fiber reaches the strain assumed at the cable design stage, particular layers of the sensing cable start to slip against each other. The layers split and shift in relation to each other. This behavior of the sensing cable protects the optical fiber from breakage by reducing its strain and, at the same time, it mechanically records the occurring event. Mechanical shifting of the layers sustains the tension of the fiber when external impacts cease, which makes it possible to detect exceeding of the assumed strain threshold at any time . In order for layered sensing cables to fulfill their role, they must be designed and manufactured individually for each system, considering the location of sensors and the expected strain level such that the slippage between the layers of the sensing cable occurs at a precisely defined strain value. This means that it is not possible to use general purpose layered cables for different applications.Layered design of sensing cables allows these layers to slip uncontrolled in relation to each other; thus, it is not possible to unambiguously transfer strain to the internal measuring fiber in the desired strain range. An intuitive example showing the influence of layer slippage effect on the final strain results is presented in Plastic (nonlinear) effects including slippage between intermediate layers but also yielding of steel components in the cable make it impossible to monitor the current deformation state of the structure, e.g., closing of cracks under the impact of external factors. Such characteristics of layered cables prevent their application for monitoring the strengthened collector described hereafter in the article.The other possible solution is distributed fiber optic sensors (DFOS) designed specially to measure strains of the monitored structure. The sensors consist of a monolithic, composite core in which a measuring optical fiber in its primary coating is integrated during production process (pultrusion). There are no intermediate layers or external coatings, what allows for unambiguous, predictable transmission of strain from the surrounding material to the measuring fiber inside the core. Additional advantages of the DFOS fiber optic sensors include a wide measurement range of up to \u00b14%, ribbed or braided external surface that ensures good integration of the sensor and the surrounding concrete as well as a high resistance to the installation process and environmental conditions.Given the requirements: unambiguous strain measurement, no slippage between layers, possibly wide measurement range and resistance to sewage collector operating conditions, monolithic composite DFOS sensors have been selected for the system.The monitoring covered the renovated segment of the Burakowski sewage collector, specifically the section where the structural damage was most significant. The length of the pilot section equipped with sensors was about 146 m . The senTwo types of sensors were installed in the monitored section\u2014see The sensor locations were determined by reference to similar sewage collector and tunnel monitoring systems, including those performed in 2016 as a part of the Grand Paris Express project All sensors with appropriate lengths were delivered to the site in coils a. 
Three Sensors installed in grooves are protected from mechanical damage during installation of the GRP modules and properly integrated with the existing concrete casing.The other type of sensor\u2014EpsilonSensor (B)\u2014was used to measure circumferential strains between 120\u2013125 m of the monitored section, where diagonal cracks in the collector walls were visible with an unarmed eye . EpsilonThe measurement sessions were performed using an optical backscatter reflectometer (OBR) from Luna Innovations. This device is based on Rayleigh scattering . The reaStrain measurements were taken during the strengthening works, including the stage before GRP modules were placed inside the collector (zero reading), after the GRP modules were placed (before grout injection) and finally after grout injection. DFOS monitoring allowed to observe structural response of the collector under gradual load changes.A particularly significant effect observed due to the DFOS measurements was revealing the original structure of the concrete collector casing, related to technological breaks during its construction. Increasing load on the concrete casing during the renovation works revealed cracks caused by technological breaks and changes in their widths were carefully analyzed during this process . ConcretBased on the strain measurements acquired from three lines of EpsilonRebars, vertical displacements of the collector caused by the strengthening works (including dead weight of the composite panels and mortar injection) were determined. The analysis was based on the trapezoidal method ,14,15,16The mid-length displacement (settlement) of the collector determined after the grout injection process relative to its ends was about 1 mm, as shown in Identify discontinuities of the collector\u2019s concrete casing (cracks) and observe their development over time. In the analyzed case study, deformations were caused by the dead weight of GRP segments, grout injection used to fill the gap between the new composite panels and old collector, as well as the thermal-shrinkage effect of the cement grout;Determine relative displacements (changes in shape) of the collector along the whole measured section, both in the vertical and horizontal plane. The relative displacements determined in the vertical plane after strengthening were about 1 mm;Measure the temperature distribution along the entire length of the collector. Such measurements can be used to detect leaks through the collector casing. They are also used for thermal compensation, which is one of the key aspects during long term structural health monitoring. In the presented case study, thermal compensation was performed due to two approaches:application of additional optical device based on Raman scattering to obtain temperature profiles along the entire collector. EpsilonRebars were used simultaneously as the temperature sensors;application of additional spot measurements at the start and end point of the collector to obtain reference temperature readings.The application of fiber optic measurements using DFOS sensors with a monolithic core allowed to create an effective monitoring system for a concrete collector reinforced with GRP panels, operating in difficult and variable ground and water conditions. A wide spectrum of information acquired from the sensors may, in the future, contribute to the reduction of required functional inspections carried out directly by the operator. 
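The settlement profile mentioned above follows from double integration, by the trapezoidal rule, of the curvature derived from the distributed strain lines. A minimal sketch of this calculation is given below; it reduces the three EpsilonRebar lines to a simplified two-line (crown and invert) plane-bending model, and the section length, line spacing, strain profiles and function names are illustrative assumptions rather than the project's actual processing chain. Strains are assumed to be already thermally compensated.

```python
# Minimal sketch: vertical deflection of the collector from distributed strains,
# assuming plane bending between two parallel DFOS lines (crown and invert)
# separated by a vertical distance h. Names and values are illustrative only.
import numpy as np
from scipy.integrate import cumulative_trapezoid

def vertical_deflection(x, eps_top, eps_bottom, h):
    """x in m, strains in m/m (already thermally compensated), h = line spacing in m."""
    kappa = (eps_bottom - eps_top) / h                   # curvature profile
    theta = cumulative_trapezoid(kappa, x, initial=0.0)  # slope (first integration)
    w = cumulative_trapezoid(theta, x, initial=0.0)      # deflection (second integration)
    # express the result relative to the chord joining the two ends of the section,
    # i.e. mid-length settlement relative to the ends, as reported in the text
    chord = w[0] + (w[-1] - w[0]) * (x - x[0]) / (x[-1] - x[0])
    return w - chord

# synthetic example over a 146 m section sampled every 0.5 m (sign follows kappa = w'')
x = np.arange(0.0, 146.0 + 0.5, 0.5)
eps_top = -1e-6 * np.sin(np.pi * x / x[-1])   # crown line, compression at mid-span
eps_bot = 1e-6 * np.sin(np.pi * x / x[-1])    # invert line, tension at mid-span
w = vertical_deflection(x, eps_top, eps_bot, h=3.0)
print(f"mid-length deflection: {w[len(w) // 2] * 1000:.2f} mm")
```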
In particular, the system allowed to:In addition, the analysis of changes in the collector strains along its length recorded using distributed fiber optic sensors (DFOS) can provide important information on the structural performance of the collector. An indication of change in tendency (even if changes are minor) at the initial stage of operation helps to plan and carry out remedial work as soon as possible. The ultimate goal is to create a situation where a properly designed, constructed and operated structural health monitoring system will enable an objective assessment of the technical condition of a facility over its entire service life, and inform the users of any abnormalities. The application of distributed fiber optic sensors is a promising solution for these purposes. DFOS can be successfully used to detect and observe phenomena that are typically local in nature, such as steel yielding, junction cracking, concrete cracking, leakages etc. None of the commercially available, conventional spot techniques provide similar capabilities."} +{"text": "The COVID-19 pandemic has impacted all of our lives, but the population most at risk are older adults. Canadians over the age of 60 account for 36% of all COVID-19 cases but 95% of the deaths, and over two-thirds of ICU admissions. Older adults with chronic health conditions are especially at risk. Prior to COVID-19, family caregivers (FCGs) for older adults were managing their caregiving duties at the limits of their emotional, physical and financial capacity. As such, FCGs need special consideration during these times of uncertainty to support them in their role and enable the continuation of care for their older adult family members. This symposium will report on independently conducted studies from across Canada that have examined how the pandemic and associated public health measures have influenced resource utilization by FCGs and the older adults for whom they provide care. McAiney et al\u2019s study examines the deleterious effect of reduced services on community dwelling FCGs and the wellbeing of their family member with dementia. Parmar & Anderson examined the effect of pandemic restrictions on FCGs of frail older adults and found they were experiencing increased distress and decreased wellbeing. Flemons et al report on the experiences of FCGs managing caregiving without critical services and the effect of restrictive visiting policies and the well-being of the caregiving dyad (FCGs and family member with dementia). Finally, McGhan et al will share how FCGs evaluated the efficacy of public health measures and the public health messaging about the pandemic."} +{"text": "The endogenous microbiome of the oral cavity plays an essential role in the development of periodontal disease. It also has a significant pathogenic effect on the inner-vation of the oral cavity organs. The experimental determination of the effectiveness of various drugs is required for the effective treatment of periodontal disease, and this involves the creation of a model of experimental periodontitis. The objective of this series of studies was to determine the possibility of reproduction of the experimental model of periodontitis and the study of the effects of anticholinergic drugs on the development of an experimental periodontitis model. The reproduction of the experimental model of periodontitis was performed by injecting the gums of rats with solutions of pathogenic factors: lipopolysaccharide, hyaluronidase and trypsin. 
We aimed to study the effect of anticholinergic drugs (pilocarpine and atropine) on the development of an experimental model of periodontitis after the injection of a hyaluronidase solution (2 mg/ml) into the rats' gums. The study was performed on white Wistar rats. Elastase activity, malonic dialdehyde content, urease activity , lysozyme activity (an indicator of nonspecific immunity), and catalase activity (an antioxidant enzyme) were determined in the homogenate of the studied tissues. The results of a comparative study of the effect of three pathogenic factors on the activity of elastase in different tissues of experimental animals showed that hyaluronidase has the greatest proinflammatory effect. The action of pilocarpine and atropine was determined with an underline experimental periodontitis model. It was shown that both anticholinergic drugs stimulate the inflammatory process in the periodontium and that anticholinergic drugs enhance the proinflammatory effect of hyaluronidase. According to the literature, the microbiome of the oral cavity occupies an important place in the development of periodontal disease \u20134. This Other pathogenic factors of microbes are proteolytic enzymes that cause degradation of the protein base of vascular walls \u201321. For The objective of this series of studies was to determine the possibility of reproduction of an experimental periodontitis model and the study of anticholinergic drugs effect on the development of an experimental periodontitis model.The reproduction of the experimental model of periodontitis was performed by injecting the rats\u2019 gums with solutions of pathogenic factors: lipopolysaccharide, hyaluronidase and trypsin. The drugs were in the form of solutions of 0.9 % NaCl lipopolysaccharide (1 mg/ml), hyaluronidase (2 mg/ml) and trypsin (5 mg/ml), which were injected into the gums in the molar area in an amount of 0.2 ml per rat.The study was performed on white Wistar rats . We aimed to study the effect of anticholinergic drugs (pilocarpine and atropine) on the development of an experimental model of periodontitis after the injection of hyaluronidase solution (2 mg/ml) into the gums of rats. In order to achieve this, the rats were previously given oral applications of gels with pilocarpine (2 mg/ml) or atropine (0.2 mg/ml) for two days. The rats were euthanized under thiopental anesthesia (20 mg/kg). Three hours after hyaluronidase injection, the gums and dental pulp were isolated, and blood serum was obtained.The level of biochemical markers of inflammation was determined in the homogenate of the isolated mucosa: elastase activity and malondialdehyde content, urease activity , lysozyme activity (an indicator of nonspecific immunity), and catalase activity (antioxidant enzyme) . AccordiAll the results of experimental studies were processed using standard statistical techniques .Previous experiments have shown that significant pathological manifestations of pathogenic factors are detected after 3 hours. The activity of the proteolytic enzyme elastase was chosen as an indicator of inflammation. 
It is produced by leukocytes, and the increase of its activity indicates leukocyte infiltration of the studied tissue, which is an important pathogenetic sign of the inflammatory process.The results of a comparative study of the effect of three pathogenic factors on the activity of elastase in different tissues are presented in The results of this series of experiments became the basis of the hyaluronidase use for the experimental periodontitis model.The effect of modulators of the autonomic nervous system (pilocarpine and atropine) on the development of acute experimental periodontitis after the injection of a hyaluronidase solution (2 mg/ml) into the rats' gums was studied in the next series of experiments (15 rats). To perform this, the rats were previously given oral applications of gels with pilocarpine (2 mg/ml) or with atropine (0.2 mg/ml) for two days. Rats were euthanized 3 hours after the hyaluronidase injection, and gums and dental pulp were isolated, and serum was obtained. The biochemical parameters of the rats' gums were determined, and the results are presented in The presence of an inflammatory process in the periodontium is evidenced by a significant increase in the activity of elastase (by 22.5%). Applications of gels with pilocarpine or atropine slightly reduce elastase activity, but it remains significantly higher compared to intact rats. Both anticholinergic drugs (pilocarpine and atropine) significantly increase the malondialdehyde content compared to its level in rats with an experimental periodontitis model (control).Applications of gel with pilocarpine significantly reduced catalase activity and the API index. Gel applications with the proposed drugs did not reduce catalase activity. However, they reduced the API index to some extent. As for the levels of urease and lysozyme, they were not significantly reduced in the experimental periodontitis model after the application of anticholinergic drugs.We performed a comparative study of the effect of three pathogenic factors on elastase activity in different tissues of experimental animals , and we found that hyaluronidase has the greatest proinflammatory effect. It has been established that hyaluronidase exceeds the proinflammatory activity of trypsin and even intestinal endotoxin lipopolysaccharide. We also found that anticholinergic drugs enhance the proinflammatory effect of hyaluronidase: they increase elastase activity, malondialdehyde content and significantly reduce catalase activity, API index, alkaline phosphatase activity, and increase acid phosphatase and lysozyme activity.As a result of the first series of experimental studies, an experimental model of periodontitis was developed using one of the pathogenic effectors of bacteria, namely hyaluronidase, which can significantly increase the permeability of bacteria and their toxins into periodontal tissues."} +{"text": "Research on intergroup contact suggests that negative contact experiences affect cognitive representations such as stereotypes more strongly than positive contact experiences. To comprehensively examine the full effect of intergroup contact, the valence of the contact experience as well as the affective and cognitive dimensions of prejudice should be assessed. In ageism research, previous studies typically focused only on contact of positive valence and were limited to the perspectives of younger individuals on older adults. 
Primary objective of this study is to examine both positive and negative contact frequency and their relation to affective and cognitive dimensions of ageism from the perspectives of younger adults between the age of 18 and 25 (study 1) and older adults between the age of 60 and 92 (study 2). Consistent with previous research on intergroup contact, our results confirm that both types of contact were similarly predictive of affective facets of prejudice. However, only in study 2 that assessed older adults\u2019 agreement with contemporary stereotypes about young men and women, negative compared to positive contact frequency proved to be a stronger predictor of the cognitive dimension of ageism. Our findings emphasize the importance of focusing on all dimensions of prejudice and highlight the need to consider the perspectives of young and old in ageism research."} +{"text": "This special issue was inspired by Grigoryev et al. on ethniWhile many of the papers in this volume incorporate these cognitive functions of stereotypes, they go beyond these basic acts of perception, categorization, attribution, and generalization that give meaning to intercultural interaction and intergroup anxiety. They deal also with the processes of evaluating members of the groups , and then to acts ranging from discrimination to inclusion as the static and dynamic aspects of intercultural relationships. All these individual psychological processes are embedded in the general sociopolitical group contexts that incorporate the history of intergroup relations, their mutual images, the extant institutional and systemic values, and the established collective practices that may act against some groups but privilege others.This special issue consists of 13 articles by 46 scholars from 15 countries that address both personal and cultural stereotypes for which insights from the Stereotype Content Model of locals in Portugal and the US.The first three articles include an examination of the cognitive sphere of non-dominant groups . Lutterbach and Beelmann addressed personal stereotypes by refugees toward host society members and their perceptions of discrimination provoked by host society members to analyze their associations with the refugees' shared reality and acculturation orientations in Germany. The article claims that contextual and everyday discrimination experiences prevent integration because they reduce the motivation to adopt aspects of the host culture, reduce the perception of shared reality between the cultural groups, and increase the motivation to maintain one's own culture among refugees holding strong positive sociability stereotypes toward the host society members. Hence, increased discrimination experiences are likely to lead to a disillusioning effect included separation acculturation strategies among refugees who actually had the potential to integrate into the host society.Urbiola et al. investigated the relationships between personal stereotypes and the acculturation preferences of Spanish and Moroccan origin adolescents in Spain. The article claims that it connects the literature of acculturation and intergroup relations in an interactive way instead of studying the predictive role of stereotypes or acculturation perceptions in isolation. 
For example, stereotypes would play an important role in majority members' acculturation preferences when they perceived that minority youth were not adopting the host culture because it is a more threatening situation than when minority group members are adopting the host culture. Moreover, this work illustrates the importance of the concept of mutuality in the study of acculturation and negative appraisal of immigrants, and contact. The article shows how the relationships between variables differed by immigrant groups based on cultural stereotypes that were related to the social structural characteristics of these groups. The results strengthen a theoretical conceptualization that posits an indirect relationship between individual value preferences and behavior through both positive and negative group appraisal. We find this as a good example of the group-specific approach within the SCM for how, considering threats (and benefits as well) separately, one can form a consistent threat profile for each target group based on the Justification-Suppression Model. The article considers the full path from the ideologies to the expression of stereotypes by investigating how the La\u00efcit\u00e9 norms can set the stage for specific regulatory strategies: (1) to prevent prejudicial attitudes but which can lead to unexpected consequences on stereotyping within the Historic La\u00efcit\u00e9 context and (2) to help realize prejudice within the New La\u00efcit\u00e9 context . This analysis expands our understanding of the functioning of intergroup ideologies in specific cultural contexts . The results supported their hypothesis that compared to low-attitude-strength networks; high-attitude-strength networks of evaluations had a stronger degree of global connectivity, i.e., the higher connectivity between the evaluations on different aspects of anti-Roma bias .Javakhishvili et al. applied the SCM and the BIAS map in Georgia (the former Soviet Union republic from the South Caucasus) to evaluate English and German speakers globally. The article shows some features of evaluation of representatives of large and powerful countries by people from small countries, including the implication of a unique set of perceived socio-structural variables and culturally specific meaning of emotions.Hakim et al. experimentally examined the stereotype of Muslims as being either moderate or radical to add the findings of these subtyping to the adverse implications of concepts with positive guises. The article claims that the endorsement of these Muslim subtyping can be translated into support for aggressive military and social policies toward Muslims in the US.Findor et al. used a representative sample of ethnic Slovaks and two target ethnic minority groups (stigmatized: Roma vs. non-stigmatized Hungarians), whereas, Bye used the data from the Norwegian Citizen Panel and asylum seekers as the target group to experimentally examine the effect of response instruction . The results of both highlight the importance of the distinction between cultural stereotypes, which are shared by members of a particular society, and personal stereotypes, which are beliefs of individuals about groups. Social perceivers can recognize a common belief about groups, even if they do not personally endorse it and systematic patterns of stereotype content and ingroup favoritism.Further, the methodological contribution continues due to the appeal to an issue of non-Western face perception. 
Knutson integrated the scientific study of stereotypes within the SCM with a literary-theatrical exploration of stereotyping. The article demonstrates how theater performance can sometimes embody the dynamic for Jew stereotype traced by the BIAS map, from cognition to affect to behavior.Finally, We hope that the collection facilitates wide interest in stereotypes as the heart of intercultural relations and as the ways individuals grapple with the many different kinds of knowledge they have about cultures and of their understandings of communication.DG wrote the first draft of this paper. JB and AZ reviewed and edited the draft to finalizing it. All the authors approved the submitted version of this paper.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "The declaration of any public health emergency in the Democratic Republic of Congo (DRC) is usually followed by the provision of technical and organizational support from international organizations, which build a parallel and short-time healthcare emergency response centered on preventing the extension of health emergencies across the countries and over the world. Previous Ebola virus disease (EVD) outbreaks have highlighted the need to reinforce the healthcare sector in different countries.Based on the difficulty to implement the International Health Regulations (2005) to the local level of affected countries including the DRC, this paper proposes a multidisciplinary model based on the health zones through the strengthening of preparedness and response structures to public health emergencies vis-\u00e0-vis the existing weak health systems existing in DRC. A commitment to integrating the emergency response in the existing health system should work to reduce the tension that exists between local recruitment and its impact on the quality of daily healthcare in the region affected by EVD outbreak on one hand, and the involvement of international recruitment and its impact on the trust of the population on the emergency response on the other. This paper highlights the provision of a local healthcare workforce skilled to treat infectious diseases, the compulsory implementation of training programs focused on the emergency response in countries commonly affected by EVD outbreaks including the DRC. These innovations should reduce the burden of health problems prior to and in the aftermath of any public\u00a0health emergency in DRC hence increasing the wellbeing of the community, especially the vulnerable people as well as the availability of trained healthcare providers able to early recognize and treat EVD. The declaration of any public health emergency is followed by the provision of technical support from international organizations to individual countries to limit the widespread of infectious diseases , 2. As aEVD outbreak is associated with important implications in concerning countries such as lack of trust in the government, food insecurity, loss of domestic income, stigmatization, impaired provision of healthcare, and disruption of school activities . During While conducting the analysis and reviewing the challenges identified during the tenth EVD outbreak in DRC, this paper proposes an alternative model based on a health system able to respond to future outbreaks that involve community engagement, the provision of skilled healthcare workers, and perform the quality of healthcare during outbreaks. 
Similar to the EVD outbreaks in the West-African region which caused 11,310 deaths and produced more than 17,300 orphans, the tenth EVD outbreak in DRC was marked by the declaration delay of a public\u00a0health emergency, the contact tracing in an insecure setting, the lack of funds, the overwhelmed healthsystems, and the disruption of preventive interventions including immunization, HIV/AIDS, and tuberculosis prevention .The Health System in the DRC is based on three levels namely the central, intermediate, and operational levels Fig.\u00a0. The impThe DRC healthcare system is affected by the impoverishment of nearly 70\u2009% of the population including health workers; poor adoption of the Health System Strengthening Strategy adopted in 2005, unequipped infrastructure, impaired supply of drugs into health facilities, multiples labor strikes motivated by poor salary, as well as the paucity of trained health workers . This heThe history of emergency response to outbreaks has been linked to the early attempts to implement a commitment focused on anticipatory actions and the performance of the health system . In the Instead of recruiting the local healthcare workers as the emergency responders in an outbreak situation, international response staff was used during the tenth EVD outbreak. The massive participation of international staff workers involved in the emergency response team and their increased salary compared to local health workers lead to increased community resistance illustrated by the burns of Ebola treatment facilities and the murder of Richard Valery Mouzoko, a WHO staff on April 19, 2019, in Butembo city, located in North-Kivu province . AlthougA functional health system focused on the preparedness and response services in health emergencies should bThe tenth EVD outbreak has constituted a clear opportunity to build the capacity of a strong health system in DRC via the training of a local healthcare task force , and theThe management of the tenth EVD outbreak in DRC has tinted the reinforcements of the health systems by enhancing primary and specialized healthcare capacities either during a global crisis or in normal conditions. This sustained approach Fig.\u00a0 may provFirst, the emergency response against EVD outbreaks is usually impaired by community resistance. However, the experience of global health security in the previous public health problems showed that community resistance is encouraged by the non-recruitment of local practitioners and the lack of funds . In UganSecondly, the lack of local healthcare workers trained to manage a public health emergency in North-Kivu and Ituri has affected the EVD outbreak, called for international healthcare workers, and increased community resistance . BuildinThirdly, monitoring, contact tracing, and supervision have been criticized over the years in their methods of handling infectious disease outbreaks. However, the close monitoring and the recruitment of local healthcare workers and community leaders have shown a positive result during the response against the tenth EVD outbreak. Also, trained staff should be encouraged to conduct continuous monitoring and supervision of health facilities with special attention to the cause of outbreaks. 
Given that the communication is impaired between the health facilities of different levels of the ministry of health and the community, the use of available means of communication such as community radios and community outreach programs could promote awareness of health problems and quick health communication. The extension of the job description of community health workers to address public health challenges and to support the appropriate management of many diseases at the community level should be supported .The lack of ambulances and means of communication especially in rural health zones impaired the chain of early treatment from the community, primary health facilities, and the Ebola treatment center.The occurrence of EVD outbreaks in DRC, commonly in rural and armed conflict settings, demonstrated the role of delayed consultation of health facilities by patients in the early recognition of outbreaks. This delay is caused by the lack of money for the affected patients. Therefore, the promotion of a free-of-charge healthcare service should be promoted up in health zones with a high risk of outbreaks, during or out of global health crisis.This paper proposes a multidisciplinary model made to increase the health system capacity in regions concerned with public health emergencies including DRC, especially in North Kivu and Ituri provinces. This model will be applied by the Ministry of Health and other related agencies involved in\u00a0public\u00a0health emergencies including WHO, especially in armed conflict and developing settings. This model is based on two main assumptions: firstly, the model centered on the IHR 2005) may increase the community resistance; modulates the worseness of the health system during the outbreaks, and impaired the management of common health problems [ may incrThe integrative model may reduce the delay of outbreak recognition and modulates the early contact tracing and contention of an outbreak in an efficient way. Therefore, the shift from a model based on a short-term parallel health system into an emergency response model integrated into the existing health systems could accelerate the elaboration of this model . Given tThe tenth EVD outbreak in DRC has revealed the need for new approaches to strengthen the weak health systems in developing countries. The lessons from previous outbreaks have emphasized the integration of the emergency response into the existing health system. There is a need to set up the reinforcement of the operational health level to perform the readiness and preparedness against any public health emergency. Therefore, the multidisciplinary model centered on the health zone is proposed to fight infectious diseases that cause outbreaks. The trained health workers on providing emergency health\u00a0care services are required as well as support from international organizations for effective management of health emergencies and disease outbreaks the first time. Finally, a monitoring system by the central and intermediary level of the ministry of health in these countries must be instituted, which should be supported and supervised by the WHO."} +{"text": "Introduction: Nowadays, the final success of implantation is not only based on obtaining osseointegration of the implant but is also determined by achieving a satisfactory aesthetic effect of the soft tissues surrounding the implant, which can be defined as an aesthetic integration. 
The process of obtaining this aesthetic integration already begins at the stage of placing the healing abutment, which allows us to obtain the emergence profile necessary for our prosthetic reconstruction. Materials and Methods: The study used cone-beam computer tomography (CBCT) scans of 51 patients. The measurements of the maxillary teeth were performed from cross-sections of the individual teeth at the transition zone to design a custom anatomic healing abutment milled from zirconium and luted to the non-index Ti-base. Results: The obtained results allowed to design and create the shape of the anatomic healing abutment. Conclusions: The use of laboratory-produced anatomical healing abutments is possible and may allow to obtain the desired and planned emergence profiles of prosthetic restorations. In addition, it might be a method of reducing work time at the dental chair but further clinical trials are necessary. One of the greatest challenges of modern implantology is to achieve beautiful and natural aesthetic results. The key to achieving this is the maximum preservation or reconstruction of hard and soft tissues around the implant.Current guidelines for successful therapeutic implantology treatment are based on the evaluation of the implant and restoration survival, dento-gingival aesthetics, rate of mechanical complications, and the bone levels and health of surrounding soft tissues ,2,3. TheThe soft tissues at the implant have the characteristics of scar tissue, which is the cause of reduced resistance to damaging factors such as the inflammation due to bacterial plaque. This characteristic feature makes it very important to produce the corresponding anatomical structure of the soft tissue around the implant as well as the corresponding shape of crowns and bridges capable of maintaining a high standard of oral cavity hygiene . This prPrefabricated standard healing abutments have a circular cross-section. They are produced in various sizes and heights, and are usually made of titanium ,20. The The healing response around abutments can be evaluated and the soft tissue around the fixtures can heal according to the contours of the definitive prosthesis . The cirIn this study, randomly selected CBCT scans of patients from the database were used regardless of the gender and age of the patient. In the research group, CBCT scans were from 30 women aged from 19 to 65 years and 21 men aged 18 to 69 years, and were randomly selected by two independent researchers. The scans of patients with tooth shape disorders, numerous fillings, and metal scattering problems were disqualified. The scans were originally made for diagnostic purposes due to planned dental treatment. The study had the consent of the bioethical commission at the Regional Medical Chamber in Gda\u0144sk (registration number KB-13/18 approved 25May 2018). The commission operates in accordance with the principles of good clinical practice.To analyze anatomic crown ratios, the measurements were taken from CBCT scans of the maxillary teeth: central incisor, lateral incisor, canine, first premolar, and first molar made on cross-sections of individual teeth at the place where the tooth comes from bone tissue into the soft tissue (transition zone). Four measurements were made in millimetres on each cross-section . Two independent researchers made the measurements . The L1 At total of 51 patients were included in this study. 
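The averaging step behind the abutment design described above can be sketched as follows. The file name and the column labels d1 to d4 are hypothetical, since the exact definition of the four cross-sectional dimensions is not given in this excerpt; the sketch only illustrates averaging the two raters' measurements per patient and then summarising each tooth group with a mean and standard deviation in millimetres.

```python
# Minimal sketch of the measurement summary; file name and column labels are hypothetical.
import pandas as pd

# expected columns: patient_id, tooth_group, rater, d1, d2, d3, d4 (dimensions in mm
# measured at the transition zone on each CBCT cross-section)
df = pd.read_csv("cbct_transition_zone_measurements.csv")

# average the two independent raters per patient, then summarise each tooth group
per_patient = df.groupby(["tooth_group", "patient_id"])[["d1", "d2", "d3", "d4"]].mean()
summary = per_patient.groupby("tooth_group").agg(["mean", "std"]).round(2)
print(summary)  # mean and SD in mm used to draft the anatomic healing-abutment profile
```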
Out of 51 patients, 59% (30) were females and 41% (21) were males .The received results allowed to design the shape of the individualized anatomical healing abutment for each group of the teeth mentioned . The ariIn order to achieve success in implantology treatment, it is highly beneficial to understand the importance of proper designing and the obtaining of the emergence profile. The aim of this study was to design an anatomically stock-healing abutment. We can manage the healing of tissues around the implant by using customized prosthetic elements such as healing abutments or connectors between the implant and the crown. Due to the use of such solutions, the tissue healing process is possible to plan and is more predictable, which results in a better aesthetic effect ,2,3.When striving to obtain the best fitting natural representation, we should come as close as possible to the physiological anatomy of the dental and surrounding structures. The use of standard prefabricated implant prostheses such as the healing abutment to shape the emergence profile is an unpredictable procedure. These screws are made in a shape of which the cross-section is circular: in no way does this shape mimic the natural anatomy of the crown and the root of the tooth ,30, whicThe use of existing CBCT scans taken from the research group allowed us to obtain a cross-section of dimensions needed to obtain average values in four dimensions at the entry of the tooth into the bone structure (transition zone) for the central incisor, lateral incisor, canine, first premolar, and first molar in the upper jaw. The obtained results allowed to design the shape of the individualized anatomical healing abutment for each group of teeth mentioned. None of the profiles received were of a rounded profile due to the detailed measurements, making it feasible to get as close to the natural anatomy as possible. Having such an anatomically shaped healing abutment offers the possibility to control the process of soft tissue healing around the implant either immediately after implantation or after the integration period and after unveiling the implant without the necessity to establish additional visits with the patient or long procedures involving the use of additional materials needed for standard healing abutmentThe material from which the implant and prosthetic components are made should be mechanically stable (abutments) and maximally biocompatible . In the studies by Abrahamsson et al. performed on animals, materials for making abutments were tested . The abuWhile selecting material for customized implant abutments, recent advances in milling technology recommend two materials: zirconium and titanium . TitaniuThe meta-analysis by Linkevicus et al. showed statistically significant superiority of Zr abutments over Ti abutments in developing natural soft tissue colour and a superior aesthetic PES score . When deNowadays, this material is also relatively inexpensive, thanks to which it is possible to create a highly biocompatible and economic implant-prosthetic solution that is an individualized anatomical pre-fabricated healing abutment.The present study is a preliminary report on an innovative method of using clinical data for the construction of anatomical healing abutments made of biocompatible material, namely zirconium and available chairside without additional time-consuming visits or protocols. 
Further clinical trials, which are currently underway, are needed to confirm the results of the study and to compare its effectiveness with other solutions practiced so far.The use of dental implants in the treatment of missing teeth is a predictable treatment at the osseointegration stage. The use of an individualized healing abutment makes it possible to achieve this predictability in the healing process of the soft tissues surrounding the implant and in the formation of the emergence profile. The obtained CBCT measurement results made it possible to design and create the shape of the anatomic healing abutment, milled from biocompatible zirconium and luted to the non-hexed Ti-base.The use of laboratory-produced anatomical healing abutments is feasible and may allow the desired and planned emergence profile of the prosthetic restoration to be obtained. It might also be a method of reducing work time at the dental chair, but further clinical trials are necessary."} +{"text": "The long road of establishing an accreditation entity began in August 2010 when the AGHE Accreditation Task Force was convened. After numerous meetings complete with loud and vigorous debates, AGEC, the Accreditation for Gerontology Education Council, emerged in 2016. Over the subsequent years, the Standards hit the hard road of reality, leading to various revisions to the Handbook. The symposium\u2019s first presentation concerns the history of AGEC and its further development into an independent entity. The key purpose of AGEC is assuring the educational quality and enhancement of gerontology programs, governed by the principle of self-evaluation and peer review that engenders trust. The next presentation discusses the marketing aspect of AGEC, built on feedback obtained from the public. One of the outcomes of conducting focus groups and surveying the public is the discovery that prospective students really see the value of accreditation. The penultimate presentation focuses on refinements to procedures alluded to in the first presentation, made in response to the feedback received in meetings with institutions and faculty about what accreditation offers to students, stakeholders, and ultimately the older adults served by the graduates in the workforce. The key goal is to clarify the expectations and simplify the application process. On no other issue has more time been spent than on the assessment of students\u2019 competency. Our last presentation explains competency-based education, consisting of well-articulated student learning outcome measures that are consonant with the program\u2019s mission and that lead to \"closing the loop\" of continuous and durable improvements in the learning environment."} +{"text": "The surgical management of laryngeal webs is challenging and is associated with a high recurrence rate due to the presence of opposing raw mucosal surfaces of the vocal cords, especially near the anterior commissure, which causes re-scarring. We describe an endoscopic technique of mucosal flap lateralization (MFL) with ultrasound guidance, which prevents the apposition of the anterior raw surfaces of the vocal cords after web incision, thus avoiding recurrence. Congenital laryngeal webs (LWs) are rare entities and are classified based on the percentage of vocal cord involvement and the presence of subglottic extension as given by Cohen \u20133. 
PreseWe describe an endoscopic technique of mucosal flap lateralization (MFL) with ultrasound guidance, which prevents the apposition of the anterior raw surfaces of the vocal cords after web incision, thus avoiding recurrence.The present surgical technique is described in a 15-month-old otherwise healthy child with Type 2 Laryngeal web who presThe surgical technique can be viewed in the The surgery is performed with the patient under spontaneous ventilation using a combination of propofol and sevoflurane. After direct laryngoscopy and rigid bronchoscopy, an appropriately sized Benjamin Lindholm laryngoscope is introduced for laryngeal exposure which is placed in suspension with the aid of a self-retaining laryngoscope holder secured to a Mayo stand. A vocal fold spreader is placed in order to completely expose the anterior glottic web till the anterior commissure. The microscope is then brought into the field and after implementation of all laser safety precautions, a straight handheld CO2 laser set at 2W, 20mj ultrapulse repeat mode is used to incise the laryngeal web close to the edge of the right vocal fold .The thin mucosal flap is then subsequently wrapped on to the left vocal fold which is then lateralized using an ultrasound guided exo-endolaryngeal technique of suture lateralization, which we have described in detail previously . In brieA 2 months follow up, revealed no evidence of recurrence of the laryngeal web with significant subjective improvement in voice . The resThe biggest challenge in the endoscopic management of LWs is to prevent apposition of the anterior vocal folds to avoid rewebbing. The present technique of mucosal flap lateralization provides a method of maintaining distance between the anterior vocal cords, preventing contact of raw mucosal surfaces. The use of ultrasound guidance for accurately predicting the trajectory of the needle assists in precise placement of the endolaryngeal stitch avoiding multiple passes and preventing injury to the delicate laryngeal mucosa.A multitude of surgical techniques have been described for avoiding juxtaposition of the anterior vocal cords after LW excision. The most widely used is endoscopic excision and placement of a silicon keel. However, keel placement has been associated with a high recurrence rate of 10\u201330% . Also, kThe present technique provides a way of keeping the anterior vocal fold raw mucosal surfaces separated till complete healing has taken place of the contralateral vocal cord. The ultrasound guided exo-endolaryngeal technique of suture lateralization has been previously described by our team for the management of bilateral vocal cord paralysis . We haveThe authors preference is to use flexible fiber CO2 laser for the incision of the LW, however cold steel microlaryngeal instrumentation can also be used to achieve similar result. The ultrapulse mode of laser emission provides control and precision while reducing the damage to the surrounding tissues. The flexible fiber laser helps in greater maneuverability in the limited space of the pediatric larynx. Preloading the needles with sutures and making sure that they can easily slide through the needles will ensure that the surgical procedure runs smoothly. The external puncture site is critical and should be 1.5 cm lateral to the midline at the level of the vocal cords which can be determined with the help of ultrasound guidance. 
The ultrasound probe is used in the axial plane with an in-line axis needle approach which allows for visualization of the whole trajectory of the needle, accurately placing the needle in the anterior part of the vocal cord, while preventing repeated passes. The suture is tightened over a silastic button to avoid cutting through the soft tissue and buried underneath the strap muscles to avoid extrusion and stitch abscess. While tightening the suture it is important that the stitch is being visualized endolaryngeally to achieve minimal laterization to avoid increased separation of the vocal cord which might lead to aspiration. For LWs with greater glottic involvement multiple sutures can be placed in order to achieve mucosal flap lateralization. The simplicity of the technique makes it an option for management of children with predominant voice symptoms rather than waiting in this subset of patients till school age.The major drawback of the technique is the possibility of producing aspiration. In the present case transient choking was noticed which resolved after adequate pain management. Even if postoperative aspiration is noticed, it will be transitory, as the suture is cut within a week of placement. Another surgical risk is the possibility of glottic and subglottic trauma secondary to multiple needle passes. However, with the assistance of ultrasound, we were able to complete the procedure in a single needle pass. The technique is also a two-stage procedure and warrants a second stage to take down the suture and excise the redundant mucosa on the left vocal fold. However, most techniques described for the management of LWs require a second stage for removal of an endoscopically placed keel. Also, the technique is of limited value in case of subglottic extension and hence is not an option for Type 4 LWs.Future studies using a larger sample size and outcome analysis regarding the tissue impact, and technical ease should be undertaken to demonstrate the effectiveness of this technique.We propose an ultrasound guided mucosal flap lateralization technique for management of laryngeal webs which is a simple way of avoiding vocal cord apposition and preventing recurrence of LWs.The original contributions presented in the study are included in the article/The studies involving human participants were reviewed and approved by University of Iowa IRB No: 20210205. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin.SK contributed to the idea, writing of the manuscript, and final approval.The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher."} +{"text": "Female infertility is the main reason for involuntary childlessness nowadays. The overall trend toward postponing parenthood toward a more advanced female age in most of the economically developed countries is certainly partly, though not exclusively, responsible for this trend. A number of age-independent factors, related to professional gametotoxic exposures and other independent environmental factors can also come into play. 
The impact of all these factors in each individual woman is conditioned by her genetic and epigenetic background.Apart from the known genetic and epigUnlike the most studied mammalian models, the maternal-to-embryonic developmental control transition occurs later in humans. The first signs of embryonic RNA synthesis (transcription) were observed in the nucleoli of four-cell human embryos , while eThe relatively late onset of embryonic gene expression in human embryos underscores the importance of the molecular events taking place during oogenesis, so as to endow the future embryos with an appropriate stock of maternal mRNAs as well as of the molecules controlling their coordinated translation before, during, and after the embryonic genome\u2019s activation . These mThe oocyte-derived epigenetic patterns controlling early embryonic development are no less important than the genetic ones . In factThe information derived from the studies focused on the above aspects of the relationship between oogenesis and the early embryogenesis can serve as a guide for further in-depth analysis of the molecular particularities that can guide a doctor to design the optimal diagnostic and treatment strategies for each individual patient. A failure of implantation and early pregnancy loss, independent of embryo quality, is another possible cause of female infertility . In addiPreserving genome integrity during the cleavage stage of early embryogenesis is another essential goal to be achieved by the embryo, with regard to its potential to generate multiple, distinct cell lineages . A mammaThe ability of the oocyte to repair sperm-derived DNA damage represents another challenge for future research. Sperm DNA damage is known to cause genomic instability in early embryonic development . HoweverWe still need more high-quality studies to suggest ways to improve female fertility status by increasing the quality of oocytes in women of any age. The mechanisms responsible for both the physiological and premThis Special Issue aims at advancing our knowledge and understanding of how to design molecular studies in both humans and animal models and to use and interpret their results to improve different aspects of oocyte quality related to the success of both natural and assisted reproduction outcomes. The data obtained from both humans and different animal models will be used to predict the risk of premature ovarian decay and the ways to prevent it, the molecular diagnostic of the existing issues, and the ways to act in each individual case, also taking into account the condition of the male partner. The synthesis of these analyses will serve to determine (1) the need for taking therapeutic actions, (2) the degree of urgency of the actions to be taken, and (3) the possibility of adopting a wait-and-see approach in patients who either do not have a stable male partner or in whom immediate action is not currently necessary."} +{"text": "The lack of transparency in the methodology of unit cost estimation and the usage of confidential or undisclosed information prevents cost comparisons and makes the transferability of the results across countries difficult. The objective of this article is to compare the methodologies used in the estimation of the cost of a day case cataract extirpation that are described in the official and publicly available sources and to study how these translate into different unit cost estimates.A literature review was conducted to identify the main sources of unit costs of cataract extirpation. 
A semi-structured questionnaire to obtain information on national costing methodologies was developed and sent to consortium partners in nine European countries. Additionally, publicly available sources of unit cost of cataract surgery in those countries included in the European Healthcare and Social Cost Database (EU HCSCD) were analysed.The findings showed a considerable diversity across countries on unit costs varying from 432.5\u20ac in Poland (minor degree of severity) to 3411.96\u20ac in Portugal (major degree of severity). In addition, differences were found in the year of cost publication and on the level of detail of different types of cataract surgery. The unit of activity were Diagnosis-Related Groups in all countries except Slovenia. All unit costs include direct costs and variable overheads (except Germany where nursing costs are financed separately). Differences were identified in the type of fixed overheads included in unit costs. Methodological documents explaining the identification, measurement and evaluation of resources included in the unit costs, as well as use of appropriate cost drivers are publicly available only in England, Portugal and Sweden.We can conclude that while unit costs of cataract extirpation are publicly available, the information on methodological aspects is scarce. This appears to pose a significant problem for cross-country comparisons of costs and transferability of results from one country to another.The online version contains supplementary material available at 10.1186/s12962-022-00346-3. Untreated cataracts are the leading cause of blindness worldwide and the second leading cause of visual impairment. It causes the opacity of the eye lens leading to blurred or reduced vision. The only effective intervention is a surgical operation that involves the removal of the blurred lens and subsequent implantation of a lens . CataracEconomic evaluation is an important tool for adoption and reimbursement of health technologies. The lack of transparency in the methodology of unit cost estimation and the usage of confidential or undisclosed information prevent cost comparison, which, in turn, makes the transferability of the results across countries difficult. The European Commission (Horizon 2020) project IMPACT-HTA aimed to understand the variation of costs across European countries. One of the outcomes of this project was the European Healthcare and Social Cost Database (EU HCSCD), a minimum common dataset of international costs which can feed into health-economic evaluations and enable transfer of models across countries . All cosThe objective of this article is to analyse and compare official and publicly available sources of cost of cataract extirpation in nine European countries included in the EU HCSCD.A literature review was carried out consulting PubMed and Scopus databases to identify articles estimating the cost of cataract extirpation that were published after 2005 in English, Spanish and French. This search was completed with an additional search on Google Scholar. The search terms used were: . The literature search was verified by a librarian with an extensive experience in the field of public health. The reference lists of relevant studies identified from the search were also reviewed. 
The objective of the literature review was to identify the sources of costs and/or tariffs of cataract surgery and ensure that there were no additional publicly available sources to those included in the EU HCSCD.The methodology used to construct DRG costs has been described in some detail by previous EC projects HealthBASKET and EuroNo ethical approval was required since we were not dealing with patient data.The literature review retrieved 54 articles; five articles were based on unit costs obtained from publicly available databases. A PRISMA diagram is provided as Additional file Table Significant variability in the level of detail of different types of cataract surgery was observed. Thus, there were countries that showed up to 9 different costs depending on type of the procedure or degree of complexity (England), while; other countries published a single cost of the procedure . The unit of activity was DRG in all countries except in Slovenia, where the estimation of cost was based on the breakdown of various cost items included in the procedure . The DRGAgence Technique de l\u2019Information sur l\u2019Hospitalization; ATIH) from the social health insurance point of view. The health insurer in France does not reimburse the hospital for the full DRG tariff; a proportion of the tariff must be paid by the patient or by an additional insurance [Another important finding relates to the differences between DRG costs and DRG tariffs. In England, each cataract surgery subtype has both DRG costs (referred in England as reference costs) and DRG nsurance . No offinsurance . Spain hOne of the findings of this study is that detailed documents explaining how the resources are identified, measured and valued, what type of cost drivers were used in estimation of costs and what type of resources were included in calculation of unit costs are missing in most countries except England, Portugal and Sweden. This suggests that there is a lack of transparency in the costing methodology used in estimating costs and setting public prices and/or tariffs for different procedures carried out by health systems. This result is more striking considering that the economic evaluation guidelines of the different countries mention the need for an adequate identification of all resources included in the final unit costs.Fattore and Torbica comparedThe results of the study described in this article demonstrate the need for the authorities of the European countries to include detailed information on the estimation of the costs/tariffs in the official sources, which can ensure the transferability of the results across countries. Geissler et al. (2015) mentions the idea of \u2018a common European DRG system to define homogeneous groups of patients across different countries\u2019 . This woThe result of standardized economic evaluation is that it would be transferable from country A to country B without having to develop the same economic model from scratch in country B.The study has several limitations. First, the year of cost publication varies among countries, so the costs do not refer to the same year. However, costs inflated to 2019 prices using both Consumer Price Index and Gross Domestic Product can be found at the EU HCSCD webpage . Second,This study highlights the need of methodological documents describing the resources included in the estimation of unit costs to be publicly available. 
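As a small aside on the price-year limitation noted above, re-expressing a unit cost in 2019 prices with a Consumer Price Index is a single ratio. The sketch below uses made-up index values and an assumed publication year purely for illustration; the actual inflated figures are the ones provided on the EU HCSCD webpage.

```python
def inflate_to_reference_year(cost, price_year, ref_year, cpi_index):
    """Re-express a unit cost in reference-year prices using a CPI index series."""
    return cost * cpi_index[ref_year] / cpi_index[price_year]

# Illustrative index values only (base = 100); replace with the official national series.
cpi = {2017: 102.5, 2018: 103.7, 2019: 104.6}
# Hypothetical example: a tariff published in 2017 prices, expressed in 2019 prices.
print(round(inflate_to_reference_year(3411.96, 2017, 2019, cpi), 2))
```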
Enhancing transparency in and accessibility of methodological costing documents will improve the transferability of economic evaluations across countries.Additional file 1. Questionnaire General.Additional file 2. Responses to questionnaire.Additional file 3. PRISMA flow diagram."} +{"text": "The Journal and Authors retract the 14 January 2021 article cited above for the following reasons provided by the Authors:Following publication, concerns were raised regarding the integrity of the images in the published figures. The authors failed to provide a satisfactory explanation during the investigation, which was conducted in accordance with Frontiers\u2019 policies.This retraction was approved by the Chief Editors of Frontiers in Oncology and the Chief Executive Editor of Frontiers. The authors agree to this retraction."} +{"text": "The emergence of multicellular organisms was, perhaps, the most spectacular of the major transitions during the evolutionary history of life on this planet ,2. DespiResearch into the evolution of multicellularity tends to involve either studies that attempt to understand transitions from unicellularity to multicellularity or studies that focus on the subsequent step, corresponding to the emergence of complex multicellularity . Kin and Schaap argue thExtracellular matrices (ECMs) have played key roles in the evolution of multicellularity. They not only carry out the basic physical role of binding cells together but also influence development and morphology, protect cells from the external environment and represent an important a source of molecules involved in intercellular communication, all essential functions for a multicellular organism. Kloareg et al. provide Transcription-associated proteins play a central role in mediating and modulating the expression of the genome in the different cell types of a multicellular organism. In this Special Issue, Petroll et al. focus onAn opinion article by Patthy focuses Finally, the article by Isaksson et al. considerIn conclusion, this Special Issue brings together articles that address several of the important outstanding questions in multicellularity research and should be of interest to a broad audience interested in both the experimental and theoretical aspects of this exciting field of research."} +{"text": "The presence of maternal emotions towards the offspring resulting from assisted reproductive techniques (ART) has been previously reported in oocyte donors. However, there is limited information about the presence of these emotions in oocyte donors during the ART process and before pregnancy. The aim of this study was to evaluate the emotions of oocyte donor women towards the potential genetic offspring and to compare them with women treated with ART by using own oocytes.A cross-sectional study was conducted on 100 women who were divided into two groups of oocyte donors and those treated with ART and using autologous oocyte. At the time of oocyte retrieval. Using a validated questionnaire, the emotions toward potential offspring (EPO) resulting from ART and its three dimensions were measured and compared in two groups.Comparison of the EPO in the two groups showed that the emotions in all three dimensions were lower in oocyte donors than the other group (p\u2009<\u20090.001). 
Moreover, in oocyte donors, the mean score of the scale of the importance of treatment outcome dimension was higher than the other two scales (p\u2009<\u20090.001).The results of the study showed that there is a significant emotion toward the potential offspring in oocyte donors. The presence of these emotions thus should be considered in formulating the ethical charter of ART by using oocyte donation. There is limited information about the presence of maternal emotions in egg donor women during the assisted reproductive techniques (ART) process and before pregnancy. The aim of this study was to evaluate these emotions of women towards the potential genetic offspring and to compare them with women treated with ART by using own oocytes. A cross-sectional study was conducted on 100 women who were divided into two groups of egg donor and those treated with ART and using autologous oocyte. At the time of oocyte retrieval and using a validated questionnaire, the emotions toward potential offspring (EPO) resulting from ART and its three dimensions were measured and compared in two groups. Out of 100 women. Comparison of the EPO in the two groups showed that the emotions in all three dimensions were lower in egg donors than the other group. Moreover, in egg donors, the mean score of the scale of the importance of treatment outcome dimension was higher than the other two scales. The results of the study showed that there is a significant emotion toward the potential offspring in oocyte donors. The presence of these emotions thus should be considered in formulating the ethical charter of ART by using oocyte donation. The widespread use of the third party in assisted reproductive techniques (ART) and the success of infertility treatment following its use has led to an increased number of women volunteers who enter the oocyte donation process , mainly Willingness to connect with the born offspring in some gamete donors , oocyte It is believed that genetic relationships form the basis of the parent\u2013child relationship . In thisAlthough evaluation of oocyte donors' experiences in qualitative studies has shown the presence of emotional responses in donors, the presence of these emotions in oocyte donors towards the potential offspring has not been reported during the ART process and before the confirmation of the clinical pregnancy. However, the recognition of these emotions at the time of the donor's participation in ART allows the development of ethical charter for using a third party in ART. Therefore, the aim of this study was to evaluate the emotions of oocyte donor women towards the potential genetic offspring and to compare them with women treated with ART by using own oocytes.To evaluate the emotions of oocyte donors to the potential offspring this cross-sectional study was performed on 50 oocyte donors and 50 women under ART through using own oocyte with the approval of the ethics committee of Isfahan University in 2019\u20132020 Isfahan, Iran. A necessary sample size was 50 women in each group in order to achieve a 95% confidence interval and 80% test power to identify significant differences in means values at the 5% level and a significance level alpha of 0.05.Inclusion criteria in the two groups consisted of having no history of major psychological disorders, sexually transmitted diseases and hepatitis B and C based on the documents recorded in their medical file. 
For oocyte donor women previous complication with ovarian stimulation and history of genetic diseases were other inclusion criterion. ART cycle cancellation was exclusion criteria for both groups.Using convenience sampling method, sampling was performed in Isfahan Fertility and Infertility Center before ovarian stimulation among the oocyte donor women and the women who were candidate for ART due to male factor infertility. After obtaining informed consent, the baseline characteristics were recorded and, then, the EPO was measured on the day of oocyte reception and after the operation by using the self-report 12-item emotions towards the potential offspring questionnaire , while they had lower education level and most of them were employed (Table Comparison of the mean scores of the three dimensions of EPO shown in the donor women showed that the mean score of importance of treatment outcome dimension was higher than the mean score of imagination dimension (p\u2009<\u20090.001). Also, the mean score of importance of treatment outcome was higher than the mean score of the sense of ownership dimensions in the donor women (p\u2009<\u20090.001).Also, in the own oocyte women the mean score of importance of treatment outcome dimension was higher than the imagination (p\u2009<\u20090.001) and sense of ownership dimensions (p\u2009<\u20090.001). The mean scores of imagination and sense of ownership did not differ in the oocyte donor women; but in women treated with own oocytes, the mean score of imagination was significantly higher than that of sense of ownership (p\u2009=\u20090.02).In oocyte donor women, independent of the age, monthly income and educational level, the sense of ownership score was related to the number of children inversely and decreased with the increase of number of children (p-0.005). Additionally, the scores of sense of ownership p\u2009=\u20090.04) and the importance of treatment outcome (p\u2009<\u20090.001) had a significant inverse relationship with the economic level of the oocyte donor women (Table and the The aim of this study was to evaluate the emotions of oocyte donor women towards the potential genetic offspring and to compare them with women treated with ART by using own oocytes. The results showed that oocyte donor women had a relatively significant emotion towards the potential genetic offspring. Furthermore, while these women were younger and less literate than those treated with ART through using own oocytes, their emotions toward potential offspring response were lower than those of the women undergoing ART with own oocytes.The results showed that the individual characteristics of the participants were different in the two groups. In ART through using donated oocytes, oocyte donors are selected from among young women in order to obtain an appropriate number of oocytes with high fertility potential and increase the treatment success. Moreover, as studies have shown that the oocyte donor\u2019s participation in ART is mainly because of financial motivations , 3, 11, The finding suggests that prior to embryo formation; the oocyte donor women were somewhat emotionally involved with the potential genetic offspring, reflecting maternal attitudes toward the offspring resulting from the ART. The emotions of the oocyte donor women after the end of the participation in ART and the occurrence of pregnancy have been reported in previous studies. 
According to these studies, before starting the oocyte donation process, some women have expressed their concerns about the possibility of an emotional bond with the resulting offspring and their fondness for it . Some coImagination of the offspring is one of the events that take place during the bonding of the mother with the fetus . AdditioThe higher level of the dimension of the importance of treatment outcome compared to the other two dimensions of imagination, sense of ownership of the potential offspring, shows that the condition in which the child will grow up and the future that awaits him are so important for the oocyte donors. This research finding is in line with a study by Purewal et al. suggesting that it is important for the oocyte donors to have access to the information such as the number of the children, the history of divorce and socio-economic status of recipient couples .Another finding of the study showed that women with lower economic status and more children experience fewer emotions when participating in ART. Other studies have shown that financial incentives for oocyte donation are significantly related to the low economic status . TherefoAlthough the results of the present study indicated the presence of maternal emotions of oocyte donors towards potential genetic offspring, its limitations should be considered in interpreting the results of the present study. The first limitation of the study was related to the time of assessing the emotions of the donors. Owing to the researcher's lack of access to the donors, this assessment was performed after their participation in the treatment process on the day of ovulation and before sperm insemination and embryo formation.Also, in this study the results have been adjusted for some potential confounding factors which may influence the overall score of EPO, however there could still be residual variables which are not considered. Therefore, in-depth understanding on the emotional aspects can be investigated further through qualitative approach. Not following-up of the donors and evaluating their changes over time was another limitation of the present research. Furthermore, since the sampling method wasn\u2019t a random sampling, the sample to generalize to the population might not accurately represent the population.Another limitation of the study was using a forced Likert scale and limitations of this method to deal the neutral value among respondents who don't have clear response to the item. Despite the limitations of the present study, its results can be helpful in development of the ethical charter in providing ART services through using donated oocytes. It is also suggested that the motivation and emotions of the oocyte donor towards the resulting offspring be considered before they enter the treatment process. 
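Purely as an illustration of the covariate-adjusted group comparison reported in the study's tables (dimension scores compared between donors and own-oocyte patients while adjusting for age, income, education or number of children), one possible analysis is an ordinary least squares model with a group indicator and the covariates. The column names and the handful of rows below are hypothetical and do not come from the study's data.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical layout: one row per participant, with one EPO dimension score
# (derived from the 12-item questionnaire), group membership and covariates.
df = pd.DataFrame({
    "group": ["donor"] * 3 + ["own_oocyte"] * 3,   # illustrative rows only
    "ownership": [8, 10, 9, 14, 13, 15],
    "age": [24, 27, 25, 33, 31, 35],
    "n_children": [2, 1, 3, 1, 0, 1],
})
# Group effect on the sense-of-ownership score, adjusted for the covariates.
model = smf.ols("ownership ~ C(group) + age + n_children", data=df).fit()
print(model.params)
```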
Moreover, for a deeper understanding of these emotions in oocyte donors, it is suggested that future researchers compare the emotions of oocyte donors with that of the oocyte recipients.The results of the present study showed that albeit the EPO score in oocyte donor women was lower than women treated with own oocyte, the donor group score was high, This results revealed that the oocyte donors were involved in significant emotions towards the potential genetic offspring who does not belong to them and should be taken into account to develop the ethical charter in providing ART services through using donated oocytes."} +{"text": "There has been a rising interest in compliant legged locomotion to improve the adaptability and energy efficiency of robots. However, few approaches can be generalized to soft ground due to the lack of consideration of the ground surface. When a robot locomotes on soft ground, the elastic robot legs and compressible ground surface are connected in series. The combined compliance of the leg and surface determines the natural dynamics of the whole system and affects the stability and efficiency of the robot. This paper proposes a bio-inspired leg compliance planning and implementation method with consideration of the ground surface. The ground stiffness is estimated based on analysis of ground reaction forces in the frequency domain, and the leg compliance is actively regulated during locomotion, adapting them to achieve harmonic oscillation. The leg compliance is planned on the condition of resonant movement which agrees with natural dynamics and facilitates rhythmicity and efficiency. The proposed method has been implemented on a hydraulic quadruped robot. The simulations and experimental results verified the effectiveness of our method. Legged robots have superior mobility and maneuverability in complex unstructured environments, benefitting from the ability afforded by their morphology and varied gaits . Recent An essential property of animal locomotion is the alternative foot-ground contact in the swing and support phases of a locomotor cycle based on their inherent dynamics, roughly defining the locomotion\u2019s rhythmicity ,17. ElasAn emerging amount of research in biomechanics and kinesiology has revealed that elastic structures, including tendons, ligaments, muscles, and foot pads , play anTo approach the performance and efficiency of the biological archetype, researchers have expended plenty of endeavors in realizing legged locomotion in theory and engineering practice and made numerous profound achievements ,31. RaibTo achieve tunable compliance of the robot leg, it is crucial to employ physical or virtual elasticity in drive units. Physical elasticity means that the drive unit consists of some physical elastic components, and may be actively controlled to adjust the stiffness. Virtual elasticity means that the elasticity is achieved by an active compliance control algorithm. One approach to the implementation of physical elasticity is to add elastic components to the leg ,46,47, aThis paper proposes a systematic compliance planning and implementation method for a quadrupedal robot on various terrains. The stiffness of the ground surface is estimated during locomotion based on the analysis of ground reaction forces in the frequency domain. 
The compliance of the robot leg is actively controlled to offer virtual elasticity and is regulated as changes of locomotion parameters and the environment to achieve harmonic oscillation of the elastic leg-ground system in the stance phase.In developing the bio-inspired compliance planning and implementation method, we mainly offer two contributions: (a) A novel surface stiffness estimation method is proposed for legged robots. Through analysis of ground reaction forces in the frequency domain, the estimation can be completed within one step period. (b) The principle of harmonic locomotion is exploited for the leg compliance planning to improve rhythmicity and efficiency. The leg compliance is actively regulated on the condition of resonant movement which agrees with the natural dynamics of the leg-ground system.The paper is organized into six sections: As stated before, legged animals exploit elastic properties and adjust leg compliance to maintain longitudinal harmonic oscillation. Inspired by biological research, this paper takes harmonic locomotion as a basic principle for the motion planning and control of a quadruped robot. The leg compliance is planned on the condition of resonance to exploit the natural dynamics of elastic leg and to match the desired motion in terms of locomotion rhythmicity on various ground surfaces.The dynamics of legged locomotion can be revealed by the SLIP model, and various locomotion gaits of quadruped robots such as trotting, pacing, and bounding are expressed as elastic oscillations of the sample mass-spring bouncing system in SLIP, as illustrated in However, the natural dynamics of robots are actively controlled and the motion is arbitrarily generated, respectively; thus, the harmonic motion can hardly be achieved directly. In pursuit of efficient harmonic motion, the dynamics determined by the actively controlled joints should agree with the rhythmicity of motion, and foot reaction forces should match the oscillation of the CoM during each stance phase.k is the stiffness of the spring, m is the center mass, and f is the natural bouncing frequency. The natural dynamics described by k and m determine the passive harmonic motion parameterized by f.For a simple spring-mass bouncing system, the natural dynamics are governed byFor quadruped robots locomoting on various ground surfaces, the compressible ground surface and compliant leg are connected in series to form an elastic combination, and the combined effective stiffness regulates the legged motion on ground surfaces. To achieve harmonic locomotion for the quadruped robot, the estimation of the surface compliance under the current robot statuses and the prediction of the leg compliance for the next step based on the estimation are the main issues for the compliance planning.M1 and M2 represent the mass of the foot and body mass of the robot and, In the practical quadruped robot, the contact force can be directly and precisely measured while the deformation value estimated from the robot state is usually not reliable. Thus, the surface stiffness estimation should mainly be based on the foot reaction forces. The robotground contact model is shown in x1 and the height of the robot mass x2 as the generalized coordinates, the Lagrangian Equation for the system is given byT, V, D and Q denote the kinetic energy, potential energy, dissipated energy, and general forces of the system, respectively, and the details are given asThe simplified robot\u2013ground model possesses two degrees of freedom. 
Selecting the height of the foot Assuming that the damping of the spring leg and ground surface is negligible, the dynamics Equation (2) can be converted intoTo solve the Equation (4), the variable transformationConsidering the analytical solutions of the ordinary differential Equation system (6) have the same frequency and different amplitudes, then the solutions can be assumed to beThe solutions for Equation (8) generally have the following formEquation (9) can be substituted into (6) to generateTo ensure the existence of the solution, the determinant of Equation (10) should be zero, that isEquation (11) can be converted into an algebraic equationThe analytical solutions of Equation (12) can be derived asThus, the two modes of vibration for the robot-ground system are fully developed and can be expressed by Equations (5) and (13). On the assumption of the negligible damping of the ground surface, the foot reaction force is directly proportional to the displacement. Therefore, the vibration mode of the foot reaction force and that of the displacement, and the current spring leg stiffness and surface stiffness can be calculated by solving the vibration mode of the foot reaction force with Equation (13).T consists of the aerial phase period Ta and stance phase period Ts; that isAccording to the SLIP model of legged locomotion, the whole gait cycle period The stance phase period can also be expressed as c is used to relate the stance phase period Ts and harmonic vibration period Th, and we can thus obtainThe longitudinal oscillations of the stance leg during the stance phase approximately operate as a part of the simple harmonic vibration. The coefficient fs and gait step frequency fstep can be expressed asThe relation between vibration frequency K of the combined robot\u2013ground system determines the resonant frequency fs:The synthetical stiffness K can be calculated from Equations (16) and (17):The preferred synthetical stiffness K and surface stiffness K1, leg stiffness K2 is governed byThrough the relationship between the synthetical stiffness The preferred leg stiffness can be derived from Equations (18) and (19) and can be expressed bylegK based on the gait characteristics of legged locomotion.Equation (20) can be used to estimate the preferred leg stiffness The implementation of the active compliance control and planning for the quadruped robot is depicted in This section presents our implementation of the bio-inspired compliance planning for a hydraulically actuated quadruped robot. In contrast to electric motors, the main superiority of hydraulic actuation is its high power density, which is critical for heavy-duty legged robots. On the other hand, the control of hydraulic actuation is more challenging because of the wide variety of nonlinearities in the system. This section presents the detailed implementation of the compliance planning methods for the hydraulically actuated quadruped robot based on the framework in \u00ae based controller.The hydraulic actuator of the quadruped robot under consideration is depicted in The pressure dynamics of the cylinder considering the compressibility of the oil can be modeled asThe considered servo valve is developed for high dynamic response applications. The dynamics of the servo valve are neglected; thus the control input The net fluid force can be described byThe time derivative of Equation (24) is given bySubstituting Equations (21) and (22) into (26) yieldsz is presented in Equation (28). 
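Before the hydraulic implementation that continues below, the stiffness-estimation and compliance-planning steps described above can be summarized in a short sketch. Because the paper's numbered equations are not reproduced in this extracted text, the code is only a minimal reconstruction under stated assumptions: an undamped two-mass chain (ground spring, foot mass, leg spring, body mass) whose two modal frequencies are read from the foot-force spectrum, and a stance phase that spans a fraction c of one harmonic period of the body on the series leg-surface spring. All function and parameter names (m_foot, m_body, stance_fraction, c_coeff) are illustrative, not taken from the paper.

```python
import math

def estimate_stiffness_from_modes(f1_hz, f2_hz, m_foot, m_body):
    """Recover candidate (surface, leg) stiffness pairs from the two modal
    frequencies seen in the foot-force spectrum of the undamped 2-DOF chain
    ground --K_surf-- foot(m_foot) --K_leg-- body(m_body).

    With lam = omega^2, the characteristic equation is
        m_foot*m_body*lam^2 - [m_body*(K_surf+K_leg) + m_foot*K_leg]*lam + K_surf*K_leg = 0,
    so the measured modes fix the sum S and product P of its roots."""
    lam1 = (2.0 * math.pi * f1_hz) ** 2
    lam2 = (2.0 * math.pi * f2_hz) ** 2
    S, P = lam1 + lam2, lam1 * lam2
    # Eliminating K_surf gives a quadratic in K_leg:
    # (m_foot + m_body)*K_leg^2 - S*m_foot*m_body*K_leg + P*m_foot*m_body^2 = 0
    a = m_foot + m_body
    b = -S * m_foot * m_body
    c = P * m_foot * m_body ** 2
    disc = math.sqrt(b * b - 4.0 * a * c)  # real for a physically consistent mode pair
    pairs = []
    for k_leg in ((-b + disc) / (2.0 * a), (-b - disc) / (2.0 * a)):
        k_surf = P * m_foot * m_body / k_leg
        pairs.append((k_surf, k_leg))
    return pairs  # keep the pair whose leg stiffness matches the commanded one

def preferred_leg_stiffness(step_freq_hz, stance_fraction, c_coeff, m_body, k_surf):
    """Plan the leg stiffness so the supported body resonates on the series
    leg+surface spring at the stance-phase frequency (harmonic locomotion)."""
    t_stance = stance_fraction / step_freq_hz        # stance duration = duty factor / step frequency
    f_res = c_coeff / t_stance                       # stance spans c_coeff harmonic periods
    k_total = m_body * (2.0 * math.pi * f_res) ** 2  # required combined ("synthetical") stiffness
    if k_total >= k_surf:
        raise ValueError("surface too soft for the requested resonant frequency")
    return k_total * k_surf / (k_surf - k_total)     # series springs: 1/K = 1/K_surf + 1/K_leg

if __name__ == "__main__":
    # Illustrative numbers only (2 kg foot, 30 kg supported body mass).
    print(estimate_stiffness_from_modes(5.3, 43.7, m_foot=2.0, m_body=30.0))
    print(preferred_leg_stiffness(3.0, 0.5, 0.5, m_body=30.0, k_surf=1.0e5))
```

Of the two candidate pairs returned by the estimator, the one whose leg stiffness is closest to the value currently commanded by the active compliance controller is the physically meaningful solution; the planner can then be re-run once per stride with the updated surface estimate.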
We see that Equation (27) maps the control voltage to the fluid force. Through the inverse of Equation (27), the hydraulic force controller can be obtained asThe exponential force stabilization is guaranteed byEquation (31) indicates that Active compliance control plays an important role in the period of contact of the actuator and the load. It indicates the synchronous control of force and position during the contact by tuning the stiffness, damping and inertia, and can be described asReferring to Equation (32), position and velocity tracking errors are used to compute the desired force. The measured acceleration is usually unreliable since it is calculated through the second-order difference of the position signals. The desired acceleration is used to replace the acceleration feedback. Thus, the ideal desired force is derived asIn practice, the friction force of the hydraulic cylinder also affects the character of the contact. In hydraulic systems, notable friction force exists in hydraulic cylinders for leak tightness requirements. A dynamic friction force identification method is used based on our previous work . For furEquations (29) and (34) reveal the main framework of the active compliance controller for the hydraulic quadruped robot. The velocity gain It should be noted that the active compliance parameters in the Equation (34) are expressed in the actuation space. Based on the virtual work principle, the relation between the stiffness in each actuation and joint space can be derived asAs is known, the Jacobian relates the joint torques and the forces applied on the foot byFrom the definition of stiffness, we differentiate Equation (38) and we haveThe systematic method of compliance planning and implementation proposed in this paper consists of two major steps: the compliance planning and the compliance implementation. For the compliance implementation, the position and force tracking experiments were conducted to verify the inner force controller and the outer position controller of the active compliance controller, as illustrated in In the control framework of this paper, the inner-loop torque control is the basis of the compliance control algorithm, while, ideally, the compliance control will not reduce the position tracking performance. To verify the force and position tracking performance of the compliance controller, the experiment was conducted on the left front leg of the quadruped robot hanging in the air. The robot was controlled to perform a 50 mm range of squatting motion when a 25 kg load was mounted on the foot. The pressure sensors on the hydraulic cylinder allow for the calculation of the measured force of the hydraulic cylinder. The encoder on the joint allows for the indirect measurement of the joint position, which is represented by the cylinder length.We expect the robot leg to behave as an actual spring under the active compliance controller to cope with the impact disturbances exerted on the robot feet. We designed the corresponding experiment on our quadruped robot. The impact disturbance force was exerted to the foot when the robot was lifted in the air and the position response was measured. Without loss of generality, the actuation stiffness of 500 N/mm was set for the hydraulic cylinder. The impact disturbance force was also sensed by the three-axis force sensor. We took the compression of an ideal virtual spring with the same stiffness as the desired position response and measured the actual position response of the hydraulic cylinder. 
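As a recap of the controller structure described in this section, and because Equations (27)-(38) themselves are not reproduced in this extracted text, the sketch below illustrates the three ingredients in a simplified, hedged form: an impedance law that turns position/velocity errors, a desired-acceleration feedforward and an identified friction term into a desired cylinder force; an inner force loop, here reduced to a plain proportional law standing in for the model-based inverse of the cylinder pressure dynamics; and the usual virtual-work approximation for mapping a cylinder-space stiffness into joint space. All gains, values and names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ComplianceParams:
    k: float    # virtual stiffness in actuation (cylinder) space [N/m]
    d: float    # virtual damping [N*s/m]
    m_v: float  # virtual inertia applied to the *desired* acceleration [kg]

def desired_actuator_force(p, x_des, x, v_des, v, a_des, f_friction):
    """Impedance law in the spirit of Eqs. (32)-(34): track position and velocity
    errors, feed forward the desired acceleration (measured acceleration is not
    fed back), and compensate the identified cylinder friction."""
    return p.k * (x_des - x) + p.d * (v_des - v) + p.m_v * a_des + f_friction

def force_inner_loop(f_des, f_meas, kp=2.0e-4):
    """Placeholder inner force loop: a proportional law on the force error gives
    the valve command; the paper instead inverts the pressure dynamics (Eq. 27)."""
    return kp * (f_des - f_meas)  # servo-valve voltage command [V]

def joint_stiffness_from_actuation(jac, k_act):
    """Map cylinder-space stiffness to joint space, K_q ~= J^T * K_a * J (scalar
    single-joint form shown); the configuration-dependent dJ/dq term is neglected."""
    return jac * k_act * jac

if __name__ == "__main__":
    params = ComplianceParams(k=5.0e5, d=2.0e3, m_v=5.0)
    f_des = desired_actuator_force(params, 0.300, 0.295, 0.0, 0.02, 0.0, f_friction=120.0)
    u = force_inner_loop(f_des, f_meas=2300.0)
    print(f_des, u, joint_stiffness_from_actuation(0.05, 5.0e5))
```

The proportional force loop is used here only so the sketch stays self-contained; it is not the paper's controller, which derives the valve command from the inverse of the cylinder model.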
The comparison is shown in The experimental setup of a single robot leg is built for the study of robot\u2013ground contact behaviors. The internal components, including the spring of the robot leg, substrate surface, sliding rail and sensors are illustrated in The proposed surface compliance estimation method was verified on the experimental setup of a single robot leg. The leg was released after being lifted to a certain height to excite the contact of the robot foot and ground surface. The contact force and surface deformation were measured as shown in As can be seen from \u00ae and Simscape\u00ae. The mass and geometric dimensions were set according to the robot experimental setup. The robot walked with different step frequencies and leg stiffness using a trotting gait at a speed of 2 m/s. The energy consumption of the robot locomotion was normalized by the displacement on the ground surface. The preferred leg stiffness to obtain minimum energy consumption at a given gait frequency was investigated and compared with the calculated leg stiffness using the Equation (20). The simulation result in To observe the influence of leg stiffness on legged locomotion performance, a quadrupedal robot simulation model was constructed based on MatlabCompliant legged locomotion has recently become an emerging area of interest in the field of robotics. Few studies, however, have been carried out on the planning and implementation of leg compliance with the consideration of ground stiffness. As mentioned in the introduction section, the main challenge is the lack of rapid and affordable ground compliance estimation methods and of reasonable principles for leg compliance planning. In this paper, a systematic compliance planning and implementation method for the quadrupedal robot is proposed to plan and control the leg compliance continuously with the consideration of the ground through surface stiffness estimation. In this way, the compliant robot leg behaves naturally following bio-inspired principles, and the performance is improved in terms of locomotion efficiency and rhythmicity. The effectiveness of the proposed control method has been shown through simulations and experimental results on a hydraulic quadruped robot. The proposed method can also be extended to other legged robots actuated by hydraulic systems or motors where both the torque and compliance are controllable.Future work will include the development of the proposed control architecture for a practical hydraulic quadruped robot, walking and running in a more challenging ground environment."} +{"text": "This contribution has two key objectives. First, inspired by earlier studies in comparative welfare state and in gerontology, we develop a conceptualization of autonomy that is rooted in its social dimensions. This concept is then deployed to assess its policy considerations within the field of home care, both with regards to access and generosity in 21 industrialized countries. Second, this contribution performs a comparative assessment of the key factors resulting in a prioritization of the social dimensions of home care and social services in long term care. This study involves an-depth analysis of policy instruments deployed by public authorities to enhance the autonomy of older adults, complemented with interviews with policy makers in diverse home care policy settings . As such, this study features an evaluation of the presence of social elements in the definition and supply of care needs across 21 countries. 
It leads to the construction of a social dimensions of autonomy index based upon these instruments and the budgetary prioritization of home care within long term care policies. Among core findings, one discovers broader access and more generous funding when home care responsibilities are firmly embedded at the local level."} +{"text": "The Mental Health Commission (MHC) is an independent body in Ireland, set up in 2002, to promote, encourage and foster high standards and good practices in the delivery of mental health services and to protect the interests of patients who are involuntarily admitted. Guidelines on the rules governing the use of seclusion are published by the MHC. These guidelines must be followed and recorded in the patient's clinical file during each seclusion episode. A Seclusion Integrated Care Pathway (ICP) was devised in 2012 for use in the Approved Centre in Tallaght University Hospital. This ICP was developed in conjunction with the MHC guidelines to assist in the recording and monitoring of each seclusion episode. Since its introduction in 2012, this ICP has become an established tool used in the Approved Centre in Tallaght University Hospital.The aim of this audit was to assess adherence to MHC guidelines on the use of seclusion in the Approved Centre in Tallaght University Hospital 8 years after the introduction of an ICP and compare it to adherence prior to its introduction and immediately after its introduction.Thirteen rules governing the use of seclusion have been published by the MHC. These include the responsibilities of registered medical practitioners (RMPs) and nursing staff, and the levels of observation and frequency of reviews that must take place during each seclusion episode. Using the seclusion register, we identified a total of 50 seclusion episodes between August 2019 and July 2020. A retrospective chart review was conducted to assess documentation of each seclusion episode.There was an overall improvement in adherence to MHC guidelines compared to adherence prior to the introduction of the ICP and immediately after its introduction. Areas of improvement included medical reviews, nursing reviews, informing the patient of the reasons for, likely duration of, and circumstances that could end seclusion, and informing the next of kin. The range of compliance levels across the thirteen MHC guidelines improved from 3\u2013100% to 69\u2013100%. Post-intervention, there was 100% compliance with five of the thirteen guidelines.The introduction of an ICP led to an overall improvement in compliance with MHC guidelines. The ICP has ensured that many of the rules governing seclusion are explicitly stated; however, adjustments and revisions to the document and ongoing staff training are needed to ensure full adherence to MHC guidelines."} +{"text": "The development of the nervous system is a time-ordered and multi-stepped process that includes neurogenesis and neuronal specification, axonal navigation, and circuit assembly. During axonal navigation, the growth cone, a dynamic structure located at the tip of the axon, senses environmental signals that guide axons towards their final targets. The expression of a specific repertoire of receptors on the cell surface of the growth cone, together with the activation of a set of intracellular transducing molecules, outlines the response of each axon to specific guidance cues. 
This collection of axon guidance molecules is defined by the transcriptome of the cell which, in turn, depends on transcriptional and epigenetic regulators that modify the structure and DNA accessibility to determine what genes will be expressed to elicit specific axonal behaviors. Studies focused on understanding how axons navigate intermediate targets, such as the floor plate of vertebrates or the mammalian optic chiasm, have largely contributed to our knowledge of how neurons wire together during development. In fact, investigations on axon navigation at these midline structures led to the identification of many of the currently known families of proteins that act as guidance cues and their corresponding receptors. Although the transcription factors and the regulatory mechanisms that control the expression of these molecules are not well understood, important advances have been made in recent years in this regard. Here we provide an updated overview on the current knowledge about the transcriptional control of axon guidance and the selection of trajectories at midline structures. The survival of organisms relies on their ability to detect stimuli, process sensory information and generate adequate motor responses. These functions depend on the precise organization of neural networks that enable communication between cells in an efficient and accurate manner. These networks emerge during embryonic development when newly born neurons extend axons away from the cell body to navigate through the developing embryo in order to reach their final targets. The growth cone at the tip of the travelling axon is a specialized structure armed with a plethora of receptors that defines the response of the growing axon to the environmental cues and determines its direction. The existence of both commissural neurons that project to the opposite side of the brain and ipsilateral neurons that connect with targets in the same hemisphere, is essential for the distribution and integration of sensory information and the subsequent generation of coordinated motor responses in species with bilateral symmetry . IntenseDrosophila initially identified a number of TFs involved in controlling the trajectories of motoneurons (MNs) axons towards their corresponding muscles and, subsequent work in vertebrates, revealed some of the transcriptional regulators that define specific limb muscles innervation was described as essential for the targeting of olfactory projection neurons in osophila , and Pouosophila . Also inosophila . In the osophila and Ctiposophila . In bothe lamina and altee lamina .In addition to the abovementioned examples, two neuronal populations have been particularly useful to study the molecular mechanisms underlying axon pathfinding: spinal neurons at the time their axons navigate the floor plate, and retinal ganglion cells when their axons traverse the optic chiasm. In the following sections we review recent findings on the transcriptional regulation of neuronal trajectories using these two classic midline axon guidance models.The population of early born interneurons located in the most dorsal part of the spinal cord is known as dI1. As soon as dI1 neurons differentiate, they migrate ventrally to finally occupy the deep dorsal horns . A largeIn vitro, Barhl2 binds to the regulatory sequences of Lhx2 and represses its expression specifying a commissural versus ipsilateral choice in dI1 neurons given the complexity of the regulatory mechanisms linking Atoh1, Barhl1/2, Lhx2/9 and downstream targets. 
Together with a more precise definition of the GRN controlling the specification of dl1 subtypes, other questions such as whether Lhx TFs activate other guidance receptors such as DCC, Robo2 or members of the Wnt pathway, or whether Robo3 expression is regulated by other homeodomain TFs in different types of commissural interneurons remain to be answered.In the mouse visual system, the majority of retinal ganglion cell axons cross the ventral diencephalon at the optic chiasm level (cRGCs) while a minority of these axons project to the ipsilateral hemisphere (iRGCs). In this model, also largely used to study axon guidance mechanisms, another member of the LIM homeodomain TF family, Islet2 (Isl2), is differentially expressed in the ipsi and the contralateral RGCs subpopulations . Isl2 muUnc5c locus and represses its expression in order to facilitate their growth into the diencephalic region located in the dorsal horns of the spinal cord. dILB neurons are born very close to the dorsal midline . These cAll together, these observations point to the existence of several gene programs that control axonal laterality in ipsilateral spinal neuron populations with dispar ontogeny. Early born dl1 neurons locate far away from the midline because the ventricle and the subventricular zone (SVZ), which is rich in progenitor cells, occupy the medial region of the dorsal tube. As progenitors exit the cell cycle, the SVZ shrinks and the somas of the late born dILB neurons locate close to the midline. In contrast to the dl1i population whose axons never approach the midline and their projection patterns rely on Lhx factors, dILB neurons are born in close contact with the midline and their axons need to be repelled as soon as they start growing in order to project ipsilaterally. Thus, it is not surprising that although both populations, dI1i and dILB neurons project ipsilaterally, they developed alternative strategies to control the guidance of their respective axons Figure .Despite the increasing number of rapidly emerging innovative techniques that largely facilitates research on the transcriptional mechanisms regulating gene expression, only a handful of TFs have been convincingly shown to control genetic programs involved in the regulation of axonal behaviors. In the last decade, the interest to understand how neural circuits function has exponentially increased and the development and application of genetically encoded, magnetic and thermal tools to manipulate neuronal circuits is helping us to disentangle brain connectivity and circuits function. However, it is surprising that in the era of next generation sequencing and single cell transcriptomic approaches there ar"} +{"text": "In this review, we outline the differentclasses of oocyte transcripts that may be involved in activation of the embryonic genome aswell as those associated with epigenetic reprogramming, imprinting maintenance or the controlof transposon mobilisation during preimplantation development. We also report the influenceof cumulus-oocyte crosstalk during the maturation process on the oocyte transcriptome andhow The notion of embryo quality refers to the capacity of an embryo to develop and support successfulpregnancy to full term. In cattle, most failed pregnancies result from embryonic mortalitythat occurs before implantation, during the first two weeks after fertilisation . The emMammalian embryos are transcriptionally quiescent at the start of development. 
Early embryogenesisis mainly governed by post-transcriptional and post-translational events. The stockpileof maternal RNAs and proteins, which are stored within the oocyte during oogenesis, sustainsthe first stages of development until embryonic genome activation during the early developmentof bovine embryos is a paradigm. JMJD3 belongs to the Jumonji family of genes that are epigeneticregulators. JMJD3 is a lysine demethylase associated with histone demethylation. Its activityis required for the removal of trimethylated histone 3 lysine 27 (H3K27me3) marks during thereprogramming process . The leWhile tight temporal activation of the translation of dormant maternal mRNAs is required forsuccessful progress through the first cleavages, destabilisation and degradation of the maternalRNA pool is a major determinant for the start of EGA . DecreaPiwi-interacting RNAs (piRNAs), another class of small non coding RNAs, are potentially involvedin the control of maternal mRNA translation and decay as well as in that of transposon activityduring early embryogenesis (Long non-coding RNAs (lncRNAs) are increasingly being recognised as modulators of gene expression.These lncRNAs are a class of transcripts longer than 200 nucleotides that do not usually codefor a protein . RecentThe capacity of the fertilised egg to support successful embryonic development is reliant uponthe content of the oocyte at the time of fertilisation. The phenotype of a mature oocyte is theculmination of continuous and highly coordinated interactions between the germinal and somaticcompartments of the ovarian follicle that occur throughout folliculogenesis, and particularlyduring terminal differentiation of the cumulus-oocyte complex (COC) . The LHin vitro study using bovine model shown that modulation of miRNA-130bexpression in maturing oocyte affects the meiosis progression as well as the proliferationrate and the glucose metabolism activity of surrounding CCs and fertilisation (IVF). The resultingzygotes undergo a 7-day culture period that permits them to reach the blastocyst stage. Embryoquality is assessed morphologically at the end of this culture period in order to select embryosthat are compatible with the transfer procedure (in vitromatured COCs with those obtained from in vivo counterparts highlightedcritical deficiencies affecting several cumulus molecular pathways known to support developmentalpotential of the oocyte in mice, humans and cattle (in vitro procedure.The alteration of terminal molecular events in CCs before fertilisation could compromise cumulus-oocytedialogue and hence the full development of oocyte competence. Studies using a bovine model haveshown that the rise in PTGS2-related PGE2 production that occurs in CCs concomitantly with theresumption of meiosis is affected by IVM conditions and the presence of exogenous EGF (in vivo maturation (in vitro procedure (In cattle, ocedure . We prein vitro produced embryos is to optimise the in vitro maturationconditions in order to preserve the integrity of cumulus-oocyte coupling, which will contributeto achieving the full developmental competence of the oocyte.Disruption or deregulation of the interactions between a maturing oocyte and surrounding CCscan affect the final stages of the storage of maternal factors and consequently of subsequentembryonic development. One of the most promising options to improve the viability of"} +{"text": "This article explores this topic in a succinct and systematic manner for the purpose of future documentation and referencing. 
This history is documented and updated up to January 2021. Nigeria, also known as the Federal Republic of Nigeria, is located in the West African sub-region bordered by the Republic of Benin in the West, Chad and Cameroon in the East and the Niger Republic in the North. The Nigerian coast in the South lies on the Gulf of Guinea in the Atlantic Ocean. Nigeria has more than 250 ethnic groups, but the three major traditional groups are the Hausa-Fulani, the Yoruba and the Igbo. Geopolitically, the country consists of six regions or zones, 36 states and a Federal Capital Territory located in Abuja. It consists of about 20% of the sub-Saharan African population.2 This course spanned the period 1932\u20131952, after which it was phased out for a more preferable course at the Faculty of Medicine, University of Ibadan, a programme that has a special relationship with the University of London where students who completed the pre-clinical training often proceed for the clinical aspect of their training.2Early western medical training in Nigeria prior to 1932 was mostly acquired in the United Kingdom (UK), few other European countries and the United States of America. Some Nigerians also obtained scholarship to study medicine in Eastern Europe and Russia up to 1960.Indigenous Western medical training commenced in Nigeria in 1932 when the Yaba Higher College provided a 7-year basic medical education course for candidates with Senior Cambridge School Certificate and produced assistant medical officers whose certificates were only recognised in Nigeria. The assistant medical officers were like physician assistants as observed today in the United States of America.2 Notable amongst them are late Prof. Thomas Adeoye Lambo, the first Nigerian psychiatrist who later rose in his carrier to become a deputy director at the World Health Organization (WHO).3 Others include the likes of late Prof. Tolani Asuni, who is the second indigenous psychiatrist in Nigeria.4 Closely followed these two are Prof. Ayo Binitie and Prof. Betta Johnson, amongst others.5Most of these foreign trained doctors remained in the UK to acquire postgraduate training in different specialties of medicine. Those who specialised in psychiatry then came back with Diploma in Psychological Medicine. These groups of doctors became the \u2018founding fathers\u2019 as they are fondly referred to forming the foundation of indigenous postgraduate psychiatry training in Nigeria and they went ahead to train other psychiatrists in Nigeria.Medical and Dental Practitioners Act of 1963.2 The Nigerian Medical Council later became Medical and Dental Council of Nigeria (MDCN) in 1990 through the promulgation of the Medical and Dental Practitioners Act, Cap 221 Laws of the Federal Republic of Nigeria, 1990.6Medical and Dental Act of 1963 was the law regulating medical graduate registration and practice. An amendment to it led to Decree number 44 of 1969 to enable the Nigerian Medical Council to be also responsible for postgraduate medical training and registration/practice of postgraduate medicine as opposed to only undergraduate medical training.Military Decree number 44 of 1969 gave a take-off wing to indigenous postgraduate medical training in Nigeria after due approval by the then Nigerian Medical Council . 
This de2 The decree establishing the NMC was promulgated into law in September 1979 by former military Head of State, General Olusegun Obasanjo.2 Simultaneously, the West African Health Society established in 1976 the West African College of Physicians (WACP) that was to superintend postgraduate medical training in West African sub-region.7 It consists of the following faculties: Internal Medicine, Laboratory Medicine, Community Health, Family Medicine, Paediatrics and Psychiatry.7Postgraduate medical training including psychiatry was superintended by the then Nigerian Medical Council (Postgraduate Examinations), the section of Nigerian Medical Council that assumed the responsibility of postgraduate medical education with the support of external examiners from the UK and neighbouring West African countries from 1970 up to 1979, when postgraduate training was separated under an independent body named the National Medical College (NMC) that later metamorphosed into the National Postgraduate Medical College of Nigeria (NPMCN).Both the NPMCN under the Faculty of Psychiatry (FMCPsych) and the WACP have been responsible for producing specialists in psychiatry since their inception in 1979 and 1976, respectively, leading to the award of Fellowship of NPMCN under the Faculty of Psychiatry (FMCPsych) and Fellowship of West African College of Physicians (FWACP) Psychiatry, respectively.Under the control of the Nigerian Medical Council (Postgraduate Examinations), today\u2019s Faculty of Psychiatry of NPMCN was founded under the administrative leadership of the first board that was chaired by late Prof. Tolani Asuni and Dr A. Anumonye as the secretary in 1974 well before the promulgation of the NPMCN into existence in 1979 by Decree number 67.Between the inception of the NPMCN and WACP and now, about 250 indigenous specialist psychiatrists have been produced till date and about 200 psychiatric trainees are presently undergoing specialist training as supervised by both colleges. Specialists are appointed into the position of Consultant Psychiatrist in Nigeria with either Fellowship of NPMCN under the Faculty of Psychiatry (FMCPsych) or with the Fellowship of West African College of Physicians under the Faculty of Psychiatry (FWACP) or their equivalent which could be the Member of the Royal College of Psychiatry (MRCPsych) with a number of years of practicing experience.To obtain the fellowship of the two colleges, one would need to undergo three stages of the examinations \u2013 the primary examination that examined related basic medical sciences, Part 1 examination that tests the core clinical expertise of the specialty and Part 2 fellowship examination that is usually made up of oral examination and defence of a dissertation or thesis before award of the fellowship of either colleges. Both colleges collaborate in the designing of training and development of curriculum. The West African College serves the countries in the West African sub-region and the NPMCN is strictly for Nigeria. 
The fellowships of the two colleges are equivalent in Nigeria for the purpose of employment and practising as a specialist.8The NPMCN in the Faculty of Psychiatry in recent years approved Post-Fellowship sub-specialty trainings in Child and Adolescent Psychiatry and Old Age Psychiatry in Nigeria.9 This association consists of full-fledged psychiatrist and associate members, who are usually psychiatric trainees and some allied professionals.9 The Association of Child and Adolescent Psychiatry and Allied Professionals in Nigeria (ACAPAN) was registered recently under the umbrella of APN.All the psychiatrists practising in Nigeria are bound together under the umbrella of the Association of Psychiatrists in Nigeria (APN) inaugurated in 1970.10 Since then, this controversy has continued raging and lately making the NPMCN bow under pressure to include alongside her award of Fellowship a Doctor of Medicine (MD) degree that could be deemed equivalent to PhD in other academic settings. Prior to the introduction of MD degree as a postgraduate medical degree, the degree being awarded to individuals who completed undergraduate medical training in Nigeria is Bachelor of Medicine or Bachelor of Surgery (MBBS or MBChB) depending on the particular university.Around 2008, the Nigerian University Commission (NUC), a body that regulates university education in Nigeria, issued a directive that possession of PhD degree would now be a prerequisite for promotion to a professorial level in a university setting as against just the fellowships of either colleges that are acceptable for postgraduate medical teachers in university settings in Nigeria. This directive was vehemently rejected by officials of both the NPMCN and the WACP and surgeons who argued that medical training, and more so postgraduate medical training, is a hands-on bedside training that must be combined with didactic lectures along with research competence unlike PhD that is mostly classroom taught courses and research. And there is no way a PhD degree can replace the standard offered by the level of training competence afforded by fellowships of either colleges for the purpose of postgraduate medical training in Nigeria11 The fact whether this step would improve the quality of postgraduate medical teachers in the university environment remains to be seen. It is the opinion of many specialist medical professionals that the change being accommodated by the NPMCN is just a cosmetic way of bowing to political pressure because Decree number 67 of 1979 has specified and defined the role of NPMCN as a body that would regulate the training and practice of postgraduate medical education in Nigeria. 
Whilst it is argued that undergraduate medical training may be under the supervision of or housed by a college in the university, postgraduate medical training, on the other hand, can actually be set up in any accredited health service institution across the country and not necessarily under the supervision of a university regulatory authority (NUC).The curriculum for the MD degree and training design which would be incorporated into the structure of normal Residency Training Programme leading to the award of Fellowship is presently going through finishing touches.This historical perspective is important at this present time in view of the current controversies arising in the university environment in Nigeria based on the new directive by the NUC, requiring every postgraduate medical teacher in a university environment to have a doctorate degree (PhD or MD) before being qualified for promotion to a professorial level in a Nigerian university setting. A moratorium was recently given by the NUC to postgraduate medical teachers in the university environment to obtain a PhD or MD degree by 2025 to be able to qualify for the promotion to the level of a university professor."} +{"text": "I read with great interest the article entitled \u201cThe Long-Term Effects of 12-Week Intranasal Steroid Therapy on Adenoid Size, Its Mucus Coverage and Otitis Media with Effusion: A Cohort Study in Preschool Children\u201d by Zwierz et al. [In this study, the authors used examination adenoid hypertrophy according to Boleslavska et al. ,3. This Grade I: adenoid tissue that fills less than one third of the vertical portion of the choanae;Grade II: filling of the adenoid tissue from one third to two thirds of the choanae;Grade III: adenoid tissue that fills more than two thirds of the choanae.In this regard and according to choanal obstruction, adenoids are differentiated into three grades:Grade A: adenoid tissue not in contact with the torus tubarius;Grade B: adenoid tissue in contact with the torus tubarius without complete covering;Grade C: adenoid tissue that completely covers the torus tubarius.The condition of the nasopharyngeal orifice of the Eustachian tube was also differentiated into three grades related to the condition of adenoid tissue:I do not understand how they performed the classification of the size of the adenoids for the patients in the study. Why did the authors not classify the conditions of adenoid tissue into the nasopharyngeal orifice of the Eustachian tube? In otitis media with effusion, the relation between adenoids and torus tubarius is more important than the volume of the adenoids ."} +{"text": "Needling interventions consist of the use of filiform needles for the management of different conditions of the neuromusculoskeletal system. The most commonly used needling therapies are trigger point dry needling, an intervention showing an increasing interest in both clinical and research setting , and acuTwo studies focused on the use of ultrasound imaging: the first one for improving the safety of some potential dangerous dry needling treatments when applied on the thorax and the In addition, two meta-analyses support the effectiveness of trigger point dry needling for conditions that cause neck and knee"} +{"text": "In this Special Issue, a wide variety of original and review articles provide a timely overview of how viruses are recognized by and evade from cellular innate immunity, which represents the first line of defense against viruses. 
The success of the immediate response relies on the recognition of invariant features encoded by viruses termed pathogen-associated molecular patterns (PAMPs) and by specialized sensors called pattern recognition receptors (PRRs). In the review by Singh et al., the reader is provided with a broad overview of the innate sensing of viruses by diverse PRRs. The authors discuss recent progress in the understanding of the consequences of innate sensing on the central nervous system (CNS), a tissue that can be severely damaged by infections of diverse viruses .The consequence of this surveillance network and the downstream pathway activation is the secretion of cytokines and type I interferons (IFNs). Schwanke et al. provide an in-depth review on the master regulator of the type I interferon response, IRF3, and the IFN\u03b2 enhanceosome . They prBased on the nature of their nucleic acid and entry pathways, viruses are, in general, either sensed by RNA or DNA sensors. Retroviruses, including the Human Immunodeficiency virus, are RNA viruses that reverse transcribe their RNA genome into DNA. Different aspects of RNA or DNA sensing in retroviral infections are highlighted in one review and one"} +{"text": "Various relationships are important for the well-being of older adults. This session focuses on the vertical and horizontal relations of Korean and Korean American older adults and their well-being. The purpose of this session is to highlight the importance of intergenerational relations and social involvement of Korean and Korean American older adults. For vertical relations, two studies focus on intergenerational relationships and solidarity. The first study investigated whether intergenerational relationships and social support mediate the distressing consequences of life events, and how this improved the psychological well-being of Korean older adults. The second study developed a standardized measurement tool for intergenerational solidarity because intergenerational conflicts caused by rapid socioeconomic changes have highlighted the importance of strengthening intergenerational solidarity. The third and fourth studies focus on horizontal relations involving social isolation and social involvement. Guided by the double jeopardy hypothesis, the third study examined the health risks posed by the coexistence of social and linguistic isolation in older Korean Americans. As the opposite of social isolation, social involvement was an important factor of social integration of older adults. The fourth study examined volunteering as an example of social involvement by focusing on older adults\u2019 volunteering on the social integration and role identity. Implications of this study suggest not only the importance of social involvement but also the intergenerational relationships on older adults\u2019 well-being."} +{"text": "The pulmonary vasculature consists of a large arterial and venous tree with a vast alveolar capillary network (ACN) in between. Both conducting blood vessels and the gas-exchanging capillaries are part of important human lung diseases, including bronchopulmonary dysplasia, pulmonary hypertension and chronic obstructive pulmonary disease. Morphological tools to investigate the different parts of the pulmonary vasculature quantitatively and in three dimensions are crucial for a better understanding of the contribution of the blood vessels to the pathophysiology and effects of lung diseases. 
In recent years, new stereological methods and imaging techniques have expanded the analytical tool box and therefore the conclusive power of morphological analyses of the pulmonary vasculature. Three of these developments are presented and discussed in this review article, namely (1) stereological quantification of the number of capillary loops, (2) serial block-face scanning electron microscopy of the ACN and (3) labeling of branching generations in light microscopic sections based on arterial tree segmentations of micro-computed tomography data sets of whole lungs. The implementation of these approaches in research work requires expertise in lung preparation, multimodal imaging at different scales, an advanced IT infrastructure and expertise in image analysis. However, they are expected to provide important data that cannot be obtained by previously existing methodology. The gas-exchange function of the mammalian lung is closely linked to its structural composition Weibel . At the The blood reaches the lungs via the pulmonary arteries more or less in parallel with the airways. The veins draining the blood from the capillary bed run within interlobular septa and do not follow the arterial and airway paths. Like the airways and like arteries of the systemic circulation, the wall composition of the pulmonary arterial branches changes along the vascular tree. The pulmonary arteries are involved in important physiological processes such as hypoxic vasoconstriction are essential for the physiological function of the lung and are critically involved in human diseases. Due to the close link between structure and function of the lung, (quantitative) morphological methods to analyze the different vascular compartments of the lung contribute to understanding the healthy and diseased lung Weibel . DespiteDesign-based stereology is the gold standard of morphometric studies on the lung or rely on a group of vessels that seem to be easy to identify. Within a homogeneous cohort the grouping of arteries according to their diameter or wall thickness may lead to consistent results. However, when these data are used to draw a comparison with an experimental group with altered vascular characteristics, it may well be that different types of arteries are compared with one another. The difficulty in defining a coherent vascular compartment can also lead to biased observations and light microscopic stereology generation of arterial branches. However, when looking at a human lung, such a design would not make sense: it certainly requires a lot more generations from the hilus of the lung until the subpleural parenchyma at the base of the lung than to the perihilar gas-exchange region. According to the dichotomous branching, an average number of 23\u201328 generations of arteries is widely accepted (Ochs and Weibel Morphological analyses of the pulmonary vascular trees as well as the alveolar capillary network are essential to improve our functional understanding of the lung vasculature in health and disease. New microscopic and X-ray imaging techniques, digital image processing and stereological estimators increase the potential of microscopic investigations of the pulmonary vasculature. Future studies will benefit from using and combining the techniques to enhance their conclusive potential."} +{"text": "The growing cost of healthcare services has been a concern for many countries in the world. In China, medical expenditures can account for as much as 65% of per capita income in some low-GDP counties in 2011. 
One of the primary goals of the New Rural Cooperative Medical Insurance (NRCMI) is to provide financial protection and alleviate the financial burdens of rural residents in China. This paper examined whether NRCMI participation impacted the incidence of catastrophic health expenditure (CHE) among middle-aged and older adults (45 years old and above) using the China Household Income Project 2007 rural data and an instrumental variable estimation method in two provinces where there was heterogeneity in NRCMI implementation schedule. The results show that NRCMI enrollment could not impact the likelihood of experiencing CHE among middle-aged and older adults. However, NRCMI participation increased the actual amount of medical expenses in one province but not in the other. Although none of the prior studies have used instruments and village fixed effects or take endogeneity issues into account to investigate the impact of NRCMI on relative financial burden among recipients, the results found in this study are generally aligned with the prior findings, especially with those using quasi-experimental studies. Findings from this study provide empirical evidence to the policymakers that the effect of NRCMI participation on financial protections is limited despite its broad population coverage. The limited effects are probably due to the low reimbursement rate and more utilization of expensive healthcare services."} +{"text": "In the last two decades, a change in paradigm has taken place in the management of salivary gland diseases. Hardly any type of pathological entity in this anatomical area has remained unaffected by the philosophy of individualised medicine, the general need for reducing therapeutic \u201cinvasiveness\u201d and the necessity of achieving a satisfactory postinterventional quality of life. The need for reducing surgical invasiveness and achieving a better quality of life for patients has led to using minimal invasive surgical modalities in benign parotid gland lesions, focussing away from the dissection of the facial nerve and concentrating on the capsular features of the tumour itself. With respect to pleomorphic adenomas of the parotid gland, understanding the histological characteristics of the tumour capsule and the potential clinical relevance of their biological behaviour is fundamental for the management of these entities. Undoubtedly, the different histological subtypes, the various degrees of capsular intactness and the formation of pseudopodia, as well as satellite nodules, constitute a demanding profile with apparent clinical-surgical implications . HistoriA thorough search of the salivary-gland-relevant literature reveals that research concerning the refinement of the surgical management of parotid tumours is undoubtedly advancing further. In this context, several reports refer to the value of facial nerve monitoring in both the intraoperative and the postoperative setting of parotidectomy. In everyday surgical routine, it is of major importance to investigate the correlation between a decrease in the electromyographic signal of the facial nerve and postoperative facial nerve function . InteresThe paradigm shift in the treatment of salivary gland pathologies does not only refer to the management of parotid tumours. 
Expanding the use of sonography with a variety of ultrasound-assisted techniques and also new sialendoscopic-controlled modalities have led to the development of a significant number of treatment forms in obstructive sialopathy, aiming at preserving the anatomy and function of major salivary glands and reducing postinterventional morbidity . It is iWhile surgery remains the first-line treatment for salivary gland cancer, a multitude of systemic therapies including chemotherapy, targeted therapy and immunotherapy are available for inoperable and distant metastatic disease. However, there are a significant number of different studies dealing with the most common histological subtypes of salivary gland cancer that have not yet been reviewed and evaluated. A thorough search of the relevant literature reveals that systemic treatment can achieve prolonged progression-free and overall survival with a satisfactory quality of life, while the overall prognosis remains rather poor. In the future, further studies with a larger patient cohort and ideally only one histological subtype are needed in order to improve the outcome for salivary gland cancer patients. Given that clinical research on salivary glands is undoubtedly in progress and definitively advancing, the present Special Issue entitled \u201cTreatment of salivary gland diseases: established knowledge, current challenges and new insights\u201d is intended to provide an overview of recent advances in the management of salivary gland diseases in an effort to provide new evidence regarding several surgical and non-surgical aspects of this topic."} +{"text": "Automatic crankshaft production lines require high reliability and accuracy stability for the oscillating grinding machine. Crankshaft contour error represent the most intuitive data in production field selective inspection. If the mapping relation between the contour error components of the crankshaft pin journal and the axis position control error of the oscillating grinding machine can be found, it would be great significance for the reliability maintenance of the oscillating grinding machine. Firstly, a contour error decomposition method based on ensemble empirical mode decomposition (EEMD) is proposed. Secondly, according to the contour generating principle of the pin journal by oscillating grinding, a calculation method to obtain the effect of the axis position control error of the oscillating grinder on the contour error of the pin journal is proposed. Finally, through the grinding experiments, the error data are acquired and measured to calculate and decompose the contour error by using the proposed methods for obtaining the mapping relation between the crankshaft pin journal contour error and the axis position control error. The conclusions show that the proposed calculation and decomposition methods can obtain the mapping relation between the contour error components of the crankshaft pin journal and the axis position control error of the oscillating grinding machine, which can be used to predict the key functional component performance of the machine tool from the oscillating grinding workpiece contour error. The crankshaft is a critical component of the automobile engine, and its machining quality directly affects engine performance and reliability. Therefore, crankshaft manufacturing plays a very significant role in the automotive industry. 
In the mass production of engine crankshafts, the pin chasing grinding technology based on the oscillating grinding machines has been widely implemented to meet machining accuracy and efficiency requirements ,2. DurinEngineering surface texture is considered as the \u201cfingerprint\u201d of the manufacturing process , where tThe schematic of an oscillating grinder and the error sources are illustrated in jth IMF component is indicated as jC, then the original signal X(t) can be described as,r is the residue of the signal and it is a monotonic function. In essence, EMD is an adaptive dyadic filter bank [The EMD, proposed by Huang et al. in 1998 , can decter bank which cater bank . In termC1 of the pin journal contour error is composed of the previous order component, the components with frequency less than 2 UPR, and the residual component. If the contour error is decomposed into M components, To overcome the mode mixing problem in EMD, EEMD was proposed by Wu and Huang . The decG is in the connecting line between the center of the pin journal C and the pin journals rotate around the center O of the main journal. The grinding wheel implements reciprocating chasing motion along the X axis and realizes the grinding of crankshaft pin journals.In the oscillating grinding process, the grinding wheel is always tangent with a crankshaft pin journal. The contour of crankshaft pin journal is produced by reciprocating motion of grinding wheel following rotational motion of crankshaft. Under ideal conditions, the contour of the crankshaft pin journal is a standard circle. The principle diagram of the crankshaft oscillating grinding motion is illustrated in The contour control point can be transformed as,According to the geometric relationship, Equation (4) can be achieved as,From Equations (3) and (4), Equation (5) can be obtained as follows,When a workpiece is ground, the theoretical grinding motion control equations can be derived from Equation (2),According to Equations (6) and (7), the CNC system of the machine tool controls the motion of C axis and X axis to machine the pin journals. Only if the practical motion positions of C axis and X axis accurately meet the requirements of the equations, the grinding result is an ideal circle. However, in the practical machining, the motion control of the C axis and X axis both have errors. Therefore, the contour of the ground pin journal is not a standard circle.To accurately obtain the grinding contour generation mechanism of the crankshaft pin journal and the relation between the machine tool position control information and the crankshaft pin journal contour information, the coordinate system is created in the pin journal, where the grinding process can be considered as the grinding wheel rotation around the pin journal. If the elastic deformation of the grinding wheel and the workpiece is neglected, the inner envelope of the grinding wheel trajectory is the grinding contour of the pin journal, as illustrated in According to the above principle and neglecting elastic deformation of the mechanical system, the grinding wheel and workpiece utilize the practical contour control point and (9), Equations (10) and (11) can be obtained,Let O, the practical pin journal center pO and the phase reference positive x axis direction can be inferred.From Equations (12) and (13), the contour information The mapping relation between the contour error components of the crankshaft pin journal and the axis position control error of the oscillating grinder is obtained. 
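To make the low-frequency discrimination step concrete, the following minimal Python sketch groups already-computed EEMD components of a measured pin-journal contour error record according to their dominant undulations-per-revolution (UPR) frequency, using the below-2-UPR criterion described above for the low-order component C1. It assumes the IMFs and the residue have been obtained beforehand from some EEMD implementation; the function names, the sampling convention and the synthetic check are illustrative assumptions rather than the authors' code.

```python
import numpy as np

def dominant_upr(component, samples_per_rev):
    """Dominant frequency of one component in undulations per revolution (UPR)."""
    spectrum = np.abs(np.fft.rfft(component))
    spectrum[0] = 0.0                             # ignore the mean (0 UPR) term
    k = int(np.argmax(spectrum))                  # index of the strongest harmonic
    revolutions = len(component) / samples_per_rev
    return k / revolutions

def split_contour_error(imfs, residue, samples_per_rev, upr_threshold=2.0):
    """Group EEMD components of a pin-journal contour error record.

    Components whose dominant frequency lies below upr_threshold UPR are summed
    with the residue into the low-order part C1 (eccentricity-like errors); the
    remaining IMFs are returned separately as higher-order waviness components.
    """
    low = np.asarray(residue, dtype=float).copy()
    high = []
    for imf in imfs:
        if dominant_upr(imf, samples_per_rev) < upr_threshold:
            low = low + imf
        else:
            high.append(imf)
    return low, high

# illustrative check: two synthetic components recorded over 4 revolutions
theta = np.linspace(0.0, 4 * 2 * np.pi, 4 * 360, endpoint=False)
eccentricity = 0.010 * np.sin(theta)              # 1 UPR  -> goes into C1
waviness = 0.002 * np.sin(15 * theta)             # 15 UPR -> higher-order group
c1, rest = split_contour_error([eccentricity, waviness], np.zeros_like(theta), 360)
print(len(rest), "higher-order component(s) separated from C1")
```

In practice, the ensemble size and added white-noise amplitude of the EEMD, the boundary periodic extension and the exact UPR threshold would follow the settings reported in the paper; the sketch only illustrates the grouping logic.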
Because there are other error sources that affect the contour error of the crankshaft pin journal, there are some minor differences in the magnitude of the contour error. However, the position control error of the C axis and X axis represent the main influencing factors.(2)A contour error decomposition method based on EEMD is proposed. Boundary periodic extension is applied to avoid the boundary effect and the low frequency component discrimination method is set up to effectively extract the low frequency components.(3)The mapping relation between the contour error components of the crankshaft pin journal and axis position control error of the oscillating grinder can be applied in predicting the key functional component performance of machine tools from oscillating grinding workpiece contour error."} +{"text": "The second paragraph of the Methods section detailing methodology for skeletochronology was included by mistake; the authors clarify that skeletochronology to infer age by related snout-vent length (SVL) was not used in this study. Specimen age was deduced using SVL measurement only.The erroneously included description of the skeletochronology is adapted from reference [24], which is cited in support of the snout-vent length methodology used to select adult lizards .Accurate ageing beyond selection of adults is not essential for the study design in .For mitogenome analyses, tissue samples from the terminal end of the tail were collected from live specimens. To minimize demographic impact, transcriptome analyses of brain tissue were carried out only on specimens that accidentally died during capture or manipulation .Specific sanitary protocols were employed to guarantee isolation of individuals of one population from the others to avoid the potential spread of pathologies and parasitosis. The research program for capture, manipulation, tissue sampling, temporary housing, and release of individuals at the site of capture was assessed by the Istituto Superiore per la Protezione e la Ricerca Ambientale (ISPRA) and by the Societas Herpetologica Italica and subsequently approved by the Ministry of the Environment and Protection of Land and Sea."} +{"text": "Engineers, scientists and mathematicians are greatly concerned about the thermal stability/instability of any physical system. Current contemplation discusses the role of the Soret and Dufour effects in hydro-magnetized Carreau\u2013Yasuda liquid passed over a permeable stretched surface. Several important effects were considered while modelling the thermal transport, including Joule heating, viscous dissipation, and heat generation/absorption. Mass transportation is presented in the presence of a chemical reaction. Different nanoparticle types were mixed in the Carreau\u2013Yasuda liquid in order to study thermal performance. Initially, governing laws were modelled in the form of PDEs. Suitable transformation was engaged for conversion into ODEs and then the resulting ODEs were handled via FEM (Finite Element Method). Grid independent analysis was performed to determine the effectiveness of the chosen methodology. Several important physical effects were explored by augmenting the values of the influential parameters. Heat and mass transfer rates were computed against different parameters and discussed in detail. The mechanism of transport phenomenon in different materials has received reasonable attention recently due to its wider applications in industry and different medical processes. 
Several important materials exist for the support of these mechanisms. Due to their different characteristics, these materials cannot be explained through one constitutive relation. Carreau\u2013Yasuda is one such important material which has the following constitute relation.For e et al. reportede et al. . In theie et al. presentee et al. via the e et al. through The involvement of nanoparticles enhanced the thermal performance and heat transportation rate. Several models of the nanomaterials are available and frequently used to study the thermal performance of different materials. Several researchers have paid attention to these materials due to their wider applications and usage. For instance, Gorla and Gireesha developeIn the above cited literature, no study deals with the combined behavior of the following: mass, heat transport in hydro-magnetized Carreau\u2013Yasuda material using Joule heating, viscous dissipation, heat generation, chemical reaction and the Soret and Dufour influences in the Darcy\u2013Forchheimer porous stretching sheet. This report fills the gap in this discussion and should be used as a foundation for researchers to work further on this model. The inclusion of nanoparticles in the Carreau\u2013Yasuda material is attractive to researchers. Organization of this research is divided in the following way: the literature survey is reported in An enhancement in the thermal and solute performance of Carreau\u2013Yasuda rheology, inserting the impact of nanoparticles and hybrid nanoparticles, is considered as shown in The non-linear PDEs are developed according to the physical happenings and the boundary layer approximations.The no-slip theory provides the required boundary conditions of the current model:Change in variables is constructed as:Transformations are used in Equations (1)\u2013(5) and the system of non-linear PDEs are converted to following ODEsHere, The temperature gradient because of the nano and hybrid nanoparticles isConcentration gradient at the surface of the melting sheet isThe local Reynolds number is \u27a2FEM has the ability to handle various complex geometries;\u27a2This numerical method is thought to be most significant in solving physical problems with wide ranges;\u27a2FEM requires less investment in the view of time and resources;\u27a2A main advantage of FEM is its handling of various types of boundary conditions and\u27a2It has a good ability with regards to the discretization (of derivative) problems into small elements;\u27a2The Working scheme of finite element method has been shown with the help of The finite element method is an effective method in the view of accuracy and the convergence of a problem compared with other numerical approaches. There are many advantages to FEM but some are discussed here:The numerical approach called finite element scheme is used to simulate the numerical results of highly non-linear PDEs and numerous applications of FEM are found in CFD problems. The FEM approach is explained in the following steps:Step I: The division of a problem domain into a finite number of elements and residuals. The weak form is captured from the strong form due to residuals. The approximation result is simulated using shape functions and the approximation simulations of the variables are:Here Step II: In this step, the matrices are stiffness, vector and boundary . 
The global stiffness (matrix) is obtained whereas the Picard (linearization approach) is utilized to obtain a linear system of equations that are defined as:Here Step III: The algebraic equations (non-linear) resulting from the assembly process are:Step IV: The computational domain is considered as Comparative analysis: The numerical result of the current problem is verified with published results [ results by the dhed work are presMechanisms of velocity, thermal energy and diffusion of mass influenced by chemical reaction are addressed over a stretched melting surface. Correlations between silicon dioxide and Molybdenum dioxide in EG (ethylene glycol) are used in the presence of the Carreau\u2013Yasuda liquid. Various kinds of influences are also addressed. As such, the complex transport phenomenon is simulated with the help of a numerical approach (FEM). The graphical computational investigations are captured in graphs and tables. The detailed outcomes are discussed below:Graphical investigations of velocity against distribution in various parameters: The change in Weissenberg, power law index, Forchheimer numbers and Carreau\u2013Yasuda variables are addressed in the motion of fluid particles considered in cles see . It is dGraphical investigations of heat energy against distribution in various parameters:f Df see . FurtherGraphical investigations of mass diffusion against distribution in various parameters:Mechanisms of gradient temperature, surface force and mass diffusion rate versus the distribution of various parameters: The computational analysis of surface force (skin friction coefficient), gradient temperature (Nusselt number) and rate of mass diffusion versus the variation of Convergence of the problem is ensured at 270 elements;The motion of nanoparticles and hybrid nanoparticles in ethylene glycol is boosted versus the enhancement in fluid variable, power law index number, Weissenberg number and Forchheimer porous number;Significant production of heat energy versus higher values of heat generation, Eckert, Dufour and Forchheimer porous numbers;The transportation of solute particles declines versus the large values of Schmidt and chemical reaction numbers, but solute particles accelerate against higher values of Soret number;Surface force is increased via large values of Weissenberg and Forchheimer porous numbers but surface force is decreased versus the large values of heat generation number andThe transport features in the rheology of Carreau\u2013Yasuda liquid and involvement of nanostructures and hybrid nanoparticles over a heated surface have been visualized. The Dufour and Soret effects under the action of a magnetic field have been addressed. Forchheimer porous media was also considered. The simulations of the current model were computed by finite element approach. The prime findings are captured below:The role of the Schmidt number is significant in the development of temperature and concentration gradient."} +{"text": "Specialized biological processes occur in different regions and organelles of the cell. Additionally, the function of proteins correlate greatly with their interactions and subcellular localization. Understanding the mechanism underlying the specialized functions of cellular structures therefore requires a detailed identification of proteins within spatially defined domains of the cell. Furthermore, the identification of interacting proteins is also crucial for the elucidation of the underlying mechanism of complex cellular processes. 
Mass spectrometry methods have been utilized systematically for the characterization of the proteome of isolated organelles and protein interactors purified through affinity pull-down or following crosslinking. However, the available methods of purification have limited these approaches, as it is difficult to derive intact organelles of high purity in many circumstances. Furthermore, contamination that leads to the identification of false positive is widespread even when purification is possible. Here, we present a highlight of the BioID proximity labeling approach which has been used to effectively characterize the proteomic composition of several cellular compartments. In addition, an observed limitation of this method based on proteomic spatiotemporal dynamics, was also discussed. Dear editor,We have read with great interest the recent publication by Go et al. ; a studyThe proximity labeling method has been developed to study the spatial compartmentalization of protein networks and their assembling pattern into functionally integrated complexes. This method involves the selective and covalent biotin tagging of neighbouring proteins in a living cell with the use of engineered enzymes. Isolation of the biotinylated proteins can then be carried out after cell lysis and mass spectrometry characterization. Proximity labeling has previously been applied in the mapping of different cell organelle components as well as in the identification of novel interactions with increased spatial specificity. These studies have shown that the proximity labeling approach is an efficient method for the dissection of molecular localization and interaction patterns with nanometer spatial resolution .There are two major categories of the proximity labeling methods, both of which are linked to the type of enzyme used for catalysis. The peroxidase-based proximity labeling depends on the expression of an engineered HRP (horseradish peroxidase) or APEX (ascorbate peroxidase) in the tissues or cells of interest. HRP can alternatively be targeted to specific antigens on the cell surface through antibody conjugation. To start labeling, hydrogen peroxide is always added to tissues or cells for 1\u2009min. The cells are then to be pre-loaded with biotin-phenol or its variants, such as desthiobiotin phenol, alkyne-phenol and BxxP. Biotin-phenol is oxidized by the peroxidase into phenoxyl radical which in turn reacts with neighboring proteins at their electron-rich side chains. Because the half-life of the phenoxyl radical is less than 1\u2009ms, the intensity of labeling falls dramatically within few nanometers from the active site of the peroxidase. This generates a biotinylation contour map which can be analyzed by quantitative proteomics to produce a list of proteins that are ranked on the basis of proximity to the enzyme .E. coli) is attached to a polypeptide of interest (regarded as bait) and this combination is expressed in organisms or cultured cells. The BirA* releases biotinoyl-AMP into its immediate environment and the released compound labels lysine residues between the range of 5\u201310\u2009nm to the bait protein. The biotinylation permits the use of harsh lysis conditions to enhance the solubility of proteins from intracellular compartments that are normally poorly soluble, such as the nuclear lamina, membranes or the chromatin. The biotinylated proteins are then trapped with streptavidin affinity, followed by mass spectrometry identification. 
Because the diameter of an average globular protein is 5\u201310\u2009nm, the radius of labeling for the BioID technique benefits the biotinylation of proteins that are localized to the immediate intracellular environment of the bait and its direct binding partners [In the biotin ligase-based (BioID) proximity labeling approach, BirA* which permits data viewing and searching regarding all profiled organelles, baits and prey proteins. Major features of the database includes its usage for the identification of queried bait-specific preys, the localization of a query bait to particular compartments of the cell based on similarity with the interactomes of other baits, and the identification of previously queried baits that have similar interactomes. Included among these features also is the ability to make comparison between the human cell map database and user BioID data. Although the BioID-predicted proteome compartmentalization appears to share high similarity with predictions made by fractionation and large-scale microscopy studies, the inability of the method to factor in the spatiotemporal dynamics of the human proteome in its predictions remains a limitation. To illustrate this, we compared the single-cell proteogenomic-based spatiotemporal prediction of the APPL1 localization from the human protein atlas (https://www.proteinatlas.org/) (Fig.\u00a0In an attempt to enhance their dataset exploration, the authors have designed the human cell map (In conclusion, the mass spectrometry in conjunction with biochemical fractionation and microscopy has been popularly applied in defining the proteomes of different cellular organelles, but most intracellular compartments of the cell have remained obstinate to such methods. Although, the specificity of the described BioID-based proximity labeling prediction for proteome compartmentalization exceeded those from previously reported approaches, limitations as a result of the spatiotemporal dynamics of the human proteome remains a factor to be considered in order to increase the specificity of predictions."} +{"text": "Iron is an element whose content in the human organism remains under strict control not only due to its involvement in many life processes but also because of its potential toxicity. The latest studies in iron metabolism, especially the involvement of hepcidin, which is the main regulator of iron homeostasis, broadened our knowledge in many medico/ fields . The present paper is a review of the literature devoted to the importance of hepcidin under selected conditions. Those discoveries opened the way for new diagnostic and therapeutic opportunities. Hence, studies on iron metabolism have become the focus of interest for scientists and doctors specializing in many fields.Only a few years ago the only tests for evaluating iron metabolism were blood iron, transferrin and ferritin levels and the iron binding capacity. Since 2001 iron homeostasis investigations have been expanded by new markers: hepcidin, ferroportin, hemojuvelin, erythroferrone. The genetic bases of iron metabolism have been determined Recent discoveries in the field of iron metabolism and the mechanisms of iron absorption and use demonstrated that hepcidin was the pivoting element of iron homeostasis . HepcidiThe association between the susceptibility to infections and the organisms iron resources is a subject of numerous studies. 
It is particularly important in those populations where the rate of iron deficient individuals reaches several dozen percent, and the danger of mass epidemic outbreaks is high . The WorThe level of hepcidin as an acute phase protein is increased in the course of pneumonia. The increase leads to the so-called anaemia of inflammation through the inhibition of erythropoiesis (reduced iron availability as a result of inhibiting absorption and liberation of the element). On the other hand, anaemia and tissue hypoxia activates erythropoiesis by the inhibition of hepcidin. The transient anaemia accompanying pneumonia seems to have a beneficial effect on the limitation of bacterial development. Michels et al. demonstrRespiratory infections are an important health problem in the paediatric population. Some reports regarding studies in disorders of iron metabolism in the course of infections of the respiratory system, particularly the lower respiratory tract, are available in the literature . The lowEscherichia coli compared to individuals reacting to bacterial infection with increased expression of hepcidin. The authors also demonstrated that bacteria were able to restrict the synthesis of hepcidin in the renal cells of infected animals. That was their protection mechanism [There are reports of attempts to use the assessment of iron metabolism parameters in the course of severe systemic infections in order to predict the course of the disease. It was observed that there is an unfavourable prognosis regarding survival in patients with sepsis whose blood iron levels were high. A possible application of iron chelating drugs was suggested in the therapy of critically ill patients . Houamelechanism .There are numerous reports regarding iron metabolism in chronic inflammations of various aetiology . The autOne of the inflammatory disorders for which the use of iron parameters for prognostic purposes is considered is Kawasaki disease. Particularly patients with a prolonged course of this disease are at a risk of anaemia. The hepcidin level was significantly higher in patients demonstrating resistance to immunoglobulin therapy, who were at a higher risk of developing lesions in coronary arteries . The auThe role of chronic inflammation associated with the synthesis of cytokines, chemokines and growth factors secreted by adipose tissue has been underlined in the pathogenesis of obesity for several years. Problems of iron metabolism in obese people, in the context of the variability of hepcidin concentration, are of interest due to the tendency to anaemia and an inferior efficacy of treatment with iron oral preparations in obese children . MorenoType 2 diabetes is currently regarded as a disease associated with iron overload. Increased hepcidin levels contribute to the development of insulin resistance and type 2 diabetes . HoweverDiscoveries of factors involved in iron metabolism, including hepcidin, opened the way to progress in developing knowledge about the diseases whose aetiology is primarily associated with the disruption of this metabolism. Patients with \u03b2-thalassemia of pathogenetic origin, and also patients requiring frequent blood transfusions for therapeutic reasons, demonstrate an excessive tissue storage of iron from decomposed RBCs. Increased iron absorption from the alimentary system is observed in such cases. In this group, hepcidin levels are lower compared to healthy individuals . SimilarThe liver is the principal place of hepcidin synthesis. 
Dysfunction of that organ in the course of chronic conditions, regardless the aetiology, leads to disturbed regulation of expression and subsequent reduced production of hepcidin . That, iKidneys are organs playing a significant role in iron metabolism. Plasma hepcidin is eliminated mostly through the kidneys \u2013 the compound is almost totally filtered in the renal glomeruli, but then re-uptaken and disrupted in the proximal tubules. As a result, only a low percent of hepcidin is eliminated in a non-altered form . In patiThe results of scientific research on the complex process of iron metabolism constitute the basis for pharmacological tests of their practical application. Attempts have been made to synthesize drugs that may be used in therapy by influencing iron metabolism. Studies on antagonists and agonists of hepcidin are underway. One of them is lexaptepid , an antaIron plays a principal role in the physiological function of the human organism. It has been known for a long time that its disturbed metabolism leads to such diseases as \u03b2-thalassemia and hemochromatosis. The current state of knowledge indicates that the complex mechanism of iron metabolism, with hepcidin playing the key role, is significantly correlated with the development of anaemia in the course of many diseases (inflammation-associated anaemia or anaemia of chronic diseases). The present review of the literature shows that not only is the pathogenesis of these disorders more fully understood, but some possibilities of targeted therapy are also emerging."} +{"text": "Genetic testing is associated with many ethical challenges on the individual, organizational and macro level of health care systems. The provision of genetic testing for rare diseases in particular requires a full understanding of the complexity and multiplicity of related ethical aspects. This systematic review presents a detailed overview of ethical aspects relevant to genetic testing for rare diseases as discussed in the literature. The electronic databases Pubmed, Science Direct and Web of Science were searched, resulting in 55 relevant publications. From the latter, a total of 93 different ethical aspects were identified. These ethical aspects were structured into three main categories and 20 subcategories highlighting the diversity and complexity of ethical aspects relevant to genetic testing for rare diseases. This review can serve as a starting point for the further in-depth investigation of particular ethical issues, the education of healthcare professionals regarding this matter and for informing international policy development on genetic testing for rare diseases. Geneticividuals . This noividuals .A precise molecular diagnosis is essential for the efficient handling of rare diseases in order to provide disease management and treatment options. In addition, it enables informed future family planning decisions and the formation of supportive networks of individuals and families affected by rare diseases . Early aAdvances in genetic testing, especially next generation sequencing technologies (NGS), have positively impacted the likelihood of obtaining a genetic diagnosis in a timely manner . HoweverThe rapid technological advancements in genetics and the lack of education in this field limit the ability of many nonspecialized physicians to partake in the much needed professional discussion of ethical issues in genetic testing . 
The widThis review is the first, to the best of the authors\u2019 knowledge, to present a profound overview of all ethical aspects of genetic testing for rare diseases as published in the literature. In systematizing ethical problems related to this field this review can assist researches in the field of genetics as well as clinicians and counsellors in enhancing the moral sensibility for issues pertinent to their professional practice. For example, this review systematically gives a list of ethical issues occurring at the micro-level of patient-provider-contact and enables a further in-depth literature analysis of moral problems relevant for the individual reader. It furthermore provides a systematic basis for the ethical education of not only healthcare professionals but also patients, their families and other relevant stakeholders. This review provides a systematic background for further empirical and normative investigations of ethical aspects and is meant as a comprehensive aid to health policy making.This article provides an overview of the full spectrum of ethical aspects in genetic testing for rare diseases based on a systematic review of the literature closely following the methodology used by Strech et al. . The repThe electronic databases Pubmed, Science Direct and Web of Science were searched . Publica\u201cEthical aspects\u201d were identified on the basis of the ethical theory of principlism, according to Beauchamp and Childress . This ap\u201cGenetic testing\u201d is defined as an laboratory examination aimed at detecting or ruling out the presence of hereditary illnesses or predisposition to such conditions in a person by directly or indirectly analyzing their genetic heritage .Up until the completion of this article, no definite set of criteria had been established on how to conduct a quality appraisal for reviews of ethical literature . ConsequNo restriction was applied to the type of publications included in this review. Therefore, not only original articles but also comments, editorials and book chapters were included. In order to display the full spectrum of ethical aspects relevant to the review question, not only argument-based but also empirical literature was included Process of testing: Ethical aspects concerning the procedure of genetic testing for rare diseases, the analysis of these tests and the delivery of the results to the patient and/or the family.2) Consequences of the test outcome: Ethical aspects that result from the knowledge of the test result or the decisions made following the disclosure and patient reactions to the test result.3) Contextual challenges: Ethical aspects that are associated with the circumstances and background of the tests, the diseases tested for and the test results.The following three main categories were established: process of testing encompasses 36 ethical aspects in nine subcategories which they did not consent to but if not disclosed might have an interest to know. This ambiguous situation needs to be extensively addressed and prepared for during counselling:\u201cEthical challenges are generated when information produced by the results may affect third parties, including family members not directly involved in the process.\u201d ontextual challenges includes 21 ethical aspects in four subcategories . Prior to the final search, an exploratory search of several databases was conducted and the three databases subsequently accessed were identified as delivering the highest number of relevant results. 
Additionally, the neglect of literature written in languages other than English or German limits this review.No quality appraisal for the included literature was performed due to the lack of quality assessment tools for systematic reviews of ethical literature . TherefoA lack of knowledge and comprehension of the fast-paced developments of genetic testing among professionals poses an obstacle to accessing comprehensive testing . Many phThis review found that not many physicians find themselves in a position where they feel knowledgeable enough to order and conduct genetic testing, especially for rare diseases . An effeHowever, this should, on the other hand, not deviate from the much needed extension of the education of physicians and other healthcare professionals to deliberately cover the advantages and disadvantages of genetic testing in the context of rare diseases, including not only medical subjects but also the ethical and legal issues presented in this review ."} +{"text": "Traditional symphony performances need to obtain a large amount of data in terms of effect evaluation to ensure the authenticity and stability of the data. In the process of processing the audience evaluation data, there are problems such as large calculation dimensions and low data relevance. Based on this, this article studies the audience evaluation model of teaching quality based on the multilayer perceptron genetic neural network algorithm for the data processing link in the evaluation of the symphony performance effect. Multilayer perceptrons are combined to collect data on the audience's evaluation information; genetic neural network algorithm is used for comprehensive analysis to realize multivariate analysis and objective evaluation of all vocal data of the symphony performance process and effects according to different characteristics and expressions of the audience evaluation. Changes are analyzed and evaluated accurately. The experimental results show that the performance evaluation model of symphony performance based on the multilayer perceptron genetic neural network algorithm can be quantitatively evaluated in real time and is at least higher in accuracy than the results obtained by the mainstream evaluation method of data postprocessing with optimized iterative algorithms as the core 23.1%, its scope of application is also wider, and it has important practical significance in real-time quantitative evaluation of the effect of symphony performance. At present, most of the research processes of orchestral performance effect evaluation are based on the traditional \u201cperformance type and melodic characteristics research and vocal appreciation\u201d type, supplemented by the \u201cperformance audience evaluation\u201d research . In receThe innovation of this paper is that the multilayer perceptron genetic neural network algorithm is used in the evaluation of symphony performance. On this basis, it can make full use of a large amount of symphony performance data and extract appropriate symphony performance information from it. Data quantification, information extraction, and analysis of the audience's facial expression changes, emotional performance, and so forth are carried out dynamically to achieve an overall closeness in the audience evaluation data. 
Compared with the current mainstream performance evaluation methods of data analysis and postprocessing, the symphony performance evaluation management and analysis model proposed by this research uses multitransformed neural network factors to quantitatively describe the quantitative characterization characteristics of different performance processes. The degree of agreement between the similarity of the dimensional vocal analysis model and the expected evaluation index is to complete the ranking of the influence on the performance melody with the quantitative index, which can efficiently analyze the characteristics of the factors that affect the performance of the symphony.This paper investigates the normalization scheme of audience evaluation and analysis of orchestral performance effects and is divided into five sections. It is known that it is relatively backward in symphony performance analysis model innovation and quality evaluation using computer techniques . At presIn summary, it can be seen that most of the current orchestral music in performance effect analysis models does not involve an intelligent evaluation method based on performance effect and audience evaluation. In the evaluation of the delay effect of symphony in mainstream scientific research, most of researches are unified collection of data in the performance link, followed by postprocessing and analysis of the data, but rarely realize the \u201cdata collection\u201d of the symphony performance link. Data analysis effect evaluation integrated real-time dynamic processing analysis. On the other hand, although the multilayer perceptual genetic neural network algorithm has been gradually applied in music evaluation, it has not been quickly and efficiently widely used in the stage of multianalysis and dynamic real-time processing of data. Therefore, it is important to study the orchestral performance effectiveness evaluation method based on multilayer perceptron genetic neural network algorithm.There are no more successful intelligent models applied in the world for studying optimization problems and simulation in the field of teaching quality assessment compared to genetic algorithms, particle swarm greedy algorithms, and local optimization neural network algorithms \u201326. The Based on this, in the orchestral performance effect analysis and evaluation model based on multilayer perceptron genetic neural network algorithm, the genetic neural network algorithm based on multilayer perceptron factors is firstly designed; that is, by combining neural network factors based on the level variability and audience evaluation effect in the process of orchestral performance, the quantitative evaluation of orchestral performance effect is realized. 
Then, the multilayer perceptron genetic neural network algorithm is used to precisely divide the information in a series of teaching processes expressed in the symphony performance analysis, so as to achieve a high degree of categorization of different qualities in the process of symphony performance analysis and melody research and to generate targets with a very high degree of synergistic correlation, which are pushed to the next stage of the process to be optimized to achieve a quantitative effect evaluation.In constructing the orchestral performance effect evaluation model and evaluation link based on the multilayer perceptron genetic neural network algorithm (GA-MLP-NN), the multilayer perceptron genetic neural network algorithm is used to categorize different types of symphonic music at different stages according to the similarity of vocal analysis patterns and the synergetic similarity of musical instruments, and then the melodic information in the analysis process of different symphonic performances is divided into secondary divisions, and, through the multilayer perceptron genetic neural network algorithm, the effect of symphonic performances of different degree types is selected for secondary division and update to ensure the stratification and update of effect analysis and management of different types of symphonic performances. Its data processing process is shown in The steps are shown below.In the first step, in the data analysis and processing link, the neural network data is classified according to the degree of association between the data and the difference of the modulus length after the data is vectorized, and a label is generated for each classified group for subsequent use. Supervised learning and then a neural network coding strategy are selected to transform the parameter set (feasible solution set) into a multilayer perceptron structure in a multilayer perceptron genetic neural network algorithm. In order to realize the process, this study combines a new multifactor coupling model-based orchestral performance process evaluation method, which uses the deformed coupling sequence to randomly displace common orchestral content and melody types, and then decouples it to achieve the optimal determination of different types of orchestral performance and performance process schemes and conducts simulation verification to evaluate this orchestral performance analysis quality evaluation scheme. It has a good objective evaluation capability. In this process, the simulation results of the relationship between the number of iterations of analysis and the coupled hierarchical perceptron are shown in From Q(x), dimensional function W(x), and hierarchical function E(x) used in this process are given below, and they are often applied to some fields like image processing, image recognition, and image classification [x is the original input data. The results of the simulation analysis between different numbers of perceptrons and the number of neuron nodes in this stage are shown in The expressions of the perceptual function fication \u201331:(1)QxQ\u2032(x), the true degree matching function R(x), and effect analysis function T(x) are expressed asx is the original input data. The results of the simulation analysis of the relationship between the analysis strategy and the analysis quality factor for different dimensions in this stage are shown in As can be seen in As can be seen in s=. 
Thus, the neural network node function isIn the second step, a neural network node function is defined. We take a sequence of symphony types that conform to the algorithm rules s= symphonic performance object. The results of performing layer 1, layer 2, layer 3, and layer 4 variants areA, B, C) is the different variation rule. m is the n-dimensional information.The basic implementation of this step is as follows: neuronal encoding of the individual We then perform the regular recursive or neighborhood operation in the multilayer perceptron genetic neural network algorithm, such as swapping the last three, and then the variation results under 1, 2, 3, and 4 layer variation factors at this time areThe results of the simulation analysis of the relationship between different levels of variables and the effect evaluation index factors in this stage are shown in As can be seen in In this orchestral performance effect evaluation model, in order to make different types of evaluation methods to maximize the evaluation level of different orchestral music types based on the existing melodic timbre and characteristic information status in the process of orchestral performance, the orchestral performance effect evaluation model is optimized by combining neural network algorithm and greedy optimization rules, and its optimization analysis process is shown in The specific optimization process is divided into two steps.First, the existing level information of common symphony types is first used to determine the appropriate recursion strategy according to the corresponding feature values and adaptations to achieve random generation of initialized neuron nodes. When the linguistic feature values of any two symphonies in the group are not the same, it means that the performance performative melodies of the two symphonies are extremely different, and automatic separation is achieved to calculate the number of symphonies in the group, perform the recursive postadaptation values, and compare them with the feature values and adaptations of the next target to be tested for analysis. When the multilayer perceptron genetic neural network algorithm is processed in deep mining for different symphonic performances, it generates different similarities in the types of symphonic melodies corresponding to the expression methods of the symphonic performances.Second, in the optimization process of this multilayer perceptron genetic neural network algorithm-based orchestral performance effect evaluation model, the multilayer perceptron genetic neural network algorithm, in the actual evaluation process of a specific orchestral performance, transforms the timbral characteristics level information of the target orchestral performance type into computer-recognizable data information (such as vector groups and matrices) through specific processing, and then uses it in this process; the relationship between the different numbers of pointer patterns and the degree of difference of symphonic performance effect is shown in As can be seen in Assembling a comprehensive evaluation model based on symphonic performance analysis and symphonic performative melody analysis enables a more accurate assessment and analysis of the differences and characteristics of different types of symphonic music. 
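To make the encoding and neighborhood (variation) step described above concrete, a minimal sketch is given below. The list-based encoding, the "swap the last three elements" neighborhood move and the per-layer variation pass are illustrative assumptions; the text does not specify the exact representation or perturbation used.

import random

def neighborhood_swap(candidate):
    # Return a copy of the encoded candidate with its last three genes reversed,
    # the kind of simple neighborhood move referred to in the text.
    new = list(candidate)
    new[-3:] = reversed(new[-3:])
    return new

def layered_variation(candidate, n_layers=4, rate=0.2, rng=random):
    # Apply one variation pass per "layer variation factor"; each pass perturbs
    # genes with probability `rate` (hypothetical choice) by a small Gaussian step.
    new = list(candidate)
    for _ in range(n_layers):
        for i in range(len(new)):
            if rng.random() < rate:
                new[i] += rng.gauss(0.0, 0.1)
    return new

# toy usage: a candidate encoding, e.g. normalized audience-evaluation weights
candidate = [0.3, 0.8, 0.5, 0.1, 0.9, 0.4]
print(neighborhood_swap(candidate))
print(layered_variation(candidate))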
Therefore, before conducting the experiment, a consistency assumption of noncore factors is required in order to reduce the interference of random factors.In the experimental design link, this evaluation model can carry out feature monitoring when obtaining feature information of different orchestral performances, then realize feature information extraction from the monitored data, carry out further study through the analysis process of melodies and evaluation carriers in the process of multiple orchestral performances, and, finally, through data analysis of dynamic detection and experimental process, compare with the feature values of the standardized analysis model to realize in evaluation in terms of core indicators. The format of the input data in this experimental process and the previous simulation analysis process is the data matrix filled by the quantitatively evaluated data vector group, which uses the function of how to extract the input data. The dimensions of the data matrix are dynamically adjusted . The input layer, hidden layer, and output layer have an upper limit of 100 nodes. In addition, the data intervals in terms of crossover rate and data mutation rate are both 0.2\u20131.0.In the evaluation session, the stored information about the characteristics of the symphony in terms of language and melody will be stored for further comprehensive evaluation of the quality of different vocal music analysis modes at a later stage, so as to achieve an intelligent analysis of the quality of symphony performance analysis and thus an objective evaluation under the innovative mode of symphony performance analysis based on the multilayer perceptron genetic neural network algorithm.According to the multiple influential indicators, it is possible to achieve the optimization and high-quality development of each vocal music analysis model scheme, to quickly enhance the data recording for each symphony type, and thus to achieve intelligent recording for different students. This orchestral performance evaluation model analyzes four indicators for the first group of objects and the second group of objects, and the experimental analysis results of the deep recursive processing of neural networks at different levels are shown in As can be seen in The comparison results of the objective indexes for the evaluation of orchestral performance effects combining multilayer perceptron genetic neural network algorithm and intelligent fuzzy evaluation are shown in The analysis of the error degree of the results during the experiment under the multilayer perceptron genetic neural network algorithm is shown in From Therefore, combining the results of In order to better realize the quantitative evaluation enhancement of symphony performance effect, the innovation of the current symphony performance effect evaluation and analysis model in China is imperative. Based on this, this paper uses genetic neural network algorithm based on multilayer perceptron and fuzzy evaluation model and firstly selects three characteristic parameters related to symphony performance-related analysis and proposes an evaluation system based on the characteristic parameters of symphony performance effect. This orchestral performance effect evaluation model is evaluated from multiple perspectives by studying the type of orchestral performance, vocal playing ability, vocal coordination ability, audience evaluation effect, and timbre feature extraction analysis in the analysis process. 
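A minimal sketch of the kind of genetic search over multilayer perceptron configurations described here is shown below. Only the stated bounds are taken from the text (at most 100 nodes per layer, crossover and mutation rates in the interval 0.2-1.0); the two-hidden-layer genome, the cross-validated accuracy fitness and the selection scheme are assumptions for illustration, using scikit-learn's MLPClassifier rather than the authors' own implementation.

import random
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

MAX_NODES = 100        # stated upper limit for input, hidden and output layers
CROSSOVER_RATE = 0.6   # assumed value inside the stated 0.2-1.0 interval
MUTATION_RATE = 0.4    # assumed value inside the stated 0.2-1.0 interval

def random_genome(rng=random):
    # genome = sizes of two hidden layers, each capped at MAX_NODES
    return [rng.randint(1, MAX_NODES), rng.randint(1, MAX_NODES)]

def fitness(genome, X, y):
    # assumed fitness: mean 3-fold cross-validated accuracy of the decoded MLP
    model = MLPClassifier(hidden_layer_sizes=tuple(genome), max_iter=500)
    return cross_val_score(model, X, y, cv=3).mean()

def crossover(a, b, rng=random):
    return [a[0], b[1]] if rng.random() < CROSSOVER_RATE else list(a)

def mutate(genome, rng=random):
    return [min(MAX_NODES, max(1, g + rng.randint(-10, 10)))
            if rng.random() < MUTATION_RATE else g for g in genome]

def evolve(X, y, pop_size=10, generations=5):
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=lambda g: fitness(g, X, y), reverse=True)
        parents = ranked[: pop_size // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=lambda g: fitness(g, X, y))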
The final experimental results show that the use of multilayer perceptron genetic neural network algorithm to characterize the effectiveness of the innovation model can achieve comprehensive evaluation. However, this study did not evaluate the effectiveness of specific types of orchestral performances in a targeted manner, so in-depth research can be conducted on the scope of application and customized analysis."} +{"text": "In the original article, there were errors in the text.Discussion, Future Direction. The corrected paragraph is shown below:A correction has been made to Several randomized control trials investigating the efficacy, safety, and utility of MMA embolization for cSDHs are underway . Additionally, various embolisates for MMA embolization are currently being studied. The SQUID Trial for the Embolization of the Middle Meningeal Artery for Treatment of Chronic Subdural Hematoma (STEM) is a randomized control trial that is investigating the safety and efficacy of SQUID for the management of cSDHS (61). Another embolisate currently being analyzed is Onyx, which is being evaluated in the Embolization of the Middle Meningeal Artery with ONYX Liquid Embolic System for Subacute and Chronic Subdural Hematoma (EMBOLISE) (62). Both of these trials are comparing medical management alone to MMA embolization, and surgical treatment with embolization to surgical treatment alone. Because the literature on MMA embolization of cSDHs includes a large number of patients who also received surgical intervention, randomized control trials will need to be conducted in a manner to also elucidate the appropriate patient selection for either MMA embolization alone or in combination with surgical intervention.The authors apologize for this error and state that this does not change the scientific conclusions of the article in any way. The original article has been updated."} +{"text": "The use of principal component analysis (PCA) for soil heavy metals characterization provides useful information for decision making and policies regarding the potential sources of soil contamination. However, the concentration of heavy metal pollutants is spatially heterogeneous. Accounting for such spatial heterogeneity in soil heavy metal pollutants will improve our understanding with respect to the distribution of the most influential soil heavy metal pollutants. In this study, geographically weighted principal component analysis (GWPCA) was used to describe the spatial heterogeneity and connectivity of soil heavy metals in Kumasi, Ghana. The results from the conventional PCA revealed that three principal components cumulatively accounted for 86% of the total variation in the soil heavy metals in the study area. These components were largely dominated by Fe and Zn. The results from the GWPCA showed that the soil heavy metals are spatially heterogeneous and that the use of PCA disregards this considerable variation. This spatial heterogeneity was confirmed by the spatial maps constructed from the geographically weighted correlations among the variables. After accounting for the spatial heterogeneity, the proportion of variance explained by the three geographically weighted principal components ranged between 85% and 89%. The first three identified GWPC were largely dominated by Fe, Zn and As, respectively. The location of the study area where these variables are dominated provides information for remediation. 
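The conventional PCA step summarized in the preceding abstract (standardize the concentrations, extract components, report the cumulative explained variance and the dominant loadings) can be sketched as follows. The file name and any metals beyond Fe, Zn and As are placeholders; the three-component, roughly 86% outcome is the result reported in the text, not something the sketch guarantees.

import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# hypothetical input: one row per sampling point, one column per heavy metal
df = pd.read_csv("soil_heavy_metals.csv")       # placeholder file name
metals = ["Fe", "Zn", "As", "Cu", "Pb", "Cd"]   # columns beyond Fe/Zn/As are assumed

X = StandardScaler().fit_transform(df[metals])  # standardize concentrations
pca = PCA().fit(X)

cum_var = pca.explained_variance_ratio_.cumsum()
print("Cumulative proportion of variance:", cum_var)
# In the study, the first three components accounted for about 86% of the variance.
loadings = pd.DataFrame(pca.components_.T, index=metals)
print(loadings.iloc[:, :3])                     # loadings of the first three components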
Keywords: Soil pollution; Heavy metals; Principal component analysis; Spatial heterogeneity; Geographically weighted principal component analysis. The sources of these metals in soils may be natural components or due to anthropogenic activities. Because of the different sources of contamination, an assessment of soil heavy metal pollution may be difficult. The PCA has been widely applied for soil heavy metal pollution characterization and offers considerable benefits for understanding multiple sources of soil pollution. To account for spatial heterogeneity in soil heavy metal data, the geographically weighted principal component analysis (GWPCA) was used. The use of the GWPCA provides additional information which is obscured in the conventional PCA. The study area is located in Kumasi, Ghana, around longitude 1.60 West. The data consist of n observations on p variables, with n representing the number of observations and p the number of individual variables measured at each sampling point. In the GWPCA, each observation is weighted according to its distance from the location of interest; the weighting function, taken here in the bi-square form commonly used in geographically weighted analyses, is defined as (2) wij = [1 - (dij/b)^2]^2 for dij <= b and wij = 0 otherwise, where dij is the distance between locations i and j and b is the bandwidth. The bandwidth and the number of local components to retain have to be chosen before the local analysis is carried out. The outputs from GWPCA are extensive and provide detailed information which would be obscured in the conventional PCA. For example, a spatial map of the winning variable, which corresponds to the variable with the largest absolute loading in each GWPC, can be used to describe the dominant variable in each component and where it is located. The spatial map of the cumulative proportion of total variance (PTV) gives information on where the PTV is highly concentrated. In addition, the spatial map of the GW correlation among the variables can be used to describe the interaction of the variables at distinct spatial locations. Similar to the conventional PCA, the geographically weighted correlation coefficient between two variables at a given location is defined as their geographically weighted covariance divided by the product of their geographically weighted standard deviations, all computed with the same kernel weights. All computations were carried out in R version 4.0.2. The summary statistics of the heavy metal concentrations at the sampling points and the results of the conventional PCA are presented first. Following the conventional PCA, the GW correlation was conducted to investigate the local relationship between the three most correlated variables and to determine whether such a relationship is heterogeneous in the study region. Since three PCs accounted for 86% of the total variation in the data, as observed in the conventional PCA, it was reasonable to retain these same three components for the GWPCA. The spatial distribution of the heavy metals dominating each GWPC (the winning variables) was then mapped. The characterization of heavy metal pollution in soil has become increasingly important for understanding the sources of pollution and contamination. The present study explored the use of GWPCA, as opposed to the conventional PCA, to account for spatial heterogeneity and connectivity in soil heavy metals in Kumasi, Ghana. The results from the GWPCA confirm the existence of local relationships among the soil heavy metals in the study area. In addition, the distribution of these variables was found to be spatially heterogeneous across the study area. The identified GWPCs and the spatial distribution of their influential heavy metals can inform the future trajectory of soil pollution and contamination research, allowing for the identification of the different locations in which pollution is likely to manifest. The results also have policy implications for alleviating pollution in the affected areas. 
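The locally weighted analysis described above can be illustrated with a short sketch: bi-square weights are computed for every sampling point relative to a location of interest and a weighted PCA is solved there. This is a simplified illustration (fixed bandwidth, direct eigen-decomposition of the weighted covariance matrix of standardized data); in practice, packages such as GWmodel in R provide ready-made GWPCA implementations with bandwidth selection, and the coordinate arrays and bandwidth below are assumptions.

import numpy as np

def bisquare_weights(coords, point, bandwidth):
    # Bi-square kernel weights of all sampling points relative to `point`
    # (equation (2)): w = (1 - (d/b)^2)^2 for d <= b, else 0.
    d = np.linalg.norm(coords - point, axis=1)
    return np.where(d <= bandwidth, (1.0 - (d / bandwidth) ** 2) ** 2, 0.0)

def local_pca(X, coords, point, bandwidth, n_components=3):
    # Weighted PCA at one location: eigen-decomposition of the
    # geographically weighted covariance matrix of standardized data X.
    w = bisquare_weights(coords, point, bandwidth)
    w = w / w.sum()
    Xc = X - np.average(X, axis=0, weights=w)
    cov = (Xc * w[:, None]).T @ Xc              # GW covariance matrix
    eigval, eigvec = np.linalg.eigh(cov)
    order = np.argsort(eigval)[::-1][:n_components]
    ptv = eigval[order].sum() / eigval.sum()    # local proportion of total variance
    return eigvec[:, order], ptv

# usage sketch: loop over all sampling points, record the "winning variable"
# (largest absolute loading on the first local component) and the local PTV,
# then map both quantities across the study area.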
The use of GWPCA has improved the results of the conventional PCA by providing information on the spatial distribution of the percentage of variance explained in the data and also the distribution of the most influential metals in each identified principal components. Such information gives better understanding of the relationship between the different heavy metals and their distribution across the study area.Eric N. Aidoo: Conceived and designed the experiment; Performed the experiment; Analyzed and interpreted the data; Wrote the paper.Simon K. Appiah: Conceived and designed the experiment; Analyzed and interpreted the data.Gaston E. Awashie: Analyzed and interpreted the data; Wrote the paper.Alexander Boateng: Conceived and designed the experiment; Wrote the paper.Godfred Darko: Contributed reagents, materials, analysis tools or data.This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.The authors do not have permission to share data.The authors declare no conflict of interest.No additional information is available for this paper."} +{"text": "Earlier research suggests that educational attainment up to early adulthood are crucial for the development of cognitive reserve, while intellectually stimulating activities later in the life course are of limited impact. We sought to explore the effects of educational attainment and occupational factors (occupation type and currently having work) across the distribution of cognitive performance for adults aged 45-65 years in South Korea. We analysed scores from the Korean Mini Mental State Exam (MMSE) provided in the 2006 wave of the Korean Longitudinal Study of Aging. We used quantile regressions to both investigate relationships across the distribution and to reduce bias for measures of the central tendency as the MMSE is known for its ceiling effects. The quantile function at the lowest conditional decile of MMSE scores suggested that education level was the dominant significant factor for adult performance on the MMSE . All occupational factors were non-significant. Further factors with a significant association with the MMSE were hearing loss, the log-transformed household income, and age squared. With the conditional median function, occupational factors became significant in the middle of the distribution but remained much less important than education levels. In summary, educational levels were more important to explain variation in cognitive functioning than occupational factors, echoing studies with Western samples. We discuss the findings with regard to the historically gender unequal educational and occupational opportunities in Korea."} +{"text": "The neuromuscular junction (NMJ) is a tripartite synapse in which not only presynaptic and post-synaptic cells participate in synaptic transmission, but also terminal Schwann cells (TSC). Acetylcholine (ACh) is the neurotransmitter that mediates the signal between the motor neuron and the muscle but also between the motor neuron and TSC. ACh action is terminated by acetylcholinesterase (AChE), anchored by collagen Q (ColQ) in the basal lamina of NMJs. AChE is also anchored by a proline-rich membrane anchor (PRiMA) to the surface of the nerve terminal. Butyrylcholinesterase (BChE), a second cholinesterase, is abundant on TSC and anchored by PRiMA to its plasma membrane. Genetic studies in mice have revealed different regulations of synaptic transmission that depend on ACh spillover. 
One of the strongest is a depression of ACh release that depends on the activation of \u03b17 nicotinic acetylcholine receptors (nAChR). Partial AChE deficiency has been described in many pathologies or during treatment with cholinesterase inhibitors. In addition to changing the activation of muscle nAChR, AChE deficiency results in an ACh spillover that changes TSC signaling. In this mini-review, we will first briefly outline the organization of the NMJ. This will be followed by a look at the role of TSC in synaptic transmission. Finally, we will review the pathological conditions where there is evidence of decreased AChE activity. The overall organization of the neuromuscular junction (NMJ) contains three partners : (1) a nvia a collagen-like tail [collagen Q (ColQ)]. It should be noted that AChE anchored by ColQ is an enzyme that limits not only the lifetime of ACh in the synaptic cleft but also the spillover of ACh outside of the synaptic cleft.Synaptic transmission results in several sequential steps: (1) depolarization of the motor neuron membrane triggers the simultaneous release of dozens of vesicles filled with acetylcholine (Ach) and ATP; (2) synchronous activation of nicotinic acetylcholine receptors (nAChRs) clustered on the crest of the post-synaptic membrane that depolarizes the membrane and triggers an action potential; (3) termination of the action of ACh by hydrolysis with acetylcholinesterase (AChE) localized in the synaptic cleft and anchored in basal lamina In mammals, AChE and butyrylcholinesterase (BChE) are enzymes that hydrolyze ACh and have similar molecular forms , both coThe PRiMA/AChE that is presented at the NMJ can be produced by the motor neuron because in ColQ KO mice, which have only PRiMA/AChE, very fine staining of AChE was detected at the plasma membranes of the motor neuron but not at the surface of the muscle . HoweverButyrylcholinesterase (BChE) is also detectable at the NMJ , more spThese enzymes (AChE and BChE), localized and clustered in different compartments of the NMJ, efficiently eliminate ACh and thus prevent ACh spillover and limit the action of ACh to a single shot to the post-synaptic receptors.+ ions. When an action potential is generated, the accumulation of K+ ions released by nerve and muscle cells into the extracellular environment can lead to the depolarization of the cell membrane and consequent inactivation of Nav1.4 sodium channels. Thus, the timely removal by TSC K+ ions prevents the development of muscle fatigue . The short-term exogenous application of neurotrophin-3 or BDNF in newborn animals leads to an increase in the level of intracellular Ca2+ ions in the TSCs, which correlates with an increase in the release of ACh through the activation of presynaptic receptors of the tropomyosin receptor kinase (Trk) into TSCs reduces the induced release of the ACh, while the microinjection of GDP\u03b2S reduces the synaptic depression caused by high-frequency stimulation (In addition to controlling the concentration of extracellular Kse (Trk) . The conmulation .It has been shown that TSCs can influence ACh release from the motor nerve . Since a2+ in TSC has been described (In response to the release of ACh, the release of Caescribed . 
HoweverIt is important to note that in general, the effect of the activation of mAChRs on ACh release in mammalian NMJs has been studied mainly in the presence of exogenous agonists , 2015 orThese mechanisms, demonstrated experimentally in the context of mature NMJ, are observable only after the inhibition of AChE and BChE. The depression of ACh release triggered by ACh spillover could be activated in physiological contexts where AChE density is significantly decreased. For example, during motor unit remodeling, some neuronal terminals are no longer connected to the muscle fiber that accumulates AChE and thus ACh spillover can be detected by TSC and thereby limit ACh loss. These mechanisms are useful during tissue remodeling and may accentuate pathological alterations in NMJ when the AChE level is reduced.To date, over 60 mutations have been identified in the human ColQ gene, all of which lead to a congenital myasthenic syndrome with endplate AChE deficiency . ColQ isThe C-terminal domain of ColQ interacts with the receptor tyrosine kinase MuSK, master organizer of NMJs , and couIn addition to these post-synaptic mechanisms, the probability of ACh release is changed in ColQ KO mice . During Around 5\u201315% of patients with myasthenia gravis carry autoantibodies directed against MuSK . It was The endplate AChE deficiency has been described in the mouse model of Schwartz-Jampel syndrome. It results from mutations of the gene encoding perlecan .It was shown that AChE deficiency contributes to NMJ dysfunction in a mouse model of type 1 diabetic neuropathy . DiabetiIn this mini-review, we have analyzed the function of cholinesterases in tripartite NMJ. In this light, AChE deficiency or inhibition results in an ACh spillover that can be detected by the TSC. Therefore, a better understanding of the molecular and cellular mechanisms of these communications could lead to the discovery of new possibilities for the development of therapeutic methods. On the other hand, the further study of pathological conditions associated with AChE deficiency could help to better understand the contribution of TSC to neuromuscular synaptic transmission.All authors contributed equally to the writing and editing of the manuscript.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher."} +{"text": "The results of strain measuring experiments, with the help of rosettes consisting of fiber Bragg grating sensors (FBG) embedded at the manufacturing stage in a polymer composite material are considered in this paper. The samples were made by the direct pressing method from fiberglass prepregs. A cross-shaped sample was tested under loading conditions corresponding to a complex stress state. A variant of strain calculations based on experimental data is discussed. The calculations were performed under the assumption of a uniaxial stress state in an optical fiber embedded in the material. 
The obtained results provide a reasonable explanation of the absence in the conducted experiment of two peaks in the reflected optical spectrum, the presence of which follows from the known theoretical principles. The experimental result with two peaks in the reflected optical spectrum was obtained for the same sample under a different loading scheme. The proposed variant of the numerical model of the experiment and the results of numerical simulation made for FBG rosettes embedded in the material allowed to estimate error in the strain values calculated on the assumption of the uniaxial stress state in the optical fiber and in the presence of two peaks in the reflected optical spectrum. Fiber-optic strain sensors (FOSS) based on fiber Bragg gratings (FBGs) are currently considered one of the most promising sensitive elements for strain measurement . These eThe general solution to the problem of strain measurement by various sensors, including fiber-optic sensors, is associated with obtaining information about all components of the strain tensor. One way to solve this problem is to use special sensor layouts called rosettes ,4,5,6,7.Various works are devoted to the description of the strain measurement by rosettes made from Bragg grating sensors. One of the first is the work by , which pA number of works focus on studying the problems concerning the use of rosettes from FBG sensors and new possibilities of their practical application. In , FBG rosA number of works discuss technologies for creating sensor networks, including those with the implementation of FBG-based rosettes. An example of such studies is in work , which pOne of the key problems associated with application of FBGs embedded in the material is the evaluation of strains based on information about the registered physical quantity. The known relations establish the direct relationship between the measured strains and the readings of sensors only in the case of a uniaxial stress state in the location of the Bragg grating. In review paper , which eThe authors of describeIn the present work, experimental results on the strain measurement by rosettes made from FBG sensors embedded in a polymer composite sample under a complex stress state are presented. The schemes and results of numerical simulation that interpret the experiment are considered. A variant of estimating error limits of strain, obtained using assumptions that allow calculating the strain values on the basis of measured physical quantities, is considered.A cross-shaped PCM sample was used in the experiment on strain measurements with the embedded fiber Bragg grating sensors arranged in a rosette configuration . A complThe selected shape of the sample, under the condition of elastic material behavior in the range of the tested loads, makes it possible to obtain experimental results for the complex stress state when tested on uniaxial tensile machines. This case was used in the present work. The equivalent of biaxial tension by forces PCM samples were manufactured by direct pressing method from 20 layers of fiberglass prepreg with the following mechanical properties: tensile modulus of elasticity along the warp direction A rosette consisting of three fiber Bragg grating sensors was embedded into the PCM during the technological process of material formation at the stage of stacking the prepreg layers on the molding tool. The sensor rosette was placed between the second and third layers of prepreg. 
The single-mode bend-insensitive germanosilicate optical fibers of the SM1500(9/125)P series with photosensitivity were used; the elastic modulus and Poisson's ratio of the silica glass fiber were taken into account in the calculations, and the Bragg gratings were written in the fiber core by the phase mask method. The main property of a Bragg grating consists in reflecting part of a broadband optical signal transmitted through an optical fiber. The value of the strains measured with the FBG sensors is obtained by processing information about the resonant wavelength of the reflected signal recorded by the interrogator. The resonant wavelength of the reflected signal is determined by the effective refractive index of the fiber core and the period of the grating. The interaction of the fiber-optic sensor with a deformable material causes the length of the Bragg grating to change, leading to a change in the resonant wavelength of the reflected signal. The relationship between the change in the resonant wavelength of the reflected spectrum and the strains of the fiber in the Bragg grating zone is determined by the relations (2) \u0394\u03bb1/\u03bb* = \u03b53 - (n^2/2)[p11\u03b51 + p12(\u03b52 + \u03b53)] and \u0394\u03bb2/\u03bb* = \u03b53 - (n^2/2)[p11\u03b52 + p12(\u03b51 + \u03b53)], where \u03b53 is the strain along the fiber axis, \u03b51 and \u03b52 are the transverse strains, n is the effective refractive index and p11, p12 are the strain-optic (Pockels) coefficients of the fiber material. In the uniaxial stress state, the strains in an optical fiber that does not interact with the environment are \u03b51 = \u03b52 = -\u03bd\u03b53, where \u03bd is Poisson's ratio of the fiber, and relation (2) reduces to (4) \u0394\u03bb/\u03bb* = {1 - (n^2/2)[p12 - \u03bd(p11 + p12)]}\u03b53. For the optical fibers used, the strain-optic coefficients take the values typical of fused silica. Relations (2) and (4) show that an unambiguous relationship between the experimental data on the change of the resonant wavelength of the reflected signal and the component of the strain tensor in the fiber along its length takes place only in the case of a uniaxial stress state in the region of the Bragg grating. In the general case, a complex stress state with three different components of the strain tensor arises in an embedded fiber, and relation (2) shows that for fiber Bragg gratings embedded into the material in a complex stress state there will be two resonant wavelengths in the reflected spectrum. The tension is applied to the studied sample in direction 1 by gripping zones 1 and in direction 2 by gripping zones 2; the loading directions are referred to the x, y, z axes of the sample. The absence of two peaks in the reflected spectrum can be explained by the results of the following numerical experiments. Consider a PCM element with an embedded optical fiber loaded either along or perpendicular to the fiber. The presented results demonstrate that under loads acting along the optical fiber, in contrast to the load acting perpendicular to the fiber, a stress state close to uniaxial takes place in the zone of the Bragg grating; in this case relation (2) predicts that the two resonant wavelengths practically coincide. Known theoretical and experimental results have shown that embedding of an optical fiber in a PCM can produce a technological defect called a resin pocket. For the refined PCM model with an embedded optical fiber and a region filled with matrix, the strains in the Bragg grating zone were calculated together with the corresponding theoretical values of the resonant wavelength. The obtained results show how, in the case of loads applied along the optical fiber, the calculated value of \u03bb* and its difference from the value registered in the experiment depend on the applied load. To obtain the strain values based on relation (4) for the sample subjected to tensile loading in directions 1 and 2, the maximum values of the resonant wavelengths in the reflected spectrum were used.
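In the uniaxial case, relation (4) reduces to a linear conversion between the relative wavelength shift and the axial strain in the grating. The sketch below applies this conversion to a measured peak wavelength; the numerical values of n, p11, p12 and the fiber Poisson ratio are typical handbook values for silica fibers and are assumptions, not necessarily the exact constants used in the paper.

def axial_strain_from_shift(lam_peak, lam_ref, n=1.45, p11=0.113, p12=0.252, nu_f=0.16):
    # Axial strain in the Bragg grating from the measured resonant wavelength,
    # assuming a uniaxial stress state in the fiber (relation (4)).
    # effective strain-optic factor: 1 - (n^2/2) * [p12 - nu_f * (p11 + p12)]
    k = 1.0 - 0.5 * n ** 2 * (p12 - nu_f * (p11 + p12))
    return (lam_peak - lam_ref) / (lam_ref * k)

# example: a 1.2 nm shift of a 1550 nm grating corresponds to roughly 1000 microstrain
eps = axial_strain_from_shift(1551.2e-9, 1550.0e-9)
print(f"{eps * 1e6:.0f} microstrain")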
For the loading scheme shown in For the sample under consideration, and equivalent loads applied in directions 1 and 2, the readings of sensors The obtained close values for strains This relation shows that the value of the angle These results demonstrate that when using rosettes embedded into the material, small errors in the information about the orientation angle of the sensors lead to an additional error in the determined strains.Relation (2), which determines the relationship between the strain tensor components in a fiber Bragg grating and the characteristics of the reflected spectrum, generally demonstrate the presence of two peaks in the spectrum. In the case of a uniaxial stress state in an optical fiber, relation (4) determines an unambiguous relation between the strain in the optical fiber and one resonant wavelength in the reflected optical spectrum. The assumption of a uniaxial stress state, which is used for strain calculations in an embedded optical fiber and the presence of two peaks in the reflected spectrum may cause a problem with their choice for relation (4).Numerical experiments can provide additional information on the error limits, in the case of using in relation (4) the information about each of the peaks in the reflected optical spectrum. A plate with an embedded rosette a is consThe results of the numerical experiments are the values of the characteristics of the reflected optical spectrum The values Analysis of the results of numerical experiments showed the fulfillment of one of the main properties of the rosettes, namely, the strain values Comparison of the strain values From the presented results, it can be seen that, according to (6), the error of the calculated values of strains is much less when Experimental and theoretical results on the strain measurement by FBG-based rosettes embedded into PCM are presented. The rosette was embedded into the material at the technological stage of its manufacturing by the method of direct pressing of fiberglass prepregs. A cross-shaped sample was loaded by forces applied in its plane in different directions. The strains were calculated based on the measurements of physical quantities of the reflected optical spectrum under the assumption of a uniaxial stress state in an optical fiber embedded in the material. The reliability of the presented results of strain measurements with FBG sensors is confirmed by satisfactory agreement with the results obtained by the VIC-3D optical system.In contrast to the theoretical results predicting the existence of two peaks in the reflected optical spectrum, the spectra obtained in the experiments only have one peak. To clarify this contradiction, the results of numerical simulations are presented, which qualitatively and quantitatively explain the difference between the theoretical and experimentally obtained values of resonant wavelengths of the reflected optical spectra. On the considered sample under loads acting perpendicular to the plane of the sample, a variant with two peaks in the reflected optical spectrum was experimentally obtained. The analysis of the obtained experimental results using numerical simulation methods shows that the error in the information about the angles of orientation of the sensors has a significant effect on the results of strain calculations based on the relations for rosettes. 
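The rosette property referred to above (three gauge directions determine the in-plane strain state) amounts to solving a small linear system. The sketch below recovers the in-plane strain components from three axial FBG strains given their orientation angles, and then perturbs the assumed angles to show how an orientation error propagates into the recovered strains; the 0/45/90 degree layout and the readings are illustrative assumptions, not the geometry of the embedded rosette.

import numpy as np

def strains_from_rosette(eps_gauge, angles_deg):
    # Solve eps_a = exx*cos^2(t) + eyy*sin^2(t) + gxy*sin(t)*cos(t)
    # for (exx, eyy, gxy) from three gauge strains and their angles.
    t = np.radians(np.asarray(angles_deg, dtype=float))
    A = np.column_stack([np.cos(t) ** 2, np.sin(t) ** 2, np.sin(t) * np.cos(t)])
    return np.linalg.solve(A, np.asarray(eps_gauge, dtype=float))

# illustrative gauge readings (microstrain) for an assumed 0/45/90 degree rosette
readings = [950.0, 620.0, 180.0]
exx, eyy, gxy = strains_from_rosette(readings, [0.0, 45.0, 90.0])

# effect of a 2 degree error in the assumed orientation of the whole rosette
exx_e, eyy_e, gxy_e = strains_from_rosette(readings, [2.0, 47.0, 92.0])
print(exx, eyy, gxy)
print(exx - exx_e, eyy - eyy_e, gxy - gxy_e)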
It should be noted that the numerical simulation conducted under the assumption of a uniaxial stress state in an optical fiber embedded in the material and the existence of two peaks in the reflected optical spectrum poses the problem of selecting one of them to calculate the strains. A numerical model of an experimental sample with an embedded rosette was developed. The numerical simulation results of the error in the strain calculations under the assumption of a uniaxial stress state in an optical fiber, as well as information on the theoretical reflected spectrum with two peaks, were analyzed. The obtained results demonstrate that information about one of the peaks provides a significantly smaller error in calculating the strain values. However, there were no adequate methods for selecting a peak, securing a smaller error in the strain values. Therefore, in the case when the reflected spectrum has two peaks, it is suggested that the strain calculations are made using their arithmetic mean value."} +{"text": "Self-assembled phospholipid bilayers are ubiquitous in biology and chemistry. In biological systems they constitute membranes that act as cell barriers that mediate the exchange of compounds, energy and information with the extra-cellular environement and essentially act as a host for membrane proteins that carry out a multitude of critical taks. In modern chemistry, self-assembled surfactant or lipid membranes in dispersed lamellar or non-lamellar phases or bicontinuous microemulsions are frequently used in health care or consumer products to encapsulate and deliver active substances for controlled pharmacokinetics. Thanks to their generally well defined structure and properties, and using nature as an inspiration they can also be used to formulate nano-scale reactors for sustainable synthesis.Neutron scattering provides a unique view on nanoscopic and mesoscopic structure of self-assembled membranes, for example with Small Angle Neutron Scattering (SANS) in solution or with Neutron Reflectivity (NR) using contrast variation through deuterium labelling . ThermalContrast variation by deuteration in neutron scattering provides a tool to get insight into the properties of individual components in the liquid state on nanometer length scales that is not accessible through other techniques.The collection of papers in this issue only covers a fraction of the plethora of membrane properties that can be studied with state-of-the-art neutron scattering techniques.Sharma et al. ILs as solvents with increasing importance in chemical engineering processes of various kinds are generally toxic to living organisms. The article aims therefore in a better understanding of the interaction if ILs with lipid membranes by studying the influence of ILs on the membrane fluidity and molecular motion and learning in this way how to work with or use safely ILs. Quasi elastic neutron scattering was the method of choice to get insight into the molecular dynamics at the membrane-IL-interface.The interaction of ionic liquids (ILs) with cellular membranes is studied in the Article of LoRicco et al. with neutron diffraction. Apolar molecules stabilize the membrane of archea for survival in the most extreme conditions of high temperatures and pressures.A completely different environment is studied in the contribution of Engelskirchen et al. in view of understanding and improving lipase catalyzed reactions. 
Combining structural and spectroscopic investigations were essential for getting hints on the residence time of the lipase at the surfactant membrane.The interaction of lipase with microemulsion membranes has been investigated by Conn et al. to study membrane proteins, here in the bicontinuous cubic phase of phospholipid bilayers.The strength of neutron scattering in soft matter and biology is the accessibility to parts of a complex sample through contrast variation. This has been used by Thermally driven motion of membranes or incorporated proteins can be investigated with neutron spin echo (NSE) spectroscopy on nanometer and nanosecond length- and time-scales under physiologically relevant conditions. Subtleties of modern NSE experiments are discussed in the Article of Hoffmann.Jaksch et al., where, in combination with reflectometry, the influence of salt on phospholipid membranes is studied.An example of studies of the membrane dynamics with NSE is found in the contribution of Jakubauskas et al..Also plant cells contain membranes, and photosynthetic membranes are eminently important for nature and thus studied since long time. Combining different techniques such as microscopy and x-ray or neutron scattering provides an added value helping to understand nature, when the experimental results are properly modelled, as is shown in the paper of Waldie et al.).Use of deuteration is a strong argument for neutron scattering techniques, which is applied for studying the interplay between high density lipoproteins (HDLs), cholesterol and the lipid membrane to help understanding the factors for deseases as Altzheimer\u2019s or atherosclerosis in the article by Luchini et al. to characterize different preparation methods, an important prerequisite for reliable data analysis, and further on the existence of different lipid phases could be proven by the diffraction data attributed to the heterogeneity of their acyl chain composition. Using natural extracts with the ability of producing deuterated lipid mixtures plays again with the power of adjustable contrast for biological samples.Neutron diffraction on stacks of lipid multilayers allowed Krugmann et al. in order to investigate the nanoscopic details of the role of myelin basic protein in the formation of the sheath that wraps around axons. Combination of static and kinetic measurements together with theoretical arguments provide insights concerning the myelination process.Neutron reflectivity is mainly used in the contribution by Kinnun et al. highlights the advantages of contrast variation availiable in various neutron methods for accessing structural and dynamic information about the in-plane and out-of-plane structure of a variety of biomembrane systems.Finally the review article by With this research topic we hope to provide a useful and interesting overview over some recent advances in membrane studies with different neutron scattering techniques and to illustrate the potential of the different techniques of diffraction, reflectivity and spectroscopy measurements which allow together with the unique contrast variation possibilities with isotope substitution a nanoscopic view into the details of artificial and natural self assembled membranes."} +{"text": "Glomerulopathy is generic disease of the renal glomerulus, impairment of which can lead to hematuria or proteinuria due to injury or dysfunction of the endothelium, glomerular filtration barrier or podocyte . 
The most primitive kidney was the pronephros, which evolved as a urea-secreting organ in a multicellular animal. Diagnosing pathology and assessing etiology on the basis of dysfunction of evolutionary adaptations is a promising approach to analyzing contemporary disease and injury."} +{"text": "This Research Topic represents a collaboration between the International Federation of Associations of Pharmaceutical Physicians and Pharmaceutical Medicine (IFAPP) and Frontiers in Pharmaceutical Medicine and Outcomes Research aimed at creating further awareness of Pharmaceutical Medicine (PM) as a profession and meeting the new challenges of medicines development. The advancement of the biomedical sciences has extended the concept of medicinal products to include biological agents, gene and cell therapies as well as drug-medical device combinations. The development and application of these new products can be carried out efficiently only in complex, multidisciplinary teams combining the know-how of pharmaceutical physicians, clinical investigators, basic and applied biomedical scientists and other non-medically qualified professionals. This Research Topic covers 11 articles focusing on the evolving challenges in medicines development as related to the standards for performing with competence and the application of ethical principles while working in the pharmaceutical industry, academia, research sites and regulatory agencies. A systems approach integrating research into healthcare systems has been proposed to overcome the current barriers to a cost-effective cooperative process and appropriate management of the risks involved, with an urgent call to address these disparities using a multi-stakeholder approach and building consensus for change. However, a host of challenges confront healthcare authorities worldwide. The challenge is particularly great in therapeutic areas where, despite significant medical need and economic impact, the technical challenges and commercial risk of development serve as disincentives to sponsors. Currently, the development and approval of new active substances, with its disproportionate focus on oncology and rare diseases, is not in alignment with health care needs in most geographic regions. The origins of this misalignment and approaches to overcome this situation are discussed, indicating low and variable levels of perceived competence for the various domains regardless of seniority in the job; similar results were reported among individuals involved in clinical research. Core Competencies in Clinical Research have also been identified and proposed as a model for E&T and for improving the quality and accountability of specific functions involved in the drug development process (Sonstein and Jones), including the challenges for implementation and lessons learned. The evolution of postgraduate vocational E&T in pharmaceutical medicine, along with the development of the full set of core competencies for pharmaceutical physicians and drug development scientists within the competence framework of seven domains, is now established (Chisholm), whereas the process of adapting the scope of the above Framework to reflect such roles in academic institutions or regulatory bodies in Switzerland is part of the lessons learned. Many of the disruptive forces affecting the healthcare industry today are also impacting education. 
The increasing voice of the patient and the rise of patient engagement in drug development are mirrored by the increasing student voice and student focus on education. The process for curriculum transformation from didactic to competency based programs in Pharmaceutical Medicine in Australia is thoroughly described with recommendations to all other countries and institutions which may consider establishing this program.There is a growing consensus of the role of vocational training to gain competence . SpecifiBridges).Regulatory Affairs professionals play pivotal roles to ensuring healthcare products adhere to regulations and in gaining regulatory approvals for product manufacture and sales. Although they perform complicated work connected to the entire product lifecycle, only 14% of regulatory professionals come to the field with a degree related to the work and moreKerpel-Fronius et al.) with the intention to provide recommendations to both professional groups to make joint ethical decisions during various situations occurring during clinical trials. Mutual trust and respect between the various experts is emphasized as the basis of effective multi-professional team work.The complexity of developing and applying increasingly sophisticated new medicinal products has led to the participation of many medical and non-medically qualified scientists in multidisciplinary non-clinical and clinical drug development teams worldwide. Revising the IFAPP International Code of Ethical Conduct for Pharmaceutical Physicians written in 2003, the Ethics Working Group prepared the IFAPP International Ethics Framework .These revised recommendations add to the list of Codes of Practice for pharmaceutical physicians prepared by professional organizations like the Faculty of Pharmaceutical Medicine, CIOMS and the World Medical Association. Jointly they provide clear and detailed guidance for correct behavior of pharmaceutical medicine experts in specific research situations .An alignment of the Declarations of Helsinki with that of the Declaration of Taipei is recommended for the better protection of both biological materials and data derived from clinical studies when their secondary use is intended. Furthermore, it is emphasized that any future plan for data and/or material sharing should be explained in the protocol, signed by the research participants and should be made publicly available (This Research Topic intended to create further awareness of the complex set of competencies and ethical considerations required for clinical drug development and the need to foster education and training at the undergraduate, postgraduate and continuing professional development levels to ensure the pharmaceutical industry is fledged with competent professionals able to bring better and valuable medicines to the market place and contribute to leveraged health in their communities."} +{"text": "Blood blister-like aneurysms (BBAs) are rare and usually appear at nonbranching sites in the supraclinoid portion of the internal carotid artery (ICA). Because it is difficult to obtain histological specimens of the aneurysm wall and because experimental models are challenging to establish, the pathogenesis of BBAs remains uncertain. In this paper, we reviewed the diagnostic, radiological, and pathophysiological characteristics of patients with BBAs. We also summarized the existing evidence and potential mechanisms related to the causes of BBAs. 
Current evidence indicates that atherosclerosis and dissection are the main prerequisites for the formation of BBAs. Hemodynamics may play a role in the process of BBA formation due to the unique vascular anatomy of the supraclinoid ICA. Further research on histopathology and hemodynamics is warranted in this field. Blood blister-like aneurysms (BBAs) usually appear at the anteromedial or anterior wall of the supraclinoid segment of the internal carotid artery (ICA) BBAs typically have a thin, fragile wall and unidentifiable neck In clinical practice, patients are usually diagnosed with BBAs due to the acute symptoms caused by SAH, and they typically have no complaints until rupture occurs From the digital subtraction angiography (DSA) images, a typical BBA is usually observed as a small irregular hemispherical protrusion on the anteromedial wall of the supraclinoid ICA. Sometimes, BBAs may be accompanied by dissection of the ICA; the lumen of the parent artery can be narrowed or dilated during angiography The diagnosis of BBAs should not be confirmed only by radiological images and clinical symptoms. Although the imaging manifestations of an aneurysm and the patient's clinical symptoms are consistent with the characteristics of BBAs, the pathological manifestations may be a true aneurysm Although numerous treatment strategies have been proposed, including microsurgery, endovascular treatment, and combined options, the optimal therapeutic strategy for BBAs is still under debate. Wrapping only has a limited effect on preventing postoperative re-rupture, and direct clipping carries a high risk of intraoperative rupture and subsequent ICA sacrifice From the endovascular treatment perspective, the small size and unidentifiable broad-based neck of BBAs limit the effectiveness of coil embolization Saccular intracranial aneurysms (sIAs) are pathological pouch-like dilatations of the intracranial arteries. Although BBAs can grow from a small protrusion to a saccular shape in a short period, the pathological features of BBAs are different from those of sIAs.Under normal physiological conditions, the intracranial artery usually consists of three layers Compared with the rarity of BBAs, sIAs are a common disease, with a prevalence of 5%-7% in the general population Therefore, a true aneurysm is formed by the gradual degeneration of the artery wall, and there are usually one or two layers of typical arterial structures in the aneurysm wall. BBAs cannot be considered an early stage of saccular aneurysm development. BBAs do not have a complete arterial wall and lack the IEL and media instead of having thin adventitia and fibrous tissue BBAs usually show a thin, fragile wall and unidentifiable neck from the intraoperative view; these features make it challenging to obtain a specimen of the aneurysm wall, and experimental models are difficult to establish Dissection weakens the artery wall, making it unable to resist hemodynamic stress and eventually leading to rupture of the artery wall Atherosclerotic remodeling, degeneration of the artery wall and loss of the IEL lead to the formation of BBAs There exist only a few reports of traumatic-related BBAs. Haji et al. sIAs commonly arise at the bifurcations of the cerebral arteries. In contrast, BBAs usually appear at nonbranching sites in the surpraclinoid portion of the ICA The large curvature of the carotid siphon is one of its essential characteristics. 
In a computational fluid dynamics (CFD) simulation study, high curvature tightness resulted in a significantly higher wall shear stress (WSS) on the outer wall than on the inner wall of the bend The OA is one of the main branches of the paraclinoid ICA. The branching vessels can affect the hemodynamic characteristics of the ICA trunk through shunting. Indo et al. In this study, we reviewed the diagnostic, radiological, and pathophysiological characteristics of patients with BBAs. We also summarized the existing evidence and potential mechanisms related to the causes of BBAs of the supraclinoid ICA. Current evidence indicates that atherosclerosis and dissection are the main prerequisites for the formation of BBAs. Hemodynamics may play a role in the process of BBA formation due to the unique vascular anatomy of the supraclinoid ICA. Overall, the pathogenesis of BBAs is not entirely clear and further research on their histopathology and hemodynamics is warranted."} +{"text": "To the Editor,Onishi et al. responded productively to our research by adding the importance of the relationship between symptoms and vitamin B1 deficiency.The vagueness of the symptoms regarding vitamin B1 deficiency can make the diagnosis of vitamin B1 deficiency challenging. As the authors of the letter suggested, Wernicke encephalopathy can be critical because of high mortality and is diagnosed on the basis of typical symptoms, blood tests, and magnetic resonance imaging.The perception and degree of symptoms can be dependent on patients\u2019 background, clinical settings, and their help\u2010seeking behavior (HSB), which can make it difficult to investigate the relationship between symptoms and vitamin deficiencies. Older patients tend to experience symptoms in varying degrees because aging impinges on the sensitivity of perceiving symptoms.I appreciate the letter of the authors regarding the present challenges for the clarification of the relationship between symptoms and vitamin deficiencies. In the present era, aging societies can lead to the exacerbation of the prevalence of vitamin B1 deficiencies, which may drive the progression of dementia and frailty in future. Future studies should investigate the prevalence of vitamin B1 deficiency in various contexts and improve older people's HSB for improvements in their quality of life.The authors declare no conflicts of interest with regard to this article."} +{"text": "After this article was publPLOS ONE website at the time of retraction.The authors requested that PLOS remove the article from online publication, and informed PLOS that the University of Ottawa library and representatives of Factiva (Dow Jones) had both requested this action. Following an internal assessment of this case, PLOS agreed to remove the article from the All authors agreed with retraction."} +{"text": "Our senses receive a manifold of sensory signals at any given moment in our daily lives. For a coherent and unified representation of information and precise motor control, our brain needs to temporally bind the signals emanating from a common causal event and segregate others. Traditionally, different mechanisms were proposed for the temporal binding phenomenon in multisensory and motor-sensory contexts. This paper reviews the literature on the temporal binding phenomenon in both multisensory and motor-sensory contexts and suggests future research directions for advancing the field. 
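The binding-versus-segregation decision discussed in this review is usually framed as Bayesian causal inference. The toy calculation below is a minimal, hypothetical sketch of that idea (the prior, noise level and offsets are assumptions, not values from any of the reviewed studies): it computes the posterior probability that two noisy onset times were generated by a single common event, which is the quantity typically taken to drive temporal binding.

```python
import numpy as np

def posterior_common_cause(t1, t2, sigma=0.05, prior_common=0.5, spread=0.5):
    """Toy Bayesian causal inference for two observed onset times (seconds).
    sigma: assumed sensory noise; spread: assumed range of offsets under
    independent causes. All values are illustrative placeholders."""
    # If both signals share one cause, their offset is ~ N(0, 2*sigma^2).
    like_common = np.exp(-(t1 - t2) ** 2 / (4 * sigma ** 2)) / np.sqrt(4 * np.pi * sigma ** 2)
    # If the causes are independent, treat the offset as roughly uniform.
    like_indep = 1.0 / spread
    num = prior_common * like_common
    return num / (num + (1 - prior_common) * like_indep)

for dt in (0.02, 0.10, 0.30):  # e.g. delay between an action and a tone
    print(f"offset {dt*1000:.0f} ms -> P(common cause) = {posterior_common_cause(0.0, dt):.2f}")
```

As the temporal offset grows, the posterior probability of a common cause (and hence the predicted binding) falls, which is the qualitative pattern the review describes.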
Moreover, by critically evaluating the recent literature, this paper suggests that common computational principles are responsible for the temporal binding in multisensory and motor-sensory contexts. These computational principles are grounded in the Bayesian framework of uncertainty reduction rooted in the Helmholtzian idea of unconscious causal inference. We receive sensory information from the environment and the body through several distinct senses. For a coherent and unified representation of information, our brain needs to group the multisensory features emanating from an object or event, solving the causal inference problem. Congruency between pairs of cues and attentional allocation are known to influence temporal perception (Buehner). A number of recent studies have begun to investigate IB mechanisms from the perspective of Bayesian cue integration (Moore and Obhi). This review explored the temporal binding mechanisms in multisensory and motor-sensory contexts. By critically evaluating the recent empirical evidence, this paper suggests that common computational mechanisms grounded in Bayesian causal inference models are responsible for the temporal binding in multisensory and motor-sensory contexts. Moreover, the extent of temporal binding depends on the strength of the prior and the precision of the sensory likelihoods. Future studies are required to understand the independent and interactive roles of multiple priors and sensory likelihoods on temporal binding across multisensory and motor-sensory features. The author confirms being the sole contributor of this work and has approved it for publication. The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "In recent years, the development of interface debonding defect detection methods for concrete-filled steel tubes (CFSTs) using stress wave measurement with piezoelectric lead zirconate titanate (PZT) actuators and sensors has received significant attention. Because the concrete core in CFSTs is a heterogeneous material with randomness at the mesoscale, the size, position and distribution of aggregates unavoidably affect the stress wave propagation and the PZT sensor response. In this study, to efficiently investigate the influence of the mesoscale structure of the concrete core of CFSTs on the response of embedded PZT sensors, a multi-physics substructure model of CFST members coupled with a PZT actuator and a PZT sensor, in which a single circular aggregate with different sizes and positions and randomly distributed circular aggregates are considered, is established first. Then, multi-physics simulations of the effect of the local mesoscale structure of the concrete core on the response of the embedded PZT sensor excited by both a sinusoidal signal and a sweep frequency signal are carried out. Moreover, corresponding multi-physics and mesoscale simulations of the embedded PZT sensor response of substructures with different interface debonding defects are also carried out for comparison. The amplitude and the wavelet packet energy of the embedded PZT sensor response of each mesoscale substructure are employed to distinguish the influence of the concrete core mesoscale structure and the interface debonding defect on the sensor measurement.
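Wavelet packet energy, used above as the damage-sensitive feature of the embedded PZT sensor response, can be computed with an off-the-shelf wavelet packet decomposition. The following is a minimal sketch using PyWavelets on a synthetic sensor trace; the wavelet family, decomposition level and signal itself are assumptions chosen for illustration, not the settings used in the study.

```python
import numpy as np
import pywt

# Synthetic stand-in for an embedded PZT sensor voltage trace (assumed values).
fs = 1.0e6                                    # sampling rate [Hz]
t = np.arange(0, 2e-3, 1 / fs)                # 2 ms record
signal = np.sin(2 * np.pi * 50e3 * t) * np.exp(-t / 5e-4) + 0.01 * np.random.randn(t.size)

# Wavelet packet decomposition and band-wise energy.
wp = pywt.WaveletPacket(data=signal, wavelet="db4", mode="symmetric", maxlevel=4)
nodes = wp.get_level(4, order="freq")         # 2**4 = 16 frequency-ordered sub-bands
band_energy = np.array([np.sum(node.data ** 2) for node in nodes])
total_energy = band_energy.sum()

print("total wavelet packet energy:", total_energy)
print("relative energy per band:", np.round(band_energy / total_energy, 3))
```

Comparing the total (or band-wise) energy between a healthy and a debonded case is the kind of scalar comparison the study relies on; a drop in energy would be read as a debonding indicator.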
The findings from the results with the multi-physics coupling substructure models are compared with those of the full CFST-PZT coupling models and the tested members of the previous studies to verify the rationality of the embedded PZT sensors measurement of the established substructure models. Results from this study show that the effect of interface debonding defect on the amplitude and the wavelet packet energy of the embedded PZT sensor measurement of the CFST members is dominant compared with the mesoscale heterogeneity and randomness of the concrete core. As a typical concrete-steel composite structure, concrete-filled steel tube (CFST) members have been widely used as significant vertical and axial load-carrying members in long-span bridges and super high-rise buildings due to their advanced mechanical behavior under strong dynamic loadings, such as earthquakes, where the confinement effect of the steel tube on the concrete core plays a key role. However, the possible interface debonding defect between the concrete core and the steel tube has been a common concern due to shrinkage or apparent hydration heat. This is because the interface debonding defect weakens the confinement effect of the steel tube on the concrete core and finally leads to a negative effect on the load-carrying capacity, the stiffness and the ductility of CFST members. Therefore, it is particularly critical to develop effective interface debonding defect detection approaches for CFSTs .In recent years, non-destructive testing (NDT) techniques, including the acoustic echo method, electromagnetic method, infrared thermal imaging method, ultrasonic method and the X-ray method have been proposed to detect different defects, including cracks and debonding between concrete and rebars in reinforced concrete (RC) structures ,3,4,5,6.Investigation of the mechanism of defect detection for complex structures, including CFST members using PZT patch actuation and sensing technologies, is desired. Recently, numerical simulations on the stress wave propagation and the response of the PZT sensor embedded in or bonded on different structural members have been carried out to investigate the defect detection mechanism using stress wave measurements. The concrete core in most of the CFST numerical studies on stress wave propagation is usually modeled as a kind of homogeneous material ,8. In faEven though a variety of mesoscale modeling methods for the purpose of numerically investigating the effect of the mesoscale structure of concrete on both local and global behavior of concrete materials and structures have been proposed ,16,17,18PZT materials have been used as either actuators or sensors for defect detection of different engineering structures. A low frequency bending piezoelectric actuator with integrated ultrasonic non-destructive evaluation (NDE) functionality was embedded in a composite laminate to detect damage using ultrasonic pulse excitation qualitatively . In ordePerera et al. ,27 used Most recently, in order to consider the mesoscale structure of the concrete core of CFST members, Xu et al. 
first peIn this paper, in order to efficiently investigate the influence of the mesoscale structure of the concrete core of CFST members on the output voltage response of the embedded PZT sensor and to compare it with that of the interface debonding defect, the multi-physics mesoscale simulation on the stress wave propagation of a substructure of CFST-PZT coupling models, considering different circular aggregate size, position, distribution and interface debonding defect, are carried out. The proposed mesoscale CFST-PZT coupling substructure model can not only study the influence of the mesoscale component, including aggregate dimension, position and distribution between the PZT actuator and embedded sensor on the response of the embedded sensor, but can also significantly improve the simulation efficiency.In the mesoscale and multi-physics simulation on the stress wave propagation of the substructure, both sinusoidal excitation signals with different frequencies and sweep excitation signals are considered. The effect of the size, position of a single circular aggregate and the distribution of circular aggregates on the embedded PZT sensor response and the corresponding wavelet packet energy of healthy CFST members are investigated in detail. Then, the effect of the size, position of a circular aggregate and the distribution of circular aggregates on the embedded PZT sensor response and the corresponding wavelet packet energy of the corresponding CFST members with interface debonding defects are demonstrated and compared. Finally, the effect of the mesoscale structure of the substructure on the response of embedded PZT sensors is compared with the interface debonding defect and the finding from the numerical results with the substructure proposed in this study is compared with that of the previous experimental study by the authors. The rationality of the proposed mesoscale substructure model for stress wave propagation is demonstrated. Mesoscale substructure simulation results show that the mesoscale structure of the concrete core has a certain effect on the embedded PZT sensor response, but the effect of the interface debonding defect on the embedded PZT sensor measurement is dominant. The results imply that the interface debonding detection approach with stress wave measurement for CFST members is reasonable even though the concrete core is a heterogeneous material.t is the time variable, The control equations for elastic stress wave propagation simulation in a solid medium excited by a PZT actuator are shown in Equations (1)\u2013(5) ,38,39,40The direct and inverse piezoelectric effects of the PZT sensor and actuator are considered in the coupling model composed of PZT patches and CFST members. The mechanical balance equation of PZT patches is consistent with the force balance equation of conventional solid materials except for the stress tensor shown in Equation (6).The charge conservation equations of the PZT material in the electrostatic effect are shown in Equations (7)\u2013(11) ,38,39,40Different from the previous multi-physics mesoscale simulation studies by Xu et al. 
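Since Equations (1)-(5) are only referenced here and not reproduced, the sketch below gives a rough, one-dimensional illustration of the kind of explicit time stepping used to propagate an actuator-driven elastic stress wave. All values (geometry, material constants, excitation frequency, boundary treatment) are placeholders for illustration and are not the parameters of the coupled CFST-PZT model.

```python
import numpy as np

# Minimal 1D elastic wave propagation by explicit finite differences (illustrative only).
L, nx = 0.5, 501                      # bar length [m], number of grid points (assumed)
dx = L / (nx - 1)
rho, E = 2400.0, 30e9                 # density [kg/m^3], Young's modulus [Pa] (placeholder values)
c = np.sqrt(E / rho)                  # longitudinal wave speed
dt = 0.5 * dx / c                     # time step satisfying the CFL condition
nt = 2000
f_exc = 50e3                          # 50 kHz excitation (assumed)

u_prev = np.zeros(nx)
u = np.zeros(nx)

for n in range(nt):
    t = n * dt
    lap = np.zeros(nx)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    u_next = 2.0 * u - u_prev + (c * dt) ** 2 * lap
    # Actuator-driven left boundary: one sine cycle, then held at zero.
    u_next[0] = np.sin(2.0 * np.pi * f_exc * t) if t < 1.0 / f_exc else 0.0
    # Crude transmitting (low-reflection) right boundary.
    u_next[-1] = u[-1] + c * dt / dx * (u[-2] - u[-1])
    u_prev, u = u, u_next

print("displacement sampled at mid-span after %d steps: %.3e" % (nt, u[nx // 2]))
```

The "sensor" in such a sketch is simply the displacement (or strain) history sampled at a chosen grid point; the multi-physics model described in the text additionally couples this mechanical field to the piezoelectric constitutive and charge-conservation equations.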
,36, in tThe low reflection boundary condition used for the substructure of the CFST member coupled with the PZT actuator and sensor meets the following Equations (12) and (13) ,30,31.The steady output voltage amplitudes of the embedded PZT sensor of the mesoscale substructure coupling models showed that the size, lateral and longitudinal positions of a single aggregate and the aggregates distributions differently affected the response of embedded PZT sensor of the mesoscale substructures without the interface debonding defect under continuous sinusoidal excitation signal. The effect of the size, lateral position of a single aggregate and the aggregates distributions on the response of embedded PZT sensor of the mesoscale substructure without interface debonding was limited, but the aggregate longitudinal position had the most obvious influence.(2)The effect of the size and position of a single aggregate and the distribution of aggregates on the response of the embedded PZT sensor of the mesoscale substructure coupling models with interface debonding defect was comparatively limited when compared with that of the mesoscale substructures without the interface debonding defect under sweep frequency excitation signal. The existence of interface debonding defect led to an obvious decrease in the output voltage amplitude of the embedded PZT sensor no matter what size and position of the single aggregate and distribution of aggregates were considered.(3)The wavelet packet energy of the embedded PZT sensors is also dominantly affected by the interface debonding defect rather than the mesoscale structure of the concrete core of the substructure coupling models with different single aggregate sizes, positions and aggregates distribution. Additionally, the length of the interface debonding defect had an obvious effect on the wavelet packet energy of the embedded PZT sensor in the mesoscale substructure coupling models and its wavelet packet energy of the models with the interface debonding defect was always much lower than that of the healthy substructure.In this study, a multi-physics simulation with mesoscale substructures of CFST-PZT coupling models composed of a surface-mounted PZT actuator, an embedded PZT sensor, aggregate, mortar and a steel tube was carried out, and the effect of the size, lateral and longitudinal positions of a single aggregate, the distribution of aggregates and the interface debonding defect on the response of the embedded PZT sensor was investigated in detail with the proposed mesoscale substructure coupling models. The output voltage response of the embedded PZT sensor of different mesoscale substructure coupling models without and with debonding under a continuous sinusoidal excitation signal or sweep frequency excitation signal was discussed. Based on the multi-physics mesoscale simulation results on each substructure, the following conclusions could be drawn:In this study, the distance between the PZT actuator and the PZT sensor was constant. In order to demonstrate the effect of the measuring distance between the PZT actuator and sensor on the embedded PZT sensor of the mesoscale substructures without interface debonding defect, further investigation on the response of the output voltage signal of the PZT sensor with different measuring distances could be carried out in future. 
Moreover, the mesoscale structure of concrete with aggregate shapes other than the circular shape can also be considered further."} +{"text": "Teaching Point: In the absence of a clear history of trauma, avulsion of the lesser trochanter should raise a high index of suspicion of an underlying malignancy. Figure 1, arrows). The initial diagnosis of an avulsion fracture of the lesser trochanter was made, in accordance with the professional activities of the patient. Because of aggravating pain and the onset of nightly pain, a follow-up radiograph was performed a few weeks later, showing an expansile radiolucency of the lesser trochanter with an adjacent layered periosteal reaction at the medial cortex of the proximal femoral diaphysis . These findings were suggestive of an aggressive osseous lesion. Repeated MRI showed an osseous lesion causing destruction of the medial cortical bone and a large soft tissue component posteromedial the proximal femoral diaphysis. There was heterogeneous enhancement of the mass . Perilesional edema was present in the iliopsoas and vastus intermedius muscles. The combination of the age of the patient, the absence of a history of trauma, and the aggressive appearance of the osseous lesion with a large soft tissue component was suspicious for Ewing\u2019s sarcoma. After histopathological confirmation, the lesion was treated with neoadjuvant chemotherapy, followed by resection of the proximal femur shaft and placement of a femoral prosthesis. Further follow-up was uneventful.An 18-year-old cyclist was admitted to our hospital for spontaneous onset pain in the left groin for the past months. Computed tomography (CT) showed a fracture at the lesser trochanter of the left femur, which was confirmed on the subsequent magnetic resonance imaging (MRI) (Ewing\u2019s sarcoma is a highly aggressive neoplasm and the second most common bone tumor in children and adolescents with a peak incidence between 10\u201320 years. The metaphysis of the long bones is most involved (80%), and the femur is the most affected (25%) [Location in the lesser trochanter may rarely mimic an avulsion fracture. The imaging appearance of Ewing\u2019s sarcoma is highly variable and often presents as a large permeative lesion, with lamellated periosteal reaction. Conventional radiography shows an aggressive osteolytic lesion. Computed tomography (CT) is more sensitive for the detection of bony destruction in more complex regions. Magnetic resonance imaging (MRI) is the modality of choice for evaluating local tumor extension and staging. Typically, there is bone marrow replacement with heterogeneous enhancement and a large soft tissue mass. Systemic chemotherapy is the keystone in treatment; additional surgery or radiotherapy can be performed depending on the location and size of the lesion.Underlying malignancy should be excluded in case of avulsion fracture of the lesser trochanter, particularly in the absence of trauma."} +{"text": "The SARS-CoV-2 pandemic has swiftly and firmly implanted itself within our communities and countries. Multiple countries have faced numerous waves of the virus and have paid the price with death counts totalling in the hundreds of thousands, and globally now millions. 
The novel nature and rapidity of the SARS-CoV-2 spread throughout the globe have resulted in health institutions finding themselves in a quagmire as they battle the lack of necessary equipment and, more importantly, the lack of an established and universally accepted treatment protocol for the COVID-19 infection. The drugs used throughout the progression of the pandemic have varied and wavered as more research and data on better treatment of the infection have become available. Corticosteroids, however, are one group of drugs that is almost always a constant among treatment regimens and protocols. The off-label use of various regimens and high doses of certain drugs has led to some deleterious adverse effects. Corticosteroids and immunosuppressants of such a nature have long been used in viral infections and have been a mainstay of the treatment protocols and regimens used in COVID-19 patients. Popular corticosteroid drugs being prescribed in patients suffering from COVID-19 are dexamethasone, methylprednisolone and/or hydrocortisone, with intravenous (IV) and/or oral administration. The use of such high doses of corticosteroids has shown very positive results and has been lifesaving in many cases. Corticosteroid therapy has a multitude of side effects, and they vary depending on the dosing, duration and potency of the particular agent being prescribed. When used for short durations at high doses, the side effects may include hypertension, electrolyte abnormalities, cutaneous effects, pancreatitis, hematological dyscrasias, hyperglycemia, and neuropsychologic and immunological adverse effects. Reports have surfaced of new side effects developing in patients after recovery from COVID-19 infection. Severe systemic mucormycotic infections causing orbital compartment syndrome and severe multiorgan infections have been partially attributed to the extensive use of systemic corticosteroids in the treatment of the initial COVID-19 infection. A further factor compounding these complications is that the majority of patients treated with corticosteroids have pre-existing conditions and severe comorbidities. These pre-existing conditions are often exacerbated and accelerated by the lifesaving steroid treatment, a textbook example being the worsened glycaemic control in diabetic patients who are treated with corticosteroids. As in the case of mucormycotic infections and their relationship with the immunosuppressive nature of corticosteroid therapy used in the treatment of COVID-19, a rise in cases of avascular necrosis of the femoral head is being reported and is being attributed to the sustained and aggressive corticosteroid therapy being implemented. The severe and life-threatening nature of the COVID-19 infection, the acute respiratory distress syndrome (ARDS) and the cytokine storm induced by the infection demand lifesaving high doses of steroid therapy. As with all pharmacological therapies, adverse effects are present. One such adverse effect now being reported is corticosteroid-induced avascular necrosis of the femoral head, also termed osteonecrosis of the femoral head (ONFH). The mechanism by which steroids induce avascular necrosis of the femoral head is underpinned by the collective actions of the drug therapy.
It must be noted that AVN principally affects the femoral head and most commonly the anterolateral aspect thereof as it is the crux of weight bearing. Corticosteroids induce fat mobilization and this thus innately enhances the likelihood of fat emboli developing from the liver to occlude minor blood vessels in the femur, this thereby compromises the microvascular environment. Superadded to this the steroid therapy disrupts calcium metabolism and homeostasis which induces hypertrophy in the intramedullary fat cells, Gaucher cells and inflammatory cells; whilst increasing the activity of osteoclasts, thus increasing bone resorption and decreasing calcium uptake and deposition; ultimately leading to an insufficiency in the trabecular and cortical bone. This insufficiency thus equates to an increased intraosseous pressure which impedes intramedullary circulation and results in avascular necrosis .Agarwala et al, conducted a study on three cases whom had recovered from COVID-19 after being treated with corticosteroid therapy and subsequently developed AVN. The patients were found to be prescribed a mean dose equivalent to 758mg of prednisolone. The patients were subsequently symptomatic with bilateral femoral hip pain. The great importance of this study is that the patients were diagnosed with AVN after receiving a dosage much lower than the 2000mg equivalent ceiling which current guidelines dictate to avoid AVN. Superadded to this; it was noted that the patients presented with the features of AVN at a mean of 58 days after their initial COVID-19 diagnosis and treatment. This is in contrast to the current literature which states that AVN takes 6 months to 1 year to develop post corticosteroid therapy ,17.It has been long established that steroids induced AVN, however reports of AVN developing rapidly in COVID-19 patients post treatment where specific treatment guidelines to avoid steroid induced AVN have been adhered to suggest that the COVID-19 infection itself may also be instrumental in the development of AVN and that steroid use is not the sole benefactor to inducing AVN post steroid therapy ,16.Current guidelines suggest that a cumulative ceiling dose of 2000mg of prednisolone or its equivalent should not be breached in order to prevent the development of AVN, however as seen in the research conducted by Agarwal et al it is evident that AVN is being induced in patients treated far below the 2000mg ceiling. It is thThe use of steroid therapy in COVID-19 is invaluable and lifesaving in nature. The adverse effects of such steroid therapy are however profound and well established. It is evident that avascular necrosis is directly caused by high dose steroid therapy, however the case reports have very clearly indicated that the rapid onset of AVN post recovery from the COVID-19 infection cannot be solely attributed to steroid therapy and that another benefactor induced by the COVID-19 infection is at play. It is thus vital for treating physicians to take cognisance of this adverse effect post recovery and therefore should ensure that prophylactic bisphosphonate therapy is initiated timeously and congruently."} +{"text": "Temporo-mandibular joint (TMJ) joint and the condyle of mandible are observed in the radiographs of the skull and the jaw. Therefore, it is of interest to assess the predictability of four different shapes of condyle in skeletal class I, II and IIImalocclusion. 
The four commonly visualized shapes, oval, bird beak, diamond and crooked, were assessed using an orthopantomogram (OPG). Each malocclusion was examined for the different shapes of the condyle. 987 OPGs were radiographically evaluated and the morphology of 1974 condylar heads was visualized. The shapes of the condyles were grouped under four different types. The data show that the oval-shaped condyle was the most common, followed by the bird beak. There was variability in the diamond and crooked shapes, and these were less frequent than the other types. Thus, the shape of the condyle is a useful predictive guide in deciding the nature of the occlusion. The temporomandibular joint (TMJ) is a ginglymoarthrodial joint that is formed by the articulation between two bones, the condylar part of the mandible and the glenoid fossa at the base of the skull. The variation in the morphology of the condyle and the fossa noticed among individuals has now become an area of research. In the literature, there are various studies that have assessed condylar morphology. There are differences in condylar morphology across the various malocclusions. There are variations between males and females, with men having a larger condyle than women. The malocclusion that shows a major change in the condyle is the transverse malocclusion. Another important finding that Tadej et al. pointed out concerned the medio-lateral dimension as compared with the anterior-posterior dimension. The predictability of condylar morphology is of significant importance. The data showed that in all three types of skeletal malocclusion (I, II and III), the oval-shaped condyle showed the maximum occurrence. The next most common shapes were diamond and bird beak, and the crooked-shaped condyle was the least common. There was no significant difference between the left and right condyles. The results of the study suggest reasonable predictability of condylar morphology from the nature of the occlusion; any deviation from the oval shape would require further investigation for clinical significance."} +{"text": "Breast carcinoma metastasis can involve any ocular structure, but involvement of the optic nerve is extremely rare. Choroidal metastasis is usually multifocal as well as bilateral and occurs late. We report an unusual initial presentation of metastasis from breast cancer: unilateral infiltrative optic neuropathy with concurrent choroidal metastatic deposits in an adequately treated middle-aged female. Our present case, in which, for the first time in the literature, we illustrate unilateral infiltrative optic neuropathy with concurrent choroidal metastatic deposits secondary to breast carcinoma, will add to our knowledge of the various potential ocular presentations of this relatively common malignant disease. 
A 41-year-old woman with a history of invasive ductal adenocarcinoma of her right breast presented with diminution of vision in the right eye of 15 days' duration. The diagnosis of her right-sided stage IIIc breast carcinoma was made 6 years ago when she developed a lump and pain in the right side of her chest. She subsequently underwent a right-sided modified radical mastectomy followed by eight cycles of chemotherapy and external beam radiation of the thoracic wall. She remained stable thereafter and presented with diminution of vision in the right eye six years after the initial diagnosis of breast cancer. On evaluation, the best corrected visual acuity (BCVA) was 3/60 in the right eye and 6/6 in the left eye.
Examination of the right eye was completely unremarkable except for the presence of a relative afferent pupillary defect (RAPD). There was also the presence of trace vitreous cells on the anterior vitreous surface in the right eye. Fundus examination of the right eye revealed a diffuse enlargement of the optic disc, and disc oedema with splinter haemorrhages in the surrounding retina (blue arrow) suggestive of optic nerve infiltration. There were multiple, homogenous, creamy yellow lesions with interspersed alterations of the retinal pigment epithelium along the supero-temporal vascular arcade (yellow arrows) associated with serous detachments of the fovea characteristic of choroidal metastatic deposits (Figure 1A In view of the patient\u2019s present condition, her past history of breast cancer, optic nerve head infiltrative and characteristic choroidal metastatic features on fundus examination, and imaging findings, the diagnosis of infiltrative optic neuropathy with concurrent choroidal metastasis of the right eye secondary to breast carcinoma was made. The patient was referred to an oncologist to rule out other system involvement and necessary interventions.Metastatic tumor is the most common ocular malignancy, and uveal tissue is the most favored site where cancer metastases develop . The oveOptic nerve head involvement in metastatic disease of the breast is seen in less than 5% of all intraocular metastases; appearing as a diffuse enlargement of the optic disc in about 84% of cases with a degree of secondary disc oedema and splinter haemorrhages [Optic disc infiltration is known to occur either due to direct extension of a choroidal tumor which is located close to the optic disc, or due to a spread of neoplastic cells to the circulation of the optic nerve head by blood route . The hemTo conclude, breast carcinoma is a relatively common source of ocular metastasis and can have varied presentations depending on the site of metastasis. Unilateral infiltrative optic neuropathy with concurrent choroidal deposits as initial manifestation needs to be considered as differential diagnosis and potential cause of severe diminution of vision due to metastasis in patients of breast carcinoma.The authors declare that they have no competing interests."} +{"text": "Skin cancer is one of the most common cancers in humans. This study aims to create a system for recognizing pigmented skin lesions by analyzing heterogeneous data based on a multimodal neural network. Fusing patient statistics and multidimensional visual data allows for finding additional links between dermoscopic images and medical diagnostic results, significantly improving neural network classification accuracy. The use by specialists of the proposed system of neural network recognition of pigmented skin lesions will enhance the efficiency of diagnosis compared to visual diagnostic methods.Today, skin cancer is one of the most common malignant neoplasms in the human body. Diagnosis of pigmented lesions is challenging even for experienced dermatologists due to the wide range of morphological manifestations. Artificial intelligence technologies are capable of equaling and even surpassing the capabilities of a dermatologist in terms of efficiency. The main problem of implementing intellectual analysis systems is low accuracy. One of the possible ways to increase this indicator is using stages of preliminary processing of visual data and the use of heterogeneous data. 
The article proposes a multimodal neural network system for identifying pigmented skin lesions with a preliminary identification, and removing hair from dermatoscopic images. The novelty of the proposed system lies in the joint use of the stage of preliminary cleaning of hair structures and a multimodal neural network system for the analysis of heterogeneous data. The accuracy of pigmented skin lesions recognition in 10 diagnostically significant categories in the proposed system was 83.6%. The use of the proposed system by dermatologists as an auxiliary diagnostic method will minimize the impact of the human factor, assist in making medical decisions, and expand the possibilities of early detection of skin cancer. According to World Health Organization statistics, non-melanoma and melanoma skin cancer incidence has significantly increased over the past decade . Up to tRapid and highly accurate early diagnosis of skin cancer can reduce patients\u2019 risk of death . When deToday medicine is considered one of the strategic and promising areas for the effective implementation of systems based on artificial intelligence . There iThere are many methods for pre-processing dermoscopic images to improve and visually highlight diagnostically significant features. One of these methods is segmentation to highlight pigmented skin lesions\u2019 contours. Segmentation can be performed using a biorthogonal two-dimensional wavelet transform and the Otsu algorithm . Edge exThe presence of hair in dermatoscopic images can drastically change the size, shape, color, and texture of the lesion, which significantly affects the automatic analysis of the neural network . RemovinAnother way to improve the accuracy of intelligent classification systems is to combine heterogeneous data and further analyze them to find additional relationships. In database dermatology, heterogeneous data mining makes it possible to combine patient statistical metadata and dermoscopic images, greatly improving the recognition of pigmented skin lesions. The use of multimodal neural network systems ,35,36,37Despite significant progress in implementing artificial intelligence technologies to analyze dermatological data, developing neural network systems of varying complexity is relevant to achieving higher recognition accuracy. The main hypothesis of the manuscript is a potential increase in the quality of neural network systems for analyzing medical data due to the emerging synergy when using various methods to improve recognition accuracy together. This study aims to develop and model a multimodal neural network system for analyzing dermatological data through the preliminary cleaning of hair structures from images. The proposed system makes it possible to achieve higher recognition accuracy levels than similar neural network systems due to the preliminary cleaning of hair structures from dermoscopic images. The use of the proposed system by dermatologists as an auxiliary diagnostic method will minimize the impact of the human factor in making medical decisions.The rest of the work is structured as follows. The paper proposes a multimodal neural network system for recognizing pigmented skin lesions with a stage of preliminary processing of dermatoscopic images. The proposed multimodal neural network system for analysis and classification combines heterogeneous diagnostic data represented by multivariate visual data and patient statistics. 
The scheme of a multimodal neural network system for the classification of dermatoscopic images of pigmented skin lesions with preliminary processing of heterogeneous data is shown in The multidimensional visual data undergoes a pre-processing stage, which identifies and cleans hair structures from dermatoscopic images of pigmented skin lesions. Patient statistics also undergo a one-hot encoding process to generate a feature vector. The multimodal neural network system for recognizing pigmented lesions in the skin consists of two neural network architectures. Dermatoscopic images are processed using the specified Convolutional Neural Network (CNN) architecture. Statistical metadata is processed using a linear multilayer neural network. The resulting feature vector at the CNN output and the output signal of the linear neural network are combined on the concatenation layer. The combined signal is fed to the layer for classification. The output signal from the proposed multimodal neural network system for recognizing pigmented skin lesions is the percentage of 10 diagnostically significant categories, including a recognized dermatoscopic image.The main diagnostic method in the field of dermatology is visual analysis. Today, many imaging approaches have been developed to help dermatologists overcome the problems caused by the apperception of tiny skin lesions. The most widely used imaging technique in dermatology is dermatoscopy, a non-invasive technique for imaging the skin surface using a light magnifying device and immersion fluid . StatistThe presence of such noisy structures as hair significantly complicates the work of dermatologists and specialists. It can also cause errors in recognizing pigmented skin lesions in automatic analysis systems. Hair violates the geometric properties of the pigmented lesion areas, which negatively affects the diagnostic accuracy . Figure The most common way to solve the occlusion problem of pigmented skin lesions is to remove the visible part of the hair with a cutting instrument before performing a dermatoscopic examination. However, this approach leads to skin irritation. Also, it causes diffuse changes in the color of the entire pigmented lesion, which distorts diagnostically significant signs to a greater extent than the presence of hair itself. An alternative solution is digitalizing dermatoscopic visual data to remove hair structures. The essence of the hair pre-cleaning methods is to identify each pixel of the image as a pixel-hair or pixel-skin and then replace the pixels of the hair structures with skin pixels . PrelimiThis paper proposes a method for digital pre-processing dermoscopic images using morphological operations on multidimensional visual data. A step-by-step scheme of the proposed method is shown in Image processing of pigmented skin lesions consists of four main stages. At the first stage, the RGB image is decomposed into color components. The second step is to locate the locations of the hair structures. At the third stage, the hair pixels are replaced with neighboring pixels. The fourth step is to reverse engineer an RGB color dermatoscopic image.The input of the proposed method is RGB dermatoscopic images of pigmented neoplasms of the skin At the next stage, the original image The operator of zeroing the pixels After the operation of threshold zeroing of pixels, a morphological operation of dilatation with the The next step is to replace the pixels of the hair structure with neighboring pixels. 
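As a rough illustration of this four-stage idea (detect thin dark hair structures, mask them, and fill the masked pixels from their surroundings), a minimal OpenCV sketch is given below. The kernel sizes, the threshold and the use of PDE-based inpainting as a stand-in for the interpolation step are assumptions for illustration, not the authors' exact operators.

```python
import cv2
import numpy as np

def remove_hair(rgb_image: np.ndarray) -> np.ndarray:
    """Illustrative hair removal: detect dark strands with a morphological
    black-hat filter, then replace the masked pixels by inpainting."""
    gray = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2GRAY)
    # Black-hat highlights thin dark structures (hair) against brighter skin.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
    blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)
    # Threshold to a binary hair mask, then dilate so strand borders are covered.
    _, mask = cv2.threshold(blackhat, 10, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, np.ones((3, 3), np.uint8), iterations=1)
    # Fill hair pixels from their surroundings; PDE-based inpainting is used here
    # as a stand-in for the boundary interpolation step described in the text.
    return cv2.inpaint(rgb_image, mask, inpaintRadius=5, flags=cv2.INPAINT_NS)

# usage: cleaned = remove_hair(cv2.cvtColor(cv2.imread("lesion.jpg"), cv2.COLOR_BGR2RGB))
```

The practical effect is that thin occluding strands are replaced by locally interpolated skin tones while the lesion itself is left largely untouched.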
Using the Laplace equation, pixels are interpolated from the area\u2019s border of the selected hair structures. In this case, the pixels from the border of the hair structures cannot be changed. The last step is the reverse construction of the RGB color image from the extracted color components. For this, the color channels An example of the step-by-step work of the proposed method for identifying and cleaning hair structures from dermatoscopic images of pigmented skin lesions is shown in Today, in medicine, there is an increase in the volume of digital information due to the accumulation of data from electronic medical records, the results of laboratory and instrumental studies, mobile devices for monitoring human physiological functions, and others . PatientMetadata pre-processing is converting statistical data into the format required by the selected data mining method. Since the proposed multimodal system for recognizing pigmented skin lesions is a fully connected neural network, it must encode the data as a vector of features. A corresponding metadata information vector is generated for each image in the dataset, which depends on the amount and type of statistical information. One-hot encoding can sometimes outperform complex encoding systems . All mulSuppose the One-hot encoding is used to encode the statistic In deep learning, multimodal fusion or heterogeneous synthesis combines different data types obtained from various sources . In the For the recognition of multidimensional visual data, the most optimal neural network architecture is CNN . The inpThe dermatoscopic image includes The concatenation layer at the input receives the feature map, which was obtained on the last layer intended for processing dermatoscopic images The activation of the last layer of the multimodal neural network is displayed through the Data from the open archive of The International Skin Imaging Collaboration (ISIC), which is the largest available set of confidential data in dermatology, was used for the simulations . The maiThe modeling was performed using the high-level programming language Python 3.8.8. All calculations were performed on a PC with an Intel (R) Core (TM) i5-8500 CPU @ 3.00 GHz 3.00 GHz with 16 GB of RAM and a 64-bit Windows 10 operating system. Multimodal CNN training was carried out using a graphics processing unit (GPU) based on an NVIDIA video chipset GeForce GTX 1050TI.Preliminary heterogeneous data processing was carried out at the first stage of the proposed multimodal classification system. Dermatoscopic image pre-processing consisted of stepwise hair removal and image resizing. The removal of hair structures was carried out using the developed method based on morphological operations, presented in The pre-processing of patient metadata consisted of one-hot encoding to convert the vector format required for further mining. The coding tables for each patient metadata index are presented in CNN AlexNet , SqueezeLarge volumes of training data make it possible to increase the classification accuracy of automated systems for neural network recognition of dermatoscopic images of pigmented skin lesions. Creating large-scale medical imaging datasets is costly and time-consuming because diagnosis and further labeling require specialized equipment and trained practitioners. It also requires the consent of patients to process and provides personal data. 
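The one-hot encoding of the patient metadata described earlier in this section can be sketched as follows. The specific fields and category lists are hypothetical placeholders, since the exact coding tables are only referenced, not reproduced, here.

```python
import numpy as np

# Hypothetical metadata fields and categories (assumptions for illustration).
CATEGORIES = {
    "sex":          ["male", "female", "unknown"],
    "age_bin":      ["0-20", "21-40", "41-60", "61+"],
    "localization": ["face", "trunk", "upper extremity", "lower extremity", "other"],
}

def one_hot_metadata(record: dict) -> np.ndarray:
    """Concatenate a one-hot block per field into a single feature vector."""
    blocks = []
    for field, values in CATEGORIES.items():
        block = np.zeros(len(values), dtype=np.float32)
        if record.get(field) in values:
            block[values.index(record[field])] = 1.0
        blocks.append(block)
    return np.concatenate(blocks)

vec = one_hot_metadata({"sex": "female", "age_bin": "41-60", "localization": "trunk"})
print(vec, vec.shape)  # a 12-dimensional binary feature vector
```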
Existing training datasets for the intelligent analysis of pigmented skin lesions, including the ISIC open archive, are imbalanced across benign lesion classes. All of this leads to inaccurate classification results due to CNN overfitting.Affine transformations are one of the main methods for increasing and balancing the amount of multidimensional visual data in each class. The possible affine transformations are rotation, displacement, reflection, scaling, etc. The selected dermatoscopic images of pigmented skin lesions include multidimensional visual data of various sizes. Different CNN architectures require input images of a certain size. Scaling using affine transformations transforms visual data into a set of images of the same size. Scaling is usually combined with cropping to achieve the desired image size.Augmentation of dermatoscopic images of pigmented skin lesions included all of the above methods of affinity transformations, examples of which are shown in New multidimensional visual data were created from existing ones using augmentation for more effective training. This allowed us to increase the number of training images. Training data augmentation has proven effective enough to improve accuracy in neural network recognition systems for medical data . When trPre-processed images of pigmented skin lesions were fed into CNN architectures. The vector of pre-processed metadata was provided to the input of a linear neural network, which consisted of several linear layers and ReLu activation layers. After passing the different input signals through the CNN and the linear neural network, the heterogeneous data passed fusion on the concatenation layer. The combined data was fed to the softmax layer for classification. The results predicted by the multimodal neural network from the test sample were converted to a binary form to construct the Receiver Operating Characteristic curve (ROC curve). Each predicted class label consisted of a combination of two characters with a length of 10 characters. The ROC curve represents the number of correctly classified positive values on incorrectly classified negative values.Following the analysis of the confusion matrices in The The results of the analysis of the McNemar test from Even though the proposed multimodal neural network system with the stage of preliminary cleaning of hair structures shows higher results in recognition accuracy compared to existing similar systems, as well as compared to visual diagnostic methods for physicians in the field of dermatology, the use of the proposed system as an independent diagnostic tool is impossible due to the presence of a false-negative response in cases of malignant neoplasms. This system can only be used as a high-precision auxiliary tool for physicians and specialists.AlexNet deep neural network architecture is superior to other architectures in the following ways: it does not require specialized hardware and works well with limited GPU; learning AlexNet is faster than other deeper architectures; more filters are used on each layer; a pooling layer follows each convolutional layer; ReLU is used as the activation function, which is more biological and reduces the likelihood of the gradient disappearing . The lisAs a result of modeling the proposed multimodal neural network system, the best recognition accuracy was 83.6%. 
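The fusion step just described (CNN features for the image, a small linear network with ReLU activations for the metadata vector, concatenation, then a 10-class output) can be sketched in PyTorch as follows. The layer sizes and the 12-dimensional metadata input are assumptions for illustration; only the overall structure follows the description in the text.

```python
import torch
import torch.nn as nn
from torchvision import models

class MultimodalLesionNet(nn.Module):
    """Sketch of the image + metadata fusion: CNN backbone, metadata MLP,
    concatenation layer, and a 10-class classification head."""
    def __init__(self, n_meta_features: int = 12, n_classes: int = 10):
        super().__init__()
        self.cnn = models.alexnet(weights=None)   # pre-trained weights could be loaded instead
        self.cnn.classifier[6] = nn.Identity()    # expose the 4096-d image feature vector
        self.meta = nn.Sequential(
            nn.Linear(n_meta_features, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
        )
        self.head = nn.Linear(4096 + 64, n_classes)

    def forward(self, image: torch.Tensor, meta: torch.Tensor) -> torch.Tensor:
        img_feat = self.cnn(image)                        # (B, 4096)
        meta_feat = self.meta(meta)                       # (B, 64)
        fused = torch.cat([img_feat, meta_feat], dim=1)   # concatenation layer
        return self.head(fused)                           # logits; softmax is applied at/inside the loss

model = MultimodalLesionNet()
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 12))
print(logits.shape)  # torch.Size([2, 10])
```

Swapping the backbone for SqueezeNet or ResNet variants only changes the size of the image feature vector; the concatenation-based fusion stays the same.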
The preliminary cleaning of hair structures and the analysis of heterogeneous data made it possible to significantly exceed the classification accuracy compared to simple neural network architectures to recognize dermoscopic images. In CNN GoogIn , prelimiA comparison of the recognition accuracy of various multimodal neural network systems for recognizing pigmented lesions and skin with the proposed system is presented in In , the autIdentical conditions for modeling, hardware resources, image base, and many diagnostic categories used make it possible to compare the results obtained with the proposed multimodal neural network system with the stage of preliminary hair removal with the results from work. The recognition accuracy of the proposed multimodal system with the stage of preliminary hair removal on the test set was 83.6%, which is about 20.2% higher than the results of testing the system from . The maiIn , a multiThe authors of presenteIn proposedThe main limitation in using the proposed multimodal neural network system for recognizing pigmented lesions in the skin is that specialists can only use the system as an additional diagnostic tool. The proposed system is not a medical device and cannot independently diagnose patients. Since the major dermatoscopic training databases are biased towards benign image classifications, misclassification is possible. The use of augmentation based on affine transformations makes it possible to minimize this factor but not completely exclude it.A promising direction for further research is constructing more complex multimodal systems for neural network classification of pigmented skin neoplasms. The use of segmentation and preliminary cleaning of the hair\u2019s visual data will help highlight the contour of the pigmented skin lesion. Distortion of the shapes of the skin neoplasm is an important diagnostic sign that may indicate the malignancy of this lesion.The article presents a multimodal neural network system for recognizing pigmented skin lesions with a stage of preliminary cleaning from hair structures. The fusion of dissimilar data made it possible to increase the recognition accuracy by 4.93\u20136.28%, depending on the CNN architecture. The best recognition accuracy for 10 diagnostically significant categories was 83.56% when using the AlexNet pre-trained CNN architecture. At the same time, the best indicator of improving the accuracy was obtained using the pre-trained ResNet-101 architecture and amounted to 6.28%. The use of the stage of preliminary processing of visual data made it possible to prepare dermatoscopic images for further analysis and improve the quality of diagnostically important visual information. At the same time, the fusion of patient statistics and visual data made it possible to find additional links between dermatoscopic images and the results of medical diagnostics, which significantly increased the accuracy of the classification of neural networks.Creating systems for automatically recognizing the state of pigmented lesions of patients\u2019 skin can be a good incentive for cognitive medical monitoring systems. This can reduce the consumption of financial and labor resources involved in the medical industry. 
At the same time, the creation of mobile monitoring systems to monitor potentially dangerous skin neoplasms will automatically receive feedback on the condition of patients."} +{"text": "The mission of the Caregivers Clinic at Memorial Sloan Kettering Cancer Center (MSK) is to assure that no caregiver of an MSK patient experiencing significant distress as a result of their caregiving responsibilities goes unidentified and deprived of necessary psychosocial services. This presentation will cover the steps taken and barriers faced in the development of the Clinic, including advocating for caregivers to receive their own unique medical records. Data regarding the number of caregivers seen for psychotherapy and for medication management will be presented, as will data regarding presenting complaints and average length of care. Also included is a discussion of the challenges faced in expanding and maintaining the capacity of the Clinic, especially in the setting of the pandemic during which caregivers' use of psychosocial care at MSK is notably higher than in years past. Several current adjunct approaches to address capacity needs currently being piloted will be discussed."} +{"text": "Combined with the characteristics of the Chinese environmental regulation supervision system and evolutionary game theory, the spillover effect of local governments\u2019 investment behaviour has been incorporated into their payment function to study the influence of spillover on the strategy choice of local governments and enterprises. The results show that (1) the spillover effect is one of the reasons for distortions in the implementation of environmental regulations. Whether the influence of the spillover effect on the probability of local governments choosing the strategy of strict supervision is positive or negative depends on the environmental benefit of the local government\u2019s environmental protection investment. (2) Increasing the reward for the enterprise\u2019s complete green technology innovation behaviour is conducive to improving the probability of the enterprises choosing the strategy of complete green technology innovation, while it reduces the probability of local governments choosing the strategy of strict supervision. Increasing punishment for enterprises\u2019 incomplete green technology innovation behaviour is conducive to improving the probability of enterprises choosing the strategy of complete green technology innovation, but its impact on the probability of local governments choosing the strategy of strict supervision is uncertain due to the limitations of many factors. (3) Enterprises\u2019 emission reduction capacity is positively related to the probability of the enterprises choosing the strategy of complete green technology innovation and is negatively related to the probability of local governments choosing the strategy of strict supervision. The research conclusions provide a new explanation for the distorted enforcement of environmental regulations from the perspective of the spillover of local governments\u2019 investment behaviour. Since reform and opening up, China\u2019s economy has grown by leaps and bounds and has become the world\u2019s second largest economy after the United States. However, with rapid economic expansion, China\u2019s ecological environment has been seriously damaged. 
In the face of the dual pressure of environmental protection and economic development, the 19th National Congress of the Communist Party of China proposed adhering to the basic national policy of \u2018saving resources and protecting the environment\u2019 and emphasized that green technology innovation is a major measure to promote the transformation of the production mode of enterprises and to realize the win-win situation of economic benefits and environmental effects. Green technology innovation refers to a series of technologies in the production process, such as improving the production process, product structure, and innovative production activities, which are conducive to reducing energy consumption, reducing pollutants and improving production efficiency . EnterprChina\u2019s environmental policies are formulated by the central government and implemented by local governments, and there is information asymmetry in this principal\u2013agent relationship. Local governments in various regions are motivated to distort the enforcement of environmental regulations, such as reducing enforcement, providing tax incentives and lowering environmental standards . HoweverThe existing literature focuses on the influencing factors of the game mechanism of environmental regulations between local governments and enterprises, but few studies have considered the spillover effect of local governments\u2019 investment behaviour into the game system of environmental regulations between local governments and enterprises. Some scholars have proposed that local governments\u2019 behavioural decisions not only affect the improvement of regional environmental quality but also influence the environmental quality of neighbouring areas . When thIn view of this, based on the positive and negative external environmental effects of local governments\u2019 investment behaviours, this paper establishes an evolutionary game model between local governments and enterprises to identify how spillover effects influence the implementation mechanism of environmental regulations. The evolutionary trajectories of local governments and enterprises to realize the system\u2019s evolutionary stability equilibrium strategy under different circumstances are analysed. Furthermore, through numerical simulation, the influence of positive and negative externality spillover effects of local governments\u2019 investment behaviours, punishment and reward for enterprises\u2019 green technology innovation, and enterprises\u2019 emission reduction capacity on the strategy choice of local governments and enterprises is further examined. The main contribution of this paper is that the spillover effect of local governments\u2019 investment behaviour is incorporated into the study of the game relationship between local governments and enterprises, and the influence of the spillover effect on the strategy choice of local governments and enterprises is analysed. The existing literature focuses on the influence of incentive and constraint mechanisms among different stakeholders on the strategy choice of environmental regulatory subjects, while the spillover effect of local governments\u2019 investment behaviour has not been considered. 
In addition, this paper adopts the method of case analysis combined with numerical simulation based on the research objects of the governments of Hubei Province, Hunan Province, and the YT Environmental Protection Technology Company, and quantitatively explores the impact of some key factors on the strategy choice of both parties in the game to effectively control the influence of the spillover effect. The conclusion of this paper provides a new explanation for the failure of environmental regulations and is conducive to identifying factors that affect the implementation distortion of environmental regulations, which provides a policy basis for the central government to improve local governments\u2019 environmental regulation efficiency.The implementation process of environmental regulations is the game process of local governments and enterprises . The acaIn the context of fiscal decentralization, performance appraisals based on GDP make local governments compete fiercely for resources to increase the probability of political promotion, thus affecting the impact of the implementation of environmental regulations. Due to the inevitable preference conflict between performance appraisals and residents\u2019 welfare, local governments distort the implementation of environmental regulations . Fiscal (1)The environmental regulation strategy of the \u2018race to the bottom\u2019Most early scholars adopted the viewpoint of the \u2018race to the bottom\u2019; that is, local government competition, reduces the intensity of environmental regulations. Wilson and Raus(2)The environmental regulation strategy of the \u2018race to the top\u20192 emissions reduction but did not have competitive behaviour in SO2.With the deepening of the research, an increasing number of people put forward the view of the \u2018race to the top\u2019; that is, local government competition prompts local governments to increase the intensity of environmental regulations. Potoski examined(3)The differentiated competition environmental regulation strategyA few scholars have noted that environmental regulation competition among local governments involves differentiated competition; that is, local government competition has no significant impact on regional environmental policies. Chirinko and Wilson showed tIn our research, local governments\u2019 behaviour refers to the supporting behaviours and preferential policies related to economic development and environmental protection. Competition among local governments causes officials in different regions to imitate or fight over policies around competition for promotion opportunities, the supply of public products, and the introduction of liquidity factors, which aggravates the spillover effect of local governments\u2019 investment behaviour on regional development. There are many types of formal and informal connection in regional economic activities, which makes it easier for enterprises in the region to obtain information and knowledge from neighbouring regions. Therefore, the decision-making behaviour of regional local governments has mutual influence and mutual radiation, leading to a positive spillover effect and a negative spillover effect.On the one hand, the behaviour of local governments supporting local economic development has a negative externality on the environmental governance of adjacent areas, which has increased environmental pollution due to the improvement of regional production capacity. Hao et al. 
found thOn the other hand, the behaviour of local governments supporting local environmental governance has improved the local environmental quality due to the reduction of environmental pollution and has a positive externality to the environmental governance of neighbouring areas. First, regional environmental regulation has a positive environmental spillover effect, which comes from the regional \u2018demonstration effect\u2019 and \u2018warning effect\u2019. Zhao et al. found thThe spillover effect can occur horizontally or vertically, so the measure of spillover effects in the existing literature varies with the direction of spillover. In the study of vertical spillover effects, Newman et al. used theBy combining and summarizing the existing literature, we can see that existing scholars have conducted considerable research on local government competition and environmental regulations. It is widely believed that local government competition has a significant impact on regional environmental protection enforcement investment, but there are great differences in the research conclusions. In addition to differences in index measurement, research methods and research sample selection, the key problem is the lack of in-depth research on environmental regulation transmission mechanisms, ignoring the spillover effect of local governments\u2019 behaviour. In fact, the strategy choice of local governments competing with each other is the reaction function of each other\u2019s behavioural decisions. When the local area increases the investment in environmental protection, the environmental quality and the neighbouring areas will also be improved. When the local area increases the investment in production, pollutant spillover will cause environmental damage to neighbouring areas.Yi establisIn China\u2019s administrative system, the central government is responsible for formulating environmental policies, while local governments supervise and manage enterprises\u2019 energy conservation and emission reduction activities by implementing environmental regulations. Local governments are responsible for the economic construction and environmental protection of the region, so local governments have two types of investment behaviour. The first is economic constructive investment, which refers to local governments providing financial support for local natural resource development and infrastructure construction to encourage enterprise to actively participate in activities related to economic development. The second is environmental protection investment, which refers to local governments providing financial support to promote enterprises to conduct energy conservation and emission reduction, including special funds, energy conservation and emission reduction subsidies, and tax incentives. The behaviour of local governments in supporting local economic development has a negative externality on the environmental governance of adjacent areas, which has increased environmental pollution due to the improvement of regional production capacity. The behaviour of local governments in supporting local environmental governance has improved the local environmental quality due to the reduction of environmental pollution and has a positive externality to the environmental governance of neighbouring areas.To simplify the analysis, this paper takes market incentive environmental regulation as an example. Local governments will charge enterprises to discharge pollutants to keep their emissions within a certain range. 
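As a preview of the formal assumptions stated next, the following minimal Python sketch shows the general form of the two-population replicator dynamics analysed in this section, with the enterprise's share of complete green technology innovation and the local government's share of strict supervision evolving together. The payoff expressions and every parameter value here (reward R, fine F, innovation cost C, environmental benefit G, monitoring cost M) are hypothetical placeholders chosen only to make the sketch runnable; they are not the paper's payoff matrix or calibrated parameters, which additionally contain the spillover, discharge-fee and investment terms defined in the assumptions below.

# Hypothetical payoff parameters (illustrative placeholders, not the paper's calibrated values).
# x denotes the share of enterprises choosing complete green technology innovation,
# y the share of local governments choosing strict supervision.
R = 2.0   # reward to a fully innovating enterprise under strict supervision
F = 3.0   # fine on an incompletely innovating enterprise under strict supervision
C = 1.5   # extra cost of complete green technology innovation
G = 1.0   # net environmental benefit to a strictly supervising government
M = 0.8   # monitoring cost of strict supervision

def enterprise_payoffs(y):
    # expected payoffs of complete vs. incomplete innovation given supervision level y
    return y * R - C, -y * F

def government_payoffs(x):
    # expected payoffs of strict vs. non-strict supervision given innovation level x
    return G - M - x * R + (1 - x) * F, 0.0

def replicator_step(x, y, dt=0.01):
    # replicator dynamics: dx/dt = x(1 - x)(u_complete - u_incomplete), likewise for y
    ue1, ue2 = enterprise_payoffs(y)
    ug1, ug2 = government_payoffs(x)
    x_new = x + dt * x * (1 - x) * (ue1 - ue2)
    y_new = y + dt * y * (1 - y) * (ug1 - ug2)
    return min(max(x_new, 0.0), 1.0), min(max(y_new, 0.0), 1.0)

x, y = 0.3, 0.6              # initial strategy shares
for _ in range(20000):       # crude forward-Euler integration of the two ODEs
    x, y = replicator_step(x, y)
print(f"long-run shares: innovating enterprises x={x:.3f}, strict governments y={y:.3f}")

Substituting the full payoff expressions from the payoff matrix constructed in the assumptions that follow would recover the form of the replication dynamic equations used later in this section.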
In addition, local governments\u2019 environmental protection investment behaviour is affected by the spillover effect of neighbouring governments\u2019 investment behaviour.Assumption\u00a01.Based on the work of Yi , there aand within the jurisdiction of the central government. The total investment budget of the local government is, which can be used to support the development of environmental protection and economic construction. Either party\u2019s investment in environmental protection can bring benefits to both parties, any party\u2019s investment in economic production can benefit itself, and the production of pollutants can reduce the benefits of the other party. Namely, the economic constructive investmenthas a negative externality, and the environmental protection investment behaviourhas positive externalities.Assumption\u00a02.Local governments\u2019 environmental protection investmentnot only brought environmental benefitsto local governments but also contributed to reducing the cost of enterprise green technology innovation. The cost reduction of enterprises is, whereandare the environmental benefit coefficients obtained by local governments and enterprises, respectively, andAssumption\u00a03.If the local government strictly implements environmental regulations, the reward for an enterprise\u2019s complete green technology innovation behaviour is, and the punishment for an enterprise\u2019s incomplete green technology innovation behaviour is. At the same time, when the enterprise\u2019s emission reduction rate is, the local government charges pollutant discharge feesfor the enterprise\u2019s pollutant discharge, andis the sewage charge that the enterprise needs to pay per unit of sewage discharge .Based on the literature of Yi , the invWhen governments Combined with optimization theory according to Formula (2), the optimal environmental protection investment benefit If the local government chooses the strategy of strict supervision and the enterprise chooses the strategy of complete green technology innovation, the local government\u2019s profit function consists of environmental investment returns According to the above description, the game payment matrix of local governments and enterprises can be obtained, as shown in An enterprise\u2019s adaptability in choosing the strategy of complete green technology innovation is as follows:An enterprise\u2019s adaptability in choosing the strategy of incomplete green technology innovation is as follows:The average fitness is:The dynamic equation of enterprise replication is as follows:The local government\u2019s adaptability in choosing the strategy of strict supervision is:The local government\u2019s adaptability in choosing the strategy of non-strict supervision is:The average fitness is:Likewise, the replication dynamic equation of the local government is as follows:According to the Malthusian equation , the groThe strategy combination, corresponding to the equilibrium point calculated by Equation (11), is an equilibrium of the evolutionary game.Proposition\u00a01.The equilibrium point of system (11) is ,When:so the system\u2019s equilibrium point isProof.\u00a0According to equations Set According to Equation (11), the Jacobian matrix of the system can be obtained as follows Substituting the equilibrium point into equations When strategy . CombineSituation 1: When Situation 2: When Situation 3: When Situation 4: When According to the equation ofWhen the initial state of the system falls in region I, the game converges to . 
When the initial state of the system falls in region II, the game converges to . When the initial state of the system falls in region III, the game converges to . When the initial state of the system falls in region IV, the game converges to .Situation 5: When Situation 6: When Situation 7: When Situation 8: When Situation 9: When Based on the above dynamic evolutionary model, there are four evolutionary stability strategies for the behavioural choice of local governments and enterprises in the environmental regulation transmission mechanism. In addition, the evolutionary path of the two parties in the game is related to the game payout matrix and changes in some central parameters. The impact of the main factors related to the behavioural characteristics of local governments and enterprises on the equilibrium results of system evolution has been discussed.(1)The impact of the spillover of local governments\u2019 investment behaviour on the choice of local governments\u2019 environmental regulation strategy.The spillover of local governments\u2019 investment behaviour reflects the impact of regional government environmental regulation decisions on the environmental performance of neighbouring regions. Whether the spillover effect is positive or negative related to the probability of local governments choosing the strategy of strict supervision depends on whether the spillover effect is a \u2018demonstration effect\u2019 or a \u2018free rider\u2019 on the neighbouring areas. When the environmental protection awareness of local residents is low, environmental regulations cannot effectively encourage local enterprises to consciously save energy and reduce emissions, and the environmental protection investment benefits of local governments are relatively low. Therefore, neighbouring regional governments and enterprises have \u2018free-riding\u2019 behaviours in energy conservation and emission reduction. Local governments will be more likely to choose strict supervision strategies because of the low enthusiasm of local enterprises to save energy and reduce emissions. When the environmental protection awareness of local residents is high, environmental regulations can effectively encourage local enterprises to consciously save energy and reduce emissions, and the environmental protection investment performance of local governments is relatively large. Hence, the environmental quality improvement and environmental pollution brought by the investment of local governments have a \u2018warning effect\u2019 and \u2018demonstration effect\u2019 on the environmental protection of neighbouring areas. Local governments will be more likely to abandon strict supervision strategies because of the high enthusiasm of local enterprises to save energy and reduce emissions.Proof.\u00a0Please see the Proof of Proposition A1 in (2)The impact of local government reward and punishment for enterprises\u2019 green technology innovation behaviours on the strategy choice of local governments and enterprises.The punishment and reward for enterprises\u2019 green technology innovation behaviour by the local government is directly related to the costs and benefits of enterprises and local governments. The higher the reward for enterprises\u2019 complete green technology innovation behaviour, the greater the green technology innovation benefits of the enterprises, and the higher the environmental protection cost of the local government. 
The greater the punishment for enterprises\u2019 incomplete green technology innovation behaviour, the higher the green technology innovation cost of enterprises, and the lower the environmental protection cost of local governments. Therefore, the greater the reward, the greater the probability of enterprises choosing complete green technology innovation strategy to obtain more benefits, and the lower the probability of local governments choosing the strict regulation strategy to reduce management costs. The greater the punishment is, the greater the probability of enterprises choosing a complete green technology innovation strategy to reduce costs. The impact of punishment on the choice of the local government\u2019s environmental regulation strategy depends on the trade-off between the environmental benefits and economic losses brought about by the punishment.Proof.\u00a0Please see the Proof of Proposition A2 in (3)The impact of enterprises\u2019 emission reduction capability on the strategy choice of local governments and enterprises.Emission reduction capability determines the difficulty of green technology innovation for enterprises. The stronger the emission reduction capability is, the more easily enterprises can develop and utilize clean technologies, and the higher the willingness of enterprises to choose complete green technology innovation. The high willingness of enterprises to engage in green technology innovation reduces the effectiveness of strict environmental supervision, and the local government will abandon the strategy of strict environmental supervision.Proof.\u00a0Please see the Proof of Proposition A3 in To better describe the influence of various parameters on the strategy choice of local governments and enterprises in the transmission mechanism of environmental regulations, a combination of case analysis and numerical simulation has been adopted to quantitatively examine the influence of different parameters on the choice behaviour of both sides of the game.The 11 provinces of the Yangtze River Economic Belt regard the restoration of the Yangtze River\u2019s ecological environment as an overwhelming task. Lake governance, especially the governance of cross regional large lakes, requires the joint efforts of all governments in the basin, realizing joint prevention and control. If only one party carries out pollution control and the other party carries out pollution discharge, it is difficult to achieve the expected effect of environmental improvement. Hunan and Hubei provinces have similar economic development and adjacent geographic positions in the central region and are two important areas adjacent to the Yangtze River Economic Belt, the ecological restoration of which plays an important role in the growth of the Yangtze River Economic Belt. Although the central government and provincial government have invested a lot of funds in lake protection and governance, the efficiency of governance is low due to the problems of cross regional sewage transfer and unclear responsibility for pollution control. Therefore, grasping the spillover effects of local government investment behavior is a prerequisite for establishing an effective joint prevention and control mechanism in the governance of trans-basin rivers.As an important environmental protection enterprise in the Yangtze River Economic Zone, YT Environmental Protection Technology Co., Ltd. 
is a company specializing in environmental engineering general contracting, environmental protection facility renovation, trusteeship operation, water treatment pharmaceutical series products, and environmental protection equipment research and development. Its green technology innovation behaviour is supervised and managed by the Hunan provincial government. This paper takes the Hubei provincial government, Hunan provincial government, and YT Environmental Protection Technology Co., Ltd. as the research objects and focuses on the influence of the competition between the Hubei provincial government and the Hunan provincial government on the choice of Y T Environmental Protection Technology Co., Ltd.\u2019s green technology innovation strategy.For the sake of evolutionary game analysis, the key stakeholders are simplified as Hunan province local government and the YT company, focusing on the behavioral spillover effect of the Hubei province local government. Based on in-person investigations and data collection from the relevant government departments and enterprises, we use the following series of parameter values as the benchmark: the supervision intensity of local governments choosing strategy of no-strict supervision as According to the benchmark value, the influence of different key parameters on the evolutionary trajectory of the two sides of the game is numerically simulated and analysed as shown in (1)(i)As seen in (ii)As shown in (2)(3)The impact of punishment (i)When (ii)When (4)From the perspective of local government competition, the spillover of local governments\u2019 investment behaviour has been incorporated into the objective function of the local governments to construct an evolutionary game model between local governments and the enterprises. The following conclusions are obtained.(1)(I)strict supervision and complete green technology innovation;(II)non-strict supervision and complete green technology innovation;(III)strict supervision and incomplete green technology innovation; and(IV)non-strict supervision and incomplete green technology innovation.Under the constraints of environmental regulation, spillover effects have a significant impact on the payment function of local governments, leading to distortions in the implementation of environmental regulations. When the payment function changes, there are four evolutionary stability strategies in the evolutionary game system of local governments and enterprises:(2)When the environmental benefit of the local governments\u2019 environmental protection investment behaviour is less than the threshold, the spillover of local government investment behaviour may lead to the reduction of regional environmental protection investment for the effect of \u2018competition to the bottom\u2019 and \u2018free riding behaviour\u2019, which enhances the dependence of the improvement of environmental quality on the strict supervision of local governments. Hence, the probability of local governments choosing the strict supervision strategy increases with the increase in the spillover effect. When the environmental benefit of the local governments\u2019 environmental protection investment behaviour is greater than the threshold, clean technology and environmental pollution in external areas may encourage local enterprises to increase investment in energy conservation and emission reduction for the \u2018demonstration effect\u2019 and the \u2018warning effect\u2019. 
This leads to the low dependence of the improvement of environmental quality on the strict supervision of local governments. Hence, the probability of local governments choosing a strict supervision strategy decreases with the increase in the spillover effect of strict supervision.(3)The reward for enterprises\u2019 complete green technology innovation behaviour increases enterprises\u2019 profits, while punishment for enterprises\u2019 incomplete green technology innovation behaviour increases enterprises\u2019 costs. The choice of complete green technology innovation strategy is the best choice to increase revenue and reduce costs for enterprises. Hence, with the increase in reward and punishment, the probability of enterprises choosing complete green technology innovation strategy gradually increases.(4)The reward for enterprises\u2019 complete green technology innovation behaviour increases the environmental governance costs of local governments and reduces the willingness of local governments to strictly supervise. The impact of punishment on the probability of the local government choosing strict supervision strategy is uncertain.(5)The stronger enterprises\u2019 emission reduction capability is, the greater the willingness of enterprises to save energy and reduce emissions, leading to a decline in the willingness of local governments to strictly supervise.First, the central government should formulate an environmental governance supervision system according to the differences in the environmental awareness and environmental responsibility of residents in different regions, making full use of the \u2018demonstration effect\u2019 of regional environmental governance behaviour spillover to produce a positive impact on regional environmental governance decision-making. Within the region, the central government should unify regional environmental management laws, standards, and policy systems, establish regional pollution compensation mechanisms, benefit coordination mechanisms, and \u2018green GDP\u2019 competition mechanisms, and improve environmental information sharing mechanisms, joint early warning mechanisms, and demonstration effect mechanisms.Second, local governments should formulate differentiated reward and punishment mechanisms for green technology innovation in combination with the differences in environmental quality in different regions. Local governments should guide enterprises to actively carry out green technology innovation activities through appropriate fiscal and tax preferential policies and increase support for energy conservation, emission reduction and pollution prevention technology research and development. To form an effective incentive and restraint mechanism for green technological innovation, reward should be increased in areas with better environmental quality and punishment should be strengthened in areas with poor environmental quality.Third, local governments should formulate differentiated supervision systems based on individual differences in the emission reduction capabilities of different companies. Through incentive means, enterprises with strong emission reduction capacity should be guided to give full play to the \u2018demonstration effect\u2019, strengthen the cooperation of regional enterprises in the research and development and utilization of green technology, speed up the elimination of ineffective production capacity, and improve environmental production performance. 
At the same time, local governments should strengthen the supervision of the energy-saving and emission-reduction behaviours of enterprises with poor emission reduction capabilities, not only to avoid the effect of \u2018competition on the bottom line\u2019 among enterprises but also to prevent the transfer of heavily polluting enterprises between different regions.Based on evolutionary game theory, this paper studies the impact of external government investment spillover on the strategic choice of local governments and enterprises in regional competition and suggests a direction for improving the transmission mechanism of environmental regulation. However, in the construction of the game relationship between environmental regulation subjects, only the key factors in the behavioural characteristics of local governments and enterprises are considered. Future research will combine risk preference, environmental awareness, and social responsibility factors to examine the impact of local government competition on the strategic choice of game players."} +{"text": "This review focuses on peripheral forms of hereditary hearing loss and how these impairments can be studied in diverse animal models or patient-derived cells with the ultimate goal of using the knowledge gained to understand the underlying biology and treat hearing loss.Inherited forms of deafness account for a sizable portion of hearing loss among children and adult populations. Many patients with sensorineural deficits have pathological manifestations in the peripheral auditory system, the inner ear. Within the hearing organ, the cochlea, most of the genetic forms of hearing loss involve defects in sensory detection and to some extent, signaling to the brain Impairment of hearing can be due to several factors including environmental insults, the effects of aging, and hereditary defects. Of these three forms, the most common form is due to aging , which c1 Of the non-syndromic forms, more than 120 genes have been implicated in hearing loss and the majority of cases involve recessive mutations in which both copies of the gene are mutated. Approximately a third of the cases are dominant, requiring only one copy to be mutated. As with aging, sometimes the middle ear is affected, leading to a loss of conduction of sound. However, the majority of mutations are sensorineural in nature, mainly affecting the inner ear, with many having developmental or functional consequences for hair cells.Hereditary hearing loss is one of the most common sensory deficits in humans affecting one out every 500 newborns . The levEfforts to understand the etiologies associated with hearing loss have been ongoing for several decades. The purpose of this review is to highlight a few recent studies of non-syndromic hereditary hearing loss that illustrate the different types of pathology found in the inner ear. The following studies focus on three different tissue or cell types of the cochlea, namely the stria vascularis, sensory epithelium and the afferent neurons of the spiral ganglion. These studies were also chosen based on the variety of animal models or the use of human-derived cell lines to determine the function of the genes. 
Due to the inaccessibility of the inner ear in patients, a basic scientific approach with models is necessary to gain a better understanding of the nature of the defects caused by genetic variants that are associated with human hearing loss.An important prerequisite for hair-cell function is the presence of an ionic environment that is conducive to excitation. Unlike other extracellular fluids throughout the body, the fluid inside the scala media of the inner ear is exceedingly rich in potassium ions . The higAlthough marginal or \u201cdark\u201d cells found in the vestibular inner ear of vertebrates are thought to perform a similar function to the stria vascularis, the electrochemical potential created is substantially lower and defined cell layers are not evident . FurtherHGF). Despite the name suggesting a specific role in the liver, this extracellular ligand has been implicated in many biological processes involving cell proliferation, survival, and motility. More surprisingly, the only disease in humans associated with mutations in HGF is non-syndromic hearing loss. In mice, the knock-out of Hgf results in embryonic lethality, whereas a conditional knock-out in the inner ear does indeed result in deafness . Zebrafish have an inner ear that is anatomically similar to the vestibular portion of human ears. For hearing, they use the saccular end-organ. The sensory hair cells in the zebrafish inner ear rely on many of the same basic components necessary for hearing and balance in humans, and well over a dozen models of human hearing loss have been generated and studied in detail [for comprehensive reviews of studies in zebrafish see As an illustration of (i) using a non-rodent model, the zebrafish, to study human hearing loss, and (ii) the ability to compare the findings with this animal model to that of the mouse model, this review will focus on two recent studies of transmembrane inner ear play a more critical role in hearing projecting out of the bone-encased inner ear into the hindbrain. The fibers of the afferent neurons in the cochlea along with the cell bodies comprising the spiral ganglion are shown in yellow in DIAPH1, TMPRSS3 and PJVK and the vertebrate inner ear. Of the genes required for auditory function in fruit flies, 20% have human orthologs implicated in deafness in the antenna harboring the Johnston\u2019s organ caused deficits in gravity sensing and balance in adult flies. At the cellular level, the sensory scolopidia were disorganized, including abnormalities of internal structures such as actin bundles, and there was a loss of synapses formed by scolopidia neurons in the fly brain. To test the whether the Gln104Leu variant was causative, the authors compared rescue of the behavioral defects in the drk fly mutant by expressing the wild type human gene and the human Q104L variant and found that the variant form of GRAP was ineffective at rescuing the fly phenotype. The ability to rescue the deficits of an animal model with the corresponding human gene and test newly identified variants is an important and much needed approach for establishing causality of mutations found in the genome of hearing loss patients.A recent study of a novel deafness gene serves as an excellent example of the use of flies to study hereditary hearing loss . In thisin vitro experiments. The authors found that sensory neural-like cells heterozygous for the S1400G mutation had defects in later steps of differentiation and physiological responses. 
These defects were accompanied by a decrease in microtubule dynamics. Most of the heterozygous S1400G cells had shorter neurites in contrast to the CRISPR-corrected cells or sibling-derived cells. To further confirm their findings, the authors engineered a mouse model expressing the same dominant mutation. Heterozygous mutant mice displayed moderate yet progressive hearing loss. In addition, primary cultures of the spiral ganglion neurons exhibited similar decreases in neurite length and altered electrophysiological properties.The study of animal models is invaluable for understanding the potential etiology caused by mutations. Nevertheless, with the recent advances in stem cell biology, it is also possible to take cells from patients such as blood cells and reprogram them into cells that resemble those in the auditory system. in vitro studies. This approach is an invaluable method for assessing the pathological consequences of a mutation in human cells. Although the technology is in its infancy, another potentially useful method is to generate human organoids of the inner ear. These 3D structures contain cell types that resemble hair cells and the sensory neurons that innervate them (The above study highlights the utility of reprogramming patient-derived cells into a desired cell type for ate them . Inner evia genomic sequencing and to establish animal models or study patient derived cells is a key step toward selecting the appropriate therapeutic approaches to ameliorate hearing loss.In summary, hereditary sensorineural forms of hearing loss primarily affect the tissues and cell types of the inner ear that are vital to sensing sound and transmitting signals to the brain. Our knowledge of the genetics of hearing loss is growing as the pace of identifying novel human mutations associated with impairments in hearing is rapidly increasing . This inDrosophila are highly advanced, the existence of an obvious ortholog of a human deafness gene is not always the case. In addition, extrapolating the pathological defects in fly mutants to potential defects in human patients can be challenging due to the very different structures of the hearing organs in flies. With respect to zebrafish, some deafness genes may be duplicated due to the large-scale duplication of approximately 40% of the genome in teleost fish during evolution. Depending on the expression pattern of the gene duplicates, it may be necessary knock out and analyze two genes instead of one. Also, fish do not have a cochlea; questions about the structures or tissues present only in the mammalian cochlea cannot be addressed. In mice, the need for dissection and cochlear explants to study the cellular defects makes the work more challenging. Generally, research with this particular animal model tends to be more costly and time consuming. Branching out to the use of human stem cells and organoids to study hearing loss is certainly exciting and gaining traction. Whether in vitro differentiated cells truly resemble desired cell types or how well the organoids recapitulate the environment of the inner ear is, however, not clear. To date, generating auditory hair cells from stem cells has yet to be achieved. Despite the above shortcomings, animal and cell models have been very valuable for studies of human hearing loss and continuing efforts with these models will undoubtedly yield more insights into the defects and pathology at the molecular, cellular and physiological level.The choice of animal or cell model is dictated by several factors. 
Although the genetic methods in Aside from helping patients navigate hearing loss, research with animal models and patient-derived cells increases our basic understanding of how the inner ear works. These studies also add fascinating insights of how this remarkable sensory organ evolved and they provide clues about strategies that are employed to suit the auditory needs of a particular animal. On the whole, the interplay between basic and translational research is a fruitful one and offers hope to patients with hearing loss.The author wrote the manuscript and confirms being the sole contributor of this work and has approved it for publication.The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Systemic Lupus Erythematosus (SLE) is a heterogeneous autoimmune disease characterized by hyperactive immune responses leading to severe and potentially life-threatening organ damage. Recent work indicates that in many patients increased interferon-driven responses are the root of the increased immune responses and the first therapy directed against the Type I Interferon receptor was recently approved for the treatment of SLE .A genetic polymorphism in purine nucleotide phosphorylase (PNP), which catalyzes the hydrolysis of adenosine and inosine to adenine and hypoxanthine, respectively, is strongly associated with the risk for development of SLE in patients with high levels of Interferon In this issue of EBiomedicine Hesse and colleagues report that, although CD73 expression remains relatively unchanged, the enzymatic activity of CD73 is markedly reduced on the surface of B cells from patients with SLE The authors speculate that the reduction of adenosine in the extracellular milieu of the B cells of these patients would lead to increased activity of the immune system, a finding that could predispose to the development of SLE. Nonetheless, it is unclear whether diminished enzymatic activity contributes to the development of SLE or is a result of SLE disease activity. It is also tempting to speculate that since CD73 is GPI-linked, like cell surface complement-regulatory proteins, that secondary alterations in the plasma membrane could lead to altered expression on the cell surface with diminished enzymatic activity. It would also be interesting to know whether the reduced activity of CD73 is associated with increased disease activity or manifestation of disease (skin vs kidney for example).Nonetheless, these questions aside, the authors have made an interesting finding that could suggest a complementary mechanism for the overactivity of SLE lymphocytes and the resultant autoimmunity. These findings complement earlier observations of the effect of changes in purine metabolism which offer a potential metabolic basis for SLE.Dr Cronstein is Chair of the Scientific Advisory Board and holds stock in Regenosine, LLC, a company that has licensed technology patented by his team and assigned to NYU Grossman School of Medicine."} +{"text": "This paper introduces for the first time the equal intercept transformation radar chart\u2014an improved form\u2014to the assessment of soil environmental quality of Nanling commodity grain base. The equal intercept transformation radar chart, a visual graphical data analysis method, translates data from a numerical to graphical format. 
This visualization enables data presentation, analysis process and results stick out a mile and is capable of fully retaining information contained in data and excavating it in depth from geometry. Moreover, it overcomes pertinently the main defect of the conventional radar chart that the evaluation result depends heavily on the order of arrangement of indicators. The results indicated that the soil environmental quality at depths of 0\u201360 cm in the low mountain area of the Nanling commodity grain base was the second grade, while that in the hilly and plain areas were both first grade. The indicators of poor soil environmental quality in the low mountain area were exogenous Cd and endogenous As; those in the hilly area were exogenous Cd and endogenous As and Hg; and that in the plain area was exogenous Cd. The results were in line with the actual situation of the study area. Soil quality integrates inherent and dynamic soil properties and is influenced by land use and management practices that interacts with the soil system2. Soil environmental quality, as an important component of soil quality, reflects the level of harmful substances in the soil. The scientific, accurate and comprehensive assessment of the soil environmental quality of the study area has important significance for planning land resources rationally, developing high-standard farmland, enhancing characteristic agriculture and improving the quality and efficiency of agricultural production4.Nanling important commodity grain base is an important component of the Wan Jiang Economic Belt and is a national high-standard farmland demonstration area that produces high-quality rice and vegetables. The selection of planting patterns and the consistent production of high crop yields are largely controlled by the soil properties and quality of the base. Soil quality can be defined as the capacity of a specific type of soil to sustain plant and animal productivity, maintain environmental quality and support human health and habitation within natural or managed boundaries8. The commonly used methods of soil environment quality assessment include the single factor index method, comprehensive index method, fuzzy mathematics method, multivariate statistical analysis and artificial neural network method.Many methods and various models have been used for soil quality assessment12. Weissmannov\u00e1 and Pavlovsk\u00fd13 provided a review of assessments of soil quality using various indices. The indices were divided into individual indices, and total comprehensive indices and the calculation formulas for every index, along with the classes of contamination or risk of soil indicated by the corresponding index value, were presented. This method can only provide the grade of soil quality and is unable to display the difference in indicator content, thereby losing some unique information originally present in the data.The comprehensive index method has been successfully used to assess soil quality in many regions, at different scales and under different agricultural management practices.14 applied the fuzzy mathematical method to the environmental risk assessment of soil at a petroleum-contaminated site in China and distinguished the primary environmental risk in the soil. This method considers the fuzziness of evaluation, but the determination of fuzzy weight is subjective, which directly affects the reliability of evaluation results. 
Singh et al.15 performed an environmental risk assessment of heavy metal pollution using multivariate analysis in the soils of Varanasi, India, and identified the principal contaminants. However, this method has shortcomings in terms of classification and consistency checks. Liu et al.16 assessed the soil quality of soil polluted with heavy metals in Tai Yuan city based on the support vector machine method and provided a classification of the soil quality. This type of intelligent algorithm has a strong ability to build mapping relationships, but the model training and computation processes are complex.Hu et alA radar chart, which refers to charts that resemble navigation radar graphics and are also known as spider charts, is a graphical method of data analysis. In this method, the value of multiple related attributes are drawn by a certain method; then, through analysis of drawn charts, the subject of analysis can be comprehensively evaluated. Radar chart are mainly used in the evaluation of an enterprise's financial condition, operation risk assessment and so on.17 applied the radar chart method to audit the economic benefits of two companies, obtained evaluation results, and analyzed the reasons to provide corresponding suggestions.Du.18 applied a radar chart to distinguish basalt tectonic environments and trace mineral source areas and analyzed its applicability and ability to achieve certain effects. Zhang19 compared and analyzed the environmental quality of different regions and different years using the radar chart method and obtained a comprehensive environmental index. In the assessment of soil environmental quality, the use of radar charts is still rare. Besides, there is a major weakness in traditional radar chart method that the evaluation result varies wildly from one order of arrangement of indicators to another.In the fields of the Earth sciences and environmental quality, the radar chart method has rarely been used: Zhang et al20, not to mention application of it to study of soil environmental quality.The equal intercept transformation radar chart could effectively address this disadvantage by adding equal intercept axes. There are very few works published on the equal intercept transformation radar chartIn order to verify the applicability of equal intercept transformation radar chart and improve the evaluation results, this paper introduced originally the equal intercept transformation radar charts in the assessment of the environmental quality of shallow soil in different landforms and depths in the Nanling commodity grain base. In the assessment, the equal intercept transformation radar chart area was used to represent the soil environmental quality level.23.Unlike other assessment methods of soil environment quality in common use, the equal intercept transformation radar chart, as a visual graphical data analysis method, translates data from a numerical to graphical format. This visualization enables data presentation, analysis process and results intuitive and is capable of fully retaining information contained data and examining it in depth from geometry. 
Meanwhile, the equal intercept transformation radar chart is totally independent of the orders of arrangement of indicatorsThis study could broaden the scope of application of equal intercept transformation radar chart and improve the evaluation of soil environmental quality further.The equal intercept transformation radar charts of soil environmental quality at different depths in the low mountain area, hilly area and plain area of the study area are shown Figs. The area of an equal intercept transformation radar chart represents the soil environmental quality. The areas of the above equal intercept transformation radar charts are shown in Table The areas of the soil environmental quality equal intercept transformation radar charts of various geomorphic units at different depths were compared with the standard radar chart. If the measured area is less than the standard radar chart area of the first grade, the soil environmental quality level of the corresponding geomorphic unit and corresponding depth is classified as the first grade. If the area is between the standard radar chart area of the first grade and that of the second grade, the soil environmental quality level of the corresponding geomorphic unit and corresponding depth is classified as the second grade. According to this, the soil environmental quality level in each depth on each geomorphic unit can be obtained , hilly areas (II) and plain areas (III). The soil types in the study area are mainly acidic paddy soils.The site is an important part of the Wan Jiang Economic Zone. Nanling County is the main grain-producing area in Anhui Province, and rice and tea are abundant in this site. Scientific evaluation of soil environmental quality is therefore critical to high-standard farmland planning and development in this site.20. Its basic elements include: first, N rays drew with the same angle from the same origin as figure axis respectively represent the N indicators of the evaluation objects; second, a fixed length line segment was cut on the bisector of every two adjacent index axes, i.e. \u201cequal intercept\u201d; third, With the coordinates of each index value, the N points are made on the corresponding axis; last, the N points and N ends of intercepts are connected successively by line segments to form a 2N-side polygon, equal intercept transformation radar chart.The equal intercept transformation radar chart is an improved form of conventional radar chart by adding equal intercept axesIt is suitable for the comprehensive evaluation of an object composed of several indicators. In the application, the distances from the origin to the end of different axes should not vary too much. If the gaps between different indicators are too large, it needs to be scaled by a certain method so that the corresponding length of the line segments on the graph is in the same order of magnitude. 
The values represented by the coordinate ranges and unit lengths of different axial directions may be different to suit the actual values of different indicators.This paper used 1962 groups of soil samples collected from a 1:5 million land quality geochemical survey and selected the eight indicators of Cd, Hg, As, Cu, Pb, Cr, Zn and Ni according to the People's Republic of China soil environmental quality standard (GB15618-1995 and GB15618-2008 referred) to explore the application of equal intercept transformation radar charts in the assessment of the environmental quality of shallow soil in different landforms and depths in the Nanling commodity grain base.The average values of each index of soil samples at different depths in different geomorphic areas of the study area were calculated, and equal intercept transformation radar charts were drawn accordingly. The standard values of soil environmental quality given by GB15618-1995 were also drawn on the charts. In the drafting of the charts, instead of directly using each indicator's content, the data were multiplied properly to ensure that the contents of different indicators were on the same order of magnitude.Then, these drawn radar charts were analyzed. First, a single indicator analysis was carried out to separately calculate the environmental quality levels of the soil indicators at different depths in different geomorphological areas. Second, the area of each equal intercept transformation radar chart, which represents the comprehensive soil environmental quality, was calculated (see Formula n represents the number of indicators (n\u2009=\u20098 in this paper); Ai represents the value of the i indicator/axis; L represents \u201cequal intercept\u201d (L\u2009=\u2009150 in this paper); In the formula,"} +{"text": "This study was aimed at determining the effect of microstructure on the macro-mechanical behavior of a composite solid propellant. The microstructure model of a composite solid propellant was generated using molecular dynamics algorithm. The correlation of how microstructural mechanical properties and the effect of initial interface defects in propellant act on the macro-mechanics were studied. Results of this study showed that the mechanical properties of propellant rely heavily on its mesoscopic structure. The grain filling volume fraction mainly influences the propellant initial modulus, the higher the volume fraction, the higher initial modulus. Additionally, it was found that the ratio of particles influences the tensile strength and breaking elongation rate of the propellant. The big particles could also improve the initial modulus of a propellant, but decrease its tensile strength and breaking elongation rate. Furthermore, the initial defects lowered the uniaxial tensile modulus, tensile strength, and the relaxation modulus of propellant, but did not affect the relaxation behavior of the propellant. Composite solid propellant is a high-energy composite material that is widely used as a power source for launch vehicles and various strategic and tactical missiles. The mechanical properties of the composite solid propellant greatly affect the survivability and combat capability of the missile. Moreover, composite solid propellant is composed of the hydroxyl terminated polybutadiene (HTPB) as a binder matrix and solid particles such as aluminum powder (AL), ammonium perchlorate (AP), and hexogen (RDX) as filler. 
The macro mechanical properties of composite solid propellant strongly depend on the mesostructure and its multi-scale physical process under an external load. Early research on composite solid propellants is mostly based on continuum mechanics to obtain the constitutive relationship of the composite solid propellant . HoweverNumerous experimental investigations have been conducted in recent years to explore the micromechanical properties and failure mechanisms of composite solid propellants. D. Bencher and Liu With the development of computational technologies, Matou\u0161 and Inglis develop The mechanical properties of the bonding interface between the particles and matrix are important factors that could affect the macro stress-strain relationship of propellant. Based on the characteristics of meso-damage of propellant, Li introducElsewhere, Han found thThese described studies considered that the mesoscopic composition of propellant is intact but various forms of initial defects existing in the production process of propellant were not considered. The existence of these defects may not only affect the macro mechanical properties of propellant but also affects the combustion characteristics during engine ignition. There are few studies on the effect of initial defects on the mechanical properties of propellant. It has been reported that He studied The formula and component information of a composite solid propellant were shown in It was found that the number ratio of different particles is related to their corresponding particle size. According to the size distribution of AP particles in propellant obtained through a real test given in the literature , the num2 \u00d7 4410 \u03bcm2. Similar to the experimental observation results, it was found that the particles were randomly and evenly distributed, closely staggered, while the small particles were distributed in the gap of large particles The mechanical properties of HTPB propellant depend heavily on its mesoscopic structure and the random distribution of particles hardly affects its macro mechanical properties. The simulation results present the initial modulus of the composite solid propellant increases with the increase of the particle filling volume fraction. The different particle ratio significantly affects the tensile strength and fracture elongation of the propellant. Although the existence of large particles improves the initial modulus of the propellant, it reduces its tensile strength and fracture elongation. (2) The main forms of initial defects were analyzed using a scanning electron microscope of the initial section of HTPB propellant. The filling models with different initial interface defect contents are constructed to predict the uniaxial tension and stress relaxation process. It was found that the initial defect is the main reason for the decrease of the initial modulus, relaxation modulus, and tensile strength of the composite solid propellant. However, the relaxation characteristics of the composite solid propellant will not disappear with the increase of the initial defects."} +{"text": "A structural vector autoregressive model and spillover index analysis based on generalized prediction error variance decomposition were used to explore the impact of public health emergencies on the dry bulk shipping market and provide suggestions for addressing the impact of public health emergencies. 
Moreover, the risk fluctuation and spillover of the dry bulk shipping market during public health emergencies were analyzed to understand the ways in which public health emergencies impact the dry bulk shipping market and to quantify the impact intensity. In related studies, the influence of the international crude oil price index and dry bulk ship port berthing volume were also considered. The results show that considering the immediate impact, the increase of newly confirmed cases of COVID-19 has a significant impact on the dry bulk shipping market, which lasts for more than 3 weeks and is always a negative shock. Different types of public health emergencies have different effects on the dry bulk shipping segmented shipping market. Dry bulk shipping companies should fully understand the development of public health emergencies, make full use of risk aversion forecasting tools in financial markets and make deployments for different situations. The first public case was reported on December 31, 2019, and the World Health Organization declared the 2019 novel coronavirus (COVID-19) a global pandemic on March 11, 2020. Although the severity of the pandemic in different countries and regions is different in terms of the number of confirmed cases, each country and region has been negatively affected, and the negative impacts continue to spread.A safe and stable external environment is the foundation of the sound development of each industry\u2019s economy. The prevention of a series of abrupt public health events has always been an important issue for every economy and industry. In recent years, with the global outbreak of major public health emergencies such as the SARS pandemic, the Zika virus epidemic, and the COVID-19 pandemic, public health emergencies have received increasing attention because of their high degree of suddenness, low regularity and high transmission and their strong impact on economic development. As an important bridge of economic globalization, dry bulk shipping actively helps construct the \u201cSea route of the new Silk Road\u201d. The dry bulk shipping market, which has always been closely related to the development of economic globalization, is sensitive to the impact of sudden public health events that impact the global economy.In this context, it is conducive to the sound development of dry bulk shipping, the globalization of service economy in the perspective of goods, capital, technology, personnel circulation and provide power and space for economic growth by analyzing the impact of public health emergencies on the dry bulk shipping market and the risk volatility of the dry bulk shipping segment market.According to current research and the impact of public health emergencies on markets in general, Baldwin and Tomiura predicted that the outbreak of COVID-19 would have an impact on both supply and demand of the global economy and a significant impact on international trade . Feyisa In general, the COVID-19 public health emergency has had a huge impact on the smooth operation of the global economy. The above documents provide many ideas for this article to explore the path analysis of the impact of public health emergencies on the dry bulk shipping market.According to current research and the impact of public health emergencies on shipping markets, Zhang Yongfeng and others believe that the daily increase of the COVID-19 pandemic has a significant Granger causality with the BDI index . 
Xu PeihThe impact of public health emergencies on the shipping market is sudden and severe. The impact of different public health emergencies on the risk volatility and spillover of the dry bulk shipping market is less researched.As shown in The COVID-19 pandemic was selected to explore the effect of the public health emergency on the dry bulk shipping market.The Baltic Dry Bulk Freight Index (BDI index) was selected to reflect the overall development of the dry bulk shipping market. Newly diagnosed pneumonia cases (COVID-19) were selected to reflect the development of the COVID-19 pandemic. The impact of the COVID-19 pandemic on the dry bulk shipping market was explored through the interaction between the two. At the same time, to perfect the exploration of the impact angle, the supply-side European Brent crude price index (EBSP) reflects the influence of the supply-side international crude oil market and the demand-side Clarkson Company\u2019s statistics of the daily port calls of bulk carriers above 65,000 dwt (Dead Weight Tonnage) reflect the actual demand of the dry bulk shipping market to further analyze the impact of the COVID-19 pandemic on the international dry bulk shipping market. The number of port calls per day reflects the real state of the dry bulk shipping market to a certain extent.The COVID-19 pandemic in China gradually faded away after the \u201cunblocking\u201d of Wuhan. The major trading powers adopted trade intervention with a relatively lifted ban. The dry bulk shipping market and related enterprises have gradually resumed under the series of shocks . FurtherThe BCI, BHI and BPI indices issued by the Baltic shipping Union represent the Capsize dry bulk market, the (Super) Handysize dry bulk market and the Panamax dry bulk market, respectively. The outbreak of the COVID-19 pandemic and the study period of the Zika virus were selected from January 3, 2020 to April 8, 2020 and February 1, 2016 to July 31, 2016, respectively. In February 2016, the World Health Organization (WHO) announced that the Zika epidemic in Brazil was a public health emergency of international concern, so February 1, 2016, was chosen as the starting time of the impact period of the Zika epidemic in Brazil. In September of the same year, the WHO announced that no laboratory-confirmed Zika virus cases were found in Brazil during the Olympic Games (August and September 2016). In November, it announced that the pheic status of the Zika virus epidemic in Brazil was lifted . To makePublic health emergencies are closely related to population mobility and directly affect public consumption, business tourism transportation and logistics and other industries. With population flows, public health emergencies tend to accelerate the spread of the disease. In real life, an effective and widely used strategy to limit the spread and development of public health emergencies worldwide is to limit the large-scale flow of the population. China has controlled COVID-19 well, which is the most effective way to implement the policy and is an effective way to curb the spread of public health emergencies and to prevent the spread of the pandemic.With the disposal method of \"early detection, early isolation and early treatment\" being accepted by an increasing number of countries, the means of shutdown, suspension of production, suspension of schools, road closures and even comprehensive \"city closures\" are carried out around the world. 
The implementation of an extensive segregation policy depresses the commodity consumption industry, which directly and indirectly affects the logistics and transportation industry, including dry bulk shipping. With the further development of the internationalization strategy of the world\u2019s large shipping enterprises, the shipping market, as a link between the means of production and the rest of the world, occupies an important position in the global supply chain. According to the statistics of the United Nations Conference on Trade and Development, seaborne trade accounts for 90% of total global trade by weight and for more than 70% of trade volume by value. Due to the large differences in magnitude between the data series, and to reduce the volatility of the daily time-series data and the influence of differing orders of magnitude between series, the four groups of data are logarithmically processed; the trend of the processed data is shown in the corresponding figure. In time series analysis, nonstationary characteristics often lead to spurious regression. To avoid spurious regression, the stationarity of the time series variables should be tested before analysis. The ADF (Augmented Dickey-Fuller) test is often used to test the stationarity of variables. The ADF test equation takes the standard form \u0394y_t = \u03b1 + \u03b2t + \u03b3y_(t-1) + \u2211\u03b4_i\u0394y_(t-i) + \u03b5_t (1). The results obtained with the EViews analysis software are shown in the corresponding table. When using a time series to build a model, we need to test whether there is a cointegration relationship between variables based on stationary data. Therefore, it is necessary to test the cointegration of the series. The cointegration test determines whether there is a long-term equilibrium relationship between time series. This paper uses the trace test method proposed by Johansen and Juselius (1990) to test the cointegration relationship between variables; the trace statistic is \u03bb_trace(r) = -T\u2211_(i=r+1)^n ln(1 - \u03bb_i) (2). EViews analysis software was used to test the cointegration relationship between BDI and dcov, and the results are shown in the corresponding table. The VAR model is usually used for the prediction of time series and for tracing the dynamic impact of random disturbances on the system, explaining how dynamic shocks affect the variables. The basic mathematical expression of the model, y_t = A_1y_(t-1) + \u2026 + A_py_(t-p) + \u03b5_t, is given in Formula (3), and the meaning of the variables is described in the corresponding table. The optimal lag order of the model is selected according to the AIC and SC criteria, and the analysis results are shown in the corresponding table. Through this analysis, the optimal model equations, Eqs (4) and (5), are obtained. The Granger causality test is often used to judge the causality between variables, that is, whether the change of one variable is the cause of the change of another variable. On this basis, the causal direction between the two is tested. Next, the optimal VAR model calculated above is used for the causality test, and the results are shown in the corresponding table. The Granger causality test results are consistent with the development of reality and corroborate the impact of the COVID-19 public health emergency on the dry bulk shipping market. The global spread of public health emergencies has a direct impact on the trade side. At the same time, due to strong demand and a high dependence on seafarers and other means of production, the dry bulk shipping market has been relatively strongly affected. The COVID-19 pandemic was mainly shaped by the policies of various countries and regions during the sample period and was closely related to passenger transport. 
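To make the stationarity, cointegration, lag-selection and causality steps described above concrete, the following sketch reproduces the same workflow in Python with statsmodels rather than EViews. It is illustrative only: the CSV file and the column names "bdi" and "dcov" are hypothetical placeholders, and the lag settings are assumptions rather than values taken from the study.

import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.vector_ar.vecm import coint_johansen
from statsmodels.tsa.api import VAR

df = pd.read_csv("dry_bulk_covid.csv", parse_dates=["date"], index_col="date")
data = np.log(df[["bdi", "dcov"]]).dropna()          # log-transform both series

# 1) ADF unit-root test on each (differenced) series
for col in data:
    stat, pvalue, *_ = adfuller(data[col].diff().dropna())
    print(f"ADF {col}: stat={stat:.3f}, p={pvalue:.3f}")

# 2) Johansen trace test for cointegration (constant term, one lagged difference)
jres = coint_johansen(data, det_order=0, k_ar_diff=1)
print("trace statistics:", jres.lr1, "95% critical values:", jres.cvt[:, 1])

# 3) VAR on the stationary (differenced) data, lag order chosen by AIC/SC
diffed = data.diff().dropna()
order = VAR(diffed).select_order(maxlags=8)
p = order.selected_orders["aic"]
var_res = VAR(diffed).fit(p)

# 4) Granger causality: does dcov help predict bdi?
print(var_res.test_causality("bdi", ["dcov"], kind="f").summary())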
The influence of the dry bulk shipping market, which is dominated by cargo transport, on the development of the pandemic is not significant. Under the background of global supply chain development, the dry bulk shipping market is, however, strongly correlated with the development of public health emergencies. In the analysis of the VAR model, the real-time structural relationship among system variables is hidden in the random disturbance term, and the explanation of this part is ignored. The SVAR model adds synchronous (contemporaneous) variables to the VAR model, which can directly reflect the real-time structural relationship among variables. The SVAR specification for the BDI index and the newly confirmed COVID-19 cases is given in Formula (6). Among them, matrix A is the synchronous relation matrix between the endogenous variables in the model, and e_t is the white noise vector. Formula (7) is obtained by multiplying both sides of the VAR model by the synchronous relation matrix A. Formula (8) can then be obtained by combining Formulas (6) and (7). After orthogonalization, that is, after orthogonalizing \u03bc_t so that \u03bc_t = Be_t, the estimation model of the AB-type SVAR model, Formula (9), is obtained. In this paper, the SVAR model is identified by establishing short-term constraints. Because the model includes two endogenous variables, it needs to impose m(m\u22121)/2 constraints, namely, one constraint. According to the economic meaning, the restriction condition is as follows: the development of COVID-19 has a contemporaneous effect on the freight rate of the dry bulk shipping market. After setting the constraints and sorting out, the restricted form of the model is obtained. After importing the data into the EViews measurement software, the estimated values of the matrix parameters and related data are obtained by maximum likelihood estimation, as shown in the corresponding table. By exploring the impulse response of the SVAR model, we can investigate the impact of public health emergencies on dry bulk shipping in different periods. The premise of the impulse response analysis is that the model is stable, so it is necessary to carry out AR root tests to judge whether the model is stable. The AR root test results are shown in the corresponding figure. Based on the stability of the model, the impulse response is analyzed. The purpose of this study is to analyze the dynamic impact of the COVID-19 public health emergency on the dry bulk shipping market represented by the BDI index. As the solid line in the impulse response figure shows, the COVID-19 pandemic has a significant impact on the dry bulk shipping market; the impact lasts for more than 3 weeks and is always negative. The variance decomposition chart points to the same conclusion. By studying the impact of the COVID-19 pandemic\u2019s public health emergency on the BDI index, we discovered that there was no cointegration relationship between the BDI index and newly confirmed pneumonia cases related to COVID-19 at the 5% significance level. The Granger causality test showed a one-way causal relationship between the COVID-19 pandemic and the dry bulk shipping market. Impulse response analysis showed that dry bulk shipping was significantly negatively impacted by the volatility of confirmed new cases of COVID-19, with a duration of more than 3 weeks. The impact of the increase in confirmed COVID-19 cases on the dry bulk shipping market and related enterprises reached its highest point in the short term (1\u20133 days); the impact then gradually decreases but lasts more than 3 weeks. The results of the variance decomposition proved that dry bulk shipping was deeply impacted by the COVID-19 public health emergency. Thus, the dry bulk shipping market was quickly exposed to external shocks from the COVID-19 pandemic. 
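As a companion to the identification scheme just described, the sketch below shows how an A-type SVAR with the single short-run restriction (new cases may move the BDI contemporaneously, but not the reverse) could be estimated with statsmodels. The file name, column names and lag order are illustrative assumptions, not values taken from the study.

import numpy as np
import pandas as pd
from statsmodels.tsa.api import SVAR

df = pd.read_csv("dry_bulk_covid.csv", parse_dates=["date"], index_col="date")
diffed = np.log(df[["bdi", "dcov"]]).diff().dropna()   # stationary log-differences

# Contemporaneous matrix A: 'E' marks the single freely estimated element
# (effect of dcov on bdi); the reverse contemporaneous effect is set to zero.
A = np.asarray([[1, "E"], [0, 1]])

res = SVAR(diffed, svar_type="A", A=A).fit(maxlags=2, trend="c")

print("model stable:", res.is_stable())   # AR-root / stability check
irf = res.irf(periods=21)                 # impulse responses over roughly 3 weeks
irf.plot(impulse="dcov", response="bdi")
res.fevd(21).summary()                    # Cholesky-based variance decomposition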
Dry bulk shipping enterprises need to be alert that the external impact will reach the maximum in the short term (1\u20133 days) and will last more than 3 weeks in the long term. Before taking effective measures, the impact of public health emergencies will continue to impact the market in waves, greatly affecting the performance and cash flow of enterprises.Diebold and Yilmaz (2014) proposed the spillover index method based on generalized prediction error variance decomposition to measure the risk spillover effect among financial sectors and then described the risk transmission relationship among sectors . The modTaking the logarithm of the return index of each dry bulk shipping market segment, the logarithmic rate of return of each ship type market segment is obtained, generalized variance decomposition is carried out, and the risk volatility spillover index is obtained after calculation. On this basis, the risk volatility spillover correlation matrix of the dry bulk shipping subdivision ship type is constructed.In By comparing the performance of the shipping rate index in the years before and after the public health emergency, the impact of the COVID-19 pandemic on the dry bulk shipping market is examined more directly to lay a structural foundation for an in-depth analysis of risk transfer between market segments. The outbreak of the COVID-19 pandemic is a worldwide public health event that affects worldwide demand for major countries in international trade and major sources of production. The impact on dry bulk shipping is reflected not only in the contraction of transport prices but also in the transport structure of dry bulk shipping. As shown in Using spillover index analysis, we obtained the risk volatility spillover matrix of the dry bulk shipping market during the period of COVID-19. The incidence of Zika virus infection is relatively small compared to that of COVID-19, and it mainly affects the place of origin of international trade production, which is not the only one. By observing the changes in the freight index of dry bulk shipping in the first three quarters of 2015 and 2016 before and after the outbreak of the Zika virus, it can be seen in By using spillover index analysis, the risk volatility spillover matrix of each ship type in the dry bulk shipping market during the Zika epidemic is obtained, as shown in Comparing the freight rate performance of each shipping type before and after the outbreak of two public health emergencies and comparing Figs Considering the overall risk volatility spillover of the dry bulk shipping market, by comparing the risk volatility spillover in Tables During the Zika outbreak in 2016, the increase of iron ore shares on China\u2013Brazil routes helped increase demand, the further development of intermediate frequency furnace events positively impacted the import demand of iron ore, and the large-scale steel-making movement in India increased the demand for coking coal carbon trade, in response to which the demand for dry bulk shipping rose steadily. At a time when the large-scale development of ships is in full swing, capsize dry bulk shipping has made great progress, and the market enjoys the benefit of marginal cost reductions. With the advantages of high deadweight tonnage, strong adaptability, and low unit cost, the capsize dry bulk carrier occupies a dominant position in the dry bulk shipping market. 
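Since the spillover results discussed in this section rest on the Diebold-Yilmaz generalized forecast-error variance decomposition, a compact sketch of how such a spillover table can be computed from segment log returns is given below. The file name, column set (BCI, BPI, BHI), VAR lag and forecast horizon are assumptions chosen for illustration, not the settings used in the study.

import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

def spillover_table(returns: pd.DataFrame, lags: int = 2, horizon: int = 10):
    """Diebold-Yilmaz style spillover table from a VAR's generalized FEVD."""
    res = VAR(returns).fit(lags)
    sigma = np.asarray(res.sigma_u)
    ma = res.ma_rep(horizon - 1)                 # MA matrices A_0 ... A_{H-1}
    n = sigma.shape[0]
    theta = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            num = sum((ma[h][i] @ sigma[:, j]) ** 2 for h in range(horizon))
            den = sigma[j, j] * sum(ma[h][i] @ sigma @ ma[h][i] for h in range(horizon))
            theta[i, j] = num / den
    theta = theta / theta.sum(axis=1, keepdims=True)      # row-normalise the GFEVD
    table = pd.DataFrame(100 * theta, index=returns.columns, columns=returns.columns)
    total = 100 * (theta.sum() - np.trace(theta)) / n     # total spillover index
    return table, total

segments = pd.read_csv("segment_indices.csv", index_col=0)   # BCI, BPI, BHI levels
log_returns = np.log(segments).diff().dropna()
table, total_spillover = spillover_table(log_returns)
print(table.round(1))
print("total spillover index: %.1f%%" % total_spillover)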
According to the market rule that large ships serve large-volume routes, the market position of Panamax dry bulk carriers, with their relatively large deadweight tonnage, is closely followed by (Super) Handysize dry bulk shipping. At that time, the internal risk to the dry bulk shipping market was mainly due to the transfer of risk from the capsize dry bulk shipping market segment to the Panamax and (Super) Handysize dry bulk shipping market segments. The Zika virus epidemic, as a typical public health emergency in noncore supply and demand countries of world trade, had a relatively low impact on world trade. As reflected in dry bulk shipping, it mainly acted on the supply side of trade and had a relatively limited impact on Brazil\u2019s iron ore trade. During the period of analysis of the COVID-19 pandemic, the BCI index was negative for the first time in history; that is, capsize dry bulk shipping was not profitable during that period. In addition to the seasonal impact, the main reason was the sharp reduction of international demand. China is one of the major demand countries for Brazil\u2019s iron ore trade, and this demand dwindled considerably during COVID-19. Accordingly, Brazil, as one of the main suppliers in the iron ore trade, was affected by the pandemic, and delivery recovery was slow. Compared with the same period of the previous year, Brazil\u2019s grain exports also decreased by 11% in January and February 2020. The capsize shipping market remained in the doldrums, but market demand was maintained. The smaller Panamax, Super Handysize and Handysize ships recovered relatively more quickly from the loss of freight rates. According to the data, the Panamax dry bulk shipping market segment, as the net spillover source of risk fluctuation, transferred risk to the capsize and (Super) Handysize dry bulk shipping market segments during the period of analysis. The COVID-19 global public health emergency not only acts on the main suppliers of international trade but also affects the principal demand sides, impacting both the supply and demand of international trade, the dry bulk shipping market and related enterprises. Considering the risk volatility spillover between any two segments of the dry bulk shipping market, the following was observed: during the Zika epidemic, the transportation structure of dry bulk cargo did not change, and the strength and direction of risk transmission were highly consistent with the volume of goods. Through the analysis of the risk transfer framework of two typical public health emergencies, we can draw the following conclusions: (1) with the development of globalization, the relationship between the ship type market segments of the dry bulk shipping market is closer, and the degree of risk spillover is higher; (2) when public health emergencies affect only individual nodes of global trade, especially a nonunique supply side, their impact on the dry bulk shipping market will not affect its transportation structure, and the main risk volatility spillover chain runs from capsize to Panamax to (Super) Handysize; (3) when public health emergencies affect the whole process of global trade, especially nodes with high demand, the transport structure of the dry bulk shipping market will be affected. 
The market of the Capsize dry bulk carrier with high carrying capacity and low marginal costs will be greatly affected, and its cargo-carrying function will be replaced by a relatively more efficient small carrier with low cargo demand in the short term. In the beginning, the risk transferred rapidly to the Capsize shipping market and then mainly exports from the Panamax ship market to the (Super) Handysize ship market and the Capsize shipping market.In this paper, we find that the supply and demand of the dry bulk shipping market are impacted by the COVID-19 public health emergency. Public health emergencies have a strong impact on dry bulk shipping directly or indirectly from the perspective of policy constraints and the global industrial supply chain. The research shows that the impact of a certain period will reach the maximum on the third day, after which it will gradually decrease as the pandemic comes under control, and the impact will last for more than three weeks.Different types of public health emergencies have different impacts on the dry bulk shipping market due to their different areas of influence. The economic status of capsize dry bulk carriers will be affected by the \"big event\u201d, and the efficiency of smaller Panamax dry bulk carriers will be improved. Therefore, dry bulk shipping enterprises should accurately grasp the development direction of public health emergencies, accurately study and judge the types of events, reasonably carry out business and organize distribution and ship allocation.The pandemic will not disappear. With the further development of the pandemic, the future perspective will focus more on exploring the opportunities and challenges of the global shipping industry and trade structure in the later stage of public health emergencies.For dry bulk shipping companies, the most important thing is to make a quick judgment when public health emergencies occur, determine the speed of the spread and control the situation. First, we fully respond to the regional and knock-on effects of public health emergencies, summarize, and analyze the impact of world trade and customs clearance efficiency to judge whether it will impact the transportation structure of the dry bulk shipping market.While making a good market estimation, it is important to adjust the business according to whether it will affect the dry bulk shipping structure. When the impact of public health emergencies is small and controllable or occurs in nonunitary supply markets and secondary demand markets (hereinafter referred to as \"minor events\"), dry bulk shipping enterprises should strengthen the management of corresponding regional routes. Flexibly adjusting transport capacity and reducing the loss of capital and income also reduces the risk of becoming the medium of transmission. When public health emergencies are strong and uncontrollable or occur only in producer countries of the means of production and the main demand side of trade (hereinafter referred to as \"major events\"), dry bulk shipping enterprises should quickly integrate market information, analyze the future market volume of bulk commodities and other information, and design a good distribution plan in advance. 
At the same time, plans should be made for possible crew shortages and becoming isolated islands at sea to eliminate having to passively accept the situation when public health emergencies occur.In response to external risk input at the same time, dry bulk shipping enterprises should control internal risk transfer, avoid a strong impact on enterprises\u2019 internal business between the development of the implicated role and affecting business revenues of the whole enterprise. According to the above analysis, when faced with \"small events\", risk transfer generally follows the principle that volume determines risk. In this situation, we should do a good job of splitting the internal business, controlling cash flow, reducing cash flow losses of damaged business, and minimizing losses. When facing \"big events\", enterprises should be prepared for the sudden reduction of cargo volume and the transformation of economic ship type. In the business planning of route ship allocation and ship cabin allocation, we should carry out the work of small ship types to large ship types, try to balance profits and losses by minimizing the reduction of the small ship market and help enterprises tide over the difficulties.In addition, from the perspective of enterprises, we can also make full use of the hedging forecasting tools of financial markets. By means of financial risk prediction, we can enhance the sensitivity of enterprise risk prediction and avoid risks in shipping financial markets in advance.The government should strengthen plans, ensure the transparency of public health emergency information to the greatest extent possible, help the market establish and improve emergency early warning mechanisms and shorten response times.The government should refine financial markets while reasonably controlling speculation, increase support for risk-averse financial products, establish and improve \"rescue mechanisms\", and help different subjects in sudden public health crises reduce the risk of capital chain ruptures.On the premise of respecting the main role of the market, the government should strengthen guidance, develop warning and guiding standards for the rational allocation of global shipping capacity, and strive to create a dynamic and reasonable capacity pool stock.The government should improve shipping laws and regulations, improve the good operation of shipping enterprises to safeguard their legitimate rights and interests and deal with the aftermath of bankruptcy according to the law and the stable mechanisms of market access, and effectively carry out standardization and supervision."} +{"text": "During the service period of a prestressed concrete bridge, as the number of cyclic loads increases, cumulative fatigue damage and prestress loss will occur inside the structure, which will affect the safety, durability, and service life of the structure. Based on this, this paper studies the loss of bridge prestress under fatigue load. First, the relationship between the prestress loss of the prestressed tendons and the residual deflection of the test beam is analyzed. Based on the test results and the main influencing factors of fatigue and creep, a concrete fatigue and creep calculation model is proposed; then, based on the static cracking check calculation method and POS-BP neural network algorithm, a prestressed concrete beam fatigue cracking check model under repeated loads is proposed. 
Finally, the mechanical performance of the prestressed concrete beam after fatigue loading is analyzed, and the influence of the fatigue load on the bearing capacity of the prestressed concrete beam is explored. The results show that the bridge prestress loss characterization model based on the POS-BP neural network algorithm has the advantages of high calculation efficiency and strong applicability. At present, although some progress has been made in the research of bridge prestress, most construction enterprises have some problems in the process of loss and analysis of bridge prestress under fatigue load. With the development of intelligent technology, the research on the mechanical and bearing capacity of building bridges has also been developed rapidly. In addition, in the modeling and analysis of the bearing capacity of bridge buildings, the advantages of the prestressed design of bridges under fatigue load based on cloud computing technology and intelligent algorithm are more obvious. The development of these technologies also provides opportunities for the research on the combination of the prestressed design and ultimate load performance of the bridge under the action of building stability and fatigue load . TherefoThis paper studies the application of bearing capacity and mechanical performance analysis model in the optimization of bridge prestress design under modern fatigue loads and is mainly divided into five sections. PSO-BP neural network algorithm is a multilayer feedforward network trained by error backpropagation algorithm, which is one of the most widely used neural network models. PSO-BP neural network algorithm is different from the conventional algorithm, which is a better multiresource control and calculation strategy . At presAt present, the PSO-BP neural network algorithm is widely used in the optimization analysis of the mechanical properties of materials. In the mechanical property analysis system for different types of bridge buildings, most of them are based on the PSO-BP neural network algorithm for approximate simulation solution . The objf(x) in this process is as follows:x is the prestress load, and h(x) is the expression of neural node factor:The traditional analysis method of bridge prestress performance is based on the classical Newton mechanics principle, which cannot carry out the visual overall simulation analysis of its stress process . Therefo\u03b7 is as follows:The adaptive analysis model can realize the unified management of the basic parameter information source of bridge prestress under different types of fatigue load, the local difference analysis of bridge prestressed mechanical properties under fatigue load is realized, and the coupling coefficient w(x) is as follows:Then, the PSO-BP neural network algorithm is used for intelligent control and feedback correction, so as to realize the analysis of bridge prestressed mechanical properties and data storage under different types of fatigue load. 
After completing the above basic mechanical performance analysis, according to the collection process of ultimate load performance analysis data of bridge prestress under different types of fatigue load, the vector differences in the eigenvalues of different data, and the structural characteristics of the data, the matrix difference and other different characteristic description values are analyzed; the corresponding characterization function Then, the intelligent optimization process and deep mining process based on PSO-BP neural network algorithm are used to realize the mechanical and ultimate load performance analysis and law analysis of bridge prestress under fatigue load in different types and different structural design methods. The finite element simulation analysis process of bridge prestress under fatigue load is shown in PSO-BP neural network algorithm is used to analyze the mechanical properties of bridge prestress under different fatigue loads, eliminate redundant data, extract effective information, classify data and optimize storage strategy, which is the necessary process of the experiment, and achieve the height classification according to the similarity degree in the design of ultimate load of the bridge prestress under the fatigue load. The data collection, information mining, and feature extraction of different ultimate load performance analysis data are realized. When the mechanical performance analysis data of the bridge prestress under repeated fatigue load are reused or the invalid information is analyzed repeatedly, the feedback control strategy of PSO-BP neural network algorithm will be adopted to control the different types of data information according to the known absorption coefficient requirements. Then, the process of collecting the effective data, feature classification, extracting mechanical information, and quantitative characterization of the bridge prestress under fatigue load is completed. The corresponding data reference range is shown in The operation analysis process of its performance is shown in f(x) in this process isIn the first step, in the process of analyzing the mechanical properties of the bridge prestress under the fatigue load of known types and structural shapes, there is often a problem that the error is too large in the process of analyzing the mechanical properties of the bridge prestress under the fatigue load due to the deviation of the discrete neural network algorithm in setting the standard mechanical parameters. Therefore, in the analysis process of the bridge prestress under the fatigue load with known type, structure shape, and thickness information, combined with the classical strategy analysis method based on traditional Newton mechanics, the PSO-BP neural network algorithm is used. Based on the random data eigenvector generated in the bridge prestress database under fatigue load collected by the finite element simulation system, the problem of large mean square error of bridge prestress under different types of fatigue load is solved by hierarchical closed-loop regulation . The judIn order to solve the problem of low cooperation efficiency in the distributed calculation of PSO-BP neural network algorithm for the collection, processing, and classification of bridge prestress data under fatigue load, this study combines the mechanical performance analysis idea based on neural network algorithm and particle swarm optimization algorithm. 
By simulating the \u201cmultiple PSO neural node operation network\u201d in the process of \u201cneural node transfer\u201d modeling and drawing, the evaluation basis of the mechanical performance analysis efficiency of bridge prestress under fatigue load is constructed, so as to realize the performance analysis and result storage of bridge prestress under different fatigue loads. The simulation analysis results are shown in y. The corresponding basic strategy function p(x) is as follows:In the second step, when the mechanical and ultimate load performance analysis model analyzes the corresponding datasets of bridge prestress under different fatigue loads, its ultimate load and prestress data identification rules will attribute the unknown data information to the same data cluster group according to the corresponding mechanical eigenvalues. When the characteristic values of any two data cluster groups in the data of bridge prestressed mechanical performance analysis under different types of fatigue load are different, it means that the characteristic information of the two kinds of data is greatly different. When the mechanical performance analysis model is used to analyze and solve the ultimate load of bridge prestress under different fatigue loads, the selection of eigenvector is limited by the stability, and its modulus length is determined by the modulus length of eigenvector y needs fitting analysis to know the stability effect of the mechanical property analysis model. Therefore, the equation needs to be optimized and decoupled. The fitting analysis function isThe selected eigenvector l(x) before compensation isIn the third step, in the process of mechanical and ultimate load performance analysis of bridge prestress under different fatigue loads, it is necessary to carry out targeted compensation for the placement characteristics of bridge prestress under different types of fatigue loads. The mechanical analysis function l\u2032(x) after compensation isThe mechanical analysis function e(x), r(x), and u(x) areWhen the differential evolution strategy is used to analyze the mechanical properties of different datasets, three common mechanical properties analysis rules will be randomly selected. The three analysis rule functions The PSO-BP neural network algorithm used in this study is a discrete PSO-BP neural network algorithm. The basic steps of the analysis of the mechanical properties of bridge prestress under fatigue load are as follows.Among them, different data structures will be obtained under different rules of mechanical properties and ultimate load effect analysis. In order to extract the characteristics of different types of mechanical data and analyze effective data, it is necessary to simulate the prestressed stress of the bridge under these different types of fatigue loads on the mechanical and ultimate load levels . The oveThe innovation of this paper lies in the application of neural network algorithm and bearing capacity design idea to the modeling and analysis of bridge mechanical properties. On this basis, we can make full use of the basic information and characteristic parameters of bridge prestress under different types of fatigue load in Internet database. The unified mechanical performance analysis of bridge prestress under fatigue load is realized. 
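Because the text above describes the PSO-BP idea only at a high level, a small self-contained illustration may help: the snippet below trains a one-hidden-layer BP-style network by letting a standard global-best particle swarm search the flattened weight vector directly (many PSO-BP variants instead use PSO only to initialize the weights before backpropagation). The synthetic data, network size and swarm hyper-parameters are assumptions, not values from the study.

import numpy as np

rng = np.random.default_rng(0)
X = rng.random((200, 3))                       # hypothetical load / defect features
y = (X @ np.array([0.6, -0.3, 0.8]))[:, None]  # hypothetical target response

n_in, n_hid, n_out = 3, 6, 1
dim = n_in * n_hid + n_hid + n_hid * n_out + n_out   # total number of weights

def unpack(w):
    i = 0
    W1 = w[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = w[i:i + n_hid]; i += n_hid
    W2 = w[i:i + n_hid * n_out].reshape(n_hid, n_out); i += n_hid * n_out
    b2 = w[i:]
    return W1, b1, W2, b2

def mse(w):
    W1, b1, W2, b2 = unpack(w)
    hidden = np.tanh(X @ W1 + b1)              # forward pass of the BP-style network
    return float(np.mean((hidden @ W2 + b2 - y) ** 2))

# Standard global-best PSO over the flattened weight vector
n_particles, iters, w_in, c1, c2 = 30, 200, 0.7, 1.5, 1.5
pos = rng.uniform(-1, 1, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([mse(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = w_in * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([mse(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("final training MSE:", round(mse(gbest), 6))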
The similarity degree between the comparison columns and the reference columns and the agreement degree of the expected index (the effective data of bearing capacity level) are quantitatively described by the characteristic coefficient. The quantitative index is used to sort the influence degree of the efficiency of the bridge prestressed ultimate load performance analysis model under fatigue load, which can effectively realize the optimal control and ultimate load design of the bridge prestressed under different types of fatigue load through different control methods.In order to study the deflection and flexural stiffness degradation of the prestressed concrete structure under fatigue loading after cracking, FTG-3 and FTG-4 beams were subjected to 2 million constant-amplitude fatigue loading tests and the crack growth at the cracks of the beams was observed. Through the formation of new cracks and the deflection of model beam, the development rules of mid-span deflection, steel strain, and concrete strain, as well as the corresponding crack width and strain amplitude under fatigue load, are studied and analyzed. In the experiment, the experimental objects involved in this study are the samples of the bridge prestressed optimization design under the fatigue load of different types of structures. It can quickly collect, analyze, extract, transform, and store the mechanical and ultimate load performance of the bridge prestressed under fatigue load . By analyzing other characteristics of prestress under different structural fatigue loads, different ultimate load characteristics are obtained. And cloud records are obtained, so as to achieve high-precision performance simulation. Then, based on the analysis of mechanical properties, the ultimate bearing capacity of prestressed bridge under different types of fatigue load is achieved. The experimental data are shown in In this way, in the follow-up process of big data analysis, we can query, call, and determine the ultimate load scheme with high accuracy and high efficiency , so that under the big data collection and storage system, the, To achieve the accurate distribution of bridge prestress design process under each fatigue load, improve the integrity of the ultimate load analysis and prestress loss determination. The preliminary experimental analysis results of the bridge prestress loss determination model under the fatigue load are shown in The experimental group is the bridge prestress change and loss under different fatigue loads, and the control group is the bridge prestress of different types of the same structure and the same type of different structures under fatigue loads with known ultimate load effect data. The error of the intelligent optimization design model is analyzed by the experimental process of ultimate load optimization and three groups of known ultimate load performance parameters. The error is shown in The experimental data analysis results of the experimental data are shown in It can be seen from in The traditional fatigue load has some problems such as bad ultimate load effect and poor mechanical performance. Based on this, this paper designs the mechanical and ultimate load performance analysis model based on PSO-BP neural network algorithm and studies the process of prestressed design and mechanical performance modeling of bridge under the action of modern fatigue load based on ultimate load. 
The paper analyzes structural stability, mechanical performance, ultimate load effect, and external aesthetic feeling of the prestressed bridge under the action of conventional fatigue load. The mechanical properties of the prestressed bridge under different fatigue loads are evaluated by using the finite element simulation strategy, and the method can realize adaptive modeling of the mechanical properties and ultimate load effects of the bridge prestress under the fatigue load and realize the diversified analysis and aesthetic design. Finally, the mechanical properties and the application effect of ultimate load of the bridge prestressed under different fatigue loads are analyzed by combining the known characteristics of bridge building and designing the relevant experiments. The experimental results show that the mechanical performance analysis model based on PSO-BP neural network algorithm has the advantages of high calculation efficiency and good mechanical performance simulation effect and can play an effective role in the design of bridge prestress under fatigue load. But this study only considers the loss determination, mechanical properties, and ultimate load effect analysis process of bridge prestress under the action of modern fatigue load and does not consider its reliability factors in other aspects, so deep optimization research can be carried out."} +{"text": "The Journal and Authors retract the 8 February 2021 article cited above for the following reasons provided by the Authors:Following publication, concerns were raised regarding the integrity of the images in the published figures. The authors failed to provide a satisfactory explanation during the investigation, which was conducted in accordance with Frontiers\u2019 policies.This retraction was approved by the Chief Editors of Frontiers in Oncology and the Chief Executive Editor of Frontiers. The authors agree to this retraction."} +{"text": "Today many older adults are experiencing intensified social isolation and loneliness as they attempt to \u201cstay safe at home.\u201d The notion, is a stark contrast from our understanding of the importance of social connections on health and well-being. This session highlights: first hand experiences caring for older adults during the COVID-19 pandemic and the implications of social isolation on the health of older adults. The speaker will offer perspectives for ESPO members on the role of community engagement in orienting research agendas, both now (amid the pandemic) and into the future."} +{"text": "The shortage of health care providers (HCPs) and inequity in their distribution along with the lack of sufficient and equal professional development opportunities in low-income countries contribute to the high mortality and morbidity of women and newborns. Strengthening skills and building the capacity of all HCPs involved in Maternal and Newborn Health (MNH) is essential to ensuring that mothers and newborns receive the required care in the period around birth. The Training, Support, and Access Model (TSAM) project identified onsite mentorship at primary care Health Centers (HCs) as an approach that could help reduce mortality and morbidity through capacity building of HCPs in Rwanda. This paper presents the results and lessons learnt through the design and implementation of a mentorship model and highlights some implications for future research.The design phase started with an assessment of the status of training in HCs to inform the selection of Hospital-Based Mentors (HBMs). 
These HBMs took different courses to become mentors. A clear process was established for engaging all stakeholders and to ensure ownership of the model. Then the HBMs conducted monthly visits to all 68 TSAM assigned HCs for 18 months and were extended later in 43 HCs of South. Upon completion of 6 visits, mentees were requested to assist their peers who are not participating in the mentoring programme through a process of peer mentoring to ensure sustainability after the project ends.The onsite mentorship in HCs by the HBMs led to equal training of HCPs across all HCs regardless of the location of the HC. Research on this mentorship showed that the training improved the knowledge and self-efficacy of HCPs in managing postpartum haemorrhage (PPH) and newborn resuscitation. The lessons learned include that well trained midwives can conduct successful mentorships at lower levels in the healthcare system. The key challenge was the inconsistency of mentees due to a shortage of HCPs at the HC level.The initiation of onsite mentorship in HCs by HBMs with the support of the district health leaders resulted in consistent and equal mentoring at all HCs including those located in remote areas. Globally, maternal mortality remains unacceptably high. In 2017, 295 000 women died during or following pregnancy and childbirth (approximately 810/day). The vast majority of these deaths occurred in low-resource settings and most could have been prevented . About tA significant number of maternal and newborn related deaths in LMICs are linked to the shortage of qualified Health Care Providers (HCPs) who are needed to provide quality prenatal care, skilled birth attendance and emergency obstetric services - interventions crucial to reducing maternal and perinatal deaths , 5. ThisSeveral studies have revealed that didactic training does not effectively translate knowledge into practice to address system-level barriers , 12. HowRwanda has made great strides in the area of women\u2019s and children\u2019s health over recent decades including the attainment of Millennium Development Goals (MDGs) 4 and 5 \u201320. DespTo address high morbidity and mortality rates likely due to insufficient competent HCPs in the health facilities, the TSAM project designed and implemented mentorship programmes in HCs as a way of strengthening the continuum of care while also building the capacity of HCPs to be able to deliver high-quality care during the perinatal period. TSAM for MNCH is a project with funding provided to the University of Western Ontario (Western) by Global Affairs Canada (GAC). TSAM\u2019s mission is to improve Maternal, Newborn and Child Health (MNCH) in Rwanda by working with local partners to improve health service access and delivery.Although there is the need to develop and implement effective models to train HCPs, there were no broad-based models for mentoring in the health centers in Rwanda. Furthermore, we believe that the model we developed based on making use of the midwives from District Hospitals (DHs) that were given extensive mentoring by an interprofessional group of experts from tertiary hospitals, to provide mentoring at the health centers in their hospital catchment area has not been used elsewhere in resource-poor countries . This paper aims to describe the development, implementation as well as results of the mentorship model for HCPs providing maternal and neonatal care in 68 health centers of the 3 districts of the Northern Province of Rwanda. 
The indicators that were assessed for the results included the distribution and consistency of mentoring to the catchment areas of all of the DHs included in this project. The data on the initial assessment of the study locations were also analysed.The 68 HCs involved in this study were located in Rulindo, Gicumbi and Gakenke districts in the Northern Province of Rwanda. The onsite mentorship programme was implemented by TSAM project and its stakeholders in the HCs in the three districts. These 3 districts were assigned to the TSAM project as per the Memorandum of Understanding between the project and the Ministry of Health. This research describes the mentorship programme. The design phase of this programme started with an initial assessment of the status of training in HCs in these districts. The results of this assessment informed the selection of midwives to serve as Hospital-Based Mentors (HBMs) for health care providers at the HCs. The research further describes the other steps of the design phase, namely the refresher courses on Emergency, Obstetric Neonatal Care (EmONC) for selected Hospital Based Mentors and training on the mentoring approach to be adopted. These HBMs also benefited from other courses with cross-cutting themes such as ethics, inter-professional collaboration, gender, maternal mental health and Gender-Based Violence (GBV). The initial training makes this mentorship programme unique, as the additional training was designed to allow HBMs to manage the mothers and newborns in an integrated manner. In addition, HBMs were drawn from the mentees who had followed the mentoring programme at the respective hospitals under the same project framework. Upgrading of some mentees to HBMs through cascades of special training presents the second unique aspect of our mentorship programme. The fact that mentors were drawn from a cadre of DH mentees allowed them to conduct mentorship more effectively.The design phase involved information meetings to establish a clear process for engaging all stakeholders and to ensure ownership of the model. Following the initial assessment and training, 23 HBMs conducted monthly visits to the 68 HCs located in catchment areas of five TSAM assigned hospitals in North for 18 months, from October 2018 to March 2020. This model was extended later in other HCs located in the catchment area of 5 hospitals in the Southern Province. Participants to the mentoring programme described in this manuscript were nurses or midwives providing maternal and/or neonatal care in 68 HCs of 3 concerned districts in the Northern province.Data on the number of mentees who attended the mentorship programme during its lifetime as well as the data on the initial assessment were analysed for this study. The data were entered in a database designed for the project using Excel. Analysis was done in Microsoft Excel to generate tables and descriptive statistics. Locations of HCs were collected using Global Positioning System (GPS) devices and the coordinates were used to generate maps in ArcGIS 10.7. The geographic data were analysed to identify spatial disparity of midwives using data on the number of midwives and population density for each hospital catchment area. Population data were obtained from the Health Information System of the hospital. In addition, data were analysed according to the number of nurses and midwives in the health centers that had received training on EmONC. 
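As a purely illustrative companion to the analysis workflow described above (Excel tabulation plus ArcGIS mapping), the short pandas sketch below shows one way the per-catchment densities of midwives and EmONC-trained providers could be computed from a staffing table. The file name and column names are hypothetical.

import pandas as pd

hc = pd.read_csv("health_centre_staffing.csv")   # one row per health centre
# assumed columns: catchment, midwives, emonc_trained, population

per_catchment = hc.groupby("catchment").agg(
    midwives=("midwives", "sum"),
    emonc_trained=("emonc_trained", "sum"),
    population=("population", "sum"),
)
per_catchment["midwives_per_100k"] = (
    1e5 * per_catchment["midwives"] / per_catchment["population"]
)
per_catchment["emonc_per_100k"] = (
    1e5 * per_catchment["emonc_trained"] / per_catchment["population"]
)
print(per_catchment.round(1))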
Mentoring data was collected on the number of mentoring visits in the DH catchment areas segregated by gender for the nurse and midwife mentees. Also, data on the number of mentoring visits were collected based on the professional qualification of the midwives and nurses.Prior to designing the mentoring programme at the HC level, a rapid assessment in health facilities located in TSAM-assigned hospitals was conducted by researchers working on the project. The assessment aimed to determine the availability of staff who provide MNH care and the status of training on MNH for those staff. The results of the assessment allowed the project and its stakeholders not only to know the number of mentees that would be available but also it informed the initial selection of competent hospital-based mentors (HBMs). This assessment was conducted cognizant of the fact that over the past 2 decades, HCPs have received some off-site training related to maternal and newborn care , 19, 21.The results revealed that efforts are being made to equip maternity departments of HCs with midwives. Of the 68 HCs located in the catchment area of TSAM-assigned hospitals, 43 (63\u2009%) had at least one midwife by the time of the assessment. However, as seen in Fig.\u00a0Apart from the availability of midwives in the HCs, the assessment revealed that there is an inequity in terms of densities of health care providers who benefited from the training on the EmONC across hospital catchment areas and districts. As shown on Fig.\u00a0The initial assessment also allowed the project and its stakeholders to examine the spatial distribution of HCPs with additional training on Essential Newborn Care (ENC). As with the EmONC, there were geographical disparities with HCPs trained in ENC which is thought to be helpful for staff providing care to mothers and newborns Fig.\u00a0.Fig. 3SMentorship is a flexible teaching and learning process that serves specific objectives of the HCPs and health care services \u201315, 21. The mentorship model was implemented by HBMs who were midwives practicing in the DHs responsible for oversight of the HCs. The majority of these midwives had been mentored in a separate programme for HCPs at the DH level by national mentors with the support of the same project . PotentiThe mentorship model development consisted of several phases. The first was providing a refresher course on Emergency Obstetric and Newborn Care (EmONC) for 25 potential mentors. The candidate HBMs was proportional to the number of HCs within the catchment area of each hospital. The second phase consisted of providing training in mentoring and Cross-Cutting Themes (CCTs) including Gender, Ethics and Inter-Professional Collaboration to successful candidates to EmONC refresher training. Thirdly, there were induction meetings held for each DH to introduce the programme and ensure it is owned by beneficiaries. The final phases were the implementation of the onsite mentorship visits followed by monitoring and evaluation activities. Once again, it is worthy to mention that HBMs were drawn from former mentees.After a successful design phase, each HBM was assigned to between 2 and 3 HCs. The mentorship field visits were organized for all 68 HCs in the Northern Province. One-day monthly visits were conducted by HBMs from October 2018 to March 2020 (18 months). During the mentoring period, key services areas mentored include the labor ward, post-natal care and antenatal care services. 
Activities carried out by HBMs include management of cases with the mentees, bedside teaching, presentation on key selected topics in morning staff meetings based on the needs, and training of mentees using simulation. Logbooks were used to track the participation of mentees in different mentorship activities. The topics covered by HBMs were the components of Essential Newborn Care (ENC) and those of EmONC. HBMs had benefited from refresher courses on these topics to be covered. During the mentorship period, different activities were completed by mentors and mentees including ward round, assisted delivery, newborn resuscitation, and post natal care. In addition, mannequins were used to teach skills for different components. Upon completion of each of the visits, reports were written and submitted to the director of nursing at each hospital for compiling the report at the hospital level.The overall coordination of the mentoring programme was done by a CPD programme manager within the TSAM project. The formats of the reports by the mentors were developed by the TSAM project CPD team and presented during both the training on mentoring and induction meetings. The completed reports were then submitted to the CPD manager for the TSAM project to be compiled and analyzed and a report prepared.To ensure that the mentorship model was implemented as designed and experiences were shared, bi-annual evaluation meetings were organized for each hospital. These meetings brought together the same participants as those who attended the induction meetings before initiating the mentorship in HCs. The monitoring meetings aimed to share the key messages from the report of the mentorship visits, discuss the successes and challenges of the mentorship visits as well as develop the strategies to overcome the challenges. The key points that emerged from the coordination meetings are summarized below: The mentoring programme is helping to significantly improve the quality of care in health centers, this was mentioned based on the way HCs appreciated the improvement of the knowledge and skills of HCPs.The mentoring programme is conducted the way it was designed.The mentoring programme is different from any other form of supervision at the HC. This is because mentees realized that mentors are not supervising what they are doing but that they are working together while providing feedback. They were used to supervisors where the later come to evaluate only what they did.Mannequins were distributed to all HCs to enhance the knowledge co-sharing by the mentorship programme using simulation to achieve its goals.There is a need to strengthen the consistency of mentees during the mentorship visits. The fact that some mentees have not been consistent due to the shortage of HCPs was mentioned here as one of the key challenges.From October 2018 to March 2020 (18 months), each of the 68 HCs located in the catchment area of 5 hospitals in Rulindo, Gicumbi and Gakenke benefited from mentorship monthly visits [Analysis of the results of the programme indicated that the mentorship training programme led to equity in the capacity of HCPs regardless of the location of the health center. One hundred and forty-six (146) HCPs took part in 6 or more mentorship visits, translating into an average of 2 mentees who completed at least six visits per HC. 
Looking at the consistency of mentoring and taking into account the discussion with all involved people during the coordination meetings, mentorship of HCPs has potentially improved their skills and performance at all levels of the health system. This is consistent with the findings of studies that evaluated this mentorship programme and which revealed that the mentorship improved the knowledge and self-efficacy of health care providers in managing PPH and newborn resuscitation in Rwanda , 25. Acc\u2009=\u20090.58) . These f\u2009=\u20090.58) \u201315, 21.The current successes of the TSAM project\u2019s onsite mentorship model demonstrate the practicability of rapidly implementing a provider-centered mentorship programme that will deal with the quality of care and equity at both the individual and structural levels at the district level. Mentorship training, thus, has the potential to enhance the skill level of health care workers and improve the quality of care provided in rural Rwanda. In India, Jayanna et al. found thOnsite mentorship programmes can ensure equity in access to skilled service providers and quality health care because it offers training tailored towards the trainee\u2019s work situation. Thus, mentorship differs from in-class didactic training. This means that HCPs are trained where they are and in what they do, thus improving quality and equity in the number of skilled people in rural areas. In Rwanda, where, despite the achievement of the MDGs on maternal health care, disparities in the use of health services and quality of care still exist , 27), suAn additional component of a successful mentoring programme at the HCs is coordination at the district level. The TSAM programme managed this by having the mentor complete reports and after compilation and analysis, the report was shared with the hospital management team for action in case any issue was identified. It is part of supporting the DHs and their mandate to support the HCs in the mentoring programme. This is direct feedback to the management team at the DH and it supported the project efforts to be collaborative by making certain that the DH management team is aware of the experiences of the mentors. Another important factor is the involvement of Directors of Nursing in the supervision of the mentoring programme. Given the benefit of the TSAM onsite mentoring programme and similar ones such as the Mentoring and Enhanced Supervision at Health Centers (MESH) and their potential to further reduce maternal and neonatal deaths in Rwanda during the era of the Sustainable Development Goals (SDGs), there is a need to mainstream and regularize this model into the health care system. Mainstreaming this onsite model into the health delivery systems is especially useful in resource-poor settings where there are shortages of specialized health workers. Not only will this ensure that health care workers are up to date with the current trends in their fields, but it will also increase job satisfaction and lead to better performance and retention of skilled workers in rural areas, cognizant that job satisfaction is an important determinant of job retention . It is wDespite the demonstrated benefits of the mentorship programme, some challenges must be overcomed if the programme is to be effectively integrated into the mainstream health system of the country. 
These challenges include the limited number of HCPs in health centers and more importantly in hospitals where the mentors are working which does not allow mentoring a big number of mentees at the same time, lack of some essential equipment at health centers and the staff turnover\u00a0. MeanwhiIn summary, after a year and a half of implementing the onsite mentorship in health centers, there has been equity in the number of skilled personnel in EmONC and ENC in the Northern Province of Rwanda. All health centers were reached monthly and disparities and inequities in terms of capacity building were avoided. We conclude that the programme generated a good number of mentees who have been consistent and trained by HBMs who were drawn from the former mentees with the hope that this will improve significantly the quality of care to women and newborns in Gakenke, Gicumbi and Rulindo districts. However, very close monitoring and coordination of the mentoring including regular supervision and feedback as well as the engagement of leaders at the facility level are key to success. The strategy to strengthen knowledge retention is required for the sustainability of the gains.Researchers in the field of CPD for HCPs in primary health care facilities should consider researching job retention of mentees after some period of mentoring. The cost-effectiveness of the onsite mentorship should also be considered to compare the classic training and the onsite mentorship. Finally, researchers should examine whether onsite mentorship enhances job satisfaction and job retention among HCPs."} +{"text": "The geriatric population is rapidly growing, and this growth is beyond the pace of increase in the number of healthcare professionals who are qualified to care for and tend to the various needs of this significant subgroup of the population. The current university curricula have not been sufficient in terms of quantity as well as their ability to address the ageism inherent in the perspectives of students from across the educational spectrum. In recognition of the absence of standardized geriatric guidelines, medical associations across Canada and the United States have established geriatric learning competencies for medical programs. Nevertheless, there are exiting gaps regarding the development and evaluation of geriatric-focused didactic programs that adequately train and build competency among the students interested in pursuing careers with geriatric-specific elements. A university-wide program was developed to enhance aging education and build competency through sparking interest, providing better education related to aging, and building better relationships between future healthcare professionals and older adults. To evaluate the impact of this program, a logical framework was developed a-priori and revised through constant iterations and following discussion with the program\u2019s multidisciplinary stakeholder group. Quantitative measures are being augmented with in-depth qualitative interviews to explore elements influencing students\u2019 experiences with the program and the effect on their interests in and attitudes towards geriatrics. The results will inform our conclusions regarding program effectiveness in enhancing interest in geriatric-focused education among the students and trainees and assist with recommending future directions regarding impact and large-scale dissemination and implementation."} +{"text": "About one third of food produced for human consumption is lost or wasted. 
For this reason, food losses and waste has become a key priority within worldwide policy circles. This is a major global issue that not only threatens the viability of a sustainable food system but also generates negative externalities in environmental terms. The avoidance of this forbidding wastage would have a positive economic impact on national economies in terms of resource savings. In this paper we look beyond this somewhat traditional resource savings angle and we shift the focus to explore the distributional consequences of food losses and waste reduction using a resource constrained modeling perspective. The impact due to the behavioral shift of each household is therefore explained by two factors. One is the amount of resources saved when the behavioral shift takes place, whereas the other one has to do with the position of households in the food supply chain. By considering the whole supply chain, instead of the common approach based only in reducing waste by consumers, we enrich the empirical knowledge of this issue and improve the quantification of its economic impact. We examine data for three EU countries that present different economic structures so as to have a broader and more robust viewpoint of the potential results. We find that distributional effects are different for consumers and producers and also across countries. Our results could be useful for policymakers since they indicate that policies should not be driven merely by the size waste but rather on its position within the food supply chain. Food losses and waste (FLW) means that food itself and all the resources and sink capacities employed along food supply chain (FSC) are wasted. FLW have adverse impacts on environmental and socioeconomic terms with differences between high- and low-income nations . On one FLW occurs at different stages of the FSC, so identifying the most effective points for governance instruments is of the major importance to tackle this problem . Food loFLW has become a key priority within the worldwide policy circles, being particularly pertinent in the light of the Sustainable Development Goals (SDGs) of the UN, adopted in 2015 as part of the Agenda 2030. This includes Goal 2 \u2018end hunger\u2019 and Goal 12 \u2018responsible consumption and production\u2019. Under the latter, target 12.3 explicitly requires \u201chalving per capita global food waste at the retail and consumer levels by 2030 and reduce food losses along production and supply chains, including post-harvest losses\u201d. In fact, it has been estimated that halving food waste could meet the demand for food of the growing population. To track the progress towards targets 12.3, FAO has developed two indices (Food Loss Index and Food Waste Index). First results have been released in December 2019. However, considerable data challenges in developing these indices still remain [In the same vein, the EU has placed reducing FLW among its top priorities. Already in 2011, the European Commission identified the food sector as key sector in its Roadmap to a Resource Efficient Europe. The reduction of food waste was also a targeted area of the EU action plan for the Circular Economy launched in 2014 . Then, tNotwithstanding that SDG has put FLW back on to the worldwide political agenda, the evidence to inform policymakers on the magnitude, causes, remedies and impacts of FLW remains extraordinarily sparse . Within This work looks further into the distributional consequences of FLW reduction by using a CGE framework. 
It has the capability of capturing the economic impact of reallocating those resources saved by reducing FLW along the FSC in some EU countries. To do so, we implement budget constrained expenditure multipliers using a A Social Accounting Matrix (SAM) is the most common reference database for the implementation of the Computable General Equilibrium (CGE) framework, providing data for economic modeling but also a complete and intuitive structural snapshot of the economy under study. The concept of a circular flow of income, underlying a SAM, means that the database reflects the full process of production, trade, income generation and its redistribution between institutional sectors . This alThe structure of the SAM database is a square matrix in which each account is represented by a row and a column. Transactions are recorded by double entry bookkeeping system of accounting, with income in rows and payment in columns. There are six basic groups of accounts, representative of activities, commodities, production factors and institutional sectors . The final dimensions of the matrix are determined by the level of disaggregation of these groups .AgroSAMs is a set of standardized SAMs with disaggregation of the primary sector for each Member State, providing directly comparable structural information for each economy. These follow the same sectoral concordance as the Eurostat Supply and Use Table, with the exception of the agriculture and food accounts drawn from CAPRI database . Thus, oThere are several studies that present estimates of FLW generated at EU level using various data sources, resulting in different figures for wasted food per capita . This raIn this study, the estimates of are emplScenario 1: Impact on the member states economies analyzed as a result of reducing the avoidable FLW generated by the overall FSC.Scenarios 2 and 3: Impact of reducing the avoidable FLW in WRS and FSS respectively in terms of total output, GDP and employment on the three European economies.Scenario 4: Impact resulting of the abatement of the avoidable portion of food which ends up as being discarded by households in terms of total output, GDP and employment on Spanish, German and Polish economies.Considering the previous information, we put forward four different scenarios to assess the economic impact and the distributional effects of reducing avoidable FLW on the three member states selected:Z is the vector of exogenous accounts4F Amk represents how the income flows from the exogenous accounts are distributed among the endogenous accounts).m usually depends on the type of analysis undertaken, which determines which accounts (exogenous) are the ones explaining the variation of the income in other accounts (endogenous). If changes in the vector of exogenous accounts are denoted as dZ, changes in the income of the endogenous accounts will be expressed asBased on the multiplier theory initiated by and latethj column in M indicates the total income generated in each of the endogenous accounts when a unit of income flows from the exogenous institutions towards the j endogenous account. Adding up each column in M we obtain the standard multipliers The \u03c6) that guarantees the upholding of the budget constraint of the corresponding agent Conversely, if a budget constrain is included in the model, any increase of income to an endogenous account will be followed by a reduction of income to the remaining ones, keeping thus that constraint . 
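To make the multiplier mechanics described above concrete, the short sketch below computes standard SAM accounting multipliers for a toy endogenous partition of three accounts: the matrix of average expenditure propensities A is obtained by dividing each endogenous transaction column by that account's total income, the multiplier matrix is M = (I − A)⁻¹, and an exogenous injection dZ propagates as dY = M·dZ. The three-account structure and all figures are illustrative assumptions and are not taken from the AgroSAMs used in this study.

```python
import numpy as np

# Illustrative sketch of standard SAM accounting multipliers (dY = M dZ with
# M = (I - A)^-1), assuming a toy 3-account endogenous partition
# ; the numbers are
# invented for illustration, not the AgroSAM data used in the paper.

# Transactions among endogenous accounts: T[i, j] = payment from account j to i.
T = np.array([
    [30.0, 20.0, 45.0],
    [60.0,  0.0,  0.0],
    [ 0.0, 40.0, 15.0],
])
y = np.array([150.0, 80.0, 120.0])   # total income (= expenditure) of each account

A = T / y                            # average expenditure propensities (column shares)
M = np.linalg.inv(np.eye(3) - A)     # accounting multiplier matrix

dZ = np.array([10.0, 0.0, 0.0])      # exogenous injection into the first account
dY = M @ dZ                          # induced change in endogenous incomes

print("standard multipliers (column sums of M):", M.sum(axis=0))
print("income change dY:", dY)
```

The budget-constrained variant discussed next additionally forces offsetting reductions elsewhere, so that each agent's total expenditure is held fixed.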
This imUnlike the always positive standard multipliers Z is defined for each agent along the supply chain and at household level, encompassing the corresponding demand of agrifood commodities. A new vector Z\u2032 is obtained by subtracting the injection of income resulting of monetizing the avoidable portion of food waste by each agent along the supply chain and at a household level in each member state selected. According to [In this study, the exogenous vector rding to , food waUnder the Scenario 1 , the PolIn Scenarios 2 and 3, the portion of avoidable food waste established was the same (6.3% of food purchases) but the monetary size of the shock is quite different for each sector, such as the shock is much smaller within WRS than within FSS. In the case of WRS , GermanyThe size of the shock due to reducing food waste by German and Spanish FSS is much greater than in the corresponding WRS . For thoScenario 4 reflects the impact of reducing the avoidable food waste generated by HH . As pointrade-offs occur on the demand side where a reallocation of spending on previously wasted foods causes some producers to be worse off and some to be better off\u201d. To the best of the authors\u2019 knowledge, there are no further studies considering net effects of household behavioral shift. On the other hand, the impact of household food waste reduction is also analyzed by [investments to reduce food waste by manufactures could generate some efficiency gains from improved packaging and reductions in product losses which may even offer net benefits to those firms that uptake food waste reduction technologies base on [Nonetheless, available data does not allow an accurate quantification of these payoffs [The use of budget constrained multipliers allows a better understanding of the impact of FLW reduction on the selected economies. The money saved by such reduction is spent on other commodities following the initial pattern of expenditure of each agent. The impact due to the behavioral shift of each agent is explained by the amount of resources (money) saved but also by the position of each agent in the FSC and therefore the relationship of such agent with the remaining ones embodied within the AgroSAM. In this vein, the behavioral shift of household is of major importance as their decrease of demand of food means a reduction in the activity of all the previous agents along the FSC; while the money reallocated to other activities implies an increase in demand that should be meet with a rise in the production of such activities. These results are consistent with the economic theory of food waste and the results stated by , where tlyzed by showing lyzed by hypothes base on . Nonethe payoffs . Our tim payoffs ,60. FinaAccording to , economiFood waste reduction lessens the misappropriation of economic resources in a world that faces multiple challenges due to the increasing population growth. Wide economic impact of FLW reduction provides essential policy guidance to tailor interventions that target the prevention and reduction of FLW, including the complete FSC in order to become sustainable while improving food security and nutrition. Considering the whole supply chain instead of focusing only in reducing the waste by consumers, which seems to be the approach taken by most industrialized countries, the policies will have the greatest beneficial impact.We have used a SAM model in this research since this type of model has several advantages over other methodologies. 
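As a complement, the sketch below illustrates the budget-constrained expenditure shift on which the scenarios rest: the household cuts its food purchases by the avoidable-waste share and re-spends the saving on the remaining commodities in proportion to its initial expenditure pattern, so total outlay is unchanged and only the composition of the demand vector fed to the SAM model changes. The commodity breakdown and the 6.3% share used here are placeholders, not the country-specific figures estimated in the study.

```python
import numpy as np

# Minimal sketch of the budget-constrained behavioural shift: less food is
# bought, and the money saved is reallocated to the other commodities
# following the initial expenditure pattern, keeping the budget constant.
categories = ["food", "housing", "transport", "services"]   # illustrative only
spend = np.array([300.0, 500.0, 150.0, 250.0])               # initial household expenditure
waste_share = 0.063                                          # assumed avoidable share of food purchases

saving = spend[0] * waste_share                # money freed by wasting less food
new_spend = spend.copy()
new_spend[0] -= saving                         # lower demand for food...

non_food = np.arange(1, len(spend))
shares = spend[non_food] / spend[non_food].sum()
new_spend[non_food] += saving * shares         # ...reallocated proportionally

assert np.isclose(new_spend.sum(), spend.sum())   # budget constraint holds
d_demand = new_spend - spend                      # demand shock for the SAM model
print(dict(zip(categories, d_demand.round(2))))
```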
They are very useful for impact analysis and provide sensible estimates of sectoral and economy wide impacts originating in a change in final demand. They are relatively easy to use and require only a modest amount of training for running the required software. In fact, the hardest question for researchers is the availability of data. When data are available, these methodologies enable the user to quickly conduct certain types of impact analysis. Nevertheless, in regard to further research, the development of a fully Computable General Equilibrium model based as well on a SAM database would provide a better platform for the dynamic representation of economic conditions over the medium and long term. This extension could facilitate a more holistic and complete framework to evaluate the economic impact of food waste reduction. Adaptation to changing market conditions and prices is also of paramount relevance for adequately capturing the behavior of economic agents."} +{"text": "Dystonia is a disabling movement disorder characterized by abnormal postures or patterned and repetitive movements due to co-contraction of muscles in proximity to muscles desired for a certain movement. Important and well-established pathophysiological concepts are the impairment of sensorimotor integration, a loss of inhibitory control on several levels of the central nervous system and changes in synaptic plasticity. These mechanisms collectively contribute to an impairment of the gating function of the basal ganglia which results in an insufficient suppression of noisy activity and an excessive activation of cortical areas. In addition to this traditional view, a plethora of animal, genetic, imaging and electrophysiological studies highlight the role of the (1) cerebellum, (2) the cerebello-thalamic connection and (3) the functional interplay between basal ganglia and the cerebellum in the pathophysiology of dystonia. Another emerging topic is the better understanding of the microarchitecture of the striatum and its implications for dystonia. The striosomes are of particular interest as they likely control the dopamine release via inhibitory striato-nigral projections. Striosomal dysfunction has been implicated in hyperkinetic movement disorders including dystonia. This review will provide a comprehensive overview about the current understanding of the functional neuroanatomy and pathophysiology of dystonia and aims to move the traditional view of a \u2018basal ganglia disorder\u2019 to a network perspective with a dynamic interplay between cortex, basal ganglia, thalamus, brainstem and cerebellum. Dystonia belongs to the group of hyperkinetic movement disorders that is characterized by abnormal postures or patterned and repetitive movements due to sustained or intermittent muscle contractions in the vast majority of patients with dystonia , dystonic signs can most frequently be observed in SCA type 3 (Kuo et al. The traditional view on neural network organization encompasses distinct striato-pallido-thalamo-cortical and cerebello-thalamo-cortical pathways that convergently project to distinct thalamic nuclei and are only integrated at the neocortical level.This view has been challenged by the evidence of direct anatomical connections between basal ganglia and cerebellum in animals (Bostan et al. 
Evidence for a link between the cerebellum and the striatum on the neurotransmitter level derives from the observation that the activation of cerebellar and striatal glutamate receptors, specifically AMPA receptors, induced dystonia in animal models whereas AMPA antagonists reversed this effect and improved dystonia (Fan et al. The clinical key feature of dystonia is an insufficient suppression of undesired movements either during rest or during the execution of a certain task. New evidence suggests that striosomal dysfunction could result in dysregulated dopamine release in the substantia nigra causing an imbalance between the direct and indirect pathway that is associated with impaired inhibition and the occurrence of dystonic movements. This hypothesis, however, has to be proven in future studies. Furthermore, dystonia can no longer be regarded a disorder of the basal ganglia. Recent evidence indicates that the cerebellum is likewise involved in the pathogenesis and that strong interactions between the basal ganglia and the cerebellum are present not only under physiological conditions but also in dystonia Fig.\u00a0. Disynap"} +{"text": "High-resolution ultrasound is preferred as the first-line imaging modality for evaluation of superficial soft tissues, such as the facial muscles. In contrast to magnetic resonance imaging and computed tomography, which require specifically designated planes for imaging, the ultrasound transducer can be navigated based on the alignment of facial muscles. Botulinum toxin injections are widely used in facial cosmetic procedures in recent times. Ultrasonography is recognized as a useful tool for pre-procedure localization of target muscles. In this pictorial review, we discuss the detailed sonoanatomy of facial muscles and their clinical relevance, particularly with regard to botulinum toxin injections. Furthermore, we have summarized the findings of clinical studies that report ultrasonographic imaging of facial muscles. High-resolution ultrasound (US) has emerged as one of the most convenient imaging tools for evaluation of superficial soft tissues ,2. Most Botulinum toxin injections are widely used in recent times for cosmetic dermatologic procedures involving the face ,7. US imIn contrast to other body parts, the subcutaneous layers of the face appear well organized. The superficial musculo-aponeurotic system (SMAS) refers to a fibrous network of inelastic tissues located deep within the subcutaneous tissues, with occasional investment into the underlying muscular layer ,10. SMASThe frontalis muscle originates from the galea aponeurotica (a layer of dense connective tissue that extends over the cranium) and is inserted into the orbicularis oculi muscle. It is innervated by the temporal branch of the facial nerve and receives its blood supply from the supraorbital and supratrochlear arteries. Contraction of the frontalis muscle raises the eyebrows and wrinkles the forehead .The transducer is initially placed in the horizontal plane at one fingerbreadth cranial to the eyebrow. The frontalis muscle covers the frontal bone and is visible beneath the SMAS A 5]. Th. Th5]. TAging is associated with the development of wrinkles over the forehead, perpendicular to the course of the frontalis muscle. Botulinum toxin injections relax the frontalis muscle and are therefore useful to minimize wrinkles . 
It is rThe temporalis muscle originates from the parietal bone of the skull and the superior temporal surface of the sphenoid bone and is inserted on the coronoid process of the mandible and retromolar fossa. It is innervated by the anterior division of the mandibular nerve and receives its blood supply from the deep temporal artery. Contraction of the temporalis muscle elevates and retracts the mandible .The transducer is initially placed along the zygomatic arch in the horizontal plane and is subsequently moved cranially; the temporalis muscle is visualized lying in the temporal fossa A 1]. Th. Th1]. TMyofascial trigger points inside the temporalis muscle can lead to tension headaches. In 2003, McGuigan et al. reportedThe procerus muscle originates from the fascia over the lower portion of the nasal bone and is inserted into the skin overlying the lower forehead between the eyebrows. It is innervated by the temporal branch of the facial nerve and receives its blood supply from the facial artery. Contraction of the procerus muscle depresses the medial end of the eyebrow and wrinkles the glabellar skin .The transducer is placed in the horizontal plane on the lower portion of the forehead between the eyebrows and slightly cranial to the nasion A. The shPatients with progressive supranuclear palsy may present focal dystonia of the procerus muscle (referred to as the procerus sign) along with reduced blinking, lid retraction, and gaze palsy . BotulinThe depressor supercilii muscle originates from the medial orbital rim and is inserted on the medial wall of the bony orbit. It is innervated by the facial nerve and receives its blood supply from the supratrochlear artery. Contraction of this muscle leads to downward movement of the eyebrow .The transducer is placed in the horizontal plane on the middle third of the eyebrow. The depressor supercilii is observed lateral to the procerus muscle beneath the SMAS A. SlightThe depressor supercilii was previously considered an extension/branch of the orbicularis oculi or corrugator supercilii muscle; however, it was subsequently confirmed to be a distinct muscle . The depThe corrugator supercilii originates from the supraorbital ridge and is inserted into the skin over the forehead, near the eyebrow. It is innervated by the facial nerve and receives its blood supply from the ophthalmic artery. Contraction of this muscle pulls the eyebrow downward and medially .The center of the transducer is placed on the middle third of the eyebrow in the horizontal plane. The corrugator supercilii is visualized deep to the depressor supercilii, with its lateral border beneath the orbicularis oculi A. The suThe corrugator supercilii is referred to as the \u201cfrowning muscle\u201d; its contraction results in the formation of vertical wrinkles on the forehead. Botulinum toxin injections into the corrugator supercilii effectively flatten the glabellar region and normalize the contour of the medial eyebrow in patients with thyroid eye diseases .The orbicularis oculi muscle originates from the frontal bone, medial palpebral ligament, and lacrimal bone and is inserted on the lateral palpebral raphe. It is innervated by the temporal and zygomatic branches of the facial nerve and receives its blood supply from the ophthalmic, zygomatico-orbital, and angular arteries. Contraction of this muscles closes the eyelid .The transducer is placed over the eyebrow in the horizontal plane, and the muscle is visualized lateral to the corrugator supercilii A. 
The trThe orbicularis oculi plays an important role in the blink reflex, which is used to evaluate the integrity of the trigeminal and facial nerves. The muscle often serves as the target for upper eyelid blepharoplasty, which is used in the treatment of sunken eyes .The nasalis originates from the maxilla and is inserted into the nasal bone. It is innervated by the buccal branch of the facial nerve and receives its blood supply from the superior labial artery. Contraction of the muscle leads to compression of the nasal bridge and depression of the nasal tip .The transducer is placed in the oblique coronal plane along the nasal cartilage; the nasalis is visualized in its short axis above the cartilage. The transducer can be redirected to the oblique horizontal plane to observe the muscle along its long axis 25]..25].Contraction of the nasalis enlarges the nose and stretches the nostril. Overactivation of the muscle produces bunny lines ; botulinum toxin injections soften and erase these lines .The levator labii superioris alaeque nasi originates from the nasal bone and is inserted into the nostril and upper lip. It is innervated by the buccal branch of the facial nerve and receives its blood supply from the angular branch of the facial and infraorbital branches of the maxillary arteries. Contraction of the muscle elevates the upper lip to expose the upper teeth .The transducer is placed in an oblique horizontal plane that passes through the nasal crease. The muscle is observed lateral to the nasalis and medial to the levator labii superioris A. The trOveractivation of the levator labii superioris alaeque nasi results in a gummy smile, which is characterized by excessive gingival display on smiling. Botulinum toxin injection into this muscle can lengthen the upper lip to increase coverage of the gingiva and can also soften and minimize a prominent nasolabial fold .The levator labii superioris originates from the medial infraorbital region and is inserted into the skin and muscle over the upper lip. It is innervated by the buccal branch of the facial nerve and receives its blood supply from the facial artery. Contraction of this muscle elevates the upper lip .The transducer is placed in the middle of the inferior orbital rim in the horizontal plane, which initially facilitates visualization of the orbicularis oculi, followed by visualization of the levator labii superioris in its short axis, which lies above the infraorbital foramen A. FollowSimilar to the levator superioris alaeque nasi, the levator labii superioris is targeted to treat a gummy smile .The levator anguli oris originates from the maxilla and is inserted into the modiolus. It receives its blood supply from the facial artery and is innervated by the buccal branch of the facial nerve. Its contraction elevates the angle of the mouth .The scanning method is the same as that used for the levator labii superioris which coThe levator anguli oris is used for reconstruction of nasal defects secondary to surgical removal of tumors in this area. The muscle can also be considered as a target for botulinum toxin injections to correct a gummy smile .The zygomaticus minor originates from the zygomatic bone and is inserted on the skin of the upper lip. It is innervated by the buccal branch of the facial nerve and receives its blood supply from the facial artery. Contraction of this muscle elevates the upper lip .The transducer is placed over the lateral inferior corner of the orbital rim in the horizontal plane. 
The origin of the zygomaticus minor can be visualized beneath the orbicularis oculi muscle A. The meThe levator labii superioris is partially covered by the levator labii superioris alaeque nasi and the zygomaticus minor. Botulinum toxin injections into the zygomaticus minor are usually considered to treat a gummy smile or facial asymmetry in patients with excessive upward and lateral displacement of the upper lip .The zygomaticus major originates from the lateral aspect of the zygomatic bone and is inserted into the modiolus of the mouth. It is innervated by the buccal and zygomatic branches of the facial nerve and receives its blood supply from the superior labial branch of the facial artery. This muscle participates in elevation and contraction of the angle of the mouth .The transducer is placed over the inferior lateral edge of the orbital rim in the horizontal plane. The zygomaticus major is usually visualized lateral to the zygomaticus minor A; howeveInfiltration of the botulinum toxin into the zygomaticus major during injection of the adjacent muscles may cause partial lip ptosis. Botulinum toxin injections into the zygomaticus major can be considered for correction of facial asymmetry in patients in whom the angle of the mouth is excessively drawn backward/upward when smiling .The masseter muscle has two heads . The superficial head originates from the anterior two-thirds and the deep head from the posterior third of the zygomatic arch. The masseter muscle is inserted on the lateral surface of the mandibular ramus and angle. It is innervated by the masseteric branch of the mandibular nerve and receives its blood supply from the masseteric artery. Contraction of the masseter causes mandibular elevation and protrusion .The transducer is placed along the zygomatic arch in the horizontal plane and is subsequently moved caudally to visualize the masseter muscle in its short axis A. The trMasseter hypertrophy can occur secondary to bruxism or habitual tooth grinding, which may lead to a square face (widening of the lower third of the face). Botulinum toxin injections into the mandibular insertion of the masseter muscle are useful to manage teeth grinding and jaw contouring, although caution is warranted to avoid injury to the parotid gland .The orbicularis oris originates from the medial aspect of the maxilla and mandible, the perioral skin/muscles, as well as the modiolus, and is inserted into the skin and mucosa of the lip. It is innervated by the buccal branch of the facial nerve and receives its blood supply from the facial, maxillary, and superficial temporal arteries. Contraction of this muscle leads to compression of the mouth and protrusion of the lip .The transducer is placed over the lower philtrum and labiomandibular crease in the horizontal plane. The orbicularis oris appears as a thin hypoechoic band between the two layers of connective tissue A 36]. T. T36]. TBotulinum toxin injections into the orbicularis oris can be considered to decrease perioral vertical rhytids. A small volume of the toxin should be injected into the superficial portion of the muscle to avoid impairment of phonation and sucking functions .The buccinator muscle originates from the alveolar process of the maxilla, the buccinator ridge of the mandible, and the pterygomandibular raphe. It is inserted onto the modiolus, and its fibers blend with those of the orbicularis oris. It is innervated by the buccal branch of the facial nerve and receives its blood supply from the buccal artery. 
Its contraction compresses the cheek against the molar teeth, which facilitates whistling .The transducer is placed in the horizontal plane between the zygomatic arch and the mandible to initially visualize the masseter in its short axis . The traFacial synkinesis, defined as inappropriate and inadvertent movements of the facial muscles during certain voluntary facial expressions, is a common sequela of facial nerve palsy. The buccinator muscle is commonly involved in facial synkinesis, which can be treated by botulinum toxin injections .The risorius muscle originates from the parotid fascia and buccal skin and is inserted on the modiolus. It is innervated by the buccal branch of the facial nerve and receives its blood supply from the superior labial branch of the facial artery. Its contraction extends the angle of the mouth laterally .The transducer is initially placed in the horizontal plane to visualize the masseter muscle in its short axis . The traAccidental infiltration of botulinum toxin into the risorius may occur during injection into the masseter muscle. Caution is warranted to avoid facial asymmetry.The mentalis muscle originates from the anterior mandible and is inserted into the chin. It is innervated by the mandibular branch of the facial nerve and receives its blood supply from the inferior labial branch of the facial artery and the mental branch of the maxillary artery. Contraction of the mentalis elevates the chin and results in lower lip protrusion .The transducer is placed over the midline of the chin in the horizontal plane. The mentalis muscle is identified in its short axis above the mandible A. The trHereditary geniospasm, a rare movement disorder, is characterized by episodic involuntary movements of the mentalis muscle . MentaliThe depressor labii inferioris originates from the oblique line of the mandible and is inserted into the integument of the lower lip . It is iThe transducer is initially placed over the midline of the chin to visualize the mentalis muscle and is subsequently relocated more laterally to identify the depressor labii inferioris above the lateral edge of the mentalis A 3]. Th. Th3]. TOveractivation of the depressor labii inferioris may cause a droopy appearance of the face secondary to lowering of the lower lip; intramuscular botulinum toxin injections are useful in such cases .The depressor anguli oris originates from the oblique line of the mandible and is inserted into the modiolus . It is iThe transducer is placed over the midline of the chin and is subsequently moved laterally. The mentalis muscle is visualized, followed by the depressor labii inferioris and depressor anguli oris A. The trOveractivity of the depressor anguli oris lowers the mouth commissure, which leads to the expression of sadness or frustration. Botulinum toxin injections elevate the angle of the mouth and improve an individual\u2019s smile .Although the present article is a pictorial essay and narrative review, we performed a systematic literature search of the PubMed, Medline, and Web of Science databases (without language limitations) from inception to January 2022 to identify articles relevant to US imaging of facial muscles. The following keywords and their combinations were used: ultrasound, sonography, ultrasonography and face, facial muscles. The following search strategy had been used for literature search: (\u201cultrasound\u201d or \u201cultrasonography\u201d or \u201csonography\u201d) AND . 
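Because the second operand of the Boolean string above was truncated in the text, the sketch below reconstructs it from the keyword list given earlier (face, facial muscles) and shows how such a strategy could be executed programmatically against PubMed through the public NCBI E-utilities esearch endpoint. The query string is therefore an approximation of the reported strategy, and the screening against the inclusion and exclusion criteria described next is not automated here.

```python
import requests

# Sketch of running the reported search strategy against PubMed through the
# public NCBI E-utilities "esearch" endpoint. The second bracketed group is
# reconstructed from the stated keywords and is an assumption.
ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
term = ('("ultrasound" OR "ultrasonography" OR "sonography") '
        'AND ("face" OR "facial muscles")')

resp = requests.get(ESEARCH, params={
    "db": "pubmed",
    "term": term,
    "retmax": 200,       # number of PMIDs returned per request
    "retmode": "json",
}, timeout=30)
resp.raise_for_status()

result = resp.json()["esearchresult"]
print("records found:", result["count"])
print("first PMIDs:", result["idlist"][:10])
```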
The inclusion criteria were cross-sectional, case-control, cohort and randomized controlled studies that use ultrasound imaging to visualize the group of facial muscles. The exclusion criteria comprised (1) nonhuman studies, (2) case report or series, (3) review articles and (4) studies that only investigate a single muscle. Our review only included articles that described more than one facial muscle.The process of the literature search was detailed in the In 2014, Volk et al. used US In 2016, Volk et al. investigIn 2019, Abe et al. investigIn 2021, Hormazabal-Peralta et al. investigWe had several viewpoints regarding sonoanatomy described in the seven included articles. First, the scanning techniques for the large-sized facial muscles and adjacent bony landmarks were consistent across the enrolled studies. There were some variations in the imaging methods for small-sized muscles such as the levator labii superioris and zygomaticus minor. Second, most of the enrolled studies used the short-axis views to examine the facial muscles, whereas their longitudinal fiber arrangement could not be clearly depicted. In our protocol, we employed the short- and long-axis views to visualize the target muscles, which could compensate the weakness of the methods reported in the previous studies. Third, during the literature search, we did not identify studies comparing US imaging with other imaging tools (like magnetic resonance imaging). A prospective trial would be needed to investigate the validity of US imaging for assessing the texture of the facial muscles in comparison with other imaging tools.Based on a review of the included articles, we observed that US may serve as a useful tool to quantify facial muscle thickness, as well as to evaluate muscle echotexture. Variations in the reliability of measurements across different muscles may be attributable to changes in scanning methods, which highlights the importance of a standardized and stepwise evaluation protocol (as described in this article). Although the botulinum toxin is widely used for rejuvenation and in cosmetic dermatology, few studies have investigated the role of US guidance in comparison with landmark-based injections. The facial muscles are thin and superficially located; therefore, in our opinion, the use of high-frequency (hockey-stick) transducers and an out-of-plane injection technique may be useful. Future studies are warranted to conclusively establish the clinical effectiveness and safety of such interventions.Several limitations of using US guidance for injecting facial muscles should be acknowledged. First, utilization of US guidance for most facial muscles may be excessive because the majority of facial muscles can be identified with anatomical surface landmarks, such as most movement disorders . Second,High-resolution US enables the delineation of facial muscles, and any asymmetry and targets for possible interventions can be promptly evaluated. Botulinum toxin injections are usually performed based on surface anatomy, and currently, US guidance is rarely used for this purpose. Further prospective studies are warranted to establish the feasibility and advantages of US imaging and guidance in the management of facial (muscle) disorders. Clinicians should consider the complementary role of US and electrodiagnostic tests in these patients. 
Notably, US guidance can facilitate accurate needle insertion during electromyography of the aforementioned thin/small muscles."} +{"text": "In Lebanon and many other countries where structures are vulnerable to impact loads caused by accidental rock falls due to landslides, specifically bridges with hollow core slab, it is mandatory to develop safe and efficient design procedures to design such types of structures to withstand extreme cases of loading. The structural response of concrete members subjected to low velocity high falling weight raised the interest of researchers in the previous years. The effect of impact due to landslide falling rocks on reinforced concrete (RC) slabs has been investigated by many researchers, while very few studied the effect of impact loading on pre-stressed structures, noting that a recent study was conducted at Beirut Arab University which compared the dynamic behavior of reinforced concrete and post-tensioned slabs under impact loading from a 605 kg impactor freely dropped from a height of 20 m. Hollow core slabs are widely used in bridges and precast structures. Thus, studying their behavior due to such hazards becomes inevitable. This study focuses on these types of slabs. For a better understanding of the behavior, a full scale experimental program consists of testing a single span hollow core slab. The specimen has 6000 mm \u00d7 1200 mm \u00d7 200 mm dimensions with a 100 mm cast in a place topping slab. Successive free fall drops cases from 14 m height will be investigated on the prescribed slab having a span of 6000 m. This series of impacts will be held by hitting the single span hollow core slab at three different locations: center, edge, and near the support. The data from the testing program were used to assess the structural response in terms of experimental observations, maximum impact and inertia forces, structural damage/failure: type and pattern, acceleration response, and structural design recommendations. This research showed that the hollow core slab has a different dynamic behavior compared to the post tensioned and reinforced concrete slabs mentioned in the literature review section. Lebanon is characterized by its varied terrain . With the increase in population, people moved to live in the mountain region, increasing the percentage of inhabitants there. Lebanon is one of the countries where many landslides are caused because of heavy rain falling in the winter, particularly in mountain areas, resulting in rock falls threatening lives and causing severe damage to the infrastructure and to the residential buildings. Considering the advantages of precast pre-stressed concrete structures, they were introduced in the industry for construction purposes such as covering large spans and speed of erection compared to conventional RC structures. Noting that the most commonly used bridge deck system used in Lebanon mountain areas are made up using precast hollow core units and because of the high probability of natural catastrophes that may highly cause failure for this structures, it is mandatory to study the behavior of hollow core slab under impact loading and finally provide design recommendations for manufactures and structural enhancement for a better performance under such extreme cases of loading. 
This complex phenomenon drew the attention of many researchers noting that structural engineers do not take into consideration such type of loading cases though the international design codes used.Numerous studies have been conducted, experimentally and numerically, on the behavior of reinforced concrete and pre-stressed structures when subjected to impact loads.Reference : The aimReference : This reReference studied Reference studied References ,6,7,8 nuThe current study intended to extend understanding and to get a deep insight into the structural response of pre-stressed slabs\u2019 behavior, namely the hollow core slabs when subjected to low-velocity successive impacts.The hollow core slab unit was selected based on the thickness to span and characteristic service loads where a 200 mm hollow core slab with a 6 m long span can support 10.5 KPa according to the manufacturer.The experimental test in this paper is part of an ongoing doctoral program at the department of structural engineering in Beirut Arab University, studying the structural response of concrete structures under impact loading.-Tested SpecimenThe analyzed slab is a structure 6000 mm in length by 2400 mm (width) and 200 mm thickness as illustrated is The slab system consists of two precast hollow core units. As illustrated in The topping slab is made of concrete with a 45 MPa compressive strength and includes steel reinforcement bars as illustrated in Construction materials properties were grouped in The bars are of 12 mm diameter spaced at 200 mm in both directions with no use of shear connectors or any other type of shear reinforcement. They have an average modulus of elasticity and tensile strength of 200 GPA and 515 MPa, respectively.-Drop WeightThe mass of the drop-weight used in the experimental program is 600 kg . The droThe impact loads were generated by what was essentially a free-fall condition of the drop weight. The cable of the moving crane attached to the steel ball ran through the center of the drop-weight and was used to guide the weight during the fall.The hollow core slab was tested under successive impact loading at three different locations: Slab center, edge, and near the support as illustrated in The hollow core slab was mounted by four electronic accelerometer fixed at different positions as shown in The experimental analysis consists of simulating a slab subject to a series of successive impacts. The tests involve dropping a steel ball of 600 kg as illustrated in The drop weight is lifted to the dropping height by a crane as shown in The generated accelerations with respect to time were recorded and the data were saved through the data acquisition system. Then these values were used to derive the displacements and calculating the impact force, as discussed in the following section. The structural assessment of the specimen response to the impacts such as damaged zones, failure pattern, cracks detections, and propagation were registered after each impact, and results were then analyzed.Several measurements have been made on the analyzed slab. 
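Before turning to the measured response, the short calculation below checks the nominal impact conditions implied by the test set-up (a 600 kg mass in essentially free fall from 14 m) and shows one standard way of turning two successive acceleration peaks into a damping ratio via the logarithmic decrement. The paper's own Equations (3)–(6) are not reproduced in the text, so the damping step and the peak values used are illustrative assumptions rather than the authors' formulas.

```python
import math

# Back-of-envelope check of the impact conditions described above.
g = 9.81            # m/s^2
m = 600.0           # kg, drop weight
h = 14.0            # m, drop height

v_impact = math.sqrt(2 * g * h)      # velocity just before impact
e_impact = m * g * h                 # energy delivered to the slab
print(f"impact velocity ~ {v_impact:.1f} m/s, energy ~ {e_impact/1000:.1f} kJ")

# Damping ratio from two successive positive peaks a1 > a2 of the decaying
# acceleration record, using the logarithmic decrement delta = ln(a1/a2).
a1, a2 = 12.0, 7.5                   # example peak accelerations (arbitrary units)
delta = math.log(a1 / a2)
zeta = delta / math.sqrt(4 * math.pi**2 + delta**2)
print(f"damping ratio ~ {zeta:.3f}")
```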
We present in this paper the results related to the (i) damage analysis, (ii) dynamic behavior in terms of acceleration response, and damping ratio, (iii) shear strength, inertia force, and impact force grouped in --Impact crater diameter 28 cm;-No cracks in the topping slab;-No punching with a 2 cm impact penetration through the slab thickness.Concrete topping slab damage assessment:-No global failure in the hollow core units;-Local damage directly beneath the impact represented by the spalling of the 25 mm concrete cover from the bottom side as illustrated in the figure below;-Overall damages were concentrated in the lower unreinforced flange of the hollow core unit;-No damage in the upper flange of the hollow core unit was assessed (Topping slab and hollow core unit worked compositely);-There is no separation between the concrete topping and the hollow core unit;-No de-bonding between the embedded strands and the concrete host.Hollow core slab damage assessment:First Impact Experimental Observations :Concrete--Concrete fracture at the impact location with damaged zone;-Transversal crack passing through the depth was initiated in the topping slab and the hollow core unit;Concrete topping slab damage assessment:-Severe shear damage normal to the slab caused a cut off for the first three cores of the hollow core unit;-Total concrete facture of the web, top, and bottom flanges of the hollow core unit;-Normal and transversal contact separation between the concrete topping and the hollow core units;-The first three tendons were de-bonded from the hollow core unit because of the concrete facture of the web, top, and bottom flanges of the hollow core;-The total stiffness of the hollow core unit was highly reduced by the loss of three of its cores with their strands;-The composite behavior between the concrete topping and the hollow core unit in terms of flexural rigidity was highly reduced because of the surface contact separation;-As expected, the edge impact caused a much higher damage in both the concrete topping and hollow core unit as compared to the one done by the first impact at the center.Hollow core slab damage assessment:Second Impact Experimental Observations :Concrete--Impact crater diameter 35 cm;-No cracks in the topping slab;-Concrete topping was punched through its depth (high shear force near the support);Concrete topping slab damage assessment:-Large damage distributed along the whole parts of the hollow core units;-Major concrete facture of the web, top and bottom flanges of the hollow core unit;-The brittle damage in the top and bottom thin flanges connecting the webs leads to a total loss of hollow core slab stiffness;-The brittle damage in the web connecting the two flanges leads to a total loss of hollow core slab stiffness mainly when the lower part where the pre-stressing strands are embedded separates from the gross the section of the slab;-The brittle failure of the top and bottom thin flanges governs the global behavior of the slab.Hollow core slab damage assessment:Third Impact Experimental Observations :ConcreteThe structural damage in the concrete topping and in the hollow core slab was assessed as following:Experimental results of the three impacts are summarized in Damping ratio was calculated by the use of the first two peak accelerations and according to the Equation (3):Equation (4) from the PCI code was used to calculated the shear strength of the composite slab and found to be equal to 121.27 KN.And Equations (5) and (6) were used to calculate the maximum 
impact force and the dynamic magnification factor respectively for each impact.This paper studied the behavior of single span hollow core slab under successive impact load at three different locations: center, edge, and near the support under a 600 kg free falling steel ball from a height of 14 m. The structural response of the slab in terms of damage assessment, acceleration response, damping, and impact force has been studied. The results thus obtained were mentioned and the following conclusions were drawn:-Structural damages in hollow core units cannot be repaired or strengthened.-Voids, thin web, and thin flanges are considered the weak points of the hollow core slab.-The presence of strands has no participation in the section load resistance if the continuity of the webs is lost by the facture of one or both flanges.-The slab system (HC + Topping) showed a lower capacity in terms of damage and acceleration response for impact at the edge compared to the impact at the slab center.--The severe damage in the slab body led to a vibration with a low acceleration amplitude directly after the impact;-The concrete topping as solid section and the hollow core units as voided section vibrate in a different manner under impact loading;-The concrete topping and the hollow core units vibrate independently from each other under vertical excitation because of the absence of any type of connectors enforcing the total thickness to work as one unit;-The reduction in slab capacity to resist impact can be demonstrated by the ratio of the maximum acceleration response between the first and the third impact The decaying function of the acceleration response represented by the accelerometers readings indicate an unconventional damped vibration. This response can be explained by the following reasons:-The reinforced concrete topping with high compressive strength (45 Mpa) helped the hollow core slab section to carry more impacts.-Concrete solid section behaves in a much better way than the hollow section in terms of structural damage and cracks generation.-Adding shear connectors can enhance the structural response of the slab system (Topping + HC unit) to avoid the contact separation specially in the case of impact at the slab edge.-Code limitations must be provided to select the minimum flange and web thicknesses in addition to the maximum void diameter in hollow core units.-Filling material such as foam can be used to absorb a part of the energy induced in the body of the hollow core unit to mitigate the brittle fracture of the thin flanges, therefore enhancing the structural performance of the slab system.-The presented damping ratio showed the vulnerability of the impact load location in the hollow core slab.-The reduction in slab impact capacity after successive impacts can be measured by the ratio of impact resistance factor = F1/(F2) = 13,868/1600 = 8.67."} +{"text": "A 16-year-old man presented to our Emergency hospital for a major traumatic scalp injury of pubic and scrotal region; this occurred as a consequence of an autonomous motorcycle accident. Early surgical esploration showed the bilateral section of spermatic cord in correspondence of the abdominal-pelvic tract; a conservative management was adopted and the patient underwent bilateral intratesticular sperm biopsy for cryopreservation. 
In the presence of an extensive scalp injury of the pubic and scrotal skin, early surgical exploration should be performed, and the multidisciplinary decision making should take into account comorbidity, age and the impact on quality of life. Male genital injuries occur relatively rarely owing to the isolated location of the genitals, and in most cases they are caused by road traffic and machinery-related accidents; rarely, peno-scrotal degloving injury with total amputation of the scrotum has been reported. Although injuries of the genitalia are non-lethal, the lesions can be incapacitating and psychologically overwhelming to patients if not treated appropriately. A case of bilateral spermatic cord section as a consequence of an autonomous motorcycle accident is reported here. A previously well 16-year-old man presented after sustaining a major traumatic injury to the pubic and scrotal region; this occurred as a consequence of an autonomous motorcycle accident during a motocross race about 12 hours before hospital admission. Physical examination revealed an extensive scalp injury of the pubic and scrotal skin involving both testes and the penis. Genitourinary injury is present in approximately 10% of cases of abdominal trauma. In our case, the first to our knowledge reported in the literature, an extensive scalp injury of the pubic and scrotal skin involving both testes and the penis presented with only a bilateral spermatic cord section. In the presence of an extensive scalp injury of the scrotal skin involving both testes and the penis, early surgical exploration should be mandatory; in the presence of an isolated bilateral spermatic cord section, the multidisciplinary decision making should take into account comorbidity, age and the impact on the quality of life of the patient. All procedures performed in this study involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. Written informed consent was obtained from the patient for his anonymized information to be published in this article. The authors declare that they have no conflicts of interest. This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors."} +{"text": "Revised legislation and bans on imports of waste electrical and electronic equipment (WEEE) into many Asian countries for treatment are driving the need for more efficient WEEE fractionation in Europe by expanding the capacity of treatment plants and improving the percentage recovery of materials of economic value. Data from a key stakeholder survey and consultation are combined with the results of a detailed literature survey to provide weighted matrix input into multi-criteria decision analysis calculations to carry out the following tasks: (a) assess the relative importance of 12 process options against the 6 industry-derived in-process economic potential criteria, that is, increase in product quality, increase in recycling rate, increase in process capacity, decrease in labour costs, decrease in energy costs and decrease in disposal costs; and (b) rank 25 key technologies that have been selected as being the most likely to benefit the efficient sorting of WEEE.
The results indicate that the first stage in the development of any total system to achieve maximum economic recovery of materials from WEEE has to be the selection and application of appropriate fractionation process technologies to concentrate valuable components such as critical metals into the smallest possible fractions to achieve their recovery while minimising the disposal costs of low-value products. The stakeholder-based study has determined the priority for viable technical process developments for efficient WEEE fractionation and highlighted the economic and technical improvements that have to be made in the treatment of WEEE. Waste electrical and electronic equipment (WEEE) is a large and increasingly diverse type of waste with an estimated global yield of approximately 45 million tonnes per annum . ParticuReport on Critical Raw Materials for the EU triboelectrostatic separation under apSeveral sorting technologies have been applied to the removal of metals from WEEE , 2006. TA number of remote sensing technologies that are sufficiently fast in detecting and removing target particles have been validated for the sorting of WEEE. When these technologies are used, high levels of differentiation that can enhance the separation and value of recovered material can be achieved . Sensor-Hydrometallurgical methods have been used in the recovery of metals from WEEE but haveWith increasing tonnages of WEEE requiring treatment, and recent import bans on wastes by Asian and Far Eastern countries, there is a need within the EU to develop process technologies for efficient WEEE fractionation and separation that take into account technical and legislative requirements in the EU, including the safe removal of any hazardous materials. We now report on the analysis of stakeholder-based survey data combined with a detailed literature survey to prioritise viable WEEE process development options and rank technologies that concentrate valuable materials in the smallest possible fractions to permit maximum economic recovery and separation of commercially viable materials.The methodology used to obtain stakeholder data to enable the evaluation of viable WEEE process development options, particularly for e-waste, consisted of four parts: (a) a detailed literature analysis of both technologies that have already been developed and those under development that can be applied to separating the components of WEEE; (b) selection of key stakeholder participants; (c) design of a questionnaire to collect stakeholder data on process development options for WEEE; and (d) data handling and analysis.A detailed review of the primary and secondary literature and technical documents using critical citation keyword searches was carried out to determine what relevant fractionation technologies are available to the WEEE treatment industries. The aim of the review was to identify technologies that were likely to achieve the efficient fractionation of WEEE, particularly e-waste, within the constraints of the WEEE Directive to maximise material recovery and value. The technologies and methods identified were classified into four main groups: comminution; direct sorting; sensor-based sorting; and hydrometallurgical and chemical methods. The results from the literature review were analysed in detail to produce a subset of 25 current and emerging technologies identified as being of particular relevance to the separation and fractionation of WEEE in terms of achieving selective recovery of economically viable materials. 
Consultation with the 33 key stakeholders led to a classification of these technologies in terms of the key process options (chosen for their function) likely to effect the successful separation and fractionation of high-value materials from WEEE: metals, engineered plastics, glass and composite materials of value. A set of key process options was then identified through a combination of the literature survey and detailed discussions with relevant industry stakeholders: removal of specific molecules; separation of component materials of composites; comminution; sizing; removal of ferromagnetic metals; removal of non-ferromagnetic metals; removal of plastics; removal of glass and ceramics; sorting ferromagnetic metals; sorting non-ferromagnetic metals; sorting plastics; and sorting glass and ceramics.The 33 stakeholders were selected from the network of the authors\u2019 contacts within the European WEEE management sector, and all of them held senior management positions (as directors or operations managers) with responsibility for managing automated WEEE sorting processes in Europe within their organisations. All the stakeholders were users of relevant technologies for the separation of target materials from WEEE, had a sound understanding of the relevant economic and technical aspects of such processes and had responsibility for key decisions with regard to investment in the company\u2019s technology portfolio.Engagement with the stakeholders included discussions on (a) the findings from the literature survey to assess their opinion on the selection of the technologies most likely to be successful in separation of materials, (b) the relevant key process options for WEEE fractionation and (c) the economic process criteria that should be used in the data analysis. Each stakeholder was invited to participate in the survey, which was designed to determine the practitioner\u2019s point of view with regard to the relative importance of technologies available to the WEEE treatment industries. Direct input from the 33 stakeholders obtained in this part of the research resulted in endorsement of the concept of the research, the selection of technologies and process options to be used and the measures against which the options should be judged. The 33 stakeholders also decided on what cost priorities they regarded as important in the technology survey, expressed in A questionnaire was designed to obtain meaningful and unbiased data from the key stakeholders who had responsibility for managing automated WEEE sorting processes in Europe. To eliminate bias, the stakeholders selected for inclusion in the survey represented the different activities and priorities that exist across the electrical and electronics recycling industries, for example, companies that focus on specialist processes such as plastics recovery or sorting of non-ferrous metals, and those that treat and recycle all fractions and components of WEEE. The questionnaire consisted of two parts and it was supplied to relevant stakeholders in German or English, as appropriate, for completion either by means of an interview or email response, and the survey was conducted over a period of four months.Of the 33 stakeholders, 21 confirmed their agreement to participate in the survey, and the surveys were sent by email to each stakeholder for completion either by email or via a telephone or face-to-face interview. Of the 21 stakeholders, 10 responded in full, matching their weightings against the agreed criteria. 
The remaining 11 stakeholders provided partial responses that added to the information provided in the full responses.The first part of the questionnaire was designed to gain an understanding of the nature of the activities of the stakeholder and an insight into the WEEE collection streams and the separated fractions of secondary raw materials derived from WEEE by individual stakeholder organisations. The following set of open-ended questions was used:What collection streams of WEEE do you accept?What module/component fractions of WEEE do you accept?What material fractions of WEEE do you accept?Which of the material streams that you produce incur disposal costs?Which of the material streams that you produce generate revenue?In your answers to questions 1\u20135, which of the materials mentioned do you expect to have relatively high potential for economic improvement?In the second part of the questionnaire, stakeholders were required to complete two data input sheets providing relative numerical scores between 1 and 10 for (a) the 12 key process options, chosen to represent the methods used in the separation of WEEE fractions to optimise total value recovery, matched against a set of 6 criteria that describe economic potential , and (b)Questions 1 to 6 in the first part of the questionnaire are evaluated by ranking the most common keywords found in the stakeholder responses on the basis of the number of times they appear. Identified keywords are counted only if they occur more than once throughout all stakeholder responses, and repetition of any keyword by the same stakeholder in answer to the same question is counted only once. Because questionnaires were answered by stakeholders from different European countries, in selecting keywords the differences in definitions of WEEE collection streams in individual national member state countries were taken into account.The weighted matrix information supplied by individual stakeholders in the second part of the questionnaire provides data that can be analysed mathematically to aid management decisions. Examples of this type of approach in waste management decision-making include use of a weighted matrix analysis to develop a hierarchy of waste management options for the treatment of the organic waste fraction of municipal waste , benchmaThe numerical data obtained from stakeholders were analysed using multi-criteria decision analysis (MCDA), an analytical approach designed to deal with the difficulties that human decision-makers have in handling large amounts of complex information in a consistent way to identify best options . 
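To make the keyword-ranking rule for questions 1 to 6 concrete, the following minimal sketch shows one way such counting could be implemented; the function name, candidate keyword list and example responses are hypothetical and are not drawn from the study data.

```python
from collections import Counter

def rank_keywords(responses, keywords):
    """Rank keywords across open-ended questionnaire responses.

    responses: list of (stakeholder_id, question_no, answer_text) tuples.
    keywords:  iterable of candidate keywords to count.

    A keyword is counted at most once per stakeholder per question, and is
    retained in the ranking only if it occurs more than once across all
    stakeholder responses, mirroring the counting rule described above.
    """
    counts = Counter()
    seen = set()  # (stakeholder, question, keyword) triples already counted
    for stakeholder, question, text in responses:
        text_lower = text.lower()
        for kw in keywords:
            key = (stakeholder, question, kw)
            if kw.lower() in text_lower and key not in seen:
                seen.add(key)
                counts[kw] += 1
    # Keep only keywords that appear more than once overall, ranked by frequency.
    return [(kw, n) for kw, n in counts.most_common() if n > 1]

# Hypothetical example: three stakeholders answering question 1.
responses = [
    ("S1", 1, "We accept small household appliances and IT equipment."),
    ("S2", 1, "Mainly IT equipment, plus IT equipment from offices."),
    ("S3", 1, "Cooling appliances and IT equipment."),
]
print(rank_keywords(responses, ["IT equipment", "cooling appliances", "lamps"]))
# [('IT equipment', 3)]
```

Keying the deduplication on the (stakeholder, question, keyword) triple enforces the rule that repetition of a keyword by the same stakeholder in answer to the same question is counted only once.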
The guiding principle of MCDA is that each option has to be judged individually against each criterion, and each criterion is also assigned a weighting. In this work, the MCDA was carried out by normalising the data supplied by stakeholders for both the economic potential criteria and the weightings they attributed to the relative importance of each criterion using equations (1) and (2). The normalised economic potential scores and the normalised criterion weightings were then combined to give a dimensionless priority ranking score for each key process option. The 25 technologies identified as most likely to be successful for use in one or more of the 12 key process options were carried forward to the ranking analysis. Data obtained from a detailed relevant literature survey and key stakeholder consultations have been combined with those from a stakeholder questionnaire and constitute the input into the MCDA calculations. The MCDA calculations involved (a) ranking of the 12 key process options against a set of stakeholder industry-derived process economic criteria, which were chosen for their potential importance in achieving optimum fractionation success, and (b) ranking of the 25 technologies most likely to benefit one or more of the 12 key process options. The numerical input data required in the evaluation of the rankings were provided by stakeholders in the form of weighted matrix information on the two parameters described above. The data obtained from the stakeholder survey have been combined with the findings from a detailed literature review and stakeholder consultation exercise, and the resulting ranking of the economic criteria is: decrease in disposal costs > increase in product quality = increase in recycling rate > increase in process capacity > decrease in labour costs > decrease in energy costs. Although the criteria may not be totally independent, for example, the three criteria increase in product quality, increase in recycling rate and decrease in disposal costs may have mutual dependency, it is clear that in any process the cost of disposal of low-quality materials has a major effect on the economics of efficient recycling of WEEE, leading to decrease in disposal costs having the highest ranking. Achieving a reduction in disposal costs is seen as the most desirable of the economic factors affecting recovery of value from WEEE, but there will always be a conflict between the disposal and process costs involved; for example, automated removal of black plastics on an industrial scale, which would reduce the disposal costs of low-grade plastics created as a by-product from separation processes designed to recover high-value engineering plastics, is not currently carried out. The information from the stakeholder numerical data was used as input into an MCDA calculation of the normalised rankings. Although the results of the study of the technologies most likely to achieve optimal fractionation of WEEE for maximum material recovery for recycling are specific to the European situation, the methodology developed is general, and with appropriate stakeholder input can be applied to any region, country or recovery operation.The treatment of large tonnages of the heterogeneous waste streams from end-of-life electrical and electronic goods has become an increasing problem for the recovery and recycling industries.
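As a worked illustration of the MCDA step described above, the sketch below assumes a standard weighted-sum (simple additive weighting) formulation: stakeholder scores are normalised within each criterion, the criterion weightings are normalised to sum to one, and the dimensionless priority ranking score for each process option is the weighted sum of its normalised scores. This is an assumption about the form of equations (1) and (2) rather than a reproduction of them, and all numbers, array shapes and names in the example are illustrative only.

```python
import numpy as np

def mcda_priority_scores(scores, weights):
    """Weighted-sum MCDA sketch.

    scores:  (n_options, n_criteria) array of stakeholder scores (1-10 scale).
    weights: (n_criteria,) array of criterion importance weightings.

    Returns a dimensionless priority score per option; higher means a
    higher-ranked process option under the assumed weighted-sum model.
    """
    scores = np.asarray(scores, dtype=float)
    weights = np.asarray(weights, dtype=float)
    normalised_scores = scores / scores.max(axis=0)   # normalise each criterion column
    normalised_weights = weights / weights.sum()      # weightings sum to one
    return normalised_scores @ normalised_weights

# Illustrative example: 3 process options judged against 3 of the 6 criteria
# (columns: product quality, recycling rate, disposal costs).
scores = [[7, 6, 9],
          [5, 8, 6],
          [9, 4, 7]]
weights = [0.8, 0.8, 1.0]   # disposal costs weighted highest, as in the survey ranking
priorities = mcda_priority_scores(scores, weights)
ranking = np.argsort(priorities)[::-1]
print(priorities.round(3), "best option index:", ranking[0])
```

The same calculation extends directly to the full 12-option by 6-criterion weighted matrix supplied by the stakeholders.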
Recovery of value from these wastes, however, is complicated both by their complexity and by national and international legislation applied to the treatment processes used and to any materials derived from these. In Europe, there is now a requirement to expand the capacity of treatment plants and improve the percentage recovery of materials with economic value. The need to concentrate valuable materials in WEEE into the smallest possible fractions to maximise recovery potential is now an important issue; for example, this will ensure that treatment after fractionation makes it possible to remove hazardous flame-retardant components from plastics, and also that specific components such as critical metals are concentrated into very small fractions to achieve maximum recovery potential. For reasons such as these, efficient separation of components containing metal from plastics has to be a crucial stage in any WEEE recovery process.increase in product quality, increase in recycling rate, increase in process capacity, decrease in labour costs, decrease in energy costs and decrease in disposal costs), which were selected by the key stakeholders as being the most relevant in-process cost considerations that would lead to the optimal fractionation of WEEE. The results of this normalised analysis of the assessment of the relative importance that stakeholders attach to the economic criteria show that these are ranked as follows: decrease in disposal costs > increase in product quality = increase recycling rate > increase in process capacity > decrease in labour costs > decrease in energy costs. Although the criteria may not be totally independent it is clear that in any process, the cost of disposal of low-quality materials will have a major effect on the economics of efficient recycling of WEEE, and that any fractionation processes developed must take account of this. The 25 technologies were also ranked using a combination of the results of the process option rankings and a stakeholder-determined relevance indicator.The initial stage in the development of total systems to achieve maximum economic recovery of materials from WEEE has to be the selection and application of appropriate fractionation process technologies. A combination of stakeholder-based survey data with a detailed literature survey has been used in this study to provide weighted matrix input data into MCDA calculations to prioritise and rank both process options and technologies available that could achieve optimal fractionation of WEEE with the aim of maximising material recovery for recycling. The literature review and stakeholder consultation survey permitted the ranking of 12 key process options and 25 technologies against the following six in-process economic criteria (This stakeholder-based study has (a) determined the priorities for viable technical process developments for efficient WEEE fractionation for maximum recovery of high-quality material, and (b) highlighted the economic and technical improvements in WEEE treatment that will have to be made at a time when European electronics recyclers will be required to expand their facilities because of increased volumes of WEEE and new import bans on wastes by Asian countries. 
Although it is important that fractionation processes have the ability to recover metal content from the smallest possible fractions to achieve maximum economic recovery, it is also important, as far as the European situation is concerned, that the processes are capable of recovering high-quality, economically recyclable plastic fractions that are free from hazardous flame-retardant materials. The results of the study of the technologies most likely to achieve optimal fractionation of WEEE for maximum material recovery for recycling are specific to the European situation, but the methodology developed is general, and with appropriate stakeholder input it can be applied to any region, country or recovery operation."}
Gene amplification results from overreplication of DNA in a single cell cycle as a result of inhibition of DNA synthesis. The cells surviving such overreplication constitute a heterogeneous population with multiple chromosomal changes, including partial or complete endoreduplication of chromosomes, as well as a variety of chromosomal rearrangements. A similar phenomenon may underlie the generation of aneuploidy in tumours, their malignant progression, and the generation of heterogeneity in the tumour cell population."} +{"text": "Data in a regional cancer registry covering a population of 5 million and with an efficiency of registration of over 95% have been used to examine incidence trends in oesophageal and gastric carcinoma. In the West Midlands Region of the UK, during the period 1962 to 1981 the age standardised incidence of gastric carcinoma decreased by 20%. However, an analysis by both histological type and detailed site reveals that while the incidence of distal lesions is diminishing, the incidence of adenocarcinoma of the oesophagus and cardia is increasing. The proximal and distal lesions also exhibit marked differences in social class distribution and sex ratio. The results strongly suggest that the aetiological factors involved for cardia and adjoining sites are different from those for pyloric antrum."} +{"text": "Investigators with complementary expertise on the application of microarray technology in clinical oncology presented their recent findings and debated critical aspects regarding diagnostic and prognostic applications and current trends in the use of this technology for monitoring and predicting clinical response to treatment of cancer patients. This meeting report summarizes the main contributions and underlines some critical issues, which need to be addressed to enhance the effectiveness of this potential powerful new technology in clinical oncology.The completion of the human genome project together with the development and implementation of microarray technologies have opened new opportunities for progress in cancer research. For instance, an increasing number of molecular markers with prognostic and diagnostic potential have been identified in a broad range of human cancers by cDNA microarray analysis . Thus, tMicroarray technologies have been extensively used to evaluate genetic markers and changes in gene expression associated with cancer onset and progression for certain types of solid tumors . Ulrich Microarray technology has provided the opportunity to begin a comprehensive molecular and genetic profiling of human breast cancer . AlthougIn the hematological field, cDNA microarrays have contributed to an increasingly well-defined molecular taxonomy of leukemias and lymphomas. This has led to the segregation of morphologically identical tumors according to molecular patterns predictive of distinct clinical outcomes -8. MoreoThe increasing use of microarray technology for characterizing the transcriptional profile of tumors opened the opportunity to develop potent tools for prediction of response to treatment and for the identification of novel therapeutic targets. The genome-wide perspective offered by microarrays has allowed the focus of drug development to shift towards targeted therapeutics acting on specific molecular targets. For instance, it has been reported that growth factor signals are mutated in a number of cancers including colorectal cancer . Advancein vitro exposure to GM-CSF and IFN-\u03b1 as compared to cells treated with GM-CSF alone. 
This study could also provide insights into the mechanisms contributing to the in vivo anti-tumor activity of IFN-\u03b1 through the enhancement of monocyte and dendritic cell functions [Although there is circumstantial evidence that the activation of the anti-tumor immune response may be critical in affecting the natural or treatment-induced history of cancer, the complex interactions underling this phenomenon remains largely unknown . For insunctions ,20. The The influence of the genetic background of patients on treatment outcome was discussed by Marincola and Panelli . Short oligonucleotide (18 to 22 oligonucleotide) microarrays represent a powerful tool for genome-wide screening of genetic variations such as single nucleotide polymorphisms (SNPs), which can be used to identify genetic differences potentially responsible for divergent responses to therapy . They boThe workshop concluded with a roundtable discussion of critical issues associated with the introduction of microarray technology to the practice of clinical oncology. There was general agreement that a series of scientific, ethical and legal concerns must be resolved before these genomic tools can become part of the armamentarium of clinical practitioners . The forThe use of microarrays in clinical oncology raises another critical issue: the management of the tremendous volume of data generated in the context of different types of analyses. It was highlighted that this could be turned into an advantage since it may be that complex relationships in gene expression patterns can be resolved only when very large data sets are available for analyses. However, in order to achieve this goal, more efficient data management systems are required. For instance, James F. Reid pointed out that in building predictive models from gene expression profiling experiments, it is also important to report proper estimates of classification accuracies and validate promising classifiers on independent data to further evaluate their clinical utility . MoreoveMicroarray Technologies in Clinical Oncology: Potential and Perspectives\" will have contributed to a better understanding of the \"state of the art\" of this promising field of cancer research. In addition, we have hopefully provided a clearer definition of some critical issues that need to be addressed in order to translate the great expectations of the scientific community into realities for the better management of cancer patients.The expectations for what might be gained from high throughput microarray technology in clinical oncology are high as its utilization in clinical practice could markedly improve our current strategies for the diagnosis of cancer and prediction of the clinical outcome. This in turn may lead to the identification of treatments optimized according to the genetic background of individual patients and the biological characteristics of their tumors. However, many concerns about microarray-based experimentation need to be resolved regarding sample handling and data interpretation in order to fulfill these expectations. This can only be achieved by establishing a close cooperation between experts in microarray technologies, \"trialists\" and clinicians. A strategic international cooperation involving public and private institutions is also needed in order to exploit the potentialities of these new, continuously changing, microarray technologies in clinical oncology. 
Indeed, the true potential of this powerful tool will be fully exploited only when networks of excellence capable of correctly performing large validation studies and of directing data into new translational studies are established. In addition, the routine application of microarray technologies in clinical oncology would raise some relevant legal, ethical, social and regulatory issues, which have been poorly addressed so far. Therefore, efforts should be undertaken to achieve maximal technical consistency and standardization and to define specific and comprehensive regulatory frameworks for addressing the many unresolved issues. The authors hope that the Workshop \"Microarray Technologies in Clinical Oncology: Potential and Perspectives\" will have contributed to a better understanding of the \"state of the art\" of this promising field."}
In vitro, premature ETS signaling allows the DRG neurons to survive and grow in the absence of the neurotrophins normally required for these processes.Arber and her coworkers conclude that the late onset of expression of ETS transcription factors induced by target-derived signals is essential for many of the later aspects of neuronal differentiation and circuit formation. During their differentiation, the researchers suggest, DRG neurons undergo a temporal switch in their ability to respond to ETS signaling. Further analysis of the mechanisms by which responses to transcription factor programs are altered over time during development will advance our understanding not only of neuronal differentiation but of other aspects of embryogenesis."} +{"text": "We have produced human CEA transgenic mice which were found to express CEA mRNA in all tissues. By immunoblot analysis using anti-CEA polyclonal antibody, we also detected CEA protein in all tissues. However, the molecular size of CEA in the brain was different from that in other tissues, although the mRNA size was same and no deletion nor rearrangement was detected at the DNA level. Immunohistochemical analysis of the lung and the colon showed that the expression sites were the bronchial epithelial cells of the lung and the columnar epithelial cells of the colon. Interestingly, the expression of CEA protein in the transgenic mice was polarised to the luminal side of epithelial cells similar to the normal CEA expression in human tissues. We also detected cell surface expression of human CEA on thymocytes and spleen cells and CEA expression was greatly reduced by the phosphatidylinositol-specific phospholipase C (PI-PLC) treatment."} +{"text": "The skin is a composite structure composed of a superficial epidermis and an underlying dermis. Wounding of this structure with resultant functional and anatomical disintegration leads to a cascade of events directed at the restoration of these features. However, maintenance of anatomical integrity has a higher priority from the standpoint of preservation of homeostatic equilibrium. The uncoupling of anatomical and functional renovation due to accelerated wound closure without proper regeneration and spatial organization of the underlying cellular/extracellular assembly leads to scarring and loss of function. Epithelialization and contraction are the major mechanisms acting to minimize the exposed wound surface.Fibroblasts and epithelial cells are major regulatory elements of wound contraction and epithelialization, respectively. Fibroblasts contribute to contraction directly by producing contractile forcesThis prospective modality may be useful for wounds where excessive contraction and fibrosis is probable, for example, extensive burns. Future research may be directed toward testing the hypothesis via EMT markers and also developing therapeutic strategies for regulation of the TGF/EGF ratio."} +{"text": "The results of the Swedish two-county study are analysed with respect to tumour size, nodal status and malignancy grade, and the relationship of these prognostic factors to screening and to survival. It is shown that these factors can account for much of the differences in survival between incidence screen detected, interval and control group cancers but to a lesser extent for cancers detected at the prevalence screen where length bias is greatest. 
Furthermore, examination of the relationships among the prognostic factors and mode of detection indicates that malignancy grade, as a measure of inherent malignant capacity, evolves as a tumour grows. The proportion of cancers with poor malignancy grade is several fold lower for cancers of diameter less than 15 cm than for cancers greater than 30 cm, independent of the length bias of screening. The implications of these findings for screening frequency are briefly discussed."} +{"text": "Virology Journal will reflect the excitement of the \"New Phage Biology.\"Bacteriophage research continues to break new ground in our understanding of the basic molecular mechanisms of gene action and biological structure. The abundance of bacteriophages in nature and the diversity of their genomes are two reasons why phage research brims with excitement. The pages of Virology Journal comes at a time of resurgence of interest in the basic biology of the bacteriophages and the impact that these viruses have on earth's ecology, evolution of microbial diversity and the control of infectious disease. Since playing an important part in the birth of Molecular Biology more than 50 years ago [The launching of ears ago , phage rears ago . This trears ago , the pro30 or ~10 million per cubic centimeter of any environmental niche where bacteria or archaea reside [Aeromonas, Vibrio, Acinetobacter, marine and other bacterial species. The genomes of a few T4-like phages have been sequenced and found to indeed share homologies with T4, but to also differ from one another in size, organization of the T4-like genes and content of other putative genes and DNA mobile elements . It appears that phage families like the T4-related phages have learned to cross bacterial species barriers and possess plastic genomes that can acquire and lose genetic cassettes through their travels in the microbial world. In essence, genomes of the dsDNA phages may be repositories of the genetic diversity of all microorganisms in nature.Two reasons why the new era of phage research brims with excitement are the abundance of bacteriophages in nature and the diversity of their genomes. Phage is probably the most widely distributed biological entity in the biosphere, with an estimated population of >10a reside . At one In addition to evolving by serving as traffickers of microbial genes, phage genomes evolve through the accumulation of mutations in both acquired and core genes. Sequence divergence among homologues of the essential genes for phage propagation within a phage family can be used as a source of information about the determinants of specificity of the protein-protein and protein-nucleic-acid interactions that underlie biological function. Phages are excellent sources of many enzymes and biochemical transactions that are broadly represented in all divisions of life. The large numbers of phylogenetic variants of biologically interesting proteins and nucleic acids that one can derive from sequenced phage genomes are treasure troves for studies of biological structure in relation to function. Interest in phage and phage gene products as potential therapeutic agents is also increasing rapidly and is likely to have profound impact on the pharmaceutical industry and biotechnology in general over the coming years. 
There is a general sense that the best is yet to come out of phage research.Virology Journal will reflect the excitement of the \"New Phage Biology\" by publishing reports in the areas of Ecology and Taxonomy, Genomics and Molecular Evolution, Regulation of Gene Expression, Genome Replication and Maintenance, Protein and Nucleic Acid Structure, Virus assembly, Biotechnology, Pathogenesis, Therapeutics and more. It would be especially interesting to see submissions of phage genome sequence briefs and their biological implications.We anticipate that the pages of The author(s) declare that they have no competing interests."} +{"text": "During the period 1957-1984 the annual age-adjusted incidence rate of cutaneous malignant melanoma (CMM) increased by 350% for men and 440% for women in Norway. The annual exposure to carcinogenic sunlight in Norway, calculated by use of measured ozone levels, showed no increasing trend during the same period. Thus, ozone depletion is not a cause of the increasing trend of the incidence rates of skin cancers. The incidence rates of basal cell carcinoma (BCC) and squamous cell carcinoma (SCC) increase with decreasing latitude in Norway. The same is true for CMM in Norway, Sweden, and Finland. Our data were used to estimate the implications of a future ozone depletion for the incidence rates of skin cancer: a 10% ozone depletion was found to give rise to a 16-18% increase in the incidence rate of SCC (men and women), a 19% increase in the incidence rate of CMM for men and a 32% increase in the incidence rate of CMM for women. The difference between the numbers for men and women is almost significant and may be related to a different intermittent exposure pattern to sunlight of the two sexes. The increasing trend in the incidence rates of CMM is strongest for the trunk and lower extremities of women, followed by that for the trunk of men. The increasing incidence rates of skin cancers as well as the changing pattern of incidence on different parts of the body is most likely due to changing habits of sun exposure. Comparisons of relative densities of CMM, SCC, LMM and SCC falling per unit area of skin at different parts of the body indicate that sun exposure is the main cause of these cancer forms although other unknown factors may play significant roles as well. For the population as a whole sun exposure during vacations to sunny countries has so far been of minor importance in skin cancer induction."} +{"text": "The main risk factors for oesophageal cancer previously identified in western Europe are tobacco smoking and alcohol drinking. However, a study of the time trend from 1951 to 1985 of the mortality from oesophageal cancer in 17 European countries shows that, except among the younger age groups in men, oesophageal cancer had either decreased or increased only slightly in most countries. This trend differed from that of lung cancer, cirrhosis and alcohol consumption which had in general increased substantially during the period. The results strongly suggest that population-wide changes in certain undetermined risk/protective factor(s), one possibility of which is the consumption of fruit, had overridden the effect of tobacco and alcohol and resulted in a reduction of oesophageal cancer risk. 
Apart from further efforts to reduce smoking and drinking, studies to identify the factor(s) will be of great public health importance to the prevention of oesophageal cancer."} +{"text": "The use of apical suction devices has been well described for maintaining satisfactory haemodynamics during off-pump surgical coronary revascularization. Its expanded use has been described in a few other situations. We describe here a case of recurrent coarctation in which an extra-anatomic ascending to descending thoracic aorta bypass graft was constructed using cardiopulmonary bypass without arresting the heart, and access and exposure were facilitated by the use of an apical suction device. A 49-year-old gentleman presented to cardiology with lower limb claudication pain and breathlessness of three years' duration. Clinical examination revealed upper limb hypertension, with similar blood pressures in both arms (180/100 mm Hg). His past history included repair of coarctation of the aorta about 30 years earlier. The medical records and operative details from the previous operation were unavailable; the operation had been performed through a left thoracotomy. An MRI scan revealed a 2 cm long narrowing of the aorta just distal to the origin of an aberrant right subclavian artery, which was the last of four branches from the aortic arch. Once the apex and the posterior surface of the heart were free of adhesions, an apical suction device was placed in position and the beating heart was lifted superiorly. This allowed further dissection in the posterior pericardium, freeing up of adhesions between the left lung and the descending thoracic aorta, and visualization of and access to the descending thoracic aorta just above the diaphragm in spite of a deep thoracic cavity. The aims were 1) to avoid cross-clamping the heart for a prolonged period of time for an extra-cardiac operation and 2) to make the elevation and retraction of the empty beating heart technically easier and less traumatic on the epicardium and myocardium compared with retracting and elevating with the use of the assistant's hand. We could accomplish both these objectives safely and successfully, with adequate exposure to clamp the descending thoracic aorta and perform the anastomosis. This report describes another expanded use for the apical suction device."}
Photodynamic therapy has been shown to be promising in the treatment of early and superficial tumours and may be useful for the ablation of dysplastic mucosa. Because of the diffuse nature of the disease, such treatment would necessarily involve destruction of large areas of mucosa and it is desirable to confine its effect to the mucosa in order that safe healing can take place. By means of photometric fluorescence microscopy, we have studied the pattern of photosensitisation in the normal rat stomach using di-sulphonated aluminium phthalocyanine (AlS2Pc) and 5-aminolaevulinic acid (ALA) as photosensitisizers. AlS2Pc resulted in a panmural photosensitisation of the gastric wall with the highest level encountered in the submucosa. The mucosa and muscularis propria were sensitised to equal extent. Following light exposure, a full thickness damage resulted. ALA is a natural porphyrin precursor and exogenous administration gave rise to accumulation of protoporphyrin IX (PPIX) in the cells. The resultant pattern of photosensitisation was predominantly mucosal and its photodynamic effect was essentially confined to the mucosa. ALA produced a selective photosensitisation of the gastric mucosa for its photodynamic ablation with sparing the underlying tissue layers."} +{"text": "HoxD cluster are expressed in two spatiotemporally distinct phases. In the first phase, Hoxd9-13 are activated sequentially and form nested domains along the anteroposterior axis of the limb. This initial phase patterns the limb from its proximal limit to the middle of the forearm. Later in development, a second wave of transcription results in 5\u2032 HoxD gene expression along the distal end of the limb bud, which regulates formation of digits. Studies of zebrafish fins showed that the second phase of Hox expression does not occur, leading to the idea that the origin of digits was driven by addition of the distal Hox expression domain in the earliest tetrapods. Here we test this hypothesis by investigating Hoxd gene expression during paired fin development in the shark Scyliorhinus canicula, a member of the most basal lineage of jawed vertebrates. We report that at early stages, 5\u2032Hoxd genes are expressed in anteroposteriorly nested patterns, consistent with the initial wave of Hoxd transcription in teleost and tetrapod paired appendages. Unexpectedly, a second phase of expression occurs at later stages of shark fin development, in which Hoxd12 and Hoxd13 are re-expressed along the distal margin of the fin buds. This second phase is similar to that observed in tetrapod limbs. The results indicate that a second, distal phase of Hoxd gene expression is not uniquely associated with tetrapod digit development, but is more likely a plesiomorphic condition present the common ancestor of chondrichthyans and osteichthyans. We propose that a temporal extension, rather than de novo activation, of Hoxd expression in the distal part of the fin may have led to the evolution of digits.The evolutionary transition of fins to limbs involved development of a new suite of distal skeletal structures, the digits. During tetrapod limb development, genes at the 5\u2032 end of the Tulerpeton), seven (in Ichthyostega) and eight or more (in Acanthostega) short digits, with comparatively simple or poorly defined wrists and ankles The origin of limbs was a defining event in the evolution of tetrapods. 
Important new discoveries in developmental genetics and vertebrate paleontology have enhanced our understanding of limb development and evolution Comparative developmental studies have demonstrated that the mechanisms controlling initiation, position, outgrowth and pattern are remarkably conserved between teleost fins and tetrapod limbs Hoxd genes regulate the anteroposterior pattern of both fins and limbs by establishing an early map of cell identity that is important for specification of the ZPA Hoxd genes are expressed in highly dynamic patterns during limb development. Early work suggested that there are three phases of Hox expression in tetrapod limbs Hoxd9, is expressed in lateral plate mesoderm up to the pectoral level Hoxd9 expression is maintained and the neighboring Hoxd10-Hoxd13 genes are activated sequentially. This produces a spatially and temporally collinear pattern of nested expression domains along the anteroposterior axis of fins and limbs, with the Hoxd13 domain being the most posteriorly restricted Hoxd genes being re-expressed along the distal margin of the limb buds, in the area of the prospective digits Hoxd13 is expressed in all of the developing digits whereas Hoxd12 and Hoxd11 are expressed in all but the anteriormost digit. By contrast, this late phase of Hox expression was not observed during zebrafish fin development Hoxd genes for digit development and the emerging picture of early tetrapod digit evolution, and an elegant new hypothesis proposed that digits are neomorphic structures that resulted from acquisition of the late distal domain of Hoxd gene expression during tetrapod evolution HoxD gene regulation in mice have shown that the two phases of expression within the limb buds result from two independent waves of transcriptional activation. The first wave involves the action of opposite regulatory modules located outside of the cluster, which leads to sequential transcription of HoxD genes from the 3\u2032 to the 5\u2032 end of the complex HoxD genes in the distal region of the limb HoxD gene expression during mouse limb development is consistent with the proposal that proximal and distal parts of the limb have distinct evolutionary histories Hoxd cluster and resulted in distal activation of Hoxd expression, or that the preexisting regulatory modules were co-opted to perform this function during the transition from fins to limbs Hoxd expression in the distal aspect of the limb is unique to tetrapods and contributed to the evolutionary origin of digits.Genetic analyses of Hoxd genes observed in zebrafish fin buds is representative of the primitive condition for gnathostome (jawed vertebrate) fins. Zebrafish fin morphology is highly derived relative to other actinopterygians, sarcopterygians and chondrichthyans. A tribasal fin skeleton, containing a propterygium anteriorly, a mesopterygium in the middle and a metapterygium posteriorly, is widely considered to be the primitive pattern for gnathostomes Here we investigate whether the monophasic expression of Scyliorhinus canicula), and find, at both cellular and molecular levels, striking similarities to tetrapod patterns of skeletogenesis as well as differences relative to the zebrafish pattern. In order to identify the primitive role of 5\u2032Hoxd genes in fin evolution, we analyze the expression pattern of these genes during catshark paired fin development. 
At early stages of fin development, 5\u2032Hoxd genes are expressed in collinear, nested patterns along the anteroposterior axis of the fins, which resemble the initial wave of Hoxd transcription that occurs in the paired appendages of other gnathostomes. We also describe an unexpected second wave of expression at later stages of shark fin development, in which Hoxd12 and Hoxd13 are re-expressed along the distal margin of the paired fin buds. The results indicate that biphasic, distal expression of Hoxd genes is not uniquely associated with tetrapod digit development, but is more likely a plesiomorphic condition that was present the common ancestor of chondrichthyans and osteichthyans.In this report, we first examine skeletal development in the fins of the catshark and the associated radials and the adjacent radials mesoderm and endoderm at the cloacal level between stages 25 and 28 those examined in previous reports. Our analysis of skeletal development in shark fins shows a process with greater similarity to the tetrapod limb than to the teleost fin. The latter undergoes differentiation of the fin bud mesenchyme into a chondrogenic plate, which it then segments to form the individual bones of the fin, whereas shark and tetrapod appendicular skeletons develop by polarized condensation of separate prechondrogenic elements that then differentiate into cartilage. The teleost pectoral fin skeleton is also stunted relative to the elaborate distal endoskeleton of sharks and basal actinopterygians Hoxd gene expression may underlie the developmental truncation of their fin skeletons. Our results indicate that biphasic, distal expression of Hoxd genes is not uniquely associated with tetrapod digit development, but is more likely a plesiomorphic condition for gnathostomes.More surprising is the discovery that the initial phase of collinear Hoxd gene expression is that proximal and distal domains are separated by a zone of non-expressing cells at late stages of development, and late expression appears to be regionalized along the proximodistal axis. In tetrapod limbs, the appearance of collinear Hoxd expression along the proximodistal axis of the limb has been termed \u201cvirtual collinearity\u201d, which arises as an artifact of the two independently-regulated waves of collinear activation, the early/proximal phase controlled by the ELCR and the late/distal phase controlled by GCR/Prox Hoxd expression domains, which later appear proximodistally subdivided, suggests that the proximal and distal limb may have been under modular developmental control from an early point in gnathostome fin evolution. This also raises the possibility that factors from the AER may be involved in maintaining expression at the distal tip of the fin bud (perhaps by keeping these cells in a proliferative state). It is therefore interesting that the shark AER expresses Fgf8 Another striking similarity between the shark and tetrapod patterns of Sox8 data demonstrate that catshark fin bud mesenchyme does not undergo chondrogenic differentiation prior to the condensation of individual radials, which contrasts with patterns described for actinopterygians and some species of shark, which undergo early formation of a chondrogenic plate that later perforates to separate the radials Sox8 expression also revealed that chondrogenesis in catshark pectoral fins follows an anterior to posterior progression, starting in the prospective pectoral girdle. 
Similar directionality occurs in urodele amphibian limbs, whereas in amniotes the polarity of chondrogenesis generally is from posterior to anterior Hoxd expression occurs distal to the region of differentiated cartilage is consistent with idea that the second phase governs cell proliferation in the distal limb bud Sharks develop paired fins as localized outgrowths of the lateral plate mesoderm at discrete positions along the body axis, and these fin buds then develop an AER that later becomes an AEF Hoxd genes are expressed in cloacal mesoderm and endoderm. Similar patterns were observed in zebrafish Hoxd13 is required for anorectal and external genital development, and its expression in the genital tubercle and digits is under shared genomic regulation Hoxd gene expression in these tissues led to the hypothesis that the evolution of terapod digits and external genitalia may have been coordinated by a shared mechanism. Our results suggest a more ancient origin for Hoxd expression in the distal aspect of the fin buds and in the cloaca. Interestingly, Shh, which is expressed in the cloaca-derived urethral plate of the mouse genital tubercle and is required for outgrowth of the phallus During development of the shark gut, 5\u2032 Hoxd genes was associated with the origin of digits. Based on evidence that the second wave of transcriptional activity in the mouse autopod is controlled by its own regulatory modules and is required for digit development, and that this phase is absent in zebrafish (which lack digits), this domain of expression has been considered a character of the autopod. It is therefore tempting to speculate that the distal domain of Hoxd expression in sharks may define a population of cells with an autopodial identity, as was suggested recently for paddlefish Hox genes function to specify two developmental modules, proximal and distal, and these modules are not linked to specific anatomical landmarks This study allows reconsideration of the idea that the distal expression of Hoxd gene expression at the distal tip of paired appendages can be extended to the chondrichthyan lineage allows us to exclude the hypothesis that a novel domain of distal Hoxd expression first appeared in stem-group tetrapods. Secondly, distal Hoxd expression does not itself lead to development of an autopod. The third point relates to the demonstration by Duboule and co-workers that 5\u2032 HoxD and HoxA genes are required for proliferation of skeletogenic precursors cells in the limb Hoxd domain in shark fins may regulate cell proliferation beneath the AER. As such, its presence at late stages of shark fin and tetrapod limb development, and its absence from zebrafish, would fit with elaboration of the distal skeleton in the former and its truncation in the latter. It is therefore intriguing that the size of the distal expression domain in sharks is extremely narrow relative to that of tetrapods. The pivotal event with respect to the origin of digits may have been a temporal extension of the second transcriptional wave, which would have led to a sustained period of cell proliferation, thereby increasing the size of the distal Hoxd domain, at the terminus of the limb . Embryos were isolated from the eggshells, dissected from the yolk sac in ice-cold phosphate buffered saline solution (PBS) and staged according to Ballard et al For alcian green staining, embryos were washed in PBS, fixed overnight in 5% trichloroacetic acid (TCA) and transferred to 0.1% alcian green in acid ethanol. 
Stained specimens were differentiated in acid ethanol, dehydrated in ethanol and cleared in benzyl alcohol:benzyl benzoate (BABB). Hatchling specimens were fixed in 80% ethanol and eviscerated before being stained with alcian blue and alizarin red as described previously Acridine orange (AO) was used to identify apoptotic cells, following the method of Abrams et al 5\u2032Hoxd genes and Sox8 were used to generate digoxigenin-labelled riboprobes as described previously In situ hybridization of catshark embryos were carried out using our published modification in situ hybridization, embryos were equilibrated in graded sucrose (15% and 30%) at 4\u00b0C, incubated overnight in 20% gelatine in 30% sucrose at 50\u00b0C and embedded in 20% gelatin at 50\u00b0C. The blocks were frozen on dry ice, mounted in TissueTek OCT and cryosectioned at a thickness of 35 \u00b5m.Fragments of"} +{"text": "In a search for aetiological processes which might explain the association of stomach cancer with poverty, we have related mortality from the disease in the local authority areas of England and Wales during 1968-78 to indices of living standards derived from the 1971, 1951 and 1931 censuses. We have also analysed recently released data from a national survey of overcrowding carried out in 1936. Geographical differences in stomach cancer were most closely related to occupationally derived indices of socio-economic structure from the 1971 census, and to measures of domestic crowding from the 1931 census and 1936 survey. Unlike other indices of poor living standards, levels of past domestic crowding in north-west Wales were consistent with its previously unexplained high death rates from stomach cancer. We conclude that overcrowding in the home during childhood may be a major determinant of stomach cancer, and might act by promoting the transmission of causative organisms."} +{"text": "Immunocytochemical localisation of follicle stimulating hormone (FSH) was carried out in normal, benign and malignant human prostates by indirect immunoperoxidase technique. Positive staining was observed in the epithelial cells of all the three categories, while the stromal cells showed a weakly positive reaction in a few specimens. The brown reaction product was dispersed in the cytoplasm of the epithelial cells. These observations demonstrate the presence of immunoreactive FSH-like peptide in human prostate. The significance of FSH in the aetiopathology of prostatic disorders is discussed."} +{"text": "Promoting best gynecologic oncology practice is the subject of a manuscript from the Society of Gynecologic Oncologists of Canada. This manuscript is a descriptive report from a meeting that took place in Montreal involving members of various Canadians societies interested in the management of patients with gynecologic malignancies. Although this manuscript only partly deals with the objectives of the meeting and does not provide a clinical practice guideline report (to be reported later), this important paper may lead and stimulate other oncologic societies to embark on similar exercises.This month\u2019s issue also contains the second part of a manuscript discussing the merits of naturally occurring anti-angiogenetic herbal compounds and their potential relevance to clinical practice. Encouragement is offered to explore the potential (and highly complex) effects in humans through clinical research. 
These agents have multiple effects and, presumably, a potential for multiple benefits.The Genitourinary Disease Site Group of the Cancer Care Ontario Program in Evidence-based Care undertook a tricky review of the value of maximal androgen blockade in the treatment of metastatic prostate cancer. In this issue, the authors note that many meta-analyses have been published and yet the magnitude of any benefit remains unclear. The reported benefits of these agents are discussed, together with their potentially negative impact on quality of life.In discussing the impact of the geometric uncertainties associated with intensity modulated radiotherapy (imrt) treatment of head-and-neck cancers, Ballivy and colleagues address the subject of variation in target volumes and normal tissues during this form of radiation therapy delivery. The issue is not only important and relevant for head-and-neck imrt, but also for all of radiotherapy as image guidance becomes the norm."}
+{"text": "Estimations of the incidence of hepatocellular carcinoma (HCC) for the period 1968-74 in the Province of Inhambane, Mozambique, have been calculated and together with rates observed in South Africa among mineworkers from the same Province indicate very high levels of incidence in certain districts of Inhambane. Exceptionally high incidence levels in adolescents and young adults are not sustained at older ages and suggest the existence of a subgroup of highly susceptible individuals. A sharp decline in incidence occurred during the period of study. Concurrently with the studies of incidence, 2183 samples of prepared food were randomly collected from 6 districts of Inhambane as well as from Manhica-Magude, a region of lower HCC incidence to the south. A further 623 samples were taken during 1976-77 in Transkei, much further south, where an even lower incidence had been recorded. The mean aflatoxin dietary intake values for the regions studied were significantly related to HCC rates. Furthermore, data on aflatoxin B1 contamination of prepared food from 5 different countries showed overall a highly significant relationship with crude HCC rates. In view of the evidence that chronic hepatitis B virus (HBV) infection may be a prerequisite for the development of virtually all cases of HCC and given the merely moderate prevalence of carrier status that has been observed in some high incidence regions, it is likely that an interaction between HBV and aflatoxin is responsible for the exceptionally high rates evident in parts of Africa and Asia. Various indications from Mozambique suggest that aflatoxin may have a late stage effect on the development of HCC. This points to avenues for intervention that could be more rapidly implemented than with vaccination alone."}
+{"text": "Crimean-Congo haemorrhagic fever (CCHF) is an often fatal viral infection described in about 30 countries around the world. The authors report a fatal case of Crimean-Congo hemorrhagic fever (CCHF) observed in a patient from Kosova. The diagnosis of CCHF was confirmed by reverse transcription-PCR. Late diagnosis decreased the efficacy of treatment and the patient died due to severe complications of infection. The causative agent is a Nairovirus of the Family Bunyaviridae. Infection is transmitted to humans by Hyalomma ticks or by direct contact with the blood or tissues of infected humans or viraemic livestock . 
Phylogenetic studies have shown that the Kosovan strain is grouped together in the clade with the Southwest Russian and Turkish strains and is phylogenetically most closely related to the Drosdov strain of CCHFV. Complete S segment of the Kosovo Hoti strain was confirmed in Slovenia . It was of CCHFV .From this report, the most important lesson to be derived is that late diagnosis decreases the efficacy of treatment and aggravates the outcome of the disease. Diagnosis of CCHF is important to prevent the spread of CCHF virus among the health-care workers and relatives of patients. Treatment with ribavirin may be useful if given within the early stage of disease . The preThe author(s) declare that they have no competing interests. SA participated in acquisition, analysis and interpretation of data. LR participated in the design of the study and drafted the manuscript. Both authors read and approved the final manuscript."}
+{"text": "Activation of the ras gene family by point mutation at codons 12, 13 and 61 has been demonstrated in up to 20% of unselected series of human tumours. The present study was carried out to assess the incidence of ras activation in 37 squamous cell carcinomas of the head and neck, seven squamous cell carcinomas of the skin and eight squamous carcinoma cell lines. Oligonucleotide probes and the polymerase chain reaction were used on DNA extracted from archival paraffin embedded material. Mutations in codon 12 of the Harvey ras gene were found in a carcinoma of the larynx and a carcinoma of the lip, both of which had received prior irradiation. A cell line (LICR-LON-HN8) established from the same laryngeal cancer showed the same mutation. This study indicates that there is a low incidence of ras mutation in human squamous cell carcinomas and that activation of this family of genes is probably not a common factor in the development of this group of tumours."}
+{"text": "An increasing number of methods are being developed for the early detection of infectious disease outbreaks which could be naturally occurring or as a result of bioterrorism; however, no standardised framework for examining the usefulness of various outbreak detection methods exists. To promote comparability between studies, it is essential that standardised methods are developed for the evaluation of outbreak detection methods.This analysis aims to review approaches used to evaluate outbreak detection methods and provide a conceptual framework upon which recommendations for standardised evaluation methods can be based. We reviewed the recently published literature for reports which evaluated methods for the detection of infectious disease outbreaks in public health surveillance data. Evaluation methods identified in the recent literature were categorised according to the presence of common features to provide a conceptual basis within which to understand current approaches to evaluation.There was considerable variation in the approaches used for the evaluation of methods for the detection of outbreaks in public health surveillance data, and there appeared to be no single approach of choice. Four main approaches were used to evaluate performance, and these were labelled the Descriptive, Derived, Epidemiological and Simulation approaches. 
Based on the approaches identified, we propose a basic framework for evaluation and recommend the use of multiple approaches to evaluation to enable a comprehensive and contextualised description of outbreak detection performance.The varied nature of performance evaluation demonstrated in this review supports the need for further development of evaluation methods to improve comparability between studies. Our findings indicate that no single approach can fulfil all evaluation requirements. We propose that the cornerstone approaches to evaluation identified provide key contributions to support internal and external validity and comparability of study findings, and suggest these be incorporated into future recommendations for performance assessment. The use of automated methods in public health surveillance for the early detection of naturally occurring or bioterrorism-related outbreaks aims to reduce the time between when an outbreak starts and when it is detected, allowing additional time for investigation and intervention for disease control. An increasing number of methods are being developed to detect outbreaks of infectious disease using routinely collected data, however, which automated surveillance method is best for detecting outbreaks is not easily determined.A fundamental difficulty in the evaluation of outbreak detection methods involves specification of the aberration of interest . Data abThe lack of standardised methods for the assessment of usefulness, including outbreak detection successes and failures, as well as the diversity of factors that influence performance, makes the comparison of methods and accumulation of knowledge in this area problematic . This laThere is a need for a standardised evaluation approach to allow the identification of methods which most successfully identify outbreaks under different conditions. Reviews published to date have examined whether outbreak detection methods have been evaluated, and which aspects have been evaluated ,6, but nWe reviewed reports in the recently published literature which document the evaluation of outbreak detection methods. We searched the Entrez PubMed electronic database using various combinations of the following search terms in September 2004 to identify relevant papers published since 1999. Several additional relevant papers were also obtained from a review of the reference lists of the publications retrieved. A second search was performed in October 2004 using the Web of Science search engine to locate papers published between 1999 and October 2004 that cited one of 18 key references was published which included reports from a national syndromic surveillance conference. This volume was also reviewed for relevant papers.The titles, abstracts and where appropriate the full text version of located publications were examined to determine inclusion in this review. Papers were excluded if they did not evaluate automated methods for the detection of outbreaks of infectious disease, did not provide information on the evaluation of outbreak detection methods described, included limited detail on the fields of interest (e.g. 
letters), were based on non-human data, were not published in English, contained data and evaluation methods very similar to those in a paper already reviewed, or presented forecasting or other statistical methods which were not evaluated in the context of the detection of outbreaks or bioterrorism-related events.Papers documenting the evaluation of methods for the detection of outbreaks were reviewed, and characteristics of the evaluation approach used were recorded in an electronic database. Information recorded included the purpose of the surveillance, type of surveillance data analysed, data source, evaluation design (retrospective/prospective), and evaluation methods including the use of a criterion, criterion description, and whether different detection methods were compared. This abstracted information was reviewed and evaluation methods were classified according to the type of approach used. The classification developed is described and discussed.The adequacy and comprehensiveness of the framework developed was subsequently investigated by assessing its ability to describe the methods used to evaluate outbreak detection methods reported in the recently-published literature. The Pubmed database was again searched for relevant papers published between January 2005 and July 2006, and the same inclusion criteria and review methods that were used for the original search were applied.A total of 1418 unique references were obtained from the original PubMed search using 14 combinations of the selected search terms. Searches based on citations of the 18 key references of the disease-specific surveillance methods analysed disease notification data.As illustrated in the framework developed for classifying approaches to evaluation Figure , papers Syndromic or bioterrorism-related surveillance methods were also most commonly evaluated using authentic data (62%), with these studies being approximately equally likely to use a prospective or retrospective design. Of the 38% of syndromic or bioterrorism-related methods evaluated using synthetic data, 8 of these 10 studies used synthetic outbreak signals combined with authentic baseline data.Approaches to evaluation can be further classified based on the use or non-use of a gold-standard criterion to define the occurrence of outbreaks, or events of interest Figure . The majA number of papers that used criterion-related approaches to evaluation emphasised the difficulty involved in selecting a suitable criterion. Difficulties associated with the use of criterion-related approaches with authentic outbreak data include identifying the occurrence and exact timing of true outbreaks within the data, whereas difficulties associated with the use of criterion-related approaches with synthetic data include the specification of outbreak and baseline parameters to be used for evaluation. Retrospective evaluation using authentic data particularly highlights difficulties with the application of criterion-related methods, as available historical data may not consistently or comprehensively identify all outbreaks that occurred during the period of interest.Methods used to evaluate outbreak detection performance can be further classified by the specific approach used to determine the occurrence of events of interest within the dataset. Four main methods were identified among the 63 papers reviewed, and these were labelled the Descriptive, Derived, Epidemiological and Simulation approaches. 
No single approach of choice was apparent among the papers reviewed, and 14% of studies used multiple approaches to evaluation. The prevalence and main features of the four specific approaches identified are summarised in Table The Descriptive approach is characterised by the description of outbreak detection method performance, including the nature of events detected and the conditions under which alarms occur. The Descriptive approach differs from the other approaches identified in that indicators of performance are not based on a nominated outbreak criterion. This approach may be based on the assertion that it is not possible to accurately define the occurrence of all outbreaks in authentic data, or confirm changes detected as epidemiologically significant.Descriptive indicators may incorporate qualitative and quantitative descriptors of both the data analysed and events detected, including incidence; seasonality; type of aberration detected; the frequency, timing and duration of alarms; the time between alarm and peak number of cases; and the magnitude of rise or proportion of cases before alarms. Although a limited amount of descriptive information on outbreak detection performance was often reported when other evaluation approaches were used, the Descriptive approach was the least commonly used approach in isolation, being most frequently applied in the early stages of performance evaluation.The analysis by Hutwagner et al. providesDescriptive evaluations can be difficult to compare between studies due to the large amount of information that may be relevant to the occurrence of outbreaks and their detection, and the often limited amount of data analysed. However, a specific strength of the Descriptive approach is that it can be effectively used to directly compare the performance of multiple outbreak detection methods using common data, where no single method is designated as the gold standard. Most (83%) of the studies reviewed that predominantly used a Descriptive approach compared different outbreak detection methods using common data.A Descriptive approach can also be used to evaluate outbreak detection methods in relation to criteria other than outbreaks. Signals generated can be descriptively evaluated with reference to broad public health goals of surveillance based on existing understandings of disease and intervention capacity. For example, Teklehaimanot and co-workers evaluateThe Derived approach is distinguished by the use of a standard indicator of outbreaks to derive performance measures from the data being analysed. Outbreak indicators are derived through the application of simple or complex data-derived models. The simplest examples of this approach involve the use of an absolute number of cases or statistically derived thresholds (for example based on standard deviations) to indicate the occurrence of an outbreak, which may be associated with a requirement to exceed a threshold for a minimum period of time. Complex models may incorporate multiple variables or methods to account for fluctuations in the surveillance data such as seasonal effects which result in varying outbreak criteria over time or space.The study by Lewis et al used theThe Derived approach was most frequently used for the evaluation of disease-specific surveillance methods for the detection of large well-defined seasonal outbreaks. 
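As a minimal illustration of the simple data-derived criteria just described, the following Python sketch flags a week as an outbreak when its count exceeds the mean of a preceding baseline window by more than a chosen number of standard deviations for a minimum run of consecutive weeks. The window length, multiplier and minimum run are illustrative assumptions, not parameters taken from any of the reviewed studies, and the sketch ignores the seasonal adjustments that more complex Derived models apply.

# Hypothetical sketch of a Derived, threshold-based outbreak criterion.
from statistics import mean, stdev

def derived_threshold_alarms(counts, baseline_weeks=52, sd_multiplier=2.0, min_run=2):
    # Flag weeks whose count exceeds mean + sd_multiplier * SD of the preceding
    # baseline window for at least min_run consecutive weeks.
    flagged, run = [], 0
    for week in range(baseline_weeks, len(counts)):
        window = counts[week - baseline_weeks:week]
        threshold = mean(window) + sd_multiplier * stdev(window)
        run = run + 1 if counts[week] > threshold else 0
        if run >= min_run:
            flagged.append(week)
    return flagged

# Example: a flat baseline of roughly 10 cases per week with a rise at the end.
series = [10, 11, 9, 10, 12, 10, 9, 11] * 7 + [18, 22, 25, 30]
print(derived_threshold_alarms(series, baseline_weeks=28))

A seasonal or regression-based expectation could be substituted for the flat moving window without changing the structure of the criterion.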
The models used reflect characteristics of the conditions under surveillance and the context of surveillance, for example, the rarity of the condition and the extent of background variability in the data. The evaluation of timeliness among studies using this approach was most commonly performed comparatively, based on the time of outbreak detection for several different detection methods, or the time to the epidemic peak. The Derived approach was typically associated with the use of a small number of variables to define the occurrence of an outbreak, which may provide a limited indicator of the occurrence of outbreaks within the data. For example, smaller outbreaks may be missed.The definition of outbreaks used in the Derived approach combines elements of both the Descriptive and Epidemiological approaches as it is based on agreement with an alternative data model or algorithm which has some epidemiologic credibility. The Derived approach differs from the Descriptive approach in the specification of a gold standard criterion, and differs from the Epidemiological approach in the limited account of complexity considered in the specification of the criterion. Although this approach provides an operational definition of outbreaks, difficulties remain in the definition of properties of outbreaks, including the time of commencement. For these reasons the Derived approach is not considered entirely independent of the other approaches identified.This approach is most closely linked with traditional surveillance methods in the determination of the occurrence of an outbreak relative to some loosely-defined measure of expectation, and was the most commonly used approach to evaluation among the literature reviewed. Expert judgement is used to determine the occurrence of events of public health importance, often using traditional epidemiological investigation techniques. Expert judgement may be based on a variety of available information, including surveillance data and information from epidemiological investigations, and may vary in the extensiveness of investigation methods or data utilised to determine if a data aberration represents an outbreak. Typically judgements were based on multiple factors using flexible methods.Terry and Huang's analysisAn advantage of the Epidemiological approach is that it allows complexities associated with the determination of occurrence of events of public health importance to be considered for each potential outbreak. However, epidemiological investigations can be resource intensive, and detailed descriptions of the investigations performed and the decision-making processes used are required to fully understand the basis of the outbreak definition applied. There is also evidence of variability in opinion among experts, and there has been little evaluation of the factors associated with this variability, or how it is best managed. Consensus among multiple raters has been used in a number of studies to control for individual variability.A range of factors may influence expert opinion and decision-making relating to the occurrence of outbreaks, including specialist knowledge, previous experience and contextual information. Expert figures commonly used in the papers reviewed include epidemiologists, public health practitioners, public health physicians and infection control practitioners.Approximately 40% of papers reviewed which used an Epidemiological approach used a prospective study design. 
Prospective surveillance of more than one data source can be used to promote a more comprehensive indicator of events of interest occurring by allowing the investigation of failures to signal as well as reasons for signalling. The use of official public health records or other published reports to identify known outbreaks was common among retrospective studies, and represents the application of traditional epidemiological methods for outbreak detection. Retrospective methods may suffer from incomplete ascertainment due to reliance on conventional methods and historical information, and inconsistencies in the methods used to identify outbreaks.Evaluation using a Simulation approach is based on criterion-related evaluation methods and requires that the definition of an outbreak be considered in the generation of data for evaluation. Studies that use synthetic data for evaluation using criterion-based methods are unique in that the number and timing of cases added to the baseline are known. Using synthetic data for evaluation addresses a number of problematic issues associated with the use of authentic data, including precisely determining the existence and timing of outbreaks within the data, and addressing a lack of data for evaluation and development. The Simulation approach is unique in enabling quantitative replicable evaluation of performance indicators including sensitivity and specificity with large sample sizes.Reis and Mandl used theSynthetic data can facilitate the comparison of multiple methods based on a standard dataset with specified outbreak and baseline characteristics. Approximately half of all studies which used synthetic data to evaluate performance used authentic baseline data with outbreak cases added. A comprehensive description of the simulated outbreaks and baseline data used in these evaluations is essential to allow their findings to be interpreted and integrated with those of other studies. The usefulness of synthetic data for evaluation is linked to the assumptions used to construct the data, which influences the ability to generalise evaluation findings to the authentic context. Both simple and complex outbreak simulation methods have been used to assess outbreak detection performance. Parameters that have been considered in the generation of synthetic data include outbreak size, outbreak shape, baseline rate and characteristics, and spatial distribution. Simulation methods also have the potential to influence the evaluation outcomes via effects produced by the simulation process which may not reflect the system or process being modelled.Studies that used a Simulation approach for evaluation predominantly described the evaluation of syndromic surveillance methods or proposed new analysis methods for outbreak detection. This reflects the lack of authentic data available for evaluation of syndromic surveillance methods and the ability of synthetic datasets to allow comprehensive description of the performance of outbreak detection methods across a variety of scenarios.Our search of the literature published since 2005 located a total of 42 papers that were considered to be highly relevant to the current study. These papers were reviewed in detail to investigate the adequacy of the conceptual framework developed and describe current trends in the evaluation of outbreak detection methods.The evaluation methods used by all papers reviewed were able to be described by the conceptual framework developed. 
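Before turning to those recent studies, the Simulation approach outlined above can be made concrete with a short sketch: a synthetic outbreak of known timing and size is injected into baseline counts, and a simple detector is scored against that known signal. The baseline model, outbreak shape, detector and all parameter values below are illustrative assumptions rather than methods taken from the reviewed papers.

# Hypothetical sketch of criterion-based evaluation using synthetic outbreaks.
import random
from statistics import mean, stdev

def alarms(counts, baseline=28, k=2.0):
    # Days whose count exceeds mean + k * SD of the preceding baseline window.
    return [i for i in range(baseline, len(counts))
            if counts[i] > mean(counts[i - baseline:i]) + k * stdev(counts[i - baseline:i])]

random.seed(1)
runs, detected, pre_outbreak_false_alarms, delays = 200, 0, 0, []
for _ in range(runs):
    # Synthetic baseline: noisy counts around 10 cases per day for 90 days.
    series = [max(0, round(random.gauss(10, 3))) for _ in range(90)]
    onset = 60
    for day in range(onset, onset + 7):       # inject a 7-day outbreak signal
        series[day] += 4 * (day - onset + 1)  # linearly growing excess cases
    signals = alarms(series)
    hits = [s for s in signals if onset <= s < onset + 7]
    pre_outbreak_false_alarms += sum(1 for s in signals if s < onset)
    if hits:
        detected += 1
        delays.append(hits[0] - onset)

print("sensitivity:", detected / runs)
print("mean pre-outbreak false alarms per run:", pre_outbreak_false_alarms / runs)
print("mean days from onset to first alarm:", round(mean(delays), 2) if delays else None)

Because the number and timing of injected cases are known, sensitivity, false alarms and timeliness can be tabulated exactly and replicated, which is the property that distinguishes this approach.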
The 42 studies located included 9 studies (21%) which were primarily descriptions or evaluations of new analysis methods that were not specific to outbreak detection or infectious disease surveillance, but were suitable for use in syndromic or disease-specific surveillance systems. Seven of these 9 studies described purely spatial analysis techniques. Of these 9 studies, 8 (89%) used a simulation approach to evaluate the performance of the algorithms, and 4 (44%) used a descriptive-comparative approach to illustrate and compare algorithm performance based on authentic data. Three studies used both a simulation and a descriptive-comparative approach to evaluation.The remaining 33 papers described specific studies of surveillance systems and outbreak detection methods, and the approaches to evaluation used are summarised in Table The primary goal of evaluating outbreak detection methods is to make inferences about their effectiveness. An unbiased assessment of performance is critical for identifying the most appropriate methods to use in specific monitoring applications. However, conclusions reached can be dependent upon the specific evaluation methods used, and consideration of the design of evaluation studies is essential in the interpretation of study findings.Our review of a large sample of relevant published literature highlights the highly specific and varied nature of performance evaluation. Recent guidelines have been drafted for evaluating outbreak detection systems generally; however there are not yet any guidelines specific to performance assessment. As a result, a variety of criteria have been used to assess outbreak detection performance, and the majority of studies in the area do not provide comprehensive assessment of performance of the methods tested. These factors introduce barriers to the accumulation of knowledge in the field, as well as the wider application of the research to practice.We describe a simple framework for the classification of approaches to the evaluation of outbreak detection methods. This framework identifies four specific approaches which are applied in the reviewed literature, and provides a logical structure within which to understand methods currently used for evaluation. The framework developed was found to be sufficient to describe the approaches used to evaluate outbreak detection methods in an independent sample of recently published studies. Based on the papers reviewed there does not appear to be any single approach of choice for the evaluation of methods for outbreak detection in public health surveillance data. A number of studies used multiple approaches to evaluation, indicating that any one approach may not satisfy all evaluation requirements, and highlighting the complementary nature of the approaches identified. The review of studies published since 2005 suggests the criterion-based simulation and epidemiological approaches to evaluation are the current approaches of choice, with the simulation approach becoming more commonly used. The recent development of tools which help to identify and simplify the technical demands of creating simulated data for evaluation -15 may pMultiple approaches to evaluation, including the use of authentic and synthetic data, allow the exploration of both applied and theoretical aspects of outbreak detection performance. Synthetic data are considered to allow more comprehensive characterisation of detection properties and provAs recently highlighted by Sokolow et al. 
, authentApproaches to defining outbreaks in authentic data for use in evaluation appear to vary according to the specific purpose and context of surveillance. The range of applied definitions of outbreaks reflect both practical constraints including the availability of sufficient data for evaluation, as well as the range of factors relevant to the determination of whether an outbreak has occurred for different surveillance purposes. For example, the specific methods used to distinguish outbreaks from background variation may include consideration of variables associated with potential causative factors, which may not be able to be specified in advance.For public health surveillance purposes, the adequate definition of outbreaks is often problematic in the absence of sufficient epidemiological knowledge. This requirement for epidemiological knowledge is linked to the frequent use of an Epidemiological approach to evaluation, which allows consideration of complexity and causation in the evaluation of outbreak detection performance. Epidemiological indicators of outbreaks are not absolute due to their reliance on individual judgement; however, they provide the closest approximation to current practice, and are able to accommodate changing standards, expectations, response capacities, interventions and contextual factors more readily than methods using purely data-derived models. Furthermore, the use of prospective methods for the comprehensive investigation of alarms following their occurrence to determine their public health significance as well as the investigation of detection failures has specific advantages over retrospective methods. Retrospective methods do not allow evaluation of the extra sensitivity or specificity of outbreak detection methods, as signals from historical data which have not been detected by conventional means are classified as false positives .Rare or highly variable events pose a specific challenge for evaluation. Although not commonly used among the studies reviewed, sensitivity analyses can be used for criterion-related approaches to address consequences of uncertainty or variation in detection goals. Criterion-related approaches have advantages over descriptive methods when the detection goal can be adequately defined, however their validity is dependent upon the assumptions used to construct outbreaks, as well as the comprehensiveness of the evaluation. The Descriptive approach provides an alternative approach to evaluation, particularly when there is no adequate definition of events of interest within a dataset, or when comparing outbreak detection methods.The advantage of the Descriptive approach lies in the potential for systematic description of the key features of the aberrations detected and the data examined, and the comparison of multiple detection methods using common data. As the validity of outbreak detection methods may vary according to the outbreak scenario as well as surveillance system factors, different methods need to be evaluated under the same conditions to determine their relative value . AlthougThe Descriptive approach requires further development to facilitate comparisons between outbreak detection methods through promoting more standardised, systematic and comprehensive descriptions of basic dataset features and measures of performance. Due to the potentially large reporting burden, further work is required to identify the attributes which would be most useful in standardised descriptions . 
HoweverOur review provides information on type and prevalence of approaches currently used to assess outbreak detection performance, and their strengths and limitations. We propose a basic framework to represent approaches currently used which provides a foundation for promoting increased comparability among studies and synthesis of knowledge in the field. This framework should offer assistance for both developers and consumers of outbreak detection research. Although there was considerable heterogeneity of study design within the approaches identified in this review, the type of approach used provides a reasonable guide to the strengths and limitations most relevant to specific studies.None of the approaches identified is alone sufficient to provide a comprehensive assessment of outbreak detection performance. In light of the complementary nature of their strengths and limitations, the use of multiple approaches to evaluation where possible is recommended, as has been highlighted previously . AlthougA major finding of this review is the identification of three of the four approaches described as 'cornerstone' approaches to evaluation, as they each use specific methods to address major requirements of the evaluation process. The key requirements of outbreak detection performance evaluations can be characterised by three main properties, being internal validity, external validity and comparability. These requirements can be related to the corresponding strengths of the three cornerstone approaches identified, being the Simulation, Epidemiological and Descriptive approaches respectively. As such, the use of multiple approaches to evaluation can provide the basis for a comprehensive and contextualised assessment of outbreak detection performance.Evaluation of the performance characteristics of outbreak detection methods is essential to allow an understanding of the type of outbreaks that can be identified, and how early these outbreaks can be identified . The curOur findings indicate that no single approach can fulfil all evaluation requirements. The varied nature of performance evaluation demonstrated in this review supports the need for further development of evaluation methods as has been identified previously ,6,25, toThe author(s) declare that they have no competing interests.AJP and REW conceived and designed the study, REW conducted the literature review and content analysis and, REW, SE, RGH, LD and AJP were involved in finalizing the results, and drafting and critically revising the manuscript. All authors read and approved the final manuscript.The pre-publication history for this paper can be accessed here:This file includes a list of all studies reviewed.Click here for file"} +{"text": "The effects of cimetidine and indomethacin on the growth of dimethylhydrazine induced or transplanted intestinal tumours in the rat have been studied. Cimetidine is a histamine type 2 receptor antagonist and indomethacin is an inhibitor of prostaglandin synthesis. Two models of rat intestinal tumours were used: a colon carcinoma line transplantable in syngeneic animals and intestinal tumours induced by dimethylhydrazine treatment of Sprague-Dawley rats. Cimetidine and indomethacin were given in drinking water, alone or in combination. Cimetidine had no effect on the growth of transplanted colon cancer but significantly increased the incidence of chemically-induced tumours, with a tendency toward more invasive and metastatic tumours than in the control animals. 
Indomethacin did not significantly modify the incidence or other characteristics of the tumours in any of the models. This result is at variance with a protective effect of indomethacin on chemically-induced rat colon cancer previously reported by others."} +{"text": "Xenopus oocytes to identify plant transporter function. We utilized the full-length cDNA databases to generate a normalized library consisting of 239 full-length Arabidopsis thaliana transporter cDNAs. The genes were arranged into a 96-well format and optimized for expression in Xenopus oocytes by cloning each coding sequence into a Xenopus expression vector.We have developed a functional genomics approach based on expression cloning in in vitro transcribed cRNAs from the library in pools of columns and rows into oocytes and subsequent screening for glucose uptake activity identified three glucose transporters. One of these, AtSTP13, had not previously been experimentally characterized.Injection of 96 Xenopus oocytes, combined with uptake assays, has great potential in assignment of plant transporter function and for identifying membrane transporters for the many plant metabolites where a transporter has not yet been identified.Expression of the library in The final concentration of glucose and sucrose was adjusted to 15 \u03bcM by addition of unlabelled glucose or sucrose. Oocytes were pre-incubated in Kulori for 5 min to ensure intracellular steady state pH [Uptake assays were performed in the saline buffer, Kulori declare that they have no competing interests.Xenopus oocyte expression vector and participated in the screening of the library and drafted the manuscript. MHHN participated in the design of the study, participated in the screening of the library and provided several of the constructs for testing the influence of plant UTRs on expression and helped to draft the manuscript, BAH participated in the design and coordination of the study and helped to draft the manuscript. All authors read and approved the final manuscript.HHN carried out the construction of the transporter database and generation of full-length cDNA library, performed the cloning of the CDSs into the AGI code for the 239 genes in constructed full-length transporter library. This table provides a list of the genes included in the library. Predicted functions are stated.Click here for fileUSER cloning primers. This table provides a list of the primers used to amplify the individual CDS'es from each full length cDNA.Click here for file"} +{"text": "In order to carry out a methodological research survey of systematic reviews of adverse effects we needed to retrieve a sample of systematic reviews in which the primary outcome is an adverse effect or effects.We carried out searches of the Database of Abstracts of Reviews of Effects (DARE) and the Cochrane Database of Systematic Reviews (CDSR) for systematic reviews of adverse effects published between 1994 to 2005. The search strategies used a combination of text words in the title and abstract, Medical Subject Headings (MeSH) and subheadings/qualifiers. In addition, DARE records in progress were hand searched. No language restrictions were placed on any of the searches. The performance, in terms of sensitivity and precision, of the search strategies and their combinations were tested in DARE and CDSR.In total 3635 records were screened of which 257 met our inclusion criteria. 
The precision of the searches in CDSR was low (0% to 3%), and no one search strategy could retrieve all the relevant records in either DARE or CDSR. Hand searching the records from DARE and CDSR not retrieved by our searches indicated that we had missed relevant systematic reviews in both DARE and CDSR. The sensitivities of many of the search combinations were comparable to those found when searching for primary studies in which adverse effects are secondary outcomes.Searching major databases of systematic reviews, for systematic reviews of adverse effects, proved more difficult than anticipated due to a lack of standard terminology used by the authors, inadequate indexing and the variations in the search interfaces of the databases. At present hand searching all records in DARE and CDSR seems to be the only way to ensure retrieval of all systematic reviews of adverse effects in these databases. Balanced decision making in health care requires evidence on the potential adverse effects of interventions as well as their beneficial effects. Although well-conducted systematic reviews of adverse effects are important sources of evidence such reviews are relatively rare in the literature and it iIt should be easier to identify systematic reviews that were conducted with the express purpose of evaluating adverse effects. We might expect that study retrieval would be facilitated by some mention of adverse effects in the title, abstract or indexing terms. As part of a wider study of methods used in systematic reviews of adverse effects we decided to assess whether we could identify systematic reviews of adverse effects quickly and easily in two major databases of systematic reviews.We searched for systematic reviews of adverse effects using the Database of Abstracts of Reviews of Effects (DARE) and the Cochrane Database of Systematic Reviews (CDSR). These databases were chosen because they are major collections of systematic reviews. No additional sources were searched as DARE is compiled through rigorous monthly searches of bibliographic databases (including MEDLINE and EMBASE) as well as hand searching key journals, grey literature, and regular searches of the web . No langThree approaches were used to identify records in DARE figure . FirstlyThe second approach was to use Medical Subject Headings (MeSH), such as DRUG TOXICITY, and subheadings/qualifiers unattached to any indexing terms ('floating' subheadings), such as 'adverse effects'. This was an essential part of the search strategy as many systematic reviews examine specific adverse effects, such as, headaches, so would not necessarily be identified by text words of synonyms of 'adverse effects' and searching for each named potential adverse effect individually would be impractical. Previous research has also indicated the usefulness of searching with 'floating' adverse effect subheadings -4.Finally, DARE records in the process of being written do not yet have an 'outcomes assessed in the review' field. The titles of all of these 'provisional' records were, therefore, hand searched by the researchers to identify additional relevant reviews.To enable all three approaches to be executed, three searches of DARE were conducted, two via the Centre for Reviews and Dissemination (CRD) website and one via The Cochrane Library website figure . The texSearches for Cochrane Reviews were conducted in the web version of The Cochrane Library Issue 1: 2005) and DARE (n = 2646) were also scanned for relevant systematic reviews. 
All relevant records identified then formed our gold standard (GS) set of records.Once we had established our gold standard set of records we were able to test the performance of individual approaches in retrieving the gold standard records. The search terms used to identify the systematic reviews were assessed for their usefulness in retrieving relevant records by measuring their sensitivity and precision. Sensitivity is a measure of the search's ability to identify relevant papers, and a high value is important for searches for systematic reviews. Precision, on the other hand, is a measure of the proportion of relevant records identified by a search strategy expressed as a percentage of all articles (relevant and irrelevant) identified by that strategy. Highly sensitive strategies tend to have low levels of precision. Sensitivity and precision for each database were calculated as follows;In total 4262 records were retrieved from CDSR and DARE, of which 3635 were unique records. From the 3635 titles and abstracts screened, 298 full reports were retrieved and 256 reviews (257 publications) met our inclusion criteria. Of the 257 publications, 246 had DARE abstracts and 11 were Cochrane Reviews which met our inclusion criteria. In total, therefore, 270 systematic reviews of adverse effects were identified; 256 from DARE and 14 from CDSR.The relevant records not retrieved by our search strategies were sifted for any potentially relevant generic adverse effect search terms. Only 2 of the 13 contained potentially useful terms. Both contained the MeSH indexing term RISK FACTORS and one had the term 'hazards' in the title. These search terms, in addition to the terms used in our search strategies, were tested to identify the most sensitive search strategy possible.The sensitivity and precision of the different search approaches are presented in table All the single search terms in CDSR yielded very low precision (0 to 3%). In DARE, however, some terms did provide high precision table . The sinThe most sensitive search strategy used 'floating' subheadings table . 'FloatiThe most sensitive search strategy in DARE, with the terms tested here, used a combination of text words in the title and abstract, a MeSH term and 'floating' subheadings (see figure In CDSR the most sensitive search strategy used the 'floating' subheading 'adverse effects' combined with searching for 'adverse near/20 objectives' in the abstract. This strategy retrieved 79% (11/14) of the relevant records. However, the precision of this search was low at 3% (11/338).This research highlights the advantages and disadvantages of searching databases through the CRD website and The Cochrane Library website. The CRD website offered the most current version of DARE and allowed searches to be limited to sections of the abstract, whereas The Cochrane Library version of DARE allowed searching using 'floating' subheadings. Even when conducting consecutive searches on DARE and CDSR in two different interfaces it is difficult to retrieve all systematic reviews of adverse effects on these databases. A sensitive search using text words in the title and abstract, indexing terms and 'floating' subheadings was unable to retrieve all the records of interest. An assessment of the missed systematic reviews indicated that most of these records could not have been retrieved without searching for specific adverse effects. 
Although adding these terms to our search strategy would have increased the sensitivity of the searches, adding the MeSH term RISK FACTORS, in particular, would have decreased the precision.Research has indicated that primary studies of adverse effects are difficult to locate -5. This Interestingly our searches shared similar sensitivities to those reported by Badgett et al's and GoldDerry et al found thThe variation in reported sensitivities from the different case studies may reflect their different inclusion criteria. Derry et al limited Derry et al and BadgOur searches were limited to CDSR and DARE. Although these are excellent sources of systematic reviews of adverse effects, not all reviews reported as being systematic are contained in these databases: DARE has a strict quality inclusion criterion and CDSR contains only Cochrane Reviews. These databases are sources of systematic reviews that tend to be of higher methodological quality, which may reflect better reporting and hence better indexing.The low number of systematic reviews of adverse effects on CDSR (14) precluded any useful analysis of the data, including comparisons to DARE and previous research. In addition, the usefulness of individual search terms was difficult to assess because of a low number of records.The search terms tested in this study were predefined from previous research and were not obtained by objective methods . HoweverSearching major systematic reviews databases for systematic reviews of adverse effects proved more difficult than anticipated due to a lack of standard terminology used by the authors of reviews, inadequate indexing and the variations in the search interfaces of these databases.Our research suggests that it will be even more difficult to conduct thorough searches for systematic reviews that report adverse effects as a secondary outcome even in resources devoted to systematic reviews such as DARE and CDSR. At present hand searching all records in DARE and CDSR seems to be the only way to ensure retrieval of all systematic reviews of adverse effects in these databases.Every systematic review with adverse effect(s) as a primary outcome should be indexed with appropriate term(s).Authors of systematic reviews should use standardised terminology to make it explicit that they are reviewing adverse effects.Database producers and indexers need to improve the consistency of their indexing of adverse effects.The publishers of The Cochrane Library and the producers of DARE could increase the utility of these databases to users \u2013 the former by allowing searches to be limited to sections of the structured abstracts in both DARE and CDSR records, and the latter by introducing the facility to search DARE with 'floating' subheadings.The author(s) declare that they have no competing interests.SG participated in the conception and design of the study, carried out the searches, sifted the records for relevant reviews and carried out an evaluation of the searches and helped draft the manuscript. YL carried out sifting of records for relevant reviews and helped draft the manuscript. HM conceived the study, sifted the records for relevant reviews, and helped draft the manuscript. 
All authors read and approved the final manuscript.The pre-publication history for this paper can be accessed here:"} +{"text": "The Committee to Assess the Health Implications of Perchlorate Ingestion [National Academy of Sciences (NAS)] released its final report in JanuaThe NAS committee was composed of 15 leading physicians and scientists with combined range of expertise to evaluate every scientific aspect of the perchlorate database and of the U.S. EPA\u2019s assessment of that database. The makeup of this committee and its credentials are available on the NAS website . The NASseveral of the animal studies \u2026 to be flawed in their design and execution. Conclusions based on those studies, particularly the neurodevelopmental studies, were not supported by the results of the studies.Although if inhibition of iodide uptake by the thyroid is duration-dependent, the effect should decrease rather than increase with time, because compensation would increase the activity of the sodium-iodide symporter and therefore increase iodide transport into the thyroid.Evidence has subsequently shown this to be the case .The California EPA perchlorate risk assessment relied oIn summary, the concerns presented by"} +{"text": "Early interventions proved to be able to improve prognosis in acute stroke patients. Prompt identification of symptoms, organised timely and efficient transportation towards appropriate facilities, become essential part of effective treatment. The implementation of an evidence based pre-hospital stroke care pathway may be a method for achieving the organizational standards required to grant appropriate care. We performed a systematic search for studies evaluating the effect of pre-hospital and emergency interventions for suspected stroke patients and we found that there seems to be only a few studies on the emergency field and none about implementation of clinical pathways.We will test the hypothesis that the adoption of emergency clinical pathway improves early diagnosis and referral in suspected stroke patients. We designed a cluster randomised controlled trial (C-RCT), the most powerful study design to assess the impact of complex interventions. The study was registered in the Current Controlled Trials Register: ISRCTN41456865 \u2013 Implementation of pre-hospital emergency pathway for stroke \u2013 a cluster randomised trial.Two-arm cluster-randomised trial (C-RCT). 16 emergency services and 14 emergency rooms were randomised either to arm 1 (comprising a training module and administration of the guideline), or to arm 2 . Arm 1 participants attended an interactive two sessions course with continuous medical education CME credits on the contents of the clinical pathway. We estimated that around 750 patients will be met by the services in the 6 months of observation. This duration allows recruiting a sample of patients sufficient to observe a 30% improvement in the proportion of appropriate diagnoses.Data collection will be performed using current information systems. Process outcomes will be measured at the cluster level six months after the intervention. We will assess the guideline recommendations for emergency and pre-hospital stroke management relative to: 1) promptness of interventions for hyperacute ischaemic stroke; 2) promptness of interventions for hyperacute haemorrhagic stroke 3) appropriate diagnosis. 
Outcomes will be expressed as proportions of patients with a positive CT for ischaemic stroke and symptoms onset <= 6 hour admitted to the stroke unit.The fields in which this trial will play are usually neglected by Randomised Controlled Trial (RCT). We have chosen the Cluster-randomised Controlled Trial (C-RCT) to address the issues of contamination, adherence to real practice, and community dimension of the intervention, with a complex definition of clusters and an extensive use of routine data to collect the outcomes. Stroke is the third most common cause of death in developed countries . In 80% Around 10% of all people with acute ischaemic stroke will die within 30 days of stroke onset, while 50% of the survivors will experience some level of disability after 6 months [As early effective interventions proved to be able to improve prognosis , implemeA systematic review on the adoption of in-hospital clinical pathways suggested that the currently available evidence is insufficient to support routine implementation of care pathways for the hospital management of acute stroke or stroke rehabilitation . TherefoWithin Lazio, the region with 5,302,302 inhabitants in which the capital city of Rome is located, the emergency medical services (ES) are accessible by dialling \"118\" with automatic connection to the nearest dispatch facility. The emergency network is organised in three levels of complexity. The basic level involves emergency rooms for first aid; the first level is equipped for mild-severe diagnosis and the second level for the most severe urgency. These services are in a number and a geographical disposition to grant the required assistance in emergency who should be trained to recognize early symptoms and should be empowered to contact ES in order to identify the appropriate level of care required, and to transport patients directly to stroke units.An evidence-based clinical pathway was developed with the methods adopted by the most qualified guidelines development agencies with the involvement of the emergency health workers in the expert panel, and we decided to evaluate its effectiveness in standardising and improving the procedures in pre-hospital (ES) and emergency (ER) setting, before considering its widespread adoption in our region.The cluster randomised trial design is indicated in assessing the effectiveness of complex interventions and it aWe present the protocol of study that compares the adoption of an evidence-based prehospital pathway versus current practice.It is our intention to describe the methodology of the study in the present publication in order to ensure independence of the results and to stimulate criticism and suggestions from the journal readers. The protocol has been presented in several international conferences to share our initiative with the scientific community and to collect their comments and ideas.As we are aware of the many limits and peculiarity of this study design we believe that lessons on how to deal with complex intervention in the difficult environment will be the added value of the present study.We identified all the emergency services referring to the two stroke units presently available in Rome figure . There wParticipants are all the workers belonging to the services enclosed in the study (enclosing the ambulances' drivers).About 152 physicians, 280 nurses, 50 drivers will be trained on the contents of the clinical pathway. 
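The abstract above targets a 30% improvement in the proportion of appropriate diagnoses. As a rough, assumption-laden check of what such a target implies for a cluster design, the sketch below computes a conventional two-proportion sample size and inflates it by a design effect; the baseline proportion, intracluster correlation (ICC) and mean cluster size are invented for illustration and are not values taken from the protocol, and the 30% target is read here as an absolute difference in proportions.

# Illustrative cluster-trial sample size check; all numeric inputs except the
# 30% target are assumptions, not values from the protocol.
from math import ceil, sqrt
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    # Individually randomised sample size per arm for comparing two proportions.
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    pbar = (p1 + p2) / 2
    return ((z_a * sqrt(2 * pbar * (1 - pbar)) +
             z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / (p1 - p2) ** 2

p_control, p_intervention = 0.30, 0.60       # assumed baseline and target proportions
icc, mean_cluster_size = 0.05, 40            # assumed ICC and patients per cluster
design_effect = 1 + (mean_cluster_size - 1) * icc
print("per arm, individual randomisation:", ceil(n_per_arm(p_control, p_intervention)))
print("per arm, inflated for clustering:", ceil(design_effect * n_per_arm(p_control, p_intervention)))

These figures are only a plausibility check and do not reproduce the protocol's own calculation.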
We estimated that around 750 patients will be met by the services in the 6 months of observation.Participants in the intervention arm will be trained on the content of the clinical pathway and will be given the clinical pathway itself for consultation and discussion in groups.The clinical pathway is based on available evidence based pre-hospital and emergency interventions for suspected stroke patients.It consists of the following main points:- ES dispatcher uses a short form of Cincinnati pre-Hospital Stroke Scale (CHSS) to identify suspected stroke patients during the telephone call;- ES health workers confirm diagnosis by CHSS on the scene;- patients are provided CT and referred to appropriate care.A group of trainers from the expert panel that developed the clinical pathway, and composed of:- an ES medical doctor- anaesthesiologist working on helicopter emergency unit- two neurologists working in stroke unit- a physician working in an emergency department- a neurosurgeon- two epidemiologist and evidence-based medicine experttrained a group of health workers selected from all the entities participating in the intervention arm study to act as \"facilitators\" for peer education. The facilitators trained their colleagues in the workplace with the help of audiovisual materials produced by the teachers themselves.At least one representative of the teachers' group took part in the meetings on the workplace to support groups' discussion and to answer possible questions. As a result of the training sessions every person working in any entity participating in the intervention arm of the trial have been trained on the content of the pathway.No interventions will be implemented in the control group.The objective of this cluster randomised controlled trial is to evaluate the effectiveness of pre-hospital and emergency clinical pathway for patients with suspected stroke in: improving early identification of stroke, promoting appropriate triage coding, achieving coherence between diagnosis, interventions and referral.The outcomes relate to organizational aspects and will be measured as:- proportion of patients with a positive CT for ischaemic stroke and symptoms onset <= 6 hour admitted to the stroke unit- proportion of patients with a positive CT for hemorrhagic stroke and symptoms onset <= 6 hour admitted to a neurosurgical ward- proportion of patients with a positive CT for hemorrhagic or ischaemic stroke or symptoms onset >6 hour admitted to the nearest hospital- proportion of ICD9CM code for stroke in emergency setting confirmed in the hospital discharge data- proportion of patients with stroke confirmed by CT results- proportion of patients with ischaemic stroke receiving treatment within 6 hour of symptoms onsetAll the outcomes will be reported at cluster level and will be cross-checked by the integration of data from the available information systems.We calculated the sample size, i.e. the duration of recruitment, keeping into account the number of emergency calls and the number of emergency room admissions for suspect stroke.For the ES the mean number of patients per 19 clusters was about 50 (n\u00b010)\u2022 Groups of emergency services (n\u00b06)\u2022 Only emergency rooms (n\u00b04).Clusters were attributed sequential numbers and sample function of STATA 7 (StataCorp LP 2005) was used to generate random numbers. 
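A minimal re-creation of this kind of seeded random allocation is sketched below in Python rather than STATA; the cluster labels, the seed value and the 10/9 split between arms are placeholders for illustration and are not taken from the protocol.

# Illustrative seeded random allocation of 19 clusters to two arms
# (placeholder labels, seed and split; the trial used STATA's sample function).
import random

clusters = ["cluster_%02d" % i for i in range(1, 20)]   # 19 sequentially numbered clusters
random.seed(12345)                                      # placeholder seed value
shuffled = random.sample(clusters, k=len(clusters))     # random permutation of the clusters
arm1 = sorted(shuffled[:10])   # arm 1: training module plus administration of the guideline
arm2 = sorted(shuffled[10:])   # arm 2: no intervention (current practice)
print("arm 1:", arm1)
print("arm 2:", arm2)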
We utilized the Italian lottery number extracted on 6th November 2004 in Rome as the seed for generating the random sequences. To assess the impact of the adoption of the clinical pathway compared with current practice, we will analyse the data currently available from the existing information systems and no additional information will be collected. In this way, emergency health workers will not be charged with extra work deriving from registration of ad-hoc information, and they will be completely devoted to the implementation of the pathway itself. The Agency for Public Health created and maintains the Information System of Emergency Rooms (ISER), the Stroke Surveillance System (SSS) and the Hospital Information System (HIS), which are sufficient to obtain data for measuring the outcomes. Moreover, the Agency for Public Health has access to the Information System of ES 118 (IS118).\u2022 Baseline characteristics of the entities will be compared to ensure randomisation success;\u2022 Analysis will be performed on an intention-to-treat basis;\u2022 Per-protocol analyses will be performed as a sensitivity analysis;\u2022 A preliminary analysis will be performed three months after the beginning of the observation. All the analyses will be processed with SAS (version 8) and STATA 7 (StataCorp LP 2005). The present protocol was designed following the Declaration of Helsinki adopted by the 18th World Medical Association General Assembly in June 1964, subsequently amended between 1975 and 2002 and clarified for paragraphs 29\u201330 in 2002 and 2004. The ethical committee of the Agency for Public Health approved the protocol with document n.124 of 15 June 2004. The ethical committee was created according to the Helsinki indications and includes, among others, a member of a patients' rights organization. Informed consent was signed by the person responsible for each entity participating in the study, who received a document containing all the relevant information together with copies of the clinical pathway and the study protocol. No informed consent will be requested from patients. The nature of the intervention under evaluation prevented us from identifying specific stopping rules, and we could only commit ourselves to submitting any possible problems to the ethical committee. The core of the pathway consists of transferring patients likely to benefit from complex interventions to specialized structures, rather than providing them with minimal assistance in the nearest hospital. This means that health workers belonging to experimental clusters may need to transfer patients over longer distances than they used to. The risks we can foresee are unexpected events during transportation and problems deriving from the unavailability of ambulances. Health workers have been warned to promptly report any kind of problem to the study coordinator. The emergency and pre-hospital field has not been studied sufficiently in randomised controlled trials; likely reasons are organizational difficulties, the critical condition of patients and the related ethical problems. We are therefore pioneering this kind of study in a difficult environment and we are prepared to face many obstacles. First, there are the changes in the network of the emergency services. During the last twelve months many new entities were created and others were suppressed, generating problems in the identification of clusters.
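The allocation procedure described above (sequentially numbered clusters and a random sequence generated from a publicly drawn lottery number used as seed) can be illustrated with a minimal Python sketch. The cluster labels, the seed value and the even split between arms below are illustrative assumptions only, not the trial's actual values, and Python's random module stands in for STATA's sample function.

```python
import random

# Illustrative cluster identifiers; the real trial mixed ES groups and
# emergency-room-only clusters, so these labels are placeholders.
clusters = [f"cluster_{i:02d}" for i in range(1, 20)]

# A publicly drawn number fixes the seed so the allocation sequence is
# reproducible and verifiable; the value below is hypothetical.
SEED = 53
rng = random.Random(SEED)

shuffled = clusters[:]   # sequential numbering is implicit in list order
rng.shuffle(shuffled)    # seeded permutation of the clusters

half = len(shuffled) // 2
allocation = {c: ("intervention" if i < half else "control")
              for i, c in enumerate(shuffled)}

for cluster, arm in sorted(allocation.items()):
    print(cluster, "->", arm)
```

Fixing the seed from a public draw is what makes the allocation auditable by third parties, which is presumably why the lottery number is cited in the protocol.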
Lack of personnel and high turnover increase risk of contamination due to personnel sharing during the summer and the holydays.Other problems have to do with the local law imposing that patients taken from the scene should be brought to the nearest hospital where the emergency physicians will determine the need (opportunity) to transfer elsewhere for appropriate cure. This procedure delay interventions and determines a very low percentage of cases directly admitted to the stroke unit: 14% of the ambulances transport , and 47% of the emergency room admissions. However health workers participating in the experimentation expressed their worry about possible legal consequences of their decisions caused by the implementation of the protocol study, and this may represent an important obstacle to adherence.Accuracy of data registration is crucial for process evaluation but our quality control system revealed that, even though a priori criteria for data collection are homogeneous, results are sometimes heterogeneous. Nevertheless, as misreporting is non differential the deriving underestimation may not severely affect results.CME = continuous medical educationCT = computer tomographyES = Emergency ServiceER = Emergency RoomALS = Advanced Life SupportBLS = Basic Life SupportCHSS = Cincinnati pre-Hospital Stroke Scaleth revision Clinical ModificationICD9CM = International Classification Diseases 9ICC = intra cluster correlationISER = Information System of Emergency RoomsSSS = Stroke Surveillance SystemHIS = Hospital Information SystemIS118 = Information System of ES 118The author(s) declare they have no competing interests.MF wrote the manuscript and assisted in the design of the study;ADL contributed to the manuscript, ideated and coordinated the study;PGR assisted in the cluster creation, sequence generation and allocation and analysis planning and contributed to the manuscript;GL provided data analysis from the information systems and planning of information retrieval for the assessment of outcomes;GG gave the input of the project, provided overview of all the steps of the study, and contributed to the manuscript final review.All authors read and approved the final manuscript.The pre-publication history for this paper can be accessed here:"} +{"text": "The occurrence of pulmonary artery obstruction in the course of acute aortic dissection is an unusual complication. The mechanism implicated is the rupture of the outer layer of the aorta and the subsequent hemorrhage into the adventitia of the pulmonary artery that causes its wall thickening and, at times, produces extrinsic obstruction of the vessel. There are no reports of this complication in acute intramural hematoma.An 87-year-old woman was admitted to the hospital in shock after having had severe chest pain followed by syncope. An urgent transesophageal echocardiogram revealed the presence of acute intramural hematoma, no evidence of aortic dissection, severe pericardial effusion with cardiac tamponade, and periaortic hematoma that involved the pulmonary artery generating circumferential wall thickening of its trunk and right branch with no evidence of flow obstruction. Urgent surgery was performed but the patient died in the operating room. 
The post mortem examination, in the operating room, confirmed that there was an extensive hematoma around the aorta and beneath the adventitial layer of the pulmonary artery, with no evidence of flow obstruction. This is the first time that this rare complication is reported in the scenario of acute intramural hematoma and with the transesophageal echocardiogram as the diagnostic tool. Acute aortic dissection, acute intramural hematoma and penetrating aortic ulcer are entities with high mortality known as acute aortic syndromes. The usual fatal course of these conditions is closely related to their lethal complications. An 87-year-old woman with a history of systemic hypertension and dyslipidemia was admitted to our hospital after having had severe chest pain of sudden onset radiating to the back and followed by syncope. She presented with marked hypotension (60/40 mmHg), sinus tachycardia (130 beats per minute), and livid legs. She needed orotracheal intubation, mechanical respiratory assistance, volume expansion and inotropic support. An urgent transesophageal echocardiogram revealed an aortic intramural hematoma compromising the anterior wall of the ascending aorta (Fig.). The post mortem examination, in the operating room, confirmed that there was an extensive hematoma around the aorta and beneath the adventitial layer of the pulmonary artery with no evidence of flow obstruction. This rare complication of the acute aortic syndromes has been carefully described by Buja et al. in a case of aortic dissection. There are few reports of this rare complication of acute aortic syndromes in the literature. Many of these reports are in patients who were initially thought to have pulmonary embolism. Indeed, the ventilation-perfusion lung scan demonstrates a mismatched absence of perfusion to the entire right lung in these cases. The reason we thought of presenting this case is threefold \u2013 there are few reports of this uncommon complication of acute aortic syndromes in the literature; this is the first time that it is described in the scenario of acute intramural aortic hematoma; and it is the first time that the transesophageal echocardiogram is used as the diagnostic tool. The transesophageal echocardiogram seems to be an accurate diagnostic tool not only to diagnose the aortic pathology but also to rule out this uncommon complication. The author(s) declare that they have no competing interests. MM carried out the transesophageal echocardiogram. MM and MSG performed the review of the literature and wrote the paper. All authors read and approved the final manuscript. Video of the transesophageal echocardiogram showing an intramural hematoma compromising the anterior aspect of the ascending aorta with no evidence of aortic dissection. A circumferential wall thickening of the pulmonary artery can also be seen in both short- and long-axis views. There is no evidence of pulmonary flow obstruction as shown by color and pulsed wave Doppler.
The aim of this study was to evaluate the vascularization as the basis for gingival esthetics.Standardized photographs of defined areas of the alveolar gingiva in operated and non-operated patients were taken and assigned to groups with same characteristics after color comparisons. In addition, histologic and immunohistologic analyses of gingival specimens were performed for qualitative and quantitative assessment of vessels and vascularization. Finally, colors and number of vessels were correlated.Our results demonstrated three different constellations of colors of the alveolar gingiva in healthy patients. The operated patients could not be grouped because of disparate depiction. There was a clear correlation between color and vessel number in the alveolar gingiva.Our investigations revealed the connections between vascularization and gingival color. Recommendations for specific change or even selection of colors based on the results cannot be given, but the importance of vascularly based incision lines was demonstrated. Esthetic rehabilitation in dento-alveolar surgery was focused solely on reconstruction of position, shape and color of teeth for a long time. Significant improvement was achieved when reconstruction of form and volume of the peri- and paradental or periimplant soft tissue was added to the protocol. Esthetic impression depends on a coordinated interaction of red and white colors of dental and gingival structures. Nowadays the dental color is chosen in a very differentiated and individual way allowing the patient him/herself to select the color of the teeth to be replaced according to the neighbouring or missing teeth. In contrast the red color of the gingiva originates from acrylics, composite resins, silicones or porcelain-based materials which lack the range of differentiation of the white color. In cases without reestablishment of the red color the present red color is taken over regardless of color changes due to surgery. The role of the red color of the soft tissue is still unclear because of lack of complete fundamental knowledge, making definition of a starting point for a specific treatment of color changes impossible. The aim of this study was to evaluate the different mucosal and gingival colors and to classify them according to defined criteria. In addition, the vascular basis was analysed using histologic sections from the oral mucosa and compared with the colors.Standardized digital photographs of the maxillary and mandibular gingiva (#13 to 23 and #33 to 43) . By means of immunohistological staining of the vessels using CD31 distribution of the vessels inside the gingiva was evaluated qualitatively in each specimen. The number of vessels was counted in 5 randomly selected so-called hot spots [Analyses covered qualitative description of the distribution of the vessels in the different parts of the gingiva, quantitative assessment of the number of vessels in the keratinized and non-keratinized areas. Statistical analysis covered the comparison of distribution of vessels using t-test (significance level p < 0.05), and correlation between number of vessels and wave length/color of the different areas of the gingiva using a linear Spearman correlation analysis and calculation of coefficient of correlation.Standardized photographs from 54 healthy unoperated patients and 32 operated patients were analysed. 
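A minimal sketch of the statistical comparisons described above (a t-test on vessel counts between keratinized and non-keratinized areas at p < 0.05, and a Spearman correlation between vessel number and the colour/wavelength measure), assuming SciPy is available. All numeric values are invented placeholders, not data from the study.

```python
import numpy as np
from scipy import stats

# Hypothetical vessel counts per hot spot (5 hot spots per specimen), standing
# in for the measurements from keratinized and non-keratinized gingiva.
keratinized = np.array([12, 15, 11, 14, 13, 16, 12, 15])
non_keratinized = np.array([18, 21, 19, 17, 22, 20, 19, 21])

# Two-sample t-test on vessel counts (significance level p < 0.05, as in the study).
t_stat, p_value = stats.ttest_ind(keratinized, non_keratinized)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# Spearman rank correlation between vessel number and a colour value
# (e.g. dominant wavelength); the paired values are invented for illustration.
vessel_counts = np.array([12, 14, 15, 17, 19, 20, 21, 22])
wavelength_nm = np.array([605, 610, 612, 618, 622, 625, 628, 630])
rho, p_corr = stats.spearmanr(vessel_counts, wavelength_nm)
print(f"Spearman rho = {rho:.2f}, p = {p_corr:.4f}")
```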
For histological and immunohistological analyses 28 gingival specimens were available. In almost all patients a characteristic formation of the alveolar soft tissue was found, which was divided into the area of attached gingiva (gingiva propria), the transition zone and the unattached area. This subdivision was made not only because of mobility and surface structure but especially because of nuances of the red color. It was demonstrated that the muco-gingival line does not represent an independent anatomic structure with its own coloring but appears as an expression of the transition from attached to unattached gingiva, demonstrating the result of structural and color-coordinated changes. Qualitative evaluation of the photodocumentation according to the mentioned criteria resulted in three subgroups of the healthy unoperated patient cohort. These subgroups were again subdivided according to the intensity of the muco-gingival line (Table). The natural structure of the gingiva was evaluated histologically, including keratinized and non-keratinized areas. The vessels were clearly demonstrated in both areas and their distribution analyzed. There was a strongly vascularized area under the epithelial layers in the lamina propria of the mucosa (Fig.). Using immunohistological staining, the endothelial cells were marked specifically and the vessels were analyzed quantitatively (Fig.). To date, scientific evaluation of the gingival mucosa has concentrated on two fields, both disregarding the color. On the one hand, vascularization of the oral mucosa was investigated using different techniques. The author(s) declare that they have no competing interests. JK set up the design of the study, performed the surgical part, and helped to draft the manuscript. AB carried out the photographical analysis and performed the statistical analysis. TF carried out the histological and immunohistological studies and performed the statistical analysis. UJ participated in the design of the study, coordination of the patients and helped to draft the manuscript. All authors read and approved the final version of the manuscript. AB and TF contributed equally to this work.
The country's dental needs are currently served by dental professionals working in government health centres and in private practices. The Dental Council of Trinidad & Tobago presently only recognises two categories of dental professionals: dentists and dental nurses , with 216 and 50 registered and enrolled respectively . The DenCurrently, most dental therapists in New Zealand work within the School Dental Service treating children up to the age of 13 years. School dental services have provided free treatment since 1945 and it is estimated that more than 95% of New Zealand children are enrolled . A recenThe provision of dental services in New Zealand is undergoing a period of change, largely due to the implementation of the Health Practitioners Competence Assurance (HPCA) Act in September 2004. This act allows for an expansion of the scope of practice of dental therapists, and also enables these oral health providers to move into private practice for the first time, hitherto dental therapists in New Zealand have been restricted to working in the publicly funded health sector, where remuneration is low. It is not yet known how these changes will affect the dental therapy in New Zealand though it is anticipated that many therapists will choose to shift their practice to the private sector where their potential earnings are higher.Following the report of the Dental Auxiliary Review Group and the subsequent publication of a consultation document concerning professionals complementary to dentistry by the General Dental Council, recent legislation in the UK has allowed Dental Therapists to work in all sectors of oral healthcare. Prior to this dental therapists in the UK were limited to working in hospitals, community dental services or the armed forces where they were employed in salaried posts. This situation parallels that in New Zealand, and it is again anticipated that UK therapists will seek to enhance their income by working in the primary care sector where their earnings are likely to be paid on an item of service basis.Three issues arise from a review of the reported career satisfaction of dental professions. First, there are few data available on the perceptions of some groups. Second, different studies have used different measures, making comparisons both within and across professions difficult. Third, there are no studies which have compared similar professional groups across countries . The aim of the present study is to compare the expressed levels of career satisfaction of three groups of comparable dental healthcare professionals, working in Trinidad & Tobago, the UK and New Zealand.Three postal surveys were conducted. Parallel questionnaires, including a question about career satisfaction, were mailed to all dental therapists registered with the General Dental Council in the UK(n = 380), all dental nurses (n = 50) enrolled by the Trinidad & Tobago Dental Council and currently practising in Trinidad & Tobago, and all dental therapists on the Dental Council of New Zealand database (n = 716). Overall response rates for the surveys were: UK therapists 80%; Trinidad & Tobago nurses 76%; New Zealand 83%. Only dental therapists currently employed in that role were included in the analyses, reducing the sample size to: UK, 221 dental therapists; Trinidad & Tobago 38, New Zealand 502.Job satisfaction was determined by a single question. 
Participants were asked to rate their overall satisfaction with their career as a dental nurse or therapist (according to country) on a ten-point scale with markers at each end, where the value 1 was labelled \"No satisfaction\" and the value 10 was labelled \"Complete satisfaction\". In addition, information was collected on the following:\u2022 Age of respondent\u2022 Whether the respondent had ever taken a break in their career\u2022 Whether the respondent felt a valued part of the dental team. Univariate analyses were conducted to compare dental personnel across the three countries on the variables identified. Mean and median scores on the satisfaction scale were calculated and compared using the Kruskal-Wallis test (a non-parametric version of the one-way ANOVA) since the satisfaction data had a skewed distribution (the skewness statistic was greater than twice its standard error). Age was treated as a continuous variable and compared using a one-way analysis of variance, since the distribution of the data was approximately normal. The proportion of individuals who had ever taken a career break was compared across countries, and the respondents' perception of whether they felt a valued part of the dental team was treated as a dichotomous variable and compared using the Chi-square statistic. In order to examine the relative importance of each variable in predicting satisfaction, a logistic regression analysis was conducted. The outcome variable was the satisfaction score dichotomised around the median value for the sample. Whether the individual had taken a career break, place (entered as three separate dummy variables coding each of the three countries) and whether the individual felt a valued member of the dental team were entered stepwise as predictor variables, in order of their simple correlation with the outcome variable. Additional variables were entered until there was no significant increase in the predictive utility of the variables. For each variable in the equation the following statistics were calculated: coefficient B, the standard error of B, significance, and the estimated odds ratio (exp [B]). For the final model the model chi-square and the log-likelihood statistic were calculated. The final logistic regression analysis model is summarised in the Table. The job satisfaction of three comparable groups of dental professionals working in different countries was compared and found to be lowest in Trinidad & Tobago. These findings should be interpreted with some caution given the limitations of the study. This study adopted a single item measure of career satisfaction, which allows for comparison with other studies that have used similar single item measures. The extent to which an individual is satisfied with their job will be determined by many factors, including pay and employment conditions. Country of work emerged as significant in the regression model, and is likely to act as a proxy measure for the working conditions, type of payment, healthcare system and experiences of the participants. However, the present study did not investigate these explanatory variables in any depth. Surprisingly, whilst satisfaction was lowest amongst dental nurses in Trinidad & Tobago, this group had the highest proportion of members who felt a 'valued' member of the team. A feeling of being valued was lowest amongst dental therapists in New Zealand, suggesting that the satisfaction item was measuring something more than the perception of being valued in a team.
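The regression step described in the methods above (satisfaction dichotomised around the median, with career break, dummy-coded country and feeling valued as predictors, reporting B, its standard error and exp[B]) can be sketched roughly as follows. The data frame is invented, and statsmodels is assumed as the fitting library, whereas the original analysis used SPSS.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Invented example data standing in for the survey responses.
df = pd.DataFrame({
    "country":      ["TT", "TT", "TT", "TT", "UK", "UK", "UK", "UK",
                     "NZ", "NZ", "NZ", "NZ"],
    "satisfaction": [8, 4, 5, 3, 9, 7, 6, 8, 7, 5, 9, 6],
    "career_break": [1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0],
    "feels_valued": [1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1],
})

# Dichotomise the satisfaction score around the sample median.
y = (df["satisfaction"] > df["satisfaction"].median()).astype(int)

# Dummy-code country (NZ becomes the reference category) and assemble predictors.
X = pd.concat(
    [df[["career_break", "feels_valued"]],
     pd.get_dummies(df["country"], prefix="country", drop_first=True)],
    axis=1,
).astype(float)
X = sm.add_constant(X)

result = sm.Logit(y, X).fit(disp=False)

# Coefficient B, its standard error and the estimated odds ratio exp(B).
summary = pd.DataFrame({
    "B": result.params,
    "SE": result.bse,
    "OR": np.exp(result.params),
})
print(summary)
print("model log-likelihood:", result.llf)
```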
The three countries in the present study place similar restrictions on the practice of dental therapy, and all three propose changes in the employment of this group of healthcare professionals but differ in the extent to which this change has been implemented. The UK system has taken the most steps towards changing the employment of dental therapists, followed by the New Zealand system. Career satisfaction was lowest amongst Dental Therapists in Trinidad and Tobago where the role is most restricted. Future research should address the extent to which the characteristics of the working environment impact upon job and career satisfaction. Research with dental practitioners has determined that system of remuneration, the characteristics of the working environment, and the type of service in which an individual works all exert an influence upon the practitioner's experience of their working life ,16,25.There is a need for further research addressing the impact of low career satisfaction on the dental workforce, including retention of workforce, the impact on the quality patient care and interactions with patients. It might be hypothesised that a dissatisfied workforce would be more likely to leave the career or be less motivated to deliver care of a high quality. Such associations have been found in studies of physicians and rehaDental therapists working in different healthcare systems report different levels of satisfaction with their career, being lowest in Trinidad & Tobago. Career satisfaction in all three countries was related to feeling a valued member of the dental team.The author(s) declare that they have no competing interests.RN was responsible for the conduct of the survey in Trinidad & Tobago. JTN was responsible for the collection of data in the United Kingdom collated the data from the three countries and performed the statistical analysis. KA was responsible for the conduct of the survey in New Zealand. All authors contributed to the production of the manuscript.The pre-publication history for this paper can be accessed here:"} +{"text": "A wide range of outcomes have been assessed in trials of interventions for carpal tunnel syndrome (CTS), however there appears to be little consensus on what constitutes the most relevant outcomes. The purpose of this systematic review was to identify the outcomes assessed in randomized clinical trials of surgical interventions for CTS and to compare these to the concepts contained in the International Classification of Functioning, Disability and Health (ICF).The bibliographic databases Medline, AMED and CINAHL were searched for randomized controlled trials of surgical treatment for CTS. The outcomes assessed in these trials were identified, classified and linked to the different domains of the ICF.Twenty-eight studies were retrieved which met the inclusion criteria. The most frequently assessed outcomes were self-reported symptom resolution, grip or pinch strength and return to work. The majority of outcome measures employed assessed impairment of body function and body structure and a small number of studies used measures of activity and participation.The ICF provides a useful framework for identifying the concepts contained in outcome measures employed to date in trials of surgical intervention for CTS and may help in the selection of the most appropriate domains to be assessed, especially where studies are designed to capture the impact of the intervention at individual and societal level. 
Comparison of results from different studies and meta-analysis would be facilitated through the use of a core set of standardised outcome measures which cross all domains of the ICF. Further work on developing consensus on such a core set is needed. Carpal tunnel syndrome is the most common peripheral entrapment neuropathy and a frequent cause of disability in the upper extremity. The evidence from trials evaluating the outcomes of surgical and non-surgical interventions is extensive. Outcome measures should not only capture the impact of the disorder on body structure (impairments) but also on activities and participation as defined in the International Classification of Functioning, Disability and Health (ICF). Given the wide range of outcomes that have been employed in trials of surgical interventions for CTS and the lack of consensus among clinicians on what are considered the most relevant outcomes to assess, a systematic review was performed to identify the outcomes assessed and the concepts contained in the measures used in trials of interventions for CTS, and to relate these concepts to the ICF as a reference tool. The bibliographic databases Medline (January 1966 to July 2005), CINAHL (January 1982 to July 2005) and AMED (January 1985 to July 2005) were searched using the following MeSH terms: randomized controlled trial, controlled clinical trial, carpal tunnel decompression, carpal tunnel release, carpal tunnel surgery. The titles and abstracts of articles retrieved through the search were checked against the eligibility criteria defined below. In order to cross-check the sensitivity of the search strategy and identify any studies not retrieved through the search, the bibliographies of a Cochrane review on surgical interventions for CTS were also examined. The review considered all published studies if they met the following eligibility criteria: randomized or quasi-randomized trials, the interventions were surgical, the patients had a diagnosis of CTS made through clinical symptoms with or without confirmatory electrodiagnostic testing, and the outcomes assessed were described. Studies designed solely for the purpose of evaluating the effectiveness of different local anaesthetics on pain during or within 48 hours of surgery were excluded. Only English-language publications were included, regardless of the time of publication. The purpose of this review was not to assess the methodological quality of these trials as this has been done previously [11,12]. A total of 137 studies were identified through the initial search. After checking the abstracts and further review of the full-text articles, a total of 28 studies which met the inclusion criteria were identified. The studies, conducted between 1985 and 2005, were all designed to evaluate the relative effectiveness of different surgical techniques such as endoscopic carpal tunnel release (ECTR) or open carpal tunnel release with or without epineurotomy. Standard open carpal tunnel release (OCTR) was the control intervention in 27 of the 28 trials. A total of 2558 hands in 2232 patients were studied in the 28 studies. The length of follow-up after decompression varied greatly between the studies, ranging from 4 weeks to 2 years. The mean follow-up time was 37 weeks, with 13 studies reporting follow-up at 1 year or longer.
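A rough sketch of the search-and-screen logic described in the methods above, assuming a simple in-memory representation of retrieved records; the record fields, the boolean grouping of terms and the screening function are illustrative assumptions rather than the authors' actual tooling.

```python
# Combine the design terms and the intervention terms into one boolean query.
mesh_terms = [
    "randomized controlled trial",
    "controlled clinical trial",
    "carpal tunnel decompression",
    "carpal tunnel release",
    "carpal tunnel surgery",
]
design = " OR ".join(f'"{t}"' for t in mesh_terms[:2])
intervention = " OR ".join(f'"{t}"' for t in mesh_terms[2:])
query = f"({design}) AND ({intervention})"
print(query)

def eligible(record: dict) -> bool:
    """Apply the review's eligibility criteria to one retrieved record."""
    return (
        record.get("randomized", False)            # randomized or quasi-randomized
        and record.get("surgical", False)          # surgical intervention
        and record.get("cts_diagnosis", False)     # clinical diagnosis of CTS
        and record.get("outcomes_described", False)
        and not record.get("anaesthesia_only", False)  # exclusion criterion
        and record.get("language") == "en"             # English-language only
    )

example = {"randomized": True, "surgical": True, "cts_diagnosis": True,
           "outcomes_described": True, "anaesthesia_only": False, "language": "en"}
print(eligible(example))  # True
```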
There was no apparent association between the type of outcomes assessed and other study characteristics such as the type of intervention studied, the length of follow-up or the country of the study.A wide range of outcomes were reported and these were classified according to the three ICF domains: i) measures of impairment of body function and body structure; ii) measurers of activity limitations and iii) measures of participation restriction . Measures of participation used in the trials were return to work which featured as a primary or secondary outcome measure in 15 studies and satisfaction (7 studies).The level of reporting on the actual methods of assessment varied between studies. In several studies standardised outcome measures were used but with none or minimal detail on the actual instrument used, method of administration or reference to literature on standardised protocols. For example, 11 studies assessed two-point discrimination with only 4 studies stating the instrument used and one study describing the method employment status (self-employed versus employee) and variations in healthcare systems and how 'sick notes' are issued. The time taken to return to work also does not indicate whether someone is able to resume the activities without pain or discomfort and to the satisfaction of the individual and/or his employer. Furthermore, work as a measure of participation is not relevant to those patients who are not in employment or retired.Current understanding of the pathophysiology of CTS together with the clinical manifestation of signs and symptoms allows the specific body structures and functions to be identified which are implicated in this disease. Depending on the severity of their symptoms, patients are also affected in their ability to carry out activities (activity limitations) and participation in work or leisure (participation restrictions). These in turn are also influenced by personal and environmental factors .Using the ICF as a framework, firstly the codes and categories within health and health-related domains relevant to CTS were identified and the he SF-36 which caWhilst it is important to obtain a comprehensive assessment of the impact of CTS on body function and structure as well as activity and participation, the use of a large battery of instruments also increases the burden on the patient and the tester. There was some overlap between the concepts assessed in trial outcomes. For example, in the domains body functioning, the functions of muscles were assessed by manual muscle testing of the abductor pollicis brevis muscle, dynamometry for pinch and power grip strength and degree of thenar wasting. In several trials two or more of these measures were used concurrently. A review of the evidence is needed on which of these methods and instruments is most valid, reliable and responsive which, in turn, reduces assessment burden and redundancy of similar outcome measures.The use of the disease-specific measure, the BCTQ is becoming more established in recent trials. A systematic review of the psychometric properties of the BCTQ indicated that it is valid for the population, has good reliability and is responsive and shouThis review has some limitations: we considered RCTs only in this review, based on the assumption that in well-designed trials careful attention would be paid to the selection of outcome measures. 
There are however a number of large follow-up and cohort studies which also report outcomes from surgical decompression and inclusion of these studies may have highlighted additional outcome domains and instruments.The ICF provides a useful framework for identifying the concepts contained in outcome measures employed to date in trials of surgical intervention for CTS. It can help in the selection of the most appropriate domains to be assessed, especially where studies are designed to capture the impact of the intervention at individual and societal level. The findings of this review on surgical outcomes indicate that studies to date have focused primarily on assessment of impairment and less on the activity limitations and participation restrictions. It is important that consensus is achieved on which outcome measures should be used for which domains and on the standardisation of methods. A minimum set of outcome measures should include patient-reported scales of symptom severity and functional status such as the BCTQ, clinical measures of motor and sensory function and everyday performance in self-care, work and leisure as well as health-related quality of life. Further work is needed to review the psychometric properties of existing instruments in CTS populations and to develop consensus on a core set of outcome measures to be used in future clinical trials for CTS which crosses all three domains of the ICF.The author(s) declare that they have no competing interests.CJH conceived of the original idea, obtained funding and participated in the search, review and drafting of the manuscript. JCCL carried out the searching and reviewing of studies and helped draft the manuscript. FS provided expert advice on systematic reviewing and contributed to the manuscript. All authors read and approved the final manuscript.The pre-publication history for this paper can be accessed here:"} +{"text": "The increasing use of complementary and alternative medicines in Australia has generated concern regarding the information on these products available to both healthcare providers and the public. The aim of this study was to examine the practice behaviours of naturopaths in relation to both the provision of and access to information on complementary and alternative medicines (CAM).A representative sample of 300 practicing naturopaths located nationally were sent a comprehensive survey which gathered data on self reported practice behaviour in relation to the provision of information on oral CAM to clients and the information needs of the practitioners themselvesA response rate of 35% was achieved. Most practitioners (98%) have a dispensary within their clinic and the majority of practitioners perform the dispensing themselves. Practitioners reported they provided information to clients, usually in the form of verbal information (96%), handwritten notes (83%) and printed information (75%). The majority of practitioners (over 75%) reported always giving information on the full name of the product, reason for prescribing, expected response, possible interactions and contraindications and actions of the product. Information resources most often used by practitioners included professional newsletters, seminars run by manufacturers, patient feedback and personal observation of patients. Most practitioners were positive about the information they could access but felt that more information was required in areas such as adverse reactions and safe use of CAM in children, pregnancy and breastfeeding. 
Most naturopaths (over 96%) were informed about adverse events through manufacturer or distributor newsletters. The barriers in the provision of information to clients were misleading or incorrect information in the media, time constraints, information overload and complex language used in printed information. The main barrier to the practitioner in information access was seen as the perceived division between orthodox and complementary medicine practitioners.Our data suggest most naturopaths were concerned about possible interaction between pharmaceuticals and CAM, and explore this area with their patients. There is scope to improve practitioners' access to information of adverse events including an increased awareness of sources of information such as the Australian Therapeutic Goods Administration (TGA) website. The use of complementary medicines in Australia has become commonplace. In 2002 it was estimated that 52% of the Australian population had used at least one non-physician-prescribed complementary medicine in the previous year. These mThe Expert Committee on Complementary Medicines in the Health System in Australia was convened in 2003 and asked to consider the regulatory, health system and industry structures necessary to ensure that the objectives of the National Medicines Policy (NMP) and in particular that of the National Strategy for Quality Use of Medicines (QUM), were met in relation to complementary medicines. The report from the Expert Committee raised a number of concerns surrounding the information available to healthcare providers and consumers regarding complementary medicines, and one of the recommendations of the committee was the commissioning of a study to determine the needs of healthcare professionals and consumers on complementary medicines and the options available for conveying this information[Previous research has surveyed the naturopathic and Western herbalists workforce, providiResearch in the USA has also examined practice patterns of naturopathic physicians. The demIn Australia the profession of naturopathy is essentially unregulated. A level of self-regulation is exerted by the professional associations in the form of minimum qualification levels and monitoring of annual requirements such as Continuing Professional Education, however there are a large number of these associations and entry requirements vary between the groups. Several Australian States are currently exploring options for the regulation of naturopaths and other complementary medicine practitioners .The current agreed minimum qualification level enforced by the main professional associations is Advanced Diploma in Naturopathy. This qualification is contained within the Health Training Package HLT02 and represents a consistent base level of training within the industry. The Advanced Diploma courses are taught at privately-owned Vocational Education and Training Colleges which are registered through the various State Education Authorities. Some of these colleges are also registered to deliver degree courses, and several universities also deliver courses in naturopathy.The purpose of this study was to examine the practice behaviours of naturopaths in relation to the provision of and access to information on oral complementary medicines. In particular we were interested to know what are their counseling and advice-giving behaviors and are these behaviors adequate to ensure safe and judicious use of oral complementary medicines. The study had two aims:1. 
to explore the information provided by naturopaths to clients on these medicines, circumstances under which information would not be provided to a client, and the types of questions clients ask in respect of oral complementary medicines, and 2. to examine the skill base of naturopaths in accessing information, how practitioners find out about adverse events and changes in the regulation of complementary medicines, and their confidence in answering client questions about these medicines. We designed a short self-completion questionnaire with the aim of collecting data from naturopaths describing their practice behaviors, education and training, information gathered during a consultation with a client, experience with accessing information and knowledge about complementary and alternative therapies, and socio-demographic characteristics. The questionnaire adopted and adapted questionnaires that had been used to assess practice behaviors and practitioner knowledge around CAM and the use of other over-the-counter medicines. The sampling frame was based on a population of 3,000 naturopaths from three States in Australia. The sample size was estimated at 300, based on an estimate that 90% of naturopaths would actively prescribe herbs, with a 95% confidence interval; this allows for a 5% error and assumes a 50% response rate. A stratified sample was drawn from the three States. Data were collected from a national survey of practicing naturopaths in Australia. A representative sample of 300 naturopaths was drawn from a listing of naturopaths held by the Medical Benefits Fund (MBF), a large national health insurance fund in Australia. It was decided to sample from a database held by a health fund because of the absence of a central database of naturopaths held by any professional association. MBF did not release the list of practitioners but undertook the sampling procedure under the direction of the research team and provided the administrative support to mail out the questionnaires. The Fund generated a list of active naturopaths, defined as those who have had a claim paid in the previous 12 months. Subjects were assigned a random number selected by a researcher independent of the study team. Naturopaths were sent a questionnaire with a covering letter explaining the purpose of the study and a reply-paid envelope. Two reminders were sent out to facilitate the return of questionnaires. Statistical analysis was performed using SPSS version 11.5. In total 300 questionnaires were sent out, and 110 were returned. However, five of these were from non-practicing practitioners, giving a response rate of 35%. The majority identified themselves as Caucasian and female and were in the 45\u201354 age category. A diploma was the standard qualification. Most practitioners (98%) reported having a dispensary in their clinic, with 101 (97%) naturopaths performing the dispensing themselves, and a small number (7%) using unqualified staff. Seventy-eight percent of naturopaths always advised their patients to purchase their products from their clinic, with the remainder advising their patients to purchase their products from a pharmacy or health food shop. Twenty-eight practitioners (27% of the sample) never dispensed a repeat medication before undertaking a follow-up consultation, while 74 (71%) often or sometimes did so. Generally, there was a reluctance to dispense medications without a consultation.
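The sample-size reasoning reported above (a 90% proportion estimated to within 5% at 95% confidence, then inflated for an anticipated 50% response rate) corresponds approximately to the standard formula for a single proportion. A short worked sketch follows, with the caveat that the authors' exact rounding and any finite-population correction are not stated, so the published figure of 300 is only approximately reproduced.

```python
import math

z = 1.96   # 95% confidence
p = 0.90   # expected proportion actively prescribing herbs
e = 0.05   # acceptable margin of error

# Required number of responses for estimating a proportion.
n_required = math.ceil((z ** 2) * p * (1 - p) / (e ** 2))

# Inflate for the anticipated 50% response rate.
n_to_mail = math.ceil(n_required / 0.5)

print("responses needed:", n_required)      # 139
print("questionnaires to mail:", n_to_mail) # 278, of the same order as the 300 used
```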
Practitioners reported never dispensing a product without full consultation: (1) if the patient was presenting with a different condition (45% of practitioners), (2) if the patient had consulted with another naturopath (49%), and (3) if the patient requested it (58%).The majority of naturopaths (96%) gave verbal information to their patients on complementary medicines. Most also provided written information which included practitioner handwritten notes (83%), printed information (75%) and less often printed information from journal articles (26%), text books (25%) or manufacturer information (35%).The content of information presented to patients varied between practitioners Table . InformaThe approach in counselling on appropriate use of naturopathic products will be influenced by the practitioner's training and their skills in life long learning. Ninety one (83%) naturopaths reported their formal training met their needs when performing the day to day practice of naturopathy.often by practitioners to obtain information on CAM included articles in professional newsletters (91%), reference textbooks (72%), continuing professional education (CPE) seminars run by manufacturers (70%), patient feedback (66%), personal observation of patient response (58%), and CPE activity run by other industry bodies (55%). Use of the scientific literature or discussion with other health professionals was reported less often. For example the resources used sometimes included discussion with naturopathic colleagues (49%), journal articles describing a case study (43%), journal articles describing randomised clinical trials (41%), information from course notes (43%), popular health magazines (44%) and reference web sites (37%). Information sources used rarely by practitioners included interaction with pharmacists (46% of naturopaths), medical doctors (44% of naturopaths) or other health care practitioner (40% of naturopaths).There are a number of information resources available to practitioners and we asked naturopaths to indicate the frequency with which they used a particular resource. The resources used most Naturopaths' views on the adequacy of information resources they can draw on to help answer questions relating to CAM are reported in Table A number of factors were expressed as being very important in influencing practitioner confidence with providing information to their patients. This included: knowledge of these products (95%), access to scientific or clinical information (85%), belief in the effectiveness of these products (84%), belief in the quality of the products (84%) and belief in the safety of the products (81%).A safety concern relating to the use of CAM are adverse events associated with herbal medicines. Over 96% of naturopaths reported they were notified about adverse events through manufacturer or distributor newsletters. Other sources of notification were from professional associations (88%) and professional association journals, website or email discussion lists (86%), CPE (69%) and informal discussion with colleagues (60%). Less than 30% of the naturopaths used the Therapeutic Goods Administration (TGA) website as a source of notification of adverse events.Practitioners were asked how their knowledge of adverse events could be improved. Fifty percent of practitioners identified formal education as a need and 58% identified improved access to CPE activities and other relevant information sources were needed. 
Formal training or improved skills to assess quality scientific or clinical evidence was viewed positively by 30% of naturopaths.Practitioners were asked open ended questions regarding whether the information they were able to access about CAMs was adequate, what they considered the barriers to the provision of information about CAMs to patients were, and what suggestions they had for ways of overcoming these barriers.The majority of practitioners were very positive about the information they could access although there were common themes in these answers regarding the time taken to access quality information. A number of practitioners mentioned the need for unbiased information and expressed concerns regarding the objectivity of the information that was supplied by the manufacturers of those products.In general, practitioners expressed strong opinions about what they perceived to be the barriers to their own access to information. The perceived division between orthodox and complementary medicine practitioners was the strongest theme presented. Themes identified less frequently were perceived Government bias against complementary medicines, the cost of accessing information and the lack of research. Other comments related to the need for professional registration of naturopaths and a need for orthodox medical practitioners to have a greater awareness of and training in complementary medicines.With regard to the information provision to patients, the barriers were seen as misleading or incorrect information in the media, time constraints within the consultation, information overload on the part of the client, information supplied with the product in language too complex for the client, and self-prescribing of CAMs through over-the-counter sales of these products. When suggesting ways to overcome these barriers, practitioners sought greater recognition and status of the naturopathic profession and increased integration and communication with the orthodox health community. Many practitioners also felt that restricting CAM products from over-the-counter sales and allowing them to be supplied by complementary practitioners only would assist with information provision to the client. There were also calls for manufacturers to include more information on their products and for greater balance in media reporting of CAM-related issues.Self reported data from this study reported on the practice behaviour of naturopaths in relation to providing information on complementary medicines. Our data suggest most naturopaths are concerned about safety in relation to possible interaction between orthodox Western medicine and CAM and this is an area naturopaths explore with their patients during the consultation. There is scope to improve the practice behaviour of some naturopaths to ensure greater safety by giving consideration to reducing the number of practitioners who would dispense a product without consultation (on solely the patient's request), and not using unqualified staff to dispense a product.We found both consistencies and inconsistencies between practitioners where information on the product was presented. Most practitioners reported providing information on the dose, the full name, reason for prescribing the product, possible interactions, expected response, contra-indications, expected response and the action of the product. 
However, information was not provided by 40% or more of practitioners on possible adverse effects, interaction with other substances, and the names and ingredients of the herbal product. A bias resulting from the self report of behaviour can not be excluded.With an increasing profile of safe use of complementary medicines, improved labelling by the naturopath has the potential to reduce public health concerns and increase the judicious use of herbal medicines. Although the majority of prescribing took place in the clinic, 22% of practitioners advised their patients to purchase product from the pharmacy or health food store. Patients who are sent to these outlets may or may not be given a written directive with the item to be purchased. Patients may be sold product which is different to that recommended by the naturopath, be given conflicting advice regarding dosage or may simply forget the instructions given to them, particularly when those instructions differ from that printed on the label.Initial naturopathic training met the needs of naturopaths with performing the day to day practice of naturopathy. We have identified a number of deficiencies that will need to be addressed to ensure naturopaths can remain up to date with increasing quality information on CAM and continue to provide high standards of counselling. Naturopaths identified that information resources relating to the safety of CAM use in children, use during pregnancy and breastfeeding, the adverse effects of CAM, the safety of using CAM as related to certain medical conditions and interactions of CAM and other medicines as not very adequate. Given the insufficient research base describing any adverse effects in some of these patient groups, providing evidence based practice in these areas will remain inadequate and naturopaths will need to continue practising with caution.Naturopaths use a wide variety of information sources to obtain information on CAM, although conventional health care practitioners and practitioners from another CAM discipline were not widely used as a resource to access information. This is similar to the findings from the workforce survey where practitioners reported that they felt well prepared for practice within their clinical training with the exception of the area of inter-professional communications. Our finComplementary Medicines in the Australian Health System (2003) identified a need to improve access to information about adverse reports[The naturopathic and Western herbal medicine workforce survey reported that adverse events were relatively common, and calculated that naturopaths would on an average encounter 1.2 adverse events in their patients per year of full time practice, or one e reports. Our stuA limitation of this study is the low response rate which may limit the generalisations we can make relating to the national population of naturopaths. The demographic characteristics of our study population allow limited comparison with published data from national surveys from Australia and the United States,3. Data The findings from this study are timely in relation to the Australian Government's recognition of the need to identify the information and skills needed by health care professionals to assess the quality of evidence concerning the use of complementary medicines. This study provides baseline data for describing the practice behaviors in naturopaths in relation to counseling and advice-giving behaviors and their skills in accessing information on CAM adverse events. 
It will facilitate the evaluation of practice behaviour over time. The majority of naturopaths in Australia report behaviours that suggest they provide appropriate and judicious use of oral CAMs for their patients but the practitioners identify the need for further training to ensure safe and judicious practice continues in the future.The author(s) declare that they have no competing interests.CS conceptualised the research, analysed the data and took the lead with writing the paper.KM participated in conceptualising the research, study design, analysed the data and contributed to the preparation of the manuscript.EH participated in the study design and reviewed the manuscript.SS participated in the study design and reviewed the manuscript.GB participated in the study design and reviewed the manuscript.DR participated in the study design and reviewed the manuscript.The pre-publication history for this paper can be accessed here:"} +{"text": "It has been well documented that the pineal hormone, melatonin, which plays a major role in the control of reproduction in mammals, also plays a role in the incidence and growth of breast and mammary cancer. The curative effect of melatonin on the growth of dimethylbenz [a]anthracene-induced (DMBA-induced) mammary adenocarcinoma (ADK) has been previously well documented in the female Sprague\u2013Dawley rat. However, the preventive effect of melatonin in limiting the frequency of cancer initiation has not been well documented.The aim of this study was to compare the potency of melatonin to limit the frequency of mammary cancer initiation with its potency to inhibit tumor progression once initiation, at 55 days of age, was achieved. The present study compared the effect of preventive treatment with melatonin (10 mg/kg daily) administered for only 15 days before the administration of DMBA with the effect of long-term (6-month) curative treatment with the same dose of melatonin starting the day after DMBA administration. The rats were followed up for a year after the administration of the DMBA.The results clearly showed almost identical preventive and curative effects of melatonin on the growth of DMBA-induced mammary ADK. Many hypotheses have been proposed to explain the inhibitory effects of melatonin. However, the mechanisms responsible for its strong preventive effect are still a matter of debate. At least, it can be envisaged that the artificial amplification of the intensity of the circadian rhythm of melatonin could markedly reduce the DNA damage provoked by DMBA and therefore the frequency of cancer initiation.In view of the present results, obtained in the female Sprague\u2013Dawley rat, it can be envisaged that the long-term inhibition of mammary ADK promotion by a brief, preventive treatment with melatonin could also reduce the risk of breast cancer induced in women by unidentified environmental factors. MBYDJC equally carried out the experimental studies. MHP made a contribution to the acquisition of data. AM performed the histological analysis. RS participated in the design of the study and performed the statistical analysis. BK conceived the study and participated in its design and coordination and helped to draft the manuscript. 
All authors read and approved the final manuscript."} +{"text": "First, scientists from the Moscow Kurchatov Institute presented physical evidence that the dominant sources of energy released by the exploding reactor were not the officially assumed thermal explosions but rather explosions of a nuclear nature. Mortality and teratogenic effects were observed in Germany, Poland, and the former Soviet Union shortly after the Chernobyl explosion. In the absence of scientifically convincing evidence rebutting such challenges to official assessments of the physical events and long-term human consequences of the Chernobyl catastrophe, the Precautionary Principle in public health issues requires"} +{"text": "Peripheral sensory diabetic neuropathy is characterized by morphological, electrophysiological and neurochemical changes to a subpopulation of primary afferent neurons. Here, we utilized a transgenic mouse model of diabetes (OVE26) and age-matched controls to histologically examine the effect of chronic hyperglycemia on the activity or abundance of the enzymes acid phosphatase, cytochrome oxidase and NADPH-diaphorase in primary sensory neuron perikarya and the dorsal horn of the spinal cord. Quantitative densitometric characterization of enzyme reaction product revealed significant differences between diabetic and control animals for all three enzymes. Levels of acid phosphatase reaction product were found to be significantly reduced in both small diameter primary sensory somata and the dorsal horn. Cytochrome oxidase activity was found to be significantly lower in small primary sensory somata, while NADPH-diaphorase labeling was found to be significantly higher in small primary sensory somata and significantly lower in the dorsal horn. In addition to these observed biochemical changes, ratiometric analysis of the number of small versus large diameter primary sensory perikarya in diabetic and control animals demonstrated a quantifiable decrease in the number of small diameter cells in the spinal ganglia of diabetic mice. These results suggest that the OVE26 model of diabetes mellitus produces an identifiable disturbance in specific metabolic pathways of select cells in the sensory nervous system and that this dysfunction may reflect the progression of a demonstrated cell loss. Diabetic sensory neuropathies are a common, clinically observed sequela of hyperglycemia and are characterized by a progressive degradation of primary afferent function. The OVE26 transgenic mouse line used in this study displays a well-characterized chronic hyperglycemia and hypoinsulinemia within days after birth. Statistical analyses were conducted using SigmaStat (Jandel). Controls for densitometric analysis consisted of: 1) simultaneous sectioning and mounting of diabetic and control tissue on the same slide to ensure identical histological processing; 2) statistical analysis to verify consistency of staining between animals within control and experimental groups; 3) correction for small fluctuations in tissue opacity/thickness by subtractive illumination, whereby the density value of white matter was subtracted from that of the immediately adjacent ventral horn; and 4) manual adjustment and calibration of the video camera parameters and microscope illumination, and acquisition of all images using identical settings. 
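The densitometric workflow just described (a subtractive background correction using an adjacent white-matter reading, together with the small-versus-large soma classification reported for this model) can be illustrated with a minimal sketch. The function names, the 500 μm2 cutoff applied as a boundary, and all example values below are assumptions for illustration only and are not data or code from the study:

```python
# Minimal sketch of a densitometric comparison, assuming hypothetical inputs.
# Corrected OD = measured optical density minus an adjacent white-matter reading,
# a stand-in for the "subtractive illumination" correction described above.

from statistics import mean

def corrected_density(region_od, white_matter_od):
    """Subtract the white-matter (background) reading from a region reading."""
    return region_od - white_matter_od

def small_to_large_ratio(soma_areas_um2, small_cutoff=500.0):
    """Classify somata by cross-sectional area and return the small:large count ratio."""
    small = sum(1 for a in soma_areas_um2 if a < small_cutoff)
    large = sum(1 for a in soma_areas_um2 if a >= small_cutoff)
    return small / large if large else float("inf")

# Hypothetical example values (arbitrary units), one list per animal group.
diabetic_od = [corrected_density(0.42, 0.10), corrected_density(0.39, 0.09)]
control_od = [corrected_density(0.55, 0.10), corrected_density(0.58, 0.11)]
print("mean corrected OD, diabetic vs control:", mean(diabetic_od), mean(control_od))

diabetic_areas = [220, 310, 480, 900, 1500]            # μm², hypothetical
control_areas = [180, 240, 300, 450, 800, 1400]        # μm², hypothetical
print("small:large ratio, diabetic vs control:",
      small_to_large_ratio(diabetic_areas), small_to_large_ratio(control_areas))
```

In the study itself the group comparisons were made with the statistical package named above rather than by inspecting means, but the sketch shows where the background correction and the size cutoff would enter such a tabulation.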
All experiments were conducted in accordance with the guidelines of our institutions and the National Institutes of Health regarding the care and use of animals for experimental procedures. Quantitative analysis of CO, AP and NADPH-d staining was undertaken on both the dorsal horn of the L5 segment of the spinal cord and the large and small cells of the L5 sensory ganglion using previously described densitometric analysis. The ratio of small to large (between 500 and 1950 \u03bcm2 in area) primary sensory somata in the fifth lumbar spinal ganglia revealed a significant decrease in the proportion of small to large cells in diabetic compared to control mice. Quantitative densitometric analysis of the abundance and distribution of enzyme histochemical reaction product in dorsal root ganglia (DRG) revealed substantive differences between diabetic and control mice. Small somata from the ganglia of diabetic mice exhibited lower levels of AP and CO reaction product and an increase in the density of the reaction product for NADPH-d in comparison to control animals. No differences were observed in large diameter neurons from diabetic as compared to control animals. Prior to fixative perfusion, the phenotypic status of OVE26 diabetic mice was confirmed by their characteristic small eyes caused by the GR19 gene in their transgenic construct. Significant differences were also found in AP (P = 0.026; 27 sections quantified) and NADPH-d reaction product in lamina I and II of the dorsal horn of control and diabetic mice. The author(s) declare that they have no competing interests. RZ completed this work as part of his doctoral dissertation and was involved in the writing of this manuscript and contributed both intellectually and practically to the content. PE created, characterized and supplied the transgenic mice and was also involved in the writing of this manuscript and contributed both intellectually and practically to the content. PC provided the lab, supervision, and support for this work, exclusive of that associated with generation and characterization of the mouse model. PC was also involved in the design and coordination of this study and participated in the writing of this manuscript and contributed both intellectually and practically to the content. All authors read and approved the final manuscript."} +{"text": "To be useful, clinical practice guidelines need to be evidence based; otherwise they will not achieve the validity, reliability and credibility required for implementation. This paper compares the methods used in gathering, analysing and linking of evidence to guideline recommendations in ten current hypertension guidelines. It found that several guidelines had failed to implement methods of searching for the relevant literature, critical analysis and linking to recommendations that minimise the risk of bias in the interpretation of research evidence. The more rigorous guidelines showed discrepancies in recommendations and grading that reflected different approaches to the use of evidence in guideline development. Clinical practice guidelines as a methodology are clearly still an evolving health care technology. Clinical practice guidelines can provide building blocks for changing and improving health care. Like many other conditions, hypertension has been the subject of many different international guidelines. The World Health Organisation (WHO) has described hypertension \u2013 defined as a blood pressure of greater than 140/90 mmHg \u2013 as one of the ten leading risk factors influencing the global burden of disease. 
The aim of this study was to review how well 10 guidelines for hypertension addressed validity in terms of their methods and their use of published evidence. We reviewed the methods used in development and the key recommendations of ten current guidelines. We evaluated the methods used to develop each guideline with particular reference to three dimensions that relate to the use of research evidence, as found in the full published report of each guideline:\u2022 the construction of the guideline development group and its component stakeholders.\u2022 the use of published literature and the strategy used in screening for the primary evidence; in particular, the use of existing systematic reviews or the performance of a new systematic review explicitly to answer questions posed by the guideline.\u2022 the grading of evidence and recommendations: in particular, an explicit link between recommendations and supporting evidence. We compared recommendations on four areas that were common to all the guidelines: diagnosis of hypertension, lifestyle modification, criteria for initiation of antihypertensive drug therapies and initial recommended drug therapy. We also explored links between recommendation grades and citations and looked at how these differed in recommendations for drug therapy and salt intake. The measures used to assess the guideline development process are summarised in the accompanying table. Only three guidelines were constructed by multidisciplinary groups where the members' affiliations and conflicts of interest were described; these three guideline groups included patient representatives as well as key professional stakeholders. A further six guidelines provided only a list of names and institutional affiliations of members of the guideline development group. One further guideline gave no details of the guideline development group. Recommended limits for alcohol consumption varied between guidelines, with limits for men ranging from 24 to 30 g (ethanol) per day and lower limits given for women. Differences in the daily limits for alcohol consumption may reflect the variations in guidance for sensible drinking in different countries. Most guidelines recommended a diet rich in fruit and vegetables with reduced saturated and total fats. Guidelines typically recommended 30\u201345 minutes of aerobic exercise three to five times per week. Although their recommendations were similar, guidelines lacked consistency in the estimations of the effect of lifestyle changes on blood pressure, possibly reflecting the different data sources used. Guidelines varied in the additional areas that they addressed: potassium, magnesium and calcium supplementation, management of stress and caffeine consumption were considered by some of the guidelines. This demonstrates one of the challenges facing guideline developers. Each clinical care pathway involving assessment, diagnosis, treatment and follow-up requires multiple complex decisions. A clinical guideline will be unable to offer guidance on every consideration that must be made by caregivers and patients. 
Guidelines will reflect this complexity and are likely to vary in their scope and coverage of the decisions involved in the care pathway. All of the guidelines addressed lifestyle modification as an integral part of the management of hypertension and as a first line treatment in mild hypertension, and made similar recommendations for weight loss, limiting alcohol and sodium intake, regular exercise and smoking cessation. These recommendations relied upon a similar and extensive body of work, either directly using the original data in a systematic review or indirectly sourcing the study via a previously published systematic review. Future guidelines should feature the following, transparent and fully reported: guideline group methods and participation; involvement of stakeholders and sponsors; reporting and use of evidence and linking of recommendations to evidence; and understanding of health care delivery, the policy context and narratives of patient experience. Abbreviations: ACE, angiotensin converting enzyme; ARB, angiotensin receptor blocker; BMI, body mass index; g, gram; kg, kilogram; m, metre; mmHg, millimetres of mercury; mins, minutes. The authors contributed to the development of one of the guidelines reviewed. FC, HOD and JMM wrote the manuscript. ME critically revised the manuscript. FC, JC, FRB, HOD and JMM performed data abstraction. FC and FRB performed literature searches. JMM designed the study. The pre-publication history for this paper can be accessed here:"} +{"text": "A number of growth factors have been implicated in the control of the proliferation of breast cancer cells and some have been reported to mediate the proliferative effects of oestradiol. MCF-7 cells were treated with growth factors in the presence and absence of oestradiol. Oestradiol increased the response of cells to the proliferative effects of epidermal growth factor (EGF), transforming growth factor alpha and basic fibroblast growth factor (bFGF). Platelet derived growth factor (PDGF) and cathepsin D had no effect in the presence or absence of oestradiol while TGF-beta slightly reduced the stimulation by oestradiol. In the absence of oestradiol, there was little effect of combinations of growth factors although the effects of bFGF and IGF-I were additive. In the presence of oestradiol, the effects of bFGF and TGF-alpha were additive whereas bFGF acted as an IGF-I antagonist. Overall, bFGF had the greatest effect on cell proliferation although this was less marked than the previously described effect of the IGFs and insulin. The effects of oestradiol on the sensitivity of cells to the proliferative effects of bFGF did not appear to result from regulation of bFGF receptor expression."} +{"text": "The normal growth, development and function of an organism requires precise and co-ordinated control of gene expression. A major part of this control is exerted by regulating messenger RNA (mRNA) production and involves complex interactions between an array of transcriptionally active proteins and specific regulatory DNA sequences. The combination of such proteins and DNA sequences is specific for a given gene or group of genes in a particular cell type and the proteins regulating the same gene may vary between cell types. In addition, the expression or activity of these regulatory proteins may be modified depending on the state of differentiation of a cell or in response to an external stimulus. Thus, the differentiation of embryonic cells into diverse tissues is achieved and the mature structure and function of the organism is maintained. 
This review focusses on the role of perturbations of these transcriptional controls in neoplasia. Deregulation of transcription may result in the failure to express genes responsible for cellular differentiation, or alternatively, in the transcription of genes involved in cell division, through the inappropriate expression or activation of positively acting transcription factors and nuclear oncogenes. Whether the biochemical abnormalities that lead to the disordered growth and differentiation of a malignant tumour affect cell surface receptors, membrane or cytoplasmic signalling proteins or nuclear transcription factors, the end result is the inappropriate expression of some genes and failure to express others. Current research is starting to elucidate which of the elements of this complicated system are important in neoplasia."} +{"text": "Human epithelial ovarian tumours were successfully established as xenografts in nude mice in 54% of cases. An evaluation of the biological characteristics of tumours propagated in nude mice was carried out and the functions investigated included morphology, growth kinetics, cellular DNA content, cell surface antigen expression and sensitivity to chemotherapy. To allow a more detailed study of the influence of ploidy on biological behaviour, xenografted tumours with varying degrees of aneuploidy and tumours with a common ancestry but different ploidies were also established. Although this is a highly selective model system favouring the growth of biologically aggressive tumours, the xenografts, in general, reflect many of the characteristics of the tumours from which they were derived and are likely to provide a useful model for investigating the biology of ovarian cancer."} +{"text": "To investigate the influence of hormones on the process of cellular differentiation, the growth and differentiation of a transplantable tumour, induced by inoculation of pluripotent mouse embryonal carcinoma (EC) cells, have been studied in athymic nude mice and in normal and hypopituitary Snell dwarf mice. All athymic nude mice developed tumours independent of the number of cells inoculated. In contrast, the tumour percentage in normal Snell mice was lower, showing a dose-dependent increase of takes. In dwarfs, the tumour percentage was comparable with that observed in normal Snell mice. The morphological differentiation of teratocarcinomas grown in athymic nude mice, normal and dwarfed Snell mice shows derivatives of all three germ layers next to undifferentiated embryonal carcinoma cells. This suggests that the pituitary hormonal deficiencies of the dwarfs did not influence tumour induction or the development of the different tissues present in this type of tumour."} +{"text": "A prospective autopsy study of deaths of women who had been diagnosed previously as having cancer of the breast was performed between October 1986 and December 1990. During the study period 28 deaths occurred and nine of these (32%) were attributable directly to breast cancer; a figure similar to that found in our earlier retrospective study. In this study the autopsy findings in both the breast cancer and non-breast cancer deaths were recorded and five cases underwent post-mortem radiological skeletal survey to detect metastases. The findings confirm the role of the post mortem in modern medicine as a method of auditing clinical practice. 
Of particular importance is the finding that the clinical presumption of disseminated breast cancer as a cause of 'terminal' illness in some patients may be misleading and dangerous, possibly denying some patients treatment of potentially remediable conditions by the institution of inappropriate terminal care."} +{"text": "The fluorometric microculture cytotoxicity assay (FMCA) was employed for analysing the effect of different chemotherapeutic drug combinations and their single constituents in 44 cases of acute myelocytic leukaemia (AML). A large heterogeneity with respect to cell kill was observed for all combinations tested, the interactions ranging from antagonistic to synergistic in terms of the multiplicative concept for drug interactions. However, an 'additive' model provided a significantly better fit of the data compared to the effect of the most active single agent of the combination (Dmax) for several common antileukaemic drug combinations. When the two interaction models were related to treatment outcome, 38% of the non-responders showed preference for the additive model whereas the corresponding figure for responders was 80%. Overall, in 248 of 290 (85%) tests performed with drug combinations, there was an agreement between the effect of the combination and that of the most active single component. Direct comparison of Dmax and the combination for correlation with clinical outcome demonstrated only minor differences in the ability to predict drug resistance. The results show that the FMCA appears to report drug interactions in samples from patients with AML in accordance with clinical experience. Furthermore, testing single agents as a substitute for drug combinations may be adequate for detection of clinical drug resistance to combination therapy in AML."} +{"text": "The FAMMM syndrome consists of the familial occurrence of cutaneous malignant melanoma and atypical nevi (dysplastic nevi), and is inherited as an autosomal dominant trait. Conflicting results have been reported on the question of whether the syndrome includes increased susceptibility to non-melanoma cancers. We have studied cancer of all anatomic sites and histologies in nine FAMMM families which were ascertained in a pigmented lesions clinic in the Netherlands. We evaluated two hypotheses: that the number of systemic cancers observed in the families was excessive, compared to expected incidence, based on Dutch incidence data, and that there was variation (or heterogeneity) among families in the frequency of systemic cancer. A significant excess of systemic cancer was observed. Significant heterogeneity was also found among the families; three of the nine families had marked excess in numbers of systemic cancers, and the remaining families had normal numbers of cancers among the known FAMMM gene carriers and their first degree relatives. Thus, we provide evidence of increased susceptibility to systemic cancer occurring in conjunction with the FAMMM syndrome in a subset of this resource."} +{"text": "The common failure of health systems to ensure adequate and sufficient supplies of injection devices may have a negative impact on injection safety. We conducted an assessment in April 2001 to determine to what extent an increase in safe injection practices between 1995 and 2000 was related to the increased access to injection devices because of a new essential medicine policy in Burkina Faso. We reviewed outcomes of the new medicine policy implemented in 1995. 
In April 2001, a retrospective programme review assessed the situation between 1995 and 2000. We visited 52 health care facilities where injections had been observed during a 2000 injection safety assessment and their adjacent operational public pharmaceutical depots. Data collection included structured observations of available injection devices and an estimation of the proportion of prescriptions including at least one injection. We interviewed wholesaler managers at national and regional levels on the supply of injection devices to public health facilities. Fifty of 52 (96%) health care facilities were equipped with a pharmaceutical depot selling syringes and needles, 37 (74%) of which had been established between 1995 and 2000. Of 50 pharmaceutical depots, 96% had single-use 5 ml syringes available. At all facilities, patients were buying syringes and needles from the depot for their injections prescribed at the dispensary. While injection devices were available in greater quantities, the proportion of prescriptions including at least one injection remained stable between 1995 (26.5%) and 2000 (23.8%). The implementation of pharmaceutical depots next to public health care facilities increased geographical access to essential medicines and basic supplies, including syringes and needles, contributing substantially to safer injection practices in the absence of increased use of therapeutic injections. Injections are one of the most common medical procedures, with an estimated 16 thousand million injections administered each year in developing and transitional countries, most of which are for therapeutic purposes. The common failure of health systems to ensure adequate and sufficient supplies of injection devices may have a negative impact on injection safety. The WHO model list of essential medicines made no mention of the need to supply injection devices in quantities that matched supplies of essential injectable medicines, although 44% of active ingredients were mentioned in injectable form. Burkina Faso is a good setting to examine how access to injection devices impacts on injection safety. In 1995, an injection safety assessment indicated that (1) sterile injection devices were used for each injection in 80% of the urban health care facilities, 60% of provincial facilities and 11% of the rural facilities and (2) up to 48% of health care facilities visited reported insufficient quantities of injection devices available. We conducted a cross-sectional survey in 2001 to assess retrospectively the situation between 1995 and 2000. Data collection included: (1) structured observations of injection devices available in the health facility, (2) reviews of the registers to estimate the proportion of prescriptions including at least one injection during the months of June between 1995 and 2000, (3) interviews of health care workers using standardized questionnaires, (4) interviews of operational public pharmaceutical depot managers using standardized questionnaires, (5) structured observations of 5 ml syringes with needle and 15 basic essential medicines available in the depot (chosen as sentinel indicators of availability and good stock management) and (6) structured interviews of district wholesaler managers. We recorded the origin and brand name of injection devices observed and the sale prices of syringes and needles in the pharmaceutical depots. Seven teams of two investigators collected the information. 
All teams standardized their data collection procedure before the fieldwork through training and a field visit. We interviewed the manager of the national wholesaler, who provided figures estimating the number of injection devices sold annually in the public health care sector and fixed retail prices. We converted the retail prices of syringes and needles in the public sector set by the Ministry of Health into US$ according to the exchange rate in use in May 2001 (1 US$ for 750 Francs CFA). We used the standard set of medicines and basic consumables given by non-governmental organizations (NGOs), with the Ministry of Health's approval, to open a pharmaceutical depot in Burkina Faso to estimate the proportion of the essential medicines expenditure used to procure injection devices. This standard package was representative of the typical consumption of a depot and covered the needs of a population of 10 000 persons for three to six months in Burkina Faso. Finally, we reviewed the major changes in the new national essential medicines policy, particularly related to access to medicines and basic supplies, including the number of operational pharmaceutical depots set up throughout the country annually and the regulation of medicine retail prices in the public sector. We entered and analysed data using version 1.2d of the Sphinx plus software. Proportions were calculated using the number of health care facilities visited (52), the number of pharmaceutical depots or the number of health care workers interviewed as the denominator, as appropriate. The Ministry of Health approved the protocol and provided an introduction letter for the visit of health care facilities. We met each regional director prior to field visits in each area. However, there was no communication with the health care facilities prior to the arrival of the field workers. We informed participants about the purpose of the assessment and about their right to refuse. Participation was voluntary for all interviewed. When injections were about to occur with non-sterile devices, field workers were asked to tactfully interrupt the procedure. All information was collected confidentially using codes. Geographical access to injection devices improved substantially between 1995 and 2000. Of the 52 health care facilities visited in April 2001, 50 (96%) were equipped with a public sector pharmaceutical depot selling syringes and needles directly to patients. Of these, 37 (74%) had been established between 1995 and 2000. While single-use injection devices should be made available in every health care facility, they should also be made available in a way that ensures equity. In Burkina Faso, the poorest part of the population may not be able to assume the financial burden of single-use injection devices according to the Bamako Initiative. The use of cost recovery schemes could decrease service utilization by the general population and subsequently expose the poorest to unsafe injection practices and reuse of single-use injection devices. The generalized use of single-use injection devices in preventive and curative services has not led to major side effects in Burkina Faso. Excessive availability of injectable medicines can increase irrational use of injections and restricted access to injectable medications is associated with a reduction of injection overuse. 
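As a minimal illustration of the simple calculations described in the methods above (price conversion at the stated May 2001 rate of 750 Francs CFA per US$, and proportions computed against the appropriate denominator), the sketch below tabulates a few figures. The 52-facility denominator, the 50-depot count and the exchange rate come from the text; the function names and the example syringe price are assumptions for illustration only:

```python
# Sketch of the survey tabulations described above (hypothetical inputs except where noted).

CFA_PER_USD = 750  # exchange rate in use in May 2001, as stated in the text

def cfa_to_usd(price_cfa):
    """Convert a retail price in Francs CFA to US$."""
    return price_cfa / CFA_PER_USD

def proportion(count, denominator):
    """Express a count as a percentage of the appropriate denominator."""
    return 100.0 * count / denominator

facilities_visited = 52      # stated denominator for facility-level indicators
facilities_with_depot = 50   # reported result (96%)
syringe_price_cfa = 100      # hypothetical retail price

print(f"Facilities with a depot: {proportion(facilities_with_depot, facilities_visited):.0f}%")
print(f"Syringe price: US$ {cfa_to_usd(syringe_price_cfa):.2f}")
```

Prescription-level indicators would use the number of prescriptions reviewed as the denominator instead, as the methods note.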
However, increased access to injection devices is not the only explanation that may account for the improvement of injection practices in Burkina Faso. Increased awareness among the population regarding risks of transmission of pathogens, including HIV, through unsafe injections may have played a role. This programme review suffers from three main limitations. First, the 1995 and 2000 assessments used different methodologies and the 1995 assessment did not use standard WHO methods. One element of the strategy to ensure injection safety is the continuous availability of sufficient quantities of injection devices in health-care facilities. In Burkina Faso, establishing pharmaceutical depots next to health care facilities through the national policy of essential medicines increased access to safe injection devices and contributed substantially to safer injection practices along with other factors, including an increased consumer demand for safe injection devices. The better access to single-use injection devices was not paralleled by an increase in injection prescriptions. However, health care waste management policies need to address the increased amount of sharps waste generated. The author(s) declare that they have no competing interests. SL, who was the principal investigator and writer of the article, shared with JT the conception of the protocol and study design. PS participated in the formulation of the assessment tool, provided comments on the assessment tool and the writing of the study. YH wrote the initial terms of reference of the study, participated in the analysis of the data and in the writing of the paper. KH supervised all aspects of the investigations and of the writing. The pre-publication history for this paper can be accessed here:"} +{"text": "Some earlier studies based on relatively small data sets have suggested that the month of diagnosis affects survival of breast cancer patients. This phenomenon has been suggested to be attributable to daylight-related hormonal factors. Factors related to the holidays of both the medical personnel and the women themselves might also provide the explanation. In this study we assessed the effect of the month of diagnosis on the survival of 32,807 female breast cancer patients diagnosed in Finland in 1956-1985. Our results indicate that the month of diagnosis is a significant prognostic factor after adjusting for age at diagnosis, period of diagnosis, and stage at diagnosis. The adjusted relative excess risk of death was highest among those diagnosed in July and August, and lowest in March and November, the difference between the lowest and highest risk being 18%. Since colorectal cancer should not have any daylight-related hormone dependent risk determinants, a control cohort of 12,950 women with a diagnosis of colorectal cancer in the same calendar period was studied in a similar way. The survival pattern by month of diagnosis among the colorectal cancer patients was similar to that among breast cancer patients, indicating that general factors associated with the health behaviour of women and the health services (such as holidays) rather than biological factors may cause seasonal variations in survival of cancer patients."} +{"text": "The antigenic phenotype, ultrastructure and bone resorbing ability of mononuclear and multinucleated giant cells of four giant cell tumour of tendon sheath (GCTTS) lesions were assessed. 
Both the giant cells and the mononuclear cells exhibited the antigenic phenotype of cells of the monocyte/macrophage lineage. The giant cells, unlike osteoclasts, did not respond morphologically to calcitonin and showed ultrastructural and immunophenotypic features of macrophage polykaryons. However, like osteoclasts, the giant cells showed direct evidence of resorption pit formation on bone slices. This indicates that the GCTTS is composed of cells of histiocytic differentiation with the giant and mononuclear cell components expressing a similar antigenic phenotype. Bone resorption by macrophage polykaryons shows that this is not a unique defining characteristic of osteoclasts. Qualitative differences in the degree and pattern of bone resorption by macrophage polykaryons distinguish it from that of osteoclasts and may underlie the clinical behaviour of osteolytic lesions."} +{"text": "We describe a modified access technique for the proximal (open) part of single stage hybrid exclusion of aneurysm of the aortic arch. Three patients had a bifurcated Dacron graft for the innominate and left subclavian arteries and an additional end-to-side anastomosis of the left common carotid artery on the limb to the left subclavian artery. With our modification, access to the left subclavian artery is by a left subclavicular incision and creation of an anterior tunnel via the left thoracic outlet from the origin of the left subclavian artery along its anatomical course to the subclavicular plane. Advantages and disadvantages of this technique in relation to anatomy and pathology are discussed. A number of techniques and variations thereof, including two-stage and single-stage approaches, are currently employed in surgery for the aortic arch. Preoperative imaging with contrast thorax computed tomography is required to visualize adequately the degree of calcification of the ascending aorta and the vascular anatomy of the arch, with particular attention to the degree of calcification around the origins of head and neck vessels. The standard preparation and draping of the patients for aortic arch surgery has been previously well described. Our approach is illustrated in the accompanying figure. Once proximal and distal control is achieved on the ascending aorta and the three arch vessels with colour-coded silastic slings, 5,000 international units of heparin are given intravenously. The proximal anastomosis is fashioned following application of a side-biting clamp, using continuous 4.0 polypropylene sutures reinforced with Teflon felt. The innominate artery (IA) anastomosis follows with 5.0 polypropylene suture and partial clamping of the vessel. Subsequently, whilst flow in the IA resumes, the LSC anastomosis (primary left top end) is similarly constructed after feeding the 10 mm left limb of the graft by way of a Roberts clamp and vascular tape through a tunnel leading from the sternotomy, anterior to the origin of the LSC by the thoracic outlet, to the left subclavicular incision. Care is taken for the graft not to be compressed or otherwise distorted through the tunnel with temporary approximation of the sternal edges whilst fashioning the anastomosis, while important neighbouring structures are safeguarded. The proximal part of the LCC is ligated with heavy silk, a vascular clamp is applied to the distal part and the LCC is divided. 
The last distal anastomosis (secondary left top end) is that of the distal LCC on the antero-medial aspect of the 10 mm left limb of the graft to the LSC, with the same suture technique and tapering. The other two proximal parts of the arch vessels are ligated with heavy silk ties to avoid endo-leak and the proximal anastomosis is marked by heavy radio-opaque clips to orientate the deployment of the endoaortic graft, which is next inserted by the vascular surgeon and interventional radiologist in order to obliterate the distal part of the aneurysm. The endovascular part of the operation has also been previously well described. With small differences in the minutiae of the operation, we have applied this technique in three patients, avoiding those with Marfan syndrome (see conclusion). The outcome was favourable on each occasion. Previously described advantages of the hybrid technique for exclusion of aortic aneurysm are avoidance or reduction in global cerebral ischemia time, avoidance of cerebral perfusion or cerebroplegia and limitation of primary aorto-graft anastomosis to one proximal and two distal. Concerns in relation to the endovascular stent exclusion have been expressed regarding endoleak and long-term aortic arch stability. In the technique presented in this article, two further advantages may be ascribed: (1) facile access to the LSC in obese patients of short stature or barrel chest (or both), where the artery lies deep and posterior in the thoracic inlet and where its approach through the sternotomy is problematic, including risks of uncontrollable bleeding and injury to the left recurrent laryngeal nerve (it is for this particular patient somatotype that we recommend this technique); and (2) optimisation of distal anastomoses when significant discrepancies exist in diameter between the limbs of Dacron grafts and head and neck vessels. Potential disadvantages of our technique include: morbidity from the additional incision and, perhaps most importantly, compromise of the left limb of the graft by distortion in the 'tunnel'. Similarly, the longer length of the left limb of the graft increases the possibility of kinking or distortion. Translocation of the LCC and injury from the side-biting clamp on the innominate are other potential risks. We do not intend to apply this technique to patients with Marfan's somatotype, in whom traction on the arch adequately exposes the proximal LSC through the sternotomy. For them, warm arch surgery with cerebral perfusion is an established alternative. Lastly, we note the need for multidisciplinary collaboration between vascular surgeons, cardiothoracic surgeons and interventional radiologists. The author(s) declare that they have no competing interests. AP drafted the manuscript, CR and AC assisted in writing and creating the figures, NC and TA developed the technique and co-authored the manuscript."} +{"text": "We appreciate the letter from Alessio and Lucchini concerning the number and variety of toxicants able to affect serum prolactin levels. Reflecting on the wide variability of the currently available data, we would like to make two additional points. The first point concerns the usefulness of serum prolactin as a potential indicator of neurotoxicity for populations at risk. This biomarker indeed appears to be influenced by a large number of both organic and inorganic chemicals, which have seemingly little in common in terms of mechanistic action. 
Moreover, one chemical\u2014cadmium, for example\u2014can have a biphasic dose-dependent effect on serum prolactin. Prolactin secretion is also under the control of other neurotransmitter systems, such as GABA [as demonstrated by the hyperprolactinemia developed by GABA-B1 knock-out mice] and glycine [studied in glycine-related knock-out mice]. Another important issue to keep in mind concerns the biological significance of all of the modifications we observed in our study. Although the lack of specificity of prolactin reduces the immediate usefulness of these dopaminergic biomarkers, the question of the potential clinical impact of the small but significant changes in terms of neurotoxicity certainly"} +{"text": "In Northern Ghana, a combination of torrential rains coupled with the spilling of the Bagre dam in neighboring Burkina Faso in the past few years has resulted in perennial flooding of communities. This has often led to the National Disaster Management Organization (NADMO), the main disaster responder agency in Ghana, being called upon to act. However, affected communities have never had the opportunity to evaluate the activities of the agency. The aim of this study is therefore to assess the performance of the main responder agency by affected community members to improve on future disaster management. A mixed qualitative design employing a modified form of the community score card methodology and focus group discussions was conducted in the 4 most affected communities during the last floods of 2012 in the Kasena-Nankana West district of the Upper East Region of Northern Ghana. Community members comprising chiefs, elders, assembly members, women groups, physically challenged persons, farmers, traders and youth groups formed a group in each of the four communities. Generation and scoring of evaluative indicators were subsequently performed by each group through the facilitation of trained research assistants. Four focus group discussions (FGDs) were also conducted with the group members in each community to get an in-depth understanding of how the responder agency performed in handling disasters. A total of four community score cards and four focus group discussions were conducted involving 48 community representatives. All four communities identified NADMO as the main responder agency during the last disaster. Indicators such as education/awareness, selection process of beneficiaries, networking/collaboration, timing, quantity of relief items, appropriateness, mode of distribution of relief items, investigation and overall performance of NADMO were generated and scored. The timing of response, quantity and appropriateness of relief items were evaluated as being poor whereas the overall performance of the responder agency was above average. NADMO was identified as the main responder agency during the last disasters with community members identifying education/awareness, selection process of beneficiaries, networking/collaboration, timing of response, quantity of relief items, appropriateness of relief items, mode of distribution of relief items, investigation and overall performance as the main evaluative indicators. 
The overall performance of NADMO was rated to be satisfactory. Keywords: Kasena-Nankana West district, NADMO, community score card, rural Northern Ghana. According to the UN Office for the Coordination of Humanitarian Affairs, the 2007 floods alone killed 22 people, destroyed or completely damaged 11,239 homes, washed away 12,220 hectares of farmland and affected 275,000 people. As the reported number of people affected by disasters has risen over the past decades, so have the expectations placed on responding agencies by donors, the public and affected populations. The Kasena Nankana West district was one of the hardest hit in all the previous disastrous events, resulting in widespread humanitarian effort by government; non-governmental organizations (NGOs) such as the International Federation of the Red Cross and the Inter-NGO Consortium; and individuals. All the humanitarian aid was channeled through the National Disaster Management Organization (NADMO), the main disaster responder agency in Ghana, established by Act 517 of 1996 to manage disasters and similar emergencies in the country. There is currently a scarcity of knowledge concerning the performance of the main disaster responder agency as assessed by community members in Northern Ghana, especially after the disastrous floods of 2007, 2010 and 2012. We therefore conducted this study to assess the performance of the main responder agency at the grassroots level in Northern Ghana and to set the platform for interactive feedback between the responder agency and community members for future disaster management. Study area and design: This study was conducted from May to June 2014 in the four most affected communities of the last floods of 2012 in the Kasena-Nankana West district of the Upper East Region of Northern Ghana. The Kassena-Nankana West District is one of the thirteen districts in the Upper East Region of Ghana. Its population, according to the 2010 Population and Housing Census, is 70,667, representing 6.8 percent of the region\u2019s total population. Males constitute 50.8 percent and females represent 49.2 percent. Seventy-nine percent of the population is rural. The population is engaged in various economic and social activities including subsistence farming, business/trade and rearing of domestic animals, among others. The map of the Kassena-Nankana West District showing the study communities is shown in the accompanying figure. Due to the growing interest in the use of qualitative, especially participatory, techniques in evaluating relief programs, the Community Score Card methodology was chosen. The community score card (CSC) process is a community-based monitoring tool that is a hybrid of the techniques of social audits and citizen report cards. The CSC is an instrument to exact social and public accountability and responsiveness from service providers. By linking service providers to the community, citizens are empowered to provide immediate feedback to service providers. The CSC process uses the \u201ccommunity\u201d as its unit of analysis, and is focused on monitoring at the local/facility level. It can therefore facilitate the monitoring and performance evaluation of services, projects and even government administrative units by the community themselves. Since it is a grassroots process, it is also more likely to be of use in a rural setting. 
The CSC process involves four basic components: an input tracking scorecard, a community generated performance scorecard, a self-evaluation scorecard by service providers and an interface meeting between users and providers to provide respective feedback and generate a mutually agreed reform agenda. For the purpose of this study, we employed only the community generated performance scorecard component, as this involves the generation of evaluative indicators by community members themselves. Selection of study participants: At the community level, members comprising chiefs, elders, assembly members, women groups, physically challenged persons, farmers, traders and youth groups formed a group in each of the four communities. This target population comprised those actually chosen by community members to represent them in the communities\u2019 dealings with the responder agency during the disasters. Each group was considered to be heterogeneous since the members were from different backgrounds, ages and genders. All participants were, however, aged 18 years and above. Data collection process and analysis: Two experienced research assistants were trained on the community score card approach and the study protocol and procedures, to assist the investigators in data collection. In all, four visits were made to each community by the research team. The first visit was to explain the purpose of the study to the chiefs, elders and opinion leaders of the communities. During the second community visit, a community meeting was held where the purpose of the study and scorecard methodology (with much emphasis on indicator generation) was explained to community members, after which each community chose its own group members. On the third visit, the field research team facilitated an indicator-generation process where groups generated and defined a set of indicators. The definitions of indicators were given entirely by group members themselves, after which group representatives from each group met to consolidate and harmonize these definitions under the guidance of the research team. In the final community visit, group members met to score each indicator. A circle was drawn on the ground and divided into different sections by one of the participants in each group. Items like sticks, sandals and stones were used to denote an indicator in each of the divided sections of the circle. Participants each cast a small stone into the divisions of the circle (figure 2) to represent their individual grading/score of an indicator, as arrived at by consensus. The groups further settled on a collective scoring measure or rating of 1-100 to score some indicators. These scoring processes, in the form of a scoring matrix, were then summarized into a score card and presented as tables. Focus group discussions (FGDs) were also conducted with the group members in each community during the third community visit to get an in-depth understanding of how the responder agency performed in handling disasters. FGDs were recorded on audio tape and later transcribed verbatim from the local language (Kassem) into English as text for analysis. 
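The stone-casting and 1-100 rating steps described above lend themselves to a simple tally. The sketch below is a hypothetical illustration of how such a community score card matrix might be tabulated: the indicator names follow the study, but the vote counts, the five-section scale and the aggregation rule are assumptions for illustration only, not the study's data or procedure.

```python
# Hypothetical tabulation of a community score card (not the study's actual data).
# Each participant casts one stone into a section of the circle; here we assume
# five sections ranked 1 (very poor) to 5 (very good) and convert the mean rank
# to the 1-100 scale that the groups also agreed to use for some indicators.

from collections import Counter

def score_from_stones(stone_sections, n_sections=5):
    """Convert individual stone placements (section numbers) into a 1-100 score."""
    mean_rank = sum(stone_sections) / len(stone_sections)
    return round(100 * mean_rank / n_sections)

# Hypothetical stone placements for one community's group of 12 members.
votes = {
    "Education/awareness": [4, 4, 5, 3, 4, 5, 4, 3, 4, 4, 5, 4],
    "Timing of response": [1, 2, 1, 2, 2, 1, 1, 2, 1, 2, 1, 1],
    "Overall performance": [3, 4, 3, 4, 3, 3, 4, 3, 3, 4, 3, 3],
}

scorecard = {indicator: score_from_stones(v) for indicator, v in votes.items()}
for indicator, score in scorecard.items():
    print(f"{indicator}: {score}/100 (distribution {dict(Counter(votes[indicator]))})")
```

In the study itself the groups reached their ratings by discussion and consensus rather than by a fixed formula, so this mapping is only one plausible way of turning stone counts into a summary score.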
The investigators read through all the transcripts exhaustively to check for inconsistencies, after which coding and categorization were done manually using the conventional content analysis approach as outlined by Hsieh and Shannon. Ethical considerations: Ethical approval was given by the Navrongo Health Research Centre Institutional Review Board (ID: NHRCIRB 183) and the Kasena-Nankana West district assembly and district NADMO office. Permission to carry out this survey at the community level was verbally sought from the chiefs and elders of the respective communities first, after which verbal informed consent was also obtained from individual members of the participating groups. The consent procedure was reviewed and approved by the ethics committee. In total, four community score cards and four focus group discussions were conducted involving 48 community representatives. Males accounted for 56% (27) of the population, with the remaining 44% (21) being females. The average age of participants was 52 years. An average of 12 members formed a group in each of the communities. Findings of both the community score card and focus group discussions are presented together. Community score cards: All four communities identified NADMO as the main responder agency during the last and previous disasters. Community generated indicators included: education/awareness, selection process of beneficiaries, networking/collaboration, timing of response, quantity of relief items, appropriateness of relief items, mode of distribution of relief items, investigation and overall performance of NADMO. The consolidated and harmonized definitions of the indicators are presented in the accompanying table. Focus group discussions: Education/awareness: Two of the communities (Nyangania and Nakong) said they did not receive enough education and sensitization on the flooding situation, resulting in community members being hit hard by the floods. This, they said, also affected their resilience level adversely.\u201cThe local FM station did a good job by constantly announcing and reminding us of when they were going to open the dam from neighboring Burkina Faso. This helped members of the community to prepare adequately and so we were able to adapt to the floods quite well with the help of NADMO\u201d (Group member of Navio community-FGD)\u201cOur community did not receive information or education from any organization. So when the flood came we were not prepared at all and this led to loss of lives and properties of community members\u201d (Group member of Nakong community-FGD) Selection process of beneficiaries: Members of three communities identified this as an indicator, with only one of these communities (Nakong) rating this indicator high.\u201cIn fact we were really very satisfied with how NADMO selected those who were to benefit from the relief items they brought to us\u201d (Group member of Nakong community-FGD)\u201cThe way NADMO selected those who were to benefit from the relief items was poor. 
For example some of the affected people who wrote down their names did not receive any item while some people who didn\u2019t write their names at all were given relief items\u201d (Group member of Kajelo community-FGD) Networking: This indicator was identified by only one community, with members of the group identifying the existence of networking and collaboration between NADMO officials and other stakeholders in the management of the last disasters.\u201cNADMO officials, leaders of some political parties and other organizations who donated relief items to us worked together in deciding those who got those items\u201d (Group member of Navio community-FGD) Timing of response: Whereas three of the communities got some response from the responder agency, the response arrived very late, with the fourth community (Nyangania) not receiving any response at all during the last floods in 2012.\u201cAlthough we eventually got some items such as cups, mattresses, blankets and buckets from NADMO, these items arrived very late. We hope next time there is a disaster such as this, NADMO and any organization which has items to donate, will do that as quickly as possible to help the community recover more quickly\u201d (Group member of Kajelo community-FGD)\u201cFor the last floods that we experienced here, NADMO visited us to write the names of those who were affected through the community elders. However they never came back again and so no single person received any relief item in this community\u201d (Group member of Nyangania community-FGD) Quantity of relief items: The majority of respondents from communities where the responder agency responded opined that the quantity of relief items was generally inadequate.\u201cThe quantity of relief items we received was not enough at all. Imagine after writing down the names of so many affected people only 8 mattresses were distributed to the worst affected areas of Kajelo through the elders of the community\u201d (Group member of Kajelo community-FGD) Appropriateness of relief items: Those communities that got a response from NADMO rated the appropriateness of this response to be inadequate. All the three communities that got some help from NADMO said the relief items they received were not exactly what they needed to cope and recover from the floods.\u201cThough officials of NADMO came to write our names and the items we actually needed, what they eventually brought to us was different and this did not help us that much especially on how we coped and recovered from the floods\u201d (Group member of Navio community-FGD) Mode of distribution of relief items: Only one community (Nakong) evaluated the responder agency using this as an indicator, with group members stating that the mode of distributing relief items was unsatisfactory.\u201cThe unfair manner with which the relief items were distributed to affected members of the community rather brought about some disagreements and conflicts amongst the beneficiaries since some people thought they were most affected and therefore deserved more items. 
Some members of this community have also come to complain to me that the relief items were distributed along partisan lines and not based on merit\u201d (Group member of Nakong community-FGD) Investigation: Group members in only one community (Kajelo) identified community needs assessment by the responder agency as an evaluative indicator, with 50% of the members saying that this was poorly done and the other 50% saying this was not carried out at all.\u201cAlthough NADMO came around to assess the extent of damage by the floods before writing the names of those to benefit from any relief item in my area, this was poorly done as some of the people who were seriously affected didn\u2019t get the right quantity of items to help them deal with the situation\u201d (Group member of Kajelo community-FGD)\u201cNADMO never did any investigation or assessment of the level of damage by the floods in my locality before writing down the names of those affected. And so some of those who were not seriously affected got the same items as those who were seriously hit by the floods\u201d (Group member of Kajelo community-FGD) Overall performance: All four communities evaluated NADMO using this as an indicator. Apart from one community (Kajelo) that rated the overall performance of NADMO to be unsatisfactory, the rest of the communities thought that NADMO performed creditably well in its handling of the recent disasters.\u201cI will say that overall, NADMO performed quite well when you look at how they handled the entire disaster situation\u201d (Group member of Navio community-FGD)\u201cEven though NADMO did not respond to our needs this time round, judging by what they did during the previous floods I will say that their performance was satisfactory\u201d (Group member of Nyangania community-FGD)\u201cI wasn\u2019t happy with the performance of NADMO at all. They were not fair with the way they responded to this particular disaster. Those who really needed the items never got them and for those who eventually received their items got them very late. So how can you say that NADMO did well?\u201d (Group member of Navio community-FGD) The methodology employed in this study can be said to be one of the new emerging participatory techniques used in evaluating relief programs and agencies, especially against the background that the CSC is being used for the first time for this purpose. The Hyogo World Conference on Disaster Reduction in 2005 stressed the importance of information management and exchange, which is meant to provide easy understanding of information on disaster risks and protection options, especially to citizens in high-risk areas, so as to encourage and enable people to take action to reduce risks and build resilience. Selection of beneficiaries during and after disasters can be a very daunting task. In this particular disaster, the majority of the community members leveled a number of allegations against the responder agency. 
Similarly, in a qualitative study to evaluate post-flood disaster response strategies in southern Ghana, some male participants in a focus group discussion also accused the same disaster responder agency of being discriminatory, and actually decided not to violently contest the seeming discrimination they felt so as to sustain the unity in their community. Post-disaster governmental manipulation of response processes has been identified as exacerbating existing inequalities within affected communities, which often tends to provoke angry protests and demonstrations against the institutions in charge. In non-disaster situations, many of the agencies involved in disaster management operate independently of each other. At the community level in a disaster situation, however, this is often not the case, as complexity arises from a variety of elements, systems, processes and actors in a complex network of many interactions between these agencies at various levels, necessary for achieving mutual adjustment and a collective goal. It is usual practice for NADMO to register victims of disasters before resources are mobilized and sent to the community. In one of the study communities, the need to conduct a needs assessment during and after disasters was identified as an evaluative indicator, as community members thought this was crucial, especially concerning the distribution of relief items by the responder agency. This point has also been emphasized by Oaks in an overview of the damage assessment process, where he suggests that, after disasters, the damage assessment process is fundamental to relief and reconstruction as it triggers the beginning of formalized disaster relief and recovery aid. The majority of group members in the affected communities in the Kasena Nankana West district of northern Ghana rated the overall performance of the responder agency to be above average. Although some group members rated certain specific generated indicators low, they thought that on the whole, the responder agency performed creditably well. This finding only emphasizes the fact that community members tend to assess performance on whether certain specific needs are met, and not just on an overall scoring scale of evaluative indicators as is usually found in quantitative studies. However, contrary to the findings of our study, an earlier qualitative study involving the same responder agency found the overall performance of the agency to be poor after the catastrophic 2010 flooding situation in southern Ghana. The observed differences in the results across the four communities can be attributed to the fact that these communities are often affected differently by the perennial floods, with the response from the responder agency being different depending on the level of preparedness by NADMO at the local level and the availability of support from all stakeholders. Limitations: Notwithstanding the strong qualitative methodological approach adopted by this study, the study had some limitations. 
We conducted the study in only four communities, and about two years after the disaster occurred; therefore, the accuracy and precision of the information provided could have been affected, especially as some of the community leaders may have relocated, died or forgotten certain vital events that occurred. Although the composition of the groups was heterogeneous, group members were only elders and representatives of the communities, and therefore their opinions might not necessarily be the views of the ordinary members of the communities who were hit hardest by the floods. This study used a combined qualitative methodological approach of focus group discussions and the community score card in four communities in Northern Ghana to evaluate the performance of the main disaster responder agency. All four communities identified NADMO as the main responder agency during the last disasters, with community members identifying education/awareness, selection process of beneficiaries, networking/collaboration, timing of response, quantity of relief items, appropriateness of relief items, mode of distribution of relief items, investigation and overall performance as the main evaluative indicators. Although the indicators were rated differently by each community, the timing of response and the quantity and appropriateness of relief items were generally rated poor, whereas the overall performance of the responder agency was generally rated satisfactory. Since these communities are often prone and vulnerable to the perennial floods, we recommend that the main responder agency respond more promptly while engaging the other stakeholders in disaster management to ensure that communities get enough of the most appropriate relief items. We also encourage all the stakeholders in disaster management to create, after each disaster, a platform where the disaster responder agency will be evaluated by the affected communities as feedback to improve future disaster management. The authors have declared that no competing interests exist. This study was made possible by the generous support of the Global Disaster Preparedness Center and Response 2 Resilience. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. The views presented in this paper are solely those of the researchers and are completely independent of the funder. All relevant data are within this manuscript. Stephen Apanga. Email: apangastephen@hotmail.com. University for Development Studies, Department of Community Health and Family Medicine, School of Medicine and Health Sciences"} +{"text": "Multidetector computed tomographic angiography (MDCTA) is the new gold standard for diagnostic evaluation of the abdominal and/or mesenteric arteries. It is not invasive and provides a 2D and 3D global cartography of all abdominal arteries, and that with only a limited amount of contrast media. MDCTA allows the optimal diagnosis of single or multiple arterial stenoses and easily analyses sometimes very complex collateral pathways. It constitutes a major advance in planning the arterial visceral safety of major commonly performed abdominal surgical procedures such as aorto-iliac surgery, endovascular aneurysm repair (EVAR), but also complex pancreatic and gastrointestinal or colonic surgery. 
It also allows planning of the optimal strategy for revascularization of the mesenteric system through percutaneous angioplasty, stent placement or surgical bypass. This extensive pictorial review illustrates a large variety of situations which may be found during clinical practice. Single compression or stenosis of each digestive artery, combined and/or complex associations of stenoses and/or compressions of several arteries, secondary complications like aneurysms, and classical but also sometimes unusual patterns of collateralization are richly illustrated. Specific syndromes, comprising the median arcuate ligament syndrome (MALS) and Leriche\u2019s syndrome, are also discussed. Today the panel of imaging techniques available to investigate abdominal vessels is varied, comprising duplex and color flow Doppler sonography, angio-CT, angio-MRI and conventional angiography (CA). The choice of one technique over another depends on its availability, the clinical circumstances and the degree of emergency, but also on the patient\u2019s physical characteristics, age and renal function. Multidetector computed tomographic angiography (MDCTA) today represents the best choice when the purpose of the examination is to quickly and easily obtain an optimal global and non-invasive analysis and/or cartography of the abdominal and visceral vessels with all their interconnections. In this context MDCTA represents the new gold standard and outperforms CA. MDCTA has major advantages over CA. First, it is a very fast and non-invasive technique which only requires a limited amount of intravenous contrast media to provide high quality 2D and 3D anatomic images. Second, a global cartography of all arteries may be obtained simultaneously, an opportunity that cannot be met during CA. Multiple successive selective or semi-selective invasive catheterizations would be necessary to obtain such a global analysis of all abdominal vessels during CA. MDCTA has only a few limitations. The detection threshold for tiny arteries is lower than with CA, and the opacification is static when compared with the dynamic aspect of CA, which can better detect the direction of blood flow, particularly in the collateral pathways. The fact that all arteries are opacified simultaneously can make spatial analysis difficult. Fortunately, secondary high quality selective reconstructions are able to dissociate and analyse the complexity of arterial superposition and anatomic overlays. This nevertheless requires good skill in the use of 3D post-processing programs. MDCTA offers the opportunity to quickly diagnose or rule out mesenteric stenosis or compression in patients presenting with suggestive abdominal pain or angina. It also constitutes an essential advance for planning the arterial safety of many major abdominal surgical procedures, comprising classical aorto-iliac surgery, endovascular aneurysm repair (EVAR), and complex pancreatic and gastrointestinal or colonic surgery, but also for optimally planning revascularization of the mesenteric system through percutaneous angioplasty (PTA), stent placement or surgical bypass. During embryogenesis, most segmental arteries regress and only three dominant major mesenteric visceral arteries persist: the celiac trunk (CTK), the superior mesenteric artery (SMA) and the inferior mesenteric artery (IMA). 
FortunaPrevious studies have suggested that a significant stenosis of at least two of the three main digestive arteries must occur and/or that a complete occlusion of the CTK must precede the occlusion of the rest of the mesenteric arteries before the occurrence of clinical symptoms of mesenteric angina . Therefo1This extensive pictorial review illustrates a large variety of situations which may be found during clinical practise. Single compression or stenosis of each digestive artery, combined and/or complex associations of stenosis and/or compressions of several arteries, secondary complications like aneurysms and classical but also sometimes unusual patterns of collateralizations are richly illustrated. Specific syndromes such as the median arcuate ligament syndrome (MALS) and the Leriche\u2019s syndrome that are also discussed.The incidence of hemodynamically significant CTK stenosis in an asymptomatic population has been evaluated to 7.3% and the most important etiology is extrinsic compression by the median arcuate ligament (MAL) of the diaphragm. Atherosclerosis remains only a rather minor cause of stenosis of the CTK . The MALThe Dunbar syndrome induced by the celiac trunk compression syndrome (CTCS) also called the median arcuate ligament syndrome (MALS) is a potential clinical entity characterized by a triad comprising epigastric pain, weight loss and postprandial pain with nausea and vomiting 678910. T678967911Atypical manifestations of the MALS are extremely variable ranging from exercise related pain and diarrhea in elite athletes to dramatic rupture of a secondary pancreaticoduodenal artery aneurysm (PDAA) developing on collaterals 8. Nevert6The physiopathology of MALS is also controversial: are the symptoms really caused by ischemia of the gut itself or by neurogenic compression or ischemia of the celiac ganglion ?During MDCTA the MALS exhibits a characteristic hooked appearance of the focal narrowing of the CTK and the deformation increases during expiration Figures and 2. TIt is our opinion that the presence of these collaterals is crucial for the diagnosis of a significant or high degree of compression of the CTK even if the deformation of the CTK is moderate. Indeed, most of the abdominal MDCT are performed in deep inspiration which classically reduces and thus underestimates the compression syndrome. The presence of collaterals helps to diagnose this underestimation. The principle should be: no collateralisation, no MALS except if an additional stenosis on the SMA is present compromising the development of collaterals. Nevertheless, the presence of these collaterals proves that the compression is significant but also demonstrates a good substitution.As confirmed by several studies the CTCS of MALS is better appreciated during expiration Figure especial791112Due to the permanent mechanical extrinsic compression experienced by the CTK in high-grade MALS only a short relief of symptoms followed by early restenosis is classically found after percutaneous angioplasty (PTA) with stenting. The traumatic effects of PTA on the intima and media may weaken the vessel which may become more susceptible to collapse. In addition, stent deployment may be compromised by slippage, mechanical fatigue or crushing secondary to permanent external compression Figure . 
TherefoFor the same reasons of chronic mechanical compression and major cyclic variation of flow during breathing the potential role of MAL in the development of CTK aneurysm and/or CTK dissection has also been reported Figure 515].15.515].Occasionally, in addition to the CTK the constricting effect of MAL may also manifest on the SMA and rarely on the renal arteries (RAs) 910. If t9Nevertheless, this type of double compression has only been infrequently reported. Only four of the 51 patients with MAL syndrome reported by Reilly had both CTK and SMA compressions . Other i121718CCMT is a very uncommon variant Figures and 12 a131920A patient with a CCMT is potentially deprived of some of the protective benefits of dual origin vessels with multiple mutually supporting anastomoses. Occlusion or proximal stenosis affecting a common CCMT can have serious ischemic consequences to the intestine because the classical redundancy between the CA and SMA circulation is absent. Moreover, any disorder involving the common CCMT or an extensive surgery (for example a Wipple\u2019s procedure on the pancreas) may have dramatic consequences on the major abdominal viscera 22.Stenosis, occlusion and compression of the CTK by the MAL is known to be one of the main factors for increased collateral circulation and secondarily also to the formation of about 50 to 60% of all pancreatico artery aneurysms (PDAAs) 52425.24When compression occurs on the CTK compressed by MAL during expiration the CTK territory becomes abruptly supplied by reverse flow from the SMA through the PDAs causing acute hemodynamic stress in these arteries and promoting aneurysmal formation Figures and 13 2526. The25232526The IMA is the smallest of the three main mesenteric arteries and supplies the distal transverse, descending and sigmoid colonic segments as well as the rectum . It rece29The two classical most critical area of watershed of the arterial supply of the left colon are the Griffith\u2019s point in the area of the splenic flexure of the colon were the left branch of the middle colonic artery (branch of the SMA) joins with the ascending branch of the left colonic artery (branch of the IMA) and the Sudeck\u2019s point where collateral communication is found between the last sigmoidal artery and the superior rectal artery, both branches of the IMA.The collateral pathway between the SMA and the AMI is not always clearly designed. There is a real lack of consensus in the terminology used in the literature causing much confusion. 
Many denominations are used comprising the arch of Riolan (AR) (considered as a vague historic term to discard) Figures and 17, 29A pragmatic description consists to delineate three concentric different pathways running from the central mesenteric root to its periphery along the colon and comprising:Centrally, the inconstant arch of Riolan (AR), joining the middle and left colonic artery and running very close to the inferior mesenteric vein.In an intermediate location, the mesentery, the pathway observed in cases of severe stenosis or occlusion of the SMA and known as the Meandering Artery (MeA) Figures and 18.Finally, in the extreme periphery of the mesentery, the marginal arcade of Drummond (MAD), which is classically not tortuous and runs along the left descending colon Figures and 17.Many authors consider that the anatomic AR and the MeA of Moskowitz are the same entity, the MeA being the term classically describing the tortuous hypertrophic expansion of the AR in the presence of stenosis or obstruction of the SMA or of the IMA. The expansion of the MeA is greater in presence of stenosis or occlusion of the SMA or in the presence of combined stenosis of SMA and CTK than in isolated stenosis of the IMA because the blood flow load is greater for the SMA than for the IMA .In the presence of a large MeA it is thus recommended to surgeons to abandon or to seriously reconsider their plan for a major resection of the left colon . SecondaAIOD is most frequently a progressive chronic disease resulting of massive deposition of atheromatosis at the level of the aortic bifurcation and on the segment of aorta proximal from this bifurcation. Infrequent causes of AIOD are acute occlusion due to embolus or occlusion related to vasculitis . NeverthThe clinical Leriche\u2019s syndrome (LS) includes the typical triad of symptoms of claudication, impotence and decreased peripheral pulses . In thesAIOD has different types of collateral pathways which can be classified as visceral-systemic (VS), systemic-systemic (SS) and visceral-visceral (VV) .The VV pathways is provided by the CTK, SMA and IMA. This collateral pathway in which the digestives arteries are implicated becomes more prevalent in cases of AIOD extending more proximally along the aorta and thus approaching the level of the emergence of the RAs 34.The SS collateral pathway comprise subcostal, intercostal and lumbar arteries representing the afferent vessels Figures and 18. Another SS collateral system is provided by the sacral plexus where the lateral sacral arteries coming from the IIA and the median sacral artery coming from the aorta just above the aortic bifurcation develop collaterals .The internal thoracic artery (ITA) , and the superior and inferior epigastric arteries (EA) also constitute another SS collateral pathway of primordial importance for the lower limbs Figures and 18. 34Finally, the VV pathway is also constituted by a cross pelvic collateral system constituted by communication between the superior, middle and inferior rectal arteries (ReAs) on both sides Figures and 18.In each patient presenting with AIOD, the final collateral pattern is individually constituted by a mix of all these above described pathways. It essentially depends of the level of the occlusion: above the IMA, at the level of the IMA or below the IMA. 
The more proximally the aorta is occluded \u2013 from the iliac bifurcation (or below) to just under the level of emergence of the renal arteries \u2013 the more important is the recruitment of the digestive arteries, and collateralization successively implicates the IMA, the SMA and finally the CTK itself. The FA is a branch of the hepatic artery (HA) that develops anastomoses with the vertical pathway constituted by the EA and the ITA. The FA may thus constitute a VS collateral pathway (in case of Leriche syndrome \u2013 LS) or a SV collateral pathway (in case of stenosis of the digestive arteries). The FA is essentially known by interventional radiologists who perform selective hepatic angiography. To our knowledge, there are no studies reporting the prevalence of FA detection during dynamic CT studies in healthy patients. Nevertheless, our opinion, based on our personal experience with 64-row multidetector CT, is that this prevalence remains extremely low. In a previous report, we described two cases in which it was likely that the FA was visualized because it was enlarged by a compensatory phenomenon related to the critical state of the digestive arteries of the patients. One had a CCMT and the other had severe compression of the CTK and of the SMA by the MAL. Left and right inferior phrenic arteries (IPAs) are other rare arteries capable of collateralization with the HA or with the CTK (Figure). In our experience, we also recently found two cases of atheromatous stenosis of the splenic artery (SA) fortuitously diagnosed through the presence of unusual collateralisation. The first case was collateralized by a tortuous gastroepiploic artery (GEA), a situation which has only exceptionally been described in rare cases of absence or occlusion of the SA (Figure). Through this extensive pictorial review, we have illustrated a large diversity of complex abdominal situations implicating the digestive arteries and/or the systemic abdominal arteries, the two arterial systems being frequently interconnected. These situations are not uncommon in clinical practice. We confirm and demonstrate that multidetector computed tomographic angiography (MDCTA) can be very effective not only to diagnose a single arterial stenosis or compression but also to dissect combined and/or complex associations of multiple stenoses and/or compressions of several arteries. MDCTA also appears unavoidable for mapping sometimes very complex networks of collateralization. MDCTA is confirmed as the gold standard for the diagnostic evaluation of abdominal and/or mesenteric arterial diseases. It represents an essential advance for planning the safety of the digestive vascularisation before many major abdominal surgical procedures and for planning revascularization of the mesenteric arterial system itself."} +{"text": "The formation of the ventricles of the heart involves numerous carefully regulated temporal events, including the initial specification and deployment of ventricular progenitors, subsequent growth and maturation of the ventricles through \u201cballooning\u201d of chamber myocardium, the emergence of trabeculations, the generation of the compact myocardium, and the formation of the interventricular septum. Several genes have been identified through studies on mouse knockout and transgenic models, which have contributed to our understanding of the molecular events governing these developmental processes. 
Interpretation of these studies highlights the fact that even the smallest perturbation at any stage of ventricular development may lead to cardiac malformations that result in either early embryonic mortality or a manifestation of congenital heart disease."} +{"text": "Nowadays, there is a constantly increasing concern regarding the mutagenic and carcinogenic potential of a variety of harmful environmental factors to which humans are exposed in their natural and anthropogenic environment. These factors exert their hazardous potential in humans' personal and occupational environment that constitute part of the anthropogenic environment. It is well known that genetic damage due to these factors has dramatic implications for human health. Since most of the environmental genotoxic factors induce arrest or delay in cell cycle progression, the conventional analysis of chromosomes at metaphase may underestimate their genotoxic potential. Premature Chromosome Condensation (PCC) induced either by means of cell fusion or specific chemicals, enables the microscopic visualization of interphase chromosomes whose morphology depends on the cell cycle stage, as well as the analysis of structural and numerical aberrations at the G1 and G2 phases of the cell cycle. The PCC has been successfully used in problems involving cell cycle analysis, diagnosis and prognosis of human leukaemia, assessment of interphase chromosome malformations resulting from exposure to radiation or chemicals, as well as elucidation of the mechanisms underlying the conversion of DNA damage into chromosomal damage. In this report, particular emphasis is given to the advantages of the PCC methodology used as an alternative to conventional metaphase analysis in answering questions in the fields of radiobiology, biological dosimetry, toxicogenetics, clinical cytogenetics and experimental therapeutics."} +{"text": "There are errors in the eighth and ninth sentences of the Abstract. The correct sentences are: Although there were age differences and sex differences in emotion regulation, age and sex were not significantly associated with proneness to shame and guilt. The positive relations with maladaptive emotion regulation underscore the dysfunctional nature of shame-proneness."} +{"text": "Traditional medicine is an important component of the health care system of most developing countries. However, indigenous knowledge about herbal medicines of many Ghanaian cultures has not yet been investigated. The aim of the present study was to document herbal medicines used by traditional healers to treat and manage human diseases and ailments by some communities living in Ghana. The study was conducted in eight communities in southern Ghana. Data were collected from 45 healers using ethnobotanical questionnaire and voucher specimens were collected. A total of 52 species of plants belonging to 28 plant families were reportedly used for treatment and management of 42 diseases and ailments. Medicinal plants were commonly harvested from the wild and degraded lowland areas in the morning from loamy soil. Herbal medicines were prepared in the form of decoctions (67%) and infusions (33%). Oral administration of the herbals was most (77%) common route of administration whereas the least used routes were nasal (1%) and rectal (2%). 
The results of the study show that herbal medicines are used for treatment and management of both common and specialized human diseases and that factors of place and time are considered important during harvesting of plants for treatments. According to the World Health Organization (WHO) about 80% of developing countries depend on traditional medicines for their primary health care needs . In Ghan materia medica of the Fanti, Ga, and Ashanti has changed considerably over time . SpecieLeaves formed 57% of the herbal medicines documented. Other plant parts used were fruits, barks, and whole plants . Leaves https://www.cdc.gov/globalhealth/countries/ghana/). Knowledge of frequently reported diseases and/ailments can be an indication of health care issues in a region and it should be of great importance to health care organizations and government.Herbal medicines were reportedly used for treatment and management of 42 diseases and ailments. Two or more herbal medicines were reportedly used for treatment and management of 17 the diseases and/ailments, and the herbals were most commonly used for treatment and management of stroke, fevers, and diabetes . The herAbout 43% of the species of plants were reportedly used in treatment of a single disease whereas the rest of the plants (57%) were involved in treatment of more than one disease/ailment. Medicinal plants are commonly used in the management of different ailments because they contain a variety of bioactive agents such as alkaloids and terpenoids , 19. It Almost all the healers (98%) interviewed harvested plant materials from lowland areas . About hThe time of harvesting medicinal plants was investigated with respect to time of day (24\u2009hr. duration) and season (dry versus wet season) of the year. About 57% of healers harvested plants in the morning followed by 28.9% who collected plants anytime of the day and then 4.4% that collected plants in the afternoon. None of the healers collected plant materials in the night and about 9% considered time of the day unimportant when harvesting plant materials for herbal preparations. Plants materials were harvested in the morning because of the importance of healthcare to healers as they collected plants first thing in the day. About 28% of the healers harvested plants anytime of the day, which might suggest that healers also collected plants as when they are needed. According to the healers they collected plants any time of the day because they sometimes needed to treat emergency cases. There is scientific evidence to support the fact that yield of some plant chemical constituents differs within a time span of 24 hours due to the interconversions of compounds . AccordiThe harvested plant materials were used in preparation of 81 herbal medicines mainly in the form of decoctions (67%) and infusions (33%). Although it is documented that a variety of methods have been used for preparation of herbal medicines the methods of decoctions and infusions have been the widely reported . HoweveThe routes of administration of the remedies reported in this study were oral, rectal, topical, and nasal . HoweverIn this paper, we have documented the current state of knowledge and use of herbal medicines for treatment and management of human diseases among some communities living in southern Ghana. This documentation contributes primary data to the wealth of data stored on the indigenous knowledge on medicinal plants from Ghana. 
The findings from the study suggest that healers are consulted for herbal medicines for the treatment and management of both common and specialized diseases and ailments. The extent to which the people living in the area consult the healers is unknown but it is important to understanding this in order to determine the proper role of herbal medicine in the health care system of the people. It is also essential to scientifically evaluate the specific uses of the medicinal plants reported in the current study using plant materials from the area through pharmacological, toxicological, and clinical studies in order to ensure the safety of the people consuming the medicines and for possible drug development. The results of the study have also confirmed that factors of time and place are given considerations during harvesting of plant materials by healers. Further studies on the methods and quantities of plant materials that are harvested for treatment will improve our understanding on the impacts of harvesting of medicinal plants on biodiversity conservation in the area."} +{"text": "The atlas is designed to be accessed using SHIVA, a free Java application. The atlas is extensible and can contain other types of information including anatomical, physiological, and functional descriptors. It can also be linked to online resources and references. This digital atlas provides a framework to place various data types, such as gene expression and cell migration data, within the normal three-dimensional anatomy of the developing quail embryo. This provides a method for the analysis and examination of the spatial relationships among the different types of information within the context of the entire embryo.We present an archetypal set of three-dimensional digital atlases of the quail embryo based on microscopic magnetic resonance imaging (\u03bcMRI). The atlases are composed of three modules: (1) images of fixed"} +{"text": "Implantable devices in the brain can reestablish functional connectivity in neural circuits disrupted in major depression, obsessive-compulsive disorder, and other psychiatric disorders. The most frequently used device of this type is deep brain stimulation (DBS) Benabid, Other neInstead of enabling the formation of new memories, could a device implanted in the brain erase memories that have been encoded, consolidated, and reconsolidated? DBS can modulate dysfunctional circuits mediating sensorimotor, cognitive, and emotional processing. Theoretically, this or a similar stimulating technique could selectively erase a pathological fear memory by inactivating neurons and excitatory synapses constituting the memory trace. This could disrupt reconsolidation of the memory stored as information in the brain. Erasing fear memories identified as the source of anxiety, panic, phobia, and post-traumatic stress disorder (PTSD) could be an effective therapy when they fail to respond to other treatments Pitman, .content associated with the emotional representation of the memory. These are distinct from different forms of amnesia, which is a disorder of memory capacity (Kopelman, Neuroscientists can use PET or fMRI to measure changes in neural activity and synaptic connectivity following manipulation of neural circuits associated with different memory systems. Neuroimaging could confirm erasure of a memory trace based on these changes. 
Hypothetically, electrical stimulation from an implantable device like DBS could reduce activity in neurons constituting the emotionally charged memory trace underlying conditioned responses to aversive stimuli and enable a subject to unlearn pathological behavior. In the psychiatric disorders I have mentioned, the emotional representation, or trace, of a memory of a disturbing or traumatizing event remains embedded in the brain beyond any short-term adaptive function. This disrupts the memory network regulating fear and results in pathological thought and behavior (Parsons and Ressler, The source of these disorders is hyperactivity in the basolateral amygdala of the fear memory network. This occurs when a negative emotional memory of a fearful experience or a series of such experiences forms and solidifies in the brain through the processes of consolidation and reconsolidation. One theory of fear memory consolidation following a traumatic experience is that the memory embeds in the amygdala from the release of noradrenaline in response to the subject's stress reaction to the experience. The memory becomes more firmly embedded in this brain region from behavior in which the subject learns to link an aversive stimulus with a conditioned stimulus. Memories must be updated constantly to remain stored in the brain. Updating memories consists in reconsolidating them after retrieval. This process serves an adaptive purpose by enabling the subject to make information in the brain relevant to current and future circumstances (Nader et al., A different and potentially more effective way of blocking reconsolidation would be to eradicate the emotional representation of the memory. In principle, certain drugs could block reconsolidation of the fear memory by blocking protein synthesis in the basolateral amygdala where the memory trace was located. During retrieval, infusion of a protein synthesis inhibitor such as anisomycin in this brain region might disrupt reconsolidation and effectively erase the memory trace (Schacter and Loftus, A major challenge to pharmacological erasure of memory is the selectivity of this intervention. Many memories of fearful experiences are adaptive and critical to survival because they enable us to recognize and react appropriately to threatening situations. Not all fear memories are pathological or maladaptive. Because of the distributed and non-discriminating effects of psychotropic drugs, a drug intended to erase the trace could have unintended expanding effects and impair normal functions of the fear memory system. A drug infused in the brain could alter both targeted and non-targeted nuclei in the limbic system and alter normal emotional processing. This could introduce a new psychopathology.The more focused action of deep brain electrical stimulation of the neurons and synapses within the memory trace might be able to overcome the problem of selectivity. Direct stimulation of these constituents of the trace at the critical frequency could neutralize the effects of LTP, CREB, and protein synthesis on the persistence of the memory. It could remove any obstacles to destabilizing and removing the memory as stored information in the brain. In addition, by precisely targeting the neurons within the trace, electrical stimulation could reduce the risk of expanding effects on adaptive fear memories and positive emotional memories. 
Combined with its neuromodulating action, the ability of DBS to probe circuits and nodes within these circuits would enable investigators to monitor its effects on the critical neurons and synapses (Lozano and Lipsman, The selectivity problem is a problem about localization. The main question is whether a particular maladaptive fear memory would be localized enough for DBS to erase it while leaving adaptive fear memories intact. One hypothesis that could support the idea of selectively erasing a maladaptive fear memory is that functional imaging could reveal higher levels of activation in the nuclei associated with that memory when a subject was asked to recall the traumatic experience as the source of it (Pitman et al., enhance certain types of memory in these studies is in contrast to the goal of using DBS to erase other types of memory. Enhancing memory would require activating metabolically underactive nuclei associated with LTP, CREB, and protein synthesis. Erasing memory would require inhibiting metabolically overactive nuclei associated with these same processes. The second mechanism would be similar, in some respects, to the modulating effects of high-frequency DBS on metabolically overactive circuits in treatment-resistant depression (Mayberg et al., Even if imaging showed localized metabolic overactivation in nuclei associated with the memory, there are questions about whether DBS could inactivate it. Although many studies have confirmed the neuromodulating effects of DBS, the mechanisms of action of the technique are not well understood. DBS increased glucose metabolism in the entorhinal cortex of a group of epilepsy patients and enhanced learning and spatial memory (Sulthana et al., Although direct stimulation of the hippocampus and medial temporal lobes can disrupt episodic memory (Merkow et al., Suppose that investigators could use electrical stimulation from a device implanted in the brain to erase not just pathological fear memories but also less emotionally charged memories of disturbing experiences. If the emotional representation of some of these memories was localized in discrete limbic nuclei, the critical neurons, and excitatory synapses could be inactivated and the memory trace erased by DBS, should it be?Remembering mistaken choices and actions can haunt one for years and make one indecisive when choosing between alternative courses of action in the present and future. Yet memories of these mistakes and of more emotionally disturbing experiences are necessary for character development and moral growth. They promote this development and growth by enabling the moral emotions of remorse and regret. They also enable us to reflect on our motivational states in forming and executing action plans that promote prudence and moral sensitivity toward others. Disturbing memories can provide constraints on decision-making that are necessary for effective rational and moral agency. Erasing a few disturbing memories might not undermine or impede the development and exercise of these capacities. However, erasing a broader set of memories could weaken agency and have deleterious effects on one's behavior. The mental capacity to integrate both positive and negative episodic memories in a unified autobiography also enables one to construct meaning from them. Deleting more than a critical number of memories could disrupt the psychological connectedness and continuity that constitute personal identity, the experience of persisting through time as the same person Parfit, . 
EpisodiResearch into manipulating the content of memory has been limited to animal models. Psychiatric researchers will have to address many theoretical and technical challenges in moving from animal to first-in-human studies. Functional neuroimaging will be critical in identifying changes in brain activity correlating with a weakened or erased memory trace at neuronal and synaptic levels. The most important question is whether the critical neurons can be altered at a localized, discrete level. The idea of erasing a particular pathological fear memory with DBS or similar implantable electrical stimulation devices is still speculative. Yet it could become a way of treating what are currently treatment-resistant psychiatric disorders of memory content.The author confirms being the sole contributor of this work and approved it for publication.The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "The posterior tibial artery normally arises from tibial-fibular trunk at the popliteal fossa, together with the fibular artery. The classic course of the posterior tibial artery is to run between the triceps surae muscle and muscles of the posterior compartment of the leg before continuing its course posteriorly to the medial malleolus, while the fibular artery runs through the lateral margin of the leg. Studies of both arteries are relevant to the fields of angiology, vascular surgery and plastic surgery. To the best of our knowledge, we report the first case of an anastomosis between the posterior tibial artery and the fibular artery in their distal course. The two arteries joined in an unusual \u201cX\u201d format, before division of the posterior tibial artery into plantar branches. We also provide a literature review of unusual variations and assess the clinical and embryological aspects of both arteries in order to contribute to further investigations regarding these vessels. The femoral artery supplies the thigh, while the popliteal artery supplies the leg and foot.,,The posterior tibial artery (PTA) and the fibular artery (FA) are branches of the tibial-fibular trunk (TFT), located at the popliteal fossa. The TFT usually describes a trajectory of 1 to 8 cm after the popliteal artery emits the anterior tibial artery (ATA). It is usually followed by two veins and the tibial nerve. The PTA runs down the posterior face of the leg, following the posterior surface of the tibia, accompanied by two veins, and is located posteriorly to the triceps surae muscle and anteriorly to the posterior tibialis muscle and the flexor digitorum longus muscle, giving off muscular branches for the posterior compartment.,,,,In the foot, the PTA runs along the medial retromaleolar canal, medially to the calcaneal tendon, emitting malleolar and calcaneal branches before dividing into the lateral and medial plantar arteries. 
It is responsible for the blood supply to the posterior muscles of the leg and the plantar region of the foot.,,-Variants differing from this pattern can be explained by segmental hypoplasia, abnormal fusions or absence.Moreover, with the recent increase in the numbers of diabetic patients, critical limb ischemia due to multiple and large occlusions of the lower limb vessels is becoming more and more common; therefore knowledge of these arteries is needed in order to avoid amputations.We report a previously undescribed anatomic variation of the PTA and FA and present a review of significant anatomic variations and their clinical and embryological features.A 50-year-old male cadaver fixed with a 10% formalin solution (cause of death unknown) was dissected during Anatomy classes. While dissecting the right lower limb, we observed an uncommon relationship between the PTA and the FA along their course . The lefIn this case, the origins of both arteries were as normal, but in the ankle the vessels underwent an \u201cX-shaped\u201d anastomosis, prior to the origin of the plantar branches from the PTA .,,,,,Studies showed that the arteries of the lower limb are derived from the dorsal region of the umbilical artery, which forms the sciatic artery (SA), a branch of the internal iliac artery.,,,,Anatomic variants of these vessels often appear as result of persistent primitive arterial segments, segmental hypoplasia, abnormal fusions, or complete absence; furthermore, these mechanisms can frequently occur in combination.,,,,,,,Variations in the origin and course of the PTA and the FA have often been described after being detected either during dissection of cadaveric specimens or by arteriograms, Doppler exams, and duplex scanning of living subjects. In our analysis, we found that the PTA can be absent, hypoplastic or replaced in its distal portion or replaced altogether by the FA,,,,In cases of hypoplasia or aplasia of the PTA, the FA exhibits compensatory hyperplasia, giving off branches to what would normally be the PTA\u2019s vascular territory.,The FA also provides many fasciocutaneous and musculocutaneous perforators to supply the skin and muscles, albeit they are more common in the distal portion of the FA. These branches exhibit arterial anastomoses with each other in the subcutaneous tissue, and it appears that the second, third and fourth musculocutaneous branches have larger diameters and should be used for fibular flaps.,,,,-Knowledge of these variant patterns is important for evaluation of lower limb arteriograms and also in respect for their clinical and surgical significance for procedures such as vascular grafts, surgical repair, transluminal angioplasty, and embolectomy and for diagnosis of arterial injury.Although rare, true aneurysms of the PTA have been described in the literature and they can often compromise vessels and nerves.The free fibular flap (FFF) has recently come to be preferred for correction of mandible defects because of its low morbidity and the fact that this procedure allows a two-team approach.,,Since the PTA and the FA provide the blood supply to the foot, their distal portions and trajectories should be studied, due to the fact that ankle arthroscopy is becoming a popular procedure to treat arthritis and anatomical variations on this region can increase risk.Traumatic events in the Achilles region require free-tissue transfer. 
A study performed by Vaienti et al.Any type of vascular surgery in this area should be planned preoperatively, since there are many variations of the vessels in this region and some patterns of vascular distribution can contraindicate fibular flaps. Knowledge about the lower limb arteries is extremely important in a number of conditions, since anatomic variations can cause difficulties with diagnostic and surgical procedures."} +{"text": "The dataset contains thermal properties of soil such as thermal conductivity, thermal diffusivity, temperature and specific heat capacity in an agricultural farm within the University of Ibadan, Ibadan, Nigeria. The data were acquired in forty (40) sampling points using thermal analyzer called KD-2 Pro. Soil samples taken at these sampling points were analyzed in the laboratory for their moisture content following the standard reference of American Association of State Highway and Transport Officials (AASHTO) T265. The data were acquired within the first and second weeks in the month of April, 2012. Statistical analyses were performed on the data set to understand the data. The data is made available publicly because thermal properties of soils have significant role in understanding the water retention capacity of soil and could be helpful for proper irrigation water management. Specifications TableValue of the data\u2022The dataset can be used to monitor soil moisture content.\u2022The knowledge of the dataset can help to improve irrigation scheduling in the area.\u2022The knowledge of the irrigation scheduling would help to optimize water usage for improved crop productivity.\u2022The dataset would help farmers to save cost.\u2022The dataset could also be used for academic purposes to understand the applications of thermal properties of soil(s). Several similar Researches to this data article can be found in 1The dataset contains thermal properties of soil and their moisture contain in an agricultural farm within University of Ibadan, Ibadan, Nigeria. These thermal properties include thermal conductivity, thermal diffusivity, temperature and specific heat capacity. The data also contain moisture contents that were measured in the laboratory following the standard reference of American Association of State Highway and Transport Officials (AASHTO) T265 2The understanding of the thermal properties of soil is very important in agricultural science. This is because there is exchange of heat at the soil surface. The availability of the dataset on soil thermal properties would help in the improvements of wider applications of the heat of soil and modelling of the water transport in the soil. The availability of these dataset would also help in the understanding of seed germination and crop yield. Several works have been carried out on the various applications of thermal properties of soil 2.1Pro measurements were conducted. Also, after taking the first measurement, the probe was then allowed to rest for more than 15\u202fmin before taking subsequent readings. This time is called measurement interval, which allows thermal gradients to dissipate (i.e. for equilibration between readings).Field sampling design was conducted prior to the data acquisition on the field and random sampling technique was adopted. The surface of the ground was scooped before measurements to remove the effects of top soil on the acquired data. The thermal sensor was calibrated using a white plastic cylinder (a two-hole Delrin block). 
This was done with a view to testing the functionality of the sensor. Forty sample points were considered for thermal property measurements, while soil samples were collected at these points to determine their moisture contents in the laboratory. These soil samples were put in polythene bags and stored in a cool, dry place, after which the necessary laboratory analyses were carried out on them. Moisture contents were determined in the laboratory following the standard reference of AASHTO T265. The detailed descriptive statistics, which provide basic statistical information about the measured thermal properties and moisture contents, are presented in"} +{"text": "This review gives an overview of morphological and functional characteristics in the human prostate. It will focus on the current knowledge about transient receptor potential (TRP) channels expressed in the human prostate, and their putative role in normal physiology and prostate carcinogenesis. Controversial data regarding the expression pattern and the potential impact of TRP channels in prostate function, and their involvement in prostate cancer and other prostate diseases, will be discussed."} +{"text": "The parasitic twin was connected to the sternum of the autosite by a tract of cartilage. Furthermore, the parasite was sharing the liver with the autosite. The extrahepatic bile duct system of the parasite was separated after ligating it. The main vascular pedicle of the parasite originated from the falciform ligament of the autosite. Epigastric heteropagus twins (EHT) are a rare type of monozygotic monochorionic asymmetrical conjoined twins. The embryological basis of EHT has been debated between the incomplete fission and the fusion theories. It is generally considered to be a product of an error in blastogenesis by incomplete fission of a single zygote post 14 days of fertilization.3 EHTs are generally common in males, accounting for almost 3/4th of all the reported cases. Source of Support: None. Conflict of Interest: None"} +{"text": "The negative symptoms of schizophrenia cause significant distress and impairment. Their treatment is a challenge, with medications having little or no effect, so new treatments are necessary for this condition. The aim of the study was to ascertain the efficacy of tDCS in treating negative symptoms of schizophrenia. This study was designed as a randomized, sham-controlled, double-blinded trial using tDCS for the treatment of negative symptoms of schizophrenia. One hundred patients will be enrolled and submitted to ten tDCS sessions over the left dorsolateral prefrontal cortex and left temporo-parietal junction, over 5 consecutive days, with 2 mA of current. Participants were assessed with clinical and neuropsychological tests before and after the intervention. The primary outcome was change (over time and across groups) in the scores of the Negative Subscale of the Positive and Negative Syndrome Scale (PANSS). Our secondary outcomes consist of other scales such as the SANSS, Calgary and the AHRS. With 70% of the sample analyzed, active tDCS was significantly superior to sham at the 6-week endpoint on the negative subscale of the PANSS. The total PANSS and the hallucinations scale showed no differences between the two groups. At the other assessment time points, no differences were found between the sham and active groups. The results of our study suggest a potential role of tDCS for the treatment of negative symptoms of schizophrenia. The effect size was small. 
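For context on the interim result reported above, the short sketch below shows one conventional way such a between-group comparison can be quantified: Cohen's d computed on PANSS negative-subscale change scores (endpoint minus baseline) for the active versus sham arms. This is an illustrative sketch only, not the trial's actual analysis code, and the change scores in it are invented placeholders rather than study data.

```python
# Illustrative only: Cohen's d on hypothetical PANSS negative-subscale
# change scores; these numbers are NOT data from the tDCS trial above.
from statistics import mean, stdev

def cohens_d(group_a: list[float], group_b: list[float]) -> float:
    """Cohen's d using a pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * stdev(group_a) ** 2 +
                  (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5

active_change = [-4.0, -3.0, -5.0, -2.0, -6.0]  # hypothetical change scores (active tDCS)
sham_change = [-1.0, -2.0, 0.0, -3.0, -1.0]     # hypothetical change scores (sham)
print(f"Cohen's d = {cohens_d(active_change, sham_change):.2f}")
```

With this sign convention a negative change score denotes symptom improvement, so a negative d favors the active arm (greater symptom reduction than sham).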
This is the biggest study with tDCS for treating negative symptoms of schizophrenia until now. At the meeting all the data (100 patients) will have been analyzed, and this could change our preliminary results."} +{"text": "The immunopathology of chronic obstructive pulmonary disease (COPD) is based on the innate and adaptive inflammatory immune responses to the chronic inhalation of cigarette smoke. In the last quarter of a century, the analysis of specimens obtained from the lower airways of COPD patients compared with those from a control group of age-matched smokers with normal lung function has provided novel insights into the potential pathogenetic role of the different cells of the innate and acquired immune responses and their pro/anti-inflammatory mediators and intracellular signaling pathways, contributing to a better knowledge of the immunopathology of COPD both during its stable phase and during its exacerbations. This has also provided a scientific rationale for new drug discovery and targeting to the lower airways. This review summarises and discusses the immunopathology of COPD patients of different severity, compared with control smokers with normal lung function."} +{"text": "Variations of the radial artery, in both its course and branching pattern in the anatomical snuffbox, are clinically significant for plastic surgeons, cardiologists, and radiologists. Reports on its abnormally high origin and subsequent superficial course have been well documented. Herein, we report an unusual superficial branch of the radial artery given off before its entry into the palm by passing between the two heads of the first dorsal interosseous. It eventually divided into the princeps pollicis and radialis indicis arteries at the first web space of the palm as a unique vascular variation. Apart from this, in the present case, the tendons of extensor digiti minimi and of extensor indicis each divided into two parts. The split tendons of extensor digiti minimi were inserted to the dorsal digital expansion of the digitus minimus. However, the lateral tendon of the split extensor indicis was inserted along with the tendon of extensor digitorum to the index finger and the medial one was inserted along with the tendon of extensor digitorum to the middle finger. An unusual superficial branch of the radial artery on the dorsum of the hand is vulnerable to iatrogenic injury during surgical approaches in the region. Supplementary extensor tendons on the hand are one of the potential causes of tenosynovitis. Accurate and detailed knowledge of the relationships and possible anatomical variations of the branching pattern of the radial artery is vital during reparative surgery in this region. In addition, iatrogenic trauma due to the occurrence of superficial branches may lead to a life-threatening hemorrhage. Inadequate knowledge of the anatomical variations of the arterial pattern may make surgery difficult. The radial artery is the terminal branch of the brachial artery given off in the cubital fossa. Following its origin, it assumes a superficial downward course to the wrist along the radial side of the forearm. 
At the distal part of the forearm it winds around the lateral side of forearm after giving superficial palmar branch and enters into the anatomical snuff box, before reaching proximal end of the first interosseous space where it passes between the two heads of first dorsal interosseous muscle and between two heads of adductor pollicis to form the deep palmar arch of the palm .In the anatomical snuff box it gives dorsal carpal branch for the formation of dorsal carpal arch. Before its entry to the palm, on the dorsum of the hand it gives slender branches to the lateral side of the dorsum of the thumb and first dorsal metacarpal artery. In the palm it generally gives princeps pollicis artery and radialis indicis branches before it contributes to deep palmar arch.Normally, the extensor muscles have single tendon on the dorsum of hand except for extensor digitorum. The tendon of extensor digitorum divides distally into 4 tendons which pass in a common synovial sheath together with tendon of extensor indicis and diverge on the dorsum of the hand to get inserted into the medial four digits .The extensor tendons of the hand are distally interconnected by oblique interconnections known as juncturae tendinum. Knowledge of detailed pattern of the tendons and their oblique interconnections is important in hand assessment and other reconstructive procedures .Combined variations of radial artery and extensor tendons are rare. We discuss the clinical importance of concurrent variations of radial artery and the extensor tendons in this report. Anatomical knowledge on variant branching pattern of radial artery as well as abnormal tendon disposition on the dorsum of the hand is of considerable importance during several clinical approaches and therapeutic practices.During the dissection of the left upper limb of an adult male cadaver aged approximately 70 years, we found concurrent variations of radial artery and tendons of long extensors of the digits. The radial artery, after passing through the anatomical snuff box, gave a large superficial branch before continuing into the palm by passing between the two heads of first dorsal interosseous. The superficial branch ran distally over the first dorsal interosseous and terminated by dividing into princeps pollicis and radialis indicis arteries at the first web space . This vaThe extensor digitorum divided into four tendons, which had normal insertion as described in the Anatomy textbooks. Tendon of extensor digiti minimi divided into two equal parts as it passed deep to the extensor retinaculum. Both the tendons were inserted to the dorsal digital expansion of the digitus minimus. Tendon of extensor indicis divided into two parts. The lateral part was inserted along with the tendon of extensor digitorum to the index finger and the medial one was inserted along with the tendon of extensor digitorum to the middle finger. The tendons going to the medial three digits were connected to each other through broad tendinous interconnections .Awareness of radial arterial variation in its origin and branching pattern has great importance in various clinical fields and basic medical studies. High origin of radial artery from brachial artery is one among its frequent anatomical variations as its prevalence is reported to be 14.26% in cadaveric studies and 9.75% in angiographic studies [The superficial arteries often pose iatrogenic traumatic complications as they are mistaken for veins during intravenous injections , 8. 
AwarExtensor muscles of the forearm have a relatively consistent architecture. However, at times they may have prominent anatomic variations in their tendons, particularly on the ulnar side of the hand . Tanaka EDM tendon triggering is a less common disease when compared with other tendons triggering. Its impingement on the extensor retinaculum aggravates the condition into double triggering . Zilber Kocabiyik et al. reported bilateral extensor tendon variations which included tripled tendon for middle finger from the extensor digitorum, double tendons to ring finger, and extensor digiti minimi having duplicate tendon with an abnormal communicating tendon between extensor digitorum tendon to ring finger and extensor digiti minimi muscle tendons. While these variant tendon dispositions were observed in one hand, on the other hand abductor pollicis tendon itself trifurcated and one of these tendons also attached to tendon of abductor pollicis brevis as an additional slip .Seradge et al. reported EDM in which two tendon slips attached to the little finger and one to the ring finger's metacarpophalangeal joint . Extensovon Schroeder and Botte have reported two infrequent variant forms of extensor indicis proprius, namely, the extensor medii proprius (EMP) which gets inserted into the long finger and the extensor indicis et medii communis (EIMC) that splits into two tendons which get inserted into both index and long fingers . The varOccurrence of hand extensor tendon variations cannot be ignored as surgeons must bear in mind the existence of these variations when performing common tendon transfers. It is also important to the orthopedic surgeons to ascertain the existence of additional tendons during the treatment of tenosynovitis.The variations observed in our case with an anomalous superficial branch of radial artery at anatomical snuff box may be of significance in clinical point of view especially to vascular and plastic surgeons. The superficial arteries of the upper extremity may often be mistaken for veins, which may become a basis for intra-arterial injections instead of intended intravenous injections. Anatomic variation of extensor compartment of the hand may contribute to the development of tenosynovitis and limit the usefulness in the tendon transfer."} +{"text": "Ethiopia experienced several WPV importations with a total of 10 WPV1 cases confirmed during the 2013 outbreak alone before it is closed in 2015. We evaluated supplemental immunization activities (SIAs), including lessons learned for their effect on the routine immunization program during the 2013 polio outbreak in Somali regional state.We used descriptive study to review documents and analyse routine health information system reports from the polio outbreak affected Somali regional state.All data and technical reports of the 15 rounds of polio SIAs from June 2013 through June 2015 and routine immunization coverages for DPT-Hib-HepB 3 and measles were observed. More than 93% of the SIAs were having administrative coverage above 95%. The trend of routine immunization for the two antigens, over the five years (2011 through 2015) did not show a consistent pattern against the number of SIAs. Documentations showed qualitative positive impacts of the SIAs strengthening the routine immunization during all courses of the campaigns.The quantitative impact of polio SIAs on routine immunization remained not so impressive in this study. 
Clear planning, data consistencies and completeness issues need to be cleared for the impact assessment in quantitative terms, in polio legacy planning as well as for the introduction of injectable polio vaccine through the routine immunization. The global burden of poliomyelitis has decreased by \u2265 99% since the time the World Health Assembly endorsed the initiative for global polio eradication in 1988. The burden of the wild poliovirus (WPV) has shown significant reduction in Africa with the last case confirmed in July 2014 in Nigeria. Ethiopia joined the polio eradication effort in 1996 and was able to interrupt endemic WPV transmission in 2001. However, the country experienced several WPV importations where between 2004 and 2008 a total of 44 cases were confirmed associated with six different importations. The last importation of WPV was in August 2013 following the outbreak that was declared in the Horn of Africa in April 2013. A total of 10 WPV 1 cases were confirmed with onset of paralysis of the last case on 5 January 2014.Since the late 1980s, use of Supplemental Immunization Activities (SIAs) has been a key strategy of the Global Polio Eradication Initiative (GPEI). Polio SIAs are mass vaccination campaigns that aim to administer additional doses of Oral Poliovirus Vaccine (OPV) to each child , regardless of their vaccination history. In doing so, SIAs attempt to remedy the limited ability of routine immunization services to reach at-risk children with the number of OPV doses required to generate immunity. Several rounds of polio SIAs were conducted in Ethiopia following each outbreak as per the recommendation of the polio Global Advisory Committee. Like the majority of polio outbreaks which were controlled within 6 months with OPV, it took about five months in the country to limit further new incidence of the WPV; however the outbreak response extended to 2 years between June 2013 and June 2015 and the country responded with 15 SIAs out of which four were National Immunization Days (NIDs) . ResourcPrior use of routine immunization services and compliance with the routine OPV schedule has strong positive association with SIA participation . On the Study area: administratively Ethiopia is divided into nine Regional States and two City Administrations. One of the regional states is Somali region and it comprises nine zones. The region shares porous border with Somalia and Kenya which puts it at increased risk of importation of WPV. The total population of the region was estimated to be 5,446,968 in 2015 and the <5 years of age comprised 10.1% of the total population .Study design: we conducted a descriptive study design using campaign and routine health management information system (HMIS) data from June 2013 to June 2015 to explore lessons from the SIAs and observe trends in the routine immunization coverage in relation with the number of polio SIAs conducted in Somali region of Ethiopia.Method of sampling and recruitment: we purposefully selected Somali region of the country as it was epi-center for the polio outbreak during mid-2013. In addition, all rounds of the polio SIAs included the region in the subsequent campaigns until closure of the outbreak in June 2015. 
Except during the four NIDs conducted from 2013 through 2015, the remaining regions had intermittent polio response campaigns during the 2 years outbreak period.Procedures: we obtained technical reports of all the SIAs conducted as part of the outbreak response in Somali region from June 2013 to June 2015 as well as quarterly program review documents. The technical reports of each of the polio SIAs rounds were reviewed for coverage of the SIAs, lessons learned and best practices out of the campaigns as to their contribution to strengthen the routine immunization were narrated and summarized in tables and graphs. We also compiled the periodic HMIS reports from the region to see the trends in uptake of the routine immunization coverage on key indicators of over the same 2 years period.Statistical analyses: after the data compilation was completed, we checked the data manually for completeness and consistency. We entered the data into an MS Excel spreadsheet and checked for major outliers, and then analysed for proportions, percentages and trends. We used frequency distribution and percentages used to describe the variables, and compiled results and presented those using tables, graphs, and narrations.As shown in Routine immunization service provisions were integrated with the SIAs starting from the May 2014 round with all antigens. During their house to house visits, vaccinators sought for unvaccinated or under vaccinated children based on the national schedule for all the routine immunization antigens and referred them to nearby temporary fixed stations established for vaccination. As shown in The trend of routine immunization for DPT-Hib-HepB3 and measles coverage, over the five years (2011 through 2015) did not show a consistent pattern; however the trend in coverage based on the two indicators was parallel to each other . Throughimmunization: review of the technical reports of the 15 SIA rounds and minutes of the quarterly joint program review meetings of Expanded Program on Immunization (EPI) and surveillance for contributions of the campaigns to support the routine immunization services are stated below.Pre-implementation: the planning exercises helped to develop or update the social maps in the areas of the campaign. Intense participation of the community at the lower level during microplan development increased the community engagement and program sustainability. Bottom-up microplans were developed or updated in each of the campaigns which were claimed to have boosted the local capacity. During each of the rounds, basic and practical trainings to vaccinators was given where routine immunization and surveillance components were included. All campaign preparation trainings included the following major topics: recording and reporting techniques, vaccine management and distribution, identification of hard to reach and high risk populations, inter personal communication, monitoring and supervision which all boost the routine immunization in general. Deployment of technical assistants with skills in routine immunization and campaigns also helped in coaching and mentoring to the health workers. In majority of the reports of the SIA rounds, immunization task force committees were established up to district level with technical sub working groups. 
The launching ceremonies for the SIAs and advocacy visits to different level political and community leaders, nongovernmental organizations and other potential partners have helped to gain support and political commitment for immunization as a whole. On the campaign mobilization events, key immunization messages were passed. Functionality and capacity of refrigerators, inventory of cold boxes and vaccine carriers, temperature recording, pattern and placement of routine EPI vaccines were assessed at all levels. In some rural health facilities where electricity supply was not available, kerosene was distributed to start up the refrigerators and maintain the cold chain. In most places, ice pack production was initiated days before the actual implementation of the campaign date. Additionally, in places where nearby health facilities were not available, identification of vaccine distribution points were carried out. In areas where shortage of vaccine carriers was noticed, re-distribution of supplies was made from some health facilities.Intra-campaign: inter sectoral collaboration among different sectors and partners with mix of technical capacities were increased. School involvement in the campaign helped the partnership by motivating teachers and school administrators for vaccine preventable diseases (VPD). Close supervision and monitoring of the campaigns were also done at all levels which supported monitoring of the routine immunization as well.Post campaign: in order to evaluate quality of SIAs and take immediate action in low performing areas rapid convenience surveys were conducted where the number of children with zero dozes were monitored and shared for subsequent inclusion in the routine immunization programme. At the end of each SIAs round, regional review meetings were organized with participation of all level supervisors. Strengths and weaknesses were discussed and action points taken for the subsequent round. The routine immunization strengthening efforts were discussed during the review periods.We found that the administrative coverages in almost all (93.3%) of the 15 polio SIA rounds in Somali region after WPV importations met the cut off for a high quality campaign as supported by the Rapid convenience survey findings as well. Despite this fact the number of zero dose children in both age groups of 0-11 months and 12-59 months of age did not show satisfactory decrease in number along the course of subsequent campaigns. 
We also found from the campaign coverage trends that lessons and strategies from previous rounds of campaigns were not well documented or not implemented in subsequent campaigns, which would have been reflected in an increase of campaign quality/coverage.We also found that a good number of children gained access to routine immunization services integrated with the SIAs, as service providers visited the community and linked eligible children to prearranged temporary fixed vaccination posts; however, the proportion of children accessed and tracked back to the immunization service could not be measured, as there were no targets set for vaccination with routine vaccines before the vaccinators went for the activities.Despite the reported positive qualitative impacts of the campaigns on the political commitment and health workers' knowledge and skill on the routine immunization, the impact of the repeated number of polio SIAs on routine immunization coverages in the region could not be conclusively determined in this study, as there was no clear pattern in immunization coverage for the key indicators of DPT-Hib-HepB 3 and measles vaccines. This finding is consistent with the study conducted in seven countries of South Asia and sub-Saharan Africa that assessed impacts of polio eradication activities on key health system functions, including routine immunization, using mixed methods data.Despite the major financial and technical investments on immunization in the region, the fact that the study area was already disadvantaged in terms of infrastructure and other facilities due to scattered geographic settlement, pastoralist nature, social insecurity and others could have masked the positive effects of the SIAs on routine immunization in the region, with a high chance of persistently missing immunization , 10. In conclusion, the SIA response should have been more focused on system strengthening; although polio eradication activities can provide support for routine immunization as part of primary health care, their quantitative impact on routine immunization remained not so impressive in this study. As seen in other studies, the absence of periodic documentation and tracking of the effects of the polio SIAs on RI and the health system might have concealed the anticipated positive impacts of the campaigns on routine immunization; increased commitment to scaling up best practices could lead to significant positive impacts on the routine system . 
AdditioUse of Supplemental Immunization Activities (SIAs) and strengthening of routine immunization have both been key strategies of the Polio Eradication Initiative;Several rounds of polio SIAs were conducted in Ethiopia following each outbreak as per the recommendation with additional intention to strengthen the routine immunization by identifying never-vaccinated children and linking to further completion of their immunization with other infant antigens as well;The different studies conducted elsewhere have a mixed outcome of effects of the polio SIAs on routine immunization.This study tries to summarize the qualitative impact of the rounds of polio SIAs in the outbreak focus, Somali region, during the 2013 outbreak period through review of different campaign technical reports and review meeting proceedings;Documentation on quantitative impact of polio SIAs appear to be inexistent or patchy in Ethiopia; this study tries to assess impact of the polio SIAs on coverages with regard to DTP3 and measles coverages in routine immunization;This study also triggers further statistical analysis to seek associations and statistical significant relations between the polio SIAs and on the routine immunization uptake, the dependent variable.Authors declared they have no competing interests. The views expressed in the perspective articles are those of the authors alone and do not necessarily represent the views, decisions or policies of the institutions with which they are affiliated and the position of World Health Organization."} +{"text": "WHO pharmacovigilance indicators have been recommended as a useful tool towards improving pharmacovigilance activities. Nigeria with a myriad of medicines related issues is encouraging the growth of pharmacovigilance at peripheral centres. This study evaluated the status of pharmacovigilance in tertiary hospitals in the South-South zone of Nigeria with a view towards improving the pharmacovigilance system in the zone.A cross-sectional descriptive survey was conducted in six randomly selected tertiary hospitals in the South-South zone of the country. The data was collected using the WHO core pharmacovigilance indicators. The language of assessment was phrased and adapted in this study for use in a tertiary hospital setting. Data is presented quantitatively and qualitatively.A total of six hospitals were visited and all institutions had a pharmacovigilance centre, only three could however be described as functional or partially functional. Only one centre had a financial provision for pharmacovigilance activities. Of note was the absence of the national adverse drug reaction reporting form in one of the hospitals. The number of adverse drug reaction reports found in the databases of the centres ranged from none to 26 for the previous year and only one centre had fully committed their reports to the National Pharmacovigilance Centre. There were few documented medicines related admissions ranging from 0.0985/1000 to 1.67/1000 and poor documentation of pharmacovigilance activities characterised all centres.This study has shown an urgent need to strengthen the pharmacovigilance systems in the South-South zone of Nigeria. Improvement in medical record documentation as well as increased institutionalization of pharmacovigilance may be the first steps to improve pharmacovigilance activities in the tertiary hospitals.The online version of this article (10.1186/s40360-018-0217-2) contains supplementary material, which is available to authorized users. 
Pharmacovigilance in Nigeria commenced in the late 80s and early 90s initially in a tertiary hospital with some preparatory activities at the national level prior to its admission into the WHO program for international drug monitoring (PIDM) in 2004 , 2. It hThe growth of pharmacovigilance in Nigeria has been propelled by a number of factors including the establishment of the regulatory agency by Decree 15 of 1993 (as amended) now cited as Act Cap N1 laws of the Federal Republic of Nigeria 2004, the formulation of the Nigerian National Drug Policy in 2005 . The actPharmacovigilance has a wide scope with increasing product concerns. The main focus in the Nigeria context has been on adverse drug reactions, substandard and falsified medical products , 8\u201310. OReporting of drug safety concerns by health-workers in Nigeria is voluntary and the reasons for under-reporting are partly due to fear of litigation, poor understanding of the subject matter, feeling that the \u201cknown\u201d Adverse Drug Reactions (ADRs) need not be reported, time constraints and cumbersome reporting processes \u201321. LackWHO advocates regional centres as an effective way of enhancing pharmacovigilance activities as obserThe aims of the creation of the zonal centres was to decentralize the activities of the National Pharmacovigilance Centre (NPC), e.g. distribution of ADR forms and collection of the Individual Case Safety Reports (ICSRs) from reporters and perform preliminary evaluation with prompt reporting, also transmission of acknowledgements and feedback information to the reporters and dissemination of information from the national centre to the patients and health care workers. Furthermore, they were created to monitor the progress of pharmacovigilance activities at institutional levels as well as support the training and capacity building for pharmacovigilance in the areas of their jurisdiction . These mCurrently, the assessment of pharmacovigilance had been largely done at the national level using various tools including evaluating the attainment of minimum requirements for a national centre with interviews of focal persons , and recThe status of the pharmacovigilance system in the tertiary centres is presently unknown as the WHO indicators and related metrics for evaluating these centres have just been recently released and therThis study was carried out in the South-South Zone of Nigeria which is located in the coastal region of Nigeria. It comprises six states namely Akwa-Ibom, Bayelsa, Cross Rivers, Delta, Edo and Rivers with a population of 21,014, 655 million persons . Health care professionals in all tiers of hospitals in this zone could send their reports either directly or through the zonal pharmacovigilance centre for onward transmission to the national centre. The South-South zonal pharmacovigilance centre is domiciled in the University of Benin Teaching Hospital, a tertiary hospital for research and learning.In Nigeria, heath care is delivered at three levels: primary, secondary and tertiary. Tertiary care hospitals provide the highest level of care and serve as referral centres for the secondary and primary centres. Furthermore, there are three main types of tertiary centres. Firstly: the teaching hospitals, which provide teaching as well as for research and health care services. 
Secondly: Federal Medical Centres which are mainly for health care services as well as providing residency training in some departments and lastly the specialty hospitals which focuses on particular disease entities of public health importance such as neuro-psychiatric hospitals, orthopaedic hospitals and ophthalmic hospitals among others.This study was directed at the teaching hospitals because they provide the widest access to all patients with an inclusiveness of all cadres of health care workers. In the South-South zone there are eight teaching hospitals, seven are government owned, and one privately owned.University of Benin Teaching Hospital Benin-City, Edo State, (UBTH).Delta State University Teaching Hospital Oghara, Delta State, (DELSUTH).Niger Delta University Teaching Hospital Okolobri, Bayelsa State, (NDUTH).University of Port Harcourt Teaching Hospital, Port Harcourt, Rivers State, (UPTH).University of Uyo Teaching Hospital, Uyo, Akwa- Ibom State, (UUTH).University of Calabar Teaching Hospital, Calabar Cross-River State, (UCTH).Eligibility criteria: teaching hospitals were used to ensure inclusiveness of all clinical disciplines and staff complement. All six states in the zone were represented by a teaching hospital. An institutional approval was required from the Chief Medical Director / Management prior to inclusion in the study. The study was subsequently carried out in 6 tertiary health institutions selected through simple random sampling in the various states namely:Prior to visiting the study sites for data collection, ethical approval was obtained from the research and ethics committee of each of the selected tertiary hospitals. Furthermore, the heads of the institution were contacted for approval and access to the pertinent data. The focal persons in charge of pharmacovigilance in the institution provided answers for the indicator assessments.the background information, structural indicators, process indicators, outcome/impact indicators. The phrasing of the assessment questions was adapted to address the tertiary hospital setting . Furthermore, to compute the duration of hospital stay, the crude estimates of the duration of admission of patients with serious adverse reactions who were hospitalised was calculated from the adverse drug reaction reports obtained for the previous year.The Analysis was both qualitative and quantitative. All hospitals participating in the study were described according to each indicator. The core Structural indicators are qualitative indicators with categorical data analysed descriptively. The presence or absence of the parameter measured was described for each institution.Analysis of the core Process and Outcome Indicators are quantitative indicators reflecting rates of reports and actual numbers. They were calculated using frequencies and absolute numbers as dictated by the indicator. The data was analysed with descriptive statistics using Microsoft excel 2007.All six institutions were visited and the focal Pharmacovigilance persons or committees interviewed following a meeting with the various heads of the institutions. The teaching hospitals in this study are all government owned and serve as referral centres to the primary and secondary tier hospitals. However, they are of varying sizes in terms of bed and staff complement. The demographic characteristics of the institutions at the commencement of the study late January to mid-March 2016 were as follows while a centre neither had copies of the national nor local forms available. 
There were no standard forms available which addressed the subset of assessment questions covering the scope of pharmacovigilance in all of the centers was carried out and completed in UBTH in the five years preceding the analysis as a form of active surveillance. There were limited numbers of reports on ADRs, medication errors, lack of therapeutic effectiveness etc. in most of the centers. Documentation of feedback and causality assessment carried out on reports in the centers was equally poor in this study modified the ADR form showing their own hospital logo and domiciling the ADR form to their setting. This showed the willingness of the centre to improve patient safety through a sense of ownership. The inclusion of health facilities in the Nigerian national pharmacovigilance policy was to increase their participation in the pharmacovigilance activities . The stuThe processes and outcomes were however poor in all the facilities probably due to lack of awareness of measuring indices to monitor and evaluate pharmacovigilance. Again, the pharmacovigilance system in this setting is still in their infancy and the requisite culture to ensure effective operations yet to be established. However, it was noted that a cohort event monitoring of antimalarials (artemisinin-based combination therapy) was conducted in UBTH as a part of a national program. This active surveillance of medicines used in a disease of public health importance is useful in obtaining better insights into the safety and tolerability pattern in our setting . The neeThe poor record keeping in all the facilities also made computations of the process and outcomes indicators difficult to achieve. The documentation of medicine related events especially adverse drug reactions were equally poor in this study, this contributed to lack of data even in hospitals where the international coding of diseases was been done. This is not different from what has been reported in other studies about under-recognition of adverse drug reactions and drug related events , 34. It In the utilization of the WHO pharmacovigilance indicators, it is evident that the scope of reportable incidents by the facilities have been broadened and it is hoped that with the implementation framework of the Nigerian national pharmacovigilance policy, there would be a wider dissemination of the roles that tertiary hospitals are to play in the promotion of pharmacovigilance. The WHO pharmacovigilance indicators would be useful in assessing other tertiary hospitals as it would enable the hospital management develop a strategy towards improving patient safety through pharmacovigilance. It may also help identify areas that need urgent intervention or modification in the health information system management of the tertiary hospitals especially since it is recommended that the indicators be reapplied as needed in the facilities.The WHO indicators have proven to be quite useful in this assessment. However, absence of trained pharmacovigilance personnel hindered the provision of results for the pharmacovigilance process indicators in the centers. Of note is the limitation of the structural pharmacovigilance indicators to fully capture the functionality of the pharmacovigilance system. Furthermore, the overall poor documentation in all centers limited the derivation of the indicators. Again the derivation of the outcome/impact indicator required in-depth survey which young pharmacovigilance systems are unable to execute. 
There might be a need to develop a scoring system to quantify the indices thus highlighting the deficiencies in numerical terms.This study has shown an urgent need to strengthen the pharmacovigilance systems in the South-South zone of Nigeria. The WHO pharmacovigilance indicators have been proven to be helpful in assessing the pharmacovigilance system in the zone. Improvement in medical record documentation as well as increased institutionalization of pharmacovigilance may be the first steps to improve pharmacovigilance activities in the tertiary hospitals.Additional file 1:Assessment of the state of Pharmacovigilance in the South-South Zone of Nigeria using WHO Pharmacovigilance indicators. WHO Core Pharmacovigilance Indicators including changes made to phrasing of the assessment questions. (PDF 347\u00a0kb)"} +{"text": "The paper is devoted to the study of facial region temperature changes using a simple thermal imaging camera and to the comparison of their time evolution with the pectoral area motion recorded by the MS Kinect depth sensor. The goal of this research is to propose the use of video records as alternative diagnostics of breathing disorders allowing their analysis in the home environment as well. The methods proposed include (i) specific image processing algorithms for detecting facial parts with periodic temperature changes; (ii) computational intelligence tools for analysing the associated videosequences; and (iii) digital filters and spectral estimation tools for processing the depth matrices. Machine learning applied to thermal imaging camera calibration allowed the recognition of its digital information with an accuracy close to 100% for the classification of individual temperature values. The proposed detection of breathing features was used for monitoring of physical activities by the home exercise bike. The results include a decrease of breathing temperature and its frequency after a load, with mean values \u22120.16 \u00b0C/min and \u22120.72 bpm respectively, for the given set of experiments. The proposed methods verify that thermal and depth cameras can be used as additional tools for multimodal detection of breathing patterns. The use of different sensors is essential for the study of many physiological and mental activities, neurological diseases ,2 and moSpecial attention is paid to temperature changes of facial parts affected by emotions, mental activities, or neurological disorders. The study of the temperature distribution over different parts of the face can be used in face and emotion detection ,10,11,12Noninvasive methods of breathing monitoring include electrical impedance tomography, respiratory inductance plethysmography ,21, capnThe respiratory rate is an important indicator for moniThe present paper is devoted to the noninvasive analysis of the breathing rate by facial temperature distribution using a thermal imaging camera ,29 and bA special attention is paid to the adaptive detection of facial thermographic regions. The present paper applies specific methods for their recognition allowing to detect the breathing rate or to recognize facial neurological disorders.The proposed method of respiratory data processing is based on their statistical and numerical analysis using different functional transforms for the fast and robust estimation of desired features ,23. 
The The block diagram of the thermal imaging camera shown in An alternative approach to breathing data acquisition using an MS Kinect depth sensor is presented in The range imaging methods used in depth sensors are based on the specific computational technologies that create the matrices whose elements carrying the information about the distance of the corresponding image component from the sensor ,33. The The sequence of images recorded by the thermal camera were acquired with the changing temperature ranges associated with each videoframe as presented in The accuracy of the thermal camera was tested for a flat surface with equal temperature values and analysis of individual image frames as presented in Facial temperature values are useful for detection of neurological disorders and facial symmetry analysis. The time stability and precision of the thermal camera was tested for a sequence of images recorded with a sampling period of 5 s in the face area with a stable temperature distribution over a short period of time for a healthy individual. The area around the eyes illustrated in videorecording of the face area during a selected time range,extraction of thermographic frames with the selected sampling frequency (of 10 Hz) and a given resolution,automatic determination of temperature ranges in each thermographic frame and the adaptive calibration of each thermal image,detection of the mouth area using the selected number of initial frames with the largest temperature changes and the adaptive update of this ROI for each subsequent thermal image,evaluation of the mean temperature in the specified window of a changing position and size in each frame.The area (ROI) for the time evolution of temperature changes was specified empirically according to the first frame at first, as presented in An example of the time evolution of the mean breathing temperature in the selected mouth region is presented in An alternative analysis of breathing based upon thorax movement was baseM, resulting in a new sequence The analysis of multimodal records The spectral components were then calculated by the discrete Fourier transform forming the sequenceThe detection of breathing features was verified during monitoring of physical activities by the home exercise bike. Each experiment was 40 min long and it included two periods of physical exercises followed by two restful periods with each of them 10 min long. The total number of 25 experiments was performed by one individual in similar home conditions.The resulting mean temperatures recorded by the thermal imaging camera evaluated from the fixed and the adaptively specified and moving temperature regions of interest are presented in Both thermal imaging and motion data can be used for monitoring of the breathing rate during physical activity. The proposed method of adaptive specification of the breathing area and temperature range recognition was applied for the analysis of the evolution of the breathing features recorded during physical exercise and in the following resting time period. This paper proposes a method for the use of thermal and depth sensors to detect breathing features. The presentation includes a description of a machine learning method for the recognition of the temperature ranges as well as an adaptive specification of the mouth region using the sequence of thermographic images. 
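A minimal sketch of the spectral step described above (the mean value of the selected region sampled at 10 Hz and analysed with the discrete Fourier transform) is given below. It assumes a plain NumPy implementation; the signal, record length, and frequency band are hypothetical illustrations, not values taken from the study.

```python
# Minimal sketch: estimate breathing rate from a mean ROI temperature (or chest depth) sequence.
# Assumes a uniformly sampled signal at fs = 10 Hz, as described above; input values are hypothetical.
import numpy as np

def breathing_rate_bpm(signal, fs=10.0, band=(0.1, 0.8)):
    """Return the dominant breathing frequency in breaths per minute via the DFT."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()                                  # remove the slow baseline offset
    spectrum = np.abs(np.fft.rfft(x))                 # one-sided amplitude spectrum
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])    # plausible breathing band in Hz
    peak = freqs[mask][np.argmax(spectrum[mask])]
    return 60.0 * peak                                # convert Hz to breaths per minute

# Hypothetical 60 s record: a 0.3 Hz (18 bpm) oscillation plus measurement noise
t = np.arange(0, 60, 1 / 10.0)
demo = 36.0 + 0.2 * np.sin(2 * np.pi * 0.3 * t) + 0.02 * np.random.randn(t.size)
print(f"Estimated rate: {breathing_rate_bpm(demo):.1f} bpm")
```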
The application is devoted to the study of selected physiological features evaluated during physical activities.The results achieved show an example of simple sensors application for breathing analysis using simple thermal imaging cameras for the detection of temperature changes and MS Kinect depth sensors for the analysis of the motion in the chest area. The proposed method of thermographic regions detection allows both analysis of the breathing rate and the study of neurological problems in the facial area.It is assumed that simple sensors can form an alternative tool for the detection of medical disorders, including sleep and breathing analysis in the home environment."} +{"text": "Accurate mapping of the auroral oval into the equatorial plane is critical for the analysis of aurora and substorm dynamics. Comparison of ion pressure values measured at low altitudes by Defense Meteorological Satellite Program (DMSP) satellites during their crossings of the auroral oval, with plasma pressure values obtained at the equatorial plane from Time History of Events and Macroscale Interactions during Substorms (THEMIS) satellite measurements, indicates that the main part of the auroral oval maps into the equatorial plane at distances between 6 and 12 Earth radii. On the nightside, this region is generally considered to be a part of the plasma sheet. However, our studies suggest that this region could form part of the plasma ring surrounding the Earth. We discuss the possibility of using the results found here to explain the ring-like shape of the auroral oval, the location of the injection boundary inside the magnetosphere near the geostationary orbit, presence of quiet auroral arcs in the auroral oval despite the constantly high level of turbulence observed in the plasma sheet, and some features of the onset of substorm expansion. Accurate mapping of the auroral oval onto the equatorial plane is necessary in determining the locations of substorm expansion phase onsets. In very early studies of auroral morphology, Akasofu showed tE models of high latitude transverse currents , using data from the THEMIS mission. Later, Antonova et al. and ground-based measurements. This implies that only upward field-aligned currents can produce visible auroras, which is supported by the comprehensive analyses of Ohtani et al. events by Baumjohann et al. , 1990 anOur analysis demonstrates the necessity to improve existing mapping techniques by using specific plasma features, so-called \u201cnatural tracers\u201d, which conserve their characteristic signatures along magnetic field lines. We show here that plasma pressure can be successfully used as one such \u201cnatural tracer\u201d.E.A comparison of the pressure distributions measured in the equatorial plane by the THEMIS satellites with those measured by DMSP satellite above the auroral oval under both quiet and moderately disturbed geomagnetic conditions, reveals the role of the plasma ring surrounding the Earth in substorm dynamics. The results indicate that the popular approach of mapping the auroral oval into the plasma sheet must be modified and that the main part of the oval, especially its equatorial boundary, should not be mapped into the plasma sheet proper, but rather into the plasma ring surrounding the Earth. 
Such mapping explains the ring-like shape of the auroral oval, the location of the injection boundary inside the magnetosphere near the geostationary orbit, and the presence of quiet auroral arcs in the auroral oval despite the high level of plasma turbulence constantly observed in the plasma sheet. Subsequent studies to further verify and enhance the precision of these results should focus on an analysis of the processes occurring at geocentric distances between around 6\u20137 and 10\u201312 R"} +{"text": "Penile lesions are encountered in a variety of fields from family medicine practice through urology, to sexual health specialists. It is important that practitioners consider and recognize fixed drug eruptions of the penis while being able to initiate appropriate treatment in order to avoid misdiagnosis and avoidable stress. In summary, withdrawal of the offending medication and initiation of corticosteroid therapy remain the cornerstones of treatment of fixed drug eruptions of the penis."} +{"text": "The seasonal abundance patterns of insects inhabiting the understory vegetation of a mixed deciduous forest were examined with the help of the sweep-net sampling method. During the study period of 2 years, insects were sampled regularly from the understory vegetation of the three selected habitats of the mixed deciduous forest. Insect abundance was maximum in the moist-deciduous habitat and minimum in the teak plantation. Generally, insect abundance was the highest during the southwest monsoon in all habitats. The temporal pattern of fluctuations in the insect abundance followed more or less the same pattern in all the three habitats studied. The insect abundance of the understory vegetation varied among the habitats studied, while the pattern of seasonal fluctuations in insect abundance was comparable among habitats. Composition of the insect community also indicated prominent seasonal changes within habitats than interhabitat changes within a season."} +{"text": "In 1968, the American Heart Association recommended the consumption of no more than 300 mg/day of dietary cholesterol and emphasized that no more than 3 eggs should be eaten per week, resulting in substantial reductions in egg consumption, not just by diseased populations but also by healthy individuals, and more importantly by poor communities in undeveloped counties who were advised against consuming a highly nutritious food. These recommendations did not take into account that eggs not only contain important nutrients for overall health but also components which exert protection against chronic disease. The newly-released 2015 dietary guidelines finally took into consideration the epidemiological information and the data from clinical interventions and eliminated an upper limit for dietary cholesterol. This special issue addresses the history of the recommendations for eggs ,5,6, theThe history of the recommendations of dietary cholesterol and the politics behind those recommendations as well as the perception of the public and the creation of the Egg Nutrition Center in the US are thoroughly discussed alongside the lines of evidence on which the original recommendations were based . The numEggs have been recognized as functional foods due to the presence of bioactive components, which may play a role in the prevention of chronic and infectious diseases . The preThe role of eggs on the healthy eating index (HEI) was evaluated in 139 obese post-partum Mexican American women . 
This arThere is controversy regarding egg consumption and patients diagnosed with diabetes . While i"} +{"text": "The personality construct alexithymia is characterized by the difficulty in identifying and describing feelings with an externally oriented thinking pattern and a limited imaginative capacity are one of the conditions associated with alexithymia that has a high prevalence are classic psychosomatic diseases Sifneos, , and sevMost of studies which listed in Table Alexithymia may contribute to an increased severity of FGID or a poor outcome independent of anxiety and depression from the epidemiological studies listed in Table Enhanced perception of visceral stimuli called visceral hypersensitivity is one of the key features of IBS and hypothalamic-pituitary-adrenal (HPA) axis. These systems are main mediators of brain-gut interaction and alteration of these systems has been reported in FGIDs Chang, . SubjectIn conclusion, alexithymia may contribute to an increased severity of FGID or a poor outcome measured by TAS-20. The empirical data may indicate that the association between FGIDs and alexithymia may not be explained simply by \u201csomatosensory amplification,\u201d but biased interpretation of their symptoms not based on appropriate bodily sensation. The physiological component of the emotional or stress response system may be altered; however, the direction of causation between these alterations and the alexithymic cognitive and affective style is not clear. The studies on the association between alexithymia and physiological aspect of FGID has been sparse. Future studies are required to make a consensus of measurement of alexithymia, and elucidate the physiological mechanism of link between alexithymia and FGID.MK: Drafting of the manuscript and critical revision of the manuscript; YE: Critical revision of the manuscript; SF: Critical revision of the manuscript.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Abiotic factors influenced the capacity of the strains to form biofilms. Classification of the adhesion type is related with the optical density measured on the biofilm formation of tested strains. The relationship between the biofilm formation in real values with theoretical values of the strains was used to determine the mechanism involved during mixed cultures. Specifications TableValue of the dataThe data provide useful information for producers and consumers concerning biofilms formation on mango fruits.The presence of these Enterobacteria reminds the importance of the application of good agricultural practices.Effect of microbial interactions on biofilm formation are important for the industry considering the impact to human health.1The optical density of cristal violeta absorbed by Enterobacteria involved in the biofilm formation at surface fruit, by individual and mixed strains and also the mechanism of interaction between strains during biofilms formation.2All experiments were conducted at least in 3 independent times in duplicates. The data were analyzed with the statistical software SAS v. 9.0. Statistical significance was determined by Tukey test using one-way analysis of variance (ANOVA). Fruits were harvested and selected without damage. Enterobacteria were previously isolated from mango surface. 
The type of mechanism between strains was assessed comparing the amount of biofilm formation in absorbance units"} +{"text": "In this study, the removal capacity of deionized water was investigated against five gaseous carbonyl compounds by means of the gas stripping method. To determine the trapping behavior of these odorants by water, gaseous working standards prepared at three different concentration levels were forced through pure water contained in an impinger at room temperature. The removal efficiency of the target compounds was inspected in terms of two major variables: (1) concentration levels of gaseous standard and (2) impinger water volume . Although the extent of removal was affected fairly sensitively by changes in water volume, this was not the case for standard concentration level changes. Considering the efficiency of sorption media, gas stripping with aqueous solution can be employed as an effective tool for the removal of carbonyl odorants."} +{"text": "A full-scale experimental test was conducted to analyze the composite behavior of insulated concrete sandwich wall panels (ICSWPs) subjected to wind pressure and suction. The experimental program was composed of three groups of ICSWP specimens, each with a different type of insulation and number of glass-fiber-reinforced polymer (GFRP) shear grids. The degree of composite action of each specimen was analyzed according to the load direction, type of the insulation, and number of GFRP shear grids by comparing the theoretical and experimental values. The failure modes of the ICSWPs were compared to investigate the effect of bonds according to the load direction and type of insulation. Bonds based on insulation absorptiveness were effective to result in the composite behavior of ICSWP under positive loading tests only, while bonds based on insulation surface roughness were effective under both positive and negative loading tests. Therefore, the composite behavior based on surface roughness can be applied to the calculation of the design strength of ICSWPs with continuous GFRP shear connectors. Insulation is a basic aspect of passive construction for reducing energy through eco-friendly and highly-efficient air-conditioning and heating systems; together with new and renewable energies, insulation is the most important element in zero-energy construction. While external insulation methods may offer energy-saving effects up to the level of zero energy and zero carbon, complex construction and durability issues are impediments to full realization in the field. The insulated concrete sandwich wall panel (ICSWP), which consists of an insulating material and internal/external concrete wythes, is a good alternative that can satisfy the levels of both structural and insulation performance required for buildings. For ICSWP to be used as an efficient load-bearing element, however, it is necessary to improve its structural performance by increasing the degree of composite of the internal/external concrete wythes.et al. et al., (exp) and the theoretical moment of inertia of full-composite action (Ic) and non-composite action (Inc). Once the experimental moment of inertia of each specimen is calculated using Equation (4), Equation (3) is used to compare the degree of composite action of the specimens in terms of the initial stiffness (\u03ba1). The degree of composite action in terms of the ultimate strength (\u03ba2) is calculated using Equation (5). 
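The equations referred to above are not reproduced in this text. A plausible reconstruction of Equations (3) and (5), based on the variable definitions given in the following passage and on the usual Pessiki-type definition of the degree of composite action, is sketched below; it is offered as an assumption rather than a verbatim copy of the paper's equations.

```latex
% Assumed form of the degree of composite action, expressed as percentages:
% Eq. (3): stiffness-based, using the experimental moment of inertia from Eq. (4);
% Eq. (5): strength-based, using the experimental ultimate load.
\begin{align}
  \kappa_1 &= \frac{I_{exp} - I_{nc}}{I_{c} - I_{nc}} \times 100\% \\
  \kappa_2 &= \frac{P_{exp} - P_{nc}}{P_{c} - P_{nc}} \times 100\%
\end{align}
```

Under this reading, a value of 0% corresponds to fully non-composite behavior and 100% to fully composite behavior, with the experimental quantities bounded by the theoretical non-composite and full-composite limits.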
The degree of composite action based on the theoretical/experimental initial stiffness and ultimate strengths of the specimens is summarized in the accompanying table, where Iexp is the experimentally-determined moment of inertia; Ic and Inc are the theoretical, uncracked, full- and non-composite moments of inertia, respectively; Pexp is the experimental ultimate strength of specimens; and Pc and Pnc are the theoretical ultimate strengths of full- and non-composite action, respectively.The degree of composite action for each specimen was evaluated in terms of the initial stiffness and ultimate strength, following the method defined by Pessiki and Mlynarczyk (2003) [2]. All groups except the EPS_P group show that the ultimate strength is strongly dependent on the number of GFRP grids; the EPS_P group shows similar ultimate strength to the other group because the adhesive bonds contribute more to the composite action than the number of GFRP grids. Also, the degree of composite action according to the surface roughness is compared to determine the effect of the mechanical bond. The degree of composite action in terms of initial stiffness and ultimate strength of the XPSST_P group is higher, at 3%\u20136% and 24%\u201329%, than that of the XPSNB_P group, and that of the XPSST_N group is higher, at 13%\u201318% and 14%\u201332%, than that of the XPSNB_P group. The mechanical bond based on surface roughness has an effect on the composite action and is effective in both positive and negative loading tests.It has been experimentally proven that the shear capacity of connecting elements, which is thought to affect the composite action of test specimens, changed depending on not only the type of insulation and the number of GFRP shear grids, but also the loading direction. For example, in the XPSST and XPSNB groups with equal GFRP shear grids, a higher degree of composite action was obtained under the negative loading test than under the positive loading test. The EPS specimens with equal GFRP shear grids, however, exhibited a higher degree of composite action under the positive loading test. In other words, the shear capacity of connecting elements increases under negative loading more than under positive loading tests for XPSST and XPSNB specimens; the opposite results are observed in EPS specimens. Because the adhesive bond had no effect during the negative loading test, the mechanical bond strength is investigated to explain the higher degree of composite action. Also, the grid shear flow strength differed between the positive and negative loading tests, which contributed to the ultimate strength of the test specimens.The upper and lower concrete wythes are connected by connecting elements composed of GFRP grids, insulation, and bond. If the shear flow capacity of the connecting elements is calculated from the shear flow strength of the GFRP grids only, the higher degree of composite action of the XPSST specimens compared with the XPSNB specimens will not be reflected in the overall shear flow capacity of the connecting elements. Therefore, the shear capacity of the connecting elements is determined via the shear flow strength of the GFRP grids and the bond between the concrete wythes and insulation.In this experimental test, the bonds can be divided into mechanical bonds based on friction between the XPSST foam and concrete wythe, and adhesive bonds based on the absorptiveness of the EPS foam.
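Before turning to the bond quantification that follows, a short numerical companion to the degree-of-composite-action values discussed above (for example the 24%\u201329% figures for the XPSST_P group) is sketched here. It evaluates the assumed reconstruction of the kappa expressions for a single specimen; all input numbers are hypothetical placeholders, not measurements from the test program.

```python
# Minimal sketch: degree of composite action from experimental values and theoretical bounds.
# The formulas follow the assumed reconstruction given earlier; the inputs are hypothetical.

def degree_of_composite(measured, non_composite, full_composite):
    """Return kappa in percent, where 0% = non-composite and 100% = fully composite."""
    return 100.0 * (measured - non_composite) / (full_composite - non_composite)

# Hypothetical moments of inertia (mm^4) and ultimate loads (kN) for one specimen
kappa_1 = degree_of_composite(measured=3.2e8, non_composite=1.1e8, full_composite=7.9e8)
kappa_2 = degree_of_composite(measured=58.0, non_composite=35.0, full_composite=120.0)

print(f"kappa_1 (stiffness-based): {kappa_1:.1f}%")
print(f"kappa_2 (strength-based):  {kappa_2:.1f}%")
```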
Adhesive bonds are difficult to quantify, while mechanical bonds can be determined experimentally through comparison of XPSST and XPSNB groups with equal numbers of GFRP grids. The first load peak, which was considered to be the bond strength of each specimen, is investigated in the load-deflection curve, and c) is defined in Equation (6) and the value summarized in 4, and 5,760,000 mm3, respectively (see reference [The shear flow strength of the GFRP grids can be obtained theoretically from the number of effective strands subjected to tensile force, because GFRP grids are easily buckled when subjected to compressive force. eference for a deIn In this study, the composite behavior of ICSWP reinforced with GFRP grids is investigated with respect to load direction, using both positive and negative loading. An experimental program, including eighteen full-scale specimens, was conducted with test variables, including the type of insulation, surface roughness, and number of GFRP shear grids. The failure modes of each specimen are examined to compare the structural behavior according to load direction. The degree of composite behavior is analyzed in terms of the initial stiffness, taking into account the varying moment of inertia due to crack propagation and ultimate strength. In addition, the mechanical bond strength based on surface roughness is analyzed at the first load peak and then effective shear flow strength of GFRP grids is calculated from the ultimate and mechanical bond strengths.EPS_P group showed similar flexural strength with weak dependence on the number of shear grids. In this group, a material characteristic of absorptiveness in the EPS foam resulted in a high adhesive bond between the concrete and insulation that governed the flexural and composite behavior. However, the flexural strength of the EPS_N groups showed a dependence on the number of shear grids under negative loading, because the effect of the adhesive bonds were weakened by the tensile force. XPSST_P and XPSNB_P groups showed that flexural strength is governed by the number of shear grids; mechanical bonds based on surface roughness have a consistent effect on the initial stiffness and ultimate strength. Also, the flexural strength of the XPSST_N and XPSNB_N groups shows a tendency to depend on the number of shear grids, which is similar to the results of the positive loading test. We confirmed that the mechanical bond based on surface roughness is effective under negative as well as positive loading.The higher degree of composite action under negative loading was obtained in XPSST and XPSNB groups. The primary reason for this was the difference in shear flow strength of each GFRP shear grid between positive and negative loading tests. In the case of negative loading, the tensile force acting on the shear grids from concrete wythe allowed the GFRP strands in both orthogonal directions to make some contribution to the load-carry capacity, and the effective strands resisted against the flexural-shear force. On the contrary, the compressive force acting on the shear grids from concrete wythes during positive loading weakened the load-carrying capacity of the shear grids. 
As result, the shear flow strength under negative loading was higher by approximately 21% than that under positive loading, contributing to the higher ultimate strength under negative loading.The specimens in the EPS_N group showed that the adhesive bond is weakened in negative loading tests; thus, composite behavior based on the adhesive bond is not expected. The design strength of the ICSWP with EPS foam may be governed by the shear flow strength of grids. On the other hand, since the mechanical bond of the XPS foam are based on the surface roughness, they were effective under positive and negative loading tests, so it concludes that the composite behavior from mechanical bonds can be applied to the design strength of the ICSWP with continuous GFRP shear connectors."} +{"text": "TFIIH is organized into a seven-subunit core associated with a three-subunit Cdk-activating kinase (CAK) module. TFIIH has roles in both transcription initiation and DNA repair. During the last 15 years, several studies have been conducted to identify the composition of the TFIIH complex involved in DNA repair. Recently, a new technique combining chromatin immunoprecipitation and western blotting resolved the hidden nature of the TFIIH complex participating in DNA repair. Following the recruitment of TFIIH to the damaged site, the CAK module is released from the core TFIIH, and the core subsequently associates with DNA repair factors. The release of the CAK is specifically driven by the recruitment of the DNA repair factor XPA and is required to promote the incision/excision of the damaged DNA. Once the DNA lesions have been repaired, the CAK module returns to the core TFIIH on the chromatin, together with the release of the repair factors. These data highlight the dynamic composition of a fundamental cellular factor that adapts its subunit composition to the cell needs."} +{"text": "Dementia & Neuropsychologia, Nunes and colleaguespresent an interesting paper evaluating the effect of educational level on thephenomenon of reduction of the asymmetry of processing by the cerebral hemispheres withaging. An intriguing observation was that although no difference between individualswith low or high educational level was seen on cognitive tests, differences were incerebral processing were evident on magnetoencephalography.1 This group of researchers led by AlexandreCastro-Caldas, of the University of Lisbon, Portugal, has been investigating the effectsof illiteracy and low educational level on the brain by means of more current andsophisticated methods of analysis of the functional organization of the central nervoussystem.In this issue of 2 The relevance of taking advantage of more elaborate methods tostudy hitherto less investigated phenomena, or those have been evaluated using onlyclinical or conventional research methods, is one of the most important non explicitcontributions of this group of researchers and is reflected by the paper published inthis issue.In 1998, Castro-Caldas and colleagues in their already classic study, employed positronemission tomography (PET) with statistical parametric mapping to evaluate thedifferences between illiterates and non-illiterates on verbal tasks, and concluded thatthe functional organization of the adult brain is modified by learning to read and writein childhood. 
At the time the study was devised, PET was not available in Portugal, sothe authors had to seek support from the Karolinska Institute of Stockholm, Sweden,where patients were flown to undergo the examination.3 Within the sphereof particular interest to Dementia & Neuropsychologia, it is apt toremind that the modifications of the silver impregnation methods proposed by MaxBielschowsky paved the way for Alois Alzheimer to describe the pathological hallmarks ofAlzheimer\u2019s disease,4 while it was thevery appropriate choice of Aplysia californica that made possible manyof the remarkable advances in mechanisms of memory achieved by Eric Kandel and histeam.5The use of novel methods or models of study has been critical for the advance of science.Indeed, it was the application of the silver impregnation methods, only recentlyreported by Camilo Golgi at the time, that allowed Santiago Ram\u00f3n y Cajal to goon and produce what is probably the most seminal page on neurosciences ever written byany one individual.Dementia & Neuropsychologia isthe publication of papers investigating the effects of education and cultural phenomenaon the central nervous system as well as studies on diseases that are more common indeveloping countries than in developed regions. Methods are the essence of science andnew methods and models will certainly shed new and brighter light on these relativelyneglected fields.Numbering among the main motivations of"} +{"text": "The results also envisaged that the different yield attributes viz. height, total number of branches, and number of leaves per plant have been found to be varied with treatments, being highest in the treatment where Zn was applied as both soil and foliar spray without the application of P. The results further indicated that the yield and yield attributes of stevia have been found to be decreased in the treatment where Zn was applied as both soil and foliar spray along with P suggesting an antagonistic effect between Zn and P.A greenhouse experiment was conducted at the Indian Institute of Horticultural Research (IIHR), Bangalore to study the interaction effect between phosphorus (P) and zinc (Zn) on the yield and yield attributes of the medicinal plant stevia. The results show that the yield and yield attributes have been found to be significantly affected by different treatments. The total yield in terms of biomass production has been increased significantly with the application of Zn and P in different combinations and methods, being highest (23.34 g fresh biomass) in the treatment where Zn was applied as both soil (10 kg ZnSO"} +{"text": "Oral biology, oral pathology, and oral treatments are interesting fields in dentistry. The rapid evolution of technologies and the continuous apparition of new materials and products available for practitioners oblige searchers to evaluate their impact on oral tissues and teeth. The evaluation of the biocompatibility of new products is essential to avoid any tissues damage caused by an eventual toxicity or side effects of therapeutic products or materials.This special issue is a compendium of different studies and fundamental and clinical researches. 
Some papers are focused on the microbiological evaluation of the effect of low level laser therapy (LLLT) in peri-implantitis treatment, a new diagnostic approach using microRNAs as salivary markers for periodontal diseases, evaluation of safety irradiation parameters of Nd:YAP laser beam during an in vitro endodontic treatments, a literature overview about the effects of Nd:YAG 1064\u2009nm and diode 810\u2009nm and 980\u2009nm in infected root canals, efficacy of ultrasonic and Er:YAG laser in removing bacteria from the root canal system, a comparative study of microleakage on dental surfaces bonded with three self-etch adhesive systems treated with the Er:YAG laser and bur, and the study of laser Doppler measurement of flow variability in the microcirculation of the palatal mucosa.We hope that the content of this special issue allows readers to understand the interaction of materials with oral tissues and provides to practitioners new therapeutic methods for their daily practices.Samir NammourToni ZeinounKenji YoshidaAldo Brugnera Junior"} +{"text": "The adult pumping heart is formed by distinct tissue layers. From inside to outside, the heart is composed by an internal endothelial layer, dubbed the endocardium, a thick myocardial component which supports the pumping capacity of the heart and exteriorly covered by a thin mesothelial layer named the epicardium. Cardiac insults such as coronary artery obstruction lead to ischemia and thus to an irreversible damage of the myocardial layer, provoking in many cases heart failure and death. Thus, searching for new pathways to regenerate the myocardium is an urgent biomedical need. Interestingly, the capacity of heart regeneration is present in other species, ranging from fishes to neonatal mammals. In this context, several lines of evidences demonstrated a key regulatory role for the epicardial layer. In this manuscript, we provide a state-of-the-art review on the developmental process leading to the formation of the epicardium, the distinct pathways controlling epicardial precursor cell specification and determination and current evidences on the regenerative potential of the epicardium to heal the injured heart. The development of the heart is a complex process. The primitive heart tube is formed from cardiogenic mesoderm of the cardiac crescents, i.e., first heart field (FHF), while anterior and venous poles are derived from a subsequent subset of cardiogenic cells located medial to the cardiac crescents, dubbed second heart field is a small protuberance that progressively develops within limiting boundaries between the hepatic and cardiac primordia. It is composed of an external epithelial lining configured as a cauliflower structure and an internal mesenchymal component , detaching from the epithelial epicardial layer and migrating first into the subepicardial space. These cells subsequently invade the myocardial walls, giving rise to the epicardial derived cells (EPDCs) and small non-coding RNAs. 
Our current understanding of lncRNAs is still in its infancy with just a limited number of reports in the developing heart of the Spanish Government and the Consejeria de IEconomia y Conocimiento (CTS-446) of the Junta de Andalucia Regional Council.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "The assessment of medical outcomes is important in the effort to contain costs, streamline patient management, and codify medical practices. As such, it is necessary to develop predictive models that will make accurate predictions of these outcomes. The neural network methodology has often been shown to perform as well, if not better, than the logistic regression methodology in terms of sample predictive performance. However, the logistic regression method is capable of providing an explanation regarding the relationship(s) between variables. This explanation is often crucial to understanding the clinical underpinnings of the disease process. Given the respective strengths of the methodologies in question, the combined use of a statistical and machine learning technology in the classification of medical outcomes is warranted under appropriate conditions. The study discusses these conditions and describes an approach for combining the strengths of the models."} +{"text": "Perineuronal nets (PNNs) are mesh-like structures, composed of a hierarchical assembly of extracellular matrix molecules in the central nervous system (CNS), ensheathing neurons and regulating plasticity. The mechanism of interactions between PNNs and neurons remain uncharacterized. In this review, we pose the question: how do PNNs regulate communication to and from neurons? We provide an overview of the current knowledge on PNNs with a focus on the cellular interactions. PNNs ensheath a subset of the neuronal population with distinct molecular aspects in different areas of the CNS. PNNs control neuronal communication through molecular interactions involving specific components of the PNNs. This review proposes that the PNNs are an integral part of neurons, crucial for the regulation of plasticity in the CNS. Perineuronal nets (PNNs) were first described by Camillo Golgi as a reticular structure that enveloped nerve cells Table and subsIn the cortex, PNN neurons occur in high density in the motor and sensory cortex, as well as in the prefrontal and the temporal cortex. They are mostly found in layers 2\u20135 of the cortex in the amygdala ensheath parvalbumin and calbindin positive neurons influence neurons through different mechanisms: it acts as 1) a physical barrier between the neuron and the soluble extracellular matrix; (2) a binding partner for molecules that interact with neurons; and (3) a barrier to prevent lateral mobility of molecules on the neuronal membrane which is enriched in PNNs compared to the soluble extracellular matrix (ECM) , needs to be captured by PNNs to be internalized by the neuron was identified in the glial scar limit the mobility of membrane bound proteins on the neuronal surface. When the PNNs are removed from the neuronal surface using hyaluronidase in neuronal cultures, lateral diffusion of AMPA receptor subunits increases binds the PNNs and potentiates the inhibition of PNNs to the growth of axons are an integral part of a neuron and regulate communication between neurons. 
The PNNs are found in most brain regions and in each region, the PNNs envelope a limited group of neurons. In general, the PNNs are mostly formed around subpopulations of inhibitory neurons but also around some excitatory neurons. The PNN interneurons are mostly fast spiking neurons. It is likely these neurons have evolved to produce PNNs because they are in need of a tool to handle this high level of synaptic activity. PNNs allow the neuron to react to the stress of a high amount of inhibitory and excitatory stimuli.3+ and thus protect the neuron from oxidative stress. The negative charge also regulates ion sorting, which is crucial for fast spiking neurons that have a high utilization of ions. The PNNs have binding capacities for specific molecules, such as Otx2 and Sema3A. They also specifically bind several synaptic receptors and ion channels to regulate the synapse. Furthermore, the PNNs limit lateral mobility of membrane bound proteins. The limitation of lateral mobility affects synaptic proteins such as AMPA receptors. The localization of synaptic proteins is crucial for the efficiency of the synapse, which can in turn lead to plasticity. The three molecular mechanisms described here are mechanisms by which PNNs regulate plasticity.The PNNs regulate plasticity through a variety of molecular interactions. They function as a physical barrier to block the entrance of toxic substances such as amyloid \u03b2. The components of PNNs bear a highly negative charge, which allows the PNNs to buffer FeThe molecular interactions in which the PNNs are involved allow the PNNs to regulate communication between neurons. The PNNs regulate which molecules produced by other neurons reach the neuron through its selective binding properties. The PNNs also present signaling molecules to other neurons. Lastly, the PNNs are directly involved in the cell signaling taking place at the synapse. The different communication methods are essential for highly active interneurons to adapt to their surroundings. Interneurons process high amounts of input and fire at a high rate, which leads to high metabolic activity and the risk of oxidative stress. The interneurons produce PNNs to manage this high amount of activity. It is possible that PNNs enable neurons to synchronize their activity. Removal of PNNs with ChABC increases high frequency oscillations in the anterior angulate cortex (Steullet et al., Further investigations into function of the PNNs need to focus on the PNNs as a tool which neurons actively apply to regulate communication. The PNNs form at the outermost surface of the neuron and can serve as an easily accessible target for plasticity treatment to be applied. This would facilitate the modification of neuronal communication. Currently, the enzymatic removal of PNNs by ChABC and hyaluronidase is widely applied as tools for PNN regulation but these are very harsh and non-specific treatments since they remove PNNs and the loose ECM completely. More subtle manipulations of the PNNs by changing the ratio of the different proteoglycans would potentially allow for fine modifications of neuronal plasticity without harming the neuron. In diseases which present with a loss of PNNs, stimulation of PNN formation by providing of PNN components could help to protect neurons. Application of binding partners of PNNs might be another approach to modulate PNN functions without damaging it. 
Treatment designed to alter the PNNs would not have to enter cells, which makes the PNNs an accessible molecular structure for treatments. Since the PNNs are involved in a variety of diseases, such as schizophrenia (Pantazopoulos et al., Both HvS and JK contributed equally to the idea and the writing of the review.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Manual therapy has been an approach in the management of patients with various disorders dating back to ancient times and continues to play a significant role in current health care. The future role of manual therapy in health care is an important area of research. This paper reviews the history of manual therapy, examines the current literature related to the use of manual techniques , and discusses future research topics. The literature related to manual therapy has historically been anecdotal and theoretical, and current research tends to have a generic approach with broad definitions of manual therapy and inconsistencies in the classification of specific disorders. Systematic reviews of various types of manual therapy have differed on their conclusions regarding the effectiveness of this treatment modality. The current demand in health care for evidence-based practice necessitates a movement towards more specificity in the research of the effectiveness of manual therapy, with emphasis on specific patient signs and symptoms and specific manual techniques that result in effective care."} +{"text": "This paper describes a novel technique for joining similar and dissimilar metal foils, namely micro clinching with cutting by laser shock forming. A series of experiments were conducted to study the deformation behavior of single layer material, during which many important process parameters were determined. The process window of the 1060 pure aluminum foils and annealed copper foils produced by micro clinching with cutting was analyzed. Moreover, similar material combination and dissimilar material combination were successfully achieved. The effect of laser energy on the interlock and minimum thickness of upper foils was investigated. In addition, the mechanical strength of different material combinations joined by micro clinching with cutting was measured in single lap shearing tests. According to the achieved results, this novel technique is more suitable for material combinations where the upper foil is thicker than lower foil. With the increase of laser energy, the interlock increased while the minimum thickness of upper foil decreased gradually. The shear strength of 1060 pure aluminum foils and 304 stainless steel foils combination was three times as large as that of 1060 pure aluminum foils and annealed copper foils combination. With the rapid development of the automobile industry, the lightweight design of the automobile has been a wide concern. In order to reduce the weight of the automobile body, lightweight materials such as aluminum alloy has been widely used, which has set a higher standard for the joining technology of similar and dissimilar metal plates . In mostHowever, there are two main problems while joining similar and dissimilar materials by mechanical clinching process. Firstly, the poor formability of materials limits the application scope of mechanical clinching. 
To this end, two possible solutions are available: increasing the material formability by pre-heating and improving the material flow by optimizing the geometry of the clinching tools . Based oThe miniaturization of products is an important growing trend in precision mechanics and the electronic industry, which brings increased demands on joining processes in micro scale . NeverthThe purpose of the current study was to experimentally verify the feasibility of a novel micro clinching with cutting process for joining similar and dissimilar metal foils. Many important process parameters were determined by studying the deformation behavior of single layer metal foil in the mold. The process window of the 1060 pure aluminum foils and annealed copper foils (Al/Cu) was given through a series of experiments. The effect of laser energy on the interlock and the minimum thickness of upper foils were studied. Moreover, the connection strength of different joints was measured by the single lap shearing tests.The basic schematic diagram of micro clinching with cutting by laser shock forming is shown in The necessary condition for the process of micro clinching with cutting by laser shock forming is the generation of interlock without fracture of upper foils . The intIn this research, a Spitlight 2000 Nd-YAG laser with Gaussian distribution beam was employed, and its main parameters are listed in h) can be adjusted by changing the thickness of the spacer. As the depth of the mold (h) gradually increases, the width of the chute (w) on both sides of the mold also increases. The combined mold used in the experiments was made of SKH-51 high speed steel. This kind of high speed steel is suitable for processing the mold with high hardness, high stiffness, and good impact resistance. As shown in 2 and the lower foils are cut into square pieces of 15 \u00d7 15 mm2. The upper and lower foils were stacked on the combined mold. The combined mold was fixed on a three-dimensional mobile platform. By adjusting the distance between the metal foils and the focusing lens, the spot diameter was controlled so that it was larger than the mold diameter. In this experiment, a spot diameter of 3 mm was used. The detailed experimental parameters are listed as follows in There were three kinds of metal foils used in the experiments: 1060 pure aluminum foils, pure copper foils annealed at 450 \u00b0C, and 304 stainless steel foils. Different kinds of metal foils with different thicknesses were combined together to verify the feasibility of joining similar and dissimilar materials by the micro clinching with cutting process. The upper foils were cut into square pieces of 10 \u00d7 10 mmIn this experiment, single lap shearing tests were carried out to measure the mechanical strength of the clinched joints under different material combinations. For each processing condition, three samples were tested. Micro clinching with cutting by laser shock forming is a very complicated process which involves many process parameters such as the number of laser pulses, the depth of the combined mold, laser energy, and the combination of different materials with different thicknesses. In order to study the influence of various process parameters on different material combinations, the deformation behavior of single layer metal foil under different experimental conditions was studied first. During this study, some important process parameters were determined. 
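As a rough, illustrative aside (not taken from the paper's parameter table, which is not reproduced in the text), the laser parameters discussed above are often summarized as a peak power density on the foil surface, which in confined-ablation models scales with the induced shock pressure. The short sketch below uses the 3 mm spot diameter from the experiments and a pulse energy within the range reported later for the Al/Cu joints, while the nanosecond pulse duration is an assumed, typical Nd:YAG value; all numbers are therefore indicative only.

```python
import math

# Illustrative estimate of laser power density in laser shock forming.
# Spot diameter (3 mm) and pulse energy (within the reported 1200 mJ range)
# come from the text; the ~10 ns pulse duration is an assumption.
pulse_energy_J = 1.2        # 1200 mJ
pulse_duration_s = 10e-9    # assumed nanosecond-scale pulse width
spot_diameter_m = 3e-3      # 3 mm spot used in the experiments

spot_area_m2 = math.pi * (spot_diameter_m / 2) ** 2
peak_power_W = pulse_energy_J / pulse_duration_s
power_density_W_cm2 = peak_power_W / (spot_area_m2 * 1e4)  # convert m^2 to cm^2

# In confined-ablation models the shock pressure grows roughly with the
# square root of this power density.
print(f"Peak power density ~ {power_density_W_cm2:.2e} W/cm^2")
```

With the assumed values this gives a power density on the order of a few GW/cm^2, the regime in which plasma-confined laser shock forming is typically operated.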
Subsequently, the experiments of different material combinations were carried out according to the laws derived from the study of single layer metal foil.The study of the deformation behavior of single layer metal foil under different laser energy and the different number of laser pulses is essential for better understanding the micro clinching with cutting process. The deformation behavior of single layer 1060 pure aluminum foil with the thickness of 100 \u03bcm produced under different laser energy is shown in Veenaas et al. producedThe total thickness of materials and the die depth are two important parameters in the process of micro clinching with cutting. It was important for this experimental research to study the matching relationship between the two process parameters. Firstly, the matching relationship between the thickness of single layer 1060 pure aluminum foil and the die depth was investigated. The deformation behaviors of single layer 1060 pure aluminum foil differ when produced under differing mold depths, as can be observed in On the basis of the above research, the matching relationship between the total thickness of two layers of metal foils and die depth was identified. After additional experiments, the matching relationship between the total thickness of the materials and die depth was determined and is illustrated in In the conventional mechanical clinching, the main failure modes of clinched joints are button separation, neck fracture, or a combination of both mechanisms . The sepIn contrast, the main failure modes of micro clinching with cutting by laser shock forming differ from the conventional mechanical clinching. To obtain optimum joining conditions of the metal foils, a series of experiments were conducted. As shown in The joinability for the 1060 pure aluminum foils and annealed copper foils combination of different thickness by micro clinching with cutting process is shown in The cross sections of the clinched joints with different material combinations are shown in The interlock and the minimum thickness of upper foil were the main quality criteria of the clinched joints. It is very important to study the variation tendency of the interlock and minimum thickness of upper foil under different laser energy. As shown in For each processing condition, three samples were tested. In the conventional mechanical clinching, the main failure modes on clinched joints can be categorized into full shear, partial shear, and unbuttoning with crack as well as the full unbuttoning. The failure mode of the clinched joints depends on the size of interlock and \u03b1-angle . Moreove(1)Under the single laser pulse, the large interlock could not simply be formed by the increase of laser energy. The use of moderate energy and multiple laser pulses can solve this problem. After a series of experiments, the number of laser pulses was determined to be three pulses.(2)There is a certain matching relationship between the total thickness of the materials to be connected and the die depth. With the increase of the total thickness of materials, the corresponding die depth that can form a large interlock increases. In addition, the change of the die depth has less influence on the joining of thicker materials. When the total thickness of the materials was in the range of 60\u2013100 \u03bcm, the optimum die depth was about two-thirds of the total thickness.(3)The similar and dissimilar materials could be joined by the micro clinching with cutting process. 
Seen from the process window of 1060 pure aluminum foils and annealed copper foils, micro clinching with cutting process is more suitable for the material combinations where the upper foil is thicker than the lower foil.(4)The optimal laser energy for joining the 1060 pure aluminum foils and annealed copper foils (Al/Cu) was in the range of 1200\u2013380 mJ. With the increase of laser energy, the interlock between the metal foils increased while the minimum thickness of the upper foil gradually decreased.(5)According to the load-displacement curves, it was observed that the maximum load force of Al/Ss combination is about 13.12 N, which is three times larger than that of the Al/Cu combination. The Al/Ss combination with higher shear strength may be due to higher tensile strength of the lower foil or larger interlock and neck thickness of the upper foil. Furthermore, different material combinations had different failure modes.In this paper, a novel micro clinching with cutting by laser shock forming technology is described. Many important process parameters were determined by studying the deformation behavior of single layer 1060 pure aluminum foil under different experimental conditions. The feasibility of this novel technology was verified by joining similar and dissimilar materials in micro scale. The effect of laser energy on the interlock and minimum thickness of upper foil was investigated. In addition, the mechanical strength of different material combinations joined by micro clinching with cutting process was measured. The main results are summarized as follows:"} +{"text": "The new epicenter of the tularemia outbreak was 300 km west of the epicenter of the previous outbreak in Eastern Bulgaria. The authors undertook to track the source of the outbreak using molecular methods .Their study is remarkable for, among other things, the relative lack of the application of traditional epidemiological methods in tandem with the molecular methods. They made no definite conclusion.Myrtannas et al. studied Francisella tularensis (FT) from the old to the new foci of outbreaks. However, the high BMR may not necessarily be a physical barrier to the aerosolized transmission of FT.[The authors of the study referred to above did not mention the total number of cases detected during in new outbreaks, making one wonder how the authors defined an outbreak. Also, iton of FT.The authors suspecteThe study presented by Myrtannas et al. shows th"} +{"text": "A large self-report population study in Denmark found twofold increased prevalence of infantile autism in offspring of mothers that suffered influenza infection during pregnancy, and a threefold increased prevalence of ASD in children of mothers who experienced prolonged period of fever employing 257 postmortem samples from ASD and matched control brains that were subjected to H3K27ac chromatin immunoprecipitation sequencing (ChIP-seq) that are small non-coding regulatory RNAs which mediate mRNA destabilization and/or translational repression Hammond, has alsoAlthough the above-mentioned studies demonstrate the role of epigenetics in predisposition to ASD, some investigations does not support a role for epigenetics in autism. A study reported no difference in the proportion of global methylation between control and autistic brain or brain samples from a larger sample size to determine the role of epigenetics in ASD and confirm the validity of results. 
Nevertheless, the well-controlled studies can provide critical insights into the role of epigenetics in the pathophysiology of ASD. With the emergence of new state-of-the-art cutting edge technologies such as whole genome sequencing (WGS), it is possible to comprehend the complex role of epigenetics in ASD. The new genomic techniques allow more coverage of the genome, can provide better knowledge about ASD specific histone markers as well as can help in determining the contribution of non-coding regions such as intergenic regions and noncoding RNAs.Autism researchers and individuals with ASD themselves have plenty to benefit from in learning through the fundamentals of epigenetics, as there exists a sense of robustness in the search for new epigenetic biomarkers that can be used for future interventions. Efforts in research of the epigenetics of autism are still uncertain and at times unrewarded, yet the prospective payoff of alleviating ASD-symptoms is too great a reward to not delve even further. A better knowledge about the role of epigenetics in autism will help in developing novel diagnostic biomarkers and treatment modalities leading to improved quality of life of many ASD patients and their families.All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "The Recovery After Initial Schizophrenia Episode-Early Treatment Program (RAISE-ETP) study was a landmark investigation whose positive results led to increased funding and support to build first episode psychosis programs across the US. Every state in the country received dedicated funding to implement a coordinated specialty care (CSC) program designed to identify and treat persons with first episode psychosis within the context of the nation\u2019s multi-payer health system. Since the funding began in 2014, numerous CSC programs have been developed but little is known about which models of treatment providers are implementing and the success of these programs. The research here presents data from a survey focusing on providing feedback from the first episode psychosis programs in the US implementing NAVIGATE, the CSC program utilized in RAISE-ETP. The survey targets the program directors in the NAVIGATE programs; the aims of the survey include 1) to describe the program characteristics of NAVIGATE teams in the US and 2) to better understand how NAVIGATE programs are identifying and enrolling people into their services. Capturing local data on CSC team composition and case identification strategies is particularly critical in multi-payer systems lacking guidance and oversight from a national health system.An online survey is being conducted to assess the implementation of NAVIGATE programs in the US and evaluate the procedures that the program director utilizes to identify and enroll NAVIGATE participants in services. Program directors from NAVIGATE programs are being identified and contacted to participate by national trainers to join a national database of first episode programs. Program data collected includes information about the location of the program, staff in the different NAVIGATE team roles , program enrollment criteria, number of participants screened and enrolled, and rates of planned and unplanned discharge. 
In addition, program directors are asked questions to report community based strategies to identify participants and screening procedures to enroll participants. Data analysis will focus on presenting the demographic and clinical characteristics of the programs. Common themes will be ascertained, including barriers and facilitators to identifying and enrolling participants with first episode psychosis. Helpful recommendations provided by the project directors on identifying and screening participants will be synthesized and reported.There are approximately 30 NAVIGATE programs in 14 states in the US. Results will highlight the dissemination of NAVIGATE in the US and implementation of these programs across a wide range of different communities. We will describe the dissemination of NAVIGATE across the US and similarities and differences across NAVIGATE programs. Results also will provide feedback on the challenges and helpful strategies that program directors have used to engage people in treatment.The findings from this survey will be the first to provide an overview of the implementation of the NAVIGATE program in the US. The results will provide an overview of the dissemination of the NAVIGATE program, the only CSC program evaluated in a national US trial. Recommendations could help inform the ongoing development and dissemination of coordinated specialty care programs."} +{"text": "Pain is an integrative phenomenon that results from dynamic interactions between sensory and contextual processes. In the brain the experience of pain is associated with neuronal oscillations and synchrony at different frequencies. However, an overarching framework for the significance of oscillations for pain remains lacking. Recent concepts relate oscillations at different frequencies to the routing of information flow in the brain and the signaling of predictions and prediction errors. The application of these concepts to pain promises insights into how flexible routing of information flow coordinates diverse processes that merge into the experience of pain. Such insights might have implications for the understanding and treatment of chronic pain. Pain is a vital phenomenon that depends on the dynamic integration of sensory and contextual processes. In chronic pain the adaptive integration of sensory and contextual processes is severely disturbed.Neuronal oscillations and synchrony at different frequencies provide evidence on information flow across brain areas. The flexible relationship between oscillations at different frequencies and pain indicates flexible routing of information flow in the cerebral processing of pain.The systematic assessment of oscillations and synchrony in the processing of pain provides insights into how sensory and contextual processes are flexibly integrated into a coherent percept and into abnormalities of these processes in chronic pain. Predictive coding frameworks might help us understand these integration processes. Pain results from dynamic interactions between sensory and contextual processes Here we review recent evidence on the role of neuronal oscillations and synchrony in the processing of pain. We begin with a brief discussion of the peculiarities of pain and its processing in the brain. We then summarize recent insights into the significance of neuronal oscillations and synchrony for the routing of information flow in the brain. On this basis we review evidence on the role of oscillations in the processing of pain. 
We specifically discuss how oscillations and synchrony serve the flexible routing of information flow in the integration of sensory and contextual factors into a coherent percept. Moreover, we review and discuss the role of these processes in pathological abnormalities of the pain experience in chronic pain. Finally, we consider perspectives and future directions for the study of the role of neuronal oscillations in the cerebral processing of pain.nociception , electroencephalography (EEG), or magnetoencephalography (MEG) supragranular layers and terminate in layer IV. Feedback projections predominantly start in infragranular layers and terminate in layers other than layer IV. Second, the non-homogeneous distribution of feedforward and feedback connections is complemented by a non-homogeneous distribution of brain oscillations across cortical layers. Several studies demonstrate that oscillations at alpha and beta frequencies (8\u201329\u00a0Hz) are stronger in infragranular layers than in supragranular layers. By contrast, oscillations in the gamma frequency band (\u223c30\u2013100\u00a0Hz) are stronger in supra- than in infragranular layers of the cortex Granger causality) indicated stronger connectivity in the gamma band from lower towards higher hierarchical areas whereas directed connectivity in the opposite direction (from higher to lower areas) is stronger in alpha/beta frequencies.This framework is based on a convergence of anatomical and functional findings in animals and humans. First, in the visual system anatomical connections have been differentiated into feedforward (bottom-up) and feedback (top-down) connections Taken together these findings indicate that neuronal oscillations and synchrony in distinct frequency bands serve the dynamic routing of information flow in the brain. Previously seemingly independent strands of research converged on the notion that alpha/beta oscillations mediate feedback signals whereas gamma oscillations mediate feedforward signals. In predictive coding frameworks of brain function, this might correspond to the signaling of predictions and prediction errors, respectively . The invMost studies on the cerebral processing of pain have investigated responses to phasic pain stimuli in the range of milliseconds to seconds. These results are likely to apply to acute pain events that signal threat and promote protective behavior. Fewer studies have investigated the brain mechanisms of longer-lasting pain of months and years as a key feature of pathological chronic pain conditions. Furthermore, experimental studies on longer-lasting pain in the range of minutes (tonic pain) have investigated pain at timescales between those of phasic and chronic pain and are intended to represent an experimental approach towards chronic pain.EEG and MEG studies showed that brief noxious stimuli induce a complex spectral\u2013temporal\u2013spatial pattern of neuronal responses with at least three different components. First, pain stimuli evoke increased neural activity at frequencies below 10\u00a0Hz. These increases occur between 150 and 400\u00a0ms after stimulus application. They originate from the sensorimotor cortex and the frontoparietal operculum including the insula and secondary somatosensory cortex as well as from the mid-/anterior cingulate cortex. They correspond to the well-investigated pain-related evoked potentials The functional significance of the different components of pain-related brain activity is not yet fully understood. 
So far the evidence indicates that the components are differentially sensitive to different modulations of pain. Bottom-up modulations of pain by varying stimulus intensity influences all three components The findings, however, indicate that brain activity at different frequencies provides different and complementary information about pain. Moreover, they indicate that there is no one-to-one correspondence between any frequency component of brain activity and pain, which extends the lack of specificity of brain activity for pain The assessment of brain responses to phasic painful stimuli shows the impact of contextual modulations on stimulus processing but not the mechanisms of the modulations. A straightforward approach to the disentangling of contextual processes from stimulus processing is the assessment of prestimulus activity, which cannot be contaminated by any stimulus-related processes. The few studies on this topic with respect to pain intracranial recordings in a few patients with epilepsy investigated the significance not only of prestimulus oscillations but also of prestimulus connectivity between brain areas for pain. The results indicate that attention to pain changes the connectivity between pain-relevant brain areas at alpha and beta frequencies Studies using The above-reviewed evidence relates to the processing of brief experimental pain stimuli. It is, however, unclear how these findings relate to the brain mechanisms of longer-lasting pain of months and years, which is the key feature of chronic pain. Experimental studies using longer-lasting tonic experimental pain stimuli in the range of minutes represent a step further in that direction. These studies have shown that tonic pain is associated with suppression of oscillations at alpha frequencies Thus, during a few minutes of painful stimulation the encoding of pain shifts from gamma oscillations over brain areas encoding sensory processes to gamma oscillations over brain areas encoding emotional\u2013motivational phenomena. These findings indicate that pain-related information flow might change not only with the behavioral context but also with the duration of pain. In the current framework of flexible routing of information flow , these fThe analysis of oscillations and synchrony is conceptually promising and methodologically well suited for the investigation of ongoing processes such as chronic pain. However, remarkably few studies have addressed this topic and the results are not fully consistent see , 35. ThiA less-noticed finding is an increase of oscillations at alpha and beta frequencies In summary, the data show mostly changes of theta and beta oscillations in chronic pain, the latter particularly in frontal brain areas. Considering disturbed integration of nociceptive and contextual processes in chronic pain, an abnormal balance of feedforward and feedback signaling and thereby an abnormal balance of oscillations at different frequencies might play an important role in chronic pain. However, the role of neuronal oscillations and synchrony in chronic pain is a largely unexplored field and the emerging concepts await further empirical testing.Recent evidence has shown that oscillations and synchrony play a crucial role in the flexible routing of information flow in the brain. In particular, oscillations at gamma and alpha/beta frequencies have been shown to serve feedforward and feedback processing, respectively. 
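As a purely illustrative aside, and not part of any analysis pipeline described in the review, band-limited power of the kind discussed in the preceding paragraphs (theta, alpha, beta, gamma) is commonly estimated from a single EEG/MEG channel with a Welch periodogram. A minimal sketch, assuming a one-dimensional signal array, a known sampling rate and conventional (study-dependent) band limits:

```python
import numpy as np
from scipy.signal import welch

# Conventional frequency bands in Hz; exact limits vary across studies.
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 100)}

def band_power(signal, fs):
    """Return approximate absolute power per band for one channel sampled at fs Hz."""
    freqs, psd = welch(signal, fs=fs, nperseg=int(2 * fs))  # 2-second windows
    df = freqs[1] - freqs[0]
    powers = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        powers[name] = psd[mask].sum() * df   # PSD summed over the band x bin width
    return powers

# Example with a synthetic 10 Hz (alpha) oscillation plus noise
fs = 500
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
print(band_power(x, fs))
```

Quantities of this kind (and their changes before, during and after noxious stimulation, or their differences between patients and controls) are what the frequency-specific findings summarized above refer to.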
The flexible routing of information flow might be particularly relevant in the processing of pain where the dynamic integration of sensory and contextual processes plays a crucial role . The resChronic pain appears to be associated with abnormal oscillations at theta and beta frequencies. Although the specificity of these findings has remained unclear, at least part of them would be compatible with abnormal contextual feedback processes playing a central role in the pathology of chronic pain. More standardized approaches, larger patient samples, data-sharing initiatives, and more sophisticated and timely analysis strategies such as graph theory-based network analyses Furthermore, considering the preeminent role of the integration of contextual and sensory information in the processing of pain, an application of predictive coding frameworks to the pOutstanding QuestionsRecent studies discuss the significance of interactions of oscillations at different frequencies . What is the role of cross-frequency coupling in pain? In particular, how do infraslow fluctuations observed by fMRI relate to oscillations at higher frequencies in the processing of pain?Pain modulations can be harnessed for pain therapy. However, a systematic understanding of pain modulations is so far lacking. Can the assessment of oscillations and patterns of cerebral information flow help to establish a brain-based taxonomy of pain modulations?It is tempting to relate the interaction between sensory input, contextual information, and pain to predictive coding frameworks of brain function. How can this relationship be specified and experimentally tested? What are the consequences for the understanding of pain and chronic pain?The analysis of oscillations is conceptually and methodologically well suited for the investigation of the brain mechanisms of chronic pain. However, evidence on the role of oscillations and synchrony in chronic pain is remarkably limited. Can timely network analyses of EEG, MEG, and fMRI data specify abnormalities of oscillations and synchrony underlying chronic pain?Subcortical areas including the ventral striatum, amygdala, and hippocampus play an important role in chronic pain. Although neuronal oscillations from these areas are well known they have not so far been investigated during pain. How can subcortical oscillations be recorded and how do they integrate in patterns of cerebral information flow?Recent studies discuss the use of patterns of brain activity as markers of pain. Can patterns of neuronal oscillations and synchrony serve as diagnostic and/or prognostic markers of pain? Can we target neuronal oscillations and synchrony for pain therapy using pharmacological, behavioral, neuromodulatory, or neurofeedback approaches?Thus, based on recent progress in our understanding of neuronal oscillations, their systematic assessment might provide a unique window onto the dynamics of cerebral information flow and related predictive coding processes underlying the experience of pain in health and disease."} +{"text": "Rheological techniques and methods have been employed for many decades in the characterization of polymers. Originally developed and used on synthetic polymers, rheology has then found much interest in the field of natural (bio) polymers. This review concentrates on introducing the fundamentals of rheology and on discussing the rheological aspects and properties of the two major classes of biopolymers: polysaccharides and proteins. 
An overview of both their solution properties (dilute to semi-dilute) and gel properties is described."} +{"text": "In connection with the sustainable development of scenic spots, this paper, with consideration of resource conditions, economic benefits, auxiliary industry scale and ecological environment, establishes a comprehensive measurement model of the sustainable capacity of scenic spots; optimizes the index system by principal components analysis to extract principal components; assigns the weight of principal components by entropy method; analyzes the sustainable capacity of scenic spots in each province of China comprehensively in combination with TOPSIS method and finally puts forward suggestions aid decision-making. According to the study, this method provides an effective reference for the study of the sustainable development of scenic spots and is very significant for considering the sustainable development of scenic spots and auxiliary industries to establish specific and scientific countermeasures for improvement. The sustainable development of tourism occupies a dominant position in Sustainable Development Goals (SDGs) of the whole world at present. In the opinion of the World Tourism Organization (WTO), sustainable development of tourism should not only meet existing demands from scenic spots and tourists but also meet future ones . A sceniIn order to promote the sustainable development of scenic spots, scholars have undertaken extensive and in-depth exploration from various perspectives, mainly including those described below.The first is to discuss about the influence of a certain factor or driving mechanism on the sustainable development of scenic spots. Jin and Hu start from the crowding of scenic spots to study the influence of crowding as perceived by tourists on the development of scenic spots, focus on analyzing the psychological influence and provide suggestions to the management of scenic spots . Yao andThe second is to discuss about the influence of coordinated development of stakeholders on the sustainable development of scenic spots. M\u00f3nica et al. evaluated the sustainable tourism strategy of stakeholders of National Park by analytic network analysis and Delphi-type judgment-ensuring process . Carlos-The third is to organize the multiple-objective study and evaluation of the development of scenic spots to facilitate the sustainable development of scenic spots. Early in 1998, Garrod and Fyall pointed out that the focus should be shifted from the definition to the practice of sustainable development of tourism, and established a frame to measure the sustainable tourism . EvaluatSignificant achievements have been made with respect to the study of sustainable development of scenic spots and many scholars have made great contributions resulting from their own understanding of the nature of sustainable scenic spots. However, there are few quantitative studies on the sustainable development of scenic spots. Existing ones often focus on a certain aspect of the scenic spot development, such as natural resource , ecologiFollowing the logic shown in R represents principal component score matrix after extraction and rotation of factors.In n measured objects and each object possesses p index data to constitute the initial measurement matrix of p indexes, so it is able to obtain m aggregate indexes from p indexes (p indexes by such m indexes with little information lost [(1)Data standardization. 
This paper relies on the raw data of measurement index of each province and city to establish the initial measurement matrix (2)Computing related coefficient matrix (3)According to the characteristic equation (4)Computing principal component contribution rate Principal Components Analysis (PCA) reduces the dimensionality of a higher dimensional variable space and replaces existing multi-dimensional variables with a few aggregate variables by linear transformation and abandoning some information while minimizing the loss of raw data . Its basion lost . This pai principal components has reached 80\u201395%, the first i principal components should be set as new variables.(5)Computing principal component loading (6)Computing the score of each principal component:If the accumulated contribution rate of the first The entropy method is an objective method for constructing the judgment matrix based on the value of evaluation index and determining the weight by the degree of variation of each index. Determining the weight of each principal component by entropy method may eliminate the influence brought by subjective factors to the largest extent to obtain a more practical result. Steps of determining the entropy weight are listed below:(1)\u2003Standardization of principal component indexesAssuming there are n measured objects (province and city) and m principal component factors, the standardization matrix established according to the score of each principal component of the measured object is (2)\u2003Determination of entropy and entropy weight of principal component factorsjth principal component factor are:In accordance with the definition of entropy, the entropy value In the equation, the original expression of ningless , so the Technique for Order Preference by Similarity to an Ideal Solution (\u201cTOPSIS\u201d for short), which is the one of group of MCDM methods, was first proposed by Hwang and Yoon in 1981 . TOPSIS (1)\u2003Establishing weighted standardized matrix jth principal component factor and ijy is the score of the jth principal component of the ith measured object:In the equation: jth index and jth inverse index.In the equation: (2)\u2003Computing the distance of each province and city to the positive ideal point A larger Development of scenic spots more and more involves challenges from society, economy and the ecological environment to complement the urgently needed sustainable development. The sustainability of scenic spots is affected by many factors, commonly comprising economic, resource and environmental aspects. The essence of sustainable development of scenic spots is to ensure the long-term reasonable economic development and fair distribution of social and economic benefits to stakeholders (auxiliary industries in relation to the sustainable development of scenic spots); and resources and environment should be utilized to the optimal extent as critical elements for the development of scenic spots. In the establishment of measurement indexes of sustainable development of scenic spots, indexes should not only reflect the authentic situation of sustainable tourism but also keep sufficient comparability and applicability for a certain period. 
Therefore, based on the existing relevant literature ,35,36,37Such an index system of the sustainable development of scenic spots divides two types of sustainable development into five aspects, including resource condition, economic benefits, ecological environment, auxiliary industry scale and benefit (primary index), and establishes measurement indexes under such five aspects (secondary index). The logic of index selection is as follows.Resource condition of scenic spot, including natural resource, human resource and market resource, is fundamental for the sustainable development. Natural resource is the foundation and precondition of tourism activity. Difference and relative advantage of natural resource are critical for the tourism activity and directly influence the selection and flow of tourists. Natural resource indicators include B11 and B12. Moreover, management of the scenic spot is only effective under the support from human resource, the situation of human resources in scenic spots is reflected by B13, and the market resources are measured by B14 and B15.Economy of scenic spots, which is the economic benefit generated by tourism activities on the basis of resource condition, offers a physical guarantee of the sustainable development of the scenic spot. It comprises of economic benefits (reflected by B21\u2013B24) from food & beverage, amusement and sightseeing, and also experience of tourists brought by tourism activities (measured by B25).Ecological environment of scenic spot is the radical guarantee of the sustainable development. Ecological environment should be concerned at all times while tourism resources are being developed reasonably. Ecological environment is especially important when the green, low-carbon and cyclic economic development is advocated. Therefore, ecological environment indexes should cover the greening of scenic spot and ecological environment improvement. B31 and B34 quantitatively reveal the amount of atmospheric pollutant emissions and the number of treatment facilities caused by the development of tourism economy; B32 and B33 can demonstrate the situation of green cover of green space and forest around the scenic spot; B35 and B36 outline the status of pollutants and waste disposal.Auxiliary industry refers to surrounding auxiliary industries and transportation industry, which are important parts of stakeholders of the development of scenic spot. While integrating its own resource advantages in the sustainable development, the scenic spot should give consideration to the development demand of stakeholders (auxiliary industry). On the other hand, coordination and cooperation of the auxiliary industry facilitates the sustainable development of the scenic spot greatly. Therefore, measurement of the sustainable capacity of the scenic spot should also include measurement indexes of the development of auxiliary industry. B41 and B43 measure the size of the scenic area auxiliary industry; B42 and B44 can better reflect the human resources situation of auxiliary industry in scenic spots; B45 and B46 reveal the training status of tourism professionals; B47 and B48 reflect the accessibility of scenic spots; B51\u2013B56 are mainly used for the economic benefits of scenic auxiliary industry (including tourism enterprises and transportation).It should be noted that these indexes are correlated and mutually promoted to drive the sustainable development of scenic spots. 
As a result, study on the sustainable capacity of scenic spots should consider each index comprehensively and conduct the analysis one by one. The relation among indexes and evaluation objectives of the sustainable capacity of scenic spots is shown in As required by the index system, the data in this study mainly comes from The Yearbook of China Tourism 2016 , China SThis paper adopts SPSS 22.0 to standardize the measurement index data and adopts principal components analysis to optimize the sustainable capacity measurement index system. Specific steps are listed below.As per the principle of keeping accumulated variance contribution rate above 80%, 5 principal components are extracted from 16 indexes of self-sustainability of the scenic spot and 3 principal components are extracted from 14 indexes of sustainable capacity of auxiliary industry. Containing 80.231% and 88.118% of information respectively in the measurement of sustainable capacity of scenic spot, they are able to interpret and express the raw index and (13). Assigning the weight of each principal component Z and further obtain the positive ideal solution Z+ and negative ideal solution Z\u2212 , areas wFrom the perspective of scale and benefit of auxiliary industry , areas wFrom the overall sustainability of scenic spots , the topThe average value of sustainable capacity of scenic spots in 30 Chinese provinces and cities is 0.444, generally in the intermediate level. The development of Chinese tourism industry differs significantly that it is stronger in East China, intermediate in Middle China and weaker in Northeast and West China. According to the score form, East China is much advantageous to other areas in the scale and benefit of auxiliary industry but is only slightly better than Middle China in the resource condition and ecological environment, so it can be concluded that the sustainable development of scenic spots in East China focuses on the expansion of industrial scale and elevation of economic benefit while concerning the protection and improvement of resources and ecological environment, thus possessing the strong sustainable capacity; moreover, East China covers much coastal areas with obvious geographical advantage and sufficient tourist source. Middle China is full of natural tourism resources and great potentiality in the tourism market but behaves ordinary in the scale and benefit of auxiliary industry, and due to the low economic strengthen of tourism and poor geographic condition, its sustainability is quite week. 
Although Northeast China and West China possess featured landscapes like glacier, snowfield, forest and wetland, the low traffic accessibility, far distance from major tourist sources and weak availability lead to insufficient potentiality in domestic and international tourism market; the extensive development and operation mode restrict the substantial expansion of the scale and benefit of tourism industry; moreover, these areas lack excellent development concepts with respect to the management of scenic spots and improvement of ecological environment; with serious weakness in the service system and tourism infrastructure of scenic spots, the sustainability of Northeast China and West China (0.253 and 0.366) is lower than the average level of China (as shown in (1)Local governments need to increase financial support, improve the scenic area software support to the scenic area function of scientific planning and positioning, give full play to the advantages of scenic resources, create a distinctive tourism brand, improve the attraction and competitiveness of scenic areas, promote the development of the scenic area economy from the policy and technological innovation, and then lead the development of auxiliary industries around the scenic area;(2)Developing scenic resources reasonably and paying attention to the protection and management of ecological environment for protecting the future tourism development depends on the existence of environmental quality, improving the scenic area management mode for getting rid of extensive development and management mode of scenic area;(3)Improving the basic public facilities and entertainment services around the scenic area, reasonably planning the network of scenic traffic, better the quality of life in the tourist reception area for providing tourists with high quality tourism experience.For the western and northeastern regions where the sustainable development levels of all aspects of scenic areas are relatively low, the following two measures are proposed:For the central region where the scale and benefit of the auxiliary industry of scenic spots are not outstanding, it is necessary to excavate the characteristics of its own tourism resources, differentiate and accurately locate the market, and provide high-quality tourism services; Strengthening the cooperation of regional tourism and the comprehensive integration of tourism resources, commonly designing and developing the routes of cross regional tourism, the construction of tourism fine lines and the establishment of barrier-free tourism mechanism, promoting the free flow of industrial factors for accelerating the integration process of regional tourism development.For the eastern region with good scenic resources and ecological management, it is necessary to promote the optimization and upgrading of tourism industrial structure and the accelerated development of tourism economy on the premise of protecting the existing ecological environment. 
Deepening the reform and innovation of the management system and related policies, constructing a number of tourist pioneer areas and demonstration areas for promoting the development of the central, western and northeastern regions in China, cultivating new growth point of tourism economy innovatively.In view of the differentiated extensity of sustainable development of Chinese scenic spots and the low sustainable development in West China and Northeast China, following countermeasures are proposed in this paper:(1)After giving comprehensive consideration to the resource condition, economic benefit, auxiliary industrial scale, auxiliary industry benefit and ecological environment in relation to the sustainable capacity of scenic spots, this model establishes a relatively comprehensive sustainable capacity evaluation index system to provide relatively reliable reference to the objective multiple-objective measurement;(2)Utilizing the principal component analysis and entropy TOPSIS method, this model establishes the sustainable capacity measurement model of scenic spots. Firstly, while decreasing the computation quantity effectively by principal components analysis, this model reduces the multi-dimensional index influencing the sustainable development of scenic spots to the lower dimensional index; secondly, through assigning the objective weight to the lower dimensional index by entropy method, this model avoids the subjective influence brought by the personal preference to make the result more objective and scientific; finally, this model further lowers the weighted low dimensional index to one dimensional index by TOPSIS method to analyze the sustainable development of scenic spots of each province and city in an easier manner. Therefore, this measurement model is better adapted to the sustainable development of scenic spots that it may ensure the scientific and reasonable result while decreasing the workload;(3)Study on the measurement of the sustainable development of scenic spots is very significant for understanding the sustainable development of scenic spots and their auxiliary industries to establish specific and scientific countermeasures;Based on the statistics information of tourism of each province in 2015, this paper establishes the comprehensive measurement index system of the sustainable capacity of scenic spots, further determines the weight of each index by the entropy method, analyzes by TOPSIS method to obtain the comprehensive value of sustainable capacity of scenic spots of each province, and finally conducts the comprehensive analysis of the sustainable capacity of scenic spots of each provinces. Conclusion of this paper is listed below:Although the measurement index system established in this paper gives comprehensive consideration to indexes in relation to the measurement of sustainable capacity of scenic spots, it does not cover all indexes influencing the sustainable development of scenic spots; therefore, this index system is still restrictive to a certain extent. Besides, principal components analysis may decrease the workload of the large-volume data computation but the converted data volume is still relatively large and the computation is quite complicated. 
As a result, it is still to be studied as how to better improve the measurement index system and analyze the sustainable development of scenic spots by a faster and more effective measurement method."} +{"text": "Congenital chordee and penile torsion are commonly observed in the presence of hypospadias, but can also be seen in boys with the meatus in its orthotopic position. Varying degrees of penile curvature are observed in 4\u201310% of males in the absence of hypospadias. Penile torsion can be observed at birth or in older boys who were circumcised at birth. Surgical management of congenital curvature without hypospadias can present a challenge to the pediatric urologist. The most widely used surgical techniques include penile degloving and dorsal plication. This paper will review the current theories for the etiology of penile curvature, discuss the spectrum of severity of congenital chordee and penile torsion, and present varying surgical techniques for the correction of penile curvature in the absence of hypospadias."} +{"text": "To find the prevalence as well as to identify the predictors as protective and risk factors of Non-Suicidal Self-Injury (NSSI) among children with autism spectrum disorder (ASD).In this analytical cross sectional survey 83 children with ASD age range from 8 to 18 years were selected through convenient sampling technique from five special schools of Lahore city. The Urdu form of a standardized tool was used to assess NSSI.Statistical analysis indicated overall point prevalence of NSSI was 33%. Moreover banging/self-beating (47%), scratching (38), pinching (35%), picking scabs (33%), self-biting (32%), pulling hair (30%) and rubbing skin (19%) emerged as common forms of challenging behavior. Further regression analysis showed that age B, gender B and severity level of ASD B as risk factors/positive predictors of NSSI. However early intervention and involvement of parents in counselling emerged as protective factors/negative predictors of NSSI among children with ASD.Non-suicidal self-injury is a serious challenge among children with ASD. Early intervention, counselling and parental involvement in managing the children with ASD will not only prevent but reduce the challenging behaviors. Autism Spectrum Disorder (ASD) is a neuro-developmental disorder. The individuals with ASD show deficits in social functioning, language, communication skills and ability to maintain relationships. It is a spectrum disorder which means that its symptom patterns range of abilities and characteristics are expressed in many different combinations and in any degree of severity. They are also found to exhibit unusual interest and stereotype behaviors.1Self-injurious behaviors (SIB) are self-inflicted behaviors that are harmful for the child\u2019s own body and these include head banging, hand or arm biting, excessive scratching and rubbing, self-choking, hair pulling and many others.7There are some studies which showed the prevalence of SIB 30% or higher in the population with ASD.3Though prevalence of self-injury without the intentions of suicide is common in ASD but it is not considered the symptom of ASD because self-injury has been recorded in the individuals\u2019 with other disabilities as well as in the normal population. 
Self injurious behaviors is assumed to be a repetitive behavior at the same time it can be episodic as it either occurs under highly specific stimulus contexts, or in bursts after long periods without problematic behavior.11There is a dearth of published studies on NSSI of individuals with ASD in Pakistan in spite of the fact that SIB is highly associated with ASD. Self-injury is a source of frustration to parents, caregivers, teachers and other professionals working with these children. The present study examined the point prevalence of NSSI among children with ASD and can be a source of awareness to the parents, teachers and other stakeholders. The results also identify the role of demographics and other environmental factors contributing as risk as well as the protecting factors of developing NSSI. This study can be helpful to design preventive measures to control the NSSI among children with ASD by indicating the protective and risk factors.13The main objectives of this study were to:Find prevalence of Non suicidal self-injury among children with ASD across forms of NSSI and across the personal characteristics of the participants.Explore the common forms of self-harm behaviors and identify the risk as well as protective factors of NSSI among the children with ASD.Total 95 children with ASD were conveniently selected from the five institutions located in Lahore city. Their mean age was 11.77 . Of them 60 were male and gender mean was 1.36 . The detail of the personal characteristics of the participants are given in The assessment process was carried out at two phases. At first phase the management of the schools pointed 132 children with ASD. At the second phase the childhood autism rating scale (CARS)The study was approved by departmental review board. Informed consent was taken from the heads of the institutions and the parents of the participants. The participants were told that they can quit from the study at any stage.In order to find the prevalence and forms of self-injurious behavior among children with ASD frequency program was run and percentages were calculated. The detail is given in The prevalence of forms of self-injurious behaviors in percentages among children with ASD is shown in Overall prevalence of NSSI among the participants and the prevalence of NSSI among children with ASD according to associated characteristics is shown in It is further supported by the results of multiple regressions 2 (0.88) and F ratio show the goodness of fit of the model. It means that the independent variables are explaining 88% of the variance in dependant variables. In the result age, gender, severity level of ASD, parental involvement in counselling and early intervention emerge as significant predictors of NSSI. However Age and severity level of ASD are the positive and turned out as risk factors. On the other hand early intervention and parental involvement in the management plans of the children emerged as protective factors and negative predictors of NSSI.In order to identify the predictors of NSSI among the participants with ASD standard multiple regression was run on the data. The values of \u2206RThe scarcity of research on prevalence of NSSI among children with ASD in Pakistan was the motive to conduct present study. Although a fair number of studies investigated the prevalence of SIB among the population with intellectual disabilities but there is a limited published data describing challenging behaviors of ASD. 
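A minimal sketch of the kind of standard multiple regression reported above may help make the analysis concrete: a continuous NSSI score regressed on age, gender, ASD severity, early intervention and parental involvement in counselling. The data frame, column names and coefficients below are synthetic placeholders, not the study data; they are constructed only so that the expected signs (risk versus protective predictors) appear in the output.

```python
# Illustrative multiple regression of an NSSI score on the predictors named above.
# Synthetic data; variable names and effect sizes are assumptions, not study values.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 83
df = pd.DataFrame({
    "age": rng.integers(8, 19, n),                 # 8-18 years
    "gender": rng.integers(0, 2, n),               # 0 = male, 1 = female
    "severity": rng.integers(1, 4, n),             # 1 = mild ... 3 = severe ASD
    "early_intervention": rng.integers(0, 2, n),   # 1 = received early intervention
    "parental_involvement": rng.integers(0, 2, n), # 1 = parents involved in counselling
})
# Toy outcome built so risk factors load positively and protective factors negatively.
df["nssi_score"] = (
    0.4 * df.age + 1.5 * df.gender + 2.0 * df.severity
    - 2.5 * df.early_intervention - 2.0 * df.parental_involvement
    + rng.normal(0, 1.5, n)
)

model = smf.ols(
    "nssi_score ~ age + gender + severity + early_intervention + parental_involvement",
    data=df,
).fit()
print(model.summary())   # B coefficients, R-squared and F statistic
```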
The previous studies highlighted the need of identifying the associated factors of self-injury among ASD so that the challenging behaviors may be stopped.1The exact reasons of these stereotype self-stimulatory injurious behaviors are not known. However ASD limits the functioning of many parts of the brain which further affects every aspect of social interactions with others and weakens the abilities like social responsiveness, communication and feelings for other people.The results of frequencies in the form of percentages in this study indicates that majority of the participants showed head banging or self-beating, scratching, pinching, picking scabs, biting, pulling hair and rubbing skin against rough surface as self-injurious behaviors. These findings are consistent with the results of previous studies.3Results of present study indicate 30% overall prevalence of NSSI among the participants. The findings of the researches conducted in west indicates up to 50% prevalence of SIB among individuals with ASD.17Further the comparison across personal characteristics indicated that females appeared to be scored higher on non-suicidal self injurious behaviors as compared to their male counterparts. A reason of these gender differences can be that girls with ASD were found to show more cognitive developmental delays and behavioral problems as compare to the boys with similar disorder.3Present study also found the highest prevalence of NSSI among the participants who had severe level of ASD. It means the severe the level of symptoms of ASD the more frequent and severe SIB among the individuals. Similarly researcher24The findings of regression analysis supported the above mentioned statements hence parental involvement in the process of counseling and provision of early intervention as protective factors as well as the severity of ASD is a major risk factor of NSSI among the children with ASD.21The main limitations of the study was small sample size selected from the Lahore city due to time and financial constraints. The more generalizable results would have been generated in case of recruiting the sample from other regions also.The present study found some associated factors as predictors of Non-suicidal Injury among children with ASD which further may help to tailor preventive and intervention programs to reduce the life threatening challenging behaviors. Early interventions, the focus counselling in adolescence and involvement of parents/caregivers in the treatment or management plans of the individuals with ASD will reduce the severity of symptoms which will further help in minimizing the challenging behaviors.BA conceived, designed and editing of manuscript.MB, ZR, AA, did data collection, statistical analysis and manuscript writing.BA takes the responsibility and is accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved."} +{"text": "We present a case of spontaneous rupture of the diaphragm, characterized by nonspecific symptoms. The rapid diagnosis and appropriate surgical approach led to a positive resolution of the pathology."} +{"text": "The Moon-based Extreme Ultraviolet Camera (EUVC) of the Chang\u2019e 3 mission provides a global and instantaneous meridian view (side view) of the Earth\u2019s plasmasphere. 
The plasmasphere is one inner component of the whole magnetosphere, and the configuration of the plasmasphere is sensitive to magnetospheric activity (storms and substorms). However, the response of the plasmaspheric configuration to substorms is only partially understood, and the EUVC observations provide a good opportunity to investigate this issue. By reconstructing the global plasmaspheric configuration based on the EUVC images observed during 20\u201322 April 2014, we show that in the observing period, the plasmasphere had three bulges which were located at different geomagnetic longitudes. The inferred midnight transit times of the three bulges, using the rotation rate of the Earth, coincide with the expansion phase of three substorms, which implies a causal relationship between the substorms and the formation of the three bulges on the plasmasphere. Instead of leading to plasmaspheric erosion as geomagnetic storms do, substorms initiated on the nightside of the Earth cause local inflation of the plasmasphere in the midnight region. Kp index2RE) for relatively quiet geomagnetic condition1151112141516The plasmasphere contains dense and cold thermal plasmas that surround and corotate with the Earth234567841in situ measurement of plasma density via spacecraft, the configuration of the plasmasphere can be observed by remote imaging+ ions in the plasmasphere)192123252627RE away from the Earth. Thus it can provide a global and instantaneous meridian view (side view)31in situ data from various space missions have demonstrated the reliability of the EUVC data for identifying plasmapause locations3334In addition to Here we use the EUVC data obtained on 20\u201322 April 2014 to analyze the shape of the plasmaspheric configuration and its relationship with substorms (see Methods for the EUVC data reduction procedure). The cadence of the EUVC images is about 10 minutesIn When discussing configurations of the plasmasphere, we are more interested in the magnetic coordinate systems direction and the magnetic dipole axis. The plasmapause boundary in the image for field line fitting is identified via visual inspection, in which we use a threshold of intensity to get a rough dayside plasmapause position for reference. L-values. The curve of the L-values with observing times is plotted in L-value curve can be identified in L-value curve in L-values determined from the profile fitting process are consistent with the variation of plasmapause locations illustrated in the time-distance diagram.After performing the dayside plasmapause profile fitting for the 153 EUVC images, we have the coordinates of 153 field lines. The radial distances of the field lines at the magnetic equatorial plane are quantified by their L-value curve in We employ the geomagnetic (MAG) coordinate system (see Methods for the definition of the MAG coordinate system) to reconstruct the global plasmaspheric configuration based on the 153 field lines. It is widely accepted that the plasmaspheric plasmas are confined in the field lines and corotate with the Earthin situ electron density data of the Electric Field and Waves (EFW)in situ density data and those determined from the reconstructed plasmaspheric configuration along the orbits of VAP. The plasmapause locations obtained from the VAP in situ data are indicated by the vertical dashed lines. The intersection locations of the satellites\u2019 orbit to the plasmapause surface of the reconstructed configuration are indicated by the vertical dotted lines. 
It is interesting that most of the dashed and dotted lines are close even if the observational times of them are different, which means that the reconstructed plasmaspheric configuration is almost preserved during the observing period. This result verifies the reliability of the reconstructed plasmaspheric configuration based on the EUVC images.In order to verify the reconstructed plasmaspheric configuration based on the EUVC images, we use the L-value curve of the fitting field lines using the midday transit times overlying the AE index curve, and L-value curve shifting back 12\u2009hours (midnight transit time) overlying the AE index curve. It can be seen in The three bulges of the plasmasphere are identified from the dayside shape profiles of the plasmapause, while an examination of the observed EUVC images see shows thWe also compared the midnight transit times of the three bulges with the AL and AU indices40From the time series of EUVC images observed during 20\u201322 April 2014, we identified three bulges which successively appeared at the dayside of the plasmasphere along with the rotation of the Earth. The inferred midnight transit times of the three bulges coincide with the contemporary expansion phases of three substorms. These observations demonstrate that the expansion phases of the three substorms cause the three bulges in the plasmasphere. Different from the concept that geomagnetic storms lead to erosion of the plasmasphere, a substorm can cause plasmaspheric inflation in a local magnetic longitude range around midnight.A possible mechanism for this inflation process is a rapid filling of the upper magnetic field lines (or flux tubes) at the midnight region during the expansion phase of substorm, which affects the size of the flux tubes. Then these flux tubes rotate to the dayside and are recorded by the EUVC observations. Since the substorms happened successively, the three bulges deduced from EUVC images were also formed successively and thus are distributed at different geomagnetic longitudes. Besides, the newly formed bulge can overlap the previous plasmapause, which leads to a mismatch between the plasmaspheric reconstructions (see the overlapping region in The hypothesis of flux tube rapid filling is inspired by the direct observation of the substorm dynamic process on the nightside of the plasmasphere in the EUVC images around the time period from 8:00 UT to 12:00 UT on 21 April 2014 see . We perf3 counts per pixel), which reflects the He+ column density of the plasmasphere along LOS293132The EUVC uses a reflective optical system, which includes a spherical multilayer film mirror, a thin film filter, and a spherical photon-counting imaging detector29RE , then the size of Earth radius will be the same (10 pixels/RE) in all the processed EUVC images Rescale the image and let the FOV be 15\u2009\u00d7\u200915 The noise in the EUVC images shown in The definitions of the geophysical coordinate systems45The orbit positions of the Moon in the whole month of April 2014 shown in We use the dipole field model to calculate the geomagnetic field lines. The dipole axis is defined by the International Geomagnetic Reference Field (IGRF) model48L-values of the plasmapause locations studied in this paper are less than 4.2 RE (see 50The 2 RE see and we oin situ electron density measurements. 
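The field-line fitting above reduces each dayside plasmapause profile to the L-value of a dipole field line, and the midnight transit time of a dayside feature is then inferred by shifting the observation time by half a rotation period under rigid corotation. A minimal sketch of that bookkeeping follows; the numbers are illustrative, not EUVC measurements.

```python
# Dipole field-line bookkeeping: r = L * cos^2(lambda) with r in Earth radii and
# lambda the magnetic latitude, so a plasmapause point at (r, lambda) gives
# L = r / cos^2(lambda). Midnight transit is inferred by a 12-hour corotation shift.
import numpy as np
from datetime import datetime, timedelta

def l_value(r_re, mag_lat_deg):
    """L-shell of the dipole field line passing through (r, magnetic latitude)."""
    return r_re / np.cos(np.radians(mag_lat_deg)) ** 2

def field_line(l_shell, lat_deg):
    """Radial distance of the dipole field line L at the given magnetic latitudes."""
    return l_shell * np.cos(np.radians(lat_deg)) ** 2

# Example: a plasmapause crossing seen at r = 3.9 RE and 20 deg magnetic latitude.
L = l_value(3.9, 20.0)
print(f"fitted L-value ~ {L:.2f} RE")
print(field_line(L, np.array([0.0, 15.0, 30.0])))   # shape from equator to mid-latitude

# A feature observed near local noon maps to midnight half a rotation earlier,
# assuming rigid corotation of the plasmasphere with the Earth.
t_midday = datetime(2014, 4, 21, 6, 0)
print("inferred midnight transit:", t_midday - timedelta(hours=12))
```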
One instrument is the Electric and Magnetic Field Instrument Suite and Integrated Science (EMFISIS)37Two instruments aboard the VAP satellites can provide in situ electron density values (as well as the timing and orbit data of VAP satellites) are taken from the Level 3 data files (in CDF format) of the EFW. Although the EFW data is not as accurate as the EMFISIS data, they are sufficient for determining the plasmapause positions via visual inspection , only the EFW electron density data are available. So we adopt the EFW electron density measurements in this paper web site (http://s-cubed.info). The SuperMAG auroral electrojet index data are supplied by the SuperMAG project (http://supermag.jhuapl.edu/indices).The AE indices (including AL and AU indices) data are supplied by World Data Center for Geomagnetism, Kyoto. The ground-based magnetometer data are provided by NASA\u2019s Coordinated Data Analysis Web (CDAWeb). The Wp index data are supplied by the Substorm Swift Search (SKp\u2009~\u20094) and then the corotation of the plasmasphere is less disturbed; (2) The bulges are large-scale plasmaspheric structure, thus they are relatively stable and are less affected by local convection ."} +{"text": "Variations of the testicular veins are relevant in clinical cases of varicocele and in other therapeutic and diagnostic procedures. We report herein on a unique variation of the left testicular vein observed in an adult male cadaver. The left testicular vein bifurcated to give rise to left and right branches which terminated by joining the left renal vein. There was also an oblique communication between the two branches of the left testicular vein. A slender communicating vein arose from the left branch of the left testicular vein and ascended upwards in front of the left renal vein and terminated into the left suprarenal vein. The right branch of the testicular vein received an unnamed adipose tributary from the side of the abdominal aorta. Awareness of these venous anomalies can help surgeons accurately ligate abnormal venous communications and avoid iatrogenic injuries and it is important for proper surgical management. Venous blood from the testis is drained through the pampiniform plexus of veins. This plexus condenses to form four veins at the superficial inguinal ring; two veins at the deep inguinal ring and one testicular vein at varying levels. The right testicular vein terminates into the inferior vena cava and the left testicular vein terminates into the left renal vein.During dissection classes for medical undergraduates, we observed a unique variation of the left testicular vein in an adult male cadaver aged approximately 70 years. The left testicular vein was formed by the union of the pampiniform plexus of veins, as described in anatomy textbooks. Its course in the lower part of the abdomen was also normal. It ascended retroperitoneally towards the left renal vein. Approximately 3 cm below the left renal vein, it bifurcated to give rise to left and right branches . 
The importance of anatomical variants of the testicular veins has greatly increased in recent times because of the development of advanced operative procedures within the abdominal cavity for varicocele and undescended testes. Testicular vein variations have been extensively studied by Asala et al. Varicocele is a well-recognized cause of decreased testicular function and is said to be present in about 40% of infertile males. The variations we report here are surgically very important because they could go unnoticed until discovered during surgery. The communicating vein could suffer iatrogenic injuries during suprarenal surgery, leading to bleeding. The terminal bifurcation of the testicular vein could also lead to increased chances of its damage during upper abdominal retroperitoneal surgeries. One of the segments of the bifurcated vein could be used as a graft during surgery. Left-sided varicocele is more common than right-sided varicocele. The variant bifurcated termination of the left testicular vein and its communication with the left suprarenal vein could further increase the possibility of a left-sided varicocele. It could also result in post-surgical recurrence of varicocele. In view of the practical importance of such variations to renal transplantation, renal and gonadal surgeries, and other therapeutic and diagnostic procedures, the present report is of significant importance to surgeons, radiologists and andrologists."} +{"text": "To evaluate the integrated learning program of neurosciences for continuation of integrated learning in the forthcoming teaching and learning modules of the undergraduate medical curriculum at Bahria University Medical & Dental College (BUMDC). A mixed method design was conducted from August 2016 to February 2017 after ethical approval from BUMDC. The quantitative aspect was evaluated retrospectively from desk records of marks obtained in the integrated module and a non-integrated module. Focused group discussions were conducted with primary intended users to share their expectations and concerns and to obtain responses on key evaluation questions for implementation and outcome evaluation of the integrated learning program. The desk records revealed a positive perception of students and faculty at the time of implementation, with improvement in results after integration in the subjects of basic sciences. The discussions highlighted the reasons that resulted in the failure of its continuation and affirmed readiness for re-induction and continuation of integration with clinical sciences. Evaluators considered approval and re-application of the integrated curriculum at BUMDC after utilization focused evaluation. The conventional non-integrated approach of the MBBS curriculum disseminates knowledge in a fragmented way that fails to build the learning skills of case investigation and analysis. As a result, students fail to acquire conceptual understanding of the topic and hence its application in the treatment and prevention of disease.3 In Bahria University Medical & Dental College (BUMDC), teaching of basic science subjects is accomplished by a modular and hybrid system with a mixture of traditional teaching and problem based learning (PBL). The Integrated Learning Program (ILP) was implemented in 2010 to integrate the subjects of the Anatomy, Physiology, Biochemistry, and Community Medicine disciplines in the module of Neurosciences.1 Although ILP was executed with favorable results, integration was not followed in the succeeding modules.
In view of this, the stakeholders decided to have the program evaluated by external evaluators with the aim of ascertaining its usefulness and efficacy in a real-time environment. They wanted to identify any gaps, deficiencies and weaknesses for implementation and improvement in the forthcoming modules. In order to achieve this objective, utilization focused evaluation (UFE) with its 17-step framework was adopted. The study was conducted from August 2016 to February 2017 after ethical approval from BUMDC. A mixed method design was adopted: the qualitative aspect was acquired through focused group discussions (FGD), and quantitative data were assessed retrospectively from the available desk records from January 2012 to December 2015. The evaluator team comprised external and internal evaluators, selected from inside and outside the institution on the basis of their command of professional practical knowledge, systematic enquiry skills, project management skills, reflective practice competence and interpersonal competence. To assess and enhance readiness for evaluation, the evaluators shared their expectations and concerns with the primary intended users (PIU): the chair of the integration committee, students, and teachers from basic and clinical sciences. Evaluation of ILP was carried out through all the steps of UFE.6 Three focus group discussions (FGD), with the chair of the integration committee, faculty members and students, were conducted using key evaluation questions (KEQ) developed by the researchers through an iterative literature process. All the faculty members of Basic Sciences, especially those who took part in the integration of the Neurosciences module (2010), were invited. Medical students of first and second year MBBS were invited and informed about the purpose of the FGD. They were later shortlisted on the basis of their academic performance (scores in the previous modules). The evaluators took consent from the participants and assured them of the anonymity and confidentiality of the data. Each FGD lasted approximately 60 to 90 minutes, was held in a private place free from noise and disturbance, and was audiotaped after obtaining consent. The data obtained after debriefing from the FGDs answered all the KEQs in terms of the difficulties faced during implementation of the program and the perception of its usefulness by students and teachers. The quantitative data, comprising the results of the non-integrated (NI) and integrated modules after ILP, the guide book for students, the schedules of both modules, and the feedback forms filled in by students and faculty members, were also acquired and analyzed. The comparative analysis of the modules was done in SPSS-15 using Student's t-test (an equivalent computation is sketched below). On the basis of the retrieved desk records, comparison of module results after the ILP intervention showed better performance of medical students after integration. Improvement in results was attained by ILP in the disciplines of Anatomy, Physiology and Biochemistry in the Neurosciences module. In the discipline of Anatomy, interactive sessions and model study facilitated understanding of the structural and functional associations of the nervous system in 86% of students through reinforcement, interpretation and description of structured objectives in a pleasant manner. Assimilation of knowledge in 80.25% of students was the result of interactive Biochemistry lectures linked with molecular and functional aspects. Elucidation of the physiological mechanisms that form the basis of disease helped 84% of students to grasp pertinent pathologies.
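The module comparison described above was performed in SPSS-15 with Student's t-test; the snippet below is an equivalent open-source sketch on synthetic marks. The class sizes and score distributions are assumptions for illustration, not the BUMDC desk records.

```python
# Independent-samples t-test comparing non-integrated (NI) and integrated (ILP)
# module marks. Synthetic marks only; means and sample sizes are placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
ni_marks = rng.normal(68, 9, 100)    # placeholder cohort for the non-integrated module
ilp_marks = rng.normal(74, 8, 100)   # placeholder cohort for the integrated module

t_stat, p_value = stats.ttest_ind(ilp_marks, ni_marks, equal_var=True)
print(f"mean NI = {ni_marks.mean():.1f}, mean ILP = {ilp_marks.mean():.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```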
Implementation of ILP apprehended students greater expertise and interest on subject and effectiveness of integrated teaching for better scoring in assessments and clinical postings. The feedback response from students and faculty suggested that problem solving skills, through provision of structured integrated model of neurosciences was made possible.Qualitative Results of qualitative analysis was provided after thematic analysis from responses to all the KEQ, sorted out in the form of usefulness of ILP and steps required for outcome and implementation evaluation. One of the faculty members responded; \u201cBrain Storming, Self learning, Connections among all subjects and a sense of accomplishment was acquired by the students during ILP\u201d. \u201cFraming of time table\u201d was the most important impediment to its continuation, mentioned by few participants. A senior faculty member mentioned; \u201cInterpersonal skills in terms of listening, and receiving criticism developed positive attitude, when we implemented ILP way back in 2010\u201d. One of the student said; \u201cwe lack integration of basic sciences in clinical practice that needs to be catered by implementation of case based session\u201d. Majority of students in FGD recommended that understanding and application of Basic Sciences in clinical scenarios can be improved by integration of modules. The chair said; \u201cwe were not able to take forward integrated learning due to the reason that all the schedules were not made after intra and interdepartmental discussions among \u201cBasis Science Departments\u201d with contribution from clinical faculty\u201d. Furthermore, he said \u201cObjectives were not aligned with the learning outcome and mode of information transfer hence integration could not be achieved \u201cFaculty criticized that the time table of whole module was not finalized before the start of the module rather was constructed after it had started which resulted in serious flaws. In the FGD with chair of integration committee, it was concluded that since all the political powers are in favor of implementation of integration, all efforts should be made to make it possible. The discussions with stake holders on the basis of KEQ thus shoProgram evaluation is recognition, purification and implementation of secure benchmarks to govern the objectives of evaluation meaningfully. The goal of evaluation was focused on organization of central themes or concepts that combined several subjects using multiple learning strategies in ILP.9The two imperative hypothesis of UFE are that no evaluation should progressunless there are managers who will essentially be working on proofs that the evaluation will engender and dynamically participated in the course of overall assessment.12The evaluation model used at BUMDC connected all stakeholders in planning of evaluation, given responses and understanding of findings.15UFE determines practical agenda for conducting evaluations with progress of its utility.4The application of UFE actively involved PIU in the decision making process with the guidance for improvement of program. The most important limitation was time interval in application of program and its evaluation however the approach of UFE can look up for the causes of failure of its continuation. 
It would further give emphasis to solve the problems with well-defined goals and strategic directions acquired by feedback from all PIU.Evaluators considered approval and application of transformation from nonintegrated to integrated curriculum at BUMDC in view of recommendations from chair integration committee, faculty members and students and desk record of an integrated module. The evaluators also highlighted the benefits of integration in undergraduate curriculum and informed them that any insufficiencies at the end of evaluation will help in further modification in the forthcoming modules.Organizations should employ UFERehana Rehman: Principal Investigator took part in conception and study design & compilation of write up, drafting the article & revising it critically for important intellectual content.Rabiya Ali: Took part in study design, acquisition, analysis and interpretation of data, compilation of write up drafting the article.Hina Moazzam: Took part in compilation of write up and formulation of tables.Saifullah Shaikh: Took part in data analysis.Rabiya Ali: Takes the responsibility and is accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved."} +{"text": "Recent discovery of hair follicle keratin 75 (KRT75) in enamel raises questions about the function of this protein in enamel and the mechanisms of its secretion. It is also not clear how this protein with a very specific and narrow expression pattern, limited to the inner root sheath of the hair follicle, became associated with enamel. We propose a hypothesis that KRT75 was co-opted by ameloblasts during the evolution of Tomes' process and the prismatic enamel in synapsids. Since early days of enamel research the question regarding the presence of keratins in this epithelial tissue intrigued scientists , which are hairless, lost a number of hair and hair follicle keratin genes (Nery et al., The author confirms being the sole contributor of this work and approved it for publication.The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. The reviewer CR and handling Editor declared their shared affiliation."} +{"text": "This is a patient aged 35 years old, farmer, who consults for chronic fistula of the soles lasting for three months without any notion of trauma. The review found a swelling of the outer edge of the foot of inflammatory appearance with pus coming through a fistula at the top. Plain radiography shows a well limited round osteolysis recalling the appearance of an abscess of Broca. The scanner confirms the osteolysis and highlights intra-osseous foreign body. The patient underwent surgical treatment by trimming, removal of a piece of wood and curettage of the bone cavity. The outcome was favorable with disappearance of infectious signs and filling the bone cavity."} +{"text": "Mycoplasma gallisepticum S6. Atomic force microscopy studies show that the above-mentioned forms of the mycoplasma have different values of DNA parameters . We suppose that the observed phenomenon may be connected with the process of adaptation of these bacteria to severe environments.Recent studies show that mycoplasmas have various programs of life. This means that changes in morphology and genome expression may occur once the environment of these microorganisms becomes extremely altered. 
In this article, we report on changes in the DNA molecule obtained from the vegetative forms and the viable but nonculturable (VBNC) forms of"} +{"text": "The peak particle velocity datasets recorded during quarry blasts in the neighborhood villages and towns in Ibadan and Abeokuta were processed and analyzed in order to recommend a safe blast design for each of the quarries. The minimum peak particle velocity of 48.27\u202fmm/s was recorded near the foundation of the nearest residence at the shot to monitored distance of 500\u202fm. The tendency of ground vibration emanating from the quarry sites to cause damage to the structures in the nearby dwelling areas is very high. The peak particle velocity datasets recorded were not within the safe limit. Therefore the peak particle velocity that will not exceed 35\u202fmm/s is recommended for a safe blast design. Specifications TableValue of the data\u2022The data could be used as a source of information for quarry blasters to determine the relationship between the peak particle velocity and ground vibration.\u2022The data could be used to monitor the level of damage on structures in the neighborhood of the quarry sites.\u2022The data revealed Shot - Monitored distances , shot-monitored distance, charge weight and scaled distance. Of all the parameters in the datasets, the peak particle velocity dataset is the blast induced earthquake predictor. Blair Pal Roy The components of these datasets were considered to minimize the complaints of the residents in the neighborhood of the quarry sites. In recent years, one of the problems encountered by technical personnel who are responsible for excavation with blasting is the rightful or unjustifiable complaints of people or organizations in the neighborhood of quarry sites 2The data were generated during the survey of residential buildings in the neighborhood of the quarry sites. A Global Positioning System (GPS) was used to measure Shot-Monitored distances (the distance between the shot points at the quarry sites and Building Monitored Station Points (BMSP) in the neighbourhood of quarry sites). Shot-Monitored distances (D) were recorded from GPS. 3-component blast Seismographs were positioned at twenty BMSP in the dwelling areas surrounding each site. Longitudinal, vertical and transverse PPV datasets associated with the blast-induced earthquakes were recorded from the Seismograph.Twenty PPV datasets were obtained for each site was used as explosive. The explosives were detonated using magnadet detonator. Scaled distance (SD) data were obtained for each BMSP that can be detonated safely was determined using Eqs. The maximum quantity of explosives at 55% of BMSP. The PPV data revealed that the vibration intensities were not bearable at most of the monitored stations. The analysis of data signifies that there is a correlation between the exceeded or large PPV recorded and the cracked walls of buildings in the vicinity of quarry sites.A combination of scaled distance and geological constants, k and \u03b2 parameters obtained from the regression lines in"} +{"text": "West Africa is experiencing its first epidemic of Ebola virus disease (Ebola) . As of FOutbreaks in remote areas posed a significant challenge to CHTs to mount an effective investigation and rapid response because of limited resources, personnel, and means to reach remote areas. 
The RITE strategy provided a framework to coordinate assistance from the central MOHSW and other agencies under the leadership of the CHT and developed several tools to help plan, manage, and track a response effort. The objectives of the investigation and response teams were to 1) rapidly isolate and treat Ebola patients, either by establishing isolation and treatment facilities in the community or by safely transporting patients to existing Ebola treatment units (ETUs); 2) ensure proper collection and safe transportation of samples for Ebola laboratory confirmation; 3) ascertain the index case (the first person in the transmission chain who entered the community from another county in Liberia) in each outbreak to better understand importation and transmission patterns; 4) identify all generations of cases by improving case finding and contact tracing to ensure no cases were missed; 5) train teams in safe burial procedures; and 6) observe contacts for 21 days from the death or ETU admission of the last case to ensure interruption of transmission. Investigation and response teams included Liberian MOHSW national and county representatives, CDC, WHO, the United Nations Children\u2019s Fund, and other multilateral and nongovernmental organizations.The RITE strategy clearly articulated the role of CHTs to coordinate efforts of partners involved in response activities to rapidly mobilize resources that could be tailored to the needs in each outbreak. After initiation of the RITE strategy in October, outbreak responses were supported with structured rapid response microplanning tools implemented by CHTs that delineated each intervention component and the organizations responsible for implementation. Outbreaks and response activities were reviewed on a weekly basis at the national level at the county operations subcommittee of the national incident management system , allowinFor this report, Ebola outbreaks that occurred in remote areas, produced at least one generation of transmission in the community and had complete investigations were analyzed. An Ebola outbreak was defined as two or more epidemiologically linked Ebola cases. Cases were classified as suspected, probable, or confirmed using the Liberian case definitions .Initial alerts of possible Ebola outbreaks were received by CHTs as rumors, reports of clusters of ill persons or unexplained deaths, or reports of patients admitted to ETUs. Case investigation reports were gathered through interviews with ill persons or their proxies. Databases from ETUs were searched to supplement incomplete case investigation reports. Transmission-chain diagrams were constructed back to the first case to enter the county from another county in Liberia (the index case). The first generation of cases was defined as resulting from contact with the index patient, and the total number of generations was determined from the transmission-chain diagrams. To monitor the effectiveness of rapid response to outbreaks over time, the number of days between the symptom onset of the index patient and the date the CHT was first alerted to a potential outbreak, and the date the CHT first sent in a team to investigate were computed for each outbreak. Outbreak duration was calculated as the number of days between the symptom onset date of the index case and the last case in the outbreak, defined as the last case in a chain of transmission to occur before 21 days passed with no new cases. 
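The two timeliness measures defined above (days from index-case symptom onset to the CHT alert, and outbreak duration from index-case onset to onset of the last case in the transmission chain) can be computed from an outbreak line list as in the following sketch. The dates, outbreak labels and column names are illustrative placeholders, not the Liberian surveillance data.

```python
# Toy line list: compute days-to-alert and outbreak duration per outbreak.
import pandas as pd

cases = pd.DataFrame({
    "outbreak": ["A", "A", "A", "B", "B"],
    "onset": pd.to_datetime(
        ["2014-08-01", "2014-08-15", "2014-09-02", "2014-10-05", "2014-10-20"]),
})
alerts = pd.Series(
    pd.to_datetime(["2014-09-05", "2014-10-12"]), index=["A", "B"], name="cht_alert")

index_onset = cases.groupby("outbreak")["onset"].min()   # index-case symptom onset
last_onset = cases.groupby("outbreak")["onset"].max()    # onset of the last case

days_to_alert = (alerts - index_onset).dt.days
duration_days = (last_onset - index_onset).dt.days
print(pd.DataFrame({"days_to_alert": days_to_alert, "duration_days": duration_days}))
print("median time to alert:", days_to_alert.median(), "days")
```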
Demographic characteristics of patients and the proportion of patients isolated and treated in an ETU or similar facility were summarized by outbreak.Among 15 Ebola outbreaks in remote areas of nine counties with index case symptom onset dates during July 16\u2013November 20, 2014, 12 investigations had complete data . InvestiThe median time between symptom onset in the index patient and an alert received by the CHT was 33 days (range = 0\u201358 days); the median time to alert was 40 days for the six outbreaks before October 1 (prior to initiation of the RITE strategy) and 25 days for the six outbreaks with onset after October 1 (after the RITE strategy) . The medInterventions in the 12 outbreaks included 1) engagement of traditional and community leaders in response activities; 2) community education about Ebola virus transmission and prevention; 3) active case finding, contact tracing and monitoring; 4) quarantine of asymptomatic high risk contacts at home or in designated quarantine facilities; 5) isolation and treatment of patients; and 6) safe burials. In each community, the appropriate level of intervention was determined by the community\u2019s requests, the number and severity of cases, the remoteness and accessibility of the community, and the distance to Ebola treatment facilities. Resistance to assistance was encountered in several communities, and response was suspended until discussions with county and traditional officials or the increasing burden of cases and deaths encouraged community acceptance of intervention. In two outbreaks , the availability of nongovernmental partners to rapidly establish isolation and treatment facilities permitted on-site or nearby care of patients. In these and other outbreaks, some patients were able to reach ambulances at the closest road access point and were taken to established ETUs. In one outbreak , delays in the establishment of an isolation and treatment facility resulted in only one patient being cared for in the facility before the outbreak was over.Over time, the proportion of patients in each outbreak that were isolated and treated increased from a median of 28% in the early outbreaks to 81% in the later outbreaks . The proImplementing an effective rapid response is critical to limiting the magnitude and duration of Ebola outbreaks. The remoteness and complexity of the outbreaks described in this report have posed challenges to rapid response; movement of personnel and supplies was greatly hindered by distance, river crossings, poor or nonexistent roads , and limFour of the six outbreaks that occurred before development of the RITE strategy remained undetected until they were in at least the third generation of transmission, whereas five of the six later outbreaks were detected in the first or second generation. In addition to the RITE strategy, greater community awareness of Ebola helped alert authorities earlier to clusters of unexplained deaths or illness consistent with Ebola and also facilitated faster community acceptance of interventions. Availability of ETU beds for isolation and treatment of patients also improved significantly over the period covered by these outbreaks , and theWhat is already known on this topic?The epidemic in West Africa has resulted in the largest number of Ebola cases in history. Ebola is associated with a high case-fatality rate that can be reduced through supportive care. 
Ebola transmission can be interrupted through isolation of infected patients, infection control, monitoring of patients\u2019 contacts, and safe burial of dead bodies. Remote rural areas pose challenges for rapid isolation and treatment of patients because of their distance, difficult access, and lack of communications infrastructure.What is added by this report?A national strategy in Liberia to coordinate rapid responses to remote outbreaks of Ebola reduced by nearly half the time between the first new case in remote areas and notification of health authorities. As coordination of the rapid response strategy improved over time, the median duration of outbreaks decreased from 53 to 25 days as the number of generations of cases decreased from a median of four to two. The proportion of patients isolated increased from 28% to 81%; survival improved from 13% to 50%.What are the implications for public health practice?Ebola outbreaks in remote rural areas require rapid responses, including the movement of patients to treatment facilities. Interventions can be as simple as arranging safe ambulance transport for patients who might have to walk out of remote areas, but might also require establishment of mobile isolation and treatment facilities if patients are too ill to move or delays in transport are anticipated. Comprehensive and innovative rapid response units can improve outcomes and shorten duration of Ebola outbreaks, and should be employed wherever possible."} +{"text": "Volar locking plate fixation of distal radius fractures is commonly performed because of its good clinical outcomes. The flexor carpi radialis (FCR) approach is one of the most popular approaches to dissecting the volar side of the distal radius because of its simplicity and safety. We describe an extremely rare case of an absent FCR identified during a volar approach for fixation of a distal radius fracture.A 59-year-old woman with distal radius fracture underwent surgery using the usual FCR approach and volar locking plate. We could not identify the absence of the FCR tendon preoperatively because of severe swelling of the distal forearm. At first, we wrongly identified the palmaris longus tendon as the FCR because it was the tendinous structure at the most radial location of the volar distal forearm. When we found the median nerve just radial to the palmaris longus tendon, we were then able to identify the anatomical abnormality in this case. To avoid iatrogenic neurovascular injuries, we changed the approach to the classic Henry\u2019s approach.Although the FCR approach is commonly used for fixation of distal radius fractures because of its simplicity and safety, this is the first report of complete absence of the FCR during the commonly performed volar approach for fixation of a distal radius fracture, to our knowledge. Because the FCR is designated as a favorable landmark because of its superficially palpable location, strong and thick structure, and rare anatomical variations, there is the possibility of iatrogenic complications in cases of the absence of the FCR. We suggest that surgeons should have a detailed knowledge of the range of possible anomalies to complete the fixation of a distal radius fracture safely. Volar locking plate fixation of distal radius fractures is commonly performed because of its good clinical outcomes , 2. 
The Here, we describe an extremely rare case of the absence of the FCR, identified during the volar approach for fixation of a distal radius fracture.A 59-year-old right-handed female desk worker with no significant past medical history suffered a distal radius fracture (AO classification 23-A2 type) of the right hand Fig.\u00a0, fracturFour months after the operation, the patient had no pain or neurologic problems and the X-rays showed complete bone union of the distal radius fracture Fig.\u00a0. AlthougThe FCR originates from the medial epicondyle of the humerus and inserts into the trapezius, the second metacarpal, and the third metacarpal bones, and functionally contributes to the motion in flexion and the radial deviation of the wrist joint . AlthougBecause important neurovascular structures exist close together in the distal volar forearm, there have been many complications reported for volar plating of distal radius fractures, including injuries to the median nerve, the PCB of the median nerve, and the radial artery , 10. TheIn the present case, in addition to the absence of the FCR, we also identified an anomalous FCRB muscle. In general, the FCRB is considered to be an accessory muscle of the FCR that arises from the volar surface of the radius and inserts at various sites, including the base of the metacarpal bone, trapezium, and capitate . AlthougAlthough the FCR approach is commonly used for fixation of distal radius fractures because of its simplicity and safety, various types of anatomical variations and anomalies of the distal volar forearm structures including the median nerve, the PCB of the median nerve, and the flexor tendon have been reported. Because the FCR is designated as a favorable landmark because of its superficially palpable location, strong and thick structure, and rare anatomical variations, there is the possibility of iatrogenic complications in cases where the FCR is absent. We suggest that surgeons should have a detailed knowledge about the range of possible anomalies to complete the fixation of a distal radius fracture safely."} +{"text": "In the second and third sentences of the Results subsection of the Abstract, the sentence should read: AMACR by IHC was significantly associated with increased diagnosis of PCa . Subgroup-analysis showed that findings didn\u2019t substantially change when only Caucasians or Asians were considered.In the first sentence of the Meta-analysis Results subsection of the Evidence Synthesis, the sentence should read: he pooled result revealed that positive AMACR by IHC was significantly associated with increased diagnosis of PCa and Subgroup-analysis showed that findings didn\u2019t substantially change when only Caucasians .In the second sentence of the first paragraph of the Discussion, the sentence should read: AMACR expression by IHC was significantly associated with increased diagnosis of PCa . The overall analysis provided strong replication of the initial findings, confirming the AMACR for PCa."} +{"text": "Peroneal tendon dislocation in association with medial malleolus fracture is a very rare traumatic injury to the ankle. A 19-year old male patient was referred after injury sustained in a motorcycle accident with car, with concomitant traumatic peroneal tendon dislocation and medial malleolus fracture. The possible mechanism of this unusual injury could have been sudden external rotation force to the pronated foot in full dorsiflexed position of the ankle. 
Diagnosis of peroneal tendon subluxation or dislocation should be carefully evaluated in patients with single medial malleolus fracture. Although, it was first described in a ballet dancer, other sport activities like skiing, soccer and basketball could result in traumatic dislocation of the peroneal tendon. Usually peroneal tendon subluxation or dislocation is mistaken or undiagnosed because of similarity of the mechanism of injury to that of lateral ankle sprain and lack of apparent findings in plain radiographs3. We describe a case of minimally-displaced medial malleolus fracture in association with avulsion fracture of superior peroneal retinaculum to show the importance of small flake fractures around the ankle joint.Medial malleolus fracture usually occurs in association with fractures of lateral malleolus, posterior malleolus, proximal fibula, or with ligamentous injuries around the ankle joint. Concomitant traumatic peroneal tendon dislocation and medial malleolus fracture have been rarely reported in the literatureA 19-year old man was referred with pain in left ankle after motor-vehicle accident. On physical examination, swelling and ecchymosis around the ankle joint especially on medial side was obvious. Significant tenderness on medial malleolus and little tenderness on lateral malleolus were noted on palpation of the ankle joint. Neurovascular function of the foot and ankle was normal. Radiographs disclosed minimally-displaced fracture of the medial malleolus and a barely visible small avulsion fracture on the posterodistal part of the lateral malleolus . ComputeAfter obtaining consent from the patient and under general anaesthesia in the operating room, open reduction and internal fixation of the medial malleolus fracture was carried out using a malleolar screw and a Kirschner wire was shortened and buried beneath the skin. Examination during surgery revealed dislocation of peroneal tendons to the anterior aspect of the lateral malleolus with passive full dorsiflexion of the ankle joint, following which, through an incision on the posterior aspect of the distal lateral malleolus, the avulsed superior peroneal retinaculum (SPR) was exposed. The flake of bone and SPR were anchored to the lateral malleolus with two sutures. Stability of the peroneal tendons was fully achieved.Postoperatively, non-weight bearing short leg cast was applied for six weeks followed by a short leg walking slab to initiate range of motion of ankle for four more weeks. At follow-up one year after surgery, the patient was completely satisfied with the healed fracture and tendon injury and had 4. In Grade I, peroneal tendons slip anteriorly on intact fibrocartilaginous ridge of retrofibular groove. A Grade II subluxation of peroneal tendons is characterized by anterior slipping of tendons under elevated fibrocartilaginous ridge of retrofibular groove. In Grade III, avulsion fracture of SPR (the \u201cfleck sign\u201d) from fibula is seen. In the case presented here, the little bone fragment on the posterodistal aspect of fibula showed Grade III peroneal tendon dislocation. Grade III has the lowest incidence (13%) among all grades described by Eckert and Davis.Diagnosis of flake fracture around the ankle joint is important because it usually shows avulsion of an essential ligament or an important retinaculum. Peroneal tendon subluxations have been classified by Eckert and Davis3, the medial malleolus fracture was Type B or Type C by the Herscovici et al classification5. 
These types of medial malleolus fractures occurred with external rotation forces on the pronated foot by the Lauge-Hansen classification. It meant that none of the medial malleolus fractures in the reported cases resulted from shearing force as seen in supination-adduction injuries of the ankle.In order to explore the possible mechanism of this injury, we reviewed the literature. In our case and other reported cases of concomitant medial malleolus fracture and traumatic avulsed SPRPossible mechanism would be sudden external rotation force applied to the pronated foot in full dorsiflexed position of the ankle. When the ankle is in full dorsiflexion with contraction of the peroneal tendons in the active pronated position of the foot, the tendons are in direct forceful contact to the anterior part of SPR which is attached to the fibula. Sudden external rotation force in this position of the foot and ankle could initially tear the anterior inferior tibiofibular ligament and open the syndesmosis from anterior, as described in the second stage of pronation external rotation injury of Lauge-Hansen classification. These changes could finally rotate the fibula externally. Sudden forceful external rotation of the fibula against powerful contraction of peroneal tendons in the fully dorsiflexed ankle could cause avulsion of SPR from fibula (Grade III) or dislocation of peroneal tendons without avulsion fracture (Grade I or II). This theory could explain the mechanism of injury of concomitant traumatic peroneal tendons dislocation and medial malleolus fracture. This is only a theory and it should be tested in research on cadavers.In conclusion, more attention should be paid to the lateral part of the ankle joint in patients with a solitary medial malleolus fracture of Type B or Type C by the Herscovici classification. When any small avulsed bone fragment from the posterior aspect of the lateral malleolus is revealed on radiography, instability of peroneal tendons must be looked for. After fixation of medial malleolus fracture under general anaesthesia, it is important to exclude any possible peroneal tendon subluxation even in the absence of evidence of bone fragment avulsion.The authors declare no conflicts of interest and no external funding was received in the preparation of this report."} +{"text": "In this study, the authors develop an exploratory synthesis of two major health concepts: Antonovsky's sense of coherence and Bandura's beliefs in one's own efficacy. Reinterpretation of each study in the light of the other can lead to greater conceptual development and expand existing knowledge. The mutual themes are presented with an explanation of their contribution to broader conceptual discussions. The existence of some similarities between the two concepts is suggested. Researchers can obtain valuable and additional arguments through cross-fertilization of ideas across presented studies united by shared assumptions. Further research is recommended among various age groups and social backgrounds in order to verify the possible benefits of such theoretical development. Theoretical and practical implications of such a synthesis are presented."} +{"text": "Contrast computed tomography and magnetic resonance imaging are widely used due to its image quality and ability to study pancreatic and peripancreatic morphology. 
The understanding of the various subtypes of the disease and identification of possible complications requires a familiarity with the terminology, which allows effective communication between the different members of the multidisciplinary team. Demonstrate the terminology and parameters to identify the different classifications and findings of the disease based on the international consensus for acute pancreatitis ( Atlanta Classification 2012). Search and analysis of articles in the \"CAPES Portal de Peri\u00f3dicos with headings \"acute pancreatitis\" and \"Atlanta Review\". Were selected 23 articles containing radiological descriptions, management or statistical data related to pathology. Additional statistical data were obtained from Datasus and Population Census 2010. The radiological diagnostic criterion adopted was the Radiology American College system. The \"acute pancreatitis - 2012 Rating: Review Atlanta classification and definitions for international consensus\" tries to eliminate inconsistency and divergence from the determination of uniformity to the radiological findings, especially the terminology related to fluid collections. More broadly as \"pancreatic abscess\" and \"phlegmon\" went into disuse and the evolution of the collection of patient fluids can be described as \"acute peripancreatic collections\", \"acute necrotic collections\", \"pseudocyst\" and \"necrosis pancreatic walled or isolated\". Computed tomography and magnetic resonance represent the best techniques with sequential images available for diagnosis. Standardization of the terminology is critical and should improve the management of patients with multiple professionals care, risk stratification and adequate treatment. Most important causes in children described by the availiable literature are (in order of frequency): biliary disease, medications, idiopathic, systemic diseases, trauma, metabolic disorders, hereditary and infectious causes,The severe form, regardless of the cause, can reach 25-45% of morbidity and mortality. About 5-10% of these individuals develop necrosis and affect their pancreatic parenchyma in 5% of cases, peripancreatic tissue 20% of the cases and both of them in 70%,,Imaging tests have fundamental importance in diagnosis, determination of severity, recognition of complications and the therapeutic choice. 
They have a direct impact on clinically suspected cases and differential diagnosisThe aim of this study was to demonstrate the terminology and the parameters for identifying the different classifications of the disease from the International Consensus for Acute Pancreatitis or persistent (that persists for more than 48 h) and local (liquid or necrotic peripancreatic collections) or systemic complications (which may be related to pre-existing co-morbidities)The choice of imaging technique is dependent on the research reasons, clinical symptoms, duration of symptoms and laboratory findingsThus, its recommended perform the abdominal ultrasound for all patients with first presentation of acute pancreatitis, typical abdominal pain, increased pancreatic amylase and lipase, between 48-72 h of presentation and unknown cause,The analysis of pancreatic morphology by the computed tomography imaging allows the diagnosis, determine the extent and severity of the diseaseClinical presentations that consist of more than 72 h of evolution, critical patients, carriers of critical clinical \"scores\", high severity index, signs of rapid deterioration, systemic inflammatory response syndrome and leukocytosis are the determinants of the precedence of computed tomography with contrast above other techniques2,9,28,32. MRI is the technic of choice for cases that there are limitations or contraindications to the application of computed tomography, need of multiple tests for monitoring disease progress and negative CT results with acute pancreatitis presentations ,The Review of the Atlanta Classification, 2012, subdivides acute pancreatitis in two subtypes: edematous and necrotic. For both presentation the literature elucidates two stages that overlap and are closely related with two peaks of mortality: early and late. The initial phase usually ends at the end of the first week of symptoms onset, however it can reach the second phase with a resolution of pancreatic and peripancreatic ischemia, development of fluid collection or evolution for permanent necrosis and liquefactionIt is mild form of the disease that is usually resolved in the first week. Its main feature is the local or diffuse enlargement of the pancreas without the presence of necrosis. This increase is due to the intense inflammatory process causing interstitial or peripancreatic edema.,,,,Generally it is characterized as enlarged pancreas with normal relative enhancement and regular peripancreatic fat, thickened or ground-glass opacity due to the inflammatory process. The amount of pancreatic fat can be variable but without enhancement,,The intensity of the pancreas at this stage is similar to normal organ. The \"phase in\" generally has enlarged pancreas and attenuated fat. Images in \"phase in\" with fat suppression have the delineation of the pancreas and its enhanced edges. In the pre- contrast phase, the body shows high signal intensity increases monotonically in the post-contrast image (Gadolinium) representing normal pattern of capillarity. image sequences in \" phase out \" are sensitive to the presence of edema or fluid collections,Presentation with worse prognosis, characterized by inflammation with resultant necrosis of pancreatic or peripancreatic tissue. 
The damage to pancreatic perfusion and peripancreatic necrosis signs develop over several days, although the usage of early performed the contrasted CT may underestimate the severity of the disease,Both computed tomography and magnetic resonance imaging are essential to obtain suitable images of the arterial phase, since the maximum highlight the pancreas is obtained in the late arterial phase and the largest signal difference between viable and necrotic tissue is evident in that stageAfter contrast administration (Gadolinium) the findings include the highlight of parenchymal areas compromised. Changing the infusion and the formation of peripancreatic fluids collections can take several days to be evidenced by imaging.,,,,,Necrosis can be identified as areas of hypointensity on \"phase in\" and increased signal intensity areas on \"phase out\", both associated with well defined areas of not enhanced parenchyma in postcontrast enhanced sequences (Gadolinium)It is very important to differentiate fluid collections composed only of exudative fluid from those that have solid components from the necrosis process. The latest revision of Atlanta, 2012, uses the following terms for collections classification: \"acute peripancreatic collections\", \"acute necrotic collections\", \"pseudocyst\", \"pancreatic necrosis walled or isolated\" and \"infected pancreatic necrosis.\",,,,,,,Defined as a collection of fluid that develops during the initial phase of the disease, most cases after 48 h of onset of symptomsCharacterized as fluid collection, single or multiple, homogeneous and with low attenuation, without well-defined and confined in normal retroperitoneal fascial planes,Images following \"phase out\" sequence are very sensitive to peripancreatic fluids, which are evidenced by an increase in signal intensity. The sequence \"phase in\" shows hypointense signal on a background and hyperintense fat,,The term refers to fluid collections in peripancreatic region that persist for more than four weeks, form visible wall that imprisons the content and have no solid componentMost pseudocysts resolve spontaneously; however, bleeding and infections may complicate the condition of the patient. Infected pseudocysts may have gas in computed tomography,,In case of suspicion and lack of characteristic clinical findings, it is necessary to perform a fine needle aspiration and morphological characterization of the contentThere are homogeneous collections of low attenuation with uniform capsular enhancement. The increase in intensity is typically observed during the interstitial phase in detriment of the presence of granulation tissue .,,,Sequential images in \"phase in\" show low signal intensity and in \"phase out\" usually have homogeneous increased signal intensity. The walls have minimal enhancement after contrast, due to the presence of fibrotic tissue,,,,,The term refers to collections containing liquid and necrotic tissue, which can be derived from the pancreatic parenchyma or adjacent tissue present in both intrapancreatic as peripancreatic region, and in most cases maintains communication with the pancreatic duct or its ramifications,,Magnetic resonance imaging has much higher sensitivity in the detection of necrotic tissue when compared to computed tomography,,,The main feature is the presence of heterogeneous attenuation, variable, higher than typical mitigations from only fluid collections. They may present as homogeneous attenuation without enhancement during the first week. 
The amount of solid content is variable and may be loculated,Necrotic debris are generally viewed as irregular regions of low intensity . Sequenc,,,Necrotic collections develop reactive and thick fibrotic wall that stores necrotic content inside after four weeks of evolution. The Atlanta Review uses the term \"inflammatory wall\" to describe this kind of collection. It has a higher incidence in the tail and body of the pancreas,,,It is presented as heterogeneous fluid and solid mitigations with different degrees of loculations with wall encapsulating well defined, which can extend to both pancreatic tissue as the extrapancreatic,,The sensitivity of magnetic resonance helps minimize diagnostic errors. Generally there are areas with heterogeneous intensity isolated by an intense accent wall in post-contrast, suggestive of isolation with solid and liquid content,,,,The development of secondary infection in pancreatic necrosis is associated with high morbidity and mortalityThe diagnosis of infection can be accomplished by gas visualization in both techniques. The extraluminal gas present in areas of necrosis might not form air-fluid levels depending on the stage of infection and the amount of necrotic tissue and fluid. In cases of doubt, confirmation can be obtained by fine-needle aspiration and microscopic analysis of the fluid or culture.Imaging tests are essential in the diagnosis and staging of acute pancreatitis. Computed tomography and magnetic resonance imaging are widely used, representing the best techniques with sequential cuts available for diagnosis. Tomography is the technique with greater acceptability and usage; however, MRI has the advantage in situations with CT contraindication and thorough soft tissue differentiation. The adequacy of terminology is critical and should facilitate the management of patients with multiple professionals, risk stratification and adequate treatment."} +{"text": "Six DVTs developed in the stocking group and 11 in the non-stocking group. The results suggest that the use of stockings reduces the incidence of DVT when added to herparin but the difference is not statistically significant. To obtain a predictive index for the development of DVT, discriminant analysis was applied to the control and stocking groups separately and combined. Five simple clinical variables gave a true positive prediction rate, for the combined group, of 94% and a false positive prediction rate of 26%.In a single-centre prospective trial 200 consecutive patients undergoing thoracic surgery were randomised to receive one of two prophylactic regimes against deep vein thrombosis (DVT). These were 5000 units of subcutaneous heparin twice a day, alone or combined with the wearing of graded compression stockings. The diagnosis of DVT was made clinically and with"} +{"text": "The concept of tissue or compartmental immunity and its importance in the development of inflammatory or infectious diseases was introduced and gained strength following publication of the works of Engwerda and Kaye . From thAll these studies call attention to factors described recently and studied within the context of inflammatory skin diseases and their role in the evolution of these diseases. We believe that these works will give new insights into the complex dynamics of inflammatory and infectious cutaneous diseases. These concepts may serve as the basis for the development of new experimental models and may open possibilities for future investigations.Juarez A. S. QuaresmaMirian N. 
SottoAnna Balato"} +{"text": "GHEs incorporate pipes with a circulating (carrier) fluid, exchanging heat between the ground and the building. The data show the average and inlet temperatures of the carrier fluid circulating in the pipes embedded in the GHEs (which directly relate to the performance of these systems). These temperatures were generated using detailed finite element modelling and comprise part of the daily output of various one-year simulations, accounting for numerous design parameters (including different pipe geometries) and ground conditions. An expanded explanation of the data as well as comprehensive analyses on how they were used can be found in the article titled Specifications TableValue of the data\u2022The data were generated using complex and computationally expensive numerical methods and can be of use to researchers that are interested in modelling the heat transfer process in shallow geothermal systems.\u2022The data allow researchers to further expand on the analyses presented in \u2022The range of input parameters used to generate the data allow researchers to investigate different aspects of this technology and the importance of these parameters.\u2022The dataset that incorporates randomness can lead to a more extensive statistical analysis of the impact of randomness on these systems.1The data describe the operation of vertical ground heat exchangers (GHE), used in shallow geothermal technologies to provide efficient and renewable heating and cooling to buildings. A modelled GHE consists of a borehole that has pipe loops embedded within it and transfers heat to and from the ground. The data of this article describe the daily average temperature of the circulating (carrier) fluid in the pipes (average of inlet and outlet) as well as the inlet temperature from which the outlet and the thermal load over time can be derived. These temperatures result from the simulated annual operation of various geothermal systems, each having different design parameters and/or ground conditions, ensuring a large number of annual datasets that satisfy a wide range of possible designs and/or locations. The data provide insights regarding the performance and efficiency of shallow geothermal systems.2The data were generated using a validated finite element numerical methodology, and applied through the software COMSOL Multiphysics, which was developed at the University of Melbourne \u03bbgrout) and For each simulation, different design parameters and conditions were used. Overall, the data are separated into three main categories, based on the geometry of the pipes within the GHE, describing the potential effect on the system performance when the placement of the pipes along the depth of the GHE is not as expected or varies. The first category describes the pipes being fixed in place along the depth of the GHE (labelled \u201cStraight\u201d), the second category describes the pipes moving in a sinusoidal pattern along the depth of the GHE (labelled \u201cVariable\u201d) and the third category describes the pipes moving randomly along the depth of the GHE (labelled \u201cRandom\u201d)."} +{"text": "BAD-responders showed a decline of the MMSE score together with a progressive impairment of executive functions. 
A voxel-based morphometry investigation (VBM), at the time of the second neuropsychological assessment, showed that the BAD-responders had larger grey and white matter atrophies involving the substantia innominata of Meynert bilaterally, the ventral part of caudate nuclei and the left uncinate fasciculus, brain areas belonging to the cholinergic pathways. A more widespread degeneration of the central cholinergic pathways may explain the lack of donepezil efficacy in those patients not responding to a treatment that operates on the grounds that some degree of endogeneous release of acetylcholine is still available.We explored the neuropsychological and neuromorphometrical differences between probable Alzheimer's disease patients showing a good or a bad response to nine months treatment with donepezil. Before treatment, the neuropsychological profile of the two patient groups was perfectly matched. By the ninth month after treatment, the"} +{"text": "Sociodemographic factors, alcohol and drug intake, and maternal health are known to be associated with adverse outcomes in pregnancy for women with severe mental illness in addition to their use of psychotropic medication. In this study, we describe the demographic characteristics of women hospitalized for severe mental illness along with their use of medication and other drugs during the pregnancy period.A clinical case note review of women with psychosis who were hospitalized at the State Psychiatric Hospital in Western Australia during 1966\u20131996, gave birth between 1980 and 1992, and received psychiatric treatment during the pregnancy period. The mother\u2019s clinical information was available from the case notes and the midwives record. The demographic characteristics of the mothers were described together with their hospitalization pattern and their medication and substance use during the pregnancy period.A total of 428 mothers with a history of severe mental illness were identified who gave birth during 1980\u20131992. Of these, 164 mothers received psychiatric care during the pregnancy period. One hundred thirty-two had taken psychotropic medication during this period. Mothers who were married, of aboriginal status or living in regional and remote areas appeared less likely to be hospitalized during the pregnancy period, while older mothers and those with a diagnosis of schizophrenia were more likely to be hospitalized. The number of mothers taking psychotropic medication in the first trimester of pregnancy was reduced compared to the previous 6\u2009months. The decline in the number taking substances over the same period was not significant. In all, 16% of the women attempted suicide during the pregnancy period and 10% non-suicidal self-injury.The women demonstrate a pattern of decreased use of psychotropic medication use from the period before pregnancy to the first trimester of pregnancy. Our data highlight the importance of women with severe mental illness receiving regular ongoing monitoring and support from their psychiatrist during pregnancy regarding the level of medication required as well as counseling with regard to substance use, non-suicidal self-injury, and attempted suicide. Sociodemographic factors , alcoholvia linkage of the WA Midwives Notification System and the WA Mental Health Information System (MHIS). Further information was collected by clinical case note review for the women who were treated by psychiatric inpatient or outpatient services during the pregnancy period. 
The study focuses on this subgroup of women.Women with a diagnosis of schizophrenia, bipolar disorder, or unipolar major depression who were treated at the State Psychiatric Hospital of Western Australia between 1966 and 1996 and gave birth between 1980 and 1992 were identified The study had University of Western Australia Human Research Ethics Committee approval as well as specific approvals from the individual inpatient and outpatient mental health services at which the clinical records were held. The data were analyzed using SAS version 9.4 .The International Classification of Diseases, ninth Revision codes were useNo further information was collected for the women who did not receive psychiatric inpatient or outpatient care during the study, but their demographic information was available for comparison with those who did receive psychiatric care.Demographic and clinical information was compared for all mothers included in the study, those receiving psychiatric care during their pregnancy period and those receiving psychiatric care and who took medication during the pregnancy period. For comparison of demographic characteristics, socioeconomic status was measured using the Australian Bureau of Statistics Socioeconomic Indices for Areas\u2014index of relative disadvantage and acceThe number of births where the mother was hospitalized voluntarily with a psychiatric diagnosis was calculated as was the average length of stay per birth in days for these admissions. This was repeated for involuntary admissions.t-test for paired samples.The number of mothers taking psychotropic medication and substances was calculated for the 6-month period before pregnancy and for each trimester. Whether or not there was a significant change in the proportion of women taking medication or substances prepregnancy and in the first trimester of pregnancy and then between the first and second trimester and the second and third trimester was tested using a The selection of the 428 mothers into the study is illustrated in Figure de facto relationship were in the group who were hospitalized compared to those who were single, divorced, or separated. The proportions of aboriginal mothers and those living in regional and remote areas were also lower in the hospitalized group. Mothers aged over 34\u2009years appeared more likely to be hospitalized than their younger counterparts, as were those living in the city. Parity and low socioeconomic status appeared to have little effect on the risk of hospitalization during the pregnancy period. Table The demographic characteristics of the mother relating to each birth are shown in Table Table In this descriptive study, we have documented the sociodemographic circumstances and the use of psychotropic medication and substances during pregnancy for women with severe mental illness who received psychiatric inpatient or outpatient services during the pregnancy period.The majority of the women who were hospitalized took psychotropic medication (80%), which is indicative of the severity of the disease, with antipsychotics recorded as the dominant medication category (73%) followed by mood stabilizers (21%), anxiolytic/hypnotics (16%), antidepressants (12%), and benzodiazepines (9%). Our data show fewer women taking medication during pregnancy than in the period immediately before pregnancy. 
This may be because the women are concerned about the potential effect of their medication on the developing fetus , or on tA similar study in Germany on a cohPsychotic illness is a known risk factor for substance use , 21. NinThe risk of suicide and non-suicidal self-injury are reduced during pregnancy in the general population , 24. As The demographic profile of the women shows little difference between those taking and not taking psychotropic medication. Differences between those receiving and not receiving treatment, however, may be associated with the location of the Statewide psychiatric hospital, which is in the metropolitan area and thus less accessible to those from regional and remote areas. As the majority of the aboriginal population live in rural and remote areas, this is likely to be why aboriginal mothers have a lower treatment rate. Higher treatment rates for single mothers are likely to be associated with a low level of support available at home . The higThere are a number of potential limitations to this study. The data are from the period 1980\u20131992 when the newer forms of antipsychotics and antidepressants were not widely available: atypical antipsychotics and SSRIs were introduced in the early nineties . Data weThis study on a cohort of 428 women of whom 164 received inpatient or outpatient psychiatric care for severe mental illness during the course of their pregnancy found that despite the severity of their illness, the women demonstrate a pattern of decreased use of psychotropic medication use from the period before pregnancy to the first trimester of pregnancy. Our study cannot provide data on the reason for this change, but we speculate that it may reflect motivation by the treating clinician, the mother, or both to maximize outcomes for the babies. However, of note, recently published clinical guidelines for the The study had University of Western Australia Human Research Ethics Committee approval as well as specific approvals from the individual inpatient and outpatient mental health services at which the clinical records were held. Participant consent was not required for this study of medical records. Requirement for consent was waived on the basis of the study being low risk, the benefits from the research justified any risks of harm associated with not seeking consent and privacy and confidentiality were protected by meeting the required standards for data storage and security.KB analyzed the data and drafted the paper, AJ was responsible for conception and design of the study and coauthored the paper, VM was involved in the study design, supervised local data collection, performed overall data management, supervised the data analysis, and coauthored the paper. JD and JG conducted the case note review and coauthored the paper. All the authors read and approved the final manuscript.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "As a result, current treatments are often palliative and ignore the driving biological processes. In this context, the elucidation of the cause-and-effect relationships between cardiovascular biology and hemodynamics has the potential to advance the understanding of disease progression and to enable new diagnosis and treatments.Mechanical forces are powerful regulators of biology and disease. 
In the vasculature, the expression of particular cellular phenotypes appears to depend not only on a combination of intrinsic genetically programed biology but also on local hemodynamic environmental factors induced by blood flow responsible for the pathological response and their normalization could potentially slow down the disease process and limit or completely block its development Figure D.via flow normalization have been recently developed and are currently being tested. One such effort consists of implanting a valve device at the anastomosis in order to isolate the arteriovenous shunt from the rest of the circulation between hemodialysis sessions and to allow the passage of blood through the graft during hemodialysis sessions hemodynamic stress state and of the hemodynamic alterations triggering a pathological state. The characterization of blood flow is challenging due to its three-dimensionality, unsteadiness, turbulence, and strong coupling with the surrounding compliant vasculature were able to generate only basic mechanical stimuli , current bioreactors are able to mimic more closely the complexity of the native hemodynamic environment. Examples of such devices are single and double cone-and-plate bioreactors to expose vascular tissue to single-sided or double-sided pulsatile WSS (Sucosky et al., While mechanobiological studies have already contributed immensely to the understanding of cardiovascular pathologies, only few have been translated into clinical solutions. Major challenges include the complexity of interpretation of biological data, the assessment of the isolated and synergistic roles of mechanosensitive molecules in the disease process, the identification of effective, but safe, molecular inhibitors aimed at blocking the mechanobiological cascade, and the design of effective procedures and devices to normalize blood flow. While those complex issues are still current, the recent realization of the potential use of mechanobiology as a discovery tool for novel treatments and diagnosis modalities has motivated some collaborative efforts between the clinical and engineering fields. Those synergies are necessary to complement the basic science of mechanobiology and to elevate it to effective clinical solutions.With the continuous progress in flow characterization techniques, bioreactor technologies and biological assessment methodologies, mechanobiology has emerged as a potential tool to deliver the next generation of therapies in cardiovascular disease. The characterization of mechanical cues promoting cardiovascular pathogenesis, the identification and modeling of key mechanosensitive molecules and molecular pathways involved in the early stage of disease, and the integration of this knowledge into patient-specific flow models could provide new therapeutic modalities and predictive capabilities that will transform clinical decision-making and personalized care in cardiovascular medicine.SA and AM wrote the paper and share first authorship. PS wrote the paper and conceived the work.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "The Addis Declaration adopted EPI is a legacy of the successful campaign that led to the eradication of smallpox . 
In 1974EPI then went through a period of maturation that involved various levels of sophistication aimed at improving coverage, reaching previously unreached children, ensuring equitable access to vaccine and introducing new or underused vaccines. Some events have accelerated the transformation of EPI more than others; the creation of the Global Alliance for Vaccine and Immunization (Gavi) in 2000 marked uAlbeit large cross-national variations in national immunization coverage across the continent, EPI in it forth decade boasts commendable achievements in VPD control and elimination. The variable performance of the EPI program across Africa reflects most often a variety of countries implementation frameworks where factors such as insecurity/inaccessibility, structural health systems performance and overall governance environment play a key role.Reaching more people, with more vaccine, in an equitable way remains by far the biggest challenge of the EPI program in Africa as it enters adulthood: since the early eighties when EPI data were systematically compiled in the WHO/UNICEF Estimates for National Immunization Coverages (WUENIC) , the aveThe performance of the EPI program cannot be decoupled from the performance of the health system in which it emanates (operating environment); poor EPI performance often translates structural weaknesses in health systems. Sustained and predictable financing, peace and security, strategic partnerships and overall governance mind-set are important determinants of this operating environment. Health policy frameworks such as Agenda 2063: the Africa We Want and otheThe advent of new vaccine technologies have the potential to expand the reach of the program by freeing it from stringent cold chain and technical know-how requirements.The introduction of more new or underused vaccines: will extend the opportunity to protect more people against more diseases throughout the course of lifeThe adoption of information technology around supply chain management, population based registries, personalized vaccination records, reminder systems, etc\u2026 will push the EPI program into the digital age.EPI will be humbled by the imperative to find innovative ways to sustain community awareness on the benefits of vaccination, to increase demand and offer of services, to strengthen confidence in the program and to better communicate risk.The future health landscape of Africa in full of challenges: the prevalence of non-communicable diseases will continue to rise; the burden of cancer is expected to double between 2008 and 2020; injury and trauma will increasingly contribute to disease burden; challenges link to climate change, food insecurity, conflicts and water scarcity will increase. With this in mind, vaccines provided through EPI are the weapons at hands to remove VPD from the pool of health challenges the continent will grapple with in the future; this opportunity cannot be wasted.The author declares no competing interest."} +{"text": "The characteristics of each lineage and the proposed genetic cascades involved in their formation are reviewed. In addition, a list of the current markers used to identify these lineages"} +{"text": "Vector-borne diseases (VBDs) represent a great proportion of the neglected tropical diseases (NTDs) in tropical regions of the world, where they disproportionately affect the poorest and most disadvantaged populations. 
That said, in recent years, the expansion of vectors from the tropics has placed many more people living in the temperate regions of the world at risk of contracting VBDs. The expansion of VBDs is occurring at a time when unprecedented discoveries are being made in vector biology in the areas of genetics, genomics, and physiology. With the goal of highlighting the significance of VBDs in the new century and emphasizing the role of basic vector research in the development of future disease control methods and for improving upon the existing approaches, we have advanced a call for papers that focused on vector research as part of our 10th anniversary celebration. It is our hope that these new discoveries in vector biology research will pave the way for translational methods to combat the old foes in the new climate.According to the World Health Organization (WHO), nearly half of the world\u2019s population is infected with at least one type of vector-borne pathogen . Among tTrypanosoma brucei as the cause of cattle trypanosomiasis (cattle nagana) in 1895, and the British colonial surgeon Robert Michael Forde was the first to observe trypanosomes in the human blood in 1901. Both David Bruce and the German military surgeon Friedrich Karl Kleine provided conclusive evidence showing the cyclical transmission of T. brucei in tsetse flies at the beginning of 1900 response, and how the enzyme catalase protects mosquitoes from oxidative stress and helps dengue virus survival in the insect gut. We have also received a number of papers on triatomine-transmitted Chagas disease, which highlights the increasing significance of this disease burden in South America. These papers highlight the relevance of basic research on vector\u2013pathogen interactions to find new venues to control transmission of VBDs. Many of the articles highlight the complexity of vector\u2013pathogen\u2013host interactions, which can serve as novel points of interference for control, including the role of pathogen-produced exosomes at the bite site, the role of vector saliva pre-exposure in disease, and viral modification of host gene expression.Wolbachia endosymbionts have gained attention because of the various host reproductive modifications they induce in order to enable their transmission (and that of the infected vector) through natural populations. Mosquitoes infected with a novel strain of Wolbachia have also been shown to resist transmission of dengue [Wolbachia endosymbiont\u2013mediated pathogen interference, is currently being tested in trials against dengue in Southeast Asia and in implementation trials against Zika virus disease in Colombia and Brazil. Our collection highlights several papers that investigate Wolbachia effects on mosquito disease transmission traits and model the success of these applications.Research in vector microbiota continues to attract attention as commensal and symbiotic microbes harbored by insects can influence various disease transmission traits, including an effect on the gut barriers to prevent malaria infection in mosquitoes. In the last decade, the f dengue , chikungf dengue , and Zikf dengue . One traImproved knowledge of climate change effects and vector habitats, coupled with mathematical modeling efforts, can predict transmission dynamics and enhance the efficacy of ongoing or future control efforts in the field. 
The diversity of vectors and diseases presented in the Vector Biology collection emphasizes the growing significance of this area and these research investigations towards effective control. The ultimate goal is for the accumulating knowledge on vector\u2013pathogen dynamics to inform and guide public health decisions for optimal outcomes. Several of the papers in this collection begin to make the link between basic science- and field-based discoveries and policy decisions. Additional papers, which are currently under review in our system, will be added to the collection.VBDs represent a great proportion of the NTDs. The high-quality vector biology research presented by various research groups from around the globe that includes basic research, field work, modeling, and translational research demonstrates the commitment to find new avenues towards the control of these NTDs. Importantly, this collection reinforces the commitment of our editorial board to continue publishing high-quality and impactful research in vector biology as they impact control of VBDs."} +{"text": "The heart has a limited ability to regenerate. It is important to identify therapeutic strategies that enhance cardiac regeneration in order to replace cardiomyocytes lost during the progression of heart failure. Cardiac progenitor cells are interesting targets for new regenerative therapies because they are self-renewing, multipotent cells located in the heart. Cardiac side population cells (cSPCs), the first cardiac progenitor cells identified in the adult heart, have the ability to differentiate into cardiomyocytes, endothelial cells, smooth muscle cells, and fibroblasts. They become activated in response to cardiac injury and transplantation of cSPCs into the injured heart improves cardiac function. In this review, we will discuss the current literature on the progenitor cell properties and therapeutic potential of cSPCs. This body of work demonstrates the great promise cSPCs hold as targets for new regenerative strategies. Heart failure remains a pressing healthcare problem because of its increasing prevalence and high rate of morbidity and mortality were the first population of cardiac progenitor cells identified in the heart that possess the three key progenitor cell properties discussed above . Originally, P-glycoprotein and Abcg2 were identified as proteins that confer chemoresistance by extruding drugs out of the cytoplasm of cancer cells were implanted to the time the LVADs were removed at the time of cardiac transplantation .The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "IgA nephropathy (IgAN) is the most common glomerulopathy worldwide. After the development of Oxford classification of IgAN, it was established that pathological parameters can also be useful, independent of all clinical or laboratory parameters, in predicting the future course of the disease in individual patients. During the recent past, a novel fragment of the complement activation cascade became the focus of research as a biomarker of current or recent antibody action. This fragment is known as C4d and is now widely used in the investigation of solid organ graft pathology. 
The current enthusiasm surrounding the role of C4d in the diagnosis and prognosis of both primary and secondary glomerular diseases needs a cautionary approach, and further investigations are needed to define the exact implications of C4d in clinical practice.IgA nephropathy (IgAN) is the most common glomerulopathy worldwide. It is well known that the complement system of plasma proteins plays an important role in the pathogenesis and pathology of many renal diseases, especially glomerular diseases. Rath et al in the current issue of this journal have investigated the role of C4d deposits in the glomeruli and peritubular capillaries of patients with IgAN. MM was the single author of the manuscript.Ethical issues have been completely observed by the author.The author declared no competing interests.None."} +{"text": "Primary Ciliary Dyskinesia-29 (CILD29) is an autosomal recessive disorder characterized by early onset, progressive and irreversible lung damage due to recurrent respiratory infections caused by the inability of the multiciliated cells (MCC) present in the respiratory epithelium to clear mucus and particles trapped in the upper airways. The airways of these patients show a nearly complete lack of cilia on the MCCs or have a few dysfunctional, motile cilia. The molecular cause of CILD29 is mutations in the coding region of the CCNO gene, encoding an atypical cyclin necessary for the generation of multiple motile cilia. In addition to the respiratory airways, MCCs are present in the male and female reproductive epithelium and in the ependyma, the epithelium lining the brain ventricles, accomplishing different functions depending on their location. The mouse oviduct coils and the human Fallopian tube plicae are covered by a single layer of multiciliated columnar epithelium interspersed with non-ciliated, secretory cells. This structure allows the propulsion of gametes and embryos and supports fertilization and early embryogenesis. We recently described that the complete lack of CCNO in the mouse results in male and female infertility. Ccno-/- mice present few, abnormal cilia in the ependymal MCCs and develop hydrocephalus with higher penetrance than reported for the conditional knockout model or in the RGMC patients carrying mutations in CCNO or other genes required to generate MCCs such as MCIDAS. Ccno-/- mice develop severe communicating hydrocephalus within the first month of postnatal life. Mice surviving this period can live for as long as their Ccno+/+ or Ccno+/- siblings without overt neurological defects, most likely due to compensation of the increasing intracranial pressure by thinning of the brain parenchyma.In the ependyma, the MCCs are responsible for recirculating the cerebrospinal fluid (CSF). CSF is produced from blood plasma in the choroid plexuses within the brain ventricles and it is reabsorbed in the arachnoid granulations through the superior sagittal sinus and into the venous circulation. Defective recirculation of the CSF leads to its accumulation and the dilatation of the brain ventricles, a condition called hydrocephalus. Hydrocephalus can arise during embryonic development or can be acquired postnatally. Most of these patients show increased intracranial pressure and often need urgent intervention to drain the CSF. The lack of cilia or the presence of immotile cilia in the ependymal MCCs is one of the causes of congenital hydrocephalus.
HenNormal Pressure Hydrocephalus (NPH) is a form of communicating hydrocephalus of unknown molecular cause that can progress over decades without severe neurological manifestations. The prevalence of idiopathic NPH in the elderly is estimated to be up to of 3% in patients greater than 65 years old. However, the insidious onset of symptoms and the lack of acute episodes lead to hydrocephalus often being mistaken for other chronic neurodegenerative diseases such as Parkinson\u2019s and Alzheimer\u2019s diseases. Because of the lack of consistent clinical symptoms, it is very likely that the number of diagnosed patients is greatly underestimated [-/-Ccno mice and the physiopathological findings in human NPH patients, it is tempting to speculate that CCNO haploinsufficiency could represent a molecular cause for idiopathic NPH. Interestingly,Because of the similarities between the CNS phenotype of constitutive +/-Ccno mice also develop hydrocephalus, albeit with lower penetrance [netrance . It is pCCNO mutation carriers could be important as they may be at risk of developing NPH, a disease which has an effective treatment but is often misdiagnosed as other incurable and devastating neurodegenerative disorders.Our results therefore suggest that the identification and follow up of heterozygous"} +{"text": "In the current work, a human systems pharmacology model for gastrointestinal absorption and subsequent disposition of small molecules (monocarboxylic acids with molecular weight < 200 Da) was developed with an application to a ketone monoester. The systems model was developed by collating the information from the literature and knowledge gained from empirical population modelling of the clinical data. In silico knockout variants of this systems model were used to explore the mechanism of gastrointestinal absorption of ketones. The knockouts included active absorption across different regions in the gut and also a passive diffusion knockout, giving 10 gut knockouts in total. Exploration of knockout variants has suggested that there are at least three distinct regions in the gut that contribute to absorption of ketones. Passive diffusion predominates in the proximal gut and active processes contribute to the absorption of ketones in the distal gut. Low doses are predominantly absorbed from the proximal gut by passive diffusion whereas high doses are absorbed across all sites in the gut. This work has provided mechanistic insight into the absorption process of ketones, in the form of unique in silico knockouts that have potential for application with other therapeutics. Future studies on absorption process of ketones are suggested to substantiate findings in this study.Gastrointestinal absorption and disposition of ketones is complex. Recent work describing the pharmacokinetics (PK) of Several modifications to these models are now available to study mechanistic aspects of drug transport. Of these, in vitro Caco-2 efflux transporter knockdown cell models, transfected models [in silico mechanistic models for predicting oral drug absorption. There are proprietary models used for predictive purposes in drug development, including the ACAT model [\u2122, the ADAM model [\u00ae and the GITA model [in silico systems pharmacology model that includes knockout variants as a means of investigating mechanisms of intestinal drug absorption. 
We use d-\u03b2-hydroxybutyrate (BHB) to demonstrate the approach.BHB, acetoacetate (AcAc) and acetone are endogenous metabolites of fatty acid metabolism, produced in the liver in response to starvation. BHB and AcAc are metabolically important. Typical concentrations of total endogenous ketones in blood under normal conditions are less than 0.5 mM. Blood ketone concentrations equivalent to starvation-induced ketosis may have therapeutic utility in a number of clinical conditions, specifically in treating neurological disorders. One such intervention is a ketone monoester ((R)-3-hydroxybutyl (R)-3-hydroxybutyrate) that hydrolyses in vivo to produce BHB. Defining the in silico structure of the biological component and incorporating a systems pharmacology approach to represent the therapeutic target will provide greater opportunity in exploring the contribution and influence of therapeutics on biological systems. In this work, we demonstrate this by combining different modules of varying activity (for transport) across the gut to generate a simplified whole human system. This model is subsequently explored to study the influence of mechanistic processes on absorption of ketones.Systems pharmacology provides an interface between systems biology and pharmacology. The objectives of this work were 1) to develop a systems pharmacology model that represents the gastrointestinal absorption of the ketone monoester and its systemic catabolism and 2) to explore mechanistic features of gastrointestinal absorption of ketones using in silico knockout variants.A literature search was conducted using Medline, Embase and Cochrane libraries using defined keywords to identify and collate information on the production, transport, and disposition of ketones (both endogenous and exogenous) and the factors governing these processes. The review focussed on identifying the metabolic processes and sites of metabolism, mechanisms involved in the transport of ketones across the gastrointestinal tract and consumption in the tissues for energy production. Information on the pathway of endogenous production and associated feedback mechanisms was also identified.The systems model was written as a set of ordinary differential equations (ODEs) in MATLAB\u00ae. All reaction rates are defined as moles/time (mmol/h). Reactions and transport of ketones are defined either as first-order or saturable processes. The general form of the equation used in this model to represent the movement of a molecule from a given state is defined in terms of f_i,a, the fractional millimoles of the ith state undertaking the ath process (e.g. a transport or metabolism process), A_i, the amount (mmol) of the ith state, and J_i, the flux rate or rate constant of a saturable process that is dependent on the amount (mmol) at time t. All saturable transport processes and competitive inhibition for transport of substances were based on fractional receptor/transporter occupancy, which is similar to the Michaelis-Menten equation used to describe saturable enzyme kinetics.
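A minimal sketch of the two rate forms just described, written here in Python rather than the authors' MATLAB code; the function names and the illustrative parameter values are ours, and only the functional form (fractional transporter occupancy, analogous to Michaelis-Menten kinetics) is taken from the text:

```python
# Sketch of the saturable and competitive-inhibition rate forms described above.
# Not the authors' MATLAB implementation; parameter values below are hypothetical.

def saturable_rate(A_i, vmax_i, km_i, f_ia=1.0):
    """Flux (mmol/h) of a saturable process acting on amount A_i (mmol).

    f_ia  : fraction of state i routed through this process (molar balance)
    vmax_i: maximum velocity of the process (mmol/h)
    km_i  : amount (mmol) at which the velocity is half of vmax_i
    """
    return f_ia * vmax_i * A_i / (km_i + A_i)


def competitive_rate(A_i, A_j, vmax_i, km_i, km_j, f_ia=1.0):
    """Saturable flux for state i when state j competes for the same transporter."""
    return f_ia * vmax_i * A_i / (km_i * (1.0 + A_j / km_j) + A_i)


if __name__ == "__main__":
    # Illustrative numbers only: 2 mmol of substrate, hypothetical Vmax and km.
    print(saturable_rate(A_i=2.0, vmax_i=5.0, km_i=1.5))          # ~2.86 mmol/h
    print(competitive_rate(A_i=2.0, A_j=1.0, vmax_i=5.0,
                           km_i=1.5, km_j=0.8))                   # competition slows the flux
```

Setting A_j = 0 in competitive_rate recovers saturable_rate, mirroring the way the text treats competition as an inhibitory modification of the same occupancy expression.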
In this formulation, Vmax,i is the maximum velocity of the saturable process for the ith state and km,i is the Michaelis-Menten constant that represents the amount (mmol) of the state where the velocity is half of the maximum. Note that all reactions/substrate transfers in the model are associated with a fraction to account for a complete molar balance. Competitive inhibitory processes for transport of the ith state (competition exerted by the jth state) are written in the same saturable form with an inhibition term, where A_j is the amount (mmol) of the jth state and km,j (in mmol) is the Michaelis-Menten constant for the jth state. Production of endogenous ketones is assumed to be constant with zero-order input. The negative feedback effect of circulating factors in the blood on endogenous production is defined by a saturable process. The complete set of ODEs is provided in the supplementary material.Model parameters were either taken directly from the literature (km), derived from in vitro and/or preclinical data (Vmax), or estimated with the fmincon algorithm in MATLAB\u00ae from the empirical data; volumes of tissues and organs were obtained from physiological parameters reported in the literature or scaled from preclinical data. The remaining parameter values are provided in the supplementary data.The model was calibrated by comparing the predictions (blood concentrations of BHB) of the systems model against empirical data at two dose levels (192 and 573 mg/kg of the ketone monoester). The data arose from a balanced design, where each individual at each dose level provided a pharmacokinetic blood sample at each time point. Since these data had several subjects per dose level, the mean of the data was selected at each time point. These two doses were the extreme doses used in a clinical study.In order to explore the influence and contribution of each subcomponent of the gut and the role of specific absorption processes in the gut, knockout variants for absorption were created for regions and processes in the gut (see \u2018Exploration of knockout variants\u2019 in the Results section). Knockouts were created by setting an indicator variable that governs a whole process to 0 or 1. Separate knockouts of apical and basolateral transporters were created, and a \u2018passive absorption knockout\u2019 was created by eliminating passive absorption from the model. A total of 10 knockout variants were identified across the length of the gut. These knockout variants were tested either alone or in various combinations (see \u2018Exploration of knockout variants\u2019 in the Results section) in order to understand the contribution and influence of specific subcomponents and processes in the gut on the gastrointestinal absorption of ketones. The influence of processes and regions of the gut on the absorption of ketones using knockouts was studied as 1) a qualitative assessment and 2) a quantitative assessment.All knockout variants were tested by studying simulated blood BHB concentration-time profiles. During exploration using knockout variants, model simulations were overlaid against the empirical data and compared for deviation in the profiles. Whenever the simulated profile deviated from the empirical profile in a knockout plot, the corresponding process/region was deemed to be influential. The greater the deviation of simulations from empirical data, the larger the contribution of the process/region to the absorption process.The influence of knockouts on the absorption process was studied quantitatively by computing the fractional AUC (area under the curve for BHB in blood) and the shift in Tmax (time to maximum concentration of BHB in the blood).
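A minimal Python sketch of how these two metrics could be computed from simulated blood BHB profiles; the exact formula is not reproduced in this text, so the (AUC_full - AUC_knockout)/AUC_full form, the trapezoidal integration and the illustrative profiles are assumptions rather than the authors' implementation:

```python
import numpy as np

def auc(t, c):
    """Area under a concentration-time profile by the trapezoidal rule."""
    return np.trapz(c, t)

def knockout_metrics(t, c_full, c_knockout):
    """Summary metrics comparing a knockout simulation against the full model.

    Assumed definitions (zero when the knockout has no effect):
      fractional AUC = (AUC_full - AUC_knockout) / AUC_full
      Tmax shift     = Tmax_knockout - Tmax_full
    """
    auc_full, auc_ko = auc(t, c_full), auc(t, c_knockout)
    frac_auc = (auc_full - auc_ko) / auc_full
    dtmax = t[np.argmax(c_knockout)] - t[np.argmax(c_full)]
    return frac_auc, dtmax

if __name__ == "__main__":
    # Hypothetical BHB profiles (mM) over 6 h, for illustration only.
    t = np.linspace(0.0, 6.0, 25)
    c_full = 3.0 * (np.exp(-0.4 * t) - np.exp(-1.5 * t))   # toy full-model profile
    c_ko = 2.0 * (np.exp(-0.4 * t) - np.exp(-0.9 * t))     # toy knockout profile
    frac_auc, dtmax = knockout_metrics(t, c_full, c_ko)
    print(f"fractional AUC = {frac_auc:.2f}, Tmax shift = {dtmax:.2f} h")
```

Under this assumed sign convention, a value of 0 means the knockout leaves exposure unchanged and a negative value means the knockout increased exposure.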
Fractional AUC was computed as:AUCknockout is the AUC of BHB in blood when the specific knockout was applied was computed and values away from \u20180\u2019 were deemed to be influential. These statistics were considered in a descriptive manner in this work.The influence of knockouts on the absorption process was studied quantitatively by computing the fractional contribution of d Tables and AUCfNote, given exploratory nature of the current work, no statistical criterion was defined either for qualitative or for quantitative assessments that would confer significance of influence.A systems pharmacology model for catabolism of the ketone monoester and its metabolism products in humans was developed with a view to maintain the model structure to be as simple as possible but not simpler . The sysA brief description of each component in the model with an emphasis on its contribution to the kinetics of ketones is outlined below.+ coupled MCT1, belonging to SLC5A8 sub-family) are expressed along the gut wall. Of these, MCT1 and SMCT1 are expressed on the apical side and MCT4 is expressed on the basolateral side [2) is reported to increase from proximal to distal regions of the gut [max, down the gut regions in the model) of the transport of ketones. This forms the basis for four distinct subcomponents of the gut in this systems model. A diagrammatic representation of the transport proteins expression in the gut wall (enterocyte) is presented in The gut is divided into two subcomponents namely proximal gut and distal gut. Each of these subcomponents are further divided , providing a representation of spatial and temporal transport . These rral side , 34. Theral side . In addi the gut . This vaThe portal vein forms the link between gut and the liver. Ketones absorbed from across the gut wall are taken by the portal system and transported to the liver.The liver forms the major connecting link for absorption of ketones from the gut to the systemic circulation. The liver is represented as the site of metabolism and the site of endogenous ketone production . VariousThe systemic circulation is the site for transport of ketones in the body and also a site of metabolism of ketone ester to butanediol and BHB, butanediol to BHB and AcAc to acetone . All othAll other organs and tissues into which ketones are transported from the systemic circulation are presented together as one lumped component. This constitutes sites of metabolism such as heart, brain, kidney and skeletal muscle and sites of excretion such as lungs and kidneys. Transport of ketones (BHB and AcAc) in and out of the tissues from the systemic circulation is an active process and is facilitated by MCT1 and MCT2 transporters. MCT2 belongs to the SLC16A family of transporters and is a high affinity and low capacity transporter for both BHB and AcAc . All othGiven the objectives of the current work to explore mechanisms of gastrointestinal absorption of ketones, a fit-for-purpose approach was used in calibrating the systems model as opposed to routine model evaluation tests used for standard PKPD models , 44. Thein slico knockout variants. 
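To make the knockout mechanics and the two summary metrics concrete, the toy script below mimics the idea on a deliberately simplified two-segment gut model: whole processes are gated by 0/1 indicator variables, and a fractional AUC and a Tmax shift are computed from the simulated blood profiles. This is an illustrative Python sketch rather than the authors' MATLAB implementation; all parameter values and the exact fractional-AUC formula shown here are hypothetical.

```python
"""Illustrative sketch only (not the authors' MATLAB model): a toy two-segment
gut model showing how whole absorption processes can be 'knocked out' with 0/1
indicator variables, and how a fractional AUC and a Tmax shift can then be read
off the simulated blood profiles. All parameter values are hypothetical."""
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical parameters (amounts in mmol, rates in mmol/h or 1/h).
P = dict(Vmax_prox=2.0, km_prox=5.0, k_passive=0.6,
         Vmax_dist=1.5, km_dist=8.0, k_transit=0.8, k_elim=0.4)

def rhs(t, y, p, ko):
    prox, dist, blood = y
    # Each process is multiplied by an indicator, so setting it to 0
    # removes the whole process (the 'knockout variant' idea).
    j_pass     = ko["passive_prox"] * p["k_passive"] * prox
    j_sat_prox = ko["active_prox"]  * p["Vmax_prox"] * prox / (p["km_prox"] + prox)
    j_sat_dist = ko["active_dist"]  * p["Vmax_dist"] * dist / (p["km_dist"] + dist)
    transit = p["k_transit"] * prox                      # proximal -> distal flow
    return [-(j_pass + j_sat_prox + transit),            # proximal gut
            transit - j_sat_dist,                        # distal gut
            j_pass + j_sat_prox + j_sat_dist - p["k_elim"] * blood]  # blood

def simulate(dose_mmol, ko, t_end=12.0, n=241):
    t = np.linspace(0.0, t_end, n)
    sol = solve_ivp(rhs, (0.0, t_end), [dose_mmol, 0.0, 0.0],
                    args=(P, ko), t_eval=t, rtol=1e-8)
    return t, sol.y[2]

def auc(t, y):                                           # trapezoidal AUC
    return float(np.sum(np.diff(t) * (y[1:] + y[:-1]) / 2.0))

full       = dict(passive_prox=1, active_prox=1, active_dist=1)
ko_passive = dict(full, passive_prox=0)                  # one knockout variant

t, ref = simulate(10.0, full)
_, var = simulate(10.0, ko_passive)

# One plausible definition of the fractional AUC (the paper's exact formula is
# not reproduced here): relative change of the knockout versus the full model.
frac_auc   = (auc(t, var) - auc(t, ref)) / auc(t, ref)
tmax_shift = t[np.argmax(var)] - t[np.argmax(ref)]
print(f"fractional AUC change: {frac_auc:+.2f}, Tmax shift: {tmax_shift:+.2f} h")
```

Setting several indicators to zero at once reproduces the combination knockouts described in the text, and values of the fractional AUC away from zero flag influential processes in the same descriptive spirit.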
Assessment of influence using knockouts were studied both qualitatively and quantitatively.Influence and contribution of processes and regions of the gut on the absorption of ketones was studied using Knockout variants of the systems model see were expKnockout of all absorption processes in successive subcomponents of the gut are presented in Tmax), a negative AUCfraction was observed for knockout of passive diffusion approach was used in developing the model. The design of subcomponents for the gut was driven by variability in expression of transport proteins along the gut , 34. ThoKeeping in view, the objectives of the current work, i.e., to explore mechanisms related to gastrointestinal absorption, using of ketones as an example, a fit-for-purpose approach was used in calibrating the model . This meInterpretation of results from application of knockout variants of the systems model have shown that there are at least three distinct regions in the gut that contribute to gastrointestinal absorption of ketones. Future clinical studies of absorption of ketones are recommended to test this hypothesis. We believe that the learnings from knock-outs in this model can be tested by delivery of test substance directly into duodenum, conducting clinical absorption studies in special population such as ileostomy and colectomy patients. Exploration of knockout variants have indicated that different mechanisms dominate the input process of ketones in different regions of the gut leading to dose-dependent nonlinear absorption processes. It was also observed that the low dose was predominantly absorbed from the proximal gut and that this was mainly driven by passive processes. In contrast to this, the high dose was absorbed across the length of the gut and this was predominantly driven by passive diffusion process in the proximal gut and active transport processes in the distal gut. This work highlights that while passive diffusion contributes to the initial input profiles of blood BHB, active transport processes contribute to later input processes. The findings from this work are in line with and support the findings from our previous work in which we developed an empirical population PK model for blood BHB concentration, where putative multiple absorption sites were suggested in the modelling process. The current work leads to the hypothesis that there are three distinct input sites for absorption of ketones in the gut. This hypothesis may be the subject of interest for testing in future studies with the ketone monoester, notably when particular targets for plasma profiles and exposure are defined.Though the current systems pharmacology model for ketones was able to serve the purpose of current objectives, it is by no means complete. Exploration and application of the systems model in the current work was limited to considering the influence of absorption of the ketone monoester on blood BHB concentrations only, exploration of other ketones in blood and ketones in other tissues was beyond the scope and objectives of the current work. The current set of parameter values of the model, though providing a good description of BHB profiles in the blood, are generally not identifiable and will need to be calibrated in future work when other ketones are tested in blood and/or other tissues. The immediate areas of expansion for the current model include exploration of BHB kinetics in tissues of metabolic interest such as skeletal muscle, kidney, brain and heart and sites of excretion such as kidneys and lungs. 
This can be achieved by differentiation of the tissue component of the model to yield organs of interest as separate components. This may be followed by extending the scope of this model to explore kinetics of other ketones and their precursors to understand the catabolism of the ketone monoester in a broader sense. These expansions will help to provide the capability to link pharmacodynamics of ketones with the pharmacokinetics, for future prospects with the ketone monoester that are intended for therapeutic applications.in silico knockout variants in exploring the mechanism of oral drug absorption. The initial part of this work relating to model development demonstrates how processes can be integrated to build a systems pharmacology model based on known mechanisms. The later part describes how some selective processes can then be differentiated from the whole (such as the use of knockout variants shown here) to explore mechanisms that are either difficult or impossible to perform experimentally . This work demonstrates the potential of systems models to complement or, in some circumstances, replace animal studies.This work demonstrates the utility of the systems pharmacology model approach in exploring the mechanisms of gastrointestinal absorption of therapeutic compounds. A unique feature of this work is the development of In conclusion, a systems pharmacology model for the absorption and catabolism of a ketone monoester is developed. This model is capable of describing blood BHB concentrations at two dose levels. Knockout variants of components of the systems model provided a notable and convincing mechanistic insight into the gastrointestinal absorption of ketones in terms of the influence of specific regions, processes of absorption and the dose-dependent nature of absorption. The current model provides an initial platform for continuous exploration of mechanisms of ketone disposition and its further expansion to provide insight into the mechanisms of pharmacological action of ketones in therapeutic applications. This approach can be extended to other therapeutic compounds with known complexities of drug actions.S1 Fig(DOCX)Click here for additional data file.S2 Fig(DOCX)Click here for additional data file.S1 Methods(DOCX)Click here for additional data file.S1 Table(DOCX)Click here for additional data file.S2 Table(DOCX)Click here for additional data file."} +{"text": "Background: This study evaluated and determined the proximity of an impacted third mandibular molar (TMM) to the inferior alveolar canal (IAC) by using CBCT and digital panoramic radiography.Materials and Methods: This descriptive-analytic research applied CBCT and panoramic radiographs for 60 subjects . Subjects selected showed a close proximity about the TMM to the inferior nerve canal on panoramic radiographs; these subjects then received CBCT radiographs. The CBCT findings for the proximity of the TMM to inferior nerve canal used the outcomes of surgical findings as the standard of comparison.Results: Eight cases showed positive surgical findings indicating vicinity of the third molar and the mandibular nerve canal. Only 13.3% of the cases in which panoramic views showed the proximity of the TMM and the IAC were confirmed during surgery. The result for CBCT radiographic diagnosis was 95%. Conclusion: It can be concluded that CBCT is preferred over panoramic radiography to determine the proximity of the impacted TMM to the IAC. 
Narrowing of the mandibular canal or root canal, disconnection of root borders in panoramic radiography, and the inferior-lingual proximity of the tooth to the root in CBCT strongly indicated the close nearness of the impacted TMM to the IAC. Although panoramic imaging offers comprehensive coverage and easy access, identifying the exact proximity of the impacted TMM to the IAC in patients is not possible; hence, it is essential to augment diagnosis using cone-beam computed tomography (CBCT) .The extraction of an impacted TMM is a common minor operation in the maxillofacial region. Like other surgeries, this type can have the side effect of malfunction of the inferior alveolar nerve (IAN). It is necessary to precisely predict the vicinity of the third molar to the eIAN [5]. Such damage may cause paresthesia, hypoesthesia, and anesthesia of the lower lip. Its prevalence has been reported to be 4% to 8%; in less than 1% of cases, patients experience permanent numbness in that area [6-9]. This occurs because of the surgery in the area around the impacted molar root and the IAC results in exposure of or damage to the canal [10].One side effect of impacted third molar tooth surgery is the malfunction of the IAN . This is also the reason of one of the very frequent complaints against maxillofacial surgeons in the coroner\u2019s court and increases belief by the public that surgical negligence has occurred during surgery [3]. An extensive survey of the proximity of the impacted third mandible molar to the IAN is necessary before surgery. Panoramic radiography is the most common equipment used for pre-surgery evaluation of impacted third molars .The proximity of the impacted third molar to IAN raises the risk of numbness up to 30% and may generate mental and social troubles for the patients [3-5]. This technique is gradually being replaced by CBCT, which allows 3D views of the anatomy with the least distortion at different angles [11]. The advantages of CBCT over CT include a decrease in the radiation surface, high-quality images, low scanning period, reduction in the radiation dosage to patient, and the decrease in metal artifacts in images [2].Although this technique has gained prominence in third molar surgeries because it involves a low dose of radiation, comprehensive coverage, and simplicity of analysis and access, it has drawbacks. These include low sensitivity, 2D views, inability to distinguish bone thickness, distortion of dimensions and magnification of both the vertical and horizontal dimensions and production of ghost images on the reverse side. Sensitivity values of 24% to 64% and specificity values of 74% to 98% have been recorded for panoramic radiography [3]. Studies show that nerve damage is the most frequent side effect of surgery for the extraction of the third molar (4.4% to 8.1%). Paresthesias recorded in 1.3% to 5.3% of the cases because of the vicinity of the impacted tooth to nerves [12]. Atsuko et al. surveyed the positions of the lower jaw molars and the mandible canal by using CBCT. They concluded that data on the distance between the canal and the tooth provided by CBCT are effective for the evaluation of possible damage to the IAN. The great resolution and less radiation dosage allow the use of these images for TMMs. CBCT images in specific and standard conditions and the rating of a sufficient number of samples are listed as the advantages in the study . 
The current study showed that panoramic radiography failed to correctly diagnose the relation between these two structures on its own. The influence of multiple observers for radiographic accuracy was eliminated by using only one observer and the reliability of the study increased. Several investigations have found that the factors used in panoramic radiography are better related to the proximity of the alveolar nerve to the impacted third molar of the lower jaw. Albert et al. compared panoramic radiography with conventional tomography to study the vicinity of the impacted third molar and the mandibular canal. They analyzed risk factors in the perception of close proximity of the tooth to the nerve and determined the topography of nerve to the mesial and distal roots. Their results determined that the darkening of the root was the most common factor isolated in panoramic radiography; in 13 out of 14 patients showing this sign, the third molar was in close vicinity to the nerve. In 4 out of 5 patients showing an island-shaped apex, the third molar was in close vicinity to the nerve. A dark bifid root apex and deflection of the root apex did not indicate a close proximity of the tooth root to the mandibular nerve. The performance of tomography versus panoramic radiography was not discussed [22]. Tantanapornkul et al. compared panoramic radiography and CBCT to evaluate the topographic proximity of an impacted third molar tooth to the mandibular canal. They considered 4 factors for the proximity of the nerve to the tooth: lack of continuity of mandibular canal; root darkening; mandibular canal deflection; and reduction of the root. The existence or nonexistence of a direct relationship between root and nerve were the criteria for CBCT. After the analysis of the radiographs, patients underwent surgery and the results determined during surgery were recorded. After surgery, patients were examined for the existence of paresthesia. The results revealed that every factor for panoramic radiography was related to the exposure of the nerve; hence, these factors effectively predicted the risk of harm to the nerve. The lack of continuity of the mandibular canal was introduced as the most important diagnostic factor. Specificity was 93% and sensitivity was 77% for CBCT, 70%, and 63% for panoramic radiography, respectively. This showed that CBCT outperformed panoramic radiography . Valmaseda-Castell\u00f3n et al. showed that IAN injury might ensue after lower third molar operational extraction [24].When these factors were found in panoramic radiographs, the probability of the close vicinity of the impacted third molar tooth of the below jaw to the IAC increased significantly. There were no significant relationships found between radiographs of the disruption of the white cortical line of inferior alveolar, root deflection, and narrowing of the IAC. Studies have considered factors such as different numbers of observers, their specialties, the method of scoring of data, and results of surgery results in their research methods. The exposure during surgery and the surgeon assessment were considered the guidelines for evaluation. In other cases, paresthesia was considered for the vicinity of the two structures [21]. This was a restriction of the research. The present study employed observers, which had several advantages. Tantanapornkul et al. surveyed the results of CBCT and panoramic radiography to assess the closeness of the mandibular canal to an impacted third molar. 
Patients with impacted third molars of the lower jaw were scanned by panoramic radiography prior to surgery. The surgeons were asked to record all tooth extraction details and neurovascular exposure during tooth extraction. Patients for whom there was doubt about neurovascular exposure were dismissed from the research. Seven days after surgery, the side effects of third molar surgery of patients were recorded. Ten patients showed the side effects; patients with exposed neurovascular bundles showed notably higher side effects compared to other patients. The sensitivity of CBCT was 93%, which was notably greater than for those receiving only panoramic radiography. It was concluded that the CBCT was more effective in predicting neurovascular exposure after surgery for an impacted third molar than panoramic radiography. Moreover, its application under clinical conditions to evaluate impacted third molar pre-surgery had several advantages. Since identifying neurovascular exposure was done by the surgeon during surgery, the possibility exists that some areas were overlooked and these results showed the low specificity of images [18-22]; hence, they found that the diagnostic precision of panoramic radiography and CBCT were the same. The benefits of this study were the random viewing of panoramic radiographic images and CBCT, internal agreement of viewers for both techniques, and evaluation by one observer. The sensitivity and specificity of both CBCT and panoramic radiography have been reported differently in various studies; for example, a sensitivity of 96% and specificity of 27% have been announced in a similar study.Gaeminia et al. evaluated the proximity of impacted third molars to the mandibular canal by using CBCT and panoramic radiography. Their results revealed no significant relation between exposed IAN and nervous disorders after surgery by gender, place of surgery or third molar angle. They found no meaningful difference between these two techniques for the prediction of the risk of nerve exposure; however, the lingual location of the mandibular canal was notably correlated to IAN nerve damage. Three cases using panoramic radiography were significantly related to IAN nerve damage. CBCT sensitivity was 96% and specificity was 23% [In the current study, the sensitivity of CBCT was 100%, which indicates its effectiveness in diagnosing positive cases. Its specificity for diagnosing and identifying negative cases was 94%, which was lower than its sensitivity.This study confirmed that CBCT is the most accurate method of radiography for the determination of the proximity of impacted third molars of the lower jaw and the IAC. The results indicated that 3 of 7 factors used to evaluate panoramic radiography notably matched with the operational results used as the standard. The CBCT diagnostic value was 95% in this study, indicating that, in 95% of cases, the results of operation were the same as the predictions from CBCT. The results of CBCT evaluation increased for simultaneous observation of the inferior-lingual relation to confirm the proximity of the impacted third molar of the lower jaw to the alveolar canal. Conflict of interestThe authors declare that they have no conflict of interest."} +{"text": "Congenital unilateral agenesis of the parotid gland is a rare condition with only few cases reported in the literature. A review of 21 cases in the available literature is presented in this article. 
We report on a further case of a 34-year-old woman with agenesis of the left parotid gland and lipoma of the right cheek. Clinicopathological characteristics of described cases in the literature were discussed. The major salivary glands start to develop between the sixth and seventh week of gestation beginning with the parotid gland which arises from ectodermal lining of the stomatodeum . The subAgenesis of parotid glands may occur alone or in association with anomalies of the submandibular or lacrimal gland, first brachial arch developmental disturbances, or other congenital anomalies \u20137. The tWe present a case of unilateral agenesis of the parotid gland in combination with a lipoma of the cheek on the opposite site. The clinical and radiological findings in this patient are described. A review of the unilateral parotid gland agenesis in the literature is also presented considering a summary of the data regarding gender, age, defect site, and combined manifestations.A 34-year-old woman was referred to our department for evaluation of painless swelling of the right cheek over the last seven months. In addition, she often bit her right cheek. The swelling did not vary in size during eating and the patient had no other clinical symptoms and no history of recurrent parotitis. Xerostomia was not noted. There was no other relevant medical history and no family history of similar problems was reported. On clinical examination the oral mucosa was moistened by saliva. Bilateral hemifacial contour was normal, and there were no depressions in either preauricular region. Physical examination of the head and neck was without pathological findings, except for the absence of the left parotid gland papilla .Ultrasonographic examination of the head and neck area showed that the parotid gland on the left side was totally absent. The other major salivary glands were present without any pathology. A tumor in the right cheek ventral to parotid gland was observed with characteristic sonographic appearance of lipoma. For further evaluation of the tumor in the right cheek and assessment of the function of the other salivary glands magnetic resonance imaging (MRI) and scintigraphy with Technetium (Tc-99m) sodium pertechnetate were performed. MRI confirmed a lipoma of the cheek on the right side and a unilateral absence of the left parotid gland . Other pCongenital absence of the salivary glands is a rare condition which has been described to affect the parotid or submandibular glands . AgenesiThe true incidence of unilateral agenesis of the parotid gland is difficult to ascertain because it is often asymptomatic . Congenin = 12) or the presence of the parotid papilla was not documented (n = 9).In the available literature, only 22 cases of unilateral agenesis of the parotid gland have been described including the present case . Among tIn most reported cases the unilateral agenesis of the parotid gland was associated with a painless swelling of the contralateral parotid gland or facial asymmetry without any other significant clinical symptoms , 20, 25.The unilateral agenesis of the parotid gland may be clinically silent. Clinical suspicion should arise in cases of asymmetrical parotid areas and a painless unilateral swelling of the parotid gland. Clinical examination, especially the absence of the papilla of Stensen's duct, could be helpful for diagnosis. Mostly the unilateral agenesis of the parotid gland seems to be a coincident finding. 
We were able to confirm the diagnosis of parotid gland agenesis by using a combination of MRI and salivary gland scintigraphy."} +{"text": "One of the great challenges in the near future will be the sustainable production of sufficient amounts of safe food worldwide. A combination of adverse demographic factors and climatological perturbations is expected to impact on food systems globally . Several studies have demonstrated that abiotic stress conditions induce aberrant expression of miRNAs that reduce steady-state levels of their target mRNAs. To shed more light on this subtopic Leister et al. review the link between the expression of chloroplast genes and whole-cell acclimation to environmental changes. Although only a small fraction of the genes present in the original cyanobacterial endosymbiont remains in the modern organelle, perturbation of their expression plays a major role in triggering acclimation and tolerance responses via signaling from the chloroplast to the nucleus. Zoschke et al. have investigated the effect of a maize chlorophyll-deficient mutant, chl1H/gun5, on the translation of plastidic transcripts coding for chlorophyll-binding apoproteins (CBPs). By comparing the positions and numbers of ribosomes on the plastidic transcripts of wild-type and mutant plants, the authors concluded that chlorophyll availability modulates the stability rather than the synthesis of CBPs in plastids. Furthermore, the chl1H mutation had no effect on the partitioning of CBP footprints, suggesting that co-translational targeting of the nascent peptides into the thylakoid membrane is independent of chlorophyll binding by the CBPs.Two papers included in the Topic deal with aspects of organellar gene expression. Sablok et al. summarize recent global analyses of mRNA populations associated with ribosomes (now referred to as the \u201ctranslatome\u201d), highlighting the importance of alternative splicing and the application of these technologies to polyploid plant species. Miras et al. focus on translation in plants infected with RNA viruses. Plant viruses have evolved subtle mechanisms, such as mRNA cis-translational enhancers, to recruit the host's translational machinery and initiate translation by non-canonical mechanisms. The authors highlight the diversity of these translational elements and focus on current knowledge of their structure and interactions with the host's translational initiation apparatus.Finally, the Topic includes a review of translational regulation during development and under stressful conditions, and an overview of the translation of viral RNAs. We believe that this compilation of original research articles and reviews will bring the reader up to date on the current state of the art in the field of post-transcriptional and translational regulation in plants. We are also convinced that advances in this area will be of the utmost importance for the development of biotechnological tools for yield enhancement.All authors listed have made substantial, direct and intellectual contribution to the work, and approved it for publication.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Editorial on the Research TopicIntrinsic ClocksThe existence of living organisms on our planet has been dependent on and co-evolved with the foreseeable variations in environmental conditions oscillating over recurring periods. 
All species have responded to these exogenous rhythms by developing endogenous clocks that allow for an approximate, but reliable estimation of the periodic changes and elicit corresponding adaptive processes.period contributed to the emergence rhythm of a population and to the locomotor activity of individual flies (Drosophila melanogaster). The key to the explanation was the discovery of transcription-translation feedback loops of the so-called \u201cclock genes.\u201dThe importance of these mechanisms for health and disease has been highlighted by the 2017 award of the Nobel Prize in Physiology or Medicine to Jeffrey C. Hall, Michael Rosbash, and Michael W. Young for their discoveries of the genetic control of the daily biological rhythm. They explained in molecular terms how the gene named as Intrinsic Clocks which appeared earlier comprises a well-balanced collection of original research and review articles on endogenous rhythms from seasonal and monthly to daily and hourly oscillations in different experimental model systems with analytical approaches from systemic to cellular and molecular levels.This research topic on Serchov and Heumann in their review focus on the role of Ras, an enzyme which hydrolyzes guanosine triphosphate and dependent intracellular signaling cascades in the regulation of the circadian rhythm in mice. They elegantly summarize how Ras activity forms a molecular bridge between entrainment of the suprachiasmatic nucleus that is the master clock in the brain and synaptic plasticity in dependent brain regions, such as the hippocampus, and corresponding functions. The extensive study by Chiang et al. specifically investigated rhythmic alterations in the murine hippocampus. They characterized the protein phosphorylation using a mass spectrometry approach with which they provided large-scale quantitative analysis of the daily oscillation of hippocampal phosphorylation events over a range of biological pathways. The hippocampus is a key focus also in the review by Urs Albrecht. It features the role of circadian proteins in the control of adult hippocampal neurogenesis, reciprocally implicated in depression and antidepressant responses. He discusses neurobiological mechanisms implicated in the pathogenesis of mood disorders, such as monoaminergic neurotransmission and stress response by the hypothalamic\u2013pituitary\u2013adrenal axis. The hypothalamus and the pituitary are further involved in seasonal cycles as highlighted in the review by Lewis and Ebling who elaborate in detail on the role of tanycytes, pituitary radial glial cells, in the regulation of circannual clocks in hamsters. They provide evidence supporting their hypothesis that tanycytes serve as central organizers of seasonal rhythms in the adult hypothalamus. Raible et al. present in their review on marine animals the current insight in the cellular mechanisms in molecular detail the monthly or semi-monthly rhythms. They express their worry about light pollution and further review the relevance of circalunar rhythms to mammalian physiology and reproduction in specific. They speculate that these rhythms may be the remnant of evolutionary ancient clocks, which were uncoupled from a natural entrainment mechanism.Bourguignon and Storch summarize recent findings of the cellular substrate and mechanism, which generate locomotor activity with periods of 2\u20136\u2009h. Such rhythms are normally integrated with circadian rhythms, but often lack the period stability and expression robustness. 
They further review the concept of the dopaminergic ultradian oscillator and show that ultradian locomotor rhythms rely on cells in the brain using dopamine for transmission. Intriguingly, Monje et al. report in their study on interleukin-6 knockout mice that the ultradian locomotor rhythm was impaired under both light-entrained and free-running conditions, whereas the circadian period and the level of locomotor activity as well as the phase shift response to light exposure at night remained normal. During the day, Cry1 and Bhlhe41 expression levels were increased whereas those of Nr1d2 were decreased in the hippocampus. Liu and Zhang first created mutants of cryptochrome circadian clock 1 (Cry1) protein at potential phosphorylation sites and conducted thereafter a screen in Cry1/Cry2 double deficient cells. They targeted at identifying mutations that disrupted circadian rhythms. They found that these single amino acid substitutions changed not only the circadian period, but also repression activity, protein stability, or cellular localization of the protein. Concerning the circadian period, Narasimamurthy and Virshup elucidate in their review the molecular mechanisms that regulate an enigma of the clock. Unlike other chemical reactions, the output of the clock as measured with the period remains nearly constant with fluctuations in ambient temperature. This is called as temperature compensation. The key lies especially in the mechanism that controls the stability of period circadian clock 2 protein. Clock-enhancing small molecules have become of particular interest as candidate chronotherapeutics, since there is a close association of circadian amplitude dampening with progression of chronic diseases, especially that of mood disorders. Gloston et al. present in their review an update of the regulatory mechanisms of circadian amplitude and the current status of these small molecules of therapeutic interest. Millius and Ueda introduce the readers to study of biology which takes advantage of engineering and mathematical tools to model and test the behaviors of the intrinsic clocks. It has evolved through the development of both wet lab and in silico work. The goal here is to understand the clocks that are made up of a range of complex properties of cells, tissues, and organisms.Intrinsic Clocks highlights the vibrant scientific activity in the field of the investigation of endogenous biological rhythms and their relevance for physiology and pathology.The cross-section of studies comprised in this research topic on TP and DP planned and wrote the manuscript together.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "The internal iliac artery (IIA) is one of the branches of the common iliac artery and supplies the pelvic viscera, the musculoskeletal part of the pelvis, the gluteal region, the medial thigh region and the perineum. During routine cadaveric dissection of a male cadaver for undergraduate Medical students, we observed variation in the course and branching pattern of the left IIA. The artery gave rise to two common trunks and then to the middle rectal artery, inferior vesicle artery and superior vesicle artery. The first, slightly larger, common trunk gave rise to an unnamed artery, the lateral sacral artery and the superior gluteal artery. 
The second, smaller, common trunk entered the gluteal region through the greater sciatic foramen, below the piriformis muscle and presented a stellate branching pattern deep to the gluteus maximus muscle. Two of the arteries forming the stellate pattern were the internal pudendal artery and the inferior gluteal artery. The other two were muscular branches. The internal iliac artery (IIA) is one of the two terminal branches of the common iliac artery. The branching takes place at the level of the lumbosacral articular disc and in front of the sacroiliac joint. The artery consists of a trunk and two divisions, namely the anterior division and the posterior division. The arterial trunk passes subperitoneally downwards in front of the sacroiliac joint and, on approaching the upper margin of the greater sciatic foramen, it divides into anterior and posterior divisions.-The anterior division gives off the obliterated umbilical artery, superior vesicle artery, middle rectal artery, obturator artery, uterine artery, vaginal artery, inferior gluteal artery and the internal pudendal artery. The vaginal artery corresponds to the inferior vesicle artery in males. Arising from the posterior division are three branches, the iliolumbar artery, superior gluteal artery and lateral sacral arteries. The branches of the IIA supply the pelvic viscera, the musculoskeletal part of the pelvis, the gluteal region, the medial thigh region and the perineum. The IIA is the proximal part of the umbilical artery whereas its distal end obliterates after birth. Knowledge of the variations in the origin, course and branches of IIA helps in planning and conducting surgeries involving the areas supplied by the artery. Classification of variant patterns of branches of the IIA has been documented.During routine cadaveric dissection of a male cadaver for undergraduate Medical students, we observed variation in the course and branching pattern of the left IIA. The variant vessels were dissected as documented in Cunningham\u2019s manual of practical anatomy.The obturator artery did not arise from the internal iliac artery. Rather, the obturator artery arose from inferior epigastric artery (IEA). It gave off two unusual branches before entering the obturator foramen. These two branches anastomosed with each other on the obturator internus. One of them coursed lateral to the prostate and entered the crus of the penis. The other branch supplied the obturator internus. The obturator vein drained into the external iliac vein. The above variations are shown in Owing to the numerous branches arising from the IIA, variations in the IIA branching pattern have been often reported. In this case, variations were observed in the branching pattern of anterior as well as posterior divisions. For over a century, various authors have devised various classifications of the branching patterns of the internal iliac artery. Way back in 1891, JastchinskiFollowing this, a modified classification based on the Adachi classification was proposed by Yamaki et al.A study of the variability of origin of parietal branches of the IIA stated that the inferior gluteal and internal pudendal vessels were given off by a common trunk in 63.2% of cases.,,,The uniqueness of the variant described in this case lies in the branching of the IGA and the IPA. The stellate branching pattern, including the common trunk and two muscular branches, was observed in the gluteal region. 
Branching of the IGA and IPA outside the pelvis has been reported in the literature previoulsy,,The obturator artery in this case took origin from the inferior epigastric artery, as already reported in the literature. One study has reported a low rate of incidence of the obturator artery originating from the inferior epigastric artery, in 6% of cases,,,During embryonic life, the most appropriate channels of the developing IIA enlarge, while the others disappear, giving rise to a final arterial pattern. In this process, there is a chance of disappearance of one of the major appropriate channels or vice versa, which may result in a variant arterial pattern as reported here. Successful ligation of the IIA is important for surgeons, since efficacy of ligation of this artery in pelvic surgeries varies from 42 to 75%.Knowledge of the stellate branch of the internal iliac artery is of importance to plastic surgeons engaged in raising inferior gluteal artery perforator flaps for breast reconstruction surgery.Though it is common to see variant internal iliac artery branching patterns, the current combination of variant branches including a stellate artery in the gluteal region, an abnormal obturator artery and its unusual branches has not been reported previously. The large stellate artery under the gluteus maximus muscle could cause bleeding during posterior approaches to the hip joint and also in hipbone fractures. Unusual branches of the abnormal obturator artery could get damaged during prostate surgeries. Knowledge of these variations is therefore of importance to general surgeons and orthopedic surgeons."} +{"text": "In In The LAMA2 mutation in Patient 20 is also listed incorrectly in the fourth sentence of the second paragraph of the MDCMD subsection of the Results section. The correct sentence is: Three frameshift deletions or insertions , four splice site variants , and one nonsense mutation were expected to produce truncated proteins."} +{"text": "Image-based plant phenotyping facilitates the extraction of traits noninvasively by analyzing large number of plants in a relatively short period of time. It has the potential to compute advanced phenotypes by considering the whole plant as a single object (holistic phenotypes) or as individual components, i.e., leaves and the stem (component phenotypes), to investigate the biophysical characteristics of the plants. The emergence timing, total number of leaves present at any point of time and the growth of individual leaves during vegetative stage life cycle of the maize plants are significant phenotypic expressions that best contribute to assess the plant vigor. However, image-based automated solution to this novel problem is yet to be explored.A set of new holistic and component phenotypes are introduced in this paper. To compute the component phenotypes, it is essential to detect the individual leaves and the stem. Thus, the paper introduces a novel method to reliably detect the leaves and the stem of the maize plants by analyzing 2-dimensional visible light image sequences captured from the side using a graph based approach. The total number of leaves are counted and the length of each leaf is measured for all images in the sequence to monitor leaf growth. To evaluate the performance of the proposed algorithm, we introduce University of Nebraska\u2013Lincoln Component Plant Phenotyping Dataset (UNL-CPPD) and provide ground truth to facilitate new algorithm development and uniform comparison. 
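The graph-based leaf and stem detection described above can be illustrated with a much-simplified sketch: skeletonize the binary side-view mask, treat skeleton pixels as graph nodes, take the lowest pixel as the stem base, and read the leaf count and leaf lengths from the degree-1 endpoints and their geodesics. This is not the authors' algorithm; it assumes an already-segmented mask, uses a synthetic stand-in image, and requires numpy, scikit-image and networkx.

```python
"""Much-simplified sketch (not the authors' algorithm): read leaf count and
leaf length from the skeleton graph of a binary side-view plant mask.
Assumes numpy, scikit-image and networkx are available; the mask is synthetic."""
import numpy as np
import networkx as nx
from skimage.draw import line
from skimage.morphology import skeletonize

# Synthetic stand-in for a segmented side-view maize mask: one stem, two leaves.
mask = np.zeros((60, 60), dtype=bool)
rr, cc = line(55, 30, 5, 30); mask[rr, cc] = True        # stem
rr, cc = line(35, 30, 20, 10); mask[rr, cc] = True       # leaf 1
rr, cc = line(25, 30, 12, 50); mask[rr, cc] = True       # leaf 2

skel = skeletonize(mask)

# 8-connected graph over skeleton pixels; edge weight = Euclidean step length.
G = nx.Graph()
pixels = list(zip(*np.nonzero(skel)))
for r, c in pixels:
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr or dc) and skel[r + dr, c + dc]:
                G.add_edge((r, c), (r + dr, c + dc), weight=np.hypot(dr, dc))

base = max(pixels)   # lowest skeleton pixel ~ stem base (rows grow downward)
apex = min(pixels)   # topmost pixel ~ stem apex (not counted as a leaf here)
tips = [n for n in G if G.degree(n) == 1 and n not in (base, apex)]

# A leaf is the part of each base-to-tip geodesic that leaves the stem column;
# its geodesic length (in pixels) is taken as the leaf length.
leaf_lengths = []
for tip in tips:
    path = nx.shortest_path(G, base, tip, weight="weight")
    branch = [n for n in path if n[1] != base[1]]        # crude off-stem test
    leaf_lengths.append(sum(np.hypot(a[0] - b[0], a[1] - b[1])
                            for a, b in zip(branch, branch[1:])))

print(f"leaf count: {len(tips)}; leaf lengths (px): "
      + ", ".join(f"{v:.1f}" for v in leaf_lengths))
```

Running the same measurement on every image of a time-lapse sequence is what turns these per-frame counts and lengths into the emergence-timing and leaf-growth phenotypes discussed in the text.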
The temporal variation of the component phenotypes regulated by genotypes and environment are experimentally demonstrated for the maize plants on UNL-CPPD. Statistical models are applied to analyze the greenhouse environment impact and demonstrate the genetic regulation of the temporal variation of the holistic phenotypes on the public dataset called Panicoid Phenomap-1.The central contribution of the paper is a novel computer vision based algorithm for automated detection of individual leaves and the stem to compute new component phenotypes along with a public release of a benchmark dataset, i.e., UNL-CPPD. Detailed experimental analyses are performed to demonstrate the temporal variation of the holistic and component phenotypes in maize regulated by environment and genetic variation with a discussion on their significance in the context of plant science. The complex interaction between genotype and the environment determines the phenotypic characteristics of a plant which ultimately influences yield and resource acquisition. Image-based plant phenotyping refers to the proximal sensing and quantification of plant traits based on analyzing their images captured at regular intervals with precision. It facilitates the analysis of a large number of plants in a relatively short period of time with no or little manual intervention to compute diverse phenotypes. The process is generally non-destructive, allowing the same traits to be quantified repeatedly at multiple times during a plant\u2019s life cycle. However, extracting meaningful numerical phenotypes based on image-based automated plant phenotyping remains a critical bottleneck in the effort to link intricate plant phenotypes to genetic expression.The analysis of visible light image sequence of plants for phenotyping is broadly classified into two categories: holistic and component-based . HolistiZea mays) or corn, has been the preeminent model for studying plant genetics over the past century, and is widely employed in both private and public sector research efforts in Asia, Europe, and the Americas. Maize is one of the three grass crops, along with rice and wheat, that directly or indirectly provides half of the total world caloric consumption each year. Arabidopsis and Tobacco have been widely used as the model plants for various applications in computer vision based plant phenotyping, i.e., leaf segmentation using 3-dimensional histogram cubes and superpixels )"} +{"text": "Flavivirus; Flaviviridae) with the capacity to infect only mosquitoes have been described in the last 10 years. By contrast, only two such viruses had been described in the previous 33 years.1 This burgeoning expansion of the known virome has largely outpaced the scientific capacity for characterizing these agents in any detail. Furthermore, many of the methodologies used for rapid genetic detection (such as placing mosquitoes directly in nucleic acid extraction buffers and the use of FTA cards for blood samples) preclude the isolation of the agents.The expanded use of next generation sequencing tools has led to an explosion in the rate of discovery of novel viral agents and has had a measured effect on the capacity to genetically identify the presence of previously described viruses in new geographic environments and within different hosts and vectors. 
For example, more than 30 flaviviruses , disease potential (through pathogenesis testing), and potential for serological protection against known human disease agents.A slowed pace of arbovirus discovery in international tropical locations (1970\u20132005) has been associated with the cessation of directed virus discovery efforts by the Rockefeller Foundation in the late 1960s.American Journal of Tropical Medicine and Hygiene, a collaborative group of investigators from Brazil and the United States genetically and phenotypically describe viruses from the Gamboa serocomplex .4 The investigators identified four genotypes within the serocomplex, evidence of reassortment events among the three genomic segments, and the existence of more complicated genetic relationships between the viruses that were previously identified by classical serological techniques . The study demonstrates the power of combining archival and prospective sampling for viruses in concert with new technologies that allow for the rapid genetic characterization of viruses. The existence of isolates allowed the investigators to determine the seroprevalence of Gamboa virus in birds and mammals, and it was revealed that exposure rates are considerably higher in birds. Coupled with the isolation of viruses from birds and ornithophilic mosquitoes, these data further implicate birds as important reservoir hosts. Pathological characterization in newborn chickens demonstrated seroconversion with limited disease presentation. Although no human disease association with Gamboa virus has been identified, these kinds of studies provide a comprehensive presentation of genetic, antigenic, and epidemiological data from which a more complete appreciation for human and veterinary disease potential can be assessed, and serve as a reminder that without balanced efforts to produce material from field samples that can be used for phenotypic and serological characterization, only a partial understanding of viral epidemiology and pathogenesis can be achieved.In this issue of the"} +{"text": "It was found that staggerer mothers produced smaller litters than controls and the number of oocytes produced in their ovaries was reduced by the staggerer mutation. These results indicate a pleiotropic effect on fertility of the Rorasg gene underlying the cerebellar abnormalities of the staggerer mutant.Disturbances in several reproductive functions of the staggerer cerebellar mutant mouse have been observed. In this study, reproductive efficiency of"} +{"text": "The biological processes that come into play during orthodontic tooth movement (OTM) have been shown to be influenced by a variety of pharmacological agents. The effects of such agents are of particular relevance to the clinician as the rate of tooth movement can be accelerated or reduced as a result. This review aims to provide an overview of recent insights into drug-mediated effects and the potential use of drugs to influence the rate of tooth movement during orthodontic treatment. The limitations of current experimental models and the need for well-designed clinical and pre-clinical studies are also discussed. During orthodontic treatment, the application of sustained force on teeth sets in motion processes that ultimately lead to alveolar bone remodeling. 
These biochemical processes involve a multitude of cellular and molecular networks and discuss the potential of pharmacological strategies aimed at supporting orthodontic interventions.A necessary prerequisite for selecting, designing and testing suitable molecules to influence OTM is a detailed knowledge of the role of the different cellular and molecular components driving the biological process of OTM.To achieve OTM, mechanical forces are applied on teeth. This initially causes fluid movement within the periodontal ligament (PDL) space and distortion of the PDL components , setting into motion the process of release of a multitude of molecules which initiate alveolar bone remodeling , which act on capillaries and cause the adhesion and migration of blood leukocytes into the area of compression are subjected to mechanical stress they express cytokines, growth factors and cytokine receptors. Osteoblasts express IL-1b, IL-6, IL-11, TNFa and their receptors in response to compressive stress. IL-b shows an autocrine effect and enhances the phenomenon which are produced by activated fibroblasts.MMPs either degrade collagen fibers (MMP-1 and MMP-8) or eliminate the degraded collagen (MMP-9 and MMP-2) to allow tooth movement are the chemokines. These are chemotactic cytokines that have been recognized as playing key roles in inflammatory processes but only recently the role of CC inflammatory chemokines in mechanically-induced bone remodeling is starting to become clearer is the most widely researched PG with respect to OTM. PGE2 is produced mainly by PDL fibroblasts and osteoblasts and LTD4 (a cysteinyl leukotriene), , IL-1RA (a receptor antagonist cytokine which controls the effects of IL-1), IL-12, and IL-10 under low stress conditions and many of them have been cloned and could provide drug targets for the regulation of the synthesis of specific prostaglandins, such as PGE2 in the case of OTM preparations are polyspecific and polyclonal immunoglobulin therapeutic preparations used as a replacement therapy in immunodeficient patients exerts its effects directly on osteoblasts and indirectly on osteoclasts through binding to the PTH type 1 receptor on osteoblasts, leading to expression of insulin-like growth factor-1 (IGF-1), which promotes osteoblastogenesis and osteoblast survival, and of RANKL, which promotes osteoclast activation. PTH is also likely interacting with bone lining cells promoting early osteogenesis , to inhibit tooth movement and enhance stability, stems from the known physiological role that OPG plays within the PDL in regulating the bone resorbing activity of osteoclasts. OPG is produced by osteoblasts and is a decoy receptor for RANKL which prevents the interaction of RANKL present on osteoblasts' surface with its receptor RANK on osteoclasts. In the absence of RANKL-RANK interactions, the activation, terminal differentiation and survival of osteoclasts are negatively affected. Changes in the ratio of RANKL/OPG in the PDL can fine-tune alveolar bone resorption. A number of groups attempted to influence tooth movement in animal models by locally altering the concentration of either OPG or RANKL aiming to enhance or decrease the resorptive action of osteoclasts are derivatives of the tetracycline groups of antibiotics that lack antimicrobial activity and the adverse effects associated with the conventional tetracyclines. 
Their ability to inhibit MMPs and pro-inflammatory cytokines and their apoptotic effects on osteoclasts initially rendered them attractive therapeutic agents for the management of chronic periodontitis. The CMTs have been shown to modify the COX-2 enzyme leading to inhibition of PGE2 production ATPases (proton pumps) upon their activation. These V-ATPases are transported through interaction with microfilaments within the cytoplasm to the plasma membrane- a unique ability limited to clast cells, thus making the targeting of this process a highly specific means for modulation of tooth movement. Enoxacin was first identified by crystal structure-based virtual screening as one of the potential molecules which could block the binding site of V-ATPase to actin inhibits RANKL-induced osteoclast formation from bone marrow-derived macrophages possibly by supressing a key event in the RANKL induced intracellular signaling pathway or by interfering with M-CFS signaling increases under conditions of hypoxia and mechanical stress in PDL fibroblasts and that hypoxia has been shown to lead to the expression of key genes involved in the recruitment and activation of osteoclasts were tested in a split mouth design in rats epitopes. Such proteins include osteopontin and bone sialoprotein the inherently different biology in animals which prevents complete inference of the effects and the side effects of the pharmacological agents in humans, (2) the inability to calculate from animal experiments suitable dosages for clinical testing, as systemic application of the drug is often used in these models and may not result in effective or comparable (species difference) drug concentrations at the site of orthodontic intervention, (3) the generally small sample sizes used and different ages of animals used, which makes reliable conclusions even in the animal studies impossible to be reached, (4) the lack of longer-term animal studies, which would enable examination of the effects of an agents on tooth movement over a time period and also allow observation of side-effects.Tooth movement is a complex process controlled by the nature of the mechanical stimuli, by a multitude of signaling pathways and influenced by the individual's genetic make-up recruitment of sufficient patient numbers, (2) evaluation of the effect of individual variation, (3) need for initial dose-response studies including measurements to assess the levels of the therapeutic agent at the sites of interest and measurements of the tissue-level outcomes.The huge majority of the above mentioned studies have employed a systemic administration or local injection of the pharmacological agents. There is multitude of problems associated with these approaches. A systemic administration does not ensure a constant delivery of an ideal dose of the agent in the PDL. As it is not clear how circulating values correspond to the gingival dose and how they fluctuate in time, mainly due to degradation of the agent, in many cases a dose tested in the experimental set-up could have been insufficient to obtain the desired biologic effect. A more important problem is the potential of systemic administration to provoke undesirable systemic effects, especially when pharmacological agents lacking specificity are used. 
The mere evaluation of side effects is doubtful in the current experimental protocols as either the test periods are too short or the experimental protocol has not included specific methods to evaluate such effects.Local injection of a pharmacological agent or local gene transfer also come with significant problems. For an intervention to be clinically useful, it must be characterized by practicality in its application (in terms of cost and time) and minimal discomfort for the patient. Daily or even frequent invasive procedures are a major prohibiting factors for clinical application. In addition, during tooth movement different biological processes take place at distinct sites of the PDL. Bone resorption and bone formation occur simultaneously at different areas of the PDL (areas of compression. vs. areas of tension). The proteoglycan component of the extracellular matrix of the PDL, a hydrated gel, allows the diffusion of free small molecules within the pdl and presents a big challenge to the effort of achieving targeted therapeutic interventions at specific sites (compressions vs. tension). Most of the drugs used can and will influence both processes and can furthermore influence other physiological cellular functions. The use of integrin inhibitors is a good example of that.Targeting key processes in the bone remodeling mechanism without other undesirable local side effects precludes a detailed knowledge of the cellular events involved. The selection of appropriate targets for the drugs and the design of novel drugs or suitable analogs of naturally occurring molecules with high specificity, is key for clinically successful strategies. The pharmacological agents also need to demonstrate high potency and efficacy in order to achieve clinically significant differences.The cost implications of the process of designing, testing and eventually obtaining approval for the clinical application of a potential therapeutic agents need to be carefully considered and kept to the minimum to decrease the eventual cost to the patient. A very useful approach as demonstrated by the processes that led to the identification of enoxacin, is to screen already known and approved molecules for clinical applications.Suitable drug delivery materials need to be developed to provide the appropriate mode of release of pharmacological agents in their active form, that is, at the desired rate and amount for a long period of time (reflecting the duration of orthodontic treatment or retention time). Sustained and low grade prostagladin release through a suitable delivery system could, for example, be used to induce and sustain further endogenous PG production (through a known amplifying mechanism). This can be used to prolong the effects of short periods of stress in OTM.It is also important that the drug carriers do not have a tendency to spread into a larger area from the application site or that are induced only at areas of bone remodeling (such as matrix-bound inducible molecules). An ideal vehicle for drug delivery must be non-toxic/biocompatible and the Swiss Cancer League.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Genomic characterization of plant cell wall degrading enzymes and in silico analysis of xylanses and polygalacturonases of Fusarium virguliforme\". 
However, this was incorrect and the title of the manuscript should read \u201cGenomic characterization of plant cell wall degrading enzymes and in silico analysis of xylanases and polygalacturonases of Fusarium virguliforme\u201d. This has since been acknowledged and corrected in this erratum.Upon publication of the original article , the tit"} +{"text": "We extend the discussion on how their findings contribute to our understanding of the role of the PVT in drug seeking by providing new insight on the role of the PVT in the regulation of food-seeking and fear responses. We also consider the significance of the neuroanatomical findings reported by Clark et al., that the PVT is reciprocally connected with areas of the brain involved in addiction and discuss the implications associated with the source and type of dopaminergic fibers innervating this area of the thalamus.This commentary focuses on novel findings by Clark et al. (2017) published in The article describes experiments using a virally driven strategy in which the authors investigate the anatomic, physiologic, and behavioral properties of D2R expressing neurons in the PVT. Here, we will extend the discussion on how their findings contribute to our understanding of the role of the PVT in drug-seeking behavior by providing new insight into the recently described participation of this region in the regulation of food-seeking and fear responses. In addition, we will consider the significance of the neuroanatomical findings reported by This commentary focuses on novel findings by There is an emerging agreement that the paraventricular nucleus of the midline thalamus (PVT) is an important component of the forebrain circuits that mediate emotional and motivated behaviors . This hain situ hybridization for D2R mRNA in wild-type mice, in vitro. They found that a large proportion of D2R-containing PVT neurons were tonically active and that application of the D2R agonist quinpirole inhibited the firing rate of these cells without altering the response of non-D2R-containing neurons in the same region. Quantification of GFP positive neurons revealed that approximately two thirds of PVT neurons express D2R, a striking high density if we consider that only one third of the nucleus accumbens neurons express D2R into the PVT of DrD2-Cre mice to label the axonal projections of these neurons across the entire brain. The pattern of innervation observed after specifically targeting D2R-containing neurons was consistent with previous tracing studies describing the efferents of the posterior aspects of PVT . A denseIn contrast to the PVT-striatal pathway, the presence of reciprocal connections between the PVT and the prefrontal cortex reported in It is also interesting to note that while the PVT receives projections from dopamine neurons in the hypothalamus and periaqueductal gray , the monLow levels of striatal dopamine D2R may predispose individuals to use stimulant drugs like cocaine . Consist"} +{"text": "In many areas of human life, people perform in teams. These teams' performances depend, at least partly, on team members' abilities to coordinate their contributions effectively information about co-actors' actions and the team score affects team performance. Additionally, they test for differences in the effectiveness of specific coordination strategies over time.To assess the coordination of baseball infielders, Pina et al. measure team performance by discriminating between successful and unsuccessful offensive plays in association football. 
They use social network analyses to calculate variables describing a team's passing network and test the predictive value of these network variables for the successfulness of team performance. Li et al. consider co-authorships of published articles as an indicator of team knowledge creation. Social network analysis is used to calculate variables describing the co-author networks and to test relations between network variables and team knowledge creation. Ramos et al. assent to the contributions of social networks analysis to understanding team behavior. With the goal of further expanding the capabilities of this methodological approach, the authors evaluate the use of hypernetworks that simultaneously access cooperative and competitive interactions between teammates and adversaries across space and time and on various levels of analysis.Three contributions engage in network analysis. Feigean et al. use boat velocity as a performance measure. They describe changes of the crew performance after a 6-week training interval and explore to which extent practice induced team benefits are obtained through distinct individual adaptions of the rowing patterns.In a case study involving a newly assembled rowing crew, Stevens and Galloway use EEG data of the members of performing teams to quantitating the teams' neurodynamic organizations. Individual EEG data linked to measures of social coordination during the evolution of performed tasks are transformed into symbolic information units about the team's neural organization and synchronization. The authors discuss the potential the results raise for developing quantitative models of team dynamics that enable comparisons across teams and tasks.Gesbert et al. adopt a phenomenological approach to explore how soccer players' lived experiences are linked to the active regulation of team coordination during offensive transition situations. They present different collective regulation modes that result from the qualitative analyses of the athletes' phenomenological reports.Gorman et al. illustrate the use of viewing teams as dynamical systems for understanding the coordination principles underlying teamwork. They advocate a systems perspective on teamwork that is based on general coordination principles lying within the individuals and present a framework for understanding and modeling teams as dynamical systems.Reviewing empirical findings, Steiner et al. provide an integrative perspective on coordination in interactive sport teams and define a framework that considers the coexisting contributions of shared mental models, situation-specific information and individuals' constructionist perspectives on current game situations to enabling team coordination.Bowers et al.'s contribution is dedicated to team resilience. The concept is used to explain why and how teams are able to maintain performance levels when facing adversity in the form of specific stressors. The authors provide a theoretical model of team resilience as an emergent state at the group level.The contributions to this Research Topic offer a multifaceted insight into current research on team coordination and team functioning. We hope that they inspire further research on the topic as much remains to be learned about the successful coordination of team behavior. 
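As a purely illustrative aside, the kind of network variables described above for the passing-network and co-authorship studies can be derived with standard tools; the sketch below (hypothetical pass list, with networkx metrics chosen for illustration only, not the variables used by the cited authors) shows one way to compute them.

```python
# Illustrative sketch: derive simple passing-network variables for one team.
# The pass list, node names and choice of metrics are assumptions for this example,
# not the variables or data used in the studies summarised above.
import networkx as nx

# Each tuple is (passer, receiver, number of completed passes).
passes = [
    ("GK", "DF1", 12), ("DF1", "MF1", 9), ("MF1", "FW1", 4),
    ("MF1", "MF2", 11), ("MF2", "FW1", 6), ("DF1", "MF2", 7),
]

G = nx.DiGraph()
G.add_weighted_edges_from(passes)

# A few aggregate descriptors of the team's passing structure.
density = nx.density(G)                       # share of possible passing links actually used
out_centrality = nx.out_degree_centrality(G)  # who initiates play most broadly
betweenness = nx.betweenness_centrality(G)    # intermediaries in ball circulation (unweighted)

print(f"network density: {density:.2f}")
print("most central passer:", max(out_centrality, key=out_centrality.get))
print("highest betweenness:", max(betweenness, key=betweenness.get))
```

Such per-team variables could then be entered into a classifier or regression against the success of plays, in the spirit of the analyses summarised above.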
The many areas of human life in which performance is delivered by teams adumbrates the large field of application that could benefit from a deepened understanding.SS, RS, and NC contributed to the ms and gave final approval of the version to be submitted.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "DEAR EDITOR1 Soft tissue tumors such as basal cell carcinoma (BCC) are the most common tumors of this region. The choice of treatment for BCC is surgical demolitions. Since these tumors grow more aggressively and are so likely to recur from the involved surgical margins, the surgery requires often wide resection of the midfacial region\u2019s structures to obliterate adequate margin.2 Thus reconstruction surgery of defected regions plays an important role in the integrity of complex facial expressions, facial functions and the aesthetic outcome. In the present study, we described an eccentric case of a recurrence of a BCC managed by a paramedian forehead flap in an unusual condition.Nose is the most common site of facial skin cancer because of its continuous sun exposure.The patient is an 85-year old man, who was referred to our center for BCC recurrence after previous resection in almost 30 years ago . He undAs it is well known, vascular pedicles in both sides of nasofrontal zone, containing supratrochlear arteries as axial, is the basis of median flap. If the stalk of flap is dissected for any reasons, other vessels and their multiple anastomoses supply forehead and supraorbital zones. In this case, we used paramedian flap in base of previous median flap pedicle. Since all of the cardinal arteries were dissected in the previous surgery, as the narrowed remained pedicle, the blood supply of the used paramedian forehead flap was only based on blood supply from a rich plexus of the anastomosing vessels from the terminal branches of angular artery in the nasal bridge. 3-5Unlike the absence of a certain arterial supply in the flap pedicle there was not any ischemic event after the flap raised and inset. Then the patient candidate for three stage flap operation. The long term follow-up did not show evidence of any abnormalities of the wound or recurrence of the tumor . Median In conclusion, for special occasions, to inset in defected area, it is acceptable to raise paramedian flap in base of previously used flap pedicles, without the presence of any axial arteries and only in base of rich anastomotic arterial plexus in the nasofrontal angle of each side. The authors declare no conflict of interest."} +{"text": "Optimal contribution methods have proved to be very efficient for controlling the rates at which coancestry and inbreeding increase and therefore, for maintaining genetic diversity. These methods have usually relied on pedigree information for estimating genetic relationships between animals. However, with the large amount of genomic information now available such as high-density single nucleotide polymorphism (SNP) chips that contain thousands of SNPs, it becomes possible to calculate more accurate estimates of relationships and to target specific regions in the genome where there is a particular interest in maximising genetic diversity. 
The objective of this study was to investigate the effectiveness of using genomic coancestry matrices for: (1) minimising the loss of genetic variability at specific genomic regions while restricting the overall loss in the rest of the genome; or (2) maximising the overall genetic diversity while restricting the loss of diversity at specific genomic regions.Our study shows that the use of genomic coancestry was very successful at minimising the loss of diversity and outperformed the use of pedigree-based coancestry (genetic diversity even increased in some scenarios). The results also show that genomic information allows a targeted optimisation to maintain diversity at specific genomic regions, whether they are linked or not. The level of variability maintained increased when the targeted regions were closely linked. However, such targeted management leads to an important loss of diversity in the rest of the genome and, thus, it is necessary to take further actions to constrain this loss. Optimal contribution methods also proved to be effective at restricting the loss of diversity in the rest of the genome, although the resulting rate of coancestry was higher than the constraint imposed.The use of genomic matrices when optimising contributions permits the control of genetic diversity and inbreeding at specific regions of the genome through the minimisation of partial genomic coancestry matrices. The formula used to predict coancestry in the next generation produces biased results and therefore it is necessary to refine the theory of genetic contributions when genomic matrices are used to optimise contributions. A) that represents expected relationships assuming neutrality and does not take into account variation due to Mendelian sampling. Thus, although its use has proved to be efficient to manage genetic diversity, it has some limitations. For instance, individuals from the same (full-sib) family inherit different sets of alleles but they are assumed to be equally related. In addition, since matrix A does not consider differences between genomic regions, optimisation of contributions will, on average, control the rate of coancestry to the chosen value, but some genomic regions may have substantially higher rates than desired.It is generally accepted that control of the rate of coancestry provides a general framework to manage genetic variability. Optimal contribution (OC) methods , 2 permiA is replaced by a realised relationship matrix that is calculated by taking into account variation in the level of relationship between animals of the same family and variation between genomic regions ),This study confirms that the use of genomic coancestry in the optimisation of contributions is substantially more efficient in maintaining genetic diversity than the use of pedigree coancestry. Moreover, the use of genomic coancestry permits the targeting of specific genomic regions to minimise the loss of genetic diversity and the extension of the optimisation procedure to include restrictions for additional regions. 
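To make the idea of optimising contributions against a genomic coancestry matrix concrete, the following minimal sketch is offered as an illustration only; the SNP coding, the VanRaden-style relationship matrix and the scipy optimiser are assumptions of this example, not the algorithm used in the study.

```python
# Minimal sketch: choose contributions c (c >= 0, sum(c) = 1) that minimise the
# group coancestry c' K c, where K is a genomic coancestry matrix built from SNPs.
# SNP coding, matrix construction and optimiser are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_ind, n_snp = 20, 500
M = rng.integers(0, 3, size=(n_ind, n_snp)).astype(float)  # genotypes coded 0/1/2

p = M.mean(axis=0) / 2.0                      # allele frequencies
Z = M - 2.0 * p                               # centred genotypes
G = Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))   # VanRaden-style genomic relationship matrix
K = G / 2.0                                   # coancestry (kinship) scale

def group_coancestry(c):
    return float(c @ K @ c)

c0 = np.full(n_ind, 1.0 / n_ind)
res = minimize(
    group_coancestry, c0, method="SLSQP",
    bounds=[(0.0, 1.0)] * n_ind,
    constraints=[{"type": "eq", "fun": lambda c: c.sum() - 1.0}],
)
print("optimised group coancestry:", res.fun)
print("non-zero contributions:", int(np.sum(res.x > 1e-3)))
```

A partial coancestry matrix restricted to the SNPs of a targeted genomic region could be substituted for K, or added as an extra constraint, mirroring the region-specific management discussed above.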
This study also highlighted the need to refine the theory of genetic contributions using realised genomic relationship matrices in order to ensure that optimal contribution methods properly manage the genetic diversity available in a population."} +{"text": "Such individual case reports are valuable because of the lack of approved antivirals for norovirus infections and the increasing reports of enhanced norovirus-associated morbidity and mortality in immunocompromised patients [Tpatients .Nitazoxanide has previously been shown to reduce the duration of symptoms in immunocompetent individuals and to clear infection in a single immunocompromised patient with chronic myeloid leukemia . In contThe mechanism of action of nitazoxanide against norovirus is currently unknown. For other viruses, it has been shown to be 2-fold, involving inhibition of cellular processes that are required for viral infection and potentiation of the innate immune response . TherefoUntil recently, the lack of a robust and reproducible cell culture system for human noroviruses has presented a major barrier to the development and characterization of antivirals and molecular studies of human norovirus replication. Two recent advances now offer the opportunity to further investigate potential therapeutic approaches and potential strain-specific resistance phenotypes. The demonstration that immortalized B cells allow for limited norovirus replication in cell culture provides"} +{"text": "Apis mellifera L. Hatching spines were indeed discovered on first instar A. mellifera. The honey bee hatching process appears to differ in that the spines are displayed somewhat differently though still along the sides of the body, and the chorion, instead of splitting along the sides of the elongate egg, seems to quickly disintegrate from the emerging first instar in association with the nearly simultaneous removal of the serosa that covers and separates the first instar from the chorion. Unexpected observations of spherical bodies of various sizes perhaps containing dissolving enzymes being discharged from spiracular openings during hatching may shed future light on the process of how A. mellifera effects chorion removal during eclosion. Whereas hatching spines occur among many groups of bees, they appear to be entirely absent in the Nomadinae and parasitic Apinae, an indication of a different eclosion process. This article explores the occurrence of hatching spines among bee taxa and how these structures enable a larva on hatching to extricate itself from the egg chorion. These spines, arranged in a linear sequence along the sides of the first instar just dorsal to the spiracles, have been observed and recorded in certain groups of solitary and cleptoparasitic bee taxa. After eclosion, the first instar remains loosely covered by the egg chorion. The fact that this form of eclosion has been detected in five families (Table 1 identifies four of the families. The fifth family is the Andrenidae for which the presence of hatching spines in the Oxaeinae will soon be announced.) of bees invites speculation as to whether it is a fundamental characteristic of bees, or at least of solitary and some cleptoparasitic bees. 
The wide occurrence of these spines has prompted the authors to explore and discover their presence in the highly eusocial The term \u201chatching spine\u201d was used by Apis mellifera, and (3) an investigation into the egg hatching process of species of Apidae the first instar of which do not exhibit hatching spines.This study is presented in three parts: (1) a survey of hatching spines and hatching processes in solitary and cleptoparasitic bees, (2) an investigation of hatching spines in first larval instars of the highly eusocial bee, A. mellifera, oriented with anterior ends toward the left. Larval specimens had been preserved and stored in Kahle\u2019s Solution .All SEM micrographs were captured using a Hitachi S5700 in the Microscopy and Imaging Facility of the American Museum of Natural History. All figures (except for Tetrapedia diversipes Klug (Apidae: Tetrapediini) by The presence of hatching spines in bees was initially detected in Monoeca haemorrhoidalis (Smith) and related taxa (Apidae: Tapinotaspidini) . They debees see . This exbees see for CentThe information in A. mellifera L. (Apidae: Apini) have not mentioned hatching spines are playing a mechanical role in breaking the chorion.\u201d We agree, but if the spines primarily serve to puncture the serosa e.g., thereby Triepeolus dacotensis emerged through an opening in the front of the egg as did Epeolus compactus according to While compiling Regarding other families of bees, we know of no taxa where the first instar emerges from the front end of the egg. Although in Ericrocis lata , may yield valuable intermediates between those of known nonsocial bees and those of A. mellifera.It seems likely that the spines involved in the dissolution of the chorion and serosa of the honey bee are homologous with the hatching spines of nonsocial bees both in location and apparent function. For both nonsocial bees and"} +{"text": "Dipole moments of hydrocarbons are not an easy property to model with conventional 2D descriptors. A comparison of the performance of the most commonly used sets of topological descriptors is presented, each set containing descriptors derived from the regular and Detour distance matrix, Electrotopological State Indices, and the basic number of atoms of each type and bonds. Data were taken on a representative set of 35 hydrocarbon dipole moments previously reported and the classical multivariable regression analysis for establishing the models is employed."} +{"text": "During the last few centuries oceanic island biodiversity has been drastically modified by human-mediated activities. These changes have led to the increased homogenization of island biota and to a high number of extinctions lending support to the recognition of oceanic islands as major threatspots worldwide. Here, we investigate the impact of habitat changes on the spider and ground beetle assemblages of the native forests of Madeira (Madeira archipelago) and Terceira (Azores archipelago) and evaluate its effects on the relative contribution of rare endemics and introduced species to island biodiversity patterns. We found that the native laurel forest of Madeira supported higher species richness of spiders and ground beetles compared with Terceira, including a much larger proportion of indigenous species, particularly endemics. 
In Terceira, introduced species are well-represented in both terrestrial arthropod taxa and seem to thrive in native forests as shown by the analysis of species abundance distributions (SAD) and occupancy frequency distributions (OFD). Low abundance range-restricted species in Terceira are mostly introduced species dispersing from neighbouring man-made habitats while in Madeira a large number of true rare endemic species can still be found in the native laurel forest. Further, our comparative analysis shows striking differences in species richness and composition that are due to the geographical and geological particularities of the two islands, but also seem to reflect the differences in the severity of human-mediated impacts between them. The high proportion of introduced species, the virtual absence of rare native species and the finding that the SADs and OFDs of introduced species match the pattern of native species in Terceira suggest the role of man as an important driver of species diversity in oceanic islands and add evidence for an extensive and severe human-induced species loss in the native forests of Terceira. Relative to their area, islands make a disproportionately large contribution to global biodiversity but have long been severely impacted by human intervention leading several authors to consider that the present biodiversity crisis is particularly acute in island ecosystems \u20134. Over In recent years there has been a growing interest in the assessment of invertebrate extinction. For example, relevant information of high extinction levels on island ecosystems has been put forward for molluscs, a group of invertebrates where the presence of a shell is crucial to evaluate changes in community composition across time , 11, 12.a priori between the observed low-abundance range-restricted species and their vulnerability to extinction because this group of species may include tourists and poorly-sampled species along with the truly rare ones )The analysis of species rarity in groups of low-abundance range-restricted spiders and ground beetles from the native laurel forests of Madeira and Terceira highlighted substantial differences between the two study islands. The absence of rare native species coupled with the presence of a high number of tourists (mostly introduced species) in Terceira\u2019s laurel forests suggests the past extirpation of populations of the most vulnerable native species. It is widely recognized that low abundance and narrow distribution are drivers that predispose species to extinction and both factors have already been associated with previous extinctions in a variety of animal and plant taxa , 79, 83.In contrast with the findings in Terceira, the potential rare species in Madeira Laurisilva are largely true rare species (14 endemic species), but pseudo-rare microhabitat specialists (only native species) are also well represented. The low presence of tourist species in Madeira Laurisilva, combined with the small number of introduced species, illustrates the favourable conservation status of these forests. Nevertheless, it must be emphasized that Madeira Laurisilva has also suffered a considerable destruction in the past which, in combination with invasive species introductions, has led to the loss of some endemic species , 76, 77.In conclusion, Madeira and the Azores have both been affected by human-mediated activities during the last few centuries which have altered the biodiversity in both islands. 
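As an aside for readers unfamiliar with these descriptors, the species abundance distribution (SAD) and occupancy frequency distribution (OFD) mentioned above can be tabulated directly from a sites-by-species abundance matrix; the small sketch below uses invented numbers purely to illustrate the bookkeeping, not the study's data.

```python
# Illustrative sketch: species abundance distribution (SAD) and occupancy
# frequency distribution (OFD) from a sites-by-species abundance matrix.
# The matrix below is invented for the example.
import numpy as np

# rows = sampling sites, columns = species, values = individuals collected
abundance = np.array([
    [12, 0, 3, 1, 0],
    [ 8, 1, 0, 2, 0],
    [15, 0, 0, 1, 1],
    [ 9, 0, 2, 0, 0],
])

# SAD: total abundance per species, ranked from commonest to rarest
sad = np.sort(abundance.sum(axis=0))[::-1]

# OFD: for each possible number of occupied sites (1..n_sites),
# how many species occupy exactly that many sites
occupancy = (abundance > 0).sum(axis=0)
n_sites = abundance.shape[0]
ofd = {k: int(np.sum(occupancy == k)) for k in range(1, n_sites + 1)}

print("rank-abundance (SAD):", sad)
print("occupancy frequency distribution (OFD):", ofd)
```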
However, our comparative study on the spider and ground beetle assemblages of native forests in Madeira and Terceira has highlighted considerable differences in community structure and composition that mirror the differences in severity of human-induced changes between the two islands. The high proportion of introduced species, the virtual absence of rare native species and the finding that SADs and OFDs of introduced species match the pattern of native species in Terceira reinforce the role of man as an important driver of species diversity in oceanic islands, and provide additional evidence of the extensive and severe human-induced loss of the indigenous diversity of Terceira native forests that cannot be fully understood based on the current knowledge on species extinctions. The performance of comparative studies on the community structure and composition of island arthropods addressing the relative contribution of true rare endemics and introduced species to island biodiversity patterns can be very useful to evaluate the extent of species loss because \u201cto neglect such extinctions is to ignore the majority of species that are or were in need of conservation\u201d .S1 Table(DOC)Click here for additional data file.S2 Table(DOC)Click here for additional data file."} +{"text": "Markov and Amasheh). The authors provide a detailed description of the electrophysiological studies that yield the knowledge around the transmesothelial permeability and the set of studies on paracellular permeability that provide significant evidence of tight junction switch in the inflamed human pleural mesothelium. During inflammation a protein expression change from sealing to pore forming claudins in the tight junctions occurs, a finding that could be prone to therapeutic intervention. In the same context an experimental study demonstrated the calcium dependent effects of increased paracellular permeability induced by bradykinin, histamine and thrombin in rat primary mesothelial monolayers (Kuwahara). This study provides further evidence of paracellular permeability changes due to the effects of inflammation related molecules attributable to reorganization of mesothelial cell F-actin cytoskeleton and increased actin polymerization. These aspects of pleural membrane permeability changes can supplement the understanding of the pathophysiology of pleural effusion dynamics.The Research Topic \u201cMesothelial Physiology and Pathophysiology\u201d aimed at providing a forum for investigations conducted in all fields of serosal membranes research. The physiology and pathophysiology of mesothelial cells and membranes is a research field with a growing community given that many of the clinical entities that involve serosal contribution, require the detailed understanding of the underlying biological processes in order to provide effective treatments. The research production volume in the field of mesothelial physiology and pathophysiology is a function of the frequency of the clinical entities that they are involved into. Thus, the most active area is pleural research followed by peritoneal and lastly pericardial. Pleural effusions are a common entity that involves the abnormal accumulation of pleural fluid in the pleural cavity due to abnormal turnover, and the underlying diseases stem from congestive heart failure to infectious lung diseases and several intra- and extra- thoracic malignancies . 
This process renders new roles to pleural mesothelial cells in the context of lung injury and repair mechanisms that can contribute to the development of lung mesenchymal pathologies such as interstitial pulmonary fibrosis. The authors also focus on the therapeutic potential of modulation pleural mesothelial activation and thus parenchymal disease progression. From a different angle the pleural mesothelium is sensitive to the effects of environmental or engineered nanoparticles that through inhalation reach the pleural cavity by disrupting the lung parenchyma and transmigration or in the case of smaller particles through the circulation. A well characterized such pathology is the malignant pleural mesothelioma (MPM) due to asbestos fibers exposure. Currently there is increasing evidence that engineered nanoparticles such as carbon nanotubes (CNTs) can cause a similar pleural pathology. Lohcharoenkal et al. demonstrated that single-walled CNTs induced neoplastic-like transformation in human mesothelial cells after prolonged exposure. The observed effects were due to H-Ras upregulation and ERK1/2 activation that led to increased expression of cortactin, a protein implicated in cell motility and neoplastic development, and thus a more aggressive transformation phenotype . In another experimental study Ady et al. investigated the role of Tunneling Nanotubes (TnTs) in MPM cell communication . The authors in very well designed studies demonstrated TnT occurrence in ex vivo preparations of MPM patient tumor tissues as well as in 2D and 3D MPM cell cultures. Furthermore they provided some interesting preclinical evidence of decreased tumor growth and survival of immunodeficient mice implanted with TnT-primed human MPM cells.A Perspective on the pleural mesothelium in development and disease provided an overview on the mesothelial contribution of lung development and also highlighted the ability of mesothelial cells to differentiate into several types of cells depending on the underlying stimuli . The research in the peritoneal mesothelium has stemmed from questions that have risen due to its use as a dialyzing membrane during the Peritoneal Dialysis (PD) modality in End Stage Renal Disease patients. In such a modality the peritoneal membrane is in a chronic state of inflammation caused by the PD fluids that are daily introduced in the peritoneal cavity. The authors discuss the influence of several insults to the mesothelial cells that are due to the effects of PD fluids, such as increased oxidative stress or increased osmotic stress, that can lead to differences in the mesothelial cells propensity to exhibit intercellular communication via TnTs and highlight the open questions in the field of PD and mesothelial cells TnTs. Pertinent to this is also the Review article of Moinuddin et al., regarding a severe form of peritoneal related disease which is the Encapsulating Peritoneal Sclerosis (EPS). The authors provide a global presentation of EPS in terms of etiology, risk factors, diagnosis and the underlying pathophysiology along with all the available treatment and prevention options. Another manuscript regarding PD was a Mini Review on the available in vivo and ex vivo animal models for the study of PD effects on the peritoneal mesothelial membrane . Nikitidou et al. 
critically discuss the advantages and disadvantages of each model and present the parameters studied in each one concluding in the need for establishment of standardized protocols in order to yield clinically relevant and applicable results.TnTs between peritoneal mesothelial cells in physiological and pathophysiological conditions in the peritoneal cavity has also been the theme of a Mini Review article . The authors provide detailed information on the anatomy and histology of the pericardium, the composition of the pericardial fluid and the physiology underlying its dynamic turnover in the pericardial cavity.Pericardium is the most neglected research field of mesothelial membranes. A Mini Review that filled an important gap in the literature was centered on the physiology of pericardial fluid production and drainage (The author confirms being the sole contributor of this work and approved it for publication.The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "The authors wish to make the following corrections to this paper :The IMU consists of three gyros and three accelerometers. The gyro provides change of Euler angles, while the accelerometers give the specific force. This model is based on the inertial measurement system error modeling method presented by Jonathan Kelly and integrates the modified Rodrigues parameters kinematic equation. We obtained the IMU measuring equations as follows:The ICP algorithm is utilized to estimation the LiDAR-IMU time delay and relative orientation. At the beginning, the transforms between the LiDAR-IMU orientation curves are computed through iteratively selecting Step I: Registration Rules.The ICP algorithm operates by iteratively selecting the closest point between the IMU orientation curve and the LiDAR measurement point, and the concept of ICP proximity requires a suitable distance measurement. The minimum value of distance function can be computed asStep II: Nonlinear Iterative Registration Rules.We can getBy using Lagrange multipliers and incorporating the constraints The authors would like to apologize for any inconvenience caused to the readers by these changes."} +{"text": "Pelvic Schwannoma is an extremely rare event. Laparoscopic approach for radical resection on pelvic region already has been described in the literature. However, with better image quality provided by optic in the laparoscopy we can assure an improvement in this kind of approach for tumor resection. Our goal is to describe and evaluate the results of one laparoscopic resection of presacral and obturator fossa tumor. We present a case of a 60-year-old man with progressive congestion in the right inferior member and CT scan revealing a mass with miscellaneous content located behind of the right iliac vessels and right obturator nerve. Exploratory transperitoneal laparoscopy was indicated. During laparoscopy it was possible to see the mass between the spermatic cord and external iliac artery. We made the identification and preservation of iliac vessels and obturator nerve. Resection of the tumor was performed carefully, allowing the safe removal of the specimen with complete preservation of the iliac vessels and obturator nerve. Mean operative time of 150 minutes. No perioperative complications occurred. Two days of hospital stay. Posterior histopathological exam confirmed that the mass was a Schwannoma. 
The maximization of the image in the laparoscopic surgery offers dexterity and capacity of dissection required for complex mass dissection on pelvic region."} +{"text": "Bilateral mandibular tooth transposition is a relatively rare dental anomaly caused by distal migration of the mandibular lateral incisors and can be detected in the early mixed dentition by radiographic examination. Early diagnosis and interceptive intervention may reduce the risk of possible transposition between the mandibular canine and lateral incisor. This report illustrates the orthodontic management of bilateral mandibular canine-lateral incisor transposition. Correct positioning of the affected teeth was achieved on the left side while teeth on the right side were aligned in their transposed position. It demonstrates the outcome of good alignment of the teeth in the dental arch. A tooth may deviate from its normal path of eruption usually as a result of severe crowding or presence of an obstacle such as a supernumerary tooth or an odontoma. Such eruption deviation can occur with no apparent local or systemic cause, resulting in ectopic eruption of the tooth in a place normally occupied by another permanent tooth. The most frequently ectopically erupted tooth is the mandibular permanent lateral incisor which may occur unilaterally and bilaterally \u20134. A stuEarly diagnosis of a disturbed eruption of a mandibular permanent lateral incisor can be made in young children during the early mixed dentition at the age of 6\u20138 years, though some variation in timing of eruption of that tooth has been reported .Tooth transposition is defined as an interchange in position of two adjacent permanent teeth in the same quadrant of the dental arch or eruption of a tooth in a place normally occupied by another tooth . It is aTransposition in the mandible is relatively rare and occurs between the canine and lateral incisor and is usually unilateral. Only few cases of bilateral transposition of a canine and lateral incisor in the mandible have been reported , 14. TheThe etiology of transposition is unknown and the reason why a tooth deviates from its normal path of eruption is still obscure. Several theories have been suggested such as genetic factors , 21, intTooth transposition has been reported to be associated with other dental anomalies such as missing teeth, small or peg-shaped maxillary lateral incisors, retained deciduous mandibular lateral incisors and canines, rotations and malposition of adjacent teeth, and root dilacerations and impactions .The literature on early detection and treatment procedures for this abnormality is relatively sparse. The purpose of this article is primarily to emphasize early diagnosis and detection of bilateral mandibular tooth transposition and describe its orthodontic management and outcome.The early mixed dentition period, between 6 and 8 years, is the best time for assessing the development and path of eruption of the mandibular permanent lateral incisors. These age group children are usually first examined by a pediatric or general dentist who should evaluate both the dental health condition and the dental development. 
Using a panoramic radiograph is very useful for early diagnosis of the position and path of eruption of the unerupted teeth.A routine panoramic radiograph of a 6-year-old boy, taken at the Pedodontic Department of Tel Aviv University School of Dental Medicine, demonstrated normal dental position and development of the mandibular permanent lateral incisors, which are expected to erupt into their proper position in the arch uneventfully . SurprisThe primary objectives were to derotate the mandibular permanent lateral incisors and upright and reposition them to their normal position next to the central incisors. This will allow the canines and first premolars to erupt into their normal place and avoid the possible development of transposition between the canines and lateral incisors.The early diagnosis is of crucial importance for establishing a correct treatment planning. The retained deciduous lateral incisors and canines were immediately removed at the age of 8 years . EdgewisPeriodic radiographs taken during treatment showed that the right permanent canine was already erupting between the central and lateral incisors, while the left canine and lateral incisor were almost overlapping each other . It woulThe developing mandibular permanent lateral incisor normally resorbs the root of the deciduous tooth during the process of eruption into the oral cavity. It is still unclear what causes a tooth to deviate from its normal path of eruption and erupt ectopically. The presence of an obstacle such as a supernumerary tooth or an odontoma could be a factor causing the deflection and migration of a tooth. Several theories have been suggested as etiological factors to explain why a tooth deviates from its normal path of eruption to become transposed: interchange in position of the anlage at the very early stage of tooth development , geneticIt is not yet clear whether the retained deciduous tooth is the cause or the result of the displacement and ectopic eruption of its successor.Treatment considerations for transposed teeth include repositioning them in their normal place in the dental arch, maintaining them in their transposed position, or extracting one of the transposed teeth.In managing treatment for mandibular tooth transposition several factors should be considered such as the amount of distally displaced lateral incisor and the intrabony position of the permanent canine. Early detection of the abnormal eruption path of the lateral incisor allows for early intervention by uprighting and moving the lateral incisor to its normal place in the arch prior to the eruption of the canine into transposition with the lateral incisor. This was successfully achieved in our presented case only on the left side. On the contralateral side, however, the position of the canine was already between the central and lateral incisors and to avoid a possible risk of root resorption it was allowed to erupt into complete transposition with the lateral incisor. The canine's cusp tip was reshaped to resemble a lateral incisor.Early detection of a distally displaced mandibular permanent lateral incisor at the early mixed dentition, at the age of 6\u20138 years, and timely interceptive intervention may reduce the risk of tooth transposition in the mandible and avoid complex orthodontic therapy. The early orthodontic management and treatment outcome of mandibular bilateral canine-lateral incisor transposition have been described."} +{"text": "There is an error in the caption of Table 2. 
Specifically, the descriptions of the results for Sensitivity and Specificity are switched (sensitivity linked to non-fishing detection instead of fishing detection and vice versa for specificity). Please see the corrected"} +{"text": "Parents of disabled children often face the question whether or not to keep the child at home or to place them. The choice between the two alternatives resides with the parents and various factors influence their decision. Several researchers have identified these factors, which include child-related parameters, family and parental attitudes, the influence of the social environment, and the external assistance provided to the family. In a pilot study, we attempted to isolate the main factors involved in the parental decision either to keep the child at home or place the child by examining a sample comprised of 50 parents of children suffering severe intellectual disability studying in a special education school and 48 parents of adults with intellectual disability working in sheltered workshops. Each parent filled out a questionnaire used in a study in the United States and results of the research indicated parental-related factors as the dominant factors that delayed the placement of their child in residential care; guilt feelings were the main factor."} +{"text": "The potential oncogenic effect of some heavy metals in people occupationally and non-occupationally exposed to such heavy metals is already well demonstrated. This study seeks to clarify the potential role of these heavy metals in the living environment, in this case in non-occupational multifactorial aetiology of malignancies in the inhabitants of areas with increased prevalent environmental levels of heavy metals.Using a multidisciplinary approach throughout a complex epidemiological study, we investigated the potential oncogenic role of non-occupational environmental exposure to some heavy metals in populations living in areas with different environmental levels (high vs. low) of the above-mentioned heavy metals. The exposures were evaluated by identifying the exposed populations, the critical elements of the ecosystems, and as according to the means of identifying the types of exposure. The results were interpreted both epidemiologically and by using a GIS approach, which enabled indirect surveillance of oncogenic risks in each population.2\u03c7 test (p < 0.05)]. The GIS approach enables indirect surveillance of oncogenic risk in populations.The exposure to the investigated heavy metals provides significant risk factors of cancer in exposed populations, in both urban and rural areas [The role of non-occupational environmental exposure to some heavy metals in daily life is among the more significant oncogenic risk factors in exposed populations. The statistically significant associations between environmental exposure to such heavy metals and frequency of neoplasia in exposed populations become obvious when demonstrated on maps using the GIS system. Environmental surveillance of heavy metals pollution using GIS should be identified as an important element of surveillance, early detection, and control of neoplastic risks in populations, at the level of a single locality, but even on a wider geographical scale. The role of chronic exposure to heavy metals in non-occupationally exposed populations is less clear. 
This paper presents some outcomes of a three-year study of this topic, in two counties in North-Western Romania, on randomly selected representative samples of exposed and non-exposed subjects from within the general population. The levels of heavy metals in that environment were studied cross-sectionally, and the results were compared with historical results from similar studies performed during the previous three decades by the same laboratories. Biological tests were performed in order to establish the impact of environmental exposure to heavy metals on people's health. All tests were finally correlated with the health status of the populations of the two regions. The preliminary results presented in this paper show significant differences in the extent of non-occupational exposure to heavy metals of environmental origin in the two selected areas. 2. In the frame of the multifactorial causality of cancers, using a multidisciplinary approach through a complex epidemiological study, we investigated the potential oncogenic role of non-occupational environmental exposure to some heavy metals in populations living in areas with different prevalent environmental levels (high vs. low) of heavy metals. In other words, this study seeks to clarify the potential role of such heavy metals in the living environment in the context of the non-occupational multifactorial aetiology of malignancies in areas with increased environmental levels of heavy metals. The study was performed during the period 2006\u20132008 in the North-Western part of Romania, covering a geographical area containing naturally existing high levels of heavy metals in its environment (Maramures County), to be compared with a control area having much lower naturally existing levels of heavy metals as pollution (Cluj County). The levels of heavy metal ambient pollution and their correlation with the incidence of malignancies were analysed in exposed humans living in the polluted study area (Maramures County), as compared to the results obtained in the less polluted control area (Cluj County). As potential biomarkers, we analysed the levels of chromium (Cr), nickel (Ni), copper (Cu), zinc (Zn), cadmium (Cd), lead (Pb) and arsenic (As) in soil, drinking water, and food, as environmental elements. The results were correlated and interpreted using the GIS mapping method. In this paper, we focused on the following objectives: (1) evaluation of exposure to selected heavy metals in urban and rural populations in the designated areas; and (2) evaluation of the potential use of GIS as a surveillance tool to estimate the impact of environmental factors on the health status of populations. Levels of exposure could be calculated based on identification of the exposed populations, of the critical elements of the ecosystem, and of the routes of exposure. For data collection we used a questionnaire. The questionnaires were completed, not by the subjects interviewed, but by specialists in our institution who had been trained to administer them. The questionnaires were originally validated on a sample of subjects. The research and questionnaire used were approved by the Ethics Committee of the University of Medicine and Pharmacy. 
The questionnaires were used only for subjects who had signed an informed consent form. The evaluation of the risks of cancer incidence associated with chronic non-occupational exposure to low levels of heavy metals, individually or in combination, involved the following objectives: (1) establishing the history of pollution in the areas of interest; (2) determining the concentration of heavy metals in soil, drinking water, food and biological samples obtained from the exposed subjects living in the area studied, and from the control subjects unexposed to heavy metal pollution; and (3) correlating the available epidemiological, environmental and geographical data, stored in databases. The evaluation of the role that chronic exposure to heavy metals in food and drinking water might have had in populations exposed or unexposed to low levels of these metals, and with/without malignancies, was performed by monitoring the intake of heavy metals in food and water, as determined by laboratory analysis of local samples of food and drinking water (data to be published). All cases of malignancies reported to/by public health authorities were included, without assuming any specific relationship between any given heavy metal and any particular neoplasia. 3. The results were interpreted both epidemiologically and using the GIS approach. GIS maps were constructed for both investigated areas, combining the investigated medical and environmental parameters and thus indirectly allowing the surveillance of oncogenic risk in those populations. There are naturally-occurring higher levels of the investigated microelements in the environment surrounding the Baia Mare area than can be found in Cluj County, as shown in the following GIS maps. The prevalence of cancers in the inhabitants of the two areas is higher in the more heavily polluted area (Maramure\u015f County) [9, 10]. The \u03c72 test proved (p < 0.05) that exposure to heavy metals is associated with a significantly increased risk of cancer in exposed populations, both in urban and rural areas. The RISCANSIM software that has been created (by TC) allows the user to create personal scenarios of heavy metals exposure. The user has the possibility to set the concentrations of different heavy metals, and the application can estimate the prevalence of cancers and of genetic disorders for the given scenario. 4. The oncogenic effects of environmental non-occupational exposure to heavy metals in human populations are well-known and accepted. The major objectives to be considered when evaluating exposure to ingested pollutants were: (1) the source of the investigated chemical agent; (2) the means of exposure; (3) measuring/estimating concentrations and duration of exposure; (4) defining the exposed population; and (5) the integral analysis of exposure. The existing industrial sources of pollution (in Maramures County) have a certain influence on the deterioration of the quality of the environment (soil included). The appropriate and accurate evaluation of this phenomenon could be made only if the natural background geochemical composition of the soil in the area is known. The concentrations of available metals in soil constitute mainly the toxic fraction (having influence on plants and underground waters). 
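The chi-square analysis reported in the results above can be outlined in a few lines of code; the 2x2 table below contains invented placeholder counts, not the study's data.

```python
# Illustrative sketch of a chi-square test of association between residence in the
# exposed vs. control county and cancer occurrence. All counts are invented
# placeholders, not data from this study.
from scipy.stats import chi2_contingency

#                cancer cases   no cancer
table = [[120, 9880],    # exposed area (hypothetical counts)
         [ 70, 9930]]    # control area (hypothetical counts)

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}, dof = {dof}")
```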
The maximal values accepted for heavy metals in soil refer exclusively to total concentrations and vary from country to country.We did not intend to assess individually the relationship of each heavy metal with cancer incidence, on account of the simultaneous presence and effects of other non-occupational exposures which the general population experiences in each area. The relationship between the levels of environmental pollution and oncogenic risks of exposed populations are well known, even if some opinions are divergent 5.Despite the multifactorial etiological factors influencing the incidence of oncological conditions, the role of heavy metals in the non-occupational environment aspects of our daily lives appears to be important but \u2013 unfortunately \u2013 is insufficiently studied and understood.Chronic exposure to some heavy metals is one of the relevant oncogenic risk factors in exposed populations.Statistical significant associations between environmental exposure to selected heavy metals and incidence of neoplasia in exposed populations has been clearly demonstrated, and could be represented on maps using GIS systems.The environmental surveillance of heavy metals pollution using GIS could be an important element in the surveillance, early detection, and control of neoplastic risks in populations, including when applied more generally than when applied only in one locality.A high priority must be a determination to enlarge the spectrum of health-related variables to be used as indicators for environmental factors .New regulations would have to include maximal admitted values for the concentration of available metals in soil that represent the potential toxic fraction for plants which might enter the food chain."} +{"text": "New Ecological Paradigm (NEP) Worldview has a stronger standing among the studied Chinese farmers than the Dominant Social Paradigm (DSP) Worldview.Failure to curb water pollution in China brings to the fore the issue of environmental values and attitudes among Chinese farmers. Applying the New Ecological Paradigm Scale this study finds that the pro-environmental value of The state of the Chinese environment has in recent decades been painted in bleak colours and assessed to be at the brink of collapse . Seriouspolicy for the environment \u2026. requires a change in thinking and a change in attitudes. It requires environmental values at the heart of environmental policy\u2019 ."} +{"text": "The high negative bias of a sample in a scanning electron microscope constitutes the \u201ccathode lens\u201d with a strong electric field just above the sample surface. This mode offers a convenient tool for controlling the landing energy of electrons down to units or even fractions of electronvolts with only slight readjustments of the column. Moreover, the field accelerates and collimates the signal electrons to earthed detectors above and below the sample, thereby assuring high collection efficiency and high amplification of the image signal. One important feature is the ability to acquire the complete emission of the backscattered electrons, including those emitted at high angles with respect to the surface normal. The cathode lens aberrations are proportional to the landing energy of electrons so the spot size becomes nearly constant throughout the full energy scale. 
At low energies and with their complete angular distribution acquired, the backscattered electron images offer enhanced information about crystalline and electronic structures thanks to contrast mechanisms that are otherwise unavailable. Examples from various areas of materials science are presented. The idea of immersing the sample under observation in a strong electric field by means of electrons is one of oldest principles appearing in the development of electron microscopy. By the early 1930s, the so-called immersion objective lens was described showed aThe production and employment of emission electron microscopes began in the early 1960s in Germany . SurfaceAdvantages of reducing the landing energy of the focused primary electron beam in the SEM were recognized at the very beginnings of the development of this kind of instrumentation. Thei.e., the electronic energy band structure characteristic for a particular crystal system and its orientation with respect to the incident electron beam.At low energies, the crystalline information is enhanced, as for example, the grain contrast in polycrystals. The reasons for this phenomenon include the dependence of the generation and absorption of SE as well as of electron backscattering on crystal orientation, together with the increased influence of surface layers such as oxides also having their thickness orientation dependent. Angular variations in the backscattered electron (BSE) yield from crystals have been shown to dominate over those of the SE . As we wIn addition to the possibility of arbitrarily reducing the landing energy of electrons, the above-sample electric field is capable of collimating toward the optical axis the complete emission of BSE. Traditionally, the BSE emission has been acquired with a coaxial detector placed below the objective lens and held on ground potential. In this case the straight trajectories of the BSE impinge on the detector within a cone limited up to, say, a polar angle of 45\u00b0 with respect to the optical axis, leaving the high-angle BSE unutilized. Experience has shown the high-angle BSE bearing significantly enhanced crystallographic information; this effect has been repeatedly verified, although clear explanation is still lacking. When immersing the sample in a strong electric field, we can easily control acquisition of the high-angle BSE already at only moderately reduced landing energy of electrons.i.e., the diffraction contrast . Th. Th50]. in-situ treatment was performed so that very thin oxide and hydrocarbon layers may form on surfaces before their loading in UHV and participate on phenomena observed.i.e., the TEM at units of keV [Examination of free standing thin films with transmitted electrons is traditionally performed with the directly imaging transmission electron microscope (TEM) by means of fast electrons. Typical energies of the TEM varied during the history from tens of keV up to units of MeV and then back to some 200 to 400 keV as the most frequent values at present. Use of very high energies was motivated with efforts to improve resolution by reducing the diffraction aberration. With onset of the Computer-aided Design (CAD) programs for design of lenses and columns that provided lower geometrical aberrations, the MeV range was abandoned, while later the aberration correctors made possible the recent TEM solutions operated in tens of keV. 
On the other hand, efforts to increase the image contrast led to the opposite extreme, s of keV .Scanning transmission electron microscopy (STEM) as the traditional counterpart to the TEM is performed with relatively rare dedicated instruments, aberration corrected and reaching resolution in tens of pm ,53, and The inelastic mean free path (IMFP) of electrons in solids exhibits more or less uniform energy dependence with a global minimum falling below 1 nm at about 50 eV . Below tPilot experiments with the VLESTEM mode revealedThe VLESTEM method is in its initial phase so only few preliminary results are available but prospects include the possibilities of detailed examination of any 2D crystals. Implementation of multiple channel detectors with separation both in polar and azimuthal angles is desirable.Tuning the information depth by means of the landing energy of electrons may be considered the most straightforward, if not trivial application of the SLEEM method. However, very thin surface layers in units and tens of nm in thickness stop to be fully transparent and hence start contributing to the image signal only at energies well below 1 keV that are usually not available in conventional SEM devices without any kind of beam deceleration. For this reason, we briefly mention this family of samples, illustrated in When immersing the sample in a scanning electron microscope in strong electric field, we faced certain restrictions regarding the sample shape and surface treatment but we gained a totally free choice of the landing energy of electrons and, as a side effect, the possibility of complete acquisition of the backscattered electrons at all energies from units of keV to fractions of eV. These features offer a broad range of enhanced or improved contrast mechanisms and even new contrast mechanisms not activated in traditional instruments. Recently the major producers of electron microscopes have been including in their products the possibility of biasing the sample in the kV range so instrumentation is available for the above described imaging method. Thus, precautions have been made to accelerate a so far very slow accumulation of data collected with the SLEEM method."} +{"text": "Understanding collective behavior of moving organisms and how interactions between individuals govern their collective motion has triggered a growing number of studies. Similarities have been observed between the scale-free behavioral aspects of various systems . Investigation of such connections between the collective motion of non-human organisms and that of humans however, has been relatively scarce. The problem demands for particular attention in the context of emergency escape motion for which innovative experimentation with panicking ants has been recently employed as a relatively inexpensive and non-invasive approach. However, little empirical evidence has been provided as to the relevance and reliability of this approach as a model of human behaviour.This study explores pioneer experiments of emergency escape to tackle this question and to connect two forms of experimental observations that investigate the collective movement at macroscopic level. A large number of experiments with human and panicking ants are conducted representing the escape behavior of these systems in crowded spaces. The experiments share similar architectural structures in which two streams of crowd flow merge with one another. 
Measures such as discharge flow rates and the probability distribution of passage headways are extracted and compared between the two systems.Our findings displayed an unexpected degree of similarity between the collective patterns emerged from both observation types, particularly based on aggregate measures. Experiments with ants and humans commonly indicated how significantly the efficiency of motion and the rate of discharge depend on the architectural design of the movement environment.Our findings contribute to the accumulation of evidence needed to identify the boarders of applicability of experimentation with crowds of non-human entities as models of human collective motion as well as the level of measurements (i.e. macroscopic or microscopic) and the type of contexts at which reliable inferences can be drawn. This particularly has implications in the context of experimenting evacuation behaviour for which recruiting human subjects may face ethical restrictions. The findings, at minimum, offer promise as to the potential benefit of piloting such experiments with non-human crowds, thereby forming better-informed hypotheses. Understanding collective behavior of complex systems such as the flocking behavior of fish and birds and ant Growing interest towards an understanding of the collective behavior of non-human organisms as well as increasing number of studies that have been carried out to examine patterns of scale-free behaviour of pedestrian crowds under emergency conditions and under the impact of different architectures using biological entities inspired this study. In other words, in recent years, exploring the collective behavior of biological entities during extreme escape has received an increasing attention as an approach of gathering experimental evidence for exploring crowd dynamics under emergency conditions. This approach has been proved most useful for gaining basic insights as to aggregate measures of collective movements. Despite the fact that utilizing this innovative approach has received acceptability in the literature, the reliability of results obtained from this type of experiments remains to be discussed as to whether or not the findings emerged from these experiments, panicking non-human entities, would translate to human behavior in real context.Moreover, modeling pedestrian crowd movement has received a growing recognition in the recent years due to an increase in crowd-related incidents around the world. In the spirit of guaranteeing the safety of pedestrians, an accurate understanding of pedestrian crowd behavior and the rules that govern their motion is of critical importance. As suggested by , the arcOne of the methods of data provisioning that has been employed recently is experimenting with non-human entities particularly under extreme conditions of escape suggesting the practicality of this approach for gaining basic insights towards the collective patterns and characteristics of pedestrian flows under emergency conditions. Despite the great effort made to design and perform animal crowd experiments and despite the great deal of evidence that has emerged as a result, the literature does not offer sufficient evidence as to the extent of relevance of this experimental method. The method is completely non-invasive to humans and is also often much less expensive compared to its equivalent human-experiment method. 
However, the literature lacks adequate evidence and systematic studies as to how reliably animal crowd models of behaviour can be used as a proxy for their human peers.As Haghani and Sarvi recentlyUp to now, mice and ants have been the most common organisms of experimentations. One of the early studies in this field investigA large number of studies using animal models of escape have explored the effect of architectural designs of the escape environment and their impact on evacuation efficiency. A group of studies of ant experiments \u201326 invesIn order to understand the impact of conflicting geometries on the emergency response of pedestrian facilities, a number of studies , 30 inveAnother stream of studies has extensively practiced biological entities particularly panicked ants to gain insight towards complex phenomena such as \u2018faster is slower\u2019 effect which refers to a situation where elevated desire for escape may actually impede escape efficiency. There are studies that have addressed this particular topic with the use of animal experiments. In a study using ants panicked (repelled) by citronella escaping from triangular chambers with one narrow exit, Soria, Josens observedSobhani, Sarvi examinedBased on this overview of the existing literature, there is limited knowledge regarding the consistency between conclusions drawn based on the experiments with non-human entities and what pedestrian crowds actually behave under emergency conditions. There is also limited evidence in the extent to which these conclusions can be generalized to the actual emergency conditions. Additionally, understanding the effect of geometrical features and design of merging streams on the motion patterns of human and non-human entities can be explored using experimental setups. As revealed through a comprehensive literature review, most of the previous studies in pedestrian crowd domain and biological domain have focused on investigating collective movements within simple environments and under normal condition (lower level of rush). However, there are limited studies which conducted to understand the effect of various architectural settings on the collective behavior.In this study, we adopted two main approaches for gathering empirical evidence for examining the aforementioned issues in which humans and ants were the subjects of experiments. We investigate the emergency escape behaviour of pedestrians and ants evacuating crowded confined spaces. We quantitatively contrast the scale free behaviour of non-human entities along with collective movements of humans and estimated the distribution of output rate versus time, the probability distribution of time intervals between the passage of two successive entities and velocity distribution versus time and distance. The impact of different designs of merging configurations on the emergency response rate of escape areas is also investigated.Linepithema humile), as abundantly available species in Melbourne. Further, according to Monash University Animal Ethics Office, animal ethics clearance is exempted for this particular species of ants (Monash University 2015). In these experiments desire to flee and speed are taken to extremes in order to observe how the patterns of movements of these organisms are influenced by the layouts of escape area and presence of conflicting maneuvers. 
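The aggregate measures mentioned above, namely the output (discharge) rate over time and the time intervals (headways) between the passage of two successive entities, can be computed directly from recorded exit-passage times. The sketch below illustrates one way of doing so on hypothetical timestamp data; it is not the study's own processing code, and the function and variable names are assumptions.

```python
# Illustrative sketch: discharge rate, passage headways and a CCDF for tail
# inspection, computed from a list of exit-passage timestamps (seconds).
import numpy as np

def discharge_rate(passage_times, bin_width=1.0):
    """Number of entities passing the exit per time bin (entities per second)."""
    t = np.sort(np.asarray(passage_times, dtype=float))
    edges = np.arange(t[0], t[-1] + bin_width, bin_width)
    counts, _ = np.histogram(t, bins=edges)
    return edges[:-1], counts / bin_width

def headways(passage_times):
    """Time intervals between two successive passages."""
    t = np.sort(np.asarray(passage_times, dtype=float))
    return np.diff(t)

def ccdf(values):
    """Complementary cumulative distribution, useful for inspecting heavy tails."""
    v = np.sort(np.asarray(values, dtype=float))
    p = 1.0 - np.arange(1, v.size + 1) / v.size
    return v, p

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_times = np.cumsum(rng.exponential(0.4, size=200))  # hypothetical data
    t_bins, rate = discharge_rate(fake_times)
    h = headways(fake_times)
    print("mean discharge rate:", rate.mean(), "1/s")
    print("median headway:", np.median(h), "s")
```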
In these experiments, the colonies of Argentina ants were collected from the various sites at the Clayton Campus of Monash University and prepared for experiments following the procedures describes in [Experimentation with non-human subjects allow a wide range of phenomenon to be explored that are otherwise hard to be replicated in experiments with human without concern about participants\u2019 safety. In this study, we performed a wide range of experiments with Argentine ants . In addition, analysis of human experiments reveals that complementary cumulative distribution function (CCDF) of pedestrian\u2019s headway display slowly decaying tail for the asymmetrical merging setups and the corresponding power-law exponent is significantly lower for the asymmetrical merging settings. This suggests that the presence of asymmetrical merging configurations in escape area might increase the possibility of longer headways and blockage in both systems. According to the figure, in the system of panicking ants as well the curves corresponding to the symmetrical layouts are more skewed to the left indicating a shorter headway and less delay which is in consistence with the empirical evidence provided by our experiments with humans.One can note the similarity between the patterns indicated by the left panel of It is also noticeable (more distinctly based on plot 5(c) (ant experiment) and less distinctly based on plot 5(a) (human experiment) that the passage headway in downstream of the merging setup has greater variability in asymmetric setups than in symmetric setups as reflected in the length of the bow-and-whisker plots representing them. This could be a further indirect indication of more frequent traffic instability, flow interruption or transient clogging in such setups making quite large values of time intervals between successive pedestrians to be observable.To investigate the impact of merging configurations in escape areas on the collective movement of evacuees in terms of distribution of speed, average escape velocities along the corridors of the experimental layouts were estimated. We used a particle image velocimetry tool PIVLAB for the The speed analysis reveals the formation of \u201cstop and go\u201d phenomenon in the flow of the merging branches which causes the flow in one of the merging streams stop or experience a very low speed compared to the other stream. This crowd turbulence phenomenon has been reported by . As can Temporal fluctuations of the velocities in two single branches for human experiments under similar merging configurations are illustrated in Using the trajectories information we also measured the density and average velocity of the subjects at each point in time and space through discretisation of the evacuation space as well as the evacuation time. The calculated spatial averages of velocities and densities were also averaged over time for each scenario and visualised using colour coding methods in order to understand the considerable difference in velocity and density variations and fluctuations associated with the single branches in different merging layouts. Detailed analysis of velocity and density reveals that distribution of densities is not homogeneous and the density near the junctions is higher than other parts of the merging setups resulting in a abrupt reduction of velocity in those areas. 
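As a rough sketch of the discretisation step described above (binning the evacuation space and time, then averaging occupancy and speed per cell), the code below operates on hypothetical trajectory samples of the form (entity id, time, x, y). It illustrates the general approach rather than the processing pipeline actually used with PIVlab, and the cell and time-bin sizes are assumptions.

```python
# Illustrative sketch: space-time binned density and mean speed from trajectories.
# Input rows: (entity_id, time_s, x_m, y_m). Cell and bin sizes are assumed.
import numpy as np
from collections import defaultdict

def bin_density_and_speed(rows, cell=0.5, dt=1.0):
    """Map (t_bin, x_bin, y_bin) -> crude occupancy density [1/m^2] and mean speed [m/s]."""
    by_id = defaultdict(list)
    for eid, t, x, y in rows:
        by_id[eid].append((t, x, y))

    counts = defaultdict(int)
    speeds = defaultdict(list)
    for samples in by_id.values():
        samples.sort()  # chronological order per entity
        for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
            if t1 <= t0:
                continue  # skip duplicate timestamps
            v = np.hypot(x1 - x0, y1 - y0) / (t1 - t0)  # finite-difference speed
            key = (int(t1 // dt), int(x1 // cell), int(y1 // cell))
            counts[key] += 1
            speeds[key].append(v)

    # Occupancy-based density estimate; assumes roughly one sample per entity per bin.
    density = {k: c / cell**2 for k, c in counts.items()}
    mean_speed = {k: float(np.mean(v)) for k, v in speeds.items()}
    return density, mean_speed
```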
In addition average density in the area that two single branches start to join each other in asymmetrical merging configurations is severely high leading to considerably low velocity in this type of merging setups.Whether there is any element of truth to be learnt from such experiments and the type of questions that can be addressed using animal crowd movement experiments as models of their human counterparts have been largely unknown requires replication of crowd motion experiments in both contexts. Systematic links between the contexts of the collective movement of humans and non-humans has so far been scarce in the literature. Given the non-invasiveness and relative logistical easiness of experiments with non-human entities as opposed to staging similar experiments with large groups of people, the answer to this question could have significant implications for the continuation of research in the field of crowd dynamics. Here in this work, we reported on linking between collective motion of humans and ants in experimental conditions as an attempt to bridge this gap.Based on previous studies, some degree of similarity between the collective motions of different biological systems has been observed. However, it is highly unknown that whether or not inferences drawn based on experiments with non-human organisms are consistent with the actual behavior of pedestrian crowds in near field laboratory experiments or to what extent they are deviated from the real human behaviour. In other words, the extent to which and the circumstances and contexts under which findings generated from experimentation with non-human organisms can be relied upon as the proxy of the actual behaviour of pedestrian crowds in natural settings remains to be investigated.In this study, a large number of experiments are conducted with panicking ants and humans providing the possibility of observation and quantitative analysis of the collective movements. A comprehensive data analysis was conducted by examining the motion trajectory of the participants which provides some measures of their scale-free behaviour. This led to findings both in terms of crowd behavior during evacuation from conflicting geometries as well as findings related to degree of resemblance between the collective pattern of movements emerged from both data sets. In other words, the laboratory experiments with human also shed interesting insights into the dynamics of human crowds during escape from environments that impose conflicting maneuvers (i.e. merging corridors).The results inferred from the collective movements of humans under emergency condition (participants were asked to run and evacuate the setting as quick as possible) were in some way the same as those obtained from the experiments with non-human entities. This suggests that extreme emergency conditions might lead to the emergence of scale-free behavior. With regard to the resemblance between the collective movements of humans and ants, our results confirm that the patterns associated with the velocity variations and the fluctuations derived from both set of observations shares a remarkable degree of similarity in terms of the creation of stop-and-go phenomenon and severe imbalance between the distribution of velocities of the two merging branches over space and time. 
In addition the analysis of the escape rate in both systems suggest that despite the considerable dependency between the average escape rate and its distribution over time and merging angle, the relation between them does not prove to be monotonic.However, there are certain limitations that need to be taken into consideration in terms of drawing possible generalisation to all escape concepts since this single work cannot be regarded as adequate proof for the reliability of ant crowd experiments, although it offered promising evidence to their possible relevance at least at the level of collective motion and based on aggregate measures. Much further evidence, however, is required for a better understanding of the possible similarities between the dynamics of humans and non-human crowd traffic. We believe data collected from experimentation with two organisms as complements suggesting more of experiments with human is required to make solid conclusions as to the limits of generalizability and applicability of experiments with non-human entities.S1 Video(WMV)Click here for additional data file.S2 Video(WMV)Click here for additional data file.S3 Video(AVI)Click here for additional data file.S4 Video(AVI)Click here for additional data file.S5 Video(AVI)Click here for additional data file.S6 Video(AVI)Click here for additional data file.S7 Video(AVI)Click here for additional data file.S8 Video(AVI)Click here for additional data file.S9 Video(AVI)Click here for additional data file.S10 Video(AVI)Click here for additional data file.S11 Video(AVI)Click here for additional data file.S12 Video(AVI)Click here for additional data file.S1 File(7Z)Click here for additional data file."} +{"text": "Transient osteoporosis of the hip (TOH) is a benign, selflimiting condition characterised by acute onset groin pain in adults. Early diagnosis is important to differentiate it from progressive conditions such as osteonecrosis. We report on a middle-aged male who presented with right groin pain without any prior trauma. The diagnosis of transient osteoporosis of hip was confirmed by Magnetic Resonance Imaging (MRI) and he was successfully treated with a course of Alendronate sodium, anti-inflammatory analgesics and a period of non-weight bearing ambulation. Transient osteoporosis of the hip (TOH) is an idiopathic, selflimiting condition observed mostly in men in the fourth or fifth decade of life and women in the third trimester of pregnancy or the immediate postpartum periodA 43 years old Asian gentleman presented with acute onset of right thigh pain, over the anterolateral aspect of two weeks duration. There were no significant findings on clinical examination and he was referred for physiotherapy.On review one month later, the pain was persistent and now localized to the groin with local tenderness and terminal restriction of movements. Blood tests including inflammatory markers, serum uric acid and serum rheumatoid factor were negative. An MRI scan of the pelvis was requested to obtain further detail and it confirmed typical changes of transient osteoporosis of the right hip . The patAt follow up seven weeks later, the patient had complete relief of pain with full painless range of movements. At further review five months after diagnosis and treatment the patient was mobilizing with full weight bearing. He had a full and painless range of movements at the right hip. Repeat MRI scan confirmed complete resolution of changes . 
With thAt review seven months after diagnosis, the patient was pain free with full range of movement at the affected hip and had resumed normal activities. He was discharged from outpatient follow up care.Transient osteoporosis is a rare clinical syndrome that was first described by Curtiss and Kincaid in 1959. TOH typically involves the middle-aged population and is more frequent in men than women. In women, it is more common during the third trimester of pregnancy. It is idiopathic with no history of traumaThis uncommon condition has been reported in the literature only in the form of case reports and small case seriesEarly differentiation of TOH and osteonecrosis is crucial as the natural history, treatment and outcome of the two conditions are different. TOH is known to have complete clinical and radiological recovery while untreated osteonecrosis of the femoral head leads to progressive damage and arthritis of the hipIn conclusion, a high level of suspicion is needed for early diagnosis of TOH. Typical MRI findings are diagnostic and unnecessary invasive investigation and intervention can be avoided. The treatment of this self-limiting condition is symptomatic and bisphosphonates including alendronate sodium are useful in its management."} +{"text": "Anhanguera-like pterosaurs from the contemporaneous Toolebuc Formation of central Queensland and the global distribution attained by ornithocheiroids during the Early Cretaceous. The morphology of the teeth and their presence in the estuarine- and lacustrine-influenced Griman Creek Formation is likely indicative of similar life habits of the tooth bearer to other members of Anhangueria.The fossil record of Australian pterosaurs is sparse, consisting of only a small number of isolated and fragmentary remains from the Cretaceous of Queensland, Western Australia and Victoria. Here, we describe two isolated pterosaur teeth from the Lower Cretaceous (middle Albian) Griman Creek Formation at Lightning Ridge and identify them as indeterminate members of the pterodactyloid clade Anhangueria. This represents the first formal description of pterosaur material from New South Wales. The presence of one or more anhanguerian pterosaurs at Lightning Ridge correlates with the presence of \u2018ornithocheirid\u2019 and Pterosaurs first appeared in the Late Triassic and diversified rapidly into the Jurassic. At the peak of their diversity in the Cretaceous, pterosaurs where present on all continents, including Antarctica . During By contrast, the fossil record of pterosaurs in Australia is very sparse and composed solely of isolated and fragmentary remains from the Cretaceous of Queensland, Victoria and Western Australia . The taxMythunga camara are preserved as isolated crowns, missing the roots and with eroded distal tips.LocalityLRF 759 was excavated in the 1970s from an underground mineral claim at \u2018Holden\u2019s Four Mile\u2019 opal field, approximately 4 km south west of Lightning Ridge . LRF 314PreservationBoth LRF 759 and LRF 3142 are isolated tooth crowns with eroded apices; LRF 759 is also missing a portion of the distal part of the crown near the base. Both teeth are preserved as translucent potch, a form of non-precious opal; in LRF 759 the potch displays mauve play of opal colour, whereas LRF 3142 contains areas of dark grey within honey-coloured potch. 
In LRF 759, the translucency of the potch reveals a thin-walled basal cavity that has been infilled with a body of purple opal and buff-coloured mudstone ; the samDescriptionLRF 759 has an eUnlike LRF 3142, in LRF 759 the tooth crown is ornamented by longitudinal grooves extending essentially apicobasally along the surface . A serieLRF 3142 is a genElongate, conical teeth similar in morphology to those described above have been previously reported from Lightning Ridge, and include plesiosaurs , ichthyoCooyoo australis .in situ dentary and maxillary teeth of Mythunga camara ; and Aussiedraco molnari (QM F10613), both from the Lower Cretaceous of central Queensland. The jaw fragment WAM 68.5.11 , in the absence of articulated or associated skeletal material they typically are insufficient for identification of the tooth-bearer to a specific or generic level. In Australia, this problem is exacerbated by the scarcity of pterosaur remains to which the teeth described here can be compared. It is currently not possible to determine with certainty whether the Lightning Ridge teeth belong to one of the named or unnamed but potential Australian pterosaur taxa, or whether they constituted the dentition of a taxon that is yet to be discovered. Furthermore, given the subtle observed morphological differences between the two teeth it is also uncertain whether the two teeth are derived from a single taxon or separate taxa. Further finds are needed in order to evaluate whether these differences are indicative of the presence of more than one pterosaur taxon, or whether taphonomic or other processes have affected the appearance of the tooth crowns.The identification of anhanguerian teeth from the Griman Creek Formation is consistent with the reports of anhanguerid-like and \u2018ornithocheirid\u2019 skeletal material from the Early Cretaceous of Queensland and the Isolated teeth excavated from the Lower Cretaceous Griman Creek Formation at Lightning Ridge, New South Wales, are identified as pertaining to pterosaurs. The oval basal cross-section, slight distal recurvature, irregularly-striated enamel ornamentation, and slender crowns bear a striking similarity to those of anhanguerian pterosaurs. This represents the first description of pterosaurs from New South Wales and contributes to the growing diversity of vertebrates from the Griman Creek Formation. The isolated remains cannot be conclusively assigned to any known pterosaur taxon, although their presence is consistent with the known record of anhanguerid-like pterosaurs from the contemporaneous Toolebuc Formation of central Queensland. The simultaneous presence in New South Wales and Queensland of anhanguerian pterosaur remains in sediments displaying characteristics of shallow-water lagoonal and lacustrine depositional environments indicates likely similarities in life habits of these pterosaurs. 
Further finds and descriptions of Australian pterosaurs are necessary to further characterise the diversity of this poorly understood group of reptiles, both locally and across Australia as a whole."}
+{"text": "The data presented in this article are related to the research article entitled \u201cOptions for the remediation of embankment dams using suitable types of alternative raw materials\u201d. Value of the data: \u2022The data present the optimised technology for the remediation of embankment dams using suitable alternative raw materials and could be used by other researchers. \u2022The rheological properties of the grouts were measured using a Marsh cone and compared with other chemical grouts. \u2022The data allow other researchers to extend the statistical analyses. The dataset of this article provides information about an innovative technology for the remediation of embankment dams using suitable types of alternative raw materials. The experiments with the optimal grout were carried out on several types of embankment dams in the Czech Republic. The experiments were carried out before the winter, so that the dams would be ready for the upcoming winter season and especially for the spring floods caused by melting snow and regional or local rainfall. The effects of the designed grout mixtures on leaks from the dams were monitored. GE clay, together with two types of ash and lime (L), was selected for the design of the optimal grouts. During the design, the mixtures were combined with selected types and quantities of additives. Before the application of the grouts in practice, tests were performed on the grouts in the raw and hardened state. The viscosity (raw state), compressive strength and volumetric shrinkage (hardened state) of the designed grouts were measured. The viscosity was determined according to EN 12 715."}
+{"text": "Mice are arguably the dominant model organisms for studies investigating the effect of genetic traits on the pathways to mammalian skull and teeth development, thus being integral in exploring craniofacial and dental evolution. The aim of this study is to analyse the functional significance of masticatory loads on the mouse mandible and identify critical stress accumulations that could trigger phenotypic and/or growth alterations in mandible-related structures. To achieve this, a 3D model of a mouse skull was reconstructed based on Micro Computed Tomography measurements. Upon segmenting the main hard tissue components of the mandible such as incisors, molars and alveolar bone, boundary conditions were assigned on the basis of the masticatory muscle architecture. The model was subjected to four loading scenarios simulating different feeding ecologies according to the hard or soft type of food and chewing or gnawing biting movement. Chewing and gnawing resulted in varying loading patterns, with biting type exerting a dominant effect on the stress variations experienced by the mandible and loading intensity correlating linearly with the stress increase. The simulation provided refined insight into the mechanobiology of the mouse mandible, indicating that food consistency could influence micro-evolutionary divergence patterns in the mandible shape of rodents. The developmental pathways related to rodent skull morphogenesis have been extensively studied in the past years, identifying the effect of all traits related to their DNA sequence on the shape of the various skull bones, including the mandible.
Consequently, food consistency could result in evolutionary divergence patterns triggered through altered mandibular growth. However, the mandible of rodents is exposed to diurnal forces of high complexity and therefore the effects of mechanical loadings remain unclear due to limitations inherent to current experimental models. Computational methods may offer a good alternative to heuristic/experimental methodologies having the potential to answer important endured questions and confirm or not generalized assumptions on the mechanobiology of mandibles.In order to consolidate this hypothesis, we undertook an in situ mechanical response of biological systems modeling has naturally evolved from traditional engineering disciplines to the study of living tissues, rapidly covering a broad spectrum of clinical applications and FE modeling techniques were employed to determine whether different masticatory forces and movements are able to significantly alter the stress/strain equilibrium of the mouse mandible. The obtained results indicate that food consistency may be associated with micro evolutionary modifications in rodent mandible morphology that will overall impact on skull shape adaptations.A mouse skull (C57Bl/6-Sv129 genetic background) was scanned by \u03bcCT in order to reconstruct a 3D model required for the intended analysis. This technique is capable of producing 2D images of various structures, based on their ability to withstand the emitted X-radiation. As bone and all other hard tissues have a unique spectrum of X-ray permeability, they shade in different tones of white/gray within a CT slice, thus allowing their relatively unhindered segmentation with some minor overlapping of hard tissue types. This resulted in a 2D outline of the various model entities within every scan, while the 3D geometrical data set was generated by overlaying consecutive slices. Data acquisition was in accordance to DICOM .*.stl files (triangle surface models) of the bone contour through software programs. There exists, however, a consensus throughout literature that highly accurate models require semi-automated segmentation, supported by manual correction of the threshold results by experienced operators. A multi threshold segmentation technique was employed for the purpose of this study and the mean gray-scale within the image was calculated by employing sensitive edge detection filters, to distinguish the apparent tissue types , and chewing (molar biting) and both of them examined for two load intensities, corresponding to a food type each . Gnawing was simulated with a purely vertical load applied at the tip of the incisors whereas chewing considered a loading direction inclined by 30\u00b0 to the dorsal-ventral axis of the molars. The fracture strength of these pellet fragments was determined experimentally, based on uniaxial compression tests, and applied as the masticatory load during the two biting scenarios.As the size of the food pellets was disproportionally large to the mouse skull dimensions, the gnawing/chewing load was established on the basis of fragmented pellet bites fitting the animal's mandible size. The fragments, considered during the compression tests, were in these terms similar in size to the rodent's oral cavity e.g., slightly wider than both incisal edges, while height and depth were restricted to the inter-incisor distance. 
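To make the loading arithmetic explicit, the following small sketch (not part of the original model files) shows how a bite force derived from the pellet compression tests could be resolved for the two biting scenarios: purely vertical at the incisor tips for gnawing, and inclined by 30 degrees to the dorsal-ventral axis at the molars for chewing. The force magnitudes used below are placeholders, not the measured fracture loads.

```python
# Illustrative sketch: resolving the bite force for the gnawing and chewing scenarios.
# Force magnitudes are placeholders, not the measured pellet fracture strengths.
import math

def bite_components(force_N: float, incline_deg: float):
    """Split a bite force into dorsal-ventral (Fz) and anteroposterior (Fx) parts."""
    a = math.radians(incline_deg)
    return force_N * math.cos(a), force_N * math.sin(a)  # (Fz, Fx)

hard_food, soft_food = 10.0, 4.0  # N, assumed load intensities for the two food types
scenarios = {
    "gnawing_hard": bite_components(hard_food, 0.0),   # vertical load at incisor tips
    "gnawing_soft": bite_components(soft_food, 0.0),
    "chewing_hard": bite_components(hard_food, 30.0),  # 30 deg to the dorsal-ventral axis
    "chewing_soft": bite_components(soft_food, 30.0),
}
for name, (fz, fx) in scenarios.items():
    print(f"{name}: Fz = {fz:5.2f} N, Fx = {fx:5.2f} N")
```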
These loads were equally distributed over the molars and incisors (both sides) as literature advocates bilateral biting to be more realistic than unilateral , while not contributing at all to the anteroposterior stabilization of the mandible. Its fibers insert on the medial fossa and the dorsal surface of the angular process and its attachment surface was simulated by a set of 2016 element nodes.\u27a2 The internal pterygoid was considered to bare 23% of the superoinferior load (Fz) and 30% of the anterior-posterior stabilization force (Fx). It was simulated as being attached on a set of 953 nodes in the area of the masseteric fossa and the masseteric crest ventral to the first and second molars.\u27a2 The medial masseter pars zygomaticomandibularis is a major contributor to proximal movements of the mandible and was thus loaded with 24% of the mandible closing force (Fz) and 30% of the ventral-dorsal load (Fx).\u27a2 The medial masseter pars zygomaticomandibularis was inserted on the ventral margin of the angular process of the mandible . It was considered as the strongest among all masticatory muscles contributing 36% of the vertical (Fz) providing however 40% of the anterior-posterior stabilization (Fx).\u27a2 The posterior fibers of the temporal muscle, only involved in mandible retraction, were not considered during the biting scenarios. The posterior part of the temporal muscle attaches on the anterior border of the ramus of the mandible initiating from the short coronoid process up to the last molar covering a total of 782 element nodes. Due to its small proportions, the temporal muscle was considered to bare only 5% of the mandible closing force . The material properties applied to each tissue type are summarized in Table The final STL model of the mouse skull, reverse engineered through \u03bcCT is shown in Figure The forces resulting for each one of the scenarios are summarized in Table The temporomandibular joint was simulated by articulating the mandible surface contacting the temporomandibular disc at its mediolateral axis Figure . An overThe different loading scenarios resulted in significant variations of the stress and strain patterns developing on the mouse teeth and mandible. Biting types had a dominant effect on the stress fields experienced by the mandible, with loading intensity resulting in an almost linear stress increase.The analysis indicated that the masseter ridge was one of the most stressed areas of the mandible, with incisal biting (gnawing) also resulting in stress augmentations of the mental foramen of the mandible as well as on the temporomandibular joint Figure . This caLoading patterns occurring during chewing were more pronounced in molars when compared to incisors, where the stress was observed at their posterior part Figure . ChewingIt is notable that the mastication forces during chewing were considerably cushioned by the periodontal ligament, thus preventing overloading of the alveolar bone and isolating the masseter ridge from abrupt loading. Although the results indicated a linear correlation of this effect to the applied load, this cushioning is in reality expected to fade along with an increasing masticatory force. This limitation is attributed to the linear-elastic nature of the introduced model, as there is no literature available concerning the nonlinear-viscoelastic material properties of the periodontal ligament. Different stress fields within the periodontal ligament were experienced during the various biting scenarios Figure . 
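Returning briefly to the muscle loading scheme listed earlier in this section, a minimal sketch is given of how the force fractions assigned to each masticatory muscle could be spread over its attachment node set to obtain per-node boundary-condition forces for the solver. The pairing of fractions with node counts below is illustrative only, since the original description is partly garbled, and the total closing and stabilisation forces are placeholder values.

```python
# Illustrative sketch: per-node forces from muscle force fractions and node counts.
# The fraction/node-count pairing and the total forces are assumptions.

MUSCLES = {
    # name: (fraction of Fz, fraction of Fx, assumed number of attachment nodes)
    "muscle_A_strongest": (0.36, 0.40, 2016),
    "muscle_B":           (0.24, 0.30, 953),
    "muscle_C":           (0.23, 0.30, 2016),
    "muscle_D_temporal":  (0.05, 0.00, 782),
}

def per_node_forces(total_Fz: float, total_Fx: float):
    """Return, per muscle, the (Fz, Fx) force applied to each attachment node."""
    out = {}
    for name, (fz_frac, fx_frac, n_nodes) in MUSCLES.items():
        out[name] = (total_Fz * fz_frac / n_nodes, total_Fx * fx_frac / n_nodes)
    return out

if __name__ == "__main__":
    for muscle, (fz, fx) in per_node_forces(total_Fz=10.0, total_Fx=3.0).items():
        print(f"{muscle:20s} per-node Fz = {fz:.4e} N, Fx = {fx:.4e} N")
```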
It seemDespite the use of dentition and skull morphology to classify both living and extinct species (Machado-Allison and Garcia, in-situ occurring stress fields in the periodontal ligament is vital in assessing its cellular response, as the values provided in Figure in-vitro, to experimentally assess the mechanobiology of the cells endemic to this tissue.This is the first attempt to associate the occurring masticatory forces to evolutionary aspects of rodent mandibles. The masticatory loads, adopted from the pellet compression tests, exhibited significant variations in magnitude and direction (see Table Rodents exhibit two mutually exclusive biting modes (Cox et al., Recent studies observed phenotypic differentiations in laboratory bread mice and associated them to food consistency and muscle driven remodeling (Renaud et al., It remains however unclear whether the trigger for this plastic response is due to environmental influences or genetic effects, as it stands to reason that the recorded stress variations will also affect cell proliferation or differentiation events thus influencing additional bone or other tissue formation based on biting type and intensity (Haudenschild et al., The results shown in this study attest that gnawing can double the stress intensity experienced by areas of the mouse mandible when compared to chewing, while also increasing the strain of both, bony and dental tissue. A response that is in agreement with recent literature findings, which indicated that gnawing induced the highest mean stress across the skull (Cox et al., in vitro (Orr et al., in vivo (Delaine-Smith and Reilly, Since every form of life, from cells to organisms is mechanosensitive, mechanical stimulus is widely accepted to regulate the growth and development of any tissue type under physiological conditions (Farng et al., in-vitro experiments, to determine the in-situ response of cells to masticatory forces. Cells extracted from the periodontal ligament will, in these terms, be subjected to the calculated stress values (presented in Figure Future work includes the use of the presented Finite Element model, in combination with AT and TM: Contributed to the conception of the hypothesis of the study, collaborated in the development of the model and was involved in the evaluation of the results and preparation of the manuscript. He also provided approval for the publication of this version. LJ: Contributed to the development of the model and the interpretation of data for the work. She was also involved in the preparation of the manuscript and provided approval for the publication of this version. EK: Contributed to the development of the model, the acquisition and the analysis of data for the work. He was also involved in the preparation of the manuscript and provided approval for the publication of this version. NM: Contributed to the development of the model, the analysis and the interpretation of data for the work. He was also involved in the preparation of the manuscript and sanctioned the publication of this version.EK was employed by company BETA CAE Systems S.A. The other authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. 
The handling Editor declared a past co-authorship with two of the authors TM and LJ, and states that the process nevertheless met the standards of a fair and objective review."} +{"text": "High prevalence of hypertension is observed in diabetic patients of both the types. Diabetic nephropathy is one of the major reason for high morbidity, mortality and financial burden in such hypertensive diabetic patients. For this review, electronic databases including PubMed/Medline, Embase, Cochrane and Google scholar were searched from 1990-2013. Multiple inter-related factors are responsible for the development of hypertension and therefore nephropathy in the chronic diabetic patients. Majority of such factors are identified to lead to extensive sodium reabsorption and peripheral vasoconstriction and thus leading to microvascular complications like nephropathy. Management of hypertension by targeting such mediators is the highly recommended therapy for controlling and treating diabetic nephropathy. Clinical trials suggests that drugs inhibiting the renin-angiotensin-aldosterone pathway should be used as the first-line agents for the management of hypertensive diabetic nephropathy patients. These agents are effective in slowing the progression of the end-stage kidney disease as well as lowering albuminuria. Researchers are also investigating the effectiveness of drug combination for better management of hypertension and diabetic nephropathy. The present article is a review of the evidences which explains the underlying pathological changes which leads to the development of nephropathy in a hypertensive diabetic patients. The review also observes the clinical trials for different anti-hypertensive drugs which are recommended for the treatment of such patients. High prevalence of hypertension is observed in diabetic patients of both the types. Diabetic nephropathy is one of the major reason for high morbidity, mortality and financial burden in such hypertensive diabetic patients.Prevalence rates of diabetes has been observed to increase drastically in past two decades due to both the ageing population around the world as well as unhealthy lifestyles which is increasing obesity and overweight problems. For this review, electronic databases including PubMed/Medline, Embase, Cochrane and Google scholar were searched from 1990-2013. Diabetic patients are always on a high risk of developing diabetes related complications like hypertension, neuropathy, nephropathy, retinopathy, stroke and others. Observations over-the-years suggest that one-third of the diabetic patient develops diabetic nephropathy which on long turn leads to chronic renal problems. Diabetic nephropathy is a one of the commonest problems responsible for diabetes related morbidity, mortality, and financial burdens. It is therefore essential to isolate the menace issues associated with the advancement of diabetic nephropathy. It is also highly advisable to have better knowledge of early treatment procedures so that extensive morbidity and mortality can be avoided (.4% (4). For the type 2 diabetes mellitus patients, hypertension usually co-exist before the onset of kidney diseases. This can be explained by the fact that obesity and overweight problem is a common risk factor which is responsible for both the glucose intolerance as well as hypertension. Different studies suggest that prevalence of hypertension in the diabetic mellitus type 2 patients who are yet not having proteinuria is 58-70%. 
The research also clarifies that chronic diabetes is less associated with the development of hypertension than the impaired renal function. In fact, hypertension further exaggerate and worsen the dysfunction of kidneys and therefore directly contributes to exaggeration of the cardiovascular conditions .The overall findings of the entire research suggest that microalbuminuria always precedes the hypertensive stage in both the types of diabetes patients and then the worsened renal functions contributes in degradation of the cardiovascular functions and the vicious cycle continues. The severity of high tension in diabetic nephropathy patient rises with every phase of the chronic kidney disease which in turn worsen the kidney functions and ultimately 90% of the patients are approached to final-phase renal ailment. An individual\u2019s susceptibility to the development of both the high tension and renal ailment is caused due to various metabolic and hemodynamic changes which are shared by most diabetic patients. Genetic determinant are important decisive factors to dictate patient\u2019s vulnerability. While certain inherited genes may make the person prone to the disease while some are renoprotective. Although this is yet unclear if these genetic factor defines the incidences of nephropathy diabetes or only make the person more vulnerable to the renal diseases in general context along with the other risk factors .The international statistics for the prevalence of diabetic nephropathy reveals striking epidemiological variations even within the European nations. The proportion of the diabetic nephropathy sufferers requiring renal replacement treatment is more in Germany than the Unites States of America. Reports suggest that in the year 1995, 59% of the patients admitted in Southwest Germany hospitals for renal replacement were having any form of diabetes while 90% of them were having type 2 diabetes mellitus. A high correlation between final phase renal ailment and type 2 blood sugar has also been observed for the countries like Denmark and Australia where the overall prevalence of diabetes is low compared to the other states. Equal prevalence rate of the disease has been found in both the male and female patients. Epidemiological data of the condition in context of patient\u2019s age suggest that diabetic nephropathy is rarely observed during the initial 10 year of the type 1 diabetes mellitus duration. On the other hand, peak incidences of the condition i.e. 3% per year has been found in the patients with diabetes for more than 15-20 years. Patients who usually requires management for final phase renal ailment are of average 60 years. While the reports from around the world suggest higher incidence of diabetic nephropathy in hypertensive diabetes sufferers of geriatric age, still the part of age in the progression of the ailment in younger sufferers is not clear. The Pima Indians having type 2 diabetes mellitus that are susceptible to the development of diabetes at younger age are also equally prone to the advancement of final phase renal ailment. Till a 20 year age, about fifty percent of every diabetic Pima Indian has acquired blood sugar nephropathy, where 15 percentile of these persons have advanced to end-stage renal disease (ESRD). 
Race based epidemiology of diabetic nephropathy reveals that the severity as well as occurrence rates of the condition are higher in the blacks who are around 3-6 times more affected by the condition than the whites, Mexican Americans, and Pima Indians who are having non-insulin reliant diabetic mellitus. The high prevalence of blood sugar nephropathy in black not only suggest the underlying genetic factors but also points out the effect of poor socioeconomic conditions which is the causative factor for poor diet, uncontrolled hyperglycemia, poorly managed hypertension and obesity. All these socioeconomic factors are also the threat aspects for the progress of diabetes, hypertension and thus blood sugar nephropathy. These facts also suggest that familial clustering may also be present in these population .The overall pathogenesis of blood sugar nephropathy is highly complex and is hugely driven by the altered internal milieu around the renal apparatus which initiates multiple pathways leading to the advancement of the ailment. Extensive hyperglycemia is a chief culprit in causing renal dysfunction as it leads to glomerular hyperfiltration along with endothelial dysfunction. Both the conditions together contributes to the changes in the basement membrane properties which are characterised by hypertrophy and hyperplasia of the intraglomerular cells. Other such basement membrane changes as observed by researchers are nodular intercapillary glomerulosclerose and glomerular matrix changes. Detection of albumin in the urine of patient is usually considered an indicative sign of such changes happening inside the body . All theThe altered renin-angiotensin aldosterone system (RAAS) in the blood sugar patient also is a significant contributor to the advancement of blood sugar nephropathy. The altered RAAS system affects the systemic as well as glomerular blood pressure along with affecting the sodium reabsorption process by virtue of its angiotensin II and aldosterone effects on the pro-fibrotic level . AdditioThe observations that control of hypertension from the initial phases of blood sugar reduces the progression of blood sugar nephropathy suggest the important part of hypertension in the development of the condition. Prolonged systemic hypertension is an important contributor of the endothelial injuries to the kidneys. Human studies suggest that lowering the blood pressure with any of the pharmacological agent in a type 2 diabetes mellitus patient acts as a powerful intervention in the prevention of this complication .The pathogenesis of diabetic nephropathy can be summarised by three distinctive mechanisms. 1) The expansion of mesangial membrane due to prolonged hyperglycemia which leads to heightened creation of matrix and/or glycosylation of the proteins in the matrix. 2) Coagulating of the glomerular cellar tissue because of various inflammatory processes. 3) Factors like expansion of the afferent renal artery or ischemic wound caused by the contracting of the blood vessels innervating the glomerular apparatus due to hyaline deposition . The majAssociation of hereditary aspects in the growth and progression of blood sugar nephropathy can be explained by the fact that all the patients with either type of diabetes do not develops the condition. This suggests that inherited genetic factors in the family predisposes the person both to diabetes as well as diabetic nephropathy. 
According to a theory, a person born with comparatively lesser number of nephrons is highly susceptible to the growth of renal problems like nephropathy in the later life. Animal studies show that if the mothers are exposed to extensive hyperglycemia during pregnancy, the child is highly susceptible to the progress of diabetes in initial phases of life and also to the progress of final phase renal ailments. If this fact is extrapolated in case of humans, it suggest that certain maternal factors might also be responsible for the development of such complications .Genetics along with the other risk factors like age, obesity, smoking, history of hypertension, and the length of the diabetic condition predisposes a diabetic patient to the development of diabetic nephropathy . Family-While the contribution of each of these genes in the progress of blood sugar nephropathy is not yet ascertained, extensive research in the field of diabetes related genetics may provide important information about the underlying pathophysiology of diabetic nephropathy as well as useful treatment targets in future. Two such genes which are presently extensively researched are CNDP1 and CCR2. Both the genes and their impacts are discussed below:CNDP1: The gene is an allelic variant of the carnosinase gene and has been strongly connected the development of blood sugar nephropathy by altering the carnosine pathway. The gene is responsible for encoding the enzyme carnosinase which functions as hydrolyzing enzyme to the substrate L-carnosine. Carnosine is also known as \u03b2-alanyl-L-histidine dipeptide and is found inside most of the cells in the body. Maximum amount of the amino-acid is found in the muscles and from there it is released inside the serum. The amino acid is also available from the nutritive sources. The main function of the carnosine is to inhibit the ACE in a natural environment and also hinder the production of the progressive glycation end-yields. The amino acid also reduces the oxidative stress by scavenging. Animal experimentation of the amino-acid as well as its laboratory cultures has proved that it hugely influence the breakdown of glucose in the physique and therefore reduces the risk of hyperglycemia. This in turn allows carnosine to protect the mesangial cells of the renal apparatus from the inflammatory effects and oxidative stress due to high blood glucose levels. In this manner carnosine prevents or at least slow down the progress of microvascular problems of blood sugar such as nephropathy . ExtensiCCR2: CCR2 encodes for chemokine receptor-2 gene. The chemokine receptor also acts as a co-receptor for macrophages and therefore plays an important role in inflammatory pathway. Mutation in the gene recognized as CCR-V64I is responsible for altering the monocyte chemoattractant protein-1 (MCP-1)-receptor and therefore induces the inflammatory processes under the influence of environmental conditions. The cascades initiated by such genes causes the development of inflammation mediated microvascular conditions like diabetic nephropathy .While so many different menace issues for the blood sugar nephropathy was recognised, none of them are globally accepted as an accurate predictor of the condition. Presently the healthcare professionals use the signs of microalbuminuria as an early predictor of diabetic nephropathy. Albumin excretion as measured in the urine falling between 30 to 300 mg per day is considered as microalbuminuria. 
Also albumin creatinine ratio of 2.5-25 and 3.5-35 mg/mmol in men and female patients correspondingly is also considered as microalbuminuria. The ratio is identified from a random urine sample. These assessment not only predicts nephropathy but also indicates towards the progressive renal complications and therefore provides a prognosis related to mortality in both types of diabetes patients . While tAlternate non-invasive procedures for the early prediction of diabetic nephropathy are under investigation. Researchers have identified various urinary markers which acts as early predictors of renal complications. These new biomarkers are kidney injury molecule-1 (KIM-1), \u03b1-1 microglobulin and neutrophil gelatinase-associated lipocalin (NGAL). These biomarkers along with their practical application methods are further researched so that they can be used for prediction of both the acute as well as chronic renal damages caused due to diabetes .Urine proteome analysis is another non-invasive technique which can be used for the early prediction of diabetic nephropathy. The method uses capillary electrophoresis coupled mass spectrometry to identify specific biomarkers in the patient\u2019s urine sample . The proVarious randomised control trial has proved that strict control of hyperglycaemic condition reduces the occurrence rate of albuminuria in the diabetic patients of both the types . It is nInhibition of the renin-angiotensin aldosterone arrangement having an ACE inhibitor as well as angiotensin receptor blocker is presently considered as best treatment for preventing the development of overt blood sugar nephropathy to the final phase renal ailment. A relative risk reduction rate for reduced renal functioning as observed with the curing with ACE inhibitors in blood sugar sufferers is 50%. However the risk reduction with similar treatment plan was found to be only 15% for the final phase renal ailment for the non-insulin reliant blood sugar sufferers with fully established diabetes nephropathy .Different strategies for effective blockade of RAAS have been proposed which even includes increased dosing of the drugs or even the utilisation of two or three different RAAS blocking drugs simultaneously. But such treatment plan implementation requires watchful observation of the patients hemodynamic balance as all these drugs when taken together may lead to hyperkalaemia in the patient .All these data together suggest that even the rigorous management of hypertension and albuminuria has limited part in restricting the development of microvascular complications of blood sugar and thus adjunctive treatments are required which can directly affect the pathological pathway for the development of this conditions. Only pathway targeted treatment plans can effectively diminish the advancement of the diabetic nephropathy and added complications especially in type 2 diabetes mellitus patients.Animal studies on benfotiamine found that the drug can prove to be an effective treatment option as it can act on the three major pathways of the hyperglycaemic damage. This sterol solvable thiamine derivative was proved to inhibit the hexosamine pathway, inhibit the forming of high level glycation final products, and can inhibit the diacylglycerol (DAG)-protein kinase C (PKC) pathway. The drug is also capable of blocking the initiation of the proinflammatory transcript issue NF-\u03baB which is triggered by hyperglycemia. 
Instigation of the NF-\u03baB conduit by hyperglycemia leads to the extensive production of triphosphates and superoxides which are responsible for the induction of metabolic pseudohypoxia. Ischemic damage caused by this pseudohypoxia is characterised by mitochondrial dysfunction and damage of renal cells by free oxygen radicals which ultimately leads to permanent vascular damage . MoleculCMJN was the single author of the paper.The author declared no competing interests.Ethical issues have been completely observed by the author.None."} +{"text": "This article offers an overview of what has been done until now on restorative research with children and opens up new inquires for future research. Most of the work has studied children's exposure to nature and the restorative benefits this contact provides, focusing on the renewal of children's psychological resources. The paper begins with an introduction to children's current tendency toward an alienation from the natural world and sets out the objectives of the article. It is followed by four main sections. The first two sections report on what we already know in this research area, distinguishing between children with normal mental capabilities and those suffering from attention-deficit hyperactivity disorder (ADHD). The findings gathered in these sections suggest that children's contact with nature improves their mood and their cognitive functioning, increases their social interactions and reduces ADHD symptoms. The next section describes five suggestions for future research: (1) the need for considering the relational dynamics between the child and the environment in restoration research, and the concept of constrained restoration; (2) the possibility of restorative needs arising from understimulation; (3) the importance of considering children's social context for restoration; (4) the relationship between restoration and pro-social and pro-environmental behaviors; and (5) children's restorative environments other than nature. We close by making some final remarks about the importance of restoring daily depleted resources for children's healthy functioning. Studies conducted within research areas as environmental psychology, public health, and outdoor recreation suggest that exposure to nature can alleviate some of the negative symptoms of our children's contemporary lifestyle. Time spent in green outdoors reduces children's probability of being overweight , one could think that being under close parental surveillance while being involved in non-organized activities in nature might constrain children's restorative experiences .The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Ghana is classified as being in the malaria control phase, according to the global malaria elimination program. With many years of policy development and control interventions, malaria specific mortality among children less than 5\u00a0years old has declined from 14.4% in 2000 to 0.6% in 2012. However, the same level of success has not been achieved with malaria morbidity. The recently adopted 2015\u20132020 Ghana strategic action plan aims to reduce the burden of malaria by 75.0%. Planning and policy development has always been guided by evidence from field studies, and mathematical models that are able to investigate malaria transmission dynamics have not played a significant role in supporting policy development. 
The objectives of this study are to describe the malaria situation in Ghana and give a brief account of how mathematical modelling techniques could support a more informed malaria control effort in the Ghanaian context. A review is carried out of some mathematical models investigating the dynamics of malaria transmission in sub-Saharan African countries, including Ghana. The applications of these models are then discussed, considering the gaps that still remain in Ghana for which further mathematical model development could be supportive. Because of the collaborative approach adopted in their development, some model examples Ghana could benefit from are also discussed. Collaboration between malaria control experts and modellers will allow for more appropriate mathematical models to be developed. Packaging these models with user-friendly interfaces and making them available at various levels of malaria control management could help provide the decision making tools needed for planning and a platform for monitoring and evaluation of interventions in Ghana. Globally, about 3.2 billion people were at risk of malaria infection in 2015, with the number of cases being 214 million. Deaths attributable to malaria were estimated to be 438,000, most of which occurred in sub-Saharan Africa. These interventions were predominantly deployed in major cities. Insecticide Treated Bed nets (ITNs) were deployed nationally in 2004, following evidence from field trials of their effectiveness in 1996 in Ghana and elsewhere. A policy was also made to subsidise delivery of ITNs in 2007.Since 2005, IRS activities have been recommended; however their deployment has been on a limited scale. Figure 3Despite the high levels of coverage of the interventions, the level of malaria morbidity in Ghana remains relatively high, making iWithin the frame of the 2030 targets set by the Global Malaria Programme (GMP), appropriate mathematical models could help to find deployment strategies for existing and new intervention packages across the three distinct epidemiological zones of the country. These models should also provide a suitable platform for monitoring and evaluating the impact of deployed interventions and track progress towards set goals at both the zonal and national level.Mathematical modelling techniques have been in use for centuries to study transmission dynamics of various infectious diseases.The foundations of modern mathematical modelling techniques specifically for malaria transmission dynamics were laid by Ronald Ross in the early part of the eighteenth century when he demonstrated the link between mosquitoes and malaria transmission and subsequently developed a simple mathematical model to describe transmission of malaria.,22 The RIn Ghana, attempts have been made to develop similar mathematical models to describe the dynamics of malaria. Using ordinary differential equations, compartments representing susceptible, exposed, infected and recovered/removed (SEIR) and susceptible, exposed infected (SEI) for both human and mosquito populations respectively were developed. The focus of these models was on investigating the basic reproductive number, conducting stability analyses and simulation studies to determine when Ghana could achieve malaria free status. The secoAnother model was developed with two infectious classes for the human population using differential equations. 
This modModifying the Kermack\u2013Makendrick model, a suscepWhile the results of all the aforementioned models were informative, potentially key factors of malaria epidemiology in Ghana are yet to be considered. Factors including the varying epidemiological settings of populations across the country and available monthly aggregated data from all health facilities in all districts rather than the annualised data could offer more insights to the transmission dynamics of malaria in Ghana at the specific ecological zone level, given the varying meteorological factors across the country that lead to different incidence of malaria.Despite the potential usefulness of country-specific mathematical models in studying infectious diseases such as malaria, there seems to be no evidence to suggest that they have guided malaria intervention strategies in Ghana.Models have been developed elsewhere in Africa where the diversity in malaria transmission across regions is taken into account through the incorporation of meteorological factors such as rainfall and temperature. These models were successfully used to investigate the impact of interventions such as IRS and ITNs on the transmission of malaria.\u201335The earliest mathematical model for malaria transmission in Africa was the Garki model, which was developed and tested in a large field trial in northern Nigeria in 1974. The GarkOther models in Nigeria, Ivory Coast and Mali aided the investigation of malaria transmission dynamics, the condMathematical models have also been successfully developed in eastern Africa. Some investigated malaria transmission dynamics in mosquitoes and humans,\u201345 otherEven though South Africa is largely malaria free, the dynamics of malaria in the northern provinces of South Africa were investigated using various mathematical models.\u201351 TheseWhile the results from these studies are useful, one major drawback is often the seeming lack of awareness of these models by programme managers at the national and district level. Providing these models, built with the involvement of programme managers, on user friendly platforms will be the next necessary step that may prove to be invaluable in helping to combat incidence of malaria on the African continent.The National Malaria Control Program (NMCP) in Ghana has made laudable strategic plans to reduce the burden of the disease by 75% across the country by 2020. The aim is to achieve this goal by intensifying the distribution of treated bednets (ITNs) and scaling up monitoring and evaluation (M&E) activities.However, the NMCP will need the tools to adequately justify the approach in which interventions are deployed across different epidemiological settings across the country. Their inRegardless of the availability of health facility level reported cases on the DHIMS, which are analysed periodically to assess progress of malaria morbidity at the regional level, these analyses can only be carried out retrospectively. The ability to provide evidence prospectively to support decision making that could form a basis for intervention deployment may not be available with the usual approach of analysing the data.The retrospective limitations of conventional approaches may also not allow evaluation of the impact and cost effectiveness of existing or new interventions without Although a few attempts have been made to develop malaria transmission models in Ghana, there are some limitations in their implementation. 
While some models aim at GThis project therefore aims to develop a suite of mathematical models that could be used to predict the transmission of the disease in all the different epidemiological/ecological zones of Ghana and can also be used to assess the optimal impact of interventions individually or in combination. The study also aims to extend these models for evaluation of the cost effectiveness of interventions. Finally a friendly user interface visualising the data and incorporating these models will be developed for use at various levels of malaria control. These tools will support the development of relevant policies for the effective control of malaria with limited resources.A spatially explicit population level dynamic mathematical model that takes into account the varying epidemiology of malaria morbidity in Ghana, validated using routine health facility data from all districts, will be developed to investigate the impact of malaria interventions and support planning.To develop and validate a suite of mathematical models that can be used to predict malaria transmission and to investigate the impact and cost of malaria interventions in Ghana.Determine the relationship between malaria morbidity and weather variables in Ghana using statistical methods.Develop dynamic spatially explicit population level mathematical models and assess the optimal combination of various interventions in different epidemiological settings in Ghana.Evaluate the cost effectiveness of these interventions on various populations.Assess prospects of achieving proposed local and international set targets, based on the formulated mathematical models.Develop an interactive and user friendly interface of the mathematical models using the R software and making them available to policy makers and managers of malaria control, at the regional and national level.Specific objectives:The proposed mathematical models will be developed using nonlinear ordinary differential equations. The basic model structure will include coupling compartments for populations that are susceptible, infected and recovered for humans and susceptible, infected and infectious for mosquitos. Due to the endemicity of malaria transmission in Ghana, the concTransitions between compartments will be modelled incorporating appropriate rate parameters estimated from observed data or from the literature. Seasonality of malaria transmission will be captured through forcing functions or functions of meteorological variables of the various ecological zones as covariates.The impact of interventions will be tested by simulating various levels of intervention coverages (singly or in combination) on the potential reduction on transmission of disease in the various zones. Similarly the prospects of reducing the prevalence and incidence of malaria to desired set targets, by the NMCP and those for malaria elimination, will be tested based on the impact of varying coverage levels of interventions on transmission.Cost effectiveness of various interventions will also be calculated based on the net intervention cost. That is, benefits in terms of cost reduction derived to the health system following implementation of the interventions. Average cost effectiveness ratio (ACER) will then be calculated as the ratio of the net cost of interventions and the net effect of interventions. 
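A minimal sketch of the kind of coupled compartmental model outlined above, assuming susceptible-infected-recovered compartments for humans, susceptible-infected compartments for mosquitoes, a sinusoidal forcing term standing in for seasonality, and a single coverage parameter that crudely scales the biting rate to mimic an intervention such as ITNs. All parameter values, the compartment choices and the ACER helper at the end are illustrative assumptions, not the study's calibrated model.

```python
import numpy as np
from scipy.integrate import solve_ivp

def malaria_ode(t, y, params):
    """Coupled human (S-I-R) and mosquito (S-I) compartments with seasonal forcing."""
    Sh, Ih, Rh, Sm, Im = y
    Nh = Sh + Ih + Rh
    Nm = Sm + Im

    # Seasonal forcing of the biting rate (stand-in for rainfall/temperature effects).
    season = 1.0 + params["amplitude"] * np.cos(2 * np.pi * (t - params["phase"]) / 365.0)
    # Intervention coverage (e.g. ITNs) crudely reduces the effective biting rate.
    b_eff = params["biting_rate"] * season * (1.0 - params["coverage"] * params["efficacy"])

    foi_h = b_eff * params["p_mh"] * Im / Nh   # force of infection on humans
    foi_m = b_eff * params["p_hm"] * Ih / Nh   # force of infection on mosquitoes

    dSh = -foi_h * Sh + params["loss_immunity"] * Rh
    dIh = foi_h * Sh - params["recovery"] * Ih
    dRh = params["recovery"] * Ih - params["loss_immunity"] * Rh

    dSm = params["mosq_birth"] * Nm - foi_m * Sm - params["mosq_death"] * Sm
    dIm = foi_m * Sm - params["mosq_death"] * Im
    return [dSh, dIh, dRh, dSm, dIm]

params = {
    "biting_rate": 0.3, "amplitude": 0.6, "phase": 120.0,   # illustrative values only
    "p_mh": 0.3, "p_hm": 0.5, "recovery": 1 / 100.0,
    "loss_immunity": 1 / 365.0, "mosq_birth": 1 / 10.0, "mosq_death": 1 / 10.0,
    "coverage": 0.6, "efficacy": 0.5,                        # e.g. 60% ITN coverage
}

y0 = [9_900, 100, 0, 50_000, 1_000]   # initial humans (S, I, R) and mosquitoes (S, I)
sol = solve_ivp(malaria_ode, (0, 3 * 365), y0, args=(params,))
print("Infected humans after 3 years:", round(sol.y[1, -1]))

def acer(net_cost: float, net_effect: float) -> float:
    """Average cost-effectiveness ratio: net intervention cost per unit of net effect."""
    return net_cost / net_effect

# Hypothetical figures, e.g. cost per case averted
print("ACER:", acer(net_cost=250_000.0, net_effect=12_000.0))
```

In practice the transmission parameters would be estimated from the monthly reported case data rather than fixed by hand, and the coverage term would be replaced by intervention-specific effect models.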
Other deModels will be fitted with the aim to matching the features and trajectory of observed monthly reported cases within each zone and the best parameter set obtained through maximum likelihood methods.Clinical data for this study will be obtained from the health facility records of confirmed uncomplicated and severe malaria cases, malaria attributable deaths and records of pregnant women confirmed to have malaria. These data are captured and stored on the DHIMS platform and are available to the NMCP. Aggregated monthly records of all confirmed cases of malaria for all age groups and districts from 2000 to 2016 will be used.Data for the cost effectiveness analysis will also be obtained from NMCP and other published literature. These will include budget estimates and previous expenditures for malaria interventions and malaria specific activities and programs in Ghana.For the purposes of studying the relationships between malaria morbidity and meteorological variables, monthly average precipitation (mm) and temperature (minimum and maximum) (\u00b0C) from all 10 regions in Ghana for the period 2000\u20132016 will be used. These data will be obtained from the Ghana Meteorological Agency (GMET).Ethical approval will be sought from the Navrongo Health Research Centre Institutional Review Board and the Faculty of Science Ethics Committee of the University of Cape Town. Permission to use the data will also be sought from the NMCP.Towards malaria control and elimination in Ghana: challenges and decision making tools to guide planning.Investigating the relationship between seasonal dynamics of reported malaria cases and weather variability in Ghana.Accounting for regional transmission variability and the impact of malaria interventions in Ghana .Cost effectiveness of malaria control interventions in Ghana: a mathematical modelling approach.Expected outcomes will include the following proposed articles that will be submitted for peer review and publication:There seems to be no evidence of country-specific mathematical models playing a role in supporting policy decision making with regards to malaria intervention strategy development, although a number of studies, as mentioned earlier, have been conducted as part of efforts to understand the epidemiology of malaria in Ghana.One of reasons accounting for this may include the non-availability of data, prior to the DHIMS platform, needed to be used to validate model parameters. Another reason could be limited interaction between modellers and policy makers. Tapping the full potential of mathematical models to support policy may require a collaborative effort between model builders and malaria control stakeholders such as the National Malaria Control Program, Ministry of Health, and Ghana Health Service.Examples of collaborative research into building mathematical models include one in Cambodia and another in Thailand. In Cambodia, the National Institute for Health designed and implemented, collaboratively with policy makers and other stakeholders of malaria control, a malaria early warning system supported by seasonal climate forecasting, weather monitoring, statistical and dynamic models. The system was implemented at the municipal level and was made available for use by malaria control managers. 
A user-fIn Ghana, developing a mathematical model to support policy development will require detailed parameterisation and validation using results of the numerous epidemiological studies available in published literature as well as data unpublished and available to malaria control stakeholders. The recent analyses undertaken by the NMCP to generate an updated malaria prevalence map for Ghana using data from 1960 to 2011 is one very useful source of data. Additionally, aggregated monthly health facility data maintained on the DHIMS ,59 will Therefore, the mathematical model development process proposed in this study will consider both population and sub-population level dynamics of malaria transmission along the varying epidemiological settings of Ghana. This will be done by factoring in local disease transmission dynamics of malaria and also by engaging policy makers, such as the NMCP to gain more insights, including the practical difficulties of intervening in malaria transmission in the country.It is envisaged that the process of conducting this research collaboratively with the NMCP in Ghana will afford the opportunity to support policy makers and stakeholders in the field of malaria control through country-specific mathematical models. These models should be useful for investigating malaria transmission dynamics for purposes of disease control and policy evaluation.Specifically, the challenges that may be addressed by the earlier proposed modelling venture are well articulated by the NMCP of Ghana in the following statement: \u2018Among the challenges facing the future of effective control include a more rational basis for stratified intervention delivery, better planning information and an ability to generate sufficient evidence to demonstrate impact and value for money\u2019 (p. 1) .More importantly, the opportunity for interaction between malaria control experts and modellers will create a platform for information sharing, presenting a unique platform for the development of more practically focused models for guiding malaria control activities in Ghana.Subsequently, packaging these mathematical modelling tools into user friendly interfaces and making them available for use by malaria control management teams at various levels across Ghana will be the way to exploit synergies for a common goal of a possible malaria elimination by 2030 as envisaged by the Global Malaria Programme."} +{"text": "The dataset on the effects of social demographic on job satisfaction was obtained through self-administered questionnaire. The survey was situated in a Nigerian manufacturing company and the valid ninety two copies of the questionnaire were analyzed by AMOS 21. Structural Equation Modelling (SEM) analysis was carried out on the constructs. In addition, further analysis of the data will assist in establishing the significant level of demographic on job satisfaction. Specifications TableValue of the data\u2022The outcomes of the data can assist in managerial decisions such as recruitment and selection processes.\u2022The analyzed data can provide insights into the generational differences and how each affects job satisfaction.\u2022Managers can also leverage on the data for workforce diversity management.1The dataset contained effects of social demographic on job satisfaction. 
The survey is premised on quantitative method and the Structural Equation Modelling (SEM) statistical tool was adopted to identify the significant effects of demographic characteristics of employees of a manufacturing company on job satisfaction 2The statistics presented in this data set was based on the quantitative study conducted that examined the influence of social demographic variables on job satisfaction in a manufacturing company in Nigeria. Descriptive survey research design which help to assess sample at the specific time without inconsistencies was adopted. One of the leading manufacturing firms in Ogun State, Nigeria was sampled. The study population consisted all employees of the sampled manufacturing firm. Researchers used complete enumeration of employees because the population of the study is relatively small. Data was collected with the use of a structured questionnaire. However, Structural Equation Modeling (AMOS 22) was used for the analysis of data"} +{"text": "Introduction. The use of cyclosporine (CsA) in the treatment of nephrotic syndrome (NS) contributed to a significant reduction in the amount of corticosteroids used in therapy and its cumulative side effects. One of the major drawbacks of CsA therapy is its nephrotoxicity. Prolonged CsA treatment protocols require sensitive, easily available, and simple to measure biomarkers of nephrotoxicity. NGAL is an antibacterial peptide, excreted by cells of renal tubules in response to their toxic or inflammatory damage. Aim of the Study. The aim of this study was to assess the suitability of the NGAL concentration in the urine as a potential biomarker of the CsA nephrotoxicity. Material and Methods. The study was performed on a group of 31 children with NS treated with CsA. The control group consisted of 23 children diagnosed with monosyptomatic enuresis. The relationship between NGAL excreted in urine and the time of CsA treatment, concentration of CsA in blood serum, and other biochemical parameters was assessed. Results. The study showed a statistically significant positive correlation between urine NGAL concentration and serum triglycerides concentration and no correlation between C0 CsA concentration and other observed parameters of NS. The duration of treatment had a statistically significant influence on the NGAL to creatinine ratio. Conclusions. NGAL cannot be used alone as a simple CsA nephrotoxicity marker during NS therapy. Statistically significant correlation between NGAL urine concentration and the time of CsA therapy indicates potential benefits of using this biomarker in the monitoring of nephrotoxicity in case of prolonged CsA therapy. Since the 1980s calcineurin inhibitors have been the basis of immunosuppressive therapy in the prevention of transplant rejection and in the treatment of various autoimmune diseases. A significant limitation to their long-term use is a specifically defined nephrotoxicity. The nephrotoxic action of calcineurin inhibitors depends on genetically determined drug pharmacodynamics and demonstrates a dependence on their serum concentrations and on the duration of the therapy . In the Ischaemic nephron injury, caused by afferent arteriole constriction, develops in early stages of cyclosporine-induced nephropathy . The druThe most precise way to assess the nephrotoxic effect of CsA is a kidney biopsy. A histopathological analysis reveals features of arteriolopathy , band-like interstitial fibrosis, and ischaemic atrophy of the renal tubules. 
The detection of changes typical for the CsA toxicity in a renal biopsy specimen attests to a considerable advancement of the process and has a nature of chronic and progressing remodelling of the nephron and tubulointerstitial compartment .Numerous studies have reported that ischaemic, osmotic, or toxic injury of the epithelial cells of the proximal tubules induces the synthesis of a large amount of neutrophil gelatinase-associated lipocalin (NGAL) by the cells of the distal tubules and intercalated cells of the arcuate tubules , 9.NGAL, which is called \u201crenal troponin,\u201d is a sensitive marker of acute renal injury . It is pNGAL excreted in urine is a sum of the protein filtered in the renal glomeruli , synthesized de novo by cells of the nephron loop thick limb and arcuate renal tubules, as well as a pool of protein produced by neutrophils and macrophages that infiltrate the organ during an ongoing inflammation \u201315.Thus, an increase in NGAL concentration in urine may indicate the impairment of the reabsorption of this protein by the proximal tubule cells, increased NGAL synthesis by the distal parts of nephron as a reaction to inflammatory or ischaemic injury, and activation of immune cells in the renal parenchyma and migrant cells , 16.In the available literature there are single reports concerning the usefulness of determining the level of NGAL in urine for the detection of the early stages of cyclosporine A nephrotoxicity in children chronically treated with this drug due to glomerulopathy with nephrotic syndrome .The aim of the study was to evaluate the usefulness of determining the NGAL urine concentration as a potential biomarker of cyclosporine A nephrotoxicity.2) were included in the study, after exclusion of active foci of infection, UTIs, urinary tract anomalies, and liver diseases.The examined group consisted of 31 patients, children and adolescents, at the age of 3\u201317 (mean 9.2), including 19 boys and 12 girls. The patients were treated in the Department and Outpatient Clinic of Paediatric Nephrology of Chorzow Centre for Paediatrics and Oncology due to steroid-dependent or steroid-resistant nephrotic syndrome. Characteristics of the examined group are showed in All patients received cyclosporine A for at least 3 months. The mean duration of the therapy was 26.5 months. The daily dose ranged from 3 to 6\u2009mg/kg of the body weight.During the entire treatment, the serum drug concentration was systematically monitored, with C0 level between 80 and 120\u2009ng/ml. All the children were in the stage of complete remission of nephrotic syndrome at the time of the study. Apart from cyclosporine A patients enrolled into the study were treated with steroids at different doses; also they received antihistaminic drugs, proton pump inhibitors, and supplemented calcium, magnesium, and vitamin D. None of the patients enrolled into the study did not use ACE inhibitors at the time of the blood and urine samples collection.All of the patients had cyclosporine C0 and C2 levels determined and they underwent biochemical tests as well to assess renal function and tests of inflammation markers and biochemical parameters, which are typical disorders for nephrotic syndrome .GFR from the serum creatinine concentration was calculated with the use of the Schwartz formula:k\u2009\u2009\u00d7 height [cm]/creatinine [mg/dl] .eGFR = l/cys C)].GFR from the serum cystatin C concentration was calculated with the use of the Filler formula: eGFR = 1.962 \u00d7 [1.123 \u00d7 log). 
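The two estimating equations used here can be expressed as short helper functions. A minimal sketch, assuming the commonly cited forms of the Schwartz formula (eGFR = k × height [cm] / serum creatinine [mg/dl], with k depending on age and sex) and the Filler cystatin C formula (log10 eGFR = 1.962 + 1.123 × log10(1/cystatin C)); the value of k, the Filler coefficients and the example inputs are illustrative assumptions drawn from the general literature.

```python
import math

def egfr_schwartz(height_cm: float, serum_creatinine_mg_dl: float, k: float = 0.55) -> float:
    """Schwartz estimate in ml/min/1.73 m^2 (k is age/sex dependent; 0.55 is illustrative)."""
    return k * height_cm / serum_creatinine_mg_dl

def egfr_filler(cystatin_c_mg_l: float) -> float:
    """Filler estimate in ml/min/1.73 m^2: log10(eGFR) = 1.962 + 1.123 * log10(1/cystatin C)."""
    return 10 ** (1.962 + 1.123 * math.log10(1.0 / cystatin_c_mg_l))

# Hypothetical child: height 130 cm, serum creatinine 0.6 mg/dl, cystatin C 0.9 mg/l
print(round(egfr_schwartz(130, 0.6), 1))   # ~119.2
print(round(egfr_filler(0.9), 1))          # ~103.1
```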
We collected a morning urine sample prior to the administration of CsA (NGAL 1) and another sample was gathered 2 to 4 hours after the CsA administration (NGAL 2) from all the patients to determine urine NGAL concentration. Urine samples collected for NGAL determination were frozen at \u221270 degrees Celsius. Enrollment of patients into the study and the collection of blood and urine samples lasted for six months.After sample collection, the NGAL level was determined with the use of an immunoenzyme test by Bioporto Diagnostics, with strict adherence to the manufacturer's instructions. All samples were diluted to optimal density. The results of the immunoenzyme reaction were read with the use of a microplate spectrophotometer. The measurements were expressed as nanograms per millilitre. The limit of detection was 0.1\u2009ng/ml.The control group consisted of 23 children aged 4\u201317 (mean 10.1), including 14 boys and 9 girls, diagnosed in the Chorzow Centre for Paediatrics and Oncology due to monosymptomatic nocturnal enuresis. In the control group the first morning urine samples were obtained to measure NGAL and creatinine concentration and laboratory tests were performed to confirm a normal systemic metabolism.The examination protocol was approved by the Bioethical Commission of the Medical University of Silesia, decision number 10/2009.t-test was used to analyse the differences between the groups.The statistical comparison of the study and control groups was preceded by the verification of continuous variable distributions with the normal distribution based on the Shapiro-Wilk test. For continuous variables with normal distribution, a parametric Student's For the variables not demonstrating normal distribution, a nonparametric Wilcoxon test was used.r-Pearson correlation, Kendall Tau correlation, and linear robust regression models were used.To analyse the influence of the duration of cyclosporine A therapy on NGAL excretion and the influence of various factors on NGAL concentration changes and NGAL/creatinine ratio changes during CsA therapy, the The data presented in A significant negative correlation was shown between the difference of the NGAL/creatinine ratio in urine samples collected before and after CsA administration and the C0 cyclosporine level. The higher the baseline concentration of the medicine, the lesser the influence of the next dose of the medicine on urinary NGAL excretion, expressed as a ratio of NGAL concentration to creatinine concentration. We found no statistically significant correlation between the C0 cyclosporine concentration and the difference of absolute urinary NGAL concentration expressed in ng/ml .Significant positive correlation was found between the NGAL/creatinine ratio in the urine samples and the serum concentration of triglyceride. The higher the serum triglyceride concentration, the higher the urinary NGAL excretion Figures and 3.The similar dependence was observed between the absolute levels of NGAL excreted in urine expressed in ng/ml and the concentration of serum triglycerides. 
Relationship between NGAL concentration in the urine sample prior to and 2 hours after CsA administration and serum triglyceride levels is shown in Figures The statistical analyses performed using the regression robust model demonstrated an influence of the duration of cyclosporine A therapy on urinary NGAL excretion in the examined group.The duration of treatment had a statistically significant influence on the NGAL to creatinine ratio in the urine sample collected before the administration of the drug. The concentration changes are presented in The similar dependence concerns NGAL urine excretion expressed in ng/ml .The difference in the NGAL to creatinine ratio before and after the administration of CsA positively correlated with GFR calculated using the Filler formula .Such a relationship with GFR calculated using Schwartz formula from the creatinine concentration was not observed.The difference in the NGAL to creatinine ratio before and after the CsA administration negatively correlated with total serum cholesterol concentration .In available literature there are only a few reports concerning the comportment of NGAL in children suffering from different types of nephrotic syndrome. It has been shown that in course of chronic glomerulonephritis plasma NGAL concentration and the urinary excretion of this protein are higher than in healthy people, but that does not apply to all types and periods of chronic glomerulonephritis \u201320. NishOur research in children treated for various types of nephrotic syndrome demonstrated no differences in the mean concentration of NGAL in the urine samples compared to the control group. This may be due to the fact that all the children in the study group were in the remission period and in any case there was no significant proteinuria. This is essential for further consideration of cyclosporine's nephrotoxicity and the comportment of NGAL in children treated with this drug. It allows us to assume that NGAL in the urine is not a biomarker of only nephrotic syndrome. Cyclosporine A used in its treatment induces ischemia and the damage of the epithelial cells of renal tubules by the vasoconstriction of afferent arterioles. In experimental tests and the culture of human tubular cells in vitro it has been demonstrated that ischemic injury increases the NGAL mRNA , 22.Because the nephrotoxic effects of many drugs, including CsA, are a huge clinical problem, attention was drawn to the potential usefulness of the NGAL urine concentration measurement, as an early biomarker of this complication , 23. SieTo this date, there were very few reports in literature evaluating the usefulness of NGAL in the detection of nephrotoxicity of calcineurin inhibitors in patients after organ transplantation and those treated with other clinical indications. Wasilewska et al. conducted a survey concerning NGAL serum concentration and urinary NGAL excretion in children treated with cyclosporine A due to various types of nephrotic syndrome .The study included 19 children with steroid-dependent nephrotic syndrome on the ground of MCD and FSGS. In the study group the serum concentrations of NGAL and NGAL excretion in the urine, expressed as a ratio of NGAL to creatinine in the urine sample, were measured four times. The first measurement was performed during exacerbation of nephrotic syndrome, prior to the initiation of CsA therapy. 
The subsequent measurements were performed in the 3rd, 6th, and 12th months of treatment.The NGAL urine excretion differed between the study and the control group before the initiation of treatment. The difference increased up to 6 months of the therapy and after that the time differences in NGAL excretion were reduced, which may be connected with the reduction of cyclosporine dosage. In our study, we observed no differences in the urinary NGAL excretion between the study group and the control group.The NGAL excretion in the urine in the study of Wasilewska et al. remained without any relation to the cyclosporine A blood concentration. This observation corresponds to our results.In this study, the urinary NGAL excretion expressed as a ratio of NGAL to creatinine increased up to the 6th month of therapy and then decreased. In our observation, the urinary NGAL excretion converted per mg of creatinine increased in direct proportion to the duration of the cyclosporine treatment. The cyclosporine dependence, observed in a large number of patients, determines the elongation of CsA treatment. The treatment with this drug was initially predicted for a year or two. In many centers at the moment, patients are treated with cyclosporine even for several years. In this context increasing the concentrations of NGAL in urine during the treatment time observed in our study may be an important diagnostic clue indicating slow but steadily progressing renal tubular damage.On one hand using CsA to treat nephrotic syndrome reduces lipid metabolism disorders, but on the other hand, it may itself cause hyperlipidemia. Cyclosporine as a lipophilic substance binds with lipoproteins in the plasma. Lipid disorders may therefore have an effect on drug metabolism and bioavailability. In examining the impact of various metabolic disorders in nephrotic syndrome on the urinary NGAL excretion in children treated with CsA, we found a strong, statistically significant positive correlation between NGAL and creatinine, calculated from both urine samples and the serum concentration of triglycerides. Moreno-Navarrete et al., investigating lipocalin 2 serum concentrations in patients with metabolic syndrome, insulin resistance, and excessive intake of fats, found a statistically significant positive correlation between changes in the concentration of triglycerides in the serum of subjects and changes in serum concentrations of NGAL . FassettAt the moment, lipocalin 2 aspires to the role of a \u201crenal troponin.\u201d Acute renal failure, as well as chronic kidney disease, is conditions that often coexist with the dysfunction of other organs and systems. A high proportion of patients diagnosed with renal diseases have serious lipid disorders. The influence of a concentration of serum triglycerides on the urinary NGAL excretion found in our study certainly requires further research and maybe it will be included in the interpretation of tests in the future.In our study we found a negative correlation between the total cholesterol level and the difference in the NGAL urine concentration in the samples collected before and 2 hours after the administration of CsA. The theoretically high concentration of CsA two hours after the administration of the drug should stimulate ischemic nephron injury, which would enhance lipocalin excretion by the proximal tubule and collecting duct cells. However, our study did not reveal such a correlation, which seems to be connected with very high cholesterol levels in children with nephrotic syndrome. 
Cyclosporine as an active lipophilic substance binds with lipoproteins present in excess in the serum of sick children. Perhaps for this reason, its vasoconstrictive reaction in these children is not as rapid as expected.The difference in urinary NGAL excretion before the administration of CsA and at the peak of its serum concentration in the examined children demonstrated a positive correlation with the GFR calculated using the Filler method (but not the Schwartz method).The results of our study demonstrate that the concentration of NGAL in urine (expressed as a NGAL to creatinine ratio in the urine sample) cannot be used to detect early stages of CsA nephrotoxicity in children with nephrotic syndrome that is treated with this drug. This observation is consistent with evidence obtained from the only available study in literature evaluating NGAL as a potential marker of CsA nephrotoxicity in children with nephrotic syndrome. The complex correlation between the urine lipocalin excretion and characteristic for the nephrotic syndrome metabolic disorders complicates the interpretation of the results. The positive correlation between the urine NGAL concentration over the course of the CsA treatment seems to have a big practical meaning in the context of prolonged CsA treatment observed in recent years.The examination of the urinary NGAL concentration is a simple and noninvasive test delivering certain information concerning the condition of renal parenchyma. Its periodic measurement and interpretation together with other available parameters of nephrotoxicity may be beneficial in the individualization and optimization of CsA therapy.The concentration of NGAL in urine cannot act as an early detector of CsA nephrotoxicity in children with nephrotic syndrome treated with this drug.The concentration of NGAL excreted in the urine demonstrates a statistically significant correlation with serum triglycerides concentration.A statistically significant correlation between the urinary NGAL concentration and the duration of the CsA treatment indicates the potential usefulness of this marker in the monitoring of CsA nephrotoxicity, but only in relation to the other available indicators of kidney function."} +{"text": "Transglutaminase catalyzes the formation of intermolecular isopeptide crosslinks in polypeptides . Because2+ that occurs during synaptic transmission [Glutamine residues are necessary for the formation of the crosslinks and most transglutaminase substrates contain a segment of short tandem sequence repeats . A numbeIn addition to diseases of polyQ expansion, neurological diseases associated with the formation of protein aggregates include Alzheimer\u2019s disease, Parkinson\u2019s disease, prion diseases and some forms of amyotrophic lateral sclerosis. All of these diseases are characterized by the deposition of specific proteins: the amyloid \u03b2-peptide and hyperphosphorylated tau in Alzheimer\u2019s disease, \u0251-synuclein in Parkinson\u2019s disease and the conformationally altered Prion protein in Prion disease. Numerous reports have provided supporting evidence linking neurological diseases and the formation of aggregates to the action of transglutaminase . The rolIn view of the likely participation of transglutaminase in neurodegenerative diseases and of the elusive function of transglutaminase in brain, we endeavored to identify the transglutaminase substrates present in brain thus hoping to clarify the function of the enzyme in brain. 
For this we developed a functional proteomics strategy in which incorporation of biotinylated amine-donor and amine-acceptor probes is used to affinity-purify the transglutaminase substrates, which are then identified by mass spectrometry. The biological significance of the 166 substrates that we found was determined using the Ingenuity Pathway Analysis. We were surprised to find that most of the brain substrates identified were known to interact with huntingtin, the amyloid precursor protein or \u03b1-synuclein and that neurological disease was the most significant canonical pathway associated with the substrates. In view of the likely association of transglutaminase substrates with neurological disease, we wondered whether the aggregates associated with the diseases contained such substrates. This was demonstrated in two ways. First transglutaminase promoted very efficiently the incorporation of the amine-acceptor probe into both the inclusions of Huntington disease brain, and huntingtin-containing polymers purified from a patient\u2019s cerebral cortex. Second, we randomly selected some substrates and tested their presence within the inclusions by in-situ immunolabeling. All the substrates that we examined could be detected in the inclusions were randomly selected for the experiment and all three were found to be selectively polymerized in neuronal cells when cytosolic calcium concentration was raised. These results strongly support the idea that the crosslinking activity of brain transglutaminase participates in the formation of the protein aggregates found in diseases of the central nervous system and reinforce the notion that transglutaminase might constitute a useful target in the search for prophylactic or therapeutic molecules inhibiting the aggregation process [Since the substrates that we had found had been identified in an process ."} +{"text": "The last decade witnessed an explosion of interest in cancer stem cells (CSCs). The realization of epithelial ovarian cancer (EOC) as a CSC-related disease has the potential to change approaches in the treatment of this devastating disease dramatically. The etiology and early events in the progression of these carcinomas are among the least understood of all major human malignancies. Compared to the CSCs of other cancer types, the identification and study of EOC stem cells (EOCSCs) is rather difficult due to several major obstacles: the heterogeneity of tumors comprising EOCs, unknown cells of origin, and lack of knowledge considering the normal ovarian stem cells. This poses a major challenge for urgent development in this research field. This review summarizes and evaluates the current evidence for the existence of candidate normal ovarian epithelial stem cells as well as EOCSCs, emphasizing the requirement for a more definitive laboratory approach for the isolation, identification, and enrichment of EOCSCs. The present review also revisits the ongoing debate regarding other cells and tissues of origin of EOCs, and discusses early events in the pathogenesis of this disease. Finally, this review discusses the signaling pathways that are important regulators of candidate EOCSC maintenance and function, their potential role in the distinct pathogenesis of different EOC subtypes, as well as potential mechanisms and clinical relevance of EOCSC involvement in drug resistance."} +{"text": "Poisoning has always been pointed as one of the leading causes of human death throughout the world. 
Despite the best efforts made by many research institutes, the worldwide true figure on mortalities with poisoning could never be achieved due to many reasons. One of the main reasons is the unavailability of complete database from the rural and catchment areas of the world where these types of incidents are usual. People can be made aware about this problem by presenting data articles on regular basis, therefore to mark a resource document these data should be regularly up-dated. The current data report is a briefing of types and trends of chemical poisoning amongst human in southern hilly region of Himachal Pradesh (HP), India. This research database is an outcome of five year retrospective study based on assessment of records pertaining human deaths associated with poisoning occurred in southern Himachal Pradesh, and reported at State Forensic Science Laboratory (SFSL), Junga during 2010-14. Cases where ethyl alcohol was detected have been put under exclusion criterion. All the cases were reviewed and summarized in terms of yearly and monthly frequency of reports wrapping important information portraying the involvement of gender, age, locality, types of poison, and mode of death in the poisoning incidents. Review of these scientific reports showed some notable figures having a direct concern with public and legal domains to promote risk reduction and prevention of chemical poisonings. Specifications TableValue of the data\u2022Routine monitoring of deaths due to poisoning is an essential exercise to refreshing the database from all corners of the world. So, the current data become of high value as not much literature is available on chemical poisoning in human from HP.\u2022The data provided here is first of its kind information for readers to fully understand types and trends of poisoning prevailing in this part of the world.\u2022The information contained in this data report is intended for general use to assist public knowledge and research as well. These data are also a source of information for the state and native poison control centres and many other institutes to conduct research and strategize to deal this type of problem.1Due to easy availability, the use of chemical poisons has remained one of the most common ways of ending human lives by any mode of death. Large numbers of people die every year due to poisoning, especially acute pesticide poisoning 2The mountainous state of Himachal Pradesh is situated in the western Himalayan region of India. It comprises an area of 55,673 square kilometres divided into 12 districts which are further grouped into three divisions namely Shimla, Kangra and Mandi. The division of Shimla controls Shimla, Kinnaur, Sirmaur and Solan districts located in the southern region of this state. This division is a habitat of ~29% of total population and same coverage of geographical area of the state. These four districts come under the jurisdiction of SFSL established in Junga town. Post-mortem samples including blood, urine, viscera, gastric lavage or vomits material of victim obtained while autopsy are sent to this laboratory for chemical analysis of poisonous substances (if present any). The present database is inference of cases reported at SFSL, Junga from 1st January, 2010 up to 31st December, 2014. Data analysis involved all kinds of chemical poisoning.Dichlorovos and Paraquat was reported in majority (~30%) of cases. Out of 18 cases of paraquat poisoning, the maximum (10) cases were reported in year 2013 from all districts. 
Use of aluminum phosphide or zinc phosphide was reported in 12 cases, whereas in rest of the cases only phosphine was mentioned as poisonous substance.Database revealed 1291 positive reports out of 2721 total cases submitted during study period. Data presented herein is a realistic information depicting year wise reporting of positive cases from all selected districts , gender"} +{"text": "Avulsion fracture of the brachioradialis origin at its proximal attachment on the lateral supracondylar ridge of the distal humerus is exceedingly rare, and only two cases have been reported in the literature so far. In this article, we present a 38 years old patient who sustained a closed avulsion fracture of the lateral supracondylar ridge of left humerus at the proximal attachment of brachioradialis following a fall backwards on outstretched hand after being struck by a lorry from behind while riding on a two-wheeler (motorcycle). He was managed with above elbow plaster for four weeks followed by elbow and wrist mobilization. At final followup, the patient had painless full range elbow motion with good elbow flexion strength. The unique mechanism by which this avulasion fracture occurred is explained on the basis of the mode of injury, position of the limb and structure and function of the brachioradialis muscle. In this report, we discuss in detail the mechanism of injury of this type of avulsion fracture, the importance of proper imaging and the line of management, with brief review of the literature pertinent to this rare injury.Avulsion fracture of brachioradialis origin is an exceedingly rare entity. Thus far only two cases have been reported in the English literatureA 38 years old male patient presented to the emergency room with left elbow pain following a road traffic injury. He was travelling in a motorcycle when he was hit by a lorry from the rear and fell backward on the outstretched hand. On physical examination, there was minimal swelling around the elbow but significant tenderness with crepitus on the lateral supracondylar ridge just proximal to the lateral epicondyle. He had restriction of active terminal elbow extension by 10 degrees with near normal active elbow flexion, pronation and supination. Active flexion and extension at wrist were painful along with the painful terminal elbow extension. There was significant pain at the lateral distal humerus when active elbow flexion against resistance was performed in the mid-pronated position of the forearm. There was no distal neurovascular deficit.Antero-posterior (AP) plain radiograph of the left elbow showed a fracture of the lateral distal humerus at the proximal part of the lateral supracondylar ridge above the epicondyle . ComputeIn view of the minimally displaced fracture, the patient was managed non-operatively with a long arm plaster cast with the elbow in 90 degree flexion and wrist held in midpronated position to avoid stresses on the wrist extensors and the brachioradialis muscle. At the end of four weeks, the cast was removed, and clinical reassessment revealed mild tenderness at the fracture site with minimal pain upon active resisted elbow flexion in the semi-pronated position of the forearm. He was started on rehabilitation with gradual elbow, wrist and finger range of motion exercises and was instructed to abstain from lifting heavy objects for a period of 12 weeks. By 14 weeks, there was no local tenderness and the resisted active elbow flexion test was painless with full strength across wrist and elbow. 
Follow-up radiograph showed bone healing with good callus formation .3. Avulsion fracture of the brachioradialis muscle origin at the lateral distal humerus is extremely rare, and only two cases have been reported till now2.An avulsion fracture, usually seen in athletes, occurs when a small chip of bone attached to a ligament or tendon is pulled off from the main mass of the bone due to external force or forceful eccentric contraction of muscle. The common sites of humeral avulsion fractures include the fractures of lateral or medial epicondyles and fractures of greater or lesser tuberosities4. Brachioradialis along with the extensor carpi radialis longus and brevis form the dorsal mobile wad of the forearm and are innervated by the radial nerve. Given the location of its proximal and distal attachments, this muscle has a significant mechanical advantage for elbow flexion, specifically in the mid-pronated forearm5.Brachioradialis is the most superficial muscle along the radial side of the forearm and forms the lateral border of the cubital fossa. It arises from the proximal two-thirds of the lateral supracondylar ridge of the humerus and the anterior surface of the lateral intermuscular septum and inserts on the styloid process of the distal radius1 reported one case of type I open (Gustillo-Anderson) distal humerus fracture secondary to avulsion of the brachioradialis muscle origin in a man who was thrown backwards after being struck by a fireworks shell. According to them, when the victim was thrown, he attempted to prevent the fall with his arm placed backward, forearm pronated, elbow and shoulder extended which resulted in eccentric contraction of the muscle causing the avulsion fracture with a bony spike and a grade I open fracture. They managed him with debridement without any fixation followed by 48 hours of antibiotics with good bony union by six months.Guettler and Mayoet al2 reported a similar type of case in a professional lacrosse player who sustained injury following a direct check on the lateral distal humerus by a defending player\u2019s stick. Their patient had an associated superficial radial nerve injury along with avulsion fracture of the brachioradialis origin. The patient was treated conservatively with elbow splints with complete recovery of superficial radial nerve by eight weeks.Marchant 1 as here the patient fell backward on his outstretched hand after being struck by a lorry from the rear when he was travelling on a motorcycle. Our patient while being thrown would have attempted to prevent the fall with his arm placed backward, forearm pronated, elbow and shoulder extended. This position of the upper limb led to the forceful eccentric contraction of the brachioradialis muscle which resulted in the avulsion fracture of the muscle from its origin at the lateral supracondylar ridge of the distal humerus. This is the first case reported following road traffic injury.The mechanism of injury in our case is very similar to that reported by Guettler and MayoOn imaging, we found a 3cm crescentic bony fragment avulsed from the lateral distal humerus 1.2cm above the epicondyle with anterolateral displacement. 
The typical location over proximal part of the lateral supracondylar ridge, crescentic shape and anterolateral displacement of the fracture fragment in our patient were consistent with the avulsion fracture of brachioradialis muscle from its origin.One of the cases reported as in our case had been treated by immobilization with splints while the other reported case which had debridement for the open wound. The rationale for treating this avulsion fractures non-operatively is that undisplaced or minimally displaced avulsion fracture of muscle usually heal with minimal sequelae unless the bony fragment is a part of joint congruency and the muscle served as an important stabilizer for the joint. Also, open surgery carries a very high risk of injury to the radial nerve in view of its close proximity to the avulsion fracture fragment apart from the implant related complications and the need for second surgery to remove the implants.Avulsion fracture of brachioradialis muscle origin is exceedingly rare. Proper history, knowledge of the exact mechanism of injury, meticulous clinical examination and appropriate imaging will help in the diagnosis of such a rare injury. Non-operative treatment with splints for four weeks followed by mobilization is adequate for good fracture healing and functional outcome."} +{"text": "This study presents the findings of a questionnaire-based investigation of knowledge about the relationship of physical activity to health among adolescent participants of a community-based physical activity intervention program in S\u00e3o Paulo, Brazil. Qualitative and quantitative methods were applied to examine the participants responses to two open-ended questions concerning the health benefits of physical activity and the educational goals of the intervention. More than 75% of all participants stated that health benefits (of some type) are attained through participation in physical activity. More than 50% of participants reported that the goal of the intervention was to educate people about the importance of a healthy, active lifestyle. Adolescents understand the relationship of physical activity to health as reflected in their knowledge assessments; their lifestyle choices support these beliefs. These findings offer encouragement for the development and implementation of educationally oriented interventions aimed at providing physical activity information and programming."} +{"text": "While the principal role of interstitial cells in veins seems to be pacemaking, the role of arterial interstitial cells is less clear. This review summarises the knowledge of the functional and structural properties of vascular interstitial cells accumulated so far, offers hypotheses on their physiological role, and proposes directions for future research.Blood vessels are made up of several distinct cell types. Although it was originally thought that the tunica media of blood vessels was composed of a homogeneous population of fully differentiated smooth muscle cells, more recent data suggest the existence of multiple smooth muscle cell subpopulations in the vascular wall. One of the cell types contributing to this heterogeneity is the novel, irregularly shaped, noncontractile cell with thin processes, termed"} +{"text": "Over the past 10 years, key genes involved in specification of left-right laterality pathways in the embryo have been defined. The read-out for misexpression of laterality genes is usually the direction of heart looping. 
The questions of how the dextral looping direction arises mechanistically and how the heart tube bends remain unanswered. It is becoming clear from our experiments and those of others that left-right differences in cell proliferation in the second heart field (anterior heart field) drive the dextral direction. Evidence is accumulating that the cytoskeleton is at the center of laterality and of the bending and rotational forces associated with heart looping. If laterality pathways are modulated upstream, the cytoskeleton, including nonmuscle myosin II (NMHC-II), is altered downstream within the cardiomyocytes, leading to looping abnormalities. The cytoskeleton is associated with important mechanosensing and signaling pathways in cell biology and development. The initiation of blood flow during the looping period and the inherent stresses associated with increasing volumes of blood flowing into the heart may help to potentiate the process. In recent years, the steps involved in this central and complex process of heart development, which is the basis of numerous congenital heart defects, are being unraveled."} +{"text": "In human patients with autoimmune, viral, and bacterial diseases, the generation of antibodies (Abs) to foreign antigens and/or autoantibodies to self-antigens usually occurs. Some Abs with different catalytic activities (catalytic antibodies, or abzymes, Abzs) may be induced spontaneously by primary antigens and can have characteristics of the primary antigen, including the catalytic activity of idiotypic and/or anti-idiotypic Abs. Healthy humans usually do not develop Abzs, or their activities are low, often on the borderline of sensitivity of the detection methods. Detection of Abzs was shown to be the earliest indicator of development of different autoimmune diseases (ADs). At the early stages of ADs, the repertoire of Abzs is usually relatively narrow, but it greatly expands with the progress of the disease, leading to the generation of catalytically diverse Abzs with different activities and functions. Some Abzs are cytotoxic and can play an important negative role in the pathogenesis of ADs, while positive roles have been proposed for other Abzs. Abzs with some low activities can temporarily be present in the blood of patients in the course of viral and bacterial diseases, but their activity increases significantly if these infections stimulate development of ADs. A significant increase in the relative Abz activities is associated with a specific reorganization of the immune system, including changes in the differentiation and proliferation of bone marrow hematopoietic stem cells and lymphocyte proliferation in different organs. Different mechanisms of Abz production can be proposed for healthy externally immunized mammals and for autoimmune mammals during the development of pathology."} +{"text": "The errors and associated corrections described in this document concerning the original manuscript were attributable to the production department handling this manuscript, and thus are no fault of the authors of this paper. During the production process, Ou-Chen Wang was omitted from the list of corresponding authors in the original article . Both Xi"} +{"text": "We report on a patient who was referred for port implantation with a two-chamber pacemaker aggregate on the right and total occlusion of the central veins on the left side. 
Venous access for port implantation was performed via left side puncture of the horizontal segment of the anterior jugular vein system (AJVS) and insertion of the port catheter using a crossover technique from the left to the right venous system via the jugular venous arch (JVA). The clinical significance of the AJVS and the JVA for central venous access and port implantation is emphasised and the corresponding literature is reviewed. Venous access for chest port implantation may be compromised for a variety of reasons and finding a suitable alternative can be difficult. If a conventional approach for this purpose is not an option, collateral veins may offer an attractive alternative. A crossover route utilising the anterior jugular venous system (AJVS) and the jugular venous arch (JVA) may be considered. We present, to the best of our knowledge, a unique case where anatomical constraints led us to utilise this crossover technique for port placement.The placement of a chest collateral port in crossover technique through the anterior jugular vein system can be quite simple and the most appropriate method in the mentioned or a comparable scenario.A 70-year-old woman was admitted to our hospital for staging and therapy of a recently diagnosed adenocarcinoma of the esophagogastric junction. The significant past medical history of the patient included implantation of a two-chamber pacemaker on the right side and subtotal sternum resection with a pectoral muscular flap graft coverage for treatment of sternum osteomyelitis. Contrast enhanced computed tomography (CT) of the thorax showed chronic occlusion of the left-sided central veins with the presence of a cervical collateral venous flow from the left to the right side reaching the nonoccluded right-sided central veins, specifically the confluence to the right innominate vein (IV) and, as part of the collateral circulation, a transverse connecting, in parts quite tortuous, and u-shape midline jugular vein measuring about 3\u2009mm in diameter . StagingThe patient was presented to our radiological department for port implantation. We decided to opt for venous access on the left upper body to avoid problems with the pacemaker aggregate on the right upper body. The plan was to recanalise the left IV. If this proved unsuccessful, alternative access via puncture of a left-sided collateral vein to reach the contralateral central veins would be instigated.The procedure was performed under local anesthesia in our angiographic suite. Phlebography of the left arm demonstrated the venous anatomy of the shoulder region, as described above in the previously performed CT . After tIn our institution, both radiologists and surgeons utilise the anterior chest wall for the standard approach for port placement. As insertion of arm ports does not necessarily show always the best results , 2, we cFor patients with implantable cardiac devices, the general recommendation is to place the port on the contralateral side to avoid any damage to pacing or defibrillator leads during port placement , 4. 
FurtIn our patient, we considered a right-sided approach via the internal jugular vein (IJV) with tunnelling the catheter over a long distance to the other side, but we refrained from this because of the extended presternal scars after sternal osteomyelitis and subtotal sternal resection.In this clinical case, we proposed port placement at the left upper body to avoid problems with the pacemaker aggregate on the right side.Translumbar port placement had been discussed with the patient as an alternative, in the event of failed thoracic placement.A closer look at the anatomical conditions of our patient reveals that the supraclavicular punctured collateral vein segment is the horizontal lateral segment of the AJVS, as has been described and defined under functional and clinical aspects by Chasen and Charnsangavej and SchuThe JVA is an infrequently found transverse connecting trunk extending across the midline between the two anterior jugular veins (AJV) of either side and lying in the suprasternal space between superficial and pretracheal layers of the cervical fascia. The JVA serves as a natural crossover collateral and may become prominent in cases of deep venous outflow obstruction. It is the midline part of the AJVS, typically in u-shape or v-shape configuration. Apart from textbooks of surgery in the context of, for example, thyroid surgery or tracheostomy, it is mentioned in the literature mostly in relation to malposition of central vein catheters or unintended crossover placement of central lines , 8.The AJVS is an important collateral venous network across the midline of the superoanterior aspect of the thorax and, if fully developed, is composed of three segments: the JVA as the transverse midline segment and the The segmental anatomy of a fully developed AJVS is illustrated in the schematic drawing .Schummer et al. stated tThe literature covering collateral vein access for port insertion is rather limited. Teichgr\u00e4ber et al. reportedThe important role of the AJVS as a collateral also becomes apparent in the context presented by Yamada et al. , where aWe describe a case of placement of a port catheter by direct puncture of the horizontal lateral segment of the AJVS and crossover through the JVA that was technically possible without the use of special equipment, not necessitating greater effort or significantly higher costs than an ordinary port implantation, and feasible despite a tortuous and relatively narrow diameter JVA.In the unusual constellation of patients with central venous occlusion on one side and requiring ipsilateral port implantation, closer consideration of a potentially fully developed and enlarged AJVS is warranted, as this vessel has been proven clinically to be a major cervical crossover collateral vein.For clinicians, the AJVS can play an important role as a collateral for the insertion of port catheters, pacemaker leads, or other types of central devices."} +{"text": "Retinopathy of prematurity (ROP) is an ocular disorder which affects infants born before 34 weeks of gestation and/or with birth weight of less than 2000 grams. If not detected on time and appropriately managed, it can lead to irreversible blindness.The retinal blood vessels first appear between 15\u201318 weeks of gestation. These vessels grow outwards from the central part of the retina and extend towards the retinal periphery. 
The nasal part of the retina is fully vascularised by 36 weeks of gestation followed by the temporal retina which is completely vascularised between 36\u201340 weeks of gestation age . FollowiThe development of ROP can be divided into two phases - an initial phase of delayed growth of retinal vessels followed by a second phase of retinal vessel proliferation .At birth the lungs of an infant born preterm are immature placing him/her at a high risk of developing abnormally low level of oxygen in arterial blood.To overcome this, the newborn infant is given supplemental oxygen in NICU. Prior to 32 weeks of gestation, the retina is very immature and the retinal metabolic demand is low. This excess oxygen creates retinal hyperoxia and oxygen toxicity, inhibiting the production of VEGF. This is followed by temporary stopping or stoppage of normal retinal growth, and constriction of new immature vessels. As a result, there is a reduction of blood supply to retinal tissue and shortage of oxygen needed for metabolism.With increasing age of the preterm infant the retina matures. There is an increase in metabolic demand and oxygen consumption by the retina, creating a relative decrease in oxygen level. This promotes increase in the level of vascular endothelial growth factor (VEGF), triggering the formation of new blood vessels along the inner retinal surface. A demarcation ridge develops along the retina that separates the central vascularised retina from the peripheral avascular retina .The growth of retinal blood vessels at this stage may restart normally or may progress to significant ROP as seen by an abnormal growth of retinal vessels into the vitreous and over the surface of the retina. These new vessels are weak and underdeveloped failing to fulfill the oxygen demand of retinal tissue resulting in continuous growth of abnormal vessels. There is leakage of fluid or blood from these weak blood vessels. If not treated on time this can result in scarring or traction of the retina leading to retinal detachment and blindness ."} +{"text": "The effectiveness of immigrant integration policies has gained considerable attention across Western democracies dealing with ethnically and culturally diverse societies. However, the findings on what type of policy produces more favourable integration outcomes remain inconclusive. The conflation of normative and analytical assumptions on integration is a major challenge for causal analysis of integration policies. This article applies actor-centered institutionalism as a new framework for the analysis of immigrant integration outcomes in order to separate two different mechanisms of policy intervention. Conceptualising integration outcomes as a function of capabilities and aspirations allows separating assumptions on the policy intervention in assimilation and multiculturalism as the two main types of policy approaches. The article illustrates that assimilation is an incentive-based policy and primarily designed to increase immigrants\u2019 aspirations, whereas multiculturalism is an opportunity-based policy and primarily designed to increase immigrants\u2019 capabilities. Conceptualising causal mechanisms of policy intervention clarifies the link between normative concepts of immigrant integration and analytical concepts of policy effectiveness. Over the last three decades, the integration of immigrants has become a salient political issue in most European countries and policy makers implemented targeted policies to foster integration of new immigrants Givens, . 
IntegraOn the methodological side, this article uses \u2018actor-centered institutionalism\u2019 as its framework of analysis Scharpf, as integThis paper is structured in the following way. In a first step, the normative challenge in the current literature on immigrant integration is reviewed and the need for the conceptualization of causal mechanisms is discussed. Then I present a new institutionalist framework that proposes to evaluate the outcomes of integration policies as a function of integration capabilities and integration aspirations. In the next section follows an application of the framework to assimilation and multiculturalism as distinct types of integration policy regimes in order to derive two different logics of policy intervention. The paper concludes by assessing the benefits and implications of the proposed framework for future research.Policies of immigrant integration share the aim of steering and guiding the integration process in a more favourable direction. Any attempt to study immigrant integration policies requires a particular conceptualization of integration including the normative assumptions that a particular policy is based on. As argued by Spencer and Charsley , any undAlthough there is no universal definition of the term, the core meaning of integration is commonly described a social process of settlement and the accommodation by both the native and the immigrant populations, resulting in an increased social membership of immigrants Givens, , p. 72. As an opposing mode of integration, the concept of multiculturalism has become an important focus of scholars and policymakers.Literature on immigrant integration has mainly focused on assimilation and multiculturalism as two opposing modes of integration and regulatory institutions (policies).The framework presented in Table Public policies, such as immigrant integration policies, are specific institutions affecting the life of individuals much more directly than the formal design of the state that is commonly defined as institutions immigrants to participate in the receiving country which makes it necessary to make costly efforts to ensure they become an integrated part of society. Integration is understood to be a burdensome process that needs efforts to make up for deficiencies of immigrants. The duty lies primarily with the immigrant that is expected to undertake integration efforts. Accordingly, policies of assimilation are designed to influence the interest of immigrants in adapting to the majority population. They are characterised by a restrictive provision of rights to immigrants. The provision of equal rights and citizenship is granted only after migrants successfully integrated into society. The theoretical expectation is that this restrictive and conditional provision of rights incentivizes immigrants to undertake integration efforts. The incentives of assimilation policies are expected to drive immigrants, who would otherwise stay within their ethnic groups, to acquire the necessary linguistic and cultural skills to integration. Higher aspirations in turn are expected to contribute positively to immigrants\u2019 capabilities: If immigrants are interested enough in integration the opportunities will follow see Fig.\u00a0. 
Thus, aincentive-structure influencing the aspirations of immigrants, here, whether to aim for the acquisition of country-specific skills and inter-ethnic contacts, or a preference for staying within ethnic communities.As illustrative example for this approach, the influential empirical study of Koopmans 9 where tIn the second perspective of multiculturalism, immigrant integration takes place as a process of inclusion. The acquisition of country-specific skills is meant to enable immigrants to participate as equals and to unfold their potential in the receiving society. Immigrants are perceived as individuals with potentials and an intrinsic motivation to take part in the society of the receiving country. The duty lies primarily with the receiving society that is supposed to provide opportunities for the participation of immigrants on equal terms. Accordingly, policies of multiculturalism are designed to influence the ability of immigrants to integrate. They are characterised by an extensive range of rights granted to immigrants. The theoretical expectation is that a liberal provision of rights to immigrants facilitates their participation. Rights provide resources and access to institutions that increase the ability of immigrants to realise equal participation. Thus, any systematic underperformance of immigrants is expected to be the result of their limited capabilities. More capabilities are expected in turn to contribute positively to immigrants\u2019 aspirations: If immigrants are able to participate as equals, their perception of opportunities is given a boost and instil immigrants a belief that they can thrive in the receiving society see Fig.\u00a0.10 If riopportunity-structure influencing the capabilities of immigrants, whether immigrants are able to political mobilization and participation. Whether immigrants decide to participate is seen depending primarily on the resources available to them.As illustrative example for this approach is the prominent research of Irene Bloemraad on the political integration of immigrants or employment (socio-economic) or social belonging .The essential feature of actor-centered institutionalism is that it allows for the combination agency and structure within a uniform explanatory framework. Furthermore, the new framework allows learning how different ideas about the normative nature of the integration process result in particular types of policy intervention and what causal mechanism they imply. This might not only facilitate to disentangle contradicting findings on integration policy effectiveness but may also minimize the implicit transmission of outdated and stereotypical attributions of policy regimes on the interpretation of policy outcomes. The two concepts of assimilation and multiculturalism imply different specifications of the policy problem, and identify different causal pathways behind the relationship between the policy output and the policy outcome.The comparative analysis of immigrant integration policies face opposing normative notions of integration and a lack of concept validity when it comes to empirical measurements of policies. As a result, empirical evidence on which types of integration policy contribute under which circumstances to favorable integration outcomes remains inconclusive and disputed.The argument lined out in this article is that the confusion in the literature is at least partially the result of the underspecification of causal mechanisms regarding different types of integration policies. 
How specific policies are assumed to intervene into the integration process remains often a blackbox. Therefore, these assumed mechanisms need to be specified in a systematic manner and the nature of integration process linked with a concept of policy intervention. By applying the perspective of actor-centered institutionalism to integration policy outcomes a new coherent framework distinguishing integration capabilities from integration aspirations is presented. In this perspective, integration can be conceptualised as a function of individual aspirations to become a full member of the receiving country, and the individual capability to translate these aspirations into effective integration. In order to participate in and feel belonging to the receiving country, immigrants require both the motivation to integrate and the ability to do so.The application of this capability-aspiration framework to the policy design of multiculturalism and the policy design of assimilation illustrates how these two opposing policy types differ in their assumption about the policy problem, the policy solution, and the logic of policy intervention. Different policy regimes for immigrant integration address different aspects of the integration process and the dynamics between the individual immigrant and the society at large: assimilation seeks to strengthen the aspiration of immigrations, whereas multiculturalism seeks to strengthen the capabilities of immigrants. The integration policy problem is either seen as under-aspiration in the case of assimilation, or as under-capability in the case of multiculturalism. Accordingly, in the perspective of assimilation, integration outcomes can be improved by fostering immigrants\u2019 aspirations. In the perspective of multiculturalism, integration outcomes can be improved by fostering immigrants\u2019 capabilities. The fundamentals of the proposed framework are considering capabilities and aspirations as two sides of the integration coin, and the assignment of rights to immigrants as a potential double-edged sword. The study illustrates why assimilation and multiculturalism remain the main concepts of immigrant integration: they are based on two different visions of the integration process shaping two opposing premises about the policy problem and the remedy to it. Assimilation and multiculturalism form public philosophies of integration that structure and constrain their specific policy interventions. This article argues that assimilation is an incentive-based theory and multiculturalism is an opportunity-based theory.Nevertheless, the chosen approach of actor-centered institutionalism may ignore or conceal certain aspects of integration processes. Other explanatory approaches focussing on factors such as ethnic discrimination or educational resources are complementary to the proposed framework. Considering integration outcomes as a product of capabilities and aspirations is illuminating for an institutionalist perspective on integration policy outcomes and allows identifying specific policy intervention logics as demonstrated in this article. The benefits of this study are threefold. Firstly, from a theoretical perspective, the capability-aspiration framework provides a conceptual tool for the analysis of the effectiveness of policy interventions by combining the idea of an agency-structure relationship of actor-centered institutionalism with the comparative study of immigrant integration. 
Instead of a full-fledged explanation model, the capability-aspiration framework is a research heuristic on a high level of abstraction that allows the identifying and arranging of different explanatory factors relevant for immigrant integration outcomes in different dimensions.how policies are expected to influence integration outcomes of immigrants. The framework may help to translate normative notions of integration into useful analytical concepts of policy intervention to identify complex causal relationships.Secondly, from an empirical perspective, I illustrate how the contradicting findings in the literature can be disentangled, and elaborate on differences in the policy problem, the policy solution, and distinct causal mechanisms that have only been assumed implicitly in existing studies. While there are useful normative and descriptive typologies of integration policy regimes, the capability-aspiration framework offers a mechanistic typology of Thirdly, from a practical perspective, the new framework can help to understand the motivation and reasoning of politicians regarding specific integration policy interventions. Gaining certainty about the (perception of the) policy problem will allow policy-makers to draft more targeted policy intervention by anticipating the effects on immigrants\u2019 capabilities and aspirations. The framework allows establishing clear links between the notion of integration and the corresponding policy intervention. Therefore, the proposed heuristic may provide a useful tool, sharpening the understanding of integration policy effectiveness.Once established that integration outcomes are a complex interaction of aspirations and capabilities and that integration policy effectiveness depends on a complex interaction of incentives and opportunities, we can start building up systematic knowledge of how they interact. If this conceptual distinction is ignored or incentives and opportunities are taken as the same thing, we risk to reproduce the normativity trap and to miss the complex interaction of aspirations and capabilities. Future research should further specify and investigate the causal mechanisms outlined in this study in order to solve the puzzle of contradicting findings in existing literature on immigrant integration. Empirical studies on policy effectiveness may gain explanatory power if they specify capabilities and aspirations as mediating factors. The major challenge for future research will be to find empirical strategies to measure capabilities and aspirations in specific dimensions of immigrant integration in order to test the different logics of policy intervention against each other. The capability-aspiration framework provides a research heuristic meant to facilitate comparative studies over mutually exclusive approaches to immigrant integration, and proposes to focus on assumed but broadly untested causal relationships in different types of integration policy regimes."} +{"text": "Anatomical variations of the celiac trunk and its branches are particularly important from a surgical perspective due to their relationships with surrounding structures. We report here a particularly rare variant involving absence of the celiac trunk in association with trifurcation of the common hepatic artery. These variations were found in an adult male cadaver. We perform a review of the literature and discuss the clinical and embryological significance of these variations. 
Recognition of celiac trunk and hepatic artery variations is of utmost importance to surgeons and radiologists because multiple variations can lead to undue complications. The celiac trunk (CT) or celiac artery is the first ventral splanchnic branch of the abdominal aorta. It is the artery of the foregut and it originates from the ventral aspect of the abdominal aorta at the level of the junction between the T12 and L1 vertebrae. It is about 1.5-2 cm in length and usually terminates by dividing into three main branches: the left gastric, common hepatic and splenic arteries.During dissection classes for medical undergraduates, we observed the following variations of abdominal blood vessels in an adult male cadaver aged approximately 60-65 years, with a height of approximately 1.65 m, and body weight of 60 kg. The celiac trunk was absent. The left gastric artery, splenic artery and common hepatic artery all arose directly from the abdominal aorta . The supAnatomic variations of the celiac trunk and its branches are the result of anomalous embryogenesis of primitive ventral blood vessels originating from the abdominal aorta. Each dorsal aorta gives off ventral splanchnic arteries which supply the gut and its derivatives. Initially, these ventral branches are paired, but with the fusion of the dorsal aorta, they also fuse to form a series of unpaired segmental arteries which run in the dorsal mesentery of the gut. They gradually fuse to form the arteries of the foregut, midgut, and hindgut.Absence of the celiac trunk is a rare anomaly with incidence rates ranging from 0.1Knowledge of anatomical variations of the celiac trunk is important for surgeons during liver transplantation, laparoscopic surgery, radiological abdominal interventions, and penetrating injuries to the abdomen. Also, acquaintance of unique variations of absence of the celiac trunk may be useful in planning and performing radiological interventions such as celiacographyAnother variation that we report here is trifurcation of the common hepatic artery. Although many reports have been published on variant branches of the hepatic artery, its trifurcation is uncommon. Nayak et al.Recent advances in imaging studies have made accurate evaluation of the vascular anatomy of the upper gastrointestinal tract easier. Recognition of celiac trunk and hepatic artery variants is of utmost importance because it aids in planning of several surgical and interventional procedures, thereby helping to avoid undue complications."} +{"text": "Here, I discuss recent advances and the molecular details of the DNA segregation machinery, focusing on the formation of the segrosome complex.Bacterial extrachromosomal DNAs often contribute to virulence in pathogenic organisms or facilitate adaptation to particular environments. The transmission of genetic information from one generation to the next requires sufficient partitioning of DNA molecules to ensure that at least one copy reaches each side of the division plane and is inherited by the daughter cells. Segregation of the bacterial chromosome occurs during or after replication and probably involves a strategy in which several protein complexes participate to modify the folding pattern and distribution first of the origin domain and then of the rest of the chromosome. Low-copy number plasmids rely on specialized partitioning systems, which in some cases use a mechanism that show striking similarity to eukaryotic DNA segregation. 
Overall, there have been multiple systems implicated in the dynamic transport of DNA cargo to a new cellular position during the cell cycle but most seem to share a common initial DNA partitioning step, involving the formation of a nucleoprotein complex called the segrosome. The particular features and complex topologies of individual segrosomes depend on both the nature of the DNA binding protein involved and on the recognized centromeric DNA sequence, both of which vary across systems. The combination of The process of DNA segregation is a crucial stage of the bacterial cell cycle and it depends on the precise coordination with other cellular events. The faithful inheritance of genetic information during cell division ensures that each daughter cell receives a copy of the newly replicated DNA. In many organisms, the DNA-encoded genome consists of a core genome (the chromosome) and accessory genomes . MGEs often confer evolutionary advantages to the host bacteria, including the adaptation to different environmental niches. Many, if not most, naturally occurring MGEs are in low or unique copy number and thus bring their own post-replication survival apparatus encoded in stability determinants .par) systems help to reliably segregate sister DNAs via a process that could be seen as functionally analogous to the mitotic segregation of chromosomes in eukaryotic cells. The best studied and probably the most common partitioning systems constitute a compact genetic module that is tightly auto-regulated by one of the gene products and consists of only three elements: a cis-acting DNA sequence and two trans-acting proteins. The DNA sequence denotes a par site or centromere-like region, and can be located at a single site (upstream or downstream of the operon) or at multiple positions within the MGE. The trans-acting proteins consist of a centromere-binding protein (CBP) that binds to the centromere and forms a nucleoprotein complex (partition complex or segrosome), and a motor protein (an NTPase), that sometimes is a cytomotive filament, which effectively moves the MGE inside the bacteria through direct interaction with the segrosome. Initially, these systems were classified as follows, based on the molecular nature of the NTPase: type I , which further divided into Ia and Ib based on differences in the trans-acting proteins and the position of the centromere in the operon; and type II vary, the sites always contain inverted repeats. The parS site contains two different repeats asymmetrically arranged around a binding site for the IHF protein or the DNA wraps around a CBP oligomer (type II and III partition systems). The resulting segrosome is a single and discrete structure.par operon and consists of direct and inverted repeats. However, in plasmid pCXC100 the centromeric site contains only direct repeats is continuous and consists of direct and inverted repeats separated by AT-rich regions. A DNA region between the operon genes and the centromere (OF) comprises more repeats that play important roles in partitioning and transcription regulation consists of tandem repeats localized in a single locus upstream of the operon. 
The arrangement can be continuous, or the repeats can be split into two (pBtoxis) or three (pBsph) blocks within a single locus upstream of the operon, resembling the discontinuous parC sites located downstream of the partition operon (Oliva et al.). It is still common for new partitioning systems to be discovered in plasmids, phages, and on chromosomes. Together with a growing body of molecular insights, these will help to broaden our understanding of DNA trafficking during bacterial cell division and, in particular, of how DNA is attached to the CBP during segrosome formation and then to the motor protein through the segrosome. MO conceived and wrote this mini-review. The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. The handling Editor declared a shared affiliation, though no other collaboration, with the author and states that the process nevertheless met the standards of a fair and objective review."} +{"text": "Pruritus can be one of the important factors exacerbating discomfort in patients with chronic renal failure. In an unpublished clinical trial performed at the Semnan University of Medical Sciences, the effect of reducing the dialysate temperature on controlling pruritus was assessed. The results suggest a positive effect. The author recommends further studies to assess the effect of reducing the dialysate temperature on the control of pruritus. Pruritus is a common problem in patients with chronic renal failure; 25 to 35% of pre-dialysis patients and 60 to 80% of dialysis patients may complain of pruritus. The use of opioid antagonists in the treatment of uremic pruritus has been based on the observation that endogenous opioid peptides accumulate in uremic patients and their plasma concentrations increase. In a controlled study, the effect of naltrexone on pruritus in hemodialysis patients was assessed and showed a positive effect, although another study showed a contradictory result. An important additional point is that pruritus can be one of the important factors exacerbating discomfort in patients with chronic renal failure. In an unpublished clinical trial performed at the Semnan University of Medical Sciences, the effect of reducing the dialysate temperature on controlling pruritus was assessed. The results suggest a positive effect. Conclusion. Pruritus can be one of the important factors exacerbating discomfort in patients with chronic renal failure. In an unpublished clinical trial performed at the Semnan University of Medical Sciences, the effect of reducing the dialysate temperature on controlling pruritus was assessed. The results suggest a positive effect. The author recommends further studies to assess the effect of reducing the dialysate temperature on the control of pruritus. MRT was the sole author of the paper. The author declared no competing interests. Ethical issues have been completely observed by the author. None."} +{"text": "Streptococcus pneumoniae (pneumococcus) is the leading infectious cause of childhood bacterial pneumonia. The diagnosis of childhood pneumonia remains a critical epidemiological task for monitoring vaccine and treatment program effectiveness. The chest radiograph remains the most readily available and common imaging modality to assess childhood pneumonia. 
In 1997, the World Health Organization Radiology Working Group was established to provide a consensus method for the standardized definition for the interpretation of pediatric frontal chest radiographs, for use in bacterial vaccine efficacy trials in children. The definition was not designed for use in individual patient clinical management because of its emphasis on specificity at the expense of sensitivity. These definitions and endpoint conclusions were published in 2001 and an analysis of observer variation for these conclusions using a reference library of chest radiographs was published in 2005. In response to the technical needs identified through subsequent meetings, the World Health Organization Chest Radiography in Epidemiological Studies (CRES) project was initiated and is designed to be a continuation of the World Health Organization Radiology Working Group. The aims of the World Health Organization CRES project are to clarify the definitions used in the World Health Organization defined standardized interpretation of pediatric chest radiographs in bacterial vaccine impact and pneumonia epidemiological studies, reinforce the focus on reproducible chest radiograph readings, provide training and support with World Health Organization defined standardized interpretation of chest radiographs and develop guidelines and tools for investigators and site staff to assist in obtaining high-quality chest radiographs.Childhood pneumonia is among the leading infectious causes of mortality in children younger than 5 years of age globally. Streptococcus pneumoniae (pneumococcus) is the leading infectious cause of childhood bacterial pneumonia; it is estimated to have caused 411,000 and 335,000 deaths globally in 2010 and 2015, respectively, in children in this age group [Streptococcus pneumoniae and Haemophilus influenzae type b, has become routine in low-income countries with the widespread introduction of pneumococcal conjugate vaccines and the Haemophilus influenzae type b vaccine in the past decade [Childhood pneumonia is among the leading infectious causes of mortality in children younger than 5 years of age globally. . StreptoHaemophilus influenzae type b and pneumococcal conjugate vaccines vaccine trials , and for determining this spectrum of disease burden from epidemiological studies. The definitions were not designed for use in individual patient clinical management because of their emphasis on specificity at the expense of sensitivity [It is recognized that the chest radiograph has a number of limitations compared to imaging modalities like ultrasound (US) and computerized tomography (CT) of the chest in the diagnosis of childhood pneumonia \u201310. Howesitivity .Haemophilus influenzae type b pneumonia. If the endpoint lacks specificity, the number of false-positives increases, which reduces the power and precision of the bacterial vaccine trial. Therefore, the World Health Organization standardized chest radiograph definition of primary endpoint pneumonia, which is used as the endpoint in bacterial vaccine trials, compromises on sensitivity, recognizing that certain cases will be missed, but that this does not greatly affect the precision of the vaccine efficacy outcome. 
The World Health Organization standardized definition of other infiltrates is not used as an endpoint in bacterial vaccine trials [In bacterial vaccine trials, specificity for measurement of vaccine efficacy is emphasized, with the aim to establish whether the vaccine has activity to prevent pneumococcal or e trials , 15.These World Health Organization standardized definitions and endpoint conclusions were published in 2001, and an analysis of observer variation for these conclusions using a reference library of chest radiographs was published in 2005 . The refHaemophilus influenzae type b Initiative Radiology Workshop in Hanoi, Vietnam, in 2011 [By 2005, World Health Organization support for standardized chest radiograph interpretation had declined due to resource limitations, despite the increasing application of this tool in a number of epidemiological settings. The applications included pneumonia vaccine efficacy trials, pneumonia vaccine probe studies, malaria efficacy trials, an evaluation of indoor air pollution reduction, pneumonia surveillance activities and pneumonia etiology studies \u201319. Reco in 2011 and a se in 2011 . These m in 2011 , 15.In response to the technical needs identified through these meetings, the World Health Organization CRES project was initiated. It is designed to be a continuation of the World Health Organization Radiology Working Group. The World Health Organization CRES project is a subproject of the pneumococcal conjugate vaccines technical coordination project, which is a collaboration between the Immunization, Vaccines and Biologicals Department of the World Health Organization and the Johns Hopkins University Bloomberg School of Public Health, in Baltimore, Maryland, funded by the Bill and Melinda Gates Foundation. The World Health Organization CRES project is based at the Murdoch Children\u2019s Research Institute in Melbourne, Australia, and is particularly important for countries with planned or ongoing studies to measure the impact of pneumococcal conjugate vaccines, especially in Asia where the uptake of pneumococcal conjugate vaccines has lagged behind that of other regions.The aims of the World Health Organization CRES project are to clarify the definitions used in the World Health Organization defined standardized interpretation of pediatric chest radiographs in vaccine impact and pneumonia epidemiological studies , reinforThe clarifications to the original World Health Organization standardized chest radiograph interpretation definitions relate tThe silhouette sign refers to the absence of depiction of an anatomical soft-tissue border. The silhouette sign results from the juxtaposition of structures of similar radiographic attenuation and this sign actually refers to the absence of a silhouette. It is caused by consolidation and/or atelectasis of the adjacent lung, a large mass or by contiguous pleural fluid . HoweverA footnote to the original World Health Organization definitions refers to the silhouette sign as follows: \u201cIn the presence of any visible adjacent opacity, a silhouette sign, where the length of loss of an anatomical border is greater or equal to the length of a posterior rib and one adjacent rib space at the same level, is considered to indicate of endpoint consolidation. A silhouette sign of this size without a visible adjacent opacity is considered to fulfill the definition of other infiltrate.\u201d This is illustrated in Fig. 
An updated set of reference training chest radiographs is being developed under the World Health Organization CRES project. Prior to the June 2016 meeting, the World Health Organization Technical Working Group applied the proposed clarifications to the World Health Organization definitions by interpreting a set of 400 chest radiographs that were not part of the original World Health Organization reference training set and 50 chest radiographs from the original World Health Organization reference training set. These 450 chest radiographs were read by the World Health Organization Technical Working Group members and assigned to one of the three conclusions. Thirteen readers from the World Health Organization Technical Working Group completed all 450 chest radiographs at the time of a data freeze on 20 June 2016. Detailed analysis of these readings showed good correlation between the final reading using the original World Health Organization definitions and those of the new readers using the clarified criteria. As described in the literature using World Health Organization defined standardized chest radiograph definitions , the intThe new set of World Health Organization training chest radiographs are being prepared, emphasizing the need for inclusion of chest radiographs with a high World Health Organization CRES committee reader agreement. The chest radiographs will have comments and annotations added by a subgroup of the World Health Organization CRES project in order to improve their value as a teaching tool. Guidelines for the training and assessment of chest radiograph readers and support for studies in the form of a centralized arbitration process for discordant chest radiograph interpretations are also under development.The original World Health Organization Radiology Working Group provided standardized definitions on chest radiograph quality. An adequate chest radiograph allows for confident interpretation for primary endpoint pneumonia and other infiltrates. Suboptimal chest radiograph allows for the interpretation of primary endpoint pneumonia but not for other infiltrates. Uninterpretable chest radiograph is not interpretable with respect to the presence or absence of primary endpoint pneumonia or other infiltrates Table 11. The cuThe World Health Organization CRES project aims to support investigators using the World Health Organization standardized methodology for interpreting pediatric chest radiographs for vaccine efficacy and pneumonia epidemiological studies, including providing clarifications to the original World Health Organization defined standardized chest radiograph interpretation, an updated reference training set of chest radiographs, resources for the training and assessment of readers, guidance on chest radiograph quality and safety, updated reference publications and a centralized arbitration process for the resolution of chest radiographs with discordant interpretations. The final World Health Organization materials and guidance are expected to be published and available in 2017."} +{"text": "Trail Registration. The case report is registered in Research Registry under the UIN researchregistry743.Minimally invasive hysterectomy is a standard procedure. Different approaches, as laparoscopically assisted vaginal hysterectomy, vaginal hysterectomy, and subtotal and total laparoscopic hysterectomy, have been described and evaluated by various investigations as safe and cost-effective methods. 
In particular, in comparison to abdominal hysterectomy, the minimally invasive methods have undoubted advantages for the patients. The main reason for a primary abdominal hysterectomy or conversion to abdominal hysterectomy during a minimally invasive approach is the uterine size. We describe our course of action in the retrospective analysis of five cases of total minimal-access hysterectomy, combining laparoscopic subtotal hysterectomy and vaginal extirpation of the cervix in uterine myomatosis with a uterine weight of more than 1000 grams, and discuss the factors that limit the use of laparoscopy in the treatment of big uteri. The disadvantages of abdominal hysterectomy in comparison to the vaginal or laparoscopic approach have been shown in various studies. A retrospective analysis was performed of the outcomes of five cases of total minimal-access hysterectomy carried out by the same gynecologic surgeon in uteri of more than 1000 grams. We realized five total laparoscopic hysterectomies with a median uterine weight of 1422 grams (1035–2100 g). In all five cases we performed a laparoscopic subtotal hysterectomy with electric morcellation and subsequent vaginal extirpation of the cervix. The median duration of surgery was 194 min (135–237 min). In all cases we performed an initial diagnostic hysteroscopy to avoid the morcellation of intrauterine malignant pathologies and used an abrasor as uterine manipulator and an electric power morcellator. In all five cases the premenopausal patients presented with bleeding disorders (hypermenorrhea and dysmenorrhea) and symptoms due to the uterine size, such as compression of bladder and bowel and pressure on the sacral plexus. We performed the supracervical resection of the uterine body after coagulation of the ascending branches of the A. uterina, avoiding contact with the bladder and the ureteral region. The morcellation was performed with a 15 mm reusable electric morcellator inserted through the left lower incision. Subsequently, the cervix was easily resected via the vaginal approach, avoiding abdominal contact with the ureter, followed by peritoneal closure and suture of the vaginal cuff. The localization of the optical trocar depended on the cranial uterine extension. We chose an incision in the midline between the umbilicus and the xiphoid process (epigastric area) or, alternatively, a first approach at Palmer's point. In three cases we used 2 auxiliary trocars (5 mm), in two cases 3 auxiliary trocars, and in each case a 5 mm optical system. A good surgical result is always based on a correct indication. In the case of very large fibroid uteri, the evaluation of the feasibility of a minimally invasive approach has to consider the uterine size, uterine mobility, the patient's clinical history (prior surgeries and adhesions), and the general physical condition. Examination under general anesthesia by an experienced surgeon must be part of the presurgical evaluation. The success of total minimal-access hysterectomy in uteri of more than 1000 grams also depends on the following factors: the surgical experience of the team, the instrumental equipment, the positioning of the patient, uterine mobilization by a manipulator, and the flexibility of the anterior abdominal wall and, correspondingly, the intraabdominal residual volume and ventilation pressure. The duration of the surgery also depends on these factors and not only on the uterine weight. Interestingly, we were able to realize the fastest surgery in the largest uterus. 
De Wilde showed the possible complications depending on the duration of a surgery . TherefoTchartchian et al. described the combined laparoscopic hysterectomy (LACH) in a case of a uterus with a weight of 2400 grams using the so-called switch-over technique , which mIn all cases we preferred to combine the laparoscopic subtotal hysterectomy with the subsequent vaginal extirpation of the cervix, instead of performing a total laparoscopic hysterectomy. The most important surgical step in laparoscopic hysterectomy is the devascularisation of the uterus. After the coagulation and dissection of the uterine arteries the risk of bleeding is minimal. In case of a total laparoscopic hysterectomy the preparation of the bladder and the paracervical region would be the next step. As the preparation in big uteri is more difficult than normal, the laparoscopic subtotal hysterectomy is easier to perform and can avoid bladder and ureteral complications. Cipullo et al. described a higher rate of major complications in TLH compared to LASH . MorelliA condition of the total minimal-access hysterectomy is the morcellation of the uterine body with an electric morcellator in order to extract the tissue from the abdominal cavity. Because of potential tissue dissemination within the abdominal cavity, the Food and Drug Administration recently warned against the use of electromechanical power morcellation in hysterectomy and myomectomy . The risIn large uteri of more than 1000 grams the total minimal-access hysterectomy is a safe surgical approach with all the advantages of minimally invasive surgery. The feasibility of the method does depend not only on the uterine weight but on a complex variety of factors that must be considered in the process of indication by the experienced surgeon. Thus abdominal hysterectomy can be even avoided in many cases of big uteri."} +{"text": "Schizophrenia spectrum disorders have major implications for the individuals, their families and society.Antipsychotic medication is the cornerstone in the treatment of psychotic symptoms and is effective in the reduction of psychotic symptoms and of relapse after remission of psychotic symptoms. This is the reason for recommending maintenance treatment with antipsychotic medication in national and international guidelines for the treatment of schizophrenia, one year after remission of psychotic symptoms in first episode psychosis.The aim of the study is to investigate the effect of tapered discontinuation versus maintenance therapy with antipsychotic medication in patients with newly diagnosed schizophrenia or persistent delusional disorder and with minimum three months remission of psychotic symptoms, and to find minimal effective dose of antipsychotic medication. Negative symptoms, cognitive impairments and the side effects of antipsychotic medication can cause a serious and long-term burden for patients and can reduce their quality of life. The TAILOR study will investigate these important aspects.The study is a randomized multicenter single blinded clinical trial. The aim is to include 250 patients from the outpatient early intervention program, OPUS, a 2 years manualized psychiatric treatment programme. At baseline patients must have 3 months remission of psychotic symptoms as documented by the SAPS (Schedule for Assessment of Positive Symptoms in Schizophrenia).The patients will be randomized to either tapered discontinuation or dose reduction of antipsychotic medication or treatment as usual stratified according to substance abuse. 
The intervention will last for 1 year, and follow-up interviews will be conducted after 1, 2, and 5 years. The patients will receive a user-developed mobile phone application to make daily registrations. The study has been enrolling patients since May 2017. The first data are expected in 2019. The TAILOR trial will contribute to knowledge about the effect of tapering/discontinuation of antipsychotic medication in the early phases of schizophrenia spectrum disorders, and hopefully the results may guide future clinical treatment regimens of antipsychotic medication. The trial is a complex medical intervention, and it raises ethical, practical, and organizational challenges. When designing the TAILOR trial, ethical questions were raised regarding blinding and the design of the intervention. In the trial, only the researchers are blinded; neither clinicians nor patients are, because they need to be attentive to the high risk of relapse in the discontinuation group. The design allows the clinicians to adjust the dose of the antipsychotic medication to ensure sufficient treatment. Therefore, the trial only includes assessor blinding, and the groups might end up being more similar than intended. In general, it is an ethical concern that the trial participants in the tapering/discontinuation group will be subjected to a higher risk of relapse. On the other hand, it would seem unethical if research failed to identify the group of patients who can discontinue antipsychotic medication without relapsing. Practical challenges will include achieving sufficient recruitment, maintaining patient motivation, and limiting dropout."} +{"text": "Background. Evidence exists that radicals are crucial agents in wound regeneration, helping to enhance the repair process. Materials and methods. The lineshape of the electron paramagnetic resonance (EPR) spectra of the burn wounds measured with low microwave power (2.2 mW) was numerically analyzed. The experimental spectra were fitted by the sum of two and three lines. Results. The number of lines in the EPR spectrum corresponded to the number of different groups of radicals in the natural samples after thermal treatment. The component lines were described by Gaussian and Lorentzian functions. The spectra of the burn wounds were a superposition of three lines differing in shape and in linewidth. The best fitting was obtained for the sum of a broad Gaussian, a broad Lorentzian, and a narrow Lorentzian line. Dipolar interactions between the unpaired electrons widened the broad Gaussian and broad Lorentzian lines. Radicals with narrow Lorentzian lines predominated in the tested samples. Conclusions. The spectral shape analysis may be proposed as a useful method for determining the number of different groups of radicals in the burn wounds. The healing of burn wounds with loss of tissue and infection of the damaged site occurs by granulation. Healing of burn wounds is a complex biologic process based on the replacement of the damaged tissue with living tissue [1, 2]. The numerical analysis of the lineshape of the electron paramagnetic resonance (EPR) spectra was performed to find different groups of radicals in the burn wounds. Different types of thermally formed radicals in skin and tissue revealed characteristic EPR signals, which were identified. 
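For readers who want the Gaussian and Lorentzian component lines mentioned above in explicit form, the following equations give a standard parameterization of the first-derivative lineshapes in terms of the resonance field B0 and the peak-to-peak linewidth ΔBpp. The exact normalization used by the authors is not stated in the text, so the amplitude a and the residual criterion S shown here are an assumed, textbook form rather than their specific implementation.

```latex
% Standard first-derivative EPR component lineshapes (assumed parameterization):
% each component has an amplitude a, a resonance field B_0 and a
% peak-to-peak linewidth \Delta B_{pp}.
\begin{align}
  Y'_{G}(B) &= -\,a\,(B-B_{0})\,
      \exp\!\left[-\,\frac{2\,(B-B_{0})^{2}}{\Delta B_{pp}^{2}}\right]
      && \text{(Gaussian derivative, } \Delta B_{pp}=2\sigma\text{)}\\[4pt]
  Y'_{L}(B) &= -\,\frac{a\,(B-B_{0})}
      {\left[(B-B_{0})^{2}+\tfrac{3}{4}\,\Delta B_{pp}^{2}\right]^{2}}
      && \text{(Lorentzian derivative, } \Gamma=\tfrac{\sqrt{3}}{2}\Delta B_{pp}\text{)}\\[4pt]
  Y'(B) &= \sum_{i} Y'_{i}(B),\qquad
  S=\sqrt{\tfrac{1}{N}\sum_{x=1}^{N}\bigl[f_{r}(x)-f_{p}(x)\bigr]^{2}}
      && \text{(multi-component model and fit criterion)}
\end{align}
```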
The aim of this study was the application of numerical procedures to multicomponent microwave absorption curves to obtain information about the number of different groups of radicals in the burn wounds. This work was the continuation of our earlier comparative EPR studies of different groups of radicals in burn wounds treated with propolis and silver sulphadiazine salt. The study protocol was approved by the Regional Ethics Committee for Experiments on Animals of the Medical University of Silesia. The experiments were conducted in the Central Experimental Animal Quarters of the Medical University of Silesia. The Polish Landrace pigs were kept in uniform zoohygienic conditions both before and during the experiment. The animals were fed a balanced mixture (R 233) and kept in full physical and mental comfort, eliminating stress reactions. The skin burn wounds were inflicted according to Hoekstra's model. The EPR spectra of the burn wounds located in thin-walled glass tubes with an external diameter of 3 mm were measured at room temperature with an X-band (9.3 GHz) electron paramagnetic resonance spectrometer of the Radiopan Firm with magnetic modulation of 100 kHz. The mass of the samples in the tubes was obtained with a balance of the Sartorius Firm (Germany). The mass of the burn wound samples was calculated as the difference between the mass of the tube with the sample and the mass of the empty tube. The tubes were characterized by high paramagnetic purity and were free of EPR signals under the measurement conditions. Microwave frequency (ν) was directly measured by an MCM101 recorder produced by the EPRAD Firm. The total microwave power produced by the klystron in the microwave bridge of the EPR spectrometer was 70 mW. The EPR spectra were recorded without microwave saturation effects at a low microwave power equal to 2.2 mW. This low value of microwave power corresponded to a high value of attenuation equaling 15 dB, according to formula (1): attenuation [dB] = 10·log10(Mo/M), where Mo is the total produced microwave power (70 mW) and M is the microwave power used during the EPR measurement. The EPR spectra were measured as the first derivative of microwave absorption. The Rapid Scan Unit of the Jagmar Firm was used as the data acquisition system. The professional spectroscopic program of the Jagmar Firm and a LabView program (USA) were applied. The lineshape of the EPR spectra of the burn wounds was numerically analyzed. The spectra were fitted by single lines and by the sum of two and three lines. Lines with Gaussian and Lorentzian shapes were used. The experimental EPR spectra of the burn wounds were fitted by theoretical lines as a superposition of two Gaussian lines (GG), two Lorentzian lines (LL), and by the sum of a Gaussian and a Lorentzian (GL) line. The experimental spectra were also fitted by the sum of three Gaussian lines (GGG), three Lorentzian lines (LLL), two Gaussian lines and one Lorentzian line (GGL), and one Gaussian and two Lorentzian lines (GLL). The best fitting of the experimental EPR spectra of the burn wounds was recognized as the one giving the lowest value of the standard deviation (S). The linewidths (ΔBpp) and the percentage fractions of each component line in the total EPR spectrum were determined. The fraction of an individual line in the total spectrum was obtained as the percentage fraction of the integral intensity of that component line in the integral intensity of the total spectrum. The analyzed signal was treated as a discrete time series fr(x), x = 1, …, N.
N represents a discrete set of function values within domain and time step \u2206x = 10\u22123. The best match of fr function using fp function defined as a composition of basic functions was searched. The first derivative of microwave absorption function for radicals was taken into account. The signal was filtered before approximation and its reference point was defined.The numerical procedure of the spectroscopic analysis is described below. The analysis of the complex signal, which is given as a time series Filtering of the signal was performed by moving an average filter and the filter based on fast Fourier transformation (FFT) , 18. Thefr according to x- and y- axes. To find the reference values, the following rules were used: (a) the sum of values of samples separated by x-axis is close to 0, and (b) the absolute value of the sum of function from left side of y-axis plus the absolute value of the sum of function from the right side of y were maximized. Finally, the filtered and shifted fr function is processed by the approximation algorithm. The Gaussian and Lorentzian functions were selected to the approximations. The genetic algorithm with [The reference point was found by defining a horizontal and vertical shift of a filtered signal thm with the conjthm with .Radicals and other paramagnetic molecules play an important role in both physiological and pathological conditions. Radicals are formed in burn wound matrix in thermolysis processes . As a reDifferent groups of radicals in the skin burn wounds have been found in our study.The experimental EPR spectrum of the burn wounds is shown in 225The numerical analysis of the spectral shape pointed out that the experimental EPR spectrum of the burn wounds was a superposition of the three lines Tables and 2. TBpp) of their EPR signals. The parameters of the EPR lines of these groups of radicals are visible in A different chemical origin of each of the group of radicals was reflected in linewidths line > narrow Lorentzian (L2) line > broad Lorentzian (L1) line . The high oxygen . These rThis work was a fine example of applying the electron paramagnetic resonance and the numerical procedures of the spectral lineshape analysis for identification of the complex radical system in tissues. The analysis of the total EPR spectra gives only information about the whole radicals system. The performed deconvolution of the resultant EPR spectra to the component lines allowed us to different groups of radicals in the biological samples.Our findings have significant implications since the healing of the burn wound remains a challenge to modern medicine. Oxidative stress could contribute to secondary tissue impairment and altered immune function after burn traumatic injury. Moreover, the study indicated various groups of radicals arising during the healing process of burn wounds which may modulate the transition of ECM components as well as influence the activity of the components participating in degradation by matrix metalloproteinases, thus contributing to controlling the healing process at cellular level and conditioning the formation of the favorable biochemical environment favoring an effective tissue repair , 25. DueThe numerical study of the lineshape of the complex EPR spectra was a helpful method for identification of the number of different groups of radicals in the burn wounds. These EPR spectra were superposition of three lines: broad Gaussian (G1) line, broad Lorentzian (L1) line, and narrow (L2) Lorentzian line. 
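The deconvolution described above can be reproduced in outline with standard numerical tools. Below is a minimal, illustrative Python sketch that fits a first-derivative EPR spectrum with the sum of a broad Gaussian, a broad Lorentzian and a narrow Lorentzian component. It uses nonlinear least squares (scipy) rather than the genetic algorithm and the Jagmar/LabView software used by the authors, and the field axis, component parameters and noise level are invented for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

def d_gauss(B, A, B0, s):
    # First derivative of a Gaussian absorption line (peak-to-peak linewidth = 2*s).
    return -A * (B - B0) / s**2 * np.exp(-((B - B0) ** 2) / (2 * s**2))

def d_lorentz(B, A, B0, g):
    # First derivative of a Lorentzian absorption line with half-width g
    # (peak-to-peak linewidth = 2*g/sqrt(3)).
    return -2 * A * g**2 * (B - B0) / ((B - B0) ** 2 + g**2) ** 2

def model(B, *p):
    # G1 (broad Gaussian) + L1 (broad Lorentzian) + L2 (narrow Lorentzian).
    return d_gauss(B, *p[0:3]) + d_lorentz(B, *p[3:6]) + d_lorentz(B, *p[6:9])

# Synthetic "measured" spectrum standing in for the experimental data (field axis in mT).
B = np.linspace(320, 345, 2048)
rng = np.random.default_rng(1)
true_params = [1.0, 332.5, 2.5, 0.8, 332.5, 1.8, 0.5, 332.5, 0.4]
y = model(B, *true_params) + 0.002 * rng.standard_normal(B.size)

# Fit the three-component model; the preferred model is the one with the lowest
# residual standard deviation, mirroring the criterion described in the text.
p0 = [1.0, 332.0, 2.0, 1.0, 332.0, 2.0, 0.5, 332.0, 0.5]
popt, _ = curve_fit(model, B, y, p0=p0, maxfev=20000)
residual_sd = np.std(y - model(B, *popt))
print("residual standard deviation:", residual_sd)
print("fitted peak-to-peak linewidths (mT):",
      2 * popt[2], 2 * popt[5] / np.sqrt(3), 2 * popt[8] / np.sqrt(3))
```

Repeating the fit with the other combinations of component shapes (GG, GL, GGL, GLL, and so on) and comparing the residual standard deviations reproduces the model-selection step described above.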
The modern spectral analysis was proposed. This method additionally gave information about the percentage contents of each type of radicals in the burn wounds. Radicals with weak dipolar interactions responsible for the narrow Lorentzian (L2) line existed mainly in the examined burn wounds."} +{"text": "In this work, an estimation of the relative yield losses of wheat due to ozone exposure is made by means of two approaches proposed by the CLRTAP (Convention on Long Range Transboundary Air Pollution): the exposure-response approach, which deals with the exposure of plants to ozone during a certain time, and the accumulated uptake approach, which, besides ozone exposure, deals with the velocity of absorption of the contaminant and the environmental factors that modulate that absorption. Once the relative yield losses are calculated by means of the two approaches, the aim is to establish which index (the exposure-response index or the accumulated uptake index) best characterizes the response of wheat plants to ozone. The relative yield losses are compared considering two watering regimes: well watered and nonwatered. The results obtained show that the relative yield losses in wheat due to ozone exposure are much more strongly linked to the real quantity of ozone absorbed by plants than to the environmental ozone exposure, which means that the accumulated uptake approach is much more realistic than the exposure-response approach. Relative yield loss estimations were higher in a crop with no watering; 3% of relative yield losses more than a crop watered until field capacity."} +{"text": "The involvement of the reticular formation (RF) in the transmission and modulation of nociceptive information has been extensively studied. The brainstem RF contains several areas which are targeted by spinal cord afferents conveying nociceptive input. The arrival of nociceptive input to the RF may trigger alert reactions which generate a protective/defense reaction to pain. RF neurons located at the medulla oblongata and targeted by ascending nociceptive information are also involved in the control of vital functions that can be affected by pain, namely cardiovascular control. The RF contains centers that belong to the pain modulatory system, namely areas involved in bidirectional balance (decrease or enhancement) of pain responses. It is currently accepted that the imbalance of pain modulation towards pain facilitation accounts for chronic pain. The medullary RF has the peculiarity of harboring areas involved in bidirectional pain control namely by the existence of specific neuronal populations involved in antinociceptive or pronociceptive behavioral responses, namely at the rostroventromedial medulla (RVM) and the caudal ventrolateral medulla (VLM). Furthermore the dorsal reticular nucleus may enhance nociceptive responses, through a reverberative circuit established with spinal lamina I neurons and inhibit wide-dynamic range (WDR) neurons of the deep dorsal horn. The components of the triad RVM-VLM-DRt are reciprocally connected and represent a key gateway for top-down pain modulation. The RVM-VLM-DRt triad also represents the neurobiological substrate for the emotional and cognitive modulation of pain, through pathways that involve the periaqueductal gray (PAG)-RVM connection. 
Collectively, we propose that the RVM-VLM-DRt triad represents a key component of the \u201cdynamic pain connectome\u201d with special features to provide integrated and rapid responses in situations which are life-threatening and involve pain. The new available techniques in neurobiological studies both in animal and human studies are producing new and fascinating data which allow to understand the complex role of the RF in pain modulation and its integration with several body functions and also how the RF accounts for chronic pain. The involvement of the brainstem reticular formation (RF) in the transmission and modulation of pain is well established. A long path has been covered since the initial anatomical studies demonstrating that the RF projects to the thalamus, passing by the functional approaches showing that manipulation of the RF changes behavioral nociceptive responses and going through imaging studies in humans indicating activation of the RF in response to pain. The study of the role of the RF in pain is challenging due to anatomical and functional reasons. Anatomically, the RF is defined as an aggregation of neurons with several morphological configurations and without distinct connection pattern. Functionally, and besides the sensory component of pain, the RF is involved in a plethora of functions which include arousal, motor reactions, cardiovascular control and visceral functions. This anatomofunctional complexity of the RF deviated neuroscientists from a global study of the involvement of the RF in pain processing and most studies have been directed to specific areas of the RF. Taking into account the concept that pain control cannot be studied apart from other brain functions and also the intrinsic feature of the RF as the brain network, by excellence, it is possible that the RF represents an outstanding example of the \u201cdynamic pain connectome\u201d , the caudal ventrolateral medulla (VLM) and the dorsal reticular nucleus . After reviewing the anatomofunctional features of the involvement of each area in pain transmission and modulation, we then discuss how the triad RVM-VLM-DRt plays a key role as a gateway to allow pain modulation from the brain to target the spinal cord, i.e., top-down modulation Figure . The rolThe involvement of the spinoreticulothalamic pathway as a major ascending pathway for nociceptive transmission to the brain is well established. Overall this multisynaptic pathway originated from neurons mainly located in the spinal cord laminae IV\u2013V and VII\u2013VIII targets areas of the medullary and pontine RF which have collaterals of the spinothalamic tract is involved in stimulation-produced analgesia anatomical location; (ii) connections with the spinal cord; (iii) nociceptive control; and (iv) local neurochemical systems.The RVM includes the midline located nucleus raphe-magnus (NRM) and the adjacent RF in the vicinity of the nucleus reticularis gigantocellularis. In what concerns its involvement in pain modulation, these anatomical components of the RVM are frequently considered together inasmuch that local modulation techniques, such as electrical or chemical stimulation, do not allow a precise discrimination of the NRM from the adjacent RF. Rostrocaudally, the RVM extends from the pontomedullary junction to the level of appearance of the pyramidal decussation.The ascending projections from the spinal cord to the RVM are very scarce. 
The descending input from the RVM to the spinal cord is prominent and targets almost all spinal segments, reaching mainly the dorsal horn (laminae I\u2013V) but also lamina X Millan, . As to tThe involvement of the RVM as a crucial relay station from the pain modulatory actions arising from the PAG is well established and the lateral reticular nucleus (LRt) and appears to play a specific role in pain modulation , dorsomedially, the Sp5C, laterally and the VLM, ventrally , allowing to investigate more accurately the spinal cord as well as the brainstem, also confirm the involvement of the DRt in pain processing , a mechanism of top-down inhibitory control of spinal dorsal horn neurons are interconnected Figure . The VLMThe RVM-VLM-DRt triad is in a privileged position to collect input from higher brain centers and convey this modulation to the spinal cord Figure . The RVM5 and A7 , the technical problems of studying small components may be surpassed as the triad should be considered as a gateway from the brain to the spinal cord. The RVM-VLM-DRt triad probably represents an example of the \u201cbrain pain connectome\u201d and should not be analyzed as to its individual components, IT was responsible for the overall organization of the manuscript and wrote the parts related to the rostroventromedial medulla and the ventrolateral medulla. IM was responsible for the items related to the role of the dorsal reticular nucleus. The overall manuscript was reviewed by both authors.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. The reviewer ES and handling Editor declared their shared affiliation, and the handling Editor states that the process nevertheless met the standards of a fair and objective review."} +{"text": "Tendons feature the crucial role to transmit the forces exerted by the muscles to the skeleton. Thus, an increase of the force generating capacity of a muscle needs to go in line with a corresponding modulation of the mechanical properties of the associated tendon to avoid potential harm to the integrity of the tendinous tissue. However, as summarized in the present narrative review, muscle and tendon differ with regard to both the time course of adaptation to mechanical loading as well as the responsiveness to certain types of mechanical stimulation. Plyometric loading, for example, seems to be a more potent stimulus for muscle compared to tendon adaptation. In growing athletes, the increased levels of circulating sex hormones might additionally augment an imbalanced development of muscle strength and tendon mechanical properties, which could potentially relate to the increasing incidence of tendon overload injuries that has been indicated for adolescence. In fact, increased tendon stress and strain due to a non-uniform musculotendinous development has been observed recently in adolescent volleyball athletes, a high-risk group for tendinopathy. These findings highlight the importance to deepen the current understanding of the interaction of loading and maturation and demonstrate the need for the development of preventive strategies. Therefore, this review concludes with an evidence-based concept for a specific loading program for increasing tendon stiffness, which could be implemented in the training regimen of young athletes at risk for tendinopathy. 
This program incorporates five sets of four contractions with an intensity of 85\u201390% of the isometric voluntary maximum and a movement/contraction duration that provides 3 s of high magnitude tendon strain. Tendinopathy is a clinical condition that is associated with pathological processes within the tendon and pain radial adaptation, (b) longitudinal adaptation and (c) adaptation of specific tension . The role of metabolic stress for muscle growth in response to exercise has been attributed to the associated systemic growth-related hormone and local myokine up-regulation and/or the increased fiber recruitment with muscle fatigue for load application does on the other hand not seem to be of particular relevance for the adaptive response .Aside from mechanical loading, maturation induces profound changes of the skeletal, neuromuscular and tendinous system in young athletes. The following section briefly reviews muscle and tendon development from child to adulthood, then focuses on somatic and hormonal changes that might challenge the balance of the development of muscle and tendon properties. The section closes with a synopsis of recent experimental evidence of an imbalanced muscle and tendon development in adolescent athletes.Whole body muscle mass increases progressively from childhood to adulthood, with a pronounced rise during adolescence, especially in boys maturation and (b) a predominantly plyometric loading profile can lead to a musculotendinous imbalance that increases the load and internal demand for the tendon. Mersmann et al. comparedCollectively, the evidence reviewed in this chapter strongly suggests that muscle and tendon show differences in the time course of adaptation to mechanical loading and in the types of mechanical stimulation that effectively elicits adaptive processes. Maturation acts as an additional stimulus on the muscle-tendon unit of young athletes and could further contribute to a development of an imbalance of muscle strength and tendon stiffness. Adolescence could be a critical phase in that context due to the associated increase of sex hormones. However, the interplay of mechanical loading and changes of the hormonal milieu on muscle and tendon plasticity in general, and with regard to adolescence in particular, is still largely unknown. Similarly, though recent evidence demonstrated that an imbalanced musculotendinous adaptation can occur during adolescence . Though tendinopathy has certainly a multifactorial etiology, the mechanical strain theory is currently considered the most probable injury mechanism and attributes the histological, molecular and functional changes of the affected tissue to mechanical overload the resultant increased mechanical demand is a candidate mechanism to induce overload, (b) the prevalence of soft-tissue overuse injuries is high at time-points in the training process and in sport disciplines that favor the development of a muscle-tendon imbalance from a mechanobiological point of view, and (c) maturation seems to be a potential risk factor for the development of both tendinopathy and musculotendinous imbalances in young athletes. There certainly is a need to provide more direct support for an association between imbalanced muscle and tendon adaptation and overuse. Yet, it appears that the interaction of maturation with mechanical loading could potentially increase the likelihood of the occurrence of such imbalances, which in turn might be related to the increasing risk of tendon overuse in adolescence. 
However, these assumptions need to be supported by further research.Following the hypothesis that an imbalanced development of muscle strength and tendon stiffness could increase the risk for tendon overuse injury, it might be promising to target the increase of tendon stiffness in groups at risk . The following chapter provides evidence-based suggestions for an effective tendon training and comments on open issues. The chapter concludes with a critical discussion of the current evidence on the effects of preventive interventions outlines the anticipated effects of an intervention-induced increase of tendon stiffness on athletic performance.in vivo (see section Mechanotransduction in Tendon) suggests that both contraction intensity and contraction duration need to exceed a certain threshold to provide an efficient training stimulus. The training intensity is considered to be optimal around 85\u201390% of iMVC and the contraction duration around 3 s the risk of tendon injury and (b) athletic performance. A recent systematic review on the effects of preventive interventions for tendinopathy concluded that evidence for their efficacy is only limited (Peters et al., Aside from the potentially beneficial effects for the health of young athletes, tendon stiffness and the interaction of muscle and tendon during movement are important contributors to movement performance. For example, increased tendon stiffness is associated with a lower electromechanical delay, a greater rate of force development and jump height (Bojsen-M\u00f8ller et al., Current scientific evidence strongly supports the idea that the development of muscle strength during a training process is not necessarily accompanied by an adequate modulation of tendon stiffness. The differences in the time course of adaptation and in the mechanical stimuli that trigger adaptive processes provide two mechanisms that can account for a dissociation of the muscular and tendinous development. Though the additional influence of maturation is still a heavily under-investigated topic, it is likely that an imbalanced development of muscle strength and tendon stiffness is a relevant issue for youth sports and it seems that the risk might even be increased compared to adults. Adolescence, with its associated somatic and endocrine processes, could be a critical phase in that regard. Due to the mechanical loading profile, musculotendinous imbalances especially concern athletes from jump disciplines and the high prevalence of tendinopathy in those sports as well as the increasing incidence during adolescence support the hypothesis that imbalances of muscle strength and tendon stiffness could have implications for the health of young athletes. The implementation of interventions targeting the improvement of tendon mechanical properties could be a promising approach to prevent such imbalances, promote athletic performance and reduce the risk of tendon injury. However, there is still a clear lack of information on the time course of changes of the musculotendinous system during premature development and the interaction of maturation and mechanical loading. The effects of changing sex hormone levels on tendon properties and plasticity is also widely unknown. Similarly, the association of musculotendinous imbalances with tendon overuse injury as well as the preventive value of interventions that promote the development of tendon mechanical properties has not been established thus far. 
The effects of recovery for tendon adaptation in general are largely unexplored and a future challenge with regard to the application of preventive tendon training in youth sports is the determination of age-specific dose-response relationships and the implementation in the training schedule in elite sports. The increasing prevalence of tendinopathy in athletic adolescents certainly calls for further research on these issues.All authors substantially contributed to the interpretation of the literature addressed in this review. FM drafted and finalized the manuscript. SB and AA made important intellectual contributions in revision of all sections of the manuscript. AA supervised the preparation of the manuscript. All authors approved the final version of the manuscript and agree to be accountable for the content of the work.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Toxoplasma gondii in the etiopathogenesis of schizophrenia is supported by epidemiological studies and animal models of infection. However, recent studies attempting to link Toxoplasma to schizophrenia have yielded mixed results. We performed a nested case-control study measured serological evidence of exposure to Toxoplasma gondii in a cohort of 2052 individuals. Within this cohort, a total of 1481 individuals had a psychiatric disorder and 571 of were controls without a psychiatric disorder. We found an increased odds of Toxoplasma exposure in individuals with a recent onset of psychosis . On the other hand, an increased odds of Toxoplasma exposure was not found in individuals with schizophrenia or other psychiatric disorder who did not have a recent onset of psychosis. By identifying the timing of evaluation as a variable, these findings resolve discrepancies in previous studies and suggest a temporal relationship between Toxoplasma exposure and disease onset.A possible role for Toxoplasma gondii has been previously associated with an increased risk of serious psychiatric disorders such as schizophrenia. However, this association has been found in some studies and not others. We examined whether the differences among previous studies might be explained by the timing of patient evaluation and testing. We found that individuals who were evaluated soon after the onset of psychosis had increased odds of exposure to Toxoplasma gondii as evidenced by the measurement of antibodies in their blood. However. we did not find an increased rate of exposure to Toxoplasma gondii in individuals who had a diagnosis of schizophrenia or bipolar disorder but who did not have recent onset psychosis. Our findings are consistent with Toxoplasma exposure occurring around the time of onset of psychiatric symptoms in individuals with schizophrenia. Our findings might lead to the evaluation of new methods for the early treatment of schizophrenia in some individuals.The protozoan parasite Toxoplasma gondii is an apicomplexan protozoan with a worldwide distribution. Felines serve as definitive hosts for Toxoplasma and can support the complete life cycle of the organism including sexual reproduction and the shedding of oocysts in the feces. Most other species of warm-blooded animals support the replication of portions of the Toxoplasma life cycle including asexual reproduction and the development of tissue cysts in multiple organs including the brain. 
Humans can become infected with Toxoplasma through the ingestion of oocysts shed from cats into the environment or by the consumption of tissue cysts in the meat of infected food animals such as pigs, sheep, and cows. Human fetuses can also become infected through vertical transmission from the mothers, although this mode of transmission is relatively rare compared to the other modes of transmission.As in the case of other intermediate hosts, initial exposure to Toxoplasma in humans can lead to the formation of tissue cysts in multiple organs including the brain. Exposure also leads to a vigorous immune response evident from the presence of specific antibodies directed at Toxoplasma proteins. Previously, tissue cysts were not thought to cause symptoms in immune competent hosts. However recently studies have documented that tissue cysts are engaged in active metabolism and interaction with the host . AccordiToxoplasma gondii. An association between Toxoplasma and schizophrenia was first made in 1953 [Schizophrenia is the psychiatric condition that has been most extensively studied in terms of association with in 1953 However recently a number of studies have also been reported which did not find a significant association between serological evidence of Toxoplasma and risk of schizophrenia \u201313.A metWe postulated that, in areas of low Toxoplasma prevalence, individuals with the recent onset of the symptoms of schizophrenia would be more likely to have evidence of exposure to Toxoplasma than individuals with established schizophrenia who are receiving antipsychotic therapy.The study population consisted or 2052 individuals with recent onset psychosis, schizophrenia, bipolar disorder, major depressive disorder, or non-psychiatric controls who were enrolled during the period January 1999 through May 2017 in the Stanley Research Program at Sheppard Pratt, Baltimore, Maryland, USA. This cohort, which is ongoing, has been recruited for the study of the association between infection, immunity, and psychiatric disorders. Detailed descriptions of the methods employed for the recruitment and analysis of the cohort have been as previously described [The inclusion criterion for recent onset psychosis was the onset of psychotic symptoms for the first time within the past 24 months defined as the presence of a positive psychotic symptom of at least moderate severity that lasted through the day for several days or occurred several times a week and was not limited to a few brief moments and which was not substance-induced. Participants meeting the criteria for a recent onset of psychosis could have a DSM-IV diagnosis from amoThe inclusion criterion for individuals with schizophrenia was a diagnosis of schizophrenia, schizophreniform disorder, or schizoaffective disorder. The inclusion criterion for individuals with bipolar disorder included a diagnosis of bipolar I disorder, bipolar II disorder, or bipolar disorder not otherwise specified. Those with major depressive disorder had either a single episode or recurrent episodes. Participants who met the criteria for recent onset of psychosis and another diagnosis were assigned to the recent onset group.The psychiatric participants were recruited from inpatient and day hospital programs of Sheppard Pratt and from affiliated psychiatric rehabilitation programs. 
The diagnosis of each psychiatric participant was established by the research team including a board-certified psychiatrist and based on the Structured Clinical Interview for DSM-IV Axis I Disorders and available medical records. The inclusion criterion for the non-psychiatric control individuals was the absence of a current or past psychiatric disorder as determined by screening with the DSM-IV Axis I Disorders, Non-patient Edition. Participants in all groups met the following additional criteria: age 18–65 (except the control participants who were aged 20–60); proficient in English; absence of any history of intravenous substance abuse; absence of intellectual disability by history; absence of HIV infection; absence of serious medical disorder that would affect cognitive functioning; absence of a primary diagnosis of alcohol or substance use disorder per DSM-IV criteria. The occurrence of psychosis not of recent onset was not an exclusion criterion for individuals in the bipolar disorder or major depression diagnostic groups. All participants were individually administered a brief cognitive battery, the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS), Form A, at the study visit. This battery measures a range of domains and yields a scaled Total Score with a nominal population mean of 100. Each participant had a blood sample obtained, generally at study enrollment. For 1875 (91.4%) of the study individuals the sample was obtained within 90 days of initial screening. Plasma was separated from the blood sample by centrifugation and stored at -70°C until testing. At the time of testing, the sample was thawed and tested for IgG class antibodies to Toxoplasma gondii using solid phase enzyme immunoassay as previously described. Assay reagents were obtained from IBL America, Minneapolis, Minn. A standard sample with a known amount of antibody was also analyzed in each assay run. For each antibody measurement, a sample was considered to be reactive if it generated a signal which was at least 0.8 times the value generated by the standard, as previously described. Univariate analyses were performed by means of analysis of variance for continuous variables and Pearson's chi square for categorical variables. The odds ratios associated with seropositivity and clinical diagnosis were calculated by the use of logistic regression models employing age, gender, race, maternal education (as a marker of socioeconomic status), and place of birth (United States or Canada vs other countries). These covariates were selected since they have been previously shown to be associated with the prevalence of antibodies to Toxoplasma and other infectious agents. Missing data were added by imputation. All analyses were performed with STATA Version 12, College Station, Texas. This research was approved by the Institutional Review Boards of the Sheppard Pratt Health System and the Johns Hopkins School of Medicine. All participants were at least 18 years of age and provided written informed consent. There was a total of 2052 individuals in the study population. These included 221 individuals with the recent onset of psychosis, 752 individuals with established schizophrenia, 444 individuals with bipolar disorder without a recent onset of psychosis, 64 individuals with major depressive disorder without a recent onset of psychosis, and 571 control individuals without a psychiatric disorder.
Of the 221 individuals with recent onset psychosis, 206 (93.2%) were hospitalized at the time of study recruitment, either in the inpatient service or day hospital. Of the 752 individuals with schizophrenia without a recent onset of psychosis, 141 (18.8%) were hospitalized at the time of recruitment, Of the 444 individuals with bipolar disorder without the recent onset of psychosis, 281 (63.3%) were hospitalized at the time of recruitment. A total of 59 (92.2%) of the 64 individuals with major depressive disorder without the recent onset of psychosis were hospitalized at the time of recruitment. The demographic and clinical characteristics of the population are depicted in 2 = 11.8, p < .017). This difference was further explored by means of logistic regression models employing age, gender, race, maternal education (as a measure of socioeconomic status) and birth outside of the United States or Canada. As depicted in As noted in Toxoplasma gondii as determined by antibody measurement. This increase was independent of demographic factors associated with Toxoplasma exposure such as age, place of birth and socioeconomic status. The odds ratio associated with Toxoplasma exposure in our population with recent onset psychosis was, 2.44 . This odds ratio is similar to that reported in some recent meta-analyses examining the association between Toxoplasma and recent onset psychosis and schizophrenia [We found that individuals with recent onset psychosis had a significantly increased rate of exposure to ophrenia , 23. It On the other hand, the level of Toxoplasma exposure in our population of individuals with established schizophrenia or bipolar disorder, while somewhat elevated, did not differ significantly from that of the control population. It is of note that the individuals with these disorders were derived from the same geographic area and tested by the same methods as those employed for the analysis of individuals with recent onset psychosis, suggesting that these differences are not related to socio-demographic factors.The reasons for finding an increased rate of Toxoplasma exposure in individuals with recent onset psychosis but not established schizophrenia is not known with certainty but may be related to changes in antibody levels over time. It had previously been thought that Toxoplasma seropositivity was lifelong. However this concept has been called into question on the basis of longitudinal and population based analyses , 25 leadOur finding that individuals with established schizophrenia or bipolar disorder did not have increased odds of having Toxoplasma exposure is consistent with several other studies performed in low prevalence populations where individuals were receiving medications for extended periods of time. Our findings thus serve to resolve some of the discrepancies in past studies based on differences in regard to the timing of illness onset and assessment for Toxoplasma exposure as well as the receipt of medications. Prospective longitudinal studies relating to the timing of exposure to Toxoplasma and the onset of psychiatric disorders are necessary to directly address the issue of the temporal relationship between Toxoplasma exposure and subsequent risk of psychiatric disorders. 
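As a concrete illustration of the analysis described in the methods, the sketch below derives seropositivity from the assay signal (reactive if at least 0.8 times the run standard) and then estimates a covariate-adjusted odds ratio by logistic regression. It is a minimal Python/statsmodels re-expression of the published STATA analysis, not the authors' code: the column names and the toy data frame are invented for the example, and race is omitted from the covariates for brevity.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def seropositive(signal, standard):
    """Reactive if the sample signal is at least 0.8 times the run standard."""
    return (signal >= 0.8 * standard).astype(int)

# Toy data frame standing in for the real cohort (all values are simulated).
rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "signal": rng.gamma(2.0, 0.3, n),
    "standard": 1.0,
    "recent_onset": rng.integers(0, 2, n),
    "age": rng.normal(35, 10, n),
    "female": rng.integers(0, 2, n),
    "maternal_education": rng.integers(8, 20, n),
    "born_outside_us_canada": rng.integers(0, 2, n),
})
df["toxo_igg"] = seropositive(df["signal"], df["standard"])

# Logistic regression of seropositivity on diagnostic group plus covariates.
model = smf.logit(
    "toxo_igg ~ recent_onset + age + female + maternal_education + born_outside_us_canada",
    data=df,
).fit(disp=0)

odds_ratios = np.exp(model.params)       # exponentiated coefficients are odds ratios
conf_int = np.exp(model.conf_int())      # 95% confidence intervals on the OR scale
print(odds_ratios["recent_onset"], conf_int.loc["recent_onset"].tolist())
```

In the study itself the model was fitted in STATA and missing covariate values were imputed; the sketch only shows how a reported adjusted odds ratio, such as the 2.44 for recent onset psychosis, is obtained as an exponentiated logistic regression coefficient.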
These studies should include measurements of additional class-specific and subclass specific measures of antibodies, such as measurements of IgM and IgA class antibodies and measurements of IgG subclasses, which were not available from all members of this study population.It is of note that the prevalence of Toxoplasma is decreasing in many areas of the world. In the case of our study population, the prevalence of exposure to Toxoplasma in the control population was 6.1%, a level consistent with recent studies of young adults living in the United States. The reasThe effect of this decrease in Toxoplasma exposure on human diseases relating to Toxoplasma is unclear. On the one hand, the rate of these diseases may decrease due to a lower level of organisms in the environment. On the other hand, a lower level of exposure in childhood may result in a larger number of individuals who are susceptible to primary infection in later life, resulting in an increased incidence of adult onset disorders. The long-term effects of these epidemiological changes on the role of Toxoplasma on human health and human disease are thus worthy of close examination.Unlike Toxoplasma infection in immune deficient individuals which is associated with the rapid replication of the tachyzoite form of the organism, Toxoplasma infection in immune competent hosts is associated with slowly replicating tissue cysts which are relatively resistant to currently available anti-Toxoplasma medications . Recentl"} +{"text": "Within the last years, and since the inclusion of the biofilm concept in Endodontics, our understanding of root canal infections and the periapical tissues have developed tremendously. Development of new technology and inclusion of novel clinical antimicrobial protocols nowadays rely in the fact that microbial biofilms are formed inside the root canal of teeth and thus these are main targets for elimination. However, main concepts underlying important biological aspects of biofilm formation in root canals remain unclear. In this presentation, a hypothesis is presented where the endodontic biofilm is considered as an extension of dental plaque. By means of this hypothesis, a basis for clarifying important aspects of formation, maturation and resistance of root canal biofilms are proposed."} +{"text": "Saccharomyces cerevisiae as factors crucial for the mother-bud separation, septin genes were later identified in almost all eukaryotes except higher plants. Initial studies on septins were limited to their functions in yeast, with a major focus on cell division. However, much information has been gathered about mammalian septins in the present century and septins are gaining importance as a distinct fourth component of the mammalian cytoskeleton. The presence of multiple septin genes with possibly redundant roles and the lack of potent and specific septin inhibitors have hindered functional characterization of the septin cytoskeleton. These difficulties have been effectively tackled by the widespread use of RNAi technologies and at least 10 out of the 13 mouse septin genes have been successfully knocked out, aiding in the identification of several general and isoform-specific physiological functions of septins. 
This Frontiers Research topic on the emerging functions of septins compiles contributions from the diverse field of septin research dealing with complex issues from yeast cell division to human cancer, from cellular morphogenesis to protein stability and from plant pathogens to human infections.Septins are a family of conserved cytoskeletal GTPases with unique heteropolymerization properties and diverse physiological functions . While an siRNA screen identified SEPT2 family members as mediators of calcium signaling in HeLa cells (Sharma et al., Our understanding on the canonical functions of septins in cell division is constantly being revisited. In addition, the foot print is spreading far and wide as more and more physiologically relevant functions emerge for the septin family proteins. Collectively, the articles in this research topic including original research, perspectives and reviews on diverse aspects of septin research comprehensively summarizes the advances in the field. The contributions also provide ideas of promising directions for future investigations.Both authors equally contributed to the drafting and revision of the editorial and approved it for publication.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "The objective of this study was to differentiate the attention patterns associated with attention deficit disorder with or without hyperactivity using continuous performance test (CPT). The diagnoses were based on the DSM-III, III-R, and IV criteria and of the 39 children who participated in the study, 14 had attention deficit disorder with hyperactivity (ADDH) and 11 had attention deficit disorder without hyperactivity (ADDWO), while 14 normal children served as a control group. Attention patterns were examined according to the performance of subjects on the CPT and parental scores on the ADHD Rating Scale, the Child Attention Profile, and the Conners Rating Scale. CPT performances were assessed before and after administration of 10 mg methylphenidate. We found as hypothesized that the CPT differentiated between the ADDH and ADDWO groups. However, contrary to our expectations, the ADDH children made more omission errors than the ADDWO children; they also showed more hyperactivity and impulsivity. The performance of both groups improved to an equal degree after the administration of methylphenidate. It is conluded that different subtypes of the attention deficit disorders are characterized by different attention profiles and that methylphenidate improves scores on test of continuous performance."} +{"text": "There is an error in the final sentence of the Results. The correct sentence is: Fig 3B shows that the heritability of education increases with increasing deprivation (ie decreasing SES).There are errors in the final two sentences of the fifth paragraph of the Discussion. The correct sentences are: On the other hand, the heritability of education showed significant interactions with SES, with increasing heritability at lower SES levels. 
Prior evidence has suggested that education has substantial genetic correlation with IQ and may be a suitable proxy phenotype for genetic analyses of cognitive performance [35]; thus our results may indirectly support earlier studies of the SES moderation of IQ heritability, though in the opposite direction of that reported in twin study data."} +{"text": "Complex facial injuries with soft tissue degloving and bony avulsion are very devastating to the patient. Partial degloving injuries are described, but hemifacial degloving with zygoma avulsion is rare. The author presents a case of post-traumatic degloving of the left upper lip, nose, part of the forehead, upper and lower eyelids and cheek with avulsion of the left zygoma. The management included immediate resuscitation and early surgery to reposition the skeletal as well as soft tissue avulsion. The wound was thoroughly washed and primary repositioning and fixation were done. Early one-stage surgery with meticulous debridement and alignment of the anatomical landmarks results in a very good aesthetic and functional outcome. The face is a very important part of the human body giving identity and a sense of confidence to a person. Any post-traumatic facial deformity causes not only functional problems but also psychosocial dysfunction. This article reports a case of post-traumatic degloving of the majority of the left side of the face with avulsion of the zygomatic bone. There are various reports suggesting better outcomes with early primary reconstruction with less risk of infection. A 56-year-old female presented with a history of a road traffic accident with resultant avulsion of the left side of her face. There was degloving and avulsion of the left hemi-face including the upper lip, nose, forehead skin and eyebrow, upper and lower eyelids and the entire left cheek. The zygomatic bone was also avulsed and displaced laterally. The globe was displaced inferiorly towards the maxillary sinus. The patient was immediately resuscitated with airway management, control of bleeding and maintenance of circulation. Other injuries were quickly ruled out and a CT scan with 3D reconstruction of the face was done. The CT scan showed avulsion of the zygoma with a lateral blowout fracture of the orbit. There was no brain injury. Vision was tested by finger counting, which was present at 2 to 3 feet. A decision was taken for immediate single-stage reconstruction of the bony as well as soft tissue avulsion. The patient was taken for surgery, and under anesthesia the wound was thoroughly washed with normal saline and diluted betadine solution. All contaminants and foreign bodies were removed. Careful debridement was done and crushed nonviable tissues were excised. The orbital cavity was then reconstructed. The globe was seen to be displaced towards the maxillary sinus. The avulsed zygoma still had soft tissue attachments. There was a piece of lateral orbital rim with the soft tissue flap which was repositioned, and the lateral orbital wall was reformed. The fractures were fixed with titanium miniplates. The eyeball was thus repositioned to its original position and the volume of the orbital cavity was restored and 3.
The mucosa and muscle were repaired with absorbable sutures and skin with nylon. The nose was reconstructed with repair of the mucosal lining followed by repair of the cartilage framework. The nasal ala was repositioned and the skin was repaired. The eyelids and forehead were repaired in layers . The posnecrosis . The pat6-8 Meticulous matching of the anatomical landmarks and repair in anatomical layers gives very good final results. Immediate soft tissue reconstruction results in less scarring and infection. Delayed repair results in oedema developing which obscures the anatomical landmarks and give inferior aesthetic results. There is higher risk of infection too in case of delayed repair. The complete avulsion with lateral displacement of the zygoma is a rare occurrence and it causes displacement of the globe inferiorly and outwards. Early repositioning of the bony fragments and rigid orbit reconstruction repositions the globe and helps maintain visual acuity.9-11Complex hemifacial avulsion injuries are challenging and difficult to treat. The most common causes are high velocity trauma and assault. There is not only a soft tissue degloving but also a bony avulsion in such cases. The problem is compounded by the presence of foreign bodies and contaminations including dirt and stone particles. Early single stage reconstruction provides excellent functional and aesthetic outcomes. Thorough washing and removal of foreign bodies and precise debridement help to prevent infection and flap loss.Surgical management of unusual complex problems is highly challenging as they do not follow any set protocols. Sticking to the basic tenets of reconstruction with matching of the anatomical landmarks and reconstructing in layers gives very satisfying outcomes even in the most ghastly injuries. Immediate single stage procedure with meticulous reconstruction is the key to excellent functional and aesthetic results.The authors declare no conflict of interest."} +{"text": "Their test battery was comprised of a series of, \u201ctasks based on established measures of avian cognitive performance: a motor task, color and shape discrimination, reversal learning, spatial memory and inhibitory control.\u201d rather than as a primary factor with secondary specific intelligences. A psychometric approach that has been employed to analyse human cognition (and many other forms of behavior and experience) is that of facet theory and crystalized (gc) general intelligence to reveal the structure of avian intelligence in the form of a mapping sentence. By using a mapping sentence to design research and non-parametric statistical analyses this allows the investigation of the structure of avian intelligence in a way that does not make unsubstantiated assumptions of the psychometric qualities of the data. Furthermore, facet theory based research produces cumulative and directly comparable results across studies employing a faceted design. Thus, the use of facet theory could potentially develop greater understanding of avian cognitive processes.The author confirms being the sole contributor of this work and approved it for publication.The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Avulsion fractures of the inferior pole of the patella and proximal tibial apophysis are independently rare injuries. 
They occur in children due to the relative weakness of the apophyseal cartilage compared to the ligaments and tendons. The combination of these two fractures, is exceedingly rare, with only a few previously described cases in the literature. Due to the infrequent presentation of this injury, careful examination and consideration of advanced imaging is important for diagnosis and preoperative planning. Here we present two cases of combined sleeve fractures of the inferior pole of the patella and tibial apophysis, with discussion of the pathophysiology, classification, identification and management of the injury. An intense muscle contraction may result in the rupture of a muscle or tendon, or an avulsion fracture. In contrast to adults, children are more likely to suffer from avulsion fractures due to the relative weakness of their apohyseal cartilage compared to connective tissues such as ligaments and tendons.Fractures of the lower pole of the patella are relatively rare. As these are often predominantly cartilaginous avulsions, they are often difficult to distinguish on plain radiographs.Avulsion of the proximal tibial apophysis is also a rare injury patellar dislocation developed pain and inability to bear weight in his right lower extremity following an awkward landing after jumping over a hurdle. On physical exam he was noted to have swelling and effusion of the affected knee with tenderness to palpation over the patella and tibial tubercle. Radiographs obtained in the emergency center demonstrated patella alta and a very small patella sleeve avulsion fracture . He was Intraoperative examination revealed an apophyseal avulsion of the medial 2/3 of the patella tendon from the tibial tubercle and avulsion of the lateral 1/3 of the tendon from the patella . Repair Postoperatively, the patient was placed in a knee immobilizer and with touch down weight bearing precautions and crutches for the initial four weeks following surgery. Follow-up six weeks after surgery demonstrated radiographic healing. Full range of motion had been achieved at his 3 months follow up.A 12-year-old male athlete with history of Osgood-Schlatter disease developed acute pain in his left knee while jumping during a basketball game. He was unable to ambulate following the injury and was originally seen at an outside hospital, where he was treated for a presumptive patellar dislocation. Two attempts at reduction under sedation were unsuccessful, and the patient was transferred to the authors\u2019 institution for further management. Examination at this time revealed patella alta and diffuse swelling. No neurovascular deficits were noted.Radiographs showed a likely inferior patellar sleeve fracture with a possible concomitant tibial tubercle fracture. MRI confirmed avulsion fractures of both the patellar and tibial insertions of the patellar tendon, as well as a quadriceps strain at the patellar insertion and tears of the medial and lateral patellar retinaculum \u20137. OperaIntraoperative findings revealed a 75% avulsion of the tendon from its attachment at the medial tibial tubercle with the remaining lateral quarter of the tendon intact distally but detached from the inferior pole of the patella. Additionally, the fat pad and anterior capsule had also avulsed from the inferior aspect of the patella. The injury extended into the medial and lateral retinaculum at the level of the inferior patella. 
Repair of the patellar tendon at the distal avulsion site was performed with screw and ligament staple through the apophysis, and reinforced with Krackow sutures tied over a bone bridge in the anterior tibia. The proximal avulsion fracture was repaired with #2 non-absorbable sutures running from the tendon through patellar bone tunnels, and tied over the bone bridge at the superior patella. The retinaculum was repaired with absorbable suture.Postoperatively the patient was placed in a locked straight leg brace for one week and 9. FThe occurrence of bifocal patellar tendon avulsion injuries in the absence of definitive radiographic signs was an especially important feature of these two cases. In Case 1, the diagnosis was not suspected from radiographs, and instead discovered intraoperatively. In Case 2, the diagnosis was suspected and then confirmed by MRI.Tibial tubercle fractures in adolescent patients are typically avulsion-type injuries, given the relative weakness of the open physis compared to the superior strength of the patellar tendon at that stage of development. The classification of these injuries was originally proposed by Watson-Jones in 1955 and described three types of fractures spanning the tubercle, physis and knee joint. Type I involved an avulsion of the tubercle distal to the physis, while Type II involved the physis but spared the knee joint, and Type III extended into the knee joint itself. This classification system was later expanded on by Ogden and colleagues in 1980 to further characterize the fracture patterns associated with each injury, and propose specific treatment approaches. With his case report of tubercle fractures associated with associated patellar ligament avulsions, Frankl et al. suggested a Type C classification to the modified Ogden system . Mosier Two traction apophysitis diseases postulated to be associated with avulsion-style fractures are Osgood-Schlatter Syndrome (OSS), at the tibial tubercle, and Sinding-Larsen Johansson (SLJ) Syndrome, at the inferior pole of the patella, respectively. Of the two, OSS is the more common and is due to repetitive traction on the secondary ossification center of the tibial tuberosity. It is most commonly seen in male athletes between 10 and 15 years of age, concurrent with the rapid growth seen in adolescents during this time. SLJ Syndrome is a rare but similar disease to OSS where the inferior pole of the patella at the site of the proximal attachment of the patellar tendon is affected. This condition is frequently seen in a younger cohort than the Osgood-Schlatter patients, most commonly at ages 10\u201312. Preceding pain is common for this kind of patients. Nevertheless, despite the fact of remote history of OSS in Case 1, neither of our patients had immediately preceding anterior knee pain before the described trauma.We report two cases of combined sleeve fractures of the inferior pole of the patella and tibial apophysis. In first case, preoperative images were confusing and the full complexity of this trauma were verified only during surgery. In the second case we used MRI examination preoperatively, which allowed us to fully characterize the origin of trauma pattern and to prepare proper option for its operative treatment. 
In our conviction patients of this age and with this type of trauma need extra attention during examination and preoperative planning, we suggest MRI study as clarifying imaging modality when radiographs are not conclusive.The authors declare that they have no conflict of interest."} +{"text": "Nutrients for any inconvenience caused.The integrity of several Western blot bands in Figures 3 and 4 has beenNutrients is a member of the committee on publication ethics (COPE) and takes the integrity of publications very seriously. In the interests of correcting the research literature, [erature, will be"} +{"text": "The field of plastic and reconstructive surgery has historically been the cradle for the conception and development of tissue fabrication strategies. The compelling need of tissue for challenging reconstructions has also been a fertile soil for the first practical implementation of tissue engineering paradigms , which iEBioMedicine describes the use of autologous engineered cartilage grafts for ear reconstruction in one fully documented clinical case and reports treatment of additional four patients with shorter follow-up times. Despite the nature of a preliminary, proof-of-principle study, the work represents an important milestone and offers the opportunity to some general considerations. The authors correctly embed their work in the stream of developments within the major areas required to engineer cartilage grafts, namely cells, signals and scaffolds and openly reports the need to adapt the complex surgical procedure to the specific patient conditions, to the extent that three different methods were applied for the five patients treated. In line with similar developments for nasal cartilage restoration , the autEBioMedicine opens several questions which require a bed-to-bench approach. For example, what is the relative contribution of the implanted vs resident cells in the formation of de novo cartilage over time? Could the use of cells be bypassed by delivery of suitable factors recruiting and activating endogenous progenitors? Which signals are required to maintain stability of the formed tissue? Last but not least, the work prompts addressing the fundamental issue of whether the engineered cartilage tissue could grow with the patients, especially since these are young children. Ultimately, in order to guarantee fully coordinated integration, how can an engineered graft be designed to not only replace a body part, but rather activate its development and growth? And towards this goal, should developmental processes be merely recapitulated or somewhat re-engineered, considering that boundary conditions are not the same of an embryo (Emphasis on surgical procedures does not mean neglecting the importance of underlying biological processes. Control of vascularization, management of inflammation, guided tissue regeneration as well as prevention of surrounding soft tissue contraction are some of the concepts exemplifying the critical influence of a surgical procedure on the associated healing events. These need to be more comprehensively understood in order to be more effectively regulated. In this perspective, the bench-to-bed study published in n embryo ? 
In thisThe authors declared no conflicts of interest."} +{"text": "Upon publication of the original article , it was As such, please see below the modified text of this paragraph, which provides a brief description of the procedures used:in silico analyses performed in this study were carried out using the microbiome data of the Tyrolean Iceman previously sequenced and released by Maixner et al., [21]. For details to the DNA extraction procedures, library preparation and sequencing please refer to the Supplementary Material of this previous study [21]. In brief, DNA extraction using 250 mg of biological samples have been performed with all the stringent laboratory procedures needed for investigations of ancient DNA in the ancient DNA laboratory of the EURAC - Institute for Mummies and the Iceman, Bolzano, Italy [26-27]. Library preparation and Illumina sequencing were performed at the Institute of Clinical Molecular Biology, Kiel University .The"} +{"text": "The obturator artery is a branch of the internal iliac artery, although there are reports documenting variations, with origin from neighboring vessels such as the common iliac and external iliac arteries or from any branch of the internal iliac artery. It normally runs anteroinferiorly along the lateral wall of the pelvis to the upper part of the obturator foramen where it exits the pelvis by passing through said foramen. Along its course, the artery is accompanied by the obturator nerve and one obturator vein. It supplies the muscles of the medial compartment of the thigh and anastomoses with branches of the femoral artery on the hip joint. We report a rare arterial variation in a Brazilian cadaver in which the obturator artery arose from the external iliac artery, passing beyond the external iliac vein toward the obturator foramen, and was accompanied by two obturator veins with distinct paths. We also discuss its clinical significance. The obturator artery (OA) is usually a branch of the anterior division of the internal iliac artery (IIA), running medially to the lateral wall of the pelvis to reach the obturator canal. This artery emits iliac branches (to the iliac fossa), a vesicle branch (to the urinary bladder), and a pubic branch, which anastomoses with the inferior epigastric artery (IEA) and the contralateral pubic branch. As it exits the pelvis, the OA divides into anterior and posterior branches.,The OV is formed in the proximal portion of the adductor region, and runs to the pelvis through the obturator foramen (OBF) in the obturator canal before running posteriorly and superiorly on the lateral wall of the pelvis, inferior to the OA, and crossing the ureter and the IIA to join the internal iliac vein (IIV).,,-These vessels are often involved in anatomic variations affecting their origins and paths and so careful studies are needed to ensure success before vascular and orthopedic procedures. 
It is also crucial to study these vessels in view of the many reports in the literature describing pelvic fractures associated with hemorrhage and iatrogenic lesions, with special regard to the \u201ccorona mortis\u201d arterial variation.The aim of this study is to present a case of arterial variation in which the OA arose from the external iliac artery (EIA), passing beyond the external iliac vein toward the OBF, and was accompanied by two OVs with distinct paths.During dissection of the right pelvic region of a male cadaver fixed with a 10% formalin solution, we observed that the OA originated from the EIA and passed laterally to the EIV, before crossing the pelvic brim and descending anteriorly into the OBF . MediallThe origins of the IEA and the deep circumflex iliac artery were as normal. The ON entered the OBF below the artery . There w,,,During embryonic life some arterial channels appear (or enlarge) and disappear (or retract) and the result of this process will dictate the vascular territory of each artery.,,According to the literature, the OA can arise from the common iliac artery, EIA, IEA, inferior gluteal artery, internal pudendal artery (or a common trunk between both arteries), iliolumbar artery, or even from the superior gluteal artery.Sa\u00f1udo et al.The OV can sometimes connect with the inferior epigastric vein or the femoral veinThe variable origins of the OA can also influence its relationship with the ON and OV. For instance, when it arises from the EIA, the artery would be placed superiorly in relation to such structures, which would affect the surgeon during delicate procedures.-,The corona mortis variation is a vascular connection between the obturator and the inferior epigastric vessels (or directly to the EIA) in which either the artery or vein (and sometimes both) forms an anastomosis near the superior pubic ramus. It is also a connection between the EIA and the OA.Obturator bypass is a procedure that was initially described with the premise of treating mycotic aneurysms, but has since become popular and has been widely used to treat any form of injury to the femoral and iliac systems.The OA variant reported here is very important surgically because it could cause dangerous complications during femoral ring procedures or laparoscopic interventions, due to its unusual trajectory and the presence of a supernumerary OV."} +{"text": "Extremely low frequency electromagnetic fields (ELF EMF) are commonly present in daily life all over the world. Moreover, EMF are used in the physiotherapy of many diseases because of their beneficial effects. There is widespread public concern that EMF may have potential consequences for human health. Although experimental animal studies indicate that EMF may influence secretion of some hormones, the data on the effects of EMF on human endocrine system are scarce. Most of the results concentrate on influence of EMF on secretion of melatonin. In this review, the data on the influence of EMF on human endocrine system are briefly presented and discussed."} +{"text": "The unique qualities of women can make them bearers of solutions towards achieving sustainability and dealing with the dangers attributed to climate change. The attitudinal study utilized a questionnaire instrument to obtain perception of female construction professionals. By using a well-structured questionnaire, data was obtained on women participating in green jobs in the construction Industry. 
Descriptive statistics were performed on the collected data and are presented as tables and mean scores (MS). In addition, inferential statistics in the form of a categorical regression were performed on the data to determine the level of influence (beta factor) that the identified barriers had on the level of participation in green jobs. When analyzed, the survey data yield the barriers and the socio-economic benefits which can guide policies and actions on attracting, retaining and exploring the capabilities of women in green jobs. Specifications table. Value of the data: • The questionnaire instrument is compact and can be adapted or modified for studies in other climes, thereby comparing the results from under-developed, developing and developed countries. • The data provided the descriptive statistics for the selected sample for measuring the level of participation of women compared to men in green jobs in the construction industry. • The data, when completely analyzed, can provide insight into the obstacles hindering the career advancement of women in green jobs in the construction industry, while the socio-economic benefits of engaging women in green jobs, if well harnessed, can help the environment and the construction industry. • An understanding of the barriers and socio-economic benefits can guide policy makers and construction industry stakeholders on ways to tackle the shortage of women's participation in the construction industry. • The data can increase the awareness of women and the girl child on the distinct features of green jobs in contrast to other jobs available in the construction industry in general. The data instrument of a well-structured questionnaire was administered to one hundred and twenty (120) women construction professionals in Lagos State, Nigeria. The demographic characteristics of the female construction professionals are shown in . The data collected build on previous research conducted on women's participation in the construction industry and the areas of green jobs that appear in the construction industry. Details on other researched works on the subject can be found in"} +{"text": "In the original publication, the last paragraph appeared incorrectly. Incorrect version: The implications of this work are important for SSA because of the small number of RCTs performed in this part of the world [38], and the shortage of research resources. For this reason, waste must be addressed. In accordance previous works , our results highlight that waste in RCTs in SSA could be avoided with simple and inexpensive methodological adjustments as well as a better reporting of interventions. Investigators should be informed of the feasibility of these adjustments and reporting guidelines when planning their trials and drafting their reports to limit the number of flaws in trial methods and poor descriptions of interventions at an early stage . The Enhancing the QUAlity and Transparency Of health Research (EQUATOR) network is an international initiative created to improve the reliability and value of published health research literature by promoting transparent and accurate reporting and wider use of robust reporting guidelines (http://www.equator-network.org/). 
In our study, articles journals recommending reporting guidelines in their instructions to authors have a better description of interventions than those that did not recommend any reporting guidelines.Correct version:The implications of this work are important for SSA because of the small number of RCTs performed in this part of the world [38], and the shortage of research resources. For this reason, waste must be addressed. In accordance previous works , our results highlight that waste in RCTs in SSA could be avoided with simple and inexpensive methodological adjustments as well as a better reporting of interventions. Investigators should be informed of the feasibility of these adjustments and reporting guidelines when planning their trials and drafting their reports to limit the number of flaws in trial methods and poor descriptions of interventions at an early stage . The Enhancing the QUAlity and Transparency Of health Research (EQUATOR) network is an international initiative created to improve the reliability and value of published health research literature by promoting transparent and accurate reporting and wider use of robust reporting guidelines (http://www.equator-network.org/)."} +{"text": "In the search for new neuroprotective compounds, interest has turned to marijuana derivatives, since in several 2-mediated control of neuroinflammation that could liberate cannabinoids from the slavery of their central side effects.Despite the emerging evidence regarding pharmacological activities of cannabinoids, their effective introduction into clinical therapy still remains controversial and strongly limited by their unavoidable psychotropicity. Since the psychotropic effect of cannabinoids is generally linked to the activation of the CB1 receptor on neurons, the aim of our review is to clarify the function of the two cannabinoid receptors on glial cells and the differential role played by them, highlighting the emerging evidence of a CB"} +{"text": "Most patients with Tourette syndrome report characteristic sensory experiences (premonitory urges) associated with the expression of tic symptoms. Despite the central role of these experiences to the clinical phenomenology of Tourette syndrome, little is known about their underlying brain processes. In the present article we present the results of a systematic literature review of the published studies addressing the pathophysiological mechanisms of premonitory urges. We identified some preliminary evidence for specific alterations in sensorimotor processing at both cortical and subcortical levels. A better insight into the brain correlates of premonitory urges could lead to the identification of new targets to treat the sensory initiators of tics in patients with Tourette syndrome."} +{"text": "Lofa County has one of the highest cumulative incidences of Ebola virus disease (Ebola) in Liberia. Recent situation reports from the Liberian Ministry of Health and Social Welfare (MoHSW) have indicated a decrease in new cases of Ebola in Lofa County . 
In OctoLiberia is in the midst of the largest outbreak of Ebola to date, with approximately 6,500 reported cases as of October 31, 2014 aggregate data for newly reported suspected, probable, and confirmed cases of Ebola; 2) case-based data for persons admitted to the Foya ETU operated by MSF; and 3) test results for oral swab specimens collected from persons who died in the community and whose deaths were investigated for possible Ebola.Aggregate data for newly reported cases were obtained from the county health office and publicly available national situation reports published by Liberian MoHSW. These data include new cases reported daily by local health offices in the six districts of Lofa County. The weekly number of new cases increased from 12 in the week ending June 14 to 153 in the week ending August 16, and then decreased, reaching four new reported cases in the week ending November 1 .MSF provided deidentified case-based data of persons admitted to the Foya ETU. Final epidemiologic classification of cases was consistent with case definitions described by the World Health Organization . LaboratCase-based data for persons admitted to the Foya ETU describe a trend similar Oral swab specimens were collected by outreach teams from MSF and district health offices from persons who died in communities with symptoms suggestive of Ebola. Specimens were analyzed by the EMLab field laboratories using RT-PCR, which has similar performance on oral swab specimens and blood specimens . Test reThe trends in numbers of newly reported cases, persons admitted to the Foya ETU, and positivity rate among community decedents evaluated for Ebola virus during June 8\u2013November 1, 2014, are consistent with a substantial decrease in transmission of Ebola virus in Lofa County beginning as early as August 17, 2014. The aggregate data from the Lofa County Health Office and case-based data from the Foya ETU describe a peak of reported cases and new admissions respectively in the week ending August 16 followed by a decline in subsequent weeks. The high percentage of positive specimens collected from community decedents during June 8\u2013August 16 suggests that Ebola was causing deaths in communities, whereas the lower percentage during August 24\u2013November 1 suggests that other endemic diseases, such as malaria or typhoid, had become the main causes of mortality as transmission of Ebola virus decreased. The findings from this analysis might indicate the first example in Liberia of a successful strategy to reduce the transmission of Ebola virus in a county with high cumulative incidence.Transparency in activities and engagement with the community were central to the response strategy in Lofa County. For example, the Foya ETU was designed without high, opaque walls to minimize fear of the facility. Family members were permitted to visit their loved ones in the ETU, either by talking with them across a fence or inside the ward while wearing full personal protective equipment. Decedents in the ETU were buried in the presence of family members at designated burial sites in graves with clear identification. In communities, rapid transport of ill persons to the ETU and safe burial of persons suspected of dying from Ebola demonstrated to the local population that partners could quickly respond to requests for help. During safe burials of community decedents, family members were invited to hold grieving ceremonies according to local customs in memory of the deceased. 
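As an illustration of the kind of tabulation behind the weekly figures quoted above (newly reported cases per week and the percentage of Ebola-positive oral swabs among community decedents), the minimal sketch below shows one way such summaries could be derived from line-list data. It is not the MoHSW/MSF/EMLab pipeline; the column names, the Saturday week-ending convention, and the "positive" result label are illustrative assumptions.

```python
# Minimal sketch, not the actual surveillance pipeline: weekly case counts and
# weekly RT-PCR positivity among community decedents from line-list data.
# Column names and the "W-SAT" (week ending Saturday) convention are assumptions.
import pandas as pd

def weekly_case_counts(cases: pd.DataFrame) -> pd.Series:
    """Number of newly reported cases per epidemiological week."""
    week = pd.to_datetime(cases["report_date"]).dt.to_period("W-SAT")
    return cases.groupby(week).size()

def weekly_swab_positivity(swabs: pd.DataFrame) -> pd.DataFrame:
    """Tested, positive, and percent-positive oral swabs per week."""
    week = pd.to_datetime(swabs["swab_date"]).dt.to_period("W-SAT")
    grouped = swabs.groupby(week)["rt_pcr_result"]
    summary = grouped.agg(
        tested="size",
        positive=lambda s: (s == "positive").sum(),
    )
    summary["pct_positive"] = 100.0 * summary["positive"] / summary["tested"]
    return summary
```

A falling pct_positive series alongside declining weekly counts is the pattern this report interprets as reduced community transmission.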
Engagement with the local population might have built confidence in response activities and contributed to the success of the strategy.Data on final classifications of patients admitted to the Foya ETU and test results from community decedents indicate ongoing engagement from the community. The high percentage of non-Ebola cases among persons admitted to the ETU during the peak of admissions suggests that the community and health workers were aware that persons with symptoms suggestive of Ebola should be evaluated at the ETU. The high percentage of non-Ebola cases among new admissions in recent weeks and the increasing number of weekly specimens collected from community decedents suggests that trust in response activities remains strong in the local population despite the recent decrease in cases.Although transmission in Lofa County might have decreased, situation reports from MSF, EMLab, and the World Health Organization have indicated an increase in cases during September and October in Macenta ,9, a heaRecent reports indicate that transmission of Ebola virus in Liberia is ongoing . The finWhat is already known on this topic?Lofa County in Liberia has one of the highest numbers of reported cases of Ebola virus disease (Ebola) in West Africa. Government health offices, nongovernmental organizations, and technical agencies coordinated response activities to reduce transmission of Ebola in Lofa County. The intensity and thoroughness of activities increased in response to the resurgence of Ebola in early June.What is added by this report?Trends in new reported cases, admissions to the dedicated Ebola treatment unit in the town of Foya, and test results of community decedents evaluated for Ebola virus suggest transmission of Ebola virus decreased in Lofa County as early as August 17, 2014, following rapid scale-up of response activities after a resurgence of Ebola in early June.What are the implications for public health practice?A comprehensive Ebola response strategy developed with participation from the local community and rapidly scaled up following resurgence of Ebola might have reduced the spread of Ebola virus in Lofa County. The strategy implemented in Lofa County might serve as a model for reducing transmission of Ebola virus in other affected areas."} +{"text": "The ANTARES radiation hydrodynamics code is capable of simulating the solar granulation in detail unequaled by direct observation. We introduce a state-of-the-art numerical tool to the solar physics community and demonstrate its applicability to model the solar granulation. The code is based on the weighted essentially non-oscillatory finite volume method and by its implementation of local mesh refinement is also capable of simulating turbulent fluids. While the ANTARES code already provides promising insights into small-scale dynamical processes occurring in the quiet-Sun photosphere, it will soon be capable of modeling the latter in the scope of radiation magnetohydrodynamics. In this first preliminary study we focus on the vertical photospheric stratification by examining a 3-D model photosphere with an evolution time much larger than the dynamical timescales of the solar granulation and of particular large horizontal extent corresponding to The structure and dynamics of the solar photosphere is crucially determined by the mass and energy transport processes taking place across the solar surface. 
For the study of the solar convection and its phenomenological manifestation on the visible surface of the Sun, the solar granulation, numerical simulations not only complement observational data but also serve as a means of their own by providing complete and almost continuous information in 3-D of the physical state and the dynamics which otherwise often has to be drawn indirectly from observations. The physics of the layers surrounding the solar surface is rather involved; below the surface the opacity is sufficiently large so that the local adiabatic gradient is exceeded by the temperature gradient needed for radiative-diffusive energy transport, turning the fluid convectively unstable. This process is primarily described by mixing length theory code ANTARES , applied to the study of the near surface convection and the photosphere of the Sun. In spite of the broad range of applicability, ranging from the modeling of photospheric turbulence finite volume schemes. The code is based on a high-resolution finite-volume method that can treat turbulence by adopting local mesh refinement. Essentially, finite volume schemes are based on interpolation of discrete data using polynomials; fixed stencil interpolations work well for sufficiently smooth problems but introduce oscillations near discontinuities, whose amplitudes do not decay in the course of mesh refinement. Whereas traditional remedies such as the introduction of an artificial viscosity or the application of limiters to discard such oscillations have obvious drawbacks, ENO two-point correlations for which one quantity is fixed at the solar surface while the second one is varied with height.The one-point correlation between the vertical velocity- and temperature fluctuations A high correlation of temperature fluctuations at the solar surface with temperature fluctuations at varying heights, The gas pressure is higher in granular upflows throughout subphotospheric and photospheric regions resulting in an entirely positive correlation From the images of the vertical velocity in Fig.\u00a0Finally, we discuss the two-point correlation of the intensity fluctuations at the surface with the mean opacity fluctuations at varying heights, mentclasspt{minimamentclasspt{minimaAltogether these data provide the necessary information for outlining the overall vertical structure of the photosphere (see also Table\u00a0n et\u00a0al. , we can d Nelson or the 2d Nelson , we findWe introduce the radiation hydrodynamics code ANTARES that we applied to the study of the solar granulation and that has not yet received much attention in the Solar Physics community. We used correlation analysis to examine the vertical stratification of the photosphere and determined height levels subdividing the photosphere in layers that exhibit characteristic dynamics of their own: The subphotospheric layers up to a height of The WENO scheme implemented in ANTARES avoiding oscillatory solutions at discontinuities which otherwise occur due to the interpolation of discrete data in finite volume methods is particularly useful for the ongoing study of shocks which are observed in the intergranulum of our model photospheres and for the study of acoustic oscillations in the scope of RHD. 
We intend to further investigate photospheric wave excitation and propagation by the application of wavelet and wavepacket analysis and to quantify the associated energy transfer through the photosphere.As this RHD-code is currently heavily under development with an imminent RMHD upgrade to be released, we intend to soon present further model results and focus on photospheric, small-scale, intergranular rotating plasma jets that have been detected and studied in our RHD simulations (Lemmerer et\u00a0al."} +{"text": "The Back to Sleep Campaign was initiated in 1994 to implement the American Academy of Pediatrics' (AAP) recommendation that infants be placed in the nonprone sleeping position to reduce the risk of the Sudden Infant Death Syndrome (SIDS). This paper offers a challenge to the Back to Sleep Campaign (BTSC) from two perspectives: (1) the questionable validity of SIDS mortality and risk statistics, and (2) the BTSC as human experimentation rather than as confirmed preventive therapy.The principal argument that initiated the BTSC and that continues to justify its existence is the observed parallel declines in the number of infants placed in the prone sleeping position and the number of reported SIDS deaths. We are compelled to challenge both the implied causal relationship between these observations and the SIDS mortality statistics themselves."} +{"text": "There are errors in the Results and Conclusions sections of the Abstract.The last sentence of the Results section of the Abstract should read: In nerves, the median time until demonstrable histological changes was 24 hours.The last sentence of the Conclusions section of the Abstract should read: Whereas pancreas and liver showed the first damages after 1\u20132 hours, this took 24 hours in nerves and 7 days in blood vessels."} +{"text": "Cognitive impairment in multiple sclerosis is an increasingly recognized entity. This article reviews the cognitive impairment of multiple sclerosis, its prevalence, its relationship to different types of multiple sclerosis, and its contribution to long-term functional prognosis. The discussion also focuses on the key elements of cognitive dysfunction in multiple sclerosis which distinguish it from other forms of cognitive impairment. Therapeutic interventions potentially effective for the cognitive impairment of multiple sclerosis are reviewed including the effects of disease modifying therapies and the use of physical and cognitive interventions."} +{"text": "Equus hemionus) in order to assess the suitability of several skeletal elements to reconstruct the life history strategy of the species. Bone tissue types, vascular canal orientation and BGMs have been analyzed in 35 cross-sections of femur, tibia and metapodial bones of 9 individuals of different sexes, ages and habitats. Our results show that the number of BGMs recorded by the different limb bones varies within the same specimen. Our study supports that the femur is the most reliable bone for skeletochronology, as already suggested. Our findings also challenge traditional beliefs with regard to the meaning of deposition of the external fundamental system (EFS). In the Asiatic wild ass, this bone tissue is deposited some time after skeletal maturity and, in the case of the femora, coinciding with the reproductive maturity of the species. 
The results obtained from this research are not only relevant for future studies in fossil Equus, but could also contribute to improve the conservation strategies of threatened equid species.The study of bone growth marks (BGMs) and other histological traits of bone tissue provides insights into the life history of present and past organisms. Important life history traits like longevity or age at maturity, which could be inferred from the analysis of these features, form the basis for estimations of demographic parameters that are essential in ecological and evolutionary studies of vertebrates. Here, we study the intraskeletal histological variability in an ontogenetic series of Asiatic wild ass ( The study of bone growth marks (BGMs) is nowadays the focus of many investigations due to its potential to reconstruct many aspects of the life history of present and past vertebrates . These hFrom dinosaurs to mammals, the annual periodicity of the CGMs is the basis for inferences of life history strategies in many groups of fossil organisms e.g., . The numThe histological analysis of bones for this kind of research in mammals is still little explored in comparison with other vertebrate groups . HoweverEquus hemionus Pallas, 1775). With this study, we aim to find out what life history information can be inferred from the histological study of equids and to try to determine which is the best skeletal element to develop skeletochronological studies in this mammal. The kulan or Asiatic wild ass, a mammal endemic to the Gobi desert, is one of the eight extant species of the family Equidae between different limb bones of the same individual in the Asiatic wild ass to estimate its bone perimeter at the time of death. The perimeter of adult individuals was not determined and only the length of the BGMs identified within the EFS is shown. Because it is generally considered that the presence of EFS indicates the cessation of radial growth in long bones has also been noted in some of the limb bones studied, regardless of the orientation of the cutting plane .All bones of The histology of kulan\u2019s femora was previously described in Tibial cortices consist of laminar bone and remoBone tissue and vascular arrangement is very similar in metatarsi and metacarpi. In both skeletal elements, the bone cortex is mainly composed of a FLC with primary osteons (POs) oriented in circular rows . The vasE. hemionus) in mammals present a variable number of LAGs. As it is shown in Two BGMs are identified in the tibia and the metatarsus of the juvenile individual (IPS83155) while the femur and the metacarpus present only one , Fig. 5.Wild adult individuals (IPS83876 and IPS83877) also present differences in the number of BGMs between limb bones , Fig. 6.Based on the ontogenetic time schedule obtained from the study of the BGMs, we represented the growth curve for the different bones of each specimen \u20137D. In tIn adult individuals, the growth curves, as well as the plots of growth rate estimations, indicate a change in the pace of growth during ontogeny. Equus hemionus for the first time. Previous studies have addressed this issue in isolated bones of fossil vertebrate species . Our results show that these bones record a similar total number of BGMs as the femur than to skeletal maturity. 
Skeletal maturity, however, is recorded in growth curves as a significant drop in periosteal growth rate."} +{"text": "Alterations in consciousness are central to epileptic manifestations, and involve changes in both the level of awareness and subjective content of consciousness. Generalised seizures are characterised by minimal responsiveness and subjective experience whereas simple and complex partial seizures demonstrate more selective disturbances. Despite variations in ictal origin, behaviour and electrophysiology, the individual seizure types share common neuroanatomical foundations generating impaired consciousness. This article provides a description of the phenomenology of ictal consciousness and reviews the underlying shared neural network, dubbed the 'consciousness system', which overlaps with the 'default mode' network. In addition, clinical and experimental models for the study of the brain correlates of ictal alterations of consciousness are discussed. It is argued that further investigation into both human and animal models will permit greater understanding of brain mechanisms and associated behavioural consequences, possibly leading to the development of new targeted treatments."} +{"text": "For data analysis, the systematic text condensation method was used. Results: The study participants described two opposing positions regarding the development of community pharmacies in the future. Reform supporters emphasized increased professional independence and more healthcare-oriented operation of community pharmacies. Reform opponents argued against these ideas as community pharmacists do not have sufficient practical experience and finances to ensure sustainable development of the community pharmacy sector in Estonia. Conclusion: Based on the current perception of all respondents, the future operation of the community pharmacy sector in Estonia is unclear and there is urgent need for implementation criteria for the new regulations.Objectives: From 2020, the ownership of community pharmacies in Estonia will be limited to the pharmacy profession, and the vertical integration of wholesale companies and community pharmacies will not be allowed. The aim of this study was to evaluate the perception of different stakeholders in primary healthcare toward the new regulations of the community pharmacy sector in Estonia. Methods: A qualitative electronic survey was distributed to the main stakeholders in primary healthcare and higher education institutions providing pharmacy education ( Due to the relevant role that pharmacists play in the delivery of healthcare, community pharmacies in the majority of cases are highly regulated in most European countries . The folThere are several examples of countries in Europe where the ownership of community pharmacies is limited to the pharmacy profession ,4. On thEstonia has been a country with a liberal pharmaceutical policy for more than 20 years . The phaPrivatization of the community pharmacy sector in Estonia began immediately after the regaining of independence in 1991. The opening, operation and management of community pharmacies are strictly regulated by the Medicinal Products Act. Since 1996, however, the ownership of community pharmacies has no longer been limited to the pharmacy profession. The reasoning behind liberalization was mostly connected to economical needs and less connected to improved patient care.Vertical and horizontal integration of community pharmacies started to emerge in the second half of the 1990s. 
The liberal system has led to a rapid growth in the number of community pharmacies, from about 250 in 1993 to 476 in 2015. Currently, approximately 90% of community pharmacies (the majority operating in larger towns) are joined through ownership or partner status to chains that are mostly connected to wholesale companies. This is one of the reasons why the vast majority of pharmacies buy most medicinal products from certain wholesalers. In the current situation, competition in the pharmaceutical wholesale and retail market is limited and new competitors find it difficult to enter the market. Demographic and geographic restrictions on the opening of new entities existed between 2006 and 2013, but did not fulfill their purpose of achieving a more even distribution of pharmacies in rural areas. On the contrary, since 2006 the number of community pharmacies has decreased by 5% in towns, and by 12% in the countryside. In December 2013, the State Court repealed the establishment criteria for community pharmacies. The aim of this qualitative electronic survey was to evaluate perceptions of the new regulations of the community pharmacy sector among different stakeholders in, and connected to, primary healthcare in Estonia. A qualitative electronic survey using the web platform Google Sheets was used for data collection. The survey was conducted in May 2015 and forwarded to 40 different parties of primary healthcare in Estonia: governmental institutions, professional and patient organizations, representatives of community pharmacies (owners-pharmacists and owners-chains) and wholesale companies of medicinal products; and universities providing pharmacy education at the Bachelor's and Master's level. The present research conforms to the legal and ethical standards of Estonia. Separate approval from the ethics committee was not required for this type of study. Publicly available information about discussions and positions of different stakeholders before and after the changes in pharmaceutical legislation was used for the development of the survey instrument. The survey was planned and the questions were self-designed by a panel of representatives from the University of Tartu, the Estonian State Agency of Medicines, and the Estonian Pharmacy Association. (1) How could the transition of pharmacy ownership be organized to satisfy all parties involved? Should the government provide financial support to pharmacists who want to open or buy a pharmacy? (2) What impact could the prohibition of vertical integration have on the pharmacy sector? (3) Will the new regulation increase or decrease competition in the community pharmacy sector? Will new companies enter the pharmaceutical wholesale market after the prohibition of vertical integration? (4) Will the new regulation increase or decrease the number of community pharmacies? How will the situation change for community pharmacies in rural areas? (5) Will the new regulation change the quality of community pharmacy services? 
Should community pharmacy services be classified as healthcare services?(6)Will the pricing of medicines and wages for community pharmacy professional staff change with the new regulation?(7)Will the professional roles and responsibilities of a pharmacist and an assistant pharmacist change with the new regulation?(8)Should pharmacy education be updated according to the new regulation?The survey instrument consisted of the following open-ended questions and respondents were asked to justify their responses:n = 5) of representatives from a governmental institution, a professional organization, a wholesale company, and practicing pharmacists.For content and face validity, the survey instrument was evaluated by a small sample total impression\u2014researcher reads the entire description or all results to get a general understanding about the topic;(2)identifying and sorting meaning units\u2014the researcher identifies and organizes the data by meaning units that are related to the study question;(3)condensation\u2014the researcher examines meaning units one by one to get a detailed understanding of the content of every unit; the data is decontextualized;(4)synthesizing\u2014the researcher condenses information received from meaning units into a consistent statement/results, and the data is put back into context .The method includes four steps:Described structure was used to analyze all questions and each question was analyzed separately. If applicable, meaning units were divided and condensed by positive/negative/neutral perceptions of the respondents.n = 5), community pharmacies owned by pharmacists (n = 4) and chains (n = 6), and from a wholesale company of medicines (n = 1) .All respondents agreed on the need to classify community pharmacy services as healthcare services in the future: \u201cCommunity pharmacy services could reduce or divide the work load of family physicians and nurses, and should therefore clearly be a part of the primary care services in the future.\u201dRespondents emphasized the strong need to increase collaboration between physicians and pharmacists. Community pharmacies should be re-designed to ensure more private and patient-centered communication, and the government should be involved in the development of extended services at community pharmacy. \u201cThere should be a list of traditional and extended community pharmacy services. This document could serve as a basis for future negotiations with the Estonian Health Insurance Fund about the remuneration of extended community pharmacy services.\u201dAll respondents agreed that there is an urgent need for information about the transition conditions of community pharmacy ownership from pharmacy chains to pharmacists. They described the increased role of the government in specifying the structure of the community pharmacy sector , and in making it easier for community pharmacists to receive bank loans. In addition, a longer and more gradual transition period (between 7\u201310 years) was suggested as it would be impossible to complete the planned reforms in less than five years. The impact of the new regulations on the community pharmacy sector in Estonia was mostly described by positive or negative scenarios. All respondents agreed that the number of community pharmacies could decrease in towns and in rural areas\u2014pharmacists do not have the finances and interest to buy and operate non-profitable entities. 
While the smaller number of pharmacies in towns could lead to the opening of larger pharmacies and encourage the development of extended services, it could cause problems with access to medicines in rural areas. Despite the possible emergence of large pharmacies in towns, the system for the provision and development of community pharmacy services and continuing professional education of community pharmacists remains unclear.The restriction of vertical integration \u201cwill increase professional independence of community pharmacists and decrease commercial influence on operation of community pharmacies.\u201d According to another opinion, \u201cThe new regulation would jeopardize the retail sale of medicines, the maintenance and development of the current community pharmacy system needs financial support from the wholesale sector.\u201dChanges in the pricing of medicines were directly connected with restrictions of vertical integration. \u201cOpening the pharmaceutical market will enable new wholesale companies to enter the pharmaceutical market in Estonia and increased competition would result in the decrease of medicine price,\u201d future owners concluded. Current owners gave a completely opposing description: \u201cThe new regulation will end the collaboration between the wholesale and retail sector that currently provides efficient discounts on medicine prices\u201d.There is no common pattern for the pharmaceutical retail and wholesale sector in Europe. Based on the trends of the last decade, community pharmacy chains have become more prevalent and vertical integration between wholesalers and retailers has also been on the rise in EU. Some form of a pharmacy chain is allowed in 19 EU countries, and vertical integration is implemented in 10 EU countries . AlthougNew regulations of the community pharmacy sector in Estonia aim to reduce the degree of liberalism in pharmaceutical policy. Over the last 20 years, community pharmacies in Estonia have become modern healthcare institutions offering better access to a large selection of medicines and patient-centered services. On the other hand, privatization and the deregulation of pharmacy ownership have resulted in an increased number of new pharmacies, especially in larger towns. This has created a shortage of pharmacists and assistant pharmacists, which could be seen as a limiting or delaying factor in the introduction of novel, extended and patient-centered services at community pharmacies. Decreasing profit margins may threaten the viability of small community pharmacies in rural areas, potentially limiting consumer access to community pharmacy services in the future .In this study, the future operation of a newly regulated community pharmacy sector was unclear for all participants. Reform proponents based their descriptions of future developments mostly on theoretical considerations about management of community pharmacies and emphasized professional ethics and the independence of pharmacists. Without support from the government, however, these ideas would be too declarative and lack actual solutions for practical implementation. Opponents of the reforms combined economical and professional thinking linked to existing practical experience on operation of community pharmacies in their descriptions. 
However, the ideas described were mostly based on the defense and approval of the current situation and not really open to possible new developments or collaboration with new owners.New regulations of the community pharmacy sector have brought to light several unresolved problems within the pharmacy sector in Estonia. This could be connected to not having an actual national pharmaceutical policy . One posThis study highlighted several other unsolved problems in the community pharmacy sector of Estonia. For example, changed ownership will not resolve the uneven geographical distribution of community pharmacies or the future role of pharmacies in the healthcare system. Lately, encouraging initiatives have been launched in developing professional standards, including the specification of professional roles for pharmacists and assistant pharmacists, and the development and harmonization of the community pharmacy services . It seemWhile the sector is being reformed it would be wise first to identify the common ideas of all stakeholders, find ways for collaboration and not confront the main players of the community pharmacy sector. Otherwise, it could easily happen that main stakeholders dealing with a jumble of questions related to the new regulations forget the main purpose of healthcare for pharmacists, which is to provide quality pharmaceutical care to patients.Important stakeholders in primary healthcare as governmental institutions, representatives of general practitioners and patients as well as representatives of the academia did not participate in the study and the results could therefore be biased. As such, the results may not be generalizable to the entire community pharmacy sector in Estonia. Many of the institutions and organizations explained their non-participation in the study with not having an official position regarding the restrictions of pharmacy ownership and vertical integration, or their lack of need for a study on this topic.For data analysis, the text condensation method was used. This method helped to identify core information from open-ended replies. However, the condensation of meaning units was only possible using the positive/negative/neutral perceptions of the respondents as some of the questions received contrasting replies. However, this type of grouping could underline the impression of the stakeholders of the Estonian community pharmacy sector more as opponents than collaborators.Pharmacy ownership and vertical integration restrictions have raised many questions and unclear expectations among different stakeholders in the pharmacy sector of Estonia. The study revealed two opposing positions regarding the future development of community pharmacies. Future owners underlined the need for increased professional independence and more healthcare-oriented operation of community pharmacies. Opponents of the reform argued against these ideas as community pharmacists do not have sufficient practical experience and finances to ensure the sustainable development of the community pharmacy sector. There is an urgent need for official government standpoints regarding the implementation of new regulations to assure the continuous provision of community pharmacy services and patient care in Estonia."} +{"text": "In this paper, we introduce a biased median filtering image segmentation algorithm for intestinal cell glands consisting of goblet cells. 
While segmentation of individual cells is generally based on dissimilarities in intensities, textures, and shapes between cell regions and background, the proposed segmentation algorithm for intestinal cell glands is based on differences in cell distributions. The intestinal cell glands consist of goblet cells that are distributed in chain-organized patterns, in contrast to the more randomly distributed nongoblet cells scattered in the bright background. Four biased median filters with long rectangular windows of identical dimension, but different orientations, are designed based on the shapes and distributions of cells. Each biased median filter identifies a part of the gland segments in a particular direction. The complete gland regions are the combined responses of the four biased median filters. A postprocessing procedure is designed to reduce the defects that may occur when glands are located very close together and to narrow the gaps that arise from the thin distribution of goblet cells. Segmentation results on real intestinal cell gland images are provided to show the effectiveness of the proposed algorithm."} +{"text": "The superior alveolar nerves course lateral to the maxillary sinus and the greater palatine nerve travels through the hard palate. This difficult three-dimensional anatomy has led some dentists and oral surgeons to a critical misunderstanding in developing the anterior and middle superior alveolar (AMSA) nerve block and the palatal approach anterior superior alveolar (P-ASA) nerve block. In this review, the anatomy of the posterior, middle and anterior superior alveolar nerves, greater palatine nerve, and nasopalatine nerve is revisited in order to clarify the anatomy of these blocks so that the perpetuated anatomical misunderstanding is rectified. We conclude that the AMSA and P-ASA nerve blockades, as currently described, are not based on accurate anatomy. Anesthetic blockade of the posterior superior alveolar (PSA) branch of the maxillary nerve has played an important role in the endodontic treatment of irreversible acute pulpitis of the upper molar teeth, except for the mesiobuccal root of the first molar tooth. The infraorbital nerve gives rise to middle superior alveolar (MSA) and anterior superior alveolar (ASA) branches. However, blockade of the greater palatine nerve (GPN) has been used, especially for periodontal treatment, to anesthetize the palatal mucosa, including the posterior part of the hard palate and its overlying soft tissues, anteriorly as far as the first premolar and medially to the midline, and palatal gingiva.
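Returning briefly to the gland-segmentation record above: the following is a minimal sketch of the four-orientation "biased median" filtering idea it describes, not the authors' implementation. Approximating the "bias" with a low-percentile rank filter, the window dimensions, the minimum-combination of the four directional responses, and the threshold and morphological post-processing are all assumptions made for illustration.

```python
# Minimal sketch of a four-orientation "biased median" gland segmentation.
# Window size, percentile, threshold and post-processing are illustrative
# assumptions, not values from the cited work.
import numpy as np
from scipy import ndimage


def rotated_bar_footprint(length=41, width=5, angle_deg=0.0):
    """Boolean footprint of a long, thin rectangular window at a given angle."""
    size = length + 2
    canvas = np.zeros((size, size))
    r0 = size // 2 - width // 2
    c0 = size // 2 - length // 2
    canvas[r0:r0 + width, c0:c0 + length] = 1.0
    rotated = ndimage.rotate(canvas, angle_deg, reshape=False, order=0)
    return rotated > 0.5


def gland_mask(gray, length=41, width=5, bias_pct=35, thresh=None):
    """Combine four directional rank-filter responses into a gland mask.

    Glands appear as chains of dark goblet cells on a bright background, so a
    low-percentile filter along a window aligned with the chain responds
    strongly (dark) inside glands and weakly (bright) elsewhere.
    """
    responses = []
    for angle in (0, 45, 90, 135):
        fp = rotated_bar_footprint(length, width, angle)
        responses.append(ndimage.percentile_filter(gray, bias_pct, footprint=fp))
    # A pixel is kept if ANY orientation gives a dark (low) response.
    combined = np.min(responses, axis=0)
    if thresh is None:
        thresh = combined.mean()  # crude global threshold (assumption)
    mask = combined < thresh
    # Post-processing: close small gaps left by thinly distributed goblet cells
    # and remove speckle caused by isolated non-goblet cells.
    mask = ndimage.binary_closing(mask, structure=np.ones((5, 5)))
    mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))
    return mask
```

The long, thin footprints respond where dark goblet cells line up along the window direction, which is the distribution cue the preceding abstract relies on; the closing/opening steps stand in for its defect-reduction and gap-narrowing post-processing.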
Finally, this nerve divides into small branches that unite as the molar part of the superior dental plexus, which supplies the ipsilateral molar teeth, gingivae and the adjoining part of the cheek . As it rAnatomy of the greater palatine and nasopalatine nervesThe GPN\u00a0is one of the branches of the maxillary nerve that enters the greater palatine foramen to travel within the oral cavity along the roof of the mouth. It travels downward and forward giving rise to numerous branches to the ipsilateral palatal mucosa, gingiva,\u00a0and glands of the hard palate as it approaches the incisor teeth. The GPN communicates with the terminal branch of the nasopalatine nerve.The nasopalatine nerve leaves the pterygopalatine fossa through the sphenopalatine foramen to enter the nasal cavity. It passes across the cavity to the back of the nasal septum, runs downward and forward through the nasal septum in a groove in the vomer and then turns down through the incisive canal to traverse the incisive foramen at the anterior part of the hard palate. It supplies the lower part of the nasal septum and the anterior part of the hard palate where it communicates with the GPN.Propagated misunderstanding of the AMSA nerve blockThe technique of AMSA nerve block was first developed by Friedman and Hochman\u00a0based on the description in the old literature that pulpal anesthesia can be accomplished from a palatal injection as documented by Fischer in 1911 Friedman and Hochman \u00a0illustraIn their drawing, the infraorbital nerve or maxillary nerve is shown as giving rise to branches to all of the maxillary teeth through the hard palate as if they traveled in a similar course as the GPN. Furthermore, in their drawing, the main trunk of the infraorbital nerve becomes the ASA as if it were the terminal branch of the infraorbital nerve. The course of the ASA in this figure might be confused with the pathway of the nasopalatine nerve toward the incisive foramen through the nasal septum. Therefore, this publication has added to the confusion of the course of the superior alveolar branches, GPN and nasopalatine nerve, and has probably resulted in patients undergoing unnecessary nerve blockade.Concept of the P-ASA nerve block and misunderstandingThe P-ASA nerve block was proposed by Friedman and Hochman in 1999 as a priAccording to Friedman and Hochman , the useApplication to periodontal treatmentSeveral studies have reported the effectiveness of AMSA nerve block in periodontal surgery and scaling and root planing (SRP) , 15-17. CriticismsCorbett, et al. reportedBurns, et al. reportedTiny branches could exist between the GPN and AMSA branches and between the nasopalatine nerve and the ASA branch. However, if we revisit the anatomy of the maxilla, palate and nasal septum, it is easy to understand how many reports in the literature have made critical mistakes regarding AMSA and P-ASA nerve blockade.The AMSA and P-ASA nerve blockades are not based on accurate anatomy. The term \u201cAMSA\u201d nerve blockade is infiltration anesthesia to the root of the anterior, canine and premolar teeth, and blockade of the branch of the GPN. This anesthesia might provide numbness to some extent, but this comes most likely from \u201cinfiltration anesthesia.\u201d The term \u201cP-ASA\u201d nerve block is similar to the nasopalatine nerve block. 
The innervation of the maxillary teeth should be revisited in order to provide the best local anesthesia to patients without unnecessary injections and the potential for associated complications."} +{"text": "Spodoptera frugiperda, J.E. Smith) is a noctuid moth that is a major and ubiquitous agricultural pest in the Western Hemisphere. Infestations have recently been identified in several locations in Africa, indicating its establishment in the Eastern Hemisphere where it poses an immediate and significant economic threat. Genetic methods were used to characterize noctuid specimens infesting multiple cornfields in the African nation of Togo that were tentatively identified as fall armyworm by morphological criteria. Species identification was confirmed by DNA barcoding and the specimens were found to be primarily of the subgroup that preferentially infests corn and sorghum in the Western Hemisphere. The mitochondrial haplotype configuration was most similar to that found in the Caribbean region and the eastern coast of the United States, identifying these populations as the likely originating source of the Togo infestations. A genetic marker linked with resistance to the Cry1Fa toxin from Bacillus thuringiensis (Bt) expressed in transgenic corn and common in Puerto Rico fall armyworm populations was not found in the Togo collections. These observations demonstrate the usefulness of genetic surveys to characterize fall armyworm populations from Africa.The fall armyworm ( Spodoptera frugiperda (J.E. Smith), is the primary pest of corn production in South America and in portions of the southeastern United States ).) allele .The sudden discovery of fall armyworm in Africa presents a major concern to a continent that is already periodically troubled by insufficient and unstable food supplies. The invasion of this pest presents two general problems. The first is that the introduction of a new species into an area where its normal natural enemies are not present could allow an initial period of rapid population growth and dispersion with consequent substantial impacts on agriculture. This may be the case with fall armyworm where the economic damage of infesting populations has been identified in widely dispersed regions over a short time period . The secCO1 and Tpi markers both separately and in combination indicate that both strains are present in Togo, with the corn-strain predominant in the tested collection. The proportions of the two strains differed depending on the marker used, with Tpi giving a substantially higher corn-strain percentage than CO1 found among the 17 collected at locations north of site 21. These numbers are small and only from a single year, so they at best describe a preliminary indication of possible differences in the geographical distribution of genetically defined subpopulations of fall armyworm. A more comprehensive and systematic survey that includes plant hosts preferred by the rice-strain is needed to determine the consistency of this observation.The specimens obtained from Togo were collected from multiple regions over a six-month period in 2016. The than CO1 . These rplotypes , 23, 26.he coast . Of the CO1 barcode region used to identify the species and host strain of the Togo collection identified only a single rice-strain and corn-strain haplotype, both of which were the most common forms present in North American populations [CO11164-1287 analysis that identified only one of the four possible corn-strain haplotypes was present in Togo. 
This haplotype, h4, predominates in the FL-type subpopulation, suggesting that the Western Hemisphere source of the Togo infestation is most likely the region that extends northward from the Lesser Antilles to Puerto Rico, through Florida and includes much of the eastern coast of the United States (in review) was not present in Togo suggests that Puerto Rico may not be the source of the Togo infestation.The ulations . This lod States . HoweverIn conclusion, genetic markers provide an important resource for the investigation of fall armyworm infesting agricultural areas of Africa. Genetic analysis can confirm species identification based on morphology, is the only reliable means of identifying host strains, can provide an indication of where in the Western Hemisphere the population invading Africa might have originated, and can detect a Bt-resistance trait that could compromise the effectiveness of Bt pesticides and Bt crops as control options. The Togo population may not be representative of fall armyworm in other parts of Africa and may be susceptible to future invasive introductions, indicating the need for continued and more comprehensive genetic characterizations of African fall armyworm populations to monitor and forecast the spread of this invasive pest."} +{"text": "Elastic properties of model crystalline systems, in which the particles interact via the hard potential (infinite when any particles overlap and zero otherwise) and the hard-core repulsive Yukawa interaction, were determined by Monte Carlo simulations. The influence of structural modifications, in the form of periodic nanolayers being perpendicular to the crystallographic axis [111], on auxetic properties of the crystal was investigated. It has been shown that the hard sphere nanolayers introduced into Yukawa crystals allow one to control the elastic properties of the system. It has been also found that the introduction of the Yukawa monolayers to the hard sphere crystal induces auxeticity in the Comprehensive Monte Carlo simulations revealed that the structural modification of the Yukawa system in the form of nanolayers in the (111) crystallographic plane can lead both to strengthening as well as weakening the auxetic properties of the system. Reduction of auxetic properties is achieved by increasing the concentration of nanolayer particles in multilayer Yukawa systems, whereas monolayered Yukawa systems exhibit the enhancement of auxetic properties. The latter is the appearance of a new auxetic direction way of controlling the elastic properties of materials and show the consequences of introducing the nanolayers in the (111) crystallographic plane of given crystal. This information may be useful for the construction of nanocomposites, and may indicate directions for further research on materials with desired elastic properties."} +{"text": "Collection of insects at the scene is one of the most important aspects of forensic entomology and proper collection is one of the biggest challenges for any investigator. Adult flies are highly mobile and ubiquitous at scenes, yet their link to the body and the time of colonization (TOC) and post-mortem interval (PMI) estimates is not well established. Collection of adults is widely recommended for casework but has yet to be rigorously evaluated during medicolegal death investigations for its value to the investigation. 
In this study, sticky card traps and immature collections were compared for 22 cases investigated by the Harris County Institute of Forensic Sciences, Houston, TX, USA. Cases included all manner of death classifications and a range of decomposition stages from indoor and outdoor scenes. Overall, the two methods successfully collected at least one species in common only 65% of the time, with at least one species unique to one of the methods 95% of the time. These results suggest that rearing of immature specimens collected from the body should be emphasized during training to ensure specimens directly associated with the colonization of the body can be identified using adult stages if necessary. Insects can provide useful data in death investigations. While they have the potential to provide many types of information, the most widespread application of forensic entomology data in death investigations is the estimation of the time of colonization (TOC) by insects . The estUsing accumulated degree days (ADD) or accumulated degree hours (ADH) is one method employed by forensic entomologists to estimate the TOC . This meAn ADH method\u2013based case uses a targeted approach to sampling focused on obtaining the oldest species and the most developed life stages present on and around the body as they presumably represent the primary colonizers, which in turn provide the most accurate indicators of PMI. Succession-based PMI estimation, a widely researched ecologically based technique for PMI estimation , is moreIndoor scenes comprise approximately two-thirds of the forensic entomology cases, represented by all manners of death, analyzed by the Harris County Institute of Forensic Sciences (HCIFS) . Scene iThere are two widely adopted methods for collecting adult flies on the scene including: active sampling with a sweep net and passive sampling with a sticky trap ,11. The Cases\u2014The cases included in this analysis consist of 22 scene deaths investigated by the Harris County Institute of Forensic Sciences (HCIFS) where the forensic entomologist attended the scene and collected larval flies from the decedent\u2019s body and adult flies from the scene with the use of a passive sticky trap. All of the samples from these cases were used to calculate an estimated TOC using the ADH method [H method as part In addition to the comparison of species composition on trapping method, the presence of male and female blow flies collected on the sticky trap was recorded for comparison to data collected by Mohr and Tomberlin . HoweverCollection Methods\u2014Larval fly specimens were collected from the body according to HCIFS standard operating procedures which are based on several sources [ sources ,3,4,12. sources and the sources ,20,21,22A single sticky trap was placed within approximately 30 cm of the body at each scene following the method of Haskell and Williams in a \u201cpuThe 22 cases reviewed in this study revealed a lack of congruence between the fly specimens collected via the passive sticky trap and those collected as immatures from the body during death scene investigations. Only 65% of the cases had at least one species that was the same using both methods. The cases included both indoor and outdoor scenes; however, a majority were indoor scenes. The manner of death for the cases included several different manners ; howeverSpecies collected differed for both indoor and outdoor scenes, for the observed stages of decomposition, and months and manners of death. 
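Before turning to the exception described next, two of the quantities referred to above, accumulated degree hours and per-case agreement between the body and trap collections, can be illustrated with a short sketch; the base temperature, species lists and data layout are invented for the example and are not values from the casework.

def accumulated_degree_hours(hourly_temps_c, base_temp_c):
    # ADH: sum of the hourly temperature excess above a developmental base temperature.
    # The base temperature is species-specific and must come from published
    # development data; the value used below is purely illustrative.
    return sum(max(0.0, t - base_temp_c) for t in hourly_temps_c)

def shared_species(body_species, trap_species):
    # Species collected both as immatures from the body and as adults on the sticky trap.
    return set(body_species) & set(trap_species)

# Invented case: one species in common, one unique to each collection method.
body = ["Chrysomya rufifacies", "Lucilia eximia"]
trap = ["Chrysomya rufifacies", "Musca domestica"]
print(shared_species(body, trap))                                  # {'Chrysomya rufifacies'}
print(accumulated_degree_hours([24, 26, 28, 30], base_temp_c=10))  # 68.0

Under this kind of comparison, a case would count toward the 65% agreement figure quoted above whenever the shared set is non-empty.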
There is one exception in these data, however: all three of the cases with the official manner of death of accident were accidental overdoses related to acute cocaine toxicity (one coupled with chronic ethanol abuse). In these three cases, both of these collection methods yielded the same species. Two of these accidental cases were located indoors and one was located outdoors, and all three were moderately decomposed. Overall, more female flies (115) were collected than males (33). This would appear to be consistent with changes at the scene either allowing additional opportunities for colonization or changing the attractive nature of the scene to additional flies. However, there did not appear to be a trend associated with the variables recorded in this limited study. An often-overlooked aspect of scene investigation is the impact of the investigation itself on insect access. Insects can gain access to the scene once the scene is entered by first responders, and there can be hours between the arrival of first responders and the forensic entomologist or medicolegal death investigator who may collect specimens."} +{"text": "We treated a patient with a rare combination of ipsilateral fractures of the distal and proximal ends of the radius. A man aged 42 years had simultaneous fractures of the distal and proximal ends of the radius following a roadside accident. The distal end fracture of the radius was treated with surgical reduction and T-plate volar fixation, and the undisplaced radial neck fracture was treated by above-elbow splintage for 2 weeks. Elbow mobilization was started at 2 weeks. The distal radius was protected for another 4 weeks in a below-elbow functional brace. Ipsilateral proximal and distal radial fracture is an uncommon injury pattern. The series illustrates a number of problems associated with this combination. Firstly, one should be aware of this rare injury pattern and there should be greater emphasis on clinical examination of elbow in cases of wrist injuries and vice versa. Once diagnosed, one faces the dilemma of appropriate management in these cases. The appropriate management will depend on the injury characteristics including the age of the patient and the fracture pattern. One should try to preserve the radial head to prevent a possible proximal radial migration especially in younger patients. Fractures of the distal end of the radius are commonly encountered in clinical practice, while fractures of the proximal end of the radius occur mostly when an individual falls with the impact on the outstretched hand, with the elbow joint extended; these fractures should be treated with special attention to associated injury of the ulnar collateral ligament. A man aged 42 years presented with simultaneous fractures of the distal and proximal ends of the left radius after a roadside accident. Fractures of the distal end of the radius are often associated with elbow joint dislocation, but rarely with fractures of the proximal end of the radius. Although the reason for this finding is not clear, a fracture of the distal end of the radius may reduce the axial pressure applied to the radius and, thereby, reduce the possibility of an additional fracture occurring at the proximal end of the same radius. Both fractures of the proximal and distal radius are quite common individually in adults and children. In fact, distal radius fractures account for 14% of all extremity injuries and 17% of all adult fractures treated in emergency departments. 
SimilarA less common explanation for these types of injuries may be a direct blow producing a fracture at one radial site along with an axial force created by fall . LookingIpsilateral proximal and distal radial fracture is an uncommon injury pattern. The series illustrates a number of problems associated with this combination. Firstly, one should be aware of this rare injury pattern and there should be greater emphasis on clinical examination of elbow in cases of wrist injuries and vice versa. Once diagnosed, one faces the dilemma of appropriate management in these cases. The appropriate management will depend on the injury characteristics including the age of the patient and the fracture pattern. One should try to preserve the radial head to prevent a possible proximal radial migration especially in younger patients.The authors declare no competing interests."} +{"text": "Clinical holistic medicine has its roots in the medicine and tradition of Hippocrates. Modern epidemiological research in quality of life, the emerging science of complementary and alternative medicine, the tradition of psychodynamic therapy, and the tradition of bodywork are merging into a new scientific way of treating patients. This approach seems able to help every second patient with physical, mental, existential or sexual health problem in 20 sessions over one year. The paper discusses the development of holistic medicine into scientific holistic medicine with discussion of future research efforts."} +{"text": "Lactobacillus and Gardnerella vaginalis have shown that these organisms shared compatibility profiling for the majority of the normal bacterial constituents of the female genital tract. Dominance disruption appears to come from the addition of compatible co-isolates and presumed loss of numerical superiority. These phenomena appear to be the keys to reregulation of BFFGT. Lactobacillus appears to be the major regulator of both G. vaginalis and anaerobic bacteria. When additional organisms are added to the bacterial flora, they may add to or partially negate the inhibitory influence of Lactobacillus on the BFFGT. Inhibitor interrelationships appear to exist between coagulase-negative staphylococci and Staphylococcus aureus and the group B streptococci (GBS) and other beta hemolytic streptococci. Facilitating interrelationships appear to exist between S. aureus and the GBS and selected Enterobacteriaceae.Analysis of 240 consecutive vaginal swabs using the compatibility profile technique revealed that only 2 bacteria have the ability to be a sole isolate and as such a candidate to be a major aerobic regulator of the bacterial flora of the female genital tract (BFFGT). Compatibility profiles of"} +{"text": "The chromosomes of 12 samples of cultured Chinese hamster kidney and prostate cells , whose tissue culture properties have already been described have been examined for numerical change and for the appearance of abnormal markers. Six transformed kidney subclones contained a consistent telocentric marker not present in the normal parental cell, and Giemsa banding demonstrated this to be the centromere and the long (q) arm of the number 4 chromosome in all cases. Two transformed prostate subclones also contained a consistent telocentric marker, not present in similarly derived normal subclones or in the normal parental cell, and Giemsa banding demonstrated this to be a different fragment (the centromere and most of the p arm) of the number 4 chromosome. 
It is believed that the use of a mixed-serum culture medium, designed to stabilize the karyotype of cultured Chinese hamster cells, is at least partly responsible for the detection of these transformation-associated chromosome changes."} +{"text": "Broken screws after interlocking nailing of long bones are commonly seen in Orthopaedic practice. Removal of such screws can be difficult, particularly for the distal part, which is often held within the bone. We describe a simple technique of using a Steinman pin to aid removal of broken screws in a case of non-union of a tibial fracture with a broken interlocking nail and screws. The ready availability of the Steinman pin and the reproducibility of the technique make it a useful aid for removal of broken interlocking screws. Broken interlocking screws are not an unusual problem in Orthopaedic practice and their causes can be varied. A 28-year-old male presented with increasing leg pain and disability after a previous interlocking nailing procedure for a tibial shaft fracture. Radiographs of his leg showed the broken interlocking nail and screws in the tibia along with non-union of the fracture. To proceed with any revised fixation of the fracture required removal of the original metal work in situ, including the broken interlocking screws. An appropriate incision was made over the screw and the head part of the broken screw was removed after dissection. The blunt end of the Steinman pin was then passed down the screw track until it touched the broken end of the retained screw. Several techniques and methods have been described for removal of broken interlocking nails and screws. Written informed consent was obtained from the patient for publication of this case report and accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal. The authors declare that they have no competing interests. KS was involved with the initial writing of the manuscript including the literature search. GA is the senior author who described and used the technique. He also reviewed the article and suggested final changes before submission."} +{"text": "The solvolysis of the complex trans-- has been investigated by means of spectroscopic techniques in different solvents. We investigated the indazolium as well as the sodium salt, the latter showing improved solubility in water. In aqueous acetonitrile and ethanol the solvolysis results in one main solvento complex. The hydrolysis of the complex is more complicated and depends on the pH of the solution as well as on the buffer system. The ruthenium(III) complex Hlnd"} +{"text": "Sir, Despite the initial success of imatinib in the treatment of chronic myeloid leukaemia (CML), two groups have recently published theoretical models to study the treatment response to imatinib. Our models remain valid given the currently available data. Further experimental and theoretical investigations are needed to increase the understanding of CML stem cell dynamics and to clarify the mechanism of their insensitivity to imatinib."} +{"text": "We produced specula for laryngomicroscopy to observe blind spots in the operating field. Use of these specula has facilitated detailed observation of the lower surface of the false vocal folds, laryngeal ventricle, and subglottis, which were previously in blind spots. 
The specula are useful in the following ways: 1) clarifying blind spots for improved diagnosis and providing more accurate surgical margins; 2) observing the lower lips of the vocal folds in phonosurgery; and 3) vaporizing with laser reflection. The specula are cheap and easy to use and are well worth considering for application to laryngomicroscopy."} +{"text": "Genetic and environmental factors have been widely suggested to contribute to the pathogenesis of primary biliary cirrhosis (PBC), an autoimmune disease of unknown etiology leading to destruction of small bile ducts. Interestingly, epidemiologic data indicate a variable prevalence of the disease in different geographical areas. The study of clusters of PBC may provide clues as to possible triggers in the induction of immunopathology. We report herein four such unique PBC clusters that suggest the presence of both genetic and environmental factors in the induction of PBC. The first cluster is represented by a family of ten siblings of Palestinian origin that have an extraordinary frequency of PBC (with 5/8 sisters having the disease). Second, we describe the cases of a husband and wife, both having PBC. A family in which PBC was diagnosed in two genetically unrelated individuals, who lived in the same household, represents the third cluster. Fourth, we report a high prevalence of PBC cases in a very small area in Alaska. Although these data are anecdotal, the study of a large number of such clusters may provide a tool to estimate the roles of genetics and environment in the induction of autoimmunity."} +{"text": "Traction apophysitis of the medial malleolus is very rare and is presented here in view of its rarity. A 13-year-old boy presented with pain and swelling of 3 months' duration around the left ankle, without a history of trauma. The swelling was diffuse, with tenderness on the anterior aspect of the medial malleolus. The overlying skin was normal. X-rays revealed a fragmented accessory ossification centre of the medial malleolus on the left side. MRI revealed multiple foci of hypointensity in T1- and T2-weighted images of the left medial malleolus apophysis. The patient was treated in a below-knee plaster for three weeks with restriction of sports activities for 5 weeks. The patient became asymptomatic in 8 weeks. Traction apophysitis is a well-known entity and is commonly seen in children involved in sports activities. Accessory ossification centers may appear as a normal variant at the medial malleolus in growing children, but traction apophysitis is very rare. We are reporting a case of traction apophysitis of the medial malleolus in a child with a review of the literature. A 13-year-old boy presented with pain and swelling around the left ankle region for three months. He was very active in outdoor sports activities. The pain used to increase on activity. There was no history of trauma. On examination there was diffuse swelling around the medial malleolus on the left side. The overlying skin was normal in appearance. Tenderness was present over the anterior aspect of the medial malleolus and there was pain on valgus stress applied to the ankle. Routine blood investigations were within normal limits. Plain X-ray of the ankle showed an accessory ossification center of the medial malleolus on both sides. The accessory ossification center on the right side showed uniform density with smooth margins, and the one on the left side was fragmented. On the basis of clinical and radiological findings, a diagnosis of traction apophysitis of the medial malleolus was made. 
A below-knee POP slab was applied on the left side for three weeks and sports activities were restricted for another five weeks. The patient became asymptomatic in eight weeks. The traction apophyses are sites of active growth in children, consisting of columns of growth cartilage uniting tendon with bone. Injuries at the traction apophysis may result from a single episode of macro trauma, with resultant frank avulsion of a portion of the apophysis, or may result from repetitive micro trauma, with resulting pain, swelling and, on occasion, bony and cartilaginous overgrowth referred to as apophysitis. Another factor that is active in the occurrence of traction apophysitis is the growth process itself. Longitudinal growth occurs in the bones of the extremities and spine, with the soft tissues (muscle-tendon units, ligaments and so on) secondarily elongating in response to this growth. During this rapid growth, there can be a measurable increase in muscle-tendon tightness around the joints, with loss of flexibility and an enhanced environment for overuse injury. As the tight muscle-tendon unit is subject to repetitive overload, there is an increased potential for repetitive tiny avulsion fractures at the weakest site in the muscle-tendon unit, the apophyseal growth cartilage. The final factor associated with the onset of most cases of traction apophysitis is repetitive stress applied to the apophysis, often in the form of repetitive sports training or competition. The accessory centers of ossification of the medial malleolus are common in skeletally immature individuals. Many of the ossification centers are identified accidentally when radiographs are taken to evaluate injury at the ankle or foot. At times they may be mistaken for fractures. A smooth appearance on both sides of the radiolucency usually obviates the diagnosis. There have been very few reports describing a symptomatic accessory center at the medial malleolus with no history of trauma. Conservative management in the form of restriction of activities, with or without immobilization, leads to resolution of symptoms in one to six months depending on the severity of injury. The case described in this report showed fragmentation of the accessory center on both the X-ray and MRI. A clinico-radiological diagnosis of apophysitis of the medial malleolus was made and conservative treatment was given in the form of a plaster slab and rest, which showed good results. Extra centers of ossification at the tip of the internal malleolus are common in children. Most of them remain asymptomatic and eventually fuse with the lower tibial epiphysis. A few of them become symptomatic in children involved in sports activities due to repetitive trauma. Conservative management in the form of restriction of activities and splintage gives good results."} +{"text": "Our previous studies, in specimens of large intestine resected for carcinoma, have shown abnormal patterns of mucous secretion in areas of apparently \"normal\" mucosa, where goblet cells produce mainly sialomucins as compared with the true normal colonic mucosa in which sulphomucins predominate. In the present work, large bowel cancer was induced in rats by the administration of 1,2-dimethylhydrazine-2HCl (DMH). We attempted to study the sequential histological and secretory abnormalities which developed in the colonic epithelium during carcinogenesis, and to correlate these changes with those described above in the human. 
The microscopical and histological lesions observed in the colonic mucosa of DMH-treated rats confirmed the findings of other authors and resembled human colorectal cancer. The earliest changes detected were small foci of hyperplasia, accompanied from the 6th week onwards by several foci of dysplasia. Carcinoma in situ appeared at the 15th week and finally invasive carcinoma developed from the 19th week onwards. Changes in the type of mucous secretion, with predominance of sialomucins, were observed in the majority of the areas showing mild to moderate dysplasia, whilst the surrounding normal epithelium produced sulphated material. Mucous depletion was a common feature in areas of severe dysplasia and carcinoma. These findings correlated well with the similar variations in mucin composition observed in human colonic mucosa in carcinoma and further supported our previous hypothesis that mucin changes characterized by an increase in sialomucins might reflect early malignant transformation. If this hypothesis proved to be correct, the use of a simple method for the identification of mucins in large bowel biopsies would be of great help in detecting early malignancy."} +{"text": "Distal colostography (DC), also called distal colography or loopography, is an important step in the reparative management of anorectal malformations (ARMs) with imperforate anus, Hirschsprung's disease and colonic atresia (rarely) in children and obstructive disorders of the distal colon in adults. It serves to identify/confirm the type of ARM, presence/absence of fistulae, leakage from anastomoses, or patency of the distal colon. We present a pictorial essay of DC in a variety of cases. Distal colostography (DC) is an important diagnostic investigation to delineate the altered anatomy of anorectal malformations and know the spectrum of associated fistulae between the blind rectum on the one hand and the bladder, urethra, perineum and vagina on the other. It remains a dependable test for a surgeon to plan surgical repair. Anorectal malformations (ARMs) occur with an incidence of 1 in 5000. About one month after colostomy or before the reparative surgery is planned, distal colostography (DC) is essential. It serves many purposes; it helps to (1) find the degree of fecal impaction and ectasia of the blind end of the rectum, (2) judge the distance of the blind rectum from the marker placed at the expected site of the anus (pouch-to-perineum distance), and (3) detect precisely the various types of rectal fistulae. The technique followed by us is as follows: A marker is placed over the anal dimple or expected position of the anus. Another marker is placed at the point where urine or fecal material is seen to be discharging. After passing an indwelling catheter through the stoma leading to the distal colon, its balloon is inflated and it is pulled back during injection of the contrast to avoid any spillage. The distal blind end of the colon gets filled progressively and pressure is maintained till the contrast fills the fistulous tract. Water-soluble contrast is used. Images are obtained under fluoroscopy. The colostogram is obtained in the lateral position, with the femora overlapping as perfectly as possible, to determine the level of the blind end of the rectum and identify the type of ARM. In practice, DC is a very useful technique since it has a high specificity. 
Its sensitivity can be increased if proper care is taken to demonstrate the most distal end of the blind rectum and the fistula.Some surgeons perform a defunctioning colostomy for the management of the aganglionic colon. DC confirms the earlier diagnosis and helps in planning the further course of action .In dealing with strictures due to chronic colitis or complicated diverticulosis and malignant tumors of the rectosigmoid region a defunctioning colostomy and resection with anastomosis are undertaken. DC is useful to check for any leakage from the site of anastomosis before closure of the colostomy ."} +{"text": "This paper provides a brief overview of the Canadian physical activity communications and social marketing organization \"ParticipACTION\"; introduces the \"new\" ParticipACTION; describes the research process leading to the collection of baseline data on the new ParticipACTION; and outlines the accompanying series of papers in the supplement presenting the detailed baseline data.Information on ParticipACTION was gathered from close personal involvement with the organization, from interviews and meetings with key leaders of the organization, from published literature and from ParticipACTION archives. In 2001, after nearly 30 years of operation, ParticipACTION ceased operations because of inadequate funding. In February 2007 the organization was officially resurrected and the launch of the first mass media campaign of the \"new\" ParticipACTION occurred in October 2007. The six-year absence of ParticipACTION, or any equivalent substitute, provided a unique opportunity to examine the impact of a national physical activity social marketing organization on important individual and organizational level indicators of success. A rapid response research team was established in January 2007 to exploit this natural intervention research opportunity.The research team was successful in obtaining funding through the new Canadian Institutes of Health Research Intervention Research Funding Program. Data were collected on individuals and organizations prior to the complete implementation of the first mass media campaign of the new ParticipACTION.Rapid response research and funding mechanisms facilitated the collection of baseline information on the new ParticipACTION. These data will allow for comprehensive assessments of future initiatives of ParticipACTION. ParticipACTION was launched as a physical activity communications and social marketing organization in 1971 with financial support from the Government of Canada . The orgDuring its 30-year mandate, ParticipACTION became a catalyst for physical activity across Canada and was viewed internationally as an exemplar of a highly successful national physical activity social marketing organization, with many award winning campaigns or who have a key interest in physical activity promotion as well as locally based alliances to increase physical activity . The mandate of the new ParticipACTION is contextualized within the broader Integrated Pan-Canadian Healthy Living Strategy [Pan-Canadian Physical Activity Strategy [During its first year of operations, ParticipACTION focused on developing and launching the first in a series of annual mass media campaigns. 
In subsequent years, it is planned to expand activities to include knowledge transfer; ongoing and coordinated communication; and supportive activities to increase the capacity of Canada's physical activity and sport delivery system, comprised of governmental and non-governmental organizations at the national and provincial/territorial level whose primary mandate is physical activity. Baseline data were collected at the individual level by gathering information on awareness of the brand, campaign recall, knowledge, understanding and physical activity behaviours before and during the early period after the launch of the revitalized ParticipACTION. Baseline data were also collected at an organizational level to assess the future impact of a sustained campaign through ParticipACTION on the overall capacity of the physical activity sector in Canada. The combination of early knowledge of this ensuing natural intervention and the announcement of the CIHR funding program created a perfect opportunity to collect important baseline data before the official resurrection of ParticipACTION. During early discussions of the possible resurrection of ParticipACTION the lead author of this paper (MST) was appointed to the new Board of ParticipACTION and its Executive Committee. This privileged position allowed MST to facilitate strategic research planning extraneous to the operations of the Board and Executive. At the same time (early December 2006), though serendipitous to this strategic research planning, the Canadian Institutes of Health Research (CIHR) announced a new strategic funding opportunity titled the \"Intervention Research Funding Program\" (http://www.researchnet-recherchenet.ca/rnr16/viewOpportunityDetails.do?prog=399&view=search&terms=intervention+research&org=CIHR&type=AND&resultCount=25). This program was targeted towards funding research that examines \"natural interventions\" which are out of the control of researchers yet provide important, time-sensitive research opportunities. Accordingly, this funding opportunity did not follow the typical application and review process, allowing for applications to be submitted whenever the natural intervention opportunity presented itself (e.g. no fixed application date) and providing a thorough though expedited review process. A rapid response research team (RRRT) was convened in Saskatoon, Canada to discuss the appropriateness and viability of research possibilities to assess and study the impact of the revitalization and re-launch of ParticipACTION in the form of a natural intervention. The research team represented a good balance of experience with ParticipACTION, intervention research and natural experiments, and physical activity monitoring and measurement. The opportunity to collect pre-resurrection, baseline data at both individual and organizational levels was time-limited and required immediate action that this team could take. The objectives of this initial meeting were to:\u2022 Take advantage of a natural experiment opportunity\u2022 Test the new CIHR funding mechanism/opportunity\u2022 Test the viability and responsiveness of the research community RRRT concept\u2022 Link previously unlinked researchers and create networking opportunities\u2022 Challenge ParticipACTION's commitment to research and evaluation\u2022 Exploit and advance the learnings from the \"Canada on the Move\" initiative \u2022 Produce a letter of intent for the CIHR funding opportunity. Two letters of intent were generated at this initial meeting of the RRRT.
One proposal focused on individual level data and a second on organizational level capacity (described further below). The cascade of events related to the preparation and outcome of this meeting are summarized below.Key dates:\u2022 CIHR funding announcement: December 8, 2006\u2022 ParticipACTION resurrection funding confirmed: December 8, 2006\u2022 Funding request to CIHR and ParticipACTION to host the initial RRRT meeting: December 19, 2006\u2022 Notice of support for meeting funding request: December 22, 2006\u2022 RRRT meeting: January 19-20, 2007\u2022 Record of Discussion of meeting completed: January 21, 2007\u2022 Two letters of intent submitted to CIHR: January 24, 2007\u2022 Approval of two letters of intent received by email: February 21, 2007\u2022 Two full research proposals submitted to CIHR: March 21, 2007\u2022 Notification that both proposals were funded: June 18, 2007\u2022 Data collection for both projects: August 2007 - February 2008Six months passed between conception of the idea and full research funding being approved and within 13 months of the RRRT first meeting, the research data collection was complete. This very rapid sequence of events was necessary to collect baseline data before the first campaign of the \"new\" ParticipACTION was widely recognized.The ParticipACTION proposals were the first to be reviewed through the new CIHR funding mechanism and although the process went reasonably smoothly (as indicated by the sequence of events summarized above) the CIHR made subsequent adaptations to the review procedures in an effort to further expedite the process. In summary, the new CIHR funding opportunity allowed for the successful completion of this research - an opportunity previously unavailable through traditional funding mechanisms in Canada. Both the CIHR and the RRRT benefited and learned from this novel funding mechanism. This is a good example of constructive and productive cooperation and collaboration between researchers and a research granting agency.To assess baseline awareness and understanding of ParticipACTION at an individual level, a population-based survey was conducted on a monthly basis over a period of six months from late August 2007 to February 2008. The survey consisted of a set of questions on the Physical Activity Monitor conducted by the Institute for Social Research at York University on behalf of the Canadian Fitness and Lifestyle Research Institute (CFLRI). The details of this project and the findings of the baseline data collection are described in the paper by Spence and colleagues .The examination of population based physical activity campaigns has been limited with a focus on dissemination and accuracy, the behavioural effect of messages, and/or the recall of media items by the general public -17. LessA sample of respondents from the quantitative survey were conAn assessment of the inaugural mass communications campaign of the \"new\" ParticipACTION was completed using an internet-based survey conducted by Angus Reid Strategies. This survey evaluated awareness of ParticipACTION, its recent campaign and assessed the recall and interpretation of the campaign messaging in both the target market (parents with children aged 7-12 years) and others. The findings from this campaign evaluation are presented in the paper by Craig and co-workers .Finally, Bauman et al. provide ParticipACTION has a distinguished history and can serve to inform future physical activity social marketing initiatives. 
The resurrection of ParticipACTION provided a unique research opportunity to establish baseline data to allow for future campaign assessments. A rapid response research team was established to exploit this opportunity and the new research funding opportunity of the CIHR allowed this opportunity to flourish. The findings from the baseline research are summarized in the papers that follow this brief introduction. This research provides a valuable platform for monitoring the future impact of ParticipACTION on the physical activity levels of Canadians. It is also hoped that this short series of papers informs the efforts of others attempting to assess the impact of physical activity communications and social marketing programs and campaigns elsewhere. The authors have been involved on various committees and advisory groups for both the previous and \"new\" ParticipACTION, including the Board of Directors (MST). The Canadian Fitness and Lifestyle Research Institute (CFLRI) was previously contracted by the \"new\" ParticipACTION to perform research and surveillance - CLC is President of CFLRI but received no personal compensation for the contracted work. MST conceived of and initiated the study of the natural intervention of the re-emergence of the \"new\" ParticipACTION. MST and CLC co-wrote this manuscript."} +{"text": "Sir, Periodontitis is a destructive inflammatory disease of the supporting tissues of the teeth and is caused by specific microorganisms or groups of specific microorganisms, resulting in progressive destruction of the periodontal ligament and alveolar bone with periodontal pocket formation, gingival recession or both."} +{"text": "A versatile automated apparatus, equipped with an artificial intelligence, has been developed which may be used to prepare and isolate a wide variety of compounds. The prediction of the optimum reaction conditions and the reaction control in real time are accomplished using novel kinetic equations and substituent effects in an artificial intelligence software which has already been reported [1]. This paper deals with the design and construction of the fully automated system, and its application to the synthesis of a substituted N-amino acid. The apparatus is composed of units for performing various tasks, e.g. reagent supply, reaction, purification and separation, each linked to a control system. All synthetic processes, including washing and drying of the apparatus after each synthetic run, were automatically performed from the mixing of the reactants to the isolation of the products as powders with purities of greater than 98%. The automated apparatus has been able to run for 24 hours per day, and the average rate of synthesis of substituted N-amino acids has been three compounds daily. The apparatus is extremely valuable for synthesizing many derivatives of one particular compound structure. Even if the chemical yields are low under the optimum conditions, it is still possible to obtain a sufficient amount of the desired product by repetition of the reaction. Moreover it was possible to greatly reduce the manual involvement of the many syntheses which are a necessary part of pharmaceutical research."} +{"text": "The immune response of the neonate is poor and is dependent on passive immunity provided by maternal Ig. However, here we show that exposure of the neonate to environmental antigens induces a germinal center (GC) reaction. In the peripheral blood of premature infants one finds IgG class switched B cells expressing a selected V-gene repertoire. 
These data suggest that restrictions in the repertoire rather than immaturity of the immune system is responsible for the poor immune responses of the neonate."} +{"text": "Radial nerve compression by a ganglion in the radial tunnel is not common. Compressive neuropathies of the radial nerve in the radial tunnel can occur anywhere along the course of the nerve and may lead to various clinical manifestations, depending on which branch is involved. We present two unusual cases of ganglions located in the radial tunnel and requiring surgical excision.A 31-year-old woman complained of difficulty in fully extending her fingers at the metacarpophalangeal joint for 2 weeks. Before her first visit, she had noticed a swelling and pain in her right elbow over the anterolateral forearm. The extension muscle power of the metacarpophalangeal joints at the fingers and the interphalangeal joint at the thumb had decreased. Sonography and magnetic resonance imaging of the elbow revealed a cystic lesion located at the area of the arcade of Frohse. A thin-walled ovoid cyst was found against the posterior interosseous nerve during surgical excision. Pathological examination was compatible with a ganglion cyst. The second case involved a 36-year-old woman complaining of numbness over the radial aspect of her hand and wrist, but without swelling or tumor in this area. The patient had slightly decreased sensitivity in the distribution of the sensory branch of the radial nerve. There was no muscle weakness on extension of the fingers and wrist. Surgical exposure defined a ganglion cyst in the shoulder of the division of the radial nerve into its superficial sensory and posterior interosseous components. There has been no disease recurrence after following both patients for 2 years.Compression of nerves by extraneural soft tissue tumors of the extremities should be considered when a patient presents with progressive weakness or sensory changes in an extremity. Surgical excision should be promptly performed to ensure optimal recovery from the nerve palsy. Compressive neuropathies are important and widespread debilitating clinical problems. The two most common compressive peripheral nerve disorders in the upper limb are carpal tunnel syndrome and cubital tunnel syndrome, however, radial tunnel syndrome occurs less frequently . The radA 31-year-old woman complained of difficulty in fully extending her fingers at the metacarpophalangeal joint for 2 weeks. Before her first visit, she had noticed swelling and pain in her right elbow over the anterolateral forearm. Examination revealed a tender swelling in the anterolateral region of the antecubital fossa but no clinical evidence of a mass. The extension muscle power of the metacarpophalangeal joints at her fingers and the interphalangeal joint at her thumb had decreased. The patient had full strength with resisted wrist extension and resisted supination. The radial nerve sensibility was normal. An electromyogram and nerve conduction study showed early partial neuropathy of the posterior interosseous nerve. Sonography revealed a mass continuous with the posterior interosseous nerve immediately distal to the radiocapitellar joint. Magnetic resonance imaging (MRI) of the elbow revealed a cystic lesion located at the area of the arcade of Frohse, which was attached to the anterior capsule of the radiocapitellar joint Figure . Due to A 36-year-old, right-handed woman complained of numbness over the radial aspect of her hand and wrist. 
She complained of pain in the lateral aspect of her elbow but did not notice swelling or tumor in this area. On physical examination, a mass could not be definitely detected by palpation. The patient had slightly decreased sensitivity in the distribution of the sensory branch of the radial nerve. There was no muscle weakness on extension of her fingers and wrist. Electrodiagnostic studies were consistent with the diagnosis of neuropathy of the sensory branch of the radial nerve. Ultrasonography and MRI each demonstrated a well-defined mass just anterior to the radiocapitellar joint Figure . SurgicaRadial nerve entrapment in the radial tunnel is uncommon in peripheral nerve compressive neuropathies. There are three different types of palsy in the radial tunnel syndrome: posterior interosseous nerve palsy, neuropathy of the sensory branch of the radial nerve, and neuropathy of both nerves ,3. The pet al. reported 14 patients presenting with posterior interosseous nerve (PIN) paralysis due to a ganglion at the elbow, located proximal to the proximal edge of the supinator muscle in 13 cases and distal in one [Our first case experienced lateral forearm pain that is initially often difficult to distinguish from lateral epicondylitis, synovitis of the radiocapitellar joint, and a muscle tear of the extensor carpi radialis brevis . Howeverl in one . These fCompression of the posterior interosseous nerve by a ganglion was first reported by Bowen in 1966 . He recoWhen a ganglion is not detected by palpation in cases with palsy of the posterior interosseous nerve or sensory branch, differential diagnosis of the cause of the palsy may be difficult. Compression of nerves by extraneural soft-tissue tumors of the extremities, although not common, should be considered when a patient presents with progressive weakness or sensory changes in an extremity. This is true whether or not a soft-tissue mass is found during examination, since an occult soft-tissue tumor was found in approximately one-third of patients at the time of operation in a report by Barber .Ultrasound and MRI have been used to detect space-occupying lesions causing nerve compression . UltrasoStandard surgical management for persistent neuropathy, refractory to non-surgical treatment, is open decompression of the radial nerve. This can be done through a variety of anterior or lateral approaches. The approach includes addressing all of the potential sites of compression in addition to excising the mass lesions described previously which may cause compression of the radial nerve.Radial nerve compression by a ganglion in the radial tunnel is an uncommon condition. We report two cases of posterior interosseous nerve and superficial sensory branch compression by a ganglion cyst. The value of this case report to the practicing physician is that it sheds light on the importance of familiarity with the possible presentation, anatomic location and differential diagnosis to facilitate corrective diagnostic approaches and timely management.Written informed consent was obtained from both patients for publication of this case report and any accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal.The authors declare that they have no competing interests.IMJ participated in the surgical interventions, contributed to the study concept and design and drafted the manuscript. HNW participated in the medical interventions, took the photographs and undertook the literature review. 
ISY was involved in reviewing the histological section of the case and proofreading of the manuscript. WRS participated in the surgical interventions and helped draft the final version of this manuscript to be published. All authors read and approved the final manuscript."} +{"text": "Oral potassium permanganate has been used in the management of terminal carcinoma in three patients. Symptomatic improvement occurred in all three, with elimination of oral foetor in one patient and diminished requirement of analgesics in the other two. The mental state of each patient was improved and normal activities were resumed."} +{"text": "Laser in vivo confocal microscopy noninvasively provides images that are equivalent to high quality histology. We have now applied this technique to identify pathological characteristics of traumatic recurrent corneal erosion (RCE).Six eyes of six patients with traumatic RCE were studied. Corneas were examined with a slit lamp biomicroscope and with a laser in vivo confocal microscope (Heidelberg Retina Tomograph II\u2013Rostock Cornea Module or HRTII-RCM) at various times after the onset of the most recent recurrence of corneal erosion.Brightly reflective granular structures were detected by the HRTII-RCM system in the basal and wing cell layers of the corneal epithelium in all eyes affected by recurrent erosion. Activated keratocytes and scattered fine particles were also apparent in the shallow stroma of five of the six affected eyes. These features were not observed in the normal cornea.The HRTII-RCM system allows detection of characteristic abnormal structures in the cornea of individuals with traumatic RCE. The presence of granular structures in the corneal epithelium as well as persistent inflammation in the shallow stroma may contribute to the deterioration of the corneal epithelial cell alignment and to the weakening of adhesion between the basal epithelial cells and the basement membrane in RCE lesions. Recurrent corneal erosion (RCE) is characterized by repeated episodes of corneal epithelial defects that are usually accompanied by the sudden onset of eye pain upon awakening as well as by hyperemia, photophobia, and tearing . RCE is The development of confocal optics has allowed changes in corneal cells to be examined layer by layer ,6. FurthWe undertook a retrospective evaluation of six patients with traumatic unilateral RCE at Yamaguchi University Hospital, Ube, Yamaguchi Japan. We also examined 30 volunteers (30 eyes) without any apparent pathology of the cornea. The six patients were referred to our hospital by their local ophthalmologists for diagnosis or treatment of their corneal epithelial disorders. All patients had at least two episodes of corneal erosion since the last corneal trauma. Sex, age, best corrected visual acuity (BCVA) at the initial visit, date of onset of the most recent erosion episode, and corneal lesion findings by slit lamp biomicroscopy and in vivo confocal microscopy were recorded. All subjects were informed of the aims of the study, and their consent was obtained. The study was approved by the institutional review board of Yamaguchi University Hospital.In vivo confocal microscopy was performed with the HRTII-RCM system. Before examination, a drop of gel was placed on the front side and inside of a TomoCap . One drop of topical anesthetic and one drop of gel (Comfort Gel) were instilled into the lower conjunctival fornix of each eye. 
Corneal lesions were scanned by alternating vertical and horizontal movement of the applanating TomoCap from the upper edge of the lesion toward the lower edge. Several confocal microscopic images of tangential optical sections of the superficial, wing, and basal cell layers of the corneal epithelium, the stroma, and the endothelium were obtained for each eye. Oblique sections were also obtained when necessary. All in vivo confocal microscopy was performed by a single investigator (N.T.). No complications with the HRTII-RCM system were noted during examination.The clinical characteristics of the six patients with traumatic unilateral RCE are shown in Abnormal findings by in vivo laser confocal microscopy in the corneas affected by RCE includedWe divided the six cases into three groups on the basis of the time between the onset of the most recent episode of RCE and examination by in vivo confocal microscopy. Cases 1 and 2 were in the early stage group because in vivo confocal microscopy was performed within two and three days after the onset of RCE, respectively. All of the abnormal findings for RCE with the exception of keratoprecipitates for case 1 were observed with the HRTII-RCM system in the corneas of these two cases. Cases 3 and 4 were in the mid-stage group because in vivo confocal microscopy was performed at\u00a010 and 16 days, respectively, after the onset of RCE. The in vivo confocal microscopic images for cases 3 and 4 revealed enlarged basal cells, brightly reflective granular structures in the corneal epithelium and Bowman\u2019s layer, activated keratocytes in the shallow stroma, and scattered fine particles in the shallow stroma. In contrast, gaps in the epithelial cell layers and inflammatory cell infiltration in the mid stroma were not observed in this group. Cases 5 and 6 constituted the late stage group in which in vivo confocal microscopy was performed 30 days or more after the onset of RCE, respectively. Both corneas of this group manifested brightly reflective granular structures in the corneal epithelium and Bowman\u2019s layer whereas case 5 also exhibited activated keratocytes in the shallow stroma and scattered fine particles in the shallow stroma. The in vivo confocal microscopic findings for all cases are summarized in Corneal epithelial debridement was performed for the affected area in four patients . These four patients had not developed any recurrence of erosion, and no brightly reflective granular structures in the corneal epithelium were detected with the HRTII-RCM system one month after the treatment .We have identified candidate pathological characteristics of the lesions associated with traumatic RCE with the use of in vivo laser confocal microscopy. Among these characteristics, the irregularity of superficial epithelial cell alignment is suggestive of the active migration of epithelial cells to cover epithelial defects whereas the gaps in epithelial cell layers appear to reflect such defects or incomplete epithelial cell migration. The enlargement of basal epithelial cells likely represents a compensatory response to the lack of overlying epithelial cells. The absence or reduced number of subepithelial nerves suggests that the nerves are destroyed or damaged in association with the epithelial defect. Given that subepithelial nerves appeared normal in cases 5 and 6, it is possible that the damaged nerves regenerate with time after epithelial erosion. 
The infiltration of inflammatory cells in the mid stroma and the presence of keratoprecipitates on the endothelium are indicative of an acute inflammatory response secondary to epithelial defects. These characteristics were apparent in the early and mid stages of the erosion episodes and are likely not specific to RCE. In contrast, the brightly reflective granular structures in the corneal epithelium and Bowman\u2019s layer were observed at all stages of RCE in the cases studied. These structures likely represent deposition of hyperreflective material, most probably cell debris. In vivo confocal microscopy performed after LASIK has also revealed brightly reflective particles at the corneal interface ,12,13. TThe cause of RCE after traumatic injury is thought to be the failure to recover tight adhesion between the wounded corneal epithelial cells and the underlying stroma. RCE can also result from loss of adhesion due to a defect in the basement membrane in the cornea of individuals with epithelial basement membrane dystrophies . In suchEven in asymptomatic stages of traumatic RCE as exemplified by cases 4, 5, and 6, we observed brightly reflective granular structures in the basal and wing cell layers of the corneal epithelium as well as a persistent mild inflammatory response in the shallow stroma. Although RCE can occur in patients with epithelial basement membrane dystrophies and after traumatic injury, its pathology appears to differ between the two types of case. Even though the present study includes only a small number of cases, our findings provide a basis for speculation on a possible mechanism for traumatic RCE. This condition may thus result from a failure of reconstruction of cell-cell or cell-matrix adhesion structures at the lesion site. This failure may contribute to deterioration of the corneal epithelial cell alignment and to the weakening of adhesion between basal cells and the basement membrane. It may also lead to mild inflammation involving the activation of keratocytes in the shallow stroma and the consequent operation of a vicious cycle involving the epithelium and shallow stroma.The HRTII-RCM system allows detection of abnormal structures in the cornea. Indeed, in vivo laser confocal microscopy noninvasively provides high resolution images that are equivalent to high quality histology, having been termed \u201cin vivo biopsy.\u201d A longitudinal study of patients with RCE is required to provide insight into the natural course of the pathological changes associated with this condition. Furthermore, the efficacy of treatments for RCE such as debridement and bandage contact lenses warrants evaluation with the HRTII-RCM system.In conclusion, in vivo laser confocal microscopy reveals pathological characteristics of RCE. Although the diagnosis of RCE should be made on the basis of clinical progress and careful examination with a slit lamp biomicroscope, the in vivo laser confocal microscope is a powerful tool that can contribute to the diagnosis and management of RCE."} +{"text": "Objective: Avulsion injuries of the dorsum of the foot and ankle present difficult reconstructive problems especially in pediatric patients. Usually, there is exposure of bones, joints, tendons, or ligaments, which requires coverage by flaps. The best skin for coverage is the local skin, but the remaining intact skin is usually limited. Usage of such local skin necessitates elevation of long and narrow skin flaps. These flaps need delay to survive. 
Method: This study included 8 children who sustained avulsion injuries to their feet and ankles in road traffic accidents. Debridement and delaying flaps using bipedicled flap elevation technique were done in the first session. Dressing of the raw areas was done while the flaps were being delayed. The delayed flaps were elevated in the second session to cover any exposed bone, joints, or ligaments. Split thickness skin grafts were applied to the donor site and to the granulating raw areas. Results: Complete survival of the flaps and complete take of the skin grafts with minimal donor site morbidity. Conclusion: This technique of delaying flaps from the intact skin adjacent to the defect is safe, successful and allows minimal hospital stay. The skin of the dorsum of the foot and ankle is vulnerable to avulsion trauma, because it is thin and loosely attached to the underlying tendons, ligaments, and bones.Eight pediatric patients who sustained avulsion injuries to their foot and ankle in road traffic accidents were managed. They were 7 male and 1 female. They ranged in age between 4 and 12 years. They presented with avulsion injuries involving the dorsum of the foot and lateral ankle regions. There was fracture of lower tibia and fibula in one case that necessitated external fixation. There were different combinations of exposure of bones, tendons, ligaments, and joints. The edges of the wounds showed friction burns. The preoperative photo of one of the patients on admission is shown in Figure The 2-week delay in coverage of the raw area allowed granulation of most of the bed except exposed bone at the ankle joints. The 2-week gap in addition allowed healing of the friction burns at the edges of the wound. The delayed flaps survived completely and provided adequate coverage. The take of the split-thickness skin grafts over the donor site of flaps and over the granulated raw area was 100%. The donor site of split-thickness skin grafts healed in 2 weeks. The splints provided stability of the ankle joint in a right-angle position and prevented shearing of the split-thickness skin grafts. Loss of the dorsi-flexors of the lateral toes in trauma did not constitute a major functional deficit. The aesthetic outcome was acceptable because there was no donor site morbidity in any region away from the site of injury. Apart from the patient who had external fixation, the patients became ambulatory after complete wound healing. No subsequent breakdown of the grafted skin or unstable scars with regular wearing of shoes were found. There were hypertrophic scars in 3 patients, which necessitated wearing pressure garments for 3 months. The patients walked without noticeable limping after 2 to 3 months. The relatively short follow-up period did not allow investigating the effect on growth (limb-length discrepancy). The postoperative photos of 2 children are shown in Figures Reconstruction of the soft tissue defect on the dorsum of the foot and ankle is challenging. Local skin flaps are limited by the area of the remaining nonavulsed skin on the dorsum of the foot. Reverse-flow fasciocutaneous flaps have the disadvantages of sacrificing an important artery in the leg and the obvious contour deformity of the donor site.The remaining intact skin on the dorsum of the foot can be used for reconstruction, but the risk of impaired vascularity is high because the flap will be long and narrow. To avoid this risk, the flap was delayed. 
A surgical delay increases the survival length of a flap by allowing the choke vessels to dilate between adjacent perforators.4The advantages of the local delayed flaps are numerous. They include safe flap elevation regarding vascularity, no donor site morbidity in leg, and no flap crossing the ankle joint. One sheet graft can be applied to both raw area and the adjacent donor site of flap. It is a narrow and thin skin flap and will not add any bulk or produce obvious dog ears. The contracture, which is expected to happen in skin graft on the long term, compensates for the loss of dorsiflexors of lateral toes.This technique is advised to be used by surgeons who do not have free-flap expertise or who cannot achieve high success rate. Thin free cutaneous flaps surely will provide total coverage with no need for skin graft and better local aesthetic results."} +{"text": "Bos gaurus (H. Smith) (Artiodactyla: Bovidae) and Asian elephant, Elephas maximus Linnaeus (Proboscidea: Elephantidae), is reported from the moist forests of Western Ghats, in South India. The dominance of dwellers over rollers, presence of many endemic species, predominance of regional species and higher incidence of the old world roller, Ochicanthon laetum, make the dung beetle community in the moist forests of the region unusual. The dominance of dwellers and the lower presence of rollers make the functional guild structure of the dung beetle community of the region different from assemblages in the moist forests of south East Asia and Neotropics, and more similar to the community found in Ivory Coast forests. The ability of taxonomic diversity indices to relate variation in dung physical quality with phylogenetic structure of dung beetle assemblage is highlighted. Comparatively higher taxonomic diversity and evenness of dung beetle assemblage attracted to elephant dung rather than to gaur dung is attributed to the heterogeneous nature of elephant dung. Further analyses of community structure of dung beetles across the moist forests of Western Ghats are needed to ascertain whether the abundance of dwellers is a regional pattern specific to the transitional Wayanad forests of south Western Ghats.The community structure of dung beetles attracted to dung of gaur, Rollersnesters) .Dung beetles have a variety of effects on the ecosystem. By burying dung and carrion as food for their offspring, dung beetles may increase the rate of soil nutrient cycling and redue.g., according to their diet and their resource-relocation behavior). Dung beetle communities are strongly influenced by dung type and they change in relation to the availability of different dung types and elephants is the preferred resource for several dung beetle species and gaur, Bos gaurus (H. Smith) (Artiodactyla: Bovidae), are the major mega-mammalian herbivores in the moist forests of Western Ghats (n\u00b0 53\u2032 N latitude and 76\u00b001\u2032 E longitude), 100 km North of Calicut, Kerala state provide a wide choice of resource materials for grazers and browsers . Earlier studies on the succession pattern of dung beetles showed that dung pats that were 3\u20137 days older attracted a subset of species that were not attracted to fresh dung , Calicut and Indian Agricultural Research Institute (IARI), New Delhi.Rainfall data was collected from the records of Kerala State Electricity Board at Thirunelly. Humidity and forest floor temperature were assessed with thermo-hygrometer. 
Slope of the terrain was calculated using the trigonometric formula \u2018tan\u03b8\u2019 (where \u2018\u03b8\u2019 is the angle of inclination).+) and variation in taxonomic distinctness (\u039b+) indices with respect to the master list values were estimated by drawing 95% confidence funnels using Primer package and two first reports from the Western Ghats (Onthophagus laevis and O. brutus) , and Drepanocerus setosus, a dweller (10.5%), dominated the assemblage (P = 0.09). 21 species belonging to 10 genera, six tribes and three nesting guilds were collected from elephant dung were lower in comparison to elephant dung beetle assemblage . Values of both taxonomic diversity indices fell within the 95% limits of the probability funnel indicating that taxonomic diversity of both the assemblages did not vary significantly from the regional species pool.As shown in brutus) . Beetlessemblage . Tunneleundance) . Smallerant dung . Bray CuA taxonomic diversity index is a measure of biodiversity that indicates how different the species in a habitat are from each other . The taxThe first report of the community structure and diversity of dung beetles in a moist forest locality in Western Ghats and South Asian region is provided. Most conspicuous is the difference observed in the guild structure of the community, when compared to dung beetle assemblages from other moist forests of the Afrotropical region. Dwellers are the dominant functional guild after tunnelers, and rollers are lower in richness and abundance. Such high abundance of dwellers is reported previously only from the moist forests of Ivory Coast in Africa.Onthophagus laevis and O. brutus) from Western Ghats and high abundance of endemics (32.6 %) indicate that further characterization of the dung beetle faunal diversity of other forests of Western Ghats down to more local scales may reveal more details of the regional variation in endemism and localised distribution patterns.Combining the 37 species recorded from the present study along with 7 species reported exclusively from elephant dung leads toO. laetum, whose overall abundance is very low in south east Asian forests . Dominance by a few tunneler, roller or guild unspecified species in the range of 56.7 % to 29.4% or tunneler species alone in the range of 34% 40.5%, is a general pattern of tropical moist forest dung beetle communities. Moist forests of the Ivory Coast . A similar situation exists in the moist forests of Ivory Coast in Western Africa with the abundance of Oniticellus pseudoplanatus (Oniticellini) and is attributed to the availability of undisturbed elephant dung pats in the region and large rollers (Gymnopleurus) are abundant in the middle and low elevation moist forests values indicate the presence of a phylogenetically closely related dung beetle assemblage. Analysis of taxonomic evenness by truncating the tree at various places and by removing the speciose genera showed that both taxonomic evenness and diversity of gaur dung beetle assemblage equaled that of elephant when species distribution under the genera Onthophagus and Caccobius were made even in both assemblages. High unevenness in taxonomic structure of the gaur dung beetle assemblage arises from the overrepresetation of Onthophagus and Caccobius species. 
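As a rough, self-contained illustration of the two indices discussed here, the sketch below computes average taxonomic distinctness (Δ+) and its variation (Λ+) for a presence/absence species list, assuming equal step lengths between taxonomic ranks (the usual PRIMER default weighting); the lineage table is hypothetical and only stands in for the real assemblage data.

from itertools import combinations

def taxonomic_distinctness(lineages):
    """Average taxonomic distinctness (Delta+) and its variation (Lambda+)
    for a presence/absence species list.  `lineages` maps each species to a
    tuple of higher taxa ordered upward from the lowest rank, e.g. (genus, tribe).
    Equal step lengths between ranks are assumed."""
    species = list(lineages)
    n_ranks = len(next(iter(lineages.values())))
    path_lengths = []
    for a, b in combinations(species, 2):
        # rank steps separating the pair: 1 = congeneric, 2 = same tribe but
        # different genus, ..., n_ranks + 1 = joined only above the highest rank given
        first_shared = next((i for i in range(n_ranks)
                             if lineages[a][i] == lineages[b][i]), n_ranks)
        path_lengths.append(first_shared + 1)
    delta_plus = sum(path_lengths) / len(path_lengths)
    lambda_plus = sum((w - delta_plus) ** 2 for w in path_lengths) / len(path_lengths)
    return delta_plus, lambda_plus

# hypothetical mini-assemblage, species -> (genus, tribe), for illustration only
assemblage = {
    "Onthophagus sp. A": ("Onthophagus", "Onthophagini"),
    "Onthophagus sp. B": ("Onthophagus", "Onthophagini"),
    "Caccobius sp. A": ("Caccobius", "Onthophagini"),
    "Ochicanthon laetum": ("Ochicanthon", "Canthonini"),
}
print(taxonomic_distinctness(assemblage))

Packing several species into a single genus makes the pairwise path lengths uneven, which shows up as a larger Λ+; this is the same effect the text describes for the Onthophagus- and Caccobius-rich gaur dung assemblage.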
The presence of 24 species of Onthophagus and 3 species of Caccobius in gaur dung (65% of the species attracted to gaur dung from genus Onthophagus and 73% from genus Onthophagus and Caccobius) compared to the presence of 7 Onthophagus species (33.3%) and the absence of Caccobius in elephant dung, reduced the taxonomic evenness of gaur dung beetle assemblage. This variation is distinctly shown by \u039b+, as the variation in taxonomic distinctness index is sensitive to variations in taxonomic evenness of the assemblage and the presence of speciose genera reduces the taxonomic evenness of the assemblage which is reflected as higher \u039b+ values.Although species richness was higher in the dung beetle assemblages attracted to gaur dung pats, high \u039b+ of the assemblages showed lesser variations than \u039b+, as \u0394+ considers only the relatedness between individual member species involved and not the taxonomic evenness properties of the assemblage.he overrepresentation of closely related species, and the resulting high uneveness of the taxonomic struture of dung beetles attracted to gaur dung in comparison to elepahnt dung, we relate to the coarse and fine dung preferences of dung beetles , and to O. andrewesi, an endemic of the Western Ghats, and D. setosus recorded only from the Indian continent, and the higher incidence of the old world roller O. laetum, makes dung beetle assemblage in the moist forests of this region unusual. The dominance of dwellers (Oniticellini) over rollers makes the functional guild structure of dung beetle assemblage of the Wayanad forests more similar to the dung beetle community of the Ivory Coast forests of Western Africa and different from those of south East Asian (Borneo) and Neotropical forests. Furthermore, the current study reiterates that the abundance of dwellers is an indicator of the availability of undisturbed dung pats and herbivore abundance in moist forests. However, not enough data exists to establish that the predominance of dwellers, and the low abundance and species richness of rollers, is a general pattern applicable to entire moist forests of Western Ghats. Further studies are necessary to ascertain whether it is a regional pattern specific to the transitional Wayanad forests of Western Ghats alone.In summary, the present study provides for the first time data about community structure of dung beetles from moist forests of Western Ghats, as well from a South Asian region. Though with low species richness, elephant dung attracts a more taxonomically diverse and even dung beetle assemblage than gaur dung that is likely to be related to the more heterogenous physical nature of elephant dung with both fluid and fibrous dung particles. The presence of many endemics (27 %), predominance of"} +{"text": "Neurons in the cortex exhibit a number of patterns that correlate with working memory. Specifically, averaged across trials of working memory tasks, neurons exhibit different firing rate patterns during the delay of those tasks. These patterns include: 1) persistent fixed-frequency elevated rates above baseline, 2) elevated rates that decay throughout the tasks memory period, 3) rates that accelerate throughout the delay, and 4) patterns of inhibited firing (below baseline) analogous to each of the preceding excitatory patterns. Persistent elevated rate patterns are believed to be the neural correlate of working memory retention and preparation for execution of behavioral/motor responses as required in working memory tasks. 
Models have proposed that such activity corresponds to stable attractors in cortical neural networks with fixed synaptic weights. However, the variability in patterned behavior and the firing statistics of real neurons across the entire range of those behaviors across and within trials of working memory tasks are typical not reproduced. Here we examine the effect of dynamic synapses and network architectures with multiple cortical areas on the states and dynamics of working memory networks. The analysis indicates that the multiple pattern types exhibited by cells in working memory networks are inherent in networks with dynamic synapses, and that the variability and firing statistics in such networks with distributed architectures agree with that observed in the cortex. Persistent elevation in firing rates of cortical neurons during retention of memoranda has been suggested to represent the neuronal correlate of working memory While the mechanism(s) by which the patterns of activity are initiated and maintained in working memory are undetermined, a number of plausible hypothesis have been proposed. With respect to persistent elevated-rate patterns, prevailing ideas which have emerged from computational and theoretical studies are that the activity arises as stable states in recurrent attractor networks A potential source of these and other difficulties Functional architecture is a second consideration of potentially fundamental importance to the dynamics of working memory networks. Typically, efforts have focused on studying working memory within the framework of local modules or networks that exist at various specific or general locations in the cortex. However, while working memory and/or working memory-correlated neuronal activity may be maintainable within local networks (or even cellularly), considerable evidence from neurophysiological and imaging studies have shown that working memory involves widely distributed cortical networks across multiple cortical areas Recent work has indicated that working memory networks incorporate dynamic synapses. One study Several computational efforts have attempted to address various aspects of the issues described above. For example, one study demonstrated that persistent activation with realistic frequencies might be achieved if working memory corresponds to attractor states on the unstable branch, and have proposed mechanisms by which such states might be stabilized While working memory models have mostly concentrated on bistable persistent activation, some efforts have also addressed the issue of cue- or response-coupled patterns of activity that steadily increase and decrease during delay periods. For example, graded activity in recurrent networks with slow synapses has been modeled In this work we examine a cortical model of working memory incorporating dynamic synapses both within a local and a distributed cortical framework. We investigate the mechanism of dynamic synaptic facilitation in the generation of all of the different patterns of persistent activity associated with working memory and the effect of a distributed cortical architecture on the dynamics of working memory patterns. We first examine a firing rate model incorporating dynamic synapses representing a working memory network residing locally in a given cortical area. 
We analyze the statistics and firing-rate-patterns of this network during simulated working memory and compare the results with that of real cortical neurons recorded from parietal and prefrontal cortex of monkeys performing working memory tasks. A reduction of this model to a 2-dimensional system enables an analysis to completely characterize the states of the system. We then examined a distributed firing rate model consisting of 2 and 4 locally interconnected networks, analyzing the possible states as a function of different long-range connectivity schemes and strengths. The expansion of the architecture to multiple networks allows the incorporation of possible heterogeneity. We compare the output of these models with the activity of the database of real cortical neurons recorded extracellularly from the prefrontal and parietal cortex of primates performing working memory tasks. The model expands on previous work examining the ability of population models with dynamic synapses to produce bistable memory states, or rate changing states (either cue dependent during the stimulus period\u2014i.e. Barak and Tsodyks F(X) given bym\u03c4 is the membrane time constant, and b is a parameter inversely proportional to the noise. This form of the firing rate function mimics the firing rate of a class I neuron in the presence of noise (\u223c1/b) C in equation (1) is the strength of feedback connections in the population,s\u03c4 is the decay constant for synaptic activity, w corresponds to the synaptic facilitation, and \u03b8 is the threshold. I(t) corresponds to an external current which increases during memorandum (cue) presentation in the simulated working memory task.We start with a firing rate model of a local network . While tw) is incorporated in the model according to w\u03c4 is the decay constant, \u03b3 is a proportionality constant controlling the amount of facilitation as a function of intra-cellular calcium, and Ca is the calcium concentration. oCa is a reference parameter controlling the level of intracellular calcium at which facilitation begins to increase. The calcium concentration dynamics are given by ca\u03c4 is the decay constant, and F(x) is of the form given in equation (2).Dynamic synaptic facilitation and Calcium dynamics satisfy equations similar to (3) which are:The two populations can be considered to reside in different cortical areas or two populations within the same area. For convenience of description we can consider the populations to represent networks in different cortical areas, which for purposes of association with the real cortical data we take as prefrontal cortex (population 1) and parietal cortex (population 2). In these equations then, The distributed architecture is further extended to one consisting of four populations , by recuS), facilitation (W), and calcium concentration (Ca). A reduction of this model to 2 dimensions is achieved by assuming steady state calcium (dCa/dt\u200a=\u200a0) allowing the system to be rigorously analyzed. While assuming steady state calcium does not have an immediate justification from a neurophysiological standpoint, it produces a system with the same attractor structure as the 3-dimensional system and thus allows the rigorous analysis. We carried out analysis of the dynamics and the stability of states of the model using XPPAUT Wmax) to range over values less than the value of the baseline facilitation (Wmin) in equation (3).We begin the analysis first from the single population model. 
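A minimal numerical sketch of a single-population rate model of this kind is given below. The saturating rate function, the way calcium gates facilitation, and every parameter value are illustrative assumptions chosen so the example runs and exhibits a facilitation-sustained elevated rate after a brief cue; they are not the exact functional forms or numbers of equations (1)-(5).

import numpy as np

def rate_fn(x, b=6.0):
    # smooth, saturating population rate function (normalized units); b only
    # sets how soft the threshold is in this sketch
    return 1.0 / (1.0 + np.exp(-b * x))

def simulate(T=40.0, dt=1e-3,
             tau_s=0.08, tau_w=8.0, tau_ca=2.0,        # time constants (s)
             C=2.0, theta=0.5,                          # recurrent gain, threshold
             W_min=0.3, W_max=0.9, gamma=4.0, Ca0=0.2,  # facilitation parameters
             cue=(20.0, 21.0), I_cue=1.2):              # memorandum (cue) input
    n = int(T / dt)
    s, w, ca = 0.01, W_min, 0.0
    r_trace = np.empty(n)
    for i in range(n):
        t = i * dt
        I = I_cue if cue[0] <= t < cue[1] else 0.0
        r = rate_fn(C * w * s - theta + I)              # population firing rate
        r_trace[i] = r
        s  += dt * (-s + r) / tau_s                     # synaptic activity
        ca += dt * (-ca + r) / tau_ca                   # intracellular calcium
        # facilitation decays to W_min and grows toward W_max only while ca > Ca0
        w  += dt * ((W_min - w) / tau_w
                    + gamma * max(ca - Ca0, 0.0) * (W_max - w))
    return r_trace

r = simulate()
print("baseline rate:", round(float(r[:20000].mean()), 3),
      "delay rate:", round(float(r[25000:].mean()), 3))

With these settings the population sits near a low baseline rate until the cue, after which calcium-driven facilitation holds it on an elevated branch for the remainder of the simulated delay.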
The single population possesses 3-dimensional dynamics in the variables for synaptic activity .thus obtaining the correspondence between the mean field and spiking models (as the parameter I(t) is an external current occurring during the presentation of memoranda, and amp is the amplitude of the Weiner noise. The synaptic activity of a unit js in equation (10) increases with each afferent spike according to \u03b2 corresponds to the increase in synaptic activity from a single afferent spike, mjt is the time of incidence of the mth afferent spike on the jth neuron, and \u03c4s is the decay constant. The dynamics of the synaptic facilitation jw in equation (10) is given by Ca corresponds to the intercellular calcium concentration which modulates the change in facilitation and increases with each spike according to The membrane potential dynamics of a unit in the spiking model is given by the equation The 2-population spiking network consisted of two \u201clocal\u201d networks of 100 neurons each with all-to-all connectivity, and with average weaker recurrent connectivity between populations than within the populations. The activation properties of each individual network reflect that of the single populations of the firing rate models.I(t) was applied for 300 ms. The external current raises the firing rate of many units in the populations, simulating the activity observed during presentation of the memorandum in working memory tasks. The current input and increased firing rate triggers dynamic facilitation through equations (11\u201313). After the cue period, the delay period begins. For the spiking model simulations, unit activity was analyzed over an 11-second delay period which is proportional to the delay period of the working memory tasks during which the parietal and prefrontal cells of the database were recorded. PSTH histograms of units were generated to analyze the patterns and firing rate statistics of the units. Average PSTH histograms were generated for each unit over 10 simulated working memory task trials. Pattern types appearing in the average PSTH histograms were determined and the distribution of patterns in the network were compared to the distribution of patterns observed in the parietal and prefrontal neuron populations of the database. Variability in working memory patterns occurring across trials for each unit was analyzed and compared between the 2- and 4-population networks and the neuronal populations.For the 2- and 4-population spiking networks, working memory task simulations were conducted similarly to those for the firing rate model, and the firing rate patterns and statistics were analyzed. During the baseline period of the simulated working memory task, facilitation in the models was kept low such that the firing rate of the populations was near the baseline fixed-point attractor state inherent in the model . After 20 seconds, the baseline period ended and an external current To examine the effect of network size on patterns exhibited in the networks across trials and their variability, we generated 2- and 4-population networks consisting of 2000 spiking units. For these networks the distribution of pattern types exhibited on each of 20 simulated working memory task trials was obtained and the average distribution across all 20 trials was determined. These distributions were compared to the distributions obtained with the 200 unit networks as well as that observed in the parietal and prefrontal neuron populations of the database. 
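The per-unit analyses described here (trial-averaged PSTHs) and the interspike-interval statistics taken up next can be sketched as follows; the spike trains are synthetic Poisson stand-ins with arbitrary rates and bin width, not output of the model.

import numpy as np

rng = np.random.default_rng(0)

def poisson_spikes(rate_hz, t_start, t_stop):
    """Homogeneous Poisson spike train on [t_start, t_stop), in seconds."""
    n = rng.poisson(rate_hz * (t_stop - t_start))
    return np.sort(rng.uniform(t_start, t_stop, n))

def psth(trials, t_max, bin_s=0.5):
    """Trial-averaged firing rate (Hz) in fixed bins across a list of spike-time arrays."""
    edges = np.arange(0.0, t_max + bin_s, bin_s)
    counts = sum(np.histogram(spikes, edges)[0] for spikes in trials)
    return edges[:-1], counts / (len(trials) * bin_s)

def isi_cv(spike_times):
    """Coefficient of variation of the interspike intervals of one trial."""
    isi = np.diff(spike_times)
    return isi.std() / isi.mean() if isi.size > 1 else float("nan")

# synthetic "memory cell": 10 Hz baseline (0-20 s), 18 Hz delay (20-31 s), 10 trials
trials = [np.concatenate([poisson_spikes(10, 0, 20), poisson_spikes(18, 20, 31)])
          for _ in range(10)]
bins, rate = psth(trials, t_max=31)
print("mean delay-period rate (Hz):", round(float(rate[bins >= 20].mean()), 1))
print("CV of ISIs, trial 1:", round(float(isi_cv(trials[0])), 2))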
Variability in firing rate within trials was determined through an analysis of the coefficient of variation (CV) of the ISI's during the baseline and delay periods. Variability in working memory patterns occurring across trials for each unit was analyzed and compared to that observed in the 2- and 4-population networks of 200 units.The database with which the different models' activity is compared consists of 812 neurons recorded extracellularly from the parietal cortex and prefrontal cortex of monkeys performing working memory tasks. In parietal cortex, 521 cells were recorded from monkeys during performance of a haptic delayed matching-to-sample task S and W for synaptic activity and facilitation respectively) and thus phaseplane and rigorous mathematical analysis was carried out. The nullclines of the system (ds/dt\u200a=\u200adw/dt\u200a=\u200a0). The steady states of the system are defined by the points at which these 2 curves intersect. For sufficiently low self connectivity strengths (or low maximum facilitation), only one such point is present, corresponding to the baseline firing rate of the population significantly determines the fixed points of the system (where the facilitation nullcline intersects the synaptic activity nullcline). While this parameter is varied over a large percentile range , the resulting change in actual facilitation realized by the network is within the range of 10% to 60%\u2013within the range of reported increases that are less the background value (Wmin) in equation (3). For sufficiently small values of synaptic depression, as was the case for facilitation, the network exhibits the non-responsive pattern. As synaptic depression is increased the network exhibits the inhibitory pattern which is the mirror image of decaying memory cells , and inhibitory-inhibitory (I-I). Slightly different values for self-feedback connections strengths within the two populations were chosen.As was the case for the single-population model the PSTH histograms reveal that the 2-population model exhibits the excitatory patterns of memory and decaying-rate or ramping cells with a continuum of rate differences. The inclusion of inhibitory connections results in the presence of parameter ranges in which all of the inhibitory patterns (mirroring the excitatory ones) occur. These inhibitory patterns can occur purely as a function of inhibitory inter-population connectivity, without incorporating dynamic synaptic depression as was the case for the single population. In addition the inhibitory pattern of increasing inhibition throughout the delay (mirroring the excitatory ramping cells) which was absent in the single population model, now can occur .In the phase diagrams of the 2-populations it can bWe next consider the effect on the states of the network when the model is extended to 4 populations. In the 4-population model all of the patterned activities continue to be present over a continuum range of increases and decreases in firing frequencies. However the distributed architecture results in a \u201cspecialization\u201d of pattern activity within specific populations. As can be seen in the phase diagrams C and 6DWe next examine the statistics and dynamics of the spiking version of the distributed mean-field models. In the spiking network version of the 2-population mean field model we first replace the populations with two networks of 100 spiking units each, whose activity averaged across units approaches the activity of the populations in the mean field model . 
We firsAs is the case in real cortical cells, the specific pattern exhibited by a unit in the spiking network in any given trial can vary from the predominant pattern observed in the average PSTH histogram We next examine the range of statistic and memory pattern types occurring in the activity of units in a spiking network with four populations of 50 neurons each. In the spiking network we replace the four populations of the mean field model with four networks of 50 spiking units each whose activity averaged across units is the same as the activity of the populations of the mean field model. We first analyze the range of memory pattern types in the spiking networks during simulated working memory tasks. Average PSTH histograms over 20 simulated trials of a working memory task were generated and examined for each unit in the network. As was the case in the 2-network spiking model, the range of excitatory and inhibitory patterns, in addition to the non-responsive pattern are exhibited by the units in the network. The specific baseline frequencies, delay frequencies and deltas (changes in frequency from baseline to delay period) exhibited by the units for each pattern fall within the ranges observed in the real parietal and prefrontal cells . Figure As was the case in the 2-population spiking model, the specific pattern exhibited by a unit in the 4-population spiking network in any given trial of the simulated working memory task can vary from the predominant pattern observed in the average PSTH histogram . Figure We next examine the dependence of pattern type, firing rate statistics and variability as a function of population size. To do this we produced a 2- and 4-population spiking model as above consisting of 2000 units. We first analyze the range of memory pattern types in the spiking networks during simulated working memory tasks. Average PSTH histograms over 20 simulated trials of a working memory task were generated and examined for each unit in the network. As was the case in the 2- and 4 population spiking networks consisting of 200 units, the range of excitatory and inhibitory patterns, in addition to the non-responsive pattern are exhibited by the units in the network . The speThe firing rate model produces a trajectory in the phase plane which corresponds to a specific pattern type. Depending on the connections and other parameters, the stimulus causes the trajectory to remain above or below the separatrix of the phase space. In terms of the spiking model the firing rate model trajectory corresponds to the mean of the trajectory of all units. Depending on how close to the separatrix that mean trajectory is after the stimulus, fluctuations about the mean from various sources of stochasticity in the spiking model will result in a probability that units will make transitions to trajectories corresponding to pattern types different than that of the mean trajectory. The resulting pattern types will have a distribution reflecting this. 
Conversely the closer the system is to one of the stable attractors of the system, the less probable it is for a given level of noise that the system trajectory will depart from the pattern of the mean trajectory.There are 3 primary sources of stochasticity in the spiking model networks, not present in the mean field model that produce fluctuations resulting in units behavior departing from the single pattern type of the mean trajectory: 1) heterogeneity in the connections between units, 2) heterogeneity in the maximum facilitation, and 3) the noise present in all the units' activity. Increasing population size reduces the source of noise resulting from heterogeneous connections, and thus reduces the overall amplitude of fluctuations. A reduction in the source of variability due to noise present in all neurons during simulation\u2014i.e. the Weiner noise\u2013can be achieved by averaging across trials. An analysis of the intra-trial variance of firing rate in the model units revealed high variability in the distribution of ISIs during both baseline and delay periods of the model . The CV The results of this study demonstrated that recurrent networks with dynamic synapses inherently produce the different persistent firing rate patterns observed in real cortical neurons during working memory. The persistent patterns produced are robust with respect to variations of the parameters in the network. That is, the different patterns occur over a wide range of values of the parameter space, and given patterns do not occur only for a very narrow set of parameter values. Further, the statistics of those patterns fall within the ranges of variation observed in firing rate pattern behavior of real cortical neurons. For example the changes in firing rate from baseline to the delay period can take values along an apparent continuum with absolute changes in firing rate of less than 100% of the baseline rate. For the majority of persistently activated cells recorded from parietal and prefrontal cortex of primates during working memory this corresponds to changes in firing rate of less than 10 Hz. The present network demonstrates a mechanism beyond previous solution for achieving these realistic low delay firing rates Bistable firing rates are one of the possible activities of the model. However, the present work has focused on the range of working memory-correlated patterns of firing rate and their simultaneous, complementary occurrences in the working memory network as opposed to only fixed states that the networks or their neuronal constituents may adopt. The spiking networks exhibited all of the general patterns correlated with working memory that are observed in the database of microelectrode recordings of parietal and prefrontal cortical neurons. In addition, the statistics and firing rates of the units fall within the ranges observed in real cells, with the occurrence of the different pattern types similar in proportion to that observed in the cortical populations. In terms of the behavior of individual neurons, bistable activity is typically only observed as an average over many trials of a working memory task. Across trials, cells exhibit different average frequencies, and even within individual trials, cells exhibit significant variability in firing rather than a single stable rate While the firing pattern varies from trial to trial in cells, there are also significant variations from trial to trial in the delay frequencies for any particular patterns exhibited. 
The concept of a network with fixed connectivity and bistability between units is not indicated by such activity, and thus a dynamic connectivity is reasonable to consider. The idea of bistable activity corresponding to fixed attractor states may apply at the level of a population of neurons, and could be the essential neuronal correlate of working memory. However, the majority of persistent activity patterns observed consists of cells whose firing rates decay or accelerate during working memory (cue-coupled or response-coupled), and these populations should be taken into account in addition to bistability. In the present model, as stated above, bistable attractor states are present and could correspond to working memory. Here however, a second additional role is suggested for these states in terms of modulating the firing rate activity, resulting in decaying and ramping firing patterns. That is, without necessarily representing memory states in and of themselves, the attractor states allow the network, which represents working memory and its complementary functions, to become active and behave with the necessary dynamics to mediate cross-temporal contingencies. The key component, resulting here from facilitation and observable in the phase plane, is the presence of the bottleneck through which the trajectories of the system pass. The bottleneck modulates the rate at which the trajectories approach the stable attractors, thus creating the patterned activities within the actual range of frequencies observed in the cortex. This mechanism might be present and modulate activity through other components of the network in addition to facilitation. For example Durstewitz The dynamic synaptic facilitation is the component of this model which creates the bottleneck in the phase plane, and gives it its unique characteristics. Specifically it is facilitation which determines the amount of persistent activation, which, since it can adopt a continuous range of values, enables the change in firing rate from baseline to memory period to fall along a continuum. The bottleneck determines the rate at which the firing rates decay towards the baseline attractor (or increases towards the higher firing rate attractor) to adopt the continuum of firing rate values. The decay rate can be sufficiently slow such that no decay or acceleration of firing is observed for the duration of a memory period. Thus the result is an apparent or virtual bistability, which for all intents and purposes can be extended for as long as working memory is defined by the parameters of a working memory task. The fact that the rate at which persistent activation waxes or wanes is highly adjustable is consistent with the behavior of cells in the cortex during working memory. It has been observed in working memory experiments Another prediction from the dynamics of the model is that the rate of persistent activation correlates with baseline rate. In the majority of delay activated cells, the magnitude of firing rate increases are less than 100% of baseline, with the magnitude of the delay period firing rate change increasing nonmonotonically with baseline rate increases. The largest magnitude increases in delay period frequency are in those cells with the largest baseline firing rates, while the largest percentage changes are those with low baseline rates. This is naturally incorporated in the present model. The range of rates over which the population can exhibit memory cell behavior is bounded by the saddle separatrix. 
Once facilitation pushes the system's trajectory beyond the separatrix, further increasing facilitation (or judiciously adjusting other parameters) does not result in further continuous increases in persistent activation delay rates, but rather a change in the activation pattern itself. The parameters of the model can be adjusted however, raising the frequency of the baseline state and incrementing the entire range of frequencies within its basin of attraction. Thus both baseline and delay rates increase in a correlated fashion, and due to the nonlinearity of the nullcline of the synaptic activity, the proportional increase in frequency is nonmonotonic.The specific simultaneous patterns which may be exhibited in the populations are dependent on the relative strength of the inter-population connection strength, the intra-population connection strength, and whether the inter-population connectivities are mutually net inhibitory, or a combination of excitatory and inhibitory. The phase diagram of the 2-population firing rate model reveals a number of behavioral trends. For an excitatory-inhibitory connectivity between populations, the networks can exhibit a range of concomitant activities which includes memory cell activity in both populations, and simultaneous cue-coupled/response-coupled behavior. In contrast, with a mutually inhibitory connectivity between populations these particular behaviors are absent, and simultaneously occurring fixed-rate-memory/response-coupled behavior is present over only an extremely narrow range of the parameters. Thus memory being maintained simultaneously in both cortical areas occurs in the 2-population model only within the E-I connectivity scheme. During working memory, the simultaneous presence of memory cells in prefrontal cortex and another cortical area important to the sensory modality of the memorandum has been indicated by numerous studies. In addition to prefrontal cortex, memory cells have been observed for example in posterior association cortex including inferotemporal cortex As the network becomes more distributed, increasing to four populations, simultaneous memory cell behavior and cue-coupled/response-coupled behaviors becomes more robust with these concomitant behaviors occurring over a wide continuous range of the parameters as can be observed by the increased areas of those respective behaviors over larger continuous ranges of the parameters in the phase diagrams . While tIn addition to a distributed architecture affecting the stability of memory pattern behavior and modulating activity enabling the occurrence of complimentary patterned behaviors, certain working memory pattern behaviors apparently are exclusively a function of a distributed architecture rather than the facilitation mechanism alone. Particularly the ramping delay inhibition pattern which is observed in the cortical data was present only in the distributed versions of the model. Another phenomenon is the existence of large regions of the parameter space in which one population exhibits the non-responsive pattern, while the other population exhibits memory cell behavior . In the database of real cells, the majority of neurons from parietal and prefrontal cortex exhibit the non-responsive pattern of behavior. Interspersed within these populations of non-responsive cells are neurons that exhibit the other patterns. From the models we see that the non-responsive pattern is a common part of a working memory network coexisting with the other patterned behaviors. 
Studies of patterns in spike sequence of such cells It should be noted that the present analysis supplements the general attractor picture rather than replacing, or invalidating it. Cells with apparent bistable activity with high firing rates above baseline, while apparently rare in the cortex"} +{"text": "Prions are believed to be the causative agents of a group of rapidly progressive neurodegenerative diseases called transmissible spongiform encephalopathies, or prion diseases. They are infectious isoforms of a host-encoded cellular protein known as the prion protein. Prion diseases affect humans and animals and are uniformly fatal. The most common prion disease in humans is Creutzfeldt-Jakob disease (CJD), which occurs as a sporadic disease in most patients and as a familial or iatrogenic disease in some patients. Whether prions are infectious proteins that act alone to cause prion diseases remains a matter of scientific debate. However, mounting experimental evidence and lack of a plausible alternative explanation for the occurrence of prion diseases as both infectious and inherited has led to the widespread acceptance of the prion hypothesis.Interest in prion disease research dramatically increased after the identification in the 1980s of a large international outbreak of bovine spongiform encephalopathy in cattle and after accumulating scientific evidence indicated the zoonotic transmission of BSE to humans causing variant CJD. In recent years, secondary bloodborne transmission of variant CJD has been reported in the United Kingdom.Prions: The New Biology of Proteins describes the current state of knowledge about the enigmatic world of prion diseases. The book is organized into 12 mostly brief chapters, which nicely summarize the various types of prion diseases and the challenges associated with their diagnosis and treatment. These sections review the biology of prions, the underlying hypotheses for prion replication, and the biochemical basis for strain diversity. Chapters 2 through 5 describe the various characteristic features of prions, including the historical evolution of the prion hypothesis, a detailed description of the possible mechanisms by which the normal prion protein is converted into the pathogenic form, and the cellular biology and putative functions of the normal prion protein. The author\u2019s lucid descriptions of the various topics are supported by diagrams and key references. Subsequent chapters describe prion disease laboratory diagnostic tools that are available or under development. Chapter 9 succinctly summarizes the most likely target sites, from the formation of the infectious agent to its effects on neurodegeneration, which can be exploited for likely therapeutic development. The same chapter describes the various antiprion compounds that have been or are being tested as therapeutic interventions for prion diseases.The book is unusual because its entire content was exclusively authored by 1 person, resulting in a paucity of in-depth information in some areas, which may have been provided by multiple authors. However, all things considered, the book can be a valuable resource for scientists beginning to understand the world of prion diseases, the underlying biochemical mechanism of disease occurrence, and the challenges associated with the diagnosis and treatment of prion diseases."} +{"text": "Intraperitoneal diffusion chambers have been used to investigate changes in humoral factors during the development of myeloid leukaemia in mice. 
Normal mouse bone marrow cells form colonies of granulocytes and macrophages when cultured in semi-solid agar medium within intraperitoneal diffusion chambers. The use of mice bearing transplanted myeloid leukaemia as Agar Diffusion Chamber (ADC) hosts enhances colony formation from normal marrow. The humoral basis for this stimulation has been shown by the colony stimulating activity of the fluid entering the diffusion chambers when assayed against normal mouse bone marrow cells in agar culture in vitro. The stimulus to colony growth in ADCs and the in vitro colony stimulating activity depend on the phase in the development of the leukaemia investigated, and the stimulation was abolished by splenectomy. There was no apparent relationship between the growth of the leukaemic cell population in vivo and the level of the stimulating factor detected in leukaemic mice."} +{"text": "Chlamydia trachomatis (CT) infections of the female genital tract, although frequently asymptomatic, are a major cause of fallopian-tube occlusion and infertility. Early stage pregnancy loss may also be due to an unsuspected and undetected CT infection. In vitro and in vivo studies have demonstrated that this organism can persist in the female genital tract in a form undetectable by culture. The mechanism of tubal damage as well as the rejection of an embryo may involve an initial immune sensitization to the CT 60 kD heat shock protein (HSP), followed by a reactivation of HSP-sensitized lymphocytes in response to the human HSP and the subsequent release of inflammatory cytokines. The periodic induction of human HSP expression by various microorganisms or by noninfectious mechanisms in the fallopian tubes of women sensitized to the CT HSP may eventually result in tubal scarring and occlusion. Similarly, an immune response to human HSPexpression during the early stages of pregnancy may interfere with the immune regulatory mechanisms required for the maintenance of a semiallogeneic embryo."} +{"text": "Host responsiveness to a progressively growing methylcholanthrene (MC) induced tumour (MC6/2) was studied at varying intervals following subcutaneous (s.c.) tumour implantation by monitoring the in vitro incorporation of tritiated thymidine (3H-TdR) into lymph node cells (LNC) undergoing stimulation in vivo and concurrently determining the total numbers of the lymphoid cells present in these organs at each of the time intervals. It was found that an initial period of rapidly increasing stimulation of DNA synthesis in lymph nodes was soon followed by the onset of a stage of decrease of this activity. Within limits, the larger the tumour inoculum the stronger the initial response. The suppression of stimulation of DNA synthesis that ensued appeared to be directly related to the tumour mass and to the dose of tumour cells implanted. The total numbers of the cells accumulating in nodes also increased initially but remained elevated during the subsequent period of tumour growth. Continued presence of the tumour was essential for the increased DNA synthesis in lymph nodes since tumour removal leads to a rapid decrease to levels found in tumour-free animals. These findings demonstrate that the failure to eradicate an antigenic tumour by its host may not be solely due to \"desensitizing\" and \"blocking\" factors but that other important mechanisms are also involved. 
We suggest that the inability to reject the tumour in this situation is dependent in considerable measure on the development of a state of hyporeactivity in the host due to the partial inhibition of the DNA synthetic response, possibly in T cells of the tumour host, due to \"suppressor factor(s)\" interacting with the immunocompetent cells."} +{"text": "This adaptability of motility styles to the environment is currently considered to be the main reason for the failure of clinical trials testing protease inhibitors in patients with metastatic cancers. Brabek et al. [ad hoc switch between different motility strategies. The interest of molecular biologists is particularly focussed on this family of GTPases and their regulators as targets for an effective antimetastatic therapy. Indeed, instead of inhibiting a specific motility mechanism, it would be preferable to target the adaptation skills of cancer cells to the tumor microenvironment.Eukaryotic cells move within the surrounding environment essentially for two reasons: the necessity to reach a predetermined site or the hostility of the primitive site. Moving in the direction of an attractive site or factor is typical for embryonic movements and metastatic dissemination of cancer cells and motility strategies are very similar for both categories. Activation of an epigenetic process called epithelial mesenchymal transition (EMT) is indeed characteristic of embryonic development, of fibrotic or regeneration processes, and of the spreading of cancer cells from their primitive origin ,2. The mk et al. review ik et al. focus onThis microenvironment is indeed a mandatory element for the regulation of cell motility . Three kThe nervous system also plays an important role in cell motility, for two reasons: the secretion of neurotransmitters which also act as motility factors and the contribution of an alternative escaping way to migrating cells, commonly called perineural invasion. In this special issue Voss and Entschladen review this aspect with a particular focus on the role of cathecolamine and stress mediators on tumoral cell motility .As mentioned at the beginning, a second reason for cells to move is the escape from an hostile ambiente, for example due to the scarcity of growth factors (chemotaxis), due to the presence of improper ECM (aptotaxis and durotaxis), because of the accumulation of toxic or pro-oxidant factors or to escape oxygen or nutrient deprivation (hypoxia and ischemia). De Donatis et al. focus thFar from being exhaustive, this special issue focused on cell motility aims to underscore the fertility of the current research efforts in this field, as well as highlighting key questions that still are awaiting definitive answers."} +{"text": "The lung is constantly exposed to the environment and its microbial components. Infections of the respiratory tract are amongst the most common diseases. Several concepts describe how this microbial exposure interacts with allergic airway disease as it is found in patients with asthma. Infections are classical triggers of asthma exacerbations. In contrast, the hygiene hypothesis offers an explanation for the increase in allergic diseases by establishing a connection between microbial exposure during childhood and the risk of developing asthma. This premise states that the microbial environment interacts with the innate immune system and that this interrelation is needed for the fine-tuning of the overall immune response. 
Based on the observed protective effect of farming environments against asthma, animal models have been developed to determine the effect of specific bacterial stimuli on the development of allergic inflammation. A variety of studies have shown a protective effect of bacterial products in allergen-induced lung inflammation. Conversely, recent studies have also shown that allergic inflammation inhibits antimicrobial host defense and renders animals more susceptible to bacterial infections. This paper focuses on examples of animal models of allergic disease that deal with the complex interactions of the innate and adaptive immune system and microbial stressors. The lung is constantly exposed to the environment and its microbial components. Infections of the respiratory tract are amongst the most common diseases. Several concepts describe how this microbial exposure interacts with allergic airway disease as it is found in patients with asthma. Infections are classical triggers of asthma exacerbations. Infections of the respiratory tract, especially with various viruses, are common causes of exacerbation of asthma ,2.The hygiene hypothesis offers an explanation for rising rates of allergic diseases such as asthma in modern westernized societies. The core of this hypothesis is the complex interaction of the microbial environment and the innate immune system in the childhood of individuals. In modern societies, different factors such as small family size, high antibiotics use, and good sanitation contribute to higher living standards and life expectancy . As a reThe observed protective effect of farming environments in connection with environmental exposure to endotoxin may have a crucial role in the development of tolerance to ubiquitous allergens found in natural environments against asthma . This obLactobacillus reuteri attenuated the influx of eosinophils into the respiratory tract and was accompanied by levels of Th2 cytokines.In the last years, it has become clear that the appropriate bacterial composition of the human microflora is a factor in protection from allergy and asthma and is needed for an adequate Th1 / Th2 balance . InteresThe studies outlined above deal with the impact of bacterial stressors that preferentially promote a Th1 cell mediated response on the development of Th2 mediated allergic disease such as asthma. Conversely, a largely unknown field is the influence of established allergic diseases on infectious diseases that require an appropriate innate or Th1 cell mediated immune response.Pseudomonas aeruginosa. Mice with allergic airway inflammation had more viable bacteria and reduced levels of pro-inflammatory cytokines and antimicrobial peptides in their lung. Furthermore, the influx of neutrophils into the respiratory tract was significantly diminished in asthmatic animals.A recent study investigated the effect of an established allergic inflammation of the lung on the innate host defense during bacterial infection of the respiratory tract . The maiThe suppressive effect of the Th2 milieu was also shown in an in vitro model using differentiated human airway epithelia tissue . The Th2Mycoplasma and Chlamydia species [These results provide evidence that the adaptive immune system closely interacts with the innate immune system and that the adaptive immune response influences the innate host defense Figure . The Th2 species .The development of allergic diseases is a complex process. 
In recent years, it became clear that the interaction of the microbial environment with the innate and adaptive immune system during childhood is crucial for a well balanced immune system. The combination of animal models of allergic disease with infection models is a useful tool to further study the interactions between these processes and is a starting point for new therapeutic strategies.The authors declare that they have no competing interests."} +{"text": "One hundred resected cases of squamous cell carcinomas of the oesophagus were reviewed and a series of histological criteria related to the survival time. Two histological features were important in the assessment of survival. Good prognostic factors were a marked lymphocytic response to the tumour and a lack of intravenous tumour infiltration. Presence of tumour in the middle third of the oesophagus, infiltration through the muscularis propria, severe tumour necrosis, glandular or small cell tumour differentiation, lymphatic invasion and lack of peritumoural fibrosis were all factors which tended to worsen prognosis. None of these factors reached statistical significance. The degree of squamous differentiation had no effect on survival."} +{"text": "The emergence of drug resistance in treated populations and the transmission of drug resistant strains to newly infected individuals are important public health concerns in the prevention and control of infectious diseases such as HIV and influenza. Mathematical modelling may help guide the design of treatment programs and also may help us better understand the potential benefits and limitations of prevention strategies.To explore further the potential synergies between modelling of drug resistance in HIV and in pandemic influenza, the Public Health Agency of Canada and the Mathematics for Information Technology and Complex Systems brought together selected scientists and public health experts for a workshop in Ottawa in January 2007, to discuss the emergence and transmission of HIV antiviral drug resistance, to report on progress in the use of mathematical models to study the emergence and spread of drug resistant influenza viral strains, and to recommend future research priorities.General lectures and round-table discussions were organized around the issues on HIV drug resistance at the population level, HIV drug resistance in Western Canada, HIV drug resistance at the host level , and drug resistance for pandemic influenza planning.Some of the issues related to drug resistance in HIV and pandemic influenza can possibly be addressed using existing mathematical models, with a special focus on linking the existing models to the data obtained through the Canadian HIV Strain and DR Surveillance Program. Preliminary statistical analysis of these data carried out at PHAC, together with the general model framework developed by Dr. Blower and her collaborators, should provide further insights into the mechanisms behind the observed trends and thus could help with the prediction and analysis of future trends in the aforementioned items. Remarkable similarity between dynamic, compartmental models for the evolution of wild and drug resistance strains of both HIV and pandemic influenza may provide sufficient common ground to create synergies between modellers working in these two areas. 
One of the key contributions of mathematical modeling to the control of infectious diseases is the quantification and design of optimal strategies; combining techniques of operations research with dynamic modeling would enhance the contribution of mathematical modeling to the prevention and control of infectious diseases. The emergence of drug resistance in treated populations and the transmission of drug resistant strains to newly infected individuals are important public health concerns in the prevention and control of infectious diseases such as HIV and influenza. Mathematical modelling may help guide the design of HIV treatment programs to minimize the development and spread of drug resistant strains. Modelling may also help us achieve a better understanding of the optimal use of antiviral drugs as the first line of defence against a new strain of influenza and of the potential benefits and limitations of mitigation strategies using antiviral drugs during an influenza pandemic. While acknowledging the fundamental differences in the evolution of drug resistance between the two diseases, sufficient common ground may be found to create synergies between modellers working in these two areas. To explore this further, the Public Health Agency of Canada (PHAC) and Mathematics for Information Technology and Complex Systems organized a workshop in Ottawa in January 2007. The workshop brought together selected scientists and public health experts to discuss the emergence and transmission of HIV antiviral drug resistance, to report on progress in the use of mathematical models to study the emergence and spread of drug resistant influenza viral strains, and to recommend future research priorities. In the opening remarks, Dr. Chris Archibald indicated that PHAC strongly supports collaboration between modelers and public health policy makers to better understand the evolution of drug resistant strains and that the research priority recommendations coming out of the meeting would be closely examined. Dr. Donald Sutherland spoke about the work of WHO in the area of HIV drug resistance (DR) and the approach to HIVDR within the global goal of universal access to anti-retroviral treatment by 2010. He stressed the importance of addressing the potential problem of drug resistance during the scale-up of anti-retroviral treatment programs. WHO's program of HIVDR monitoring follows individuals who are starting HIV treatment for the first time and makes use of a variety of early warning indicators that may signal a potential problem with drug supply or treatment compliance, and hence a greater likelihood of the development of HIV drug resistance. HIVDR transmission surveillance is done through a threshold survey of a small number of recently infected persons to assess transmitted HIV drug resistance; the percentage of persons that show laboratory evidence of DR is used to decide whether to trigger a set of specific recommended actions (with 5% being the first threshold for considering action). Dr. Sutherland concluded by providing a number of useful WHO web links for HIV treatment and drug resistance information. In a lecture entitled \"Predicting the Unpredictable: the Evolution of Drug Resistant HIV Epidemic in Africa\", Dr. Sally Blower echoed the concern that the scale-up of anti-retroviral treatment could generate an epidemic of drug resistant HIV. She reviewed her previous modelling work on HIV and anti-retroviral treatment.
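The 5% figure mentioned above can be illustrated with a deliberately generic calculation: estimate the proportion of recently infected persons whose specimens show resistance mutations, together with an exact (Clopper-Pearson) binomial confidence interval, and compare it with the threshold. This is only a sketch of the underlying arithmetic with invented numbers, not the WHO threshold-survey protocol itself.

```python
# Hypothetical threshold-survey style calculation (numbers are invented).
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    """Exact two-sided binomial confidence interval for k successes in n trials."""
    lower = 0.0 if k == 0 else beta.ppf(alpha / 2, k, n - k + 1)
    upper = 1.0 if k == n else beta.ppf(1 - alpha / 2, k + 1, n - k)
    return lower, upper

resistant, genotyped = 3, 47
lo, hi = clopper_pearson(resistant, genotyped)
print(f"observed prevalence {resistant / genotyped:.1%}, 95% CI {lo:.1%} to {hi:.1%}")
print("point estimate exceeds the 5% threshold:", resistant / genotyped > 0.05)
```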
Dr. Sally Blower's lecture also illustrated how mathematical modelling can be used to help determine the optimal use of limited resources. She presented recent work that used modelling to address this question. Dr. Gayatri Jayaraman introduced the Canadian HIV Strain and DR Surveillance Program. She described the methods used to collect samples from individuals newly diagnosed with HIV and to analyze pol gene sequences to determine drug resistance mutations according to the International AIDS Society-USA list (2005). She stressed the uniqueness of this surveillance program since it gathers data at the population level rather than from a limited number of clinical sites. Dr. Ping Yan then presented his exploratory analysis of trends in transmitted drug resistance in western Canada, and Dr. Shenghai Zhang discussed further stratification of the population and its impact on the trend analysis. In summary, this work modeled trends in the prevalence of transmitted drug resistance using data collected from this surveillance program for the four provinces of western Canada between 1998 and 2004. Generalized linear models were fit to the binary data to examine trends in drug resistance for three drug classes: nucleoside reverse transcriptase inhibitors (NRTIs), non-nucleoside reverse transcriptase inhibitors (NNRTIs), and protease inhibitors (PIs); a minimal sketch of this kind of trend model is given below. The results showed that while the overall prevalence of transmitted drug resistance has remained constant over time, there were significant differences according to drug class: a decrease in the prevalence of transmitted NRTI resistance, an increase in the prevalence of transmitted NNRTI resistance, and little change in the prevalence of transmitted PI resistance. In addition, there were distinct differences between provinces in these trends. Possible reasons for these differences over time and between provinces include changes in drug prescribing patterns and in risk behaviours. In conclusion, the results highlight the need for continued national surveillance of transmitted drug resistance to fully understand inter-regional differences and the course of the HIV epidemic in Canada. Four lectures dealt with HIV drug resistance and optimal treatment strategies. Dr. John Mittler described two kinds of regimen-sparing treatment strategies: structured treatment interruptions and induction-maintenance therapies. He then presented his model-based analysis to help determine the optimal length for the induction phase, ways to improve the success of induction-maintenance therapy, and which drugs should be included in the induction and maintenance regimens. Dr. Robert Smith? emphasized that mathematical models for HIV treatment should explicitly account for the mechanics and temporal aspects of drug dosing. His analysis suggested that the use of PIs alone may lead to treatment failure, whereas RTIs would lead to treatment success, whether or not they were combined with PIs. He also presented his recent work which shows that to minimize drug resistance, RTI therapy should remain outside a certain (predictable) parameter region.
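Following on from the trend analysis described above, here is a minimal sketch of that kind of generalized linear model: a logistic regression of a binary resistance indicator on diagnosis year, adjusted for province. The data frame, column names and simulated values are hypothetical; this illustrates the modelling approach, not the actual PHAC analysis.

```python
# Hypothetical logistic-regression trend analysis of transmitted resistance.
# All column names and data are simulated for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
year = rng.integers(1998, 2005, size=n)            # diagnosis year, 1998-2004
province = rng.choice(["BC", "AB", "SK", "MB"], size=n)
# Simulate a mild upward trend in the odds of transmitted NNRTI resistance.
logit_p = -3.0 + 0.15 * (year - 1998)
resistant = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))
records = pd.DataFrame({"year": year, "province": province, "resistant": resistant})

# Binomial GLM (logit link): the 'year' coefficient is the log-odds trend.
fit = smf.logit("resistant ~ year + C(province)", data=records).fit(disp=False)
print(fit.summary())
print("estimated odds ratio per calendar year:", float(np.exp(fit.params["year"])))
```

Fitting the same kind of model separately for NRTI, NNRTI and PI resistance indicators would mirror the per-drug-class trends reported above.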
In his lecture entitled \"Emergence and Impact of HIV CTL-Escape Mutants on Progression to AIDS\", Dr. Beni Sahai presented several theories for the loss of effectiveness of the anti-HIV immune response, and concluded that anti-HIV cytotoxic T-lymphocytes (CTLs) play an essential role in preventing progression to AIDS and that drug-resistant HIV strains do not cause a surge of viremia or accelerate progression to AIDS in the presence of multivalent anti-HIV CTLs. Dr. Jane Heffernan noted that viral load and CD4 T-cell counts in patients infected with HIV are commonly used to guide clinical decisions regarding drug therapy or to assess therapeutic outcomes in clinical trials. However, random fluctuations in CD4 T-cell count and viral load, due solely to the stochastic nature of HIV infection, can obscure clinically significant change. She reported her work addressing this problem. Influenza pandemics have historically been devastating to human populations. Considering the extent of morbidity and mortality caused by previous pandemics and the growing threat of an imminent human outbreak of the avian influenza strain H5N1, identification of effective mitigation strategies is a major global public health priority. It has been recognized that pharmaceutical measures would have the greatest impact in containing a pandemic. However, given the significant challenges involved in the development of an effective vaccine against a newly emerging pandemic strain, it is likely that antiviral drugs will be the sole pharmaceutical defense. Since a critical limitation to their application is the emergence of highly transmissible resistant viral mutants, it is paramount to evaluate the use of these drugs for both treatment and prophylaxis. To do this, mathematical models need to be developed that not only incorporate the population phenomenon of disease transmission, but also integrate viral evolution and disease dynamics at the individual level. Dr. Seyed Moghadas described a model to evaluate the potential impact of an antiviral drug strategy on the emergence of drug-resistance and containment of an influenza pandemic. Dr. Murray Alexander compared deterministic models with individual-based network models. Dr. John Glasser spoke about the evaluation of alternative vaccination and treatment strategies. He presented a standard compartmental model in which infection rates reflected different social activities and networking by age-group. In a discussion facilitated by Drs. Fred Brauer and Zhilan Feng, Dr. Theresa Tam suggested that pandemic influenza plans should incorporate other non-medical measures and she encouraged further work that examines the drug resistance issue for scenarios involving prophylaxis as well as treatment. The workshop ended with a group discussion on next steps coordinated by Dr. Jianhong Wu. He first invited Dr. Jayaraman to comment on the HIV issues that should be addressed through modeling and Dr. Jayaraman listed some of the areas where she and her colleagues at PHAC thought input from modelers would be most useful: 1. To predict the incidence and prevalence of transmitted (and acquired) drug resistance; 2. To model the contribution of transmitted DR to overall DR HIV prevalence; 3. To model the contribution of transmitted DR to annual HIV incidence; 4. To model which factors contribute the most to transmitted (and acquired) DR and what factors can minimize the transmission of DR; 5. 
To model the level of increased risk behaviour that would off-set the expected decrease in HIV incidence resulting from widespread anti-retroviral therapy use.The general discussion suggested that some of these issues can possibly be addressed using existing mathematical models, with a special focus on linking the existing models to the data obtained through the Canadian HIV Strain and DR Surveillance Program. Participants noted that the preliminary statistical analysis of these data led by Dr. Yan has already provided some solutions to items #1\u20134 in terms of historical data. This, together with the general model framework discussed in Dr. Blower's lecture, should provide further insights into the mechanisms behind the observed trends and thus could help with the prediction and analysis of future trends in the aforementioned items. Participants also noted the remarkable similarity between dynamic, compartmental models for the evolution of wild and drug resistance strains of both HIV and pandemic influenza, and so the above remarks also apply to the drug resistance issue for pandemic influenza. It was suggested that proposals should be prepared to apply for funding to develop new models and take advantage of the large amount of high quality surveillance data available at PHAC.The participants felt the urgent need of high quality clinical data for the study of drug resistance and it was believed that future collaboration between modelers and public health policy makers would benefit very much from a centralized clinical data base. Therefore, it was recommended that such a data base should be developed as soon as possible.Finally, participants observed that one of the key contributions of mathematical modeling to the control of infectious diseases is the quantification and design of optimal strategies. This was demonstrated in all the lectures of the workshop and the proposed priority item #5 further reinforced this observation. Combining techniques of operations research with dynamic modeling would enhance the contribution of mathematical modeling to the prevention and control of infectious diseases.Mathematics for Information Technology and Complex Systems (MITACS) provides the funding support to cover the article processing cost. Otherwise, the authors declare that they have no competing interests.JW, PY and CA helped conceive the workshop and participated in its design and coordination. JW summarized the workshop discussion based on the invited lectures and the round-table discussions, JW and PY then converted this summary to a preliminary version of the manuscript. JW, PY and CA all participated in the revision process, and all read and approved the final manuscript.The pre-publication history for this paper can be accessed here:"} +{"text": "The human microbiome, especially in the intestinal tract has received increased attention in the past few years due to its importance in numerous biological processes. Recent advances in DNA sequencing technology and analysis now allow us to better determine global differences in the composition of the gut microbial population, and ask questions about its role in health and disease. Thus far, roles of these commensal bacteria on nutrient acquisition, vitamin production, and intestinal development have been identified Despite all the increased attention on the interface between the microbiota and host immune responses, it is still unclear whether these commensal bacteria affect the efficacy of vaccines. 
Due to its impact in the development of immune function, both in the gut and other organs, it is reasonable to consider that the intestinal microbiota will significantly affect how individuals respond to vaccine antigens Shigella flexneri also showed differential protection on individuals from developing countries. In a study testing Bangladeshi adults and children, no significant immune response to this vaccine was mounted, although the same antigen was reactogenic in North American individuals Clinical trials testing the efficacy of oral vaccines against polio, rotavirus, and cholera have showed a lower immunogenicity of these vaccines in individuals from developing countries when compared to individuals from the developed world SalmonellaHaemophilus influenzae type B, and hepatitis B Salmonella infection Discerning the effects of genetic and environmental factors on vaccine efficacy is a challenging task. Large clinical trials involving individuals from different areas of the world will likely be required to shed light on whether the blunt immune responses to some of the oral vaccines mentioned herein are a consequence of genetic factors or environmental variations, such as the gut microbial community. Studies involving immigrant volunteers could be useful in addressing this issue by providing a clear distinction between the effects of genetics and the environment. Although this is still an open question, data in the literature suggest a more direct link between the intestinal microbiota composition and the development of immune responses to certain vaccine antigens. For instance, the use of antibiotics in chickens has been shown to increase the antibody response following immunization Although some studies indicate that the microbiota may play an important role in vaccine efficacy, this area of research is still in its infancy. For instance, the mechanisms involved in the pro- and prebiotic enhancement of vaccine efficacy mentioned above are largely unknown. Nevertheless, current knowledge of the effect of the intestinal microbiota on the development of not only local but also systemic immune functions provides a direct link between commensal populations in the intestine and immune responses to vaccine antigens"} +{"text": "The chemical behavior of these complexes has been studied by means ofspectroscopic techniques both in slightly acidic distilled water and in phosphate bufferedsolution at physiological pH. The influence of biological reductants on the chemicalbehavior is also described. The antitumor properties have been investigated on a numberof experimental tumors. Out of the effects observed, notheworthy appears the capabilityof the tested ruthenates to control the metastatic dissemination of solid metastasizingtumors. The analysis of the antimetastatic action, made in particular on the MCamammary carcinoma of CBA mouse, has demonstrated a therapeutic value for thesecomplexes which are able to significantly prolong the survival time of the treatedanimals. 
The antimetastatic effect is not attributable to a specific cytotoxicity for metastatic tumor cells, although in vitro experiments on pBR322 double stranded DNA have shown that the test ruthenates bind to the macromolecule, causing breaks corresponding to almost all bases, except thymine, and are able to cause interstrand bonds, depending on the nature of the complex being tested, some of which are as active as cisplatin itself. In this paper we report a review of the results obtained in the last few years by our group in the development of ruthenium(III) complexes characterized by the presence of sulfoxide ligands and endowed with antitumor properties. In particular, we will focus on ruthenates of general formula Na["} +{"text": "This study compared the histologic characteristics of ulcerative colitis with findings on conventional colonoscopy and on magnification and dye application for 70 sites that underwent biopsy. The primary objective was to study the correspondence between histologic findings and endoscopic findings with respect to glandular restructuring and the resolution of inflammation from the active to the remission phase of ulcerative colitis. Widened grooves, as assessed by the endoscopic staining technique and magnified observation, most closely correlated with histologic evidence of resolution of inflammation, and vascular markings and color tone of the mucosa on general colonoscopy most closely correlated with histologic evidence of glandular restructuring, such as glandular maturity. Magnifying endoscopy after dye application, in addition to conventional endoscopy, is therefore considered essential in the evaluation of ulcerative colitis during the resolving phase."} +{"text": "A case of cholangitis due to the migration of a metal clip used for surgical cholecystectomy 4 years earlier is reported. The diagnostic approach and therapeutic options, either endoscopic or surgical, are discussed. The use of resorbable clips during the performance of laparoscopic cholecystectomy should avoid this type of complication."} +{"text": "An unusual anatomic variation of the deltoid muscle was found in a 45-year-old female cadaver during dissection of the right upper extremity. The posterior fibers of the right deltoid muscle were enclosed in a distinct fascial sheet and the deltoid muscle was seen to arise from the middle 1/3 of the medial border of the scapula. There was no accompanying vascular or neural anomaly of the deltoid muscle. To the best of our knowledge, unilateral posterior separation of the deltoid muscle with a distinct fascia has not been described previously. While dissecting deltoid, posterior deltoid, or scapular flaps, the surgeon needs to look out for this variation because it may cause confusion. The deltoid muscle derives from the dorsal muscle mass of the limb bud which is formed by somatic mesoderm during the fifth intrauterine week. During the gross anatomic dissection of the right upper extremity of a 45-year-old female cadaver, we observed an unusual anatomic variation of the deltoid muscle. The posterior fibers of the right deltoid muscle were enclosed in a distinct fascial sheath. The deltoid muscle arises from the anterior border and upper surface of the lateral third of the clavicle, the lateral margin and upper surface of the acromion, and the lower edge of the posterior border of the spine of the scapula. The insertion is into the deltoid tubercle on the middle of the lateral side of the body of the humerus. 
It is inThe continuation of the fibers of the deltoid muscle into the trapezius; fusion with the pectoralis major; and the presence of additional slips from the vertebral border of the scapula, infraspinous fascia, and the axillary border of scapula are the commonly reported variations of the deltoid muscle.The myogenic cells coalesce into two muscle masses during the fifth intrauterine week. One is tClinically fasciocutaneus, musculocutaneus or muscular deltoid and posterior deltoid flaps are especially used in; tetraplegia (by a transfer to triceps), radionecrotic defects situated over the glenohumeral joint, reconstruction of extremity, rotator cuff tears, and oral cavity. While elevating musculocutaneus or muscular deltoid and posterior deltoid flaps, the surgeon must be alert to the possibility of this variation's presence because it may cause confusion when dissecting the borders. Similarly, while elevating fasciocutaneus deltoid and posterior deltoid flaps or a scapular flap (either transverse or parascapular), an accessory deltoid may be confused with the teres major muscle because of its location and its distinct fascia and as a result of this the dissection of the pedicle can be much more difficult. To conclude, the variations possible in this region should be kept in mind during any surgery."} +{"text": "Five patients with papillary adenocarcinoma of the common bile duct (CBD) are described.These are rare tumors and make up 5% of all malignant tumors of the biliary tract. Thesymptoms and signs at the time of initial diagnosis resemble benign obstructive lesions of the bileducts. The tumor is soft, less invasive to adjacent tissues and tends to grow into the lumen. Theearly onset of the symptoms results in early intervention, with a better prognosis. Two of ourpatients are doing well after two and four years, where as three others were readmitted withrecurrent disease."} +{"text": "The ovaries of leukaemic children were studied in 31 specimens obtained at autopsy. Twenty-eight ovaries from normal children of the same age who died from misadventure served as control. All ovaries from normal childred showed follicle growth and contained several large antral follicles. Follicle development was inhibited in all ovaries of leukaemic children; 22% showed no follicle growth (quiescent ovaries), and in the ovaries in which there was follicle development, the number and size of antral follicles was significantly smaller than in the control. All children had been treated with cytotoxic drugs, the duration of the treatment being correlated with the stage of ovarian development. The ovaries of children treated for only 1 week were near-normal, while those treated for more than 2 months showed inhibition of follicle growth. It is argued that the disturbance in follicle development is an effect of the cytotoxic drugs, and not an effect of the disease itself."} +{"text": "Several methods of ascertaining and classifying childhood neoplasms for epidemiological study have been evaluated using material from the University of Manchester Children's Tumour Registry (CTR), which includes data from several sources on children with neoplasms first seen in the period 1954-73 who were under 15 years old and living in the Manchester Regional Hospital Board area at the time. Two systems of classification-the International Classification of Diseases (ICD) and the Morphology Section of the Manual of Tumor Nomenclature and Coding -were tested. 
No major problems arose with the Morphology Section of MOTNAC, and we recommend that the revised version of this section, in the proposed \"International Classification of Diseases for Oncology\", should be used in epidemiological reports on children's tumours whenever possible. The ICD discriminates less well between the commoner types of childhood neoplasms, but must be retained as a supplementary classification to facilitate international comparisons. A comparison of the completeness of ascertainment achieved in recent years by each source of data showed that more than 98% of the serious cases could have been identified using a combination of Hospital Activity Analysis (HAA) and cancer registration records, and more than 95% using HAA and death records. But in an analysis of 2 years' HAA returns and 6 years' cancer registrations of serious cases, nearly one quarter of the former and one fifth of the latter were shown to record diagnoses which differed from those finally assigned at the CTR. It is concluded that, in epedimiological studies based on routine records, the diagnoses given should always be checked centrally, by experts, in the light of all the available clinical and pathological material ."} +{"text": "We have carried out a case-control study to evaluate the association between Wolfe's mammographic patterns and the risk of breast cancer, and to examine the influence of control selection and the radiologist who read the films upon the results obtained. Mammograms of the non-cancerous breast of 183 women with unilateral breast cancer were compared with mammograms from two age-matched control groups: a group of asymptomatic women attending a screening centre, and a group of symptomatic women referred for the diagnostic evaluation of suspected breast disease. Films were arranged in random sequence and independently classified by 3 radiologists. A strong and statistically significant association was found between mammographic dysplasia and breast cancer when controls from the screening centre were compared to cases, but not when cases were compared to women referred for the diagnostic evaluation of breast disease. This result appears to arise in part because of an association between symptoms of benign breast disease and mammographic dysplasia, and suggests that some previous negative studies of the association of mammographic patterns with breast cancer may have arisen from the inclusion of symptomatic subjects as controls."} +{"text": "In spite of a decrease in the metabolic capacity of microsomes to induce lipid peroxidation during inflammation, the endogenous lipid peroxidation remained unchanged and unrelated with the hepatic activities measured. The continuous increase in hepatic cAMP observed during acute and chronic phases could be related to adenylate cyclase stimulation by mediators, and could be an initial step in the hepatocyte adaptation leading to the increased level of hepatic caeruloplasmin, to the reduction of cytochrome P-450 level and to the modifications of Ca"} +{"text": "In adults with congenital heart disease coronary arterial anatomy, normal as well as anomalous, may have implications in surgical reconstruction of an underlying cardiac structure. 
In addition to the diagnostic imaging, necessary in surgery for adult congenital heart disease, additional information with regard to the spatial relation between the relevant cardiac structure and the coronary arterial system may be required for planning the operation and providing a good outcome.The congenital cardiac surgeon should have the necessary skills in coronary artery bypass techniques.With lack of adequate data, the estimation of mortality due to complications as a result of coronary damage in surgery for adult congenital cardiac disease of below 1% seems fair. In adults with congenital heart disease, the origin and course of the coronary arteries, both normal as well as anomalous, may have implications in surgical reconstruction of an underlying cardiac structure. In isolated congenital coronary arterial anomalies, a wide variety in anatomy as well as pathophysiology and clinical presentation is found ,2. RecenIn addition to the diagnostic imaging, necessary in surgery for adult congenital heart disease, additional information with regard to the spatial relation between the relevant cardiac structure and the coronary arterial system may be required for planning the operation and providing a good outcome.The goal of this presentation is to increase the awareness for the spatial relationship between the coronary arterial system, either normal or anomalous, and underlying structural congenital heart disease and the possible consequences for surgery and outcome in this regard.Embryological studies have shown the crucial role of the pro-epicardial organ (PEO) in the development the coronary arterial system . Epicardtantly also to the coronary vasculature . It has With regard to the results of the embryological studies, it is not surprising that the vast majority of coronary arterial anomalies are found in the area of the three rings.Among the anomalies associated with myocardial ischemia are coronary fistula\u2019s, the abnormal right or left coronary artery connected to the pulmonary artery (ARCAPA or ALCAPA), congenital coronary stenosis or atresia, anomalous origin from the contralateral sinus and a siAnomalies coronary arteries usually not associated with myocardial ischemia are a left circumflex coronary artery originating from the right sinus and the coronary arteries originating from one sinus with separate orifices -3.The incidence, clinical characteristics and possible therapeutical interventions for these coronary arterial anomalies are well described. Nevertheless, attempts at further classification of coronary anomalies are being produced, a recent contribution being made with regard to the anomalous origin of a coronary artery from the opposite sinus .All of these can be regarded as isolated anomalies ,2. ExactNot all cases of isolated congenital anomalies of the coronary arteries warrant treatment ,3. For iSurgical options should depend on the local anatomy and the surgical expertise and may consist of reimplantion of the ectopic coronary arterial orifice, unroofing of the intramural coronary segment or creating of an additional aortic orifice at the end of the intramural course of the coronary artery ,3In adults with congenital heart disease, the origin and course of coronary arteries have implications in surgical reconstruction of an underlying cardiac structure, The spatial relation between the relevant cardiac structure and the coronary arterial system is essential for planning the operation and providing a good outcome. 
Three-dimensional representation of the diagnostic data plays an increasingly important role in this regard.It is important to realize that not only the coronary anomalies pose a risk in cardiac surgery in adults with congenital heart disease, but also injury of the normal coronary artery system.The diagnostic sequence may therefore start with the usual echocardiography . This wiThe proximal parts of both normal and abnormal coronary arteries are at risk in surgery near the proximal coronary arteries. In any aortotomy the surgeon should be aware of the position of the coronary orifices and the course of the proximal segment of the coronary arteries. Harvesting the pulmonary root in an autograft procedure may damage the left main coronary artery or the first septal branch. In any primary or redo surgical procedure of the right ventricular outflow tract a coronary arterial branch may also be at risk for injury. This may include a left anterior descending coronary artery (LAD) from the right coronary sinus, a single right coronary artery (RCA) with an LAD anterior to the pulmonary trunk, a single left coronary artery with an RCA anterior to the pulmonary trunk or a large infundibular branch from the RCA. In pulmonary root surgery the left main coronary artery at the posterior annular level of the pulmonary orifice may also be at risk. In most of these patients atherosclerotic coronary disease does not play a role, probably because of the younger age at operation No hard data are available on actual damage, but it seems reasonable to estimate mortality due to complications as a result of coronary arterial damage up to 1%.Also the coronary arterial orifices themselves may be at risk for damage in aortic root surgery. Particularly when coronary orifices are replanted in root replacement surgery as in the autograft procedure or during prosthetic valve composite graft root replacement. Especially when the coronary buttons have to be re-excised and re-implanted again, scar tissue and local fibrosis and calcification may pose an additional challenge in the pursuit of successful surgery. In case of surgery in common arterial trunk, the increased variability of the position of the coronary arterial orifices should be adequately appreciated . DespiteAlthough no firm data are available, again it seems fair to estimate mortality due to complications as a result of coronary damage in this regard at up to 1%.In all surgical procedures that involve opening of the aortic root, the coronary orifices are at risk of damage due to manipulation. The risk is small, but direct application of cardioplegia into the orifices may be traumatic. 
In addition there is the risk of coronary embolism by gas (air or carbon dioxide) or debris after completion of the root surgery.Not only abnormal, but also normal coronary arterial anatomy may have implications in surgery for adult congenital heart disease.Three-dimensional diagnostic tools demonstrating the spatial relation between coronary arteries and the cardiac region of interest are essential.The congenital cardiac surgeon should have the necessary skills in coronary artery bypass techniques.With lack of adequate data, the estimation of mortality due to complications as a result of coronary damage in surgery for adult congenital cardiac disease of below 1% seems fair.So far, clinical atherosclerotic coronary disease in adults with congenital cardiac disease is infrequent."} +{"text": "Dear Editor,The geographical distribution of publications as an indicator of the research productivity of individual countries, regions or Institutions has become a field of interest. A few studies on geographical distribution of articles in peer reviewed indexed journals have been conducted to identify the countries that are making maximum contributions to the development of medical science. Ever sin"} +{"text": "The incidence of lung cancer has markedly increased in the past few decades and is still increasing in many countries worldwide. Lung cancer is a leading cause of death in many developed countries, and approximately 140\u2009000 new cases are identified each year in the US alone . The majProgress in palliative care for lung cancer has been slow; however, advances have been made due to better utilisation of drugs , choice of drugs for first-line therapy and subsequent regimens, and optimal combination and timing with radiation therapy . ChemothHowever, chemotherapy benefits can be at the expense of adverse effects in different organ systems, including the lung . SeveralGefitinib (\u2018Iressa\u2019) is a new type of targeted treatment for NSCLC. It is an inhibitor of the epidermal growth factor receptor (EGFR) signalling pathway that acts intracellularly at the level of the EGFR tyrosine kinase. Two phase II monotherapy trials have reported unprecedented antitumour activity and symptom relief in pretreated patients with advanced metastatic NSCLC (vs placebo-exposed patients with NSCLC (Although gefitinib is not associated with many of the general adverse effects of broadly acting cytotoxic chemotherapeutic agents, recent reports from Japan have indicated that a proportion of patients treated with gefitinib experienced severe ILD (This experience of ILD in patients with NSCLC has posed a number of important questions relating to definition, diagnosis, management and mechanisms. A group of experts in the field of NSCLC and lung disease discussed these issues at a symposium held in Seattle in May 2003. The content of the presentations is discussed within this supplement. Hopefully, the knowledge of gefitinib may provide new insights into the field of ILD and chemotherapy-associated lung disease in the way it has in the treatment of cancer."} +{"text": "Alveolar volume measured according to the American Thoracic Society-European Respiratory Society (ATS-ERS) guidelines during the single breath diffusion test can be underestimated when there is maldistribution of ventilation. 
Therefore, the alveolar volume calculated by taking into account the ATS-ERS guidelines was compared to the alveolar volume measured from sequentially collected samples of the expired volume in two groups of individuals: COPD patients and healthy individuals. The aim of this study was to investigate the effects of the maldistribution of ventilation on the real estimate of alveolar volume and to evaluate some indicators suggestive of the presence of maldistribution of ventilation. Thirty healthy individuals and fifty patients with moderate-severe COPD were studied. The alveolar volume was measured either according to the ATS-ERS guidelines or considering the whole expired volume subdivided into five quintiles. An index reflecting the non-uniformity of the distribution of ventilation was then derived (DeltaVA/VE). Significant differences were found when comparing the two measurements and the alveolar volume by quintiles appeared to have increased progressively towards residual volume in healthy individuals and much more in COPD patients. Therefore, DeltaVA/VE resulted in an abnormal increase in COPD. The results of our study suggest that the alveolar volume during the single breath diffusion test should be measured through the collection of a sample of expired volume which could be more representative of the overall gas composition, especially in the presence of uneven distribution of ventilation. Further studies aimed at clarifying the final effects of this way of calculating the alveolar volume on the measure of DLCO are needed. DeltaVA/VE is an index that can help assess the severity of inhomogeneity in COPD patients. There is evidence that the gases inspired into the alveolar regions are not well mixed and that the alveolar units fill and empty sequentially. It is well recognized that the single breath diffusion test requires an estimate of the alveolar volume derived from the dilution of an inert tracer gas. We hypothesize that the alveolar volume measured according to the ATS-ERS method is very different from that calculated considering subsequent phases of the expired volume in those areas where the gas composition is different owing to the fact that the slow-emptying units predominate. In the course of the single breath diffusion test, with the aid of rapid response analyzers it is now possible to follow exhalation to the residual volume after breath-hold and to measure in selected points of the exhalation process the instantaneous expired inert gas fractions which could enter into the calculations of the alveolar volume. In this way, we compared the measurements of the standard alveolar volume obtained following the ATS-ERS recommendations (VAst) to those derived by considering the whole expirate of the same single breath diffusion test, minus the dead spaces, divided into five quintiles and considering the related expired inert gas fractions (VAq) in each quintile. This procedure allowed us to evaluate whether there existed any large discrepancy between the two measurements of the alveolar volume in those cases like COPD, where the process of sequential emptying of different alveolar regions may be excessive. This comparison was made both in healthy individuals and in COPD patients. The COPD patients had FEV1 and FEV1/VC respectively below 70% and 88% of the normal value predicted after inhalation of a bronchodilator. 
Bronchodilator reversibility was defined as an increase of 12% of the baseline value and 200 mL respectively for either FEV1 or FVC above the prebronchodilator baseline, 30 minutes after inhalation of 400 \u03bcg of salbutamol. The study included 30 healthy individuals and 50 patients affected by COPD. The healthy subjects had no history of smoking, nor respiratory symptoms consistent with the diagnosis of COPD nor other pulmonary disease. The healthy subjects were receiving no respiratory medication nor any other medication which could affect the respiratory function. The patients with COPD fulfilled the diagnostic criteria of the Global Initiative for Chronic Obstructive Lung Disease guidelines. In accordance with the purpose of this study the effective alveolar volume was calculated by two different methods. The first method directly measures tracer gas reduction during breath-holding time according to the ATS-ERS guidelines and was defined as VAst, obtained following the ERS-ATS recommendations according to the following formula: VAst = (Vinsp - VD - VD anat) x FICH4/FACH4 (750 mL), where FICH4 = methane concentration in the inspired gas, and FACH4 (750 mL) = methane concentration in the alveolar sample collected for 750 mL after having discarded the instrumental and anatomical dead spaces; Vinsp = inhaled volume; VD = instrumental dead space and VD anat = anatomical dead space. Sampling of the tracer gas CH4 was executed at the mouth level of the patient in real time and in correspondence of the measurement of the inspiratory and mean expiratory volumes. The second method, which allowed us to derive the alveolar volume subdividing the whole expirate into quintiles (VAq), uses measurements made during the same manoeuver. The difference is that the total expired space is divided into quintiles. In these quintiles the mean concentrations of CH4 were promptly read and retained for the calculations during exhalation from TLC to RV, according to the following formula: VAq = (Vinsp - VD - VD anat) x FICH4/FACH4 (quintile), where FACH4 (quintile) = the methane concentration in the expired volume of that quintile; Vinsp = inhaled volume. Two examples of the different ways of deriving the alveolar volume from the single breath CO manoeuver have been reported in Figure. Group data are expressed as mean and standard deviation. Demographic characteristics and parameters of lung function in healthy subjects and COPD patients have been reported in Table. The ideal approach would be to compute automatically, point by point, as a continuum, the FICH4/FECH4 ratio during the whole expiration in the course of the single-breath CO test. In order to evaluate whether the sampling of five quintiles was sufficient, we subdivided the whole expired volume into a series of four, five, six and ten exact portions. The alveolar volume calculated using four portions was significantly different when compared to that obtained using five, six or ten portions in both healthy individuals and in COPD patients. At the same time, no differences were observed when the calculated alveolar volumes, obtained subdividing the whole expirate into six portions, were compared to five or ten exact portions both in healthy individuals and in COPD patients. As a result of this prior analysis, the method of sampling five quintiles appeared sufficiently good, more precise than the use of four portions, not different from the use of six or ten portions. Therefore, the method was considered suitable for the purpose of our study and was ultimately chosen and compared to the ERS-ATS method of measuring the alveolar volume. The repeatability of VAst and of VAq was tested both in healthy individuals and in patients with COPD and has been reported in Table. It is evident that variation in the calculated alveolar volume between two tests appeared statistically significant for the VAst of COPD patients. In the healthy individuals it resulted approximately close to the level of significance. On the contrary, for the alveolar volume calculated by the quintile method, the variation was only statistically significant for the 2nd quintile of the COPD patients. No other significant variations were observed in the healthy individuals nor in the other quintiles of the COPD patients. In addition, repeatability at different lung volumes was compared between healthy individuals and COPD patients in the box-plot graph of Figure; the healthy individuals and COPD patients were not different in this respect. The main findings of this study are as follows: 1. the measure of the alveolar volume is different depending on the point where sampling for evaluation of the alveolar concentration of inert tracer gas methane is done in the course of exhalation of the single breath diffusion test; 2. the alveolar volume measured by the quintile method shows a progressive increase from total lung capacity to residual volume and appears significantly different when compared with that measured according to the ATS-ERS guidelines; 3. its size increases much more in COPD patients than in healthy individuals from the beginning to the end of the exhalation; 4. changes of the mean alveolar volume per litre of the expired volume exhaled, expressed as DeltaVA/VE, are significantly and remarkably greater in COPD patients with severe airflow obstruction than in healthy individuals; 5. DeltaVA/VE represents a parameter that is influenced by the effect of non-uniform distribution of convective ventilation as well as by the increased time-constants of the emptying of lung units in diseased lungs. It is well documented that the single breath diffusion test may markedly increase or slightly change upon the effect of variation in lung volume. Although this study did not provide any conclusive information on how the final DLCO can be affected by the changes of lung volume as well as by size and precise estimate of the alveolar volume, in our COPD patients it seems to increase on the average by 1 mmol/min/mmHg for a total increase of 2.5 litres of alveolar volume, as reported in Figure. The purpose of our study was to indicate the weakness of the ATS-ERS method to measure the alveolar volume, which collects the alveolar inert gas concentration at the beginning of exhalation, especially in diseased lungs. In fact at this exact point (on the average a volume of 750 ml) the concentration of the tracer gas is not representative of the mean alveolar gas concentration and thus of the real alveolar volume, since it represents only the behaviour of the faster lung units. The literature well documents the extent to which the distribution of ventilation becomes progressively more inhomogeneous at high lung volume or under airway obstruction. The exaggerated asymmetry of lung units caused by the obstructive airway diseases may ultimately be responsible for the inequality of gas concentration within alveolar gas and, therefore, for the very inhomogeneous dilution of the concentration of the tracer gas methane in the course of the single breath CO inhalation test. In 1978 Ferris et al. compared different approaches to this measurement. Some explanations for the remarkable differences in the values of the alveolar volume when measured at the different intervals of exhalation are well reported in the paper by Yuh T Huang et al. It follows that the true mean alveolar volume should be that derived from the average of each alveolar volume exhaled in each quintile. In summary, our study provides additional information on the real estimate of the alveolar volume when different sampling points are used in the course of the single breath diffusion test for the assessment of diffusing capacity. A model was developed which subdivided into 5 parts the total volume of air exhaled after the breath-hold manoeuver. The instantaneous concentration of tracer gas methane was considered in each quintile; the calculation of the alveolar volume was consequently derived and compared with that derived from the traditional method according to the ATS-ERS recommendations. A significant difference was found between these two ways of measuring the alveolar volume, and the results showed significant differences in COPD patients. The conclusion drawn is that sampling at the beginning of exhalation of the single breath-test is not representative of the real mean alveolar gas concentration, especially when an important ventilation/perfusion mismatch is present. A non-uniform distribution of ventilation, coupled with an exaggerated time constant of emptying of lung units, seems ultimately to be the mechanism responsible for the differences in the size of the alveolar volume when measured differently in the course of expiration. An index reflecting this process was identified which appeared useful to assess the degree of non-uniformity of the ventilation distribution. 
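To make the two definitions concrete, the following short numerical sketch applies the dilution formula to a single early alveolar sample (VAst) and to five quintile-wise samples whose mean is taken as VAq, and derives a rough DeltaVA/VE-style index. The expired methane fractions, the volumes and the exact form of the index are invented assumptions chosen only to mimic slow-emptying units; the output is illustrative and is not a reproduction of the study's data or procedure.

```python
# Illustrative comparison of VAst (early alveolar sample) with VAq (mean of
# quintile-wise alveolar volumes). All numbers are hypothetical.
FI_CH4 = 0.003                    # inspired methane fraction
V_insp = 4.00                     # inhaled volume, litres
VD_instr, VD_anat = 0.10, 0.15    # instrumental and anatomical dead space, litres

# Mean expired CH4 fraction in each fifth of the dead-space-free expirate;
# values fall towards residual volume as slow-emptying units contribute late.
FA_CH4_quintiles = [0.00230, 0.00222, 0.00214, 0.00205, 0.00196]

def alveolar_volume(fa_ch4):
    # Tracer-gas dilution: VA = (Vinsp - VDinstr - VDanat) * FI_CH4 / FA_CH4
    return (V_insp - VD_instr - VD_anat) * FI_CH4 / fa_ch4

VA_st = alveolar_volume(FA_CH4_quintiles[0])          # early sample ~ first quintile
VA_q = sum(alveolar_volume(f) for f in FA_CH4_quintiles) / len(FA_CH4_quintiles)

expired = V_insp - VD_instr - VD_anat                 # litres of analysed expirate
delta_va_per_ve = 100.0 * (alveolar_volume(FA_CH4_quintiles[-1]) - VA_st) / (VA_st * expired)
print(f"VAst = {VA_st:.2f} L, VAq = {VA_q:.2f} L")
print(f"DeltaVA/VE (rough) = {delta_va_per_ve:.1f} % per litre of expired volume")
```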
These analyses provide a basis for further study in order to test the effects on DLCO of this way of measuring the alveolar volume from the sampling of the instantaneous tracer gas concentrations at different intervals of exhalation, but also to observe the behaviour of the diffusivity of carbon monoxide at different intervals of exhalation, as a direct consequence of the complex emptying process of lung units in different diseased states. DLCO = single breath diffusion capacity for carbon monoxide; VA = alveolar volume; VAst = alveolar volume measured according to the ERS-ATS guidelines; VAq = alveolar volume measured according to the method of subdividing the whole expirate into quintiles and considering the instantaneous expired tracer gas fraction in each quintile; FEV1 = forced expiratory volume in one second; FEV1/VC: ratio of forced expiratory volume in one second to vital capacity or Tiffeneau index; FVC = forced vital capacity; TLC = total lung capacity; RV = residual volume; FRC = functional residual capacity; RV/TLC = ratio of residual volume to total lung capacity; BMI = body mass index expressed in Kg/m2; VD anat = anatomical dead space; VD instrumental = instrumental dead space; VI = inspiratory volume; Hb = concentration of hemoglobin expressed as g/dl of blood; FICH4 = inspiratory fraction of tracer gas methane; FACH4 = alveolar fraction of tracer gas methane; DeltaVA/VE = changes in percentage of the alveolar volume for each litre of expired volume exhaled; ATS = American Thoracic Society; ERS = European Respiratory Society; SD = standard deviation. The author(s) declare that they have no competing interests. RP participated in the design of the study and drafted the manuscript, EF and GC participated in the design of the study and helped to perform it, CC participated in the design of the study and performed the functional tests. All authors read and approved the final manuscript. The pre-publication history for this paper can be accessed here:"} +{"text": "Sir, Drs Mason, Johnson and Rudd suggest that criticism of the guidance issued by the National Institute for Clinical Excellence (NICE) regarding the use of irinotecan and oxaliplatin in the treatment of patients with advanced colorectal cancer may have implications for the FOCUS trial. However, while recognising there may be merit in fine-tuning the sequence of use of these three agents (combinations vs monotherapy), the crucial point is that all patients should have access to these three drugs."} +{"text": "The objective of this work is to obtain the correct relative DNA contents of chromosomes in the normal male and female human diploid genomes for use in FISH analysis of radiation-induced chromosome aberrations. The relative DNA contents of chromosomes in the male and female human diploid genomes have been calculated from the publicly available international Human Genome Project data. New sequence-based data on the relative DNA contents of human chromosomes were compared with the data recommended by the International Atomic Energy Agency in 2001. The differences in the values of the relative DNA contents of chromosomes obtained by using different approaches for 15 human chromosomes, mainly for large chromosomes, were below 2%. For the chromosomes 13, 17, 20 and 22 the differences were above 5%. New sequence-based data on the relative DNA contents of chromosomes in the normal male and female human diploid genomes were obtained. 
This approach, based on the genome sequence, can be recommended for the use in radiation molecular cytogenetics. FISH echnique .Several questions of radiation cytogenetics are connected with the comparison of results obtained by FISH analysis and those by conventional dicentric analysis -10 and wIt is necessary to know the fractions of the genome covered by FISH probes at the translocation analysis in order to obtain the whole genome equivalent genomic frequencies of chromosome aberrations . In mostThe relative human DNA contents given in are recoWith the increasing accuracy of chromosome aberration analysis , the impIn the post-genomic era, with the completion of the international Human Genome Project , new morThe total sizes of the normal male and female diploid human genomes and the relative DNA contents of chromosomes in the diploid genomes were calculated by using the Human Genome Project data on the chromosome lengths presented in the Ensembl database . The resThe DNA contents of all chromosomes, except chromosome 13, were overestimated in the work of Morton when comThe comparison of the data from Table However, noticeable differences (larger than 5%) were found in the relative DNA contents of chromosomes 13, 17, 20 and 22 in the human diploid genomes obtained by different approaches Figure . This reThe coefficient 2.05 in the formula of Lucas et al. was re-cMany radiobiological investigations were carried out with the use of DNA probes specific for large chromosomes because the probabilities of their damages by ionizing radiations and the levels of aberrations are highest and the translocation analysis is more effective. It should be noted that taking into account small differences in the values of the relative DNA contents of large chromosomes from work and TablIn spite of the high-quality sequencing data there are still some uncertainties about the gaps in the genome sequence and human genetic variations . RecentlNevertheless, new values of the relative DNA contents of chromosomes in the normal human diploid genome based on the international Human Genome Project sequence data could be considered as the best data to date.At present we have the unique opportunity to use precise sequence-based parameters of the reference human genome including the relative DNA contents of chromosomes in the human genome instead of the approximate estimates that have been done by indirect methods at the initial stage of the Human Genome Project. New sequence-based data on the relative DNA contents of chromosomes in the normal male and female human diploid genomes were obtained. The approach, based on the DNA sequence data, can be recommended for the use in radiation molecular cytogenetics., release 52 \u2013 December 2008 [The data on the lengths of each human chromosome were taken from the public Ensembl database ber 2008 . The seqFor each pair of the autosomes the relative DNA contents were calculated as a ratio of the doubled DNA size to the size of diploid female and male genomes, correspondingly. Similarly the relative DNA content of the sex chromosome X in the female genome was calculated. 
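As a sketch of the calculation described here, the relative DNA contents follow directly from a table of chromosome lengths, and the same fractions can be used to re-derive a Lucas-type genome-equivalence coefficient. The lengths below are placeholders for only a few chromosomes (a real calculation would use all 24 Ensembl lengths), and the 2/(1 - sum of Ci squared) expression is the standard textbook derivation of the ~2.05 factor, shown here as an assumption about how such a recalculation can be done rather than as the authors' exact procedure.

```python
# Sketch: relative DNA content of each chromosome in a diploid genome and a
# Lucas-type genome-equivalence coefficient. Placeholder lengths (bp) only;
# a real calculation would use the full set of 24 chromosome lengths.
chrom_len = {"1": 247_249_719, "2": 242_951_149, "3": 199_501_827,
             "X": 154_913_754, "Y": 57_772_954}

def relative_contents(sex="male"):
    # Diploid genome: two copies of each autosome, XX for female, XY for male.
    copies = {c: 2 for c in chrom_len if c not in ("X", "Y")}
    copies.update({"X": 2, "Y": 0} if sex == "female" else {"X": 1, "Y": 1})
    total = sum(chrom_len[c] * n for c, n in copies.items())
    return {c: chrom_len[c] * n / total for c, n in copies.items()}

C = relative_contents("male")
print({c: round(f, 4) for c, f in C.items()})

# Assumed standard derivation of the ~2.05 coefficient: the fraction of
# inter-chromosomal translocations involving a painted/unpainted pair is
# 2*fp*(1 - fp) / (1 - sum(Ci^2)), so the coefficient is 2 / (1 - sum(Ci^2)).
coeff = 2.0 / (1.0 - sum(f * f for f in C.values()))
print("coefficient with these placeholder fractions:", round(coeff, 3))
```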
The single DNA contents in the genome were used to obtain the relative DNA content of the sex chromosomes in the diploid male human genome. In the formula derived by Lucas et al., Fp = 2.05 fp(1 - fp)FG, relating the translocation frequency Fp measured using FISH to the genomic translocation frequency FG, where fp is the fraction of the genome covered by the composite probe, the coefficient 2.05 was re-calculated separately for the human female and male genomes by using the sequence-based relative DNA contents of the chromosomes from Table (where Ci denotes the fraction of the DNA content of the i-chromosome in the male or female human diploid genome). The authors declare that they have no competing interests. MVR wrote the manuscript and PIG and LAR contributed significant editorial input and original ideas. All authors read and approved the final manuscript."} +{"text": "Malignant fibrous histiocytoma (MFH) is one of the most common types of soft tissue sarcomas in adults. The most common locations of MFH are the extremities and the trunk, with the most common site for distant metastases being the lung. We describe a case with multiple synchronous sites of myxoid MFH but no lung metastases and presence of abnormalities of 19p13."} +{"text": "The usefulness of pharmacokinetically guided individualisation of drug therapy will depend, among other things, on the quality of the analytical and pharmacokinetic methods used. We surveyed the quality of analytical and pharmacokinetic methodology and reporting in a literature search of the oncology literature from 1987 to 1992, using the Medline database. Thirty articles that examined relationships between normal tissue toxicity and area under the plasma concentration-time curve (AUC) formed the study sample. Analytical procedures were adequately described in 77% of the articles, but details of validation of the assay were seriously deficient in the great majority of articles. Methods for calculation of AUC were also deficient in over half of the articles. The findings suggest that greater attention needs to be paid to the quality of pharmacokinetic investigation in oncology, otherwise progress in the use of pharmacokinetically guided individualisation of dosage may be hindered."} +{"text": "The synthetic models for the structures, spectroscopic properties and catalytic activities of metalloprotein active sites have been reviewed. Calixarenes were used as new biomimetic catalysts because of their advantage of providing preorganization of the catalytic group, which can bind the substrate dynamically; this results in fast turnover and fast release of the products. Functional and structural models based on calixarenes are presented and, in addition, the importance of molecular recognition and non-covalent interactions, e.g. hydrogen bonding, and their role in biological systems are discussed with the help of synthetic systems."} +{"text": "Mothers of a population-based series of 86 children with osteosarcoma or chondrosarcoma were traced and their health status or cause of death ascertained. There were 6 cases of breast cancer among these mothers and 6 other cancers. Risk of breast cancer was approximately three times that expected, and appeared to be highest in mothers of boys and in mothers of children under the median age at diagnosis. The mothers who developed breast cancer were relatively young at diagnosis compared with population data. Risk of other malignancies in the mothers was not in excess of expectation. 
These findings are in line with those reported for breast cancer risk in mothers of children with soft tissue sarcomas, and provide further indications of a genetic component in the aetiology of these cancers."} +{"text": "A redundant publication is a manuscript which fundamentally presents results from the same study in more than one original paper. This term is synonymous with a \"dual\" or \"duplicate\" publication of identical data and with the disaggregated presentation of identical data in multiple publications derived from the same study , published by the same author or group. Hereby, the content of redundant papers may overlap in part or completely, such that the main findings of an original study are published in multiple papers in different electronic or print journals.Redundant publications in biomedical journals are considered unethical for the following reasons -3:\u2022 \"Inflation\" of the available peer-reviewed literature.\u2022 Skewing of evidence-based medicine when readers erroneously assume to be confronted with reports from independent studies.\u2022 Distortion of available scientific data by unjustified overestimation of a therapeutic effect in systematic meta-analyses.\u2022 Increased, unnecessary workload for editors and peer reviewers, leading to a backlog of \"true\" original articles in the publication process.\u2022 Cost-ineffective use of resources, waste of journal space, and waste of readers' time by reading republished material considered to be original work.\u2022 Distortion of the purpose of biomedical journals as being the source of new information.\u2022 Potential infringement of international copyright law.The Journal of Bone and Joint Surgery (British and American volumes), which is considered the most prestigious journal in the field of orthopedics, revealed that one in 13 original articles were either duplicate or fragmented publications . Permission for such secondary publication should be free of charge\".5. \"The footnote on the title page of the secondary version informs readers, peers, and documenting agencies that the paper has been published in whole or in part and states the primary reference. A suitable footnote might read: 6. \"The title of the secondary publication should indicate that it is a secondary publication of a primary publication\".Surgical Journals Editors Group [According to the official consensus statement by the rs Group , the fol\u2022 Prior publication in meeting program abstract booklets and proceedings from scientific meetings. These must be acknowledged and referenced in the final manuscript.\u2022 Expansion of the original database, published in the primary source, by 50% or more. Previous manuscripts reporting the original database must be referenced in the secondary publication.Ethical standards in publishing place a high level of self responsibility upon publishers, editors, authors and readers. The scientific/medical community at every level should safeguard the medical literature upon which so many of our therapeutic concepts are based and so many of our patients are treated. The \"slippery slope\" of self rationalization that leads a physician or scientist to justify unauthorized duplicate publication is the same deceptive impulse that results in false data and unjustified conclusions. Ultimately, what we publish and read may end up impacting the care and outcome of individuals who present to us for relief of disease and suffering. There is no line on any academic curriculum vitae worth a human life. 
On behalf of our patients, we must remain vigilant and educate ourselves, our colleagues, and our students on the high ethical standards of scientific publishing. The author(s) declare that they have no competing interests. All authors contributed equally to the design and writing of this editorial. All authors read and approved the final manuscript."} +{"text": "There was an error in the penultimate sentence of the Abstract. The correct sentence is: Native North Americans have received ancestry from a source closely related to modern North-East Asians (Mongolians and Oroquen) that is distinct from the sources for native South Americans, implying multiple waves of migration into the Americas."} +{"text": "The post-natal development of \u201cnatural\u201d resistance of Balb/c mice to the challenge of syngeneic tumours was studied using injections of various doses of live neoplastic cells into untreated animals of increasing age, from neonatal to 12 weeks. The minimum quantity of neoplastic cells capable of inducing tumours increased in parallel with the age of the animals. Immunodepression with whole body irradiation with X-rays reduced the resistance offered by adult mice to tumour challenge to that of the neonate. The relationship of the increase in resistance to tumour challenge with the development of the animals' own immune response during the course of post-natal growth is discussed."} +{"text": "The production of graft-versus-host (GVH) reactions in (PVGc X Wistar) F1 hybrids by the transfer of PVGc spleen cells resulted in significant resistance of these recipients to a subsequent challenge with the PVGc leukaemia. Protection was markedly dependent on dose and timing of allogeneic cell transfer and was abrogated by irradiation of the cells prior to transfer. GVH activity was shown to be a prerequisite for induction of the protective effect but was equally effective when produced by the transfer of Wistar spleen cells in place of PVGc cells. These points, plus the fact that in vitro investigations of possible immune mechanisms failed to demonstrate cytotoxic immunity in treated rats, suggested a nonspecific \"bystander\" effect as the mechanism of protection. The implications of such a mechanism are discussed."} +{"text": "We report on a new methodology which allows the direct analysis ex vivo of tumour cells and host cells from a metastasised organ (liver or spleen) at any time point during the metastatic process and without any further in vitro culture. First, we used a tumour cell line transduced with the bacterial gene lacZ, which permits the detection of the procaryotic enzyme beta-galactosidase in eukaryotic cells at the single cell level, thus allowing fluorescence-activated cell sorting (FACS) analysis of tumour cells from metastasised target organs. Second, we established a method for the separation and enrichment of tumour and host cells from target organs of metastasis with a high viability and reproducibility. As exemplified with the murine lymphoma ESb, this new methodology permits the study of molecules of importance for metastasis or anti-tumour immunity at the RNA or protein level in tumour and host cells during the whole process of metastasis. This novel approach may open new possibilities of developing strategies for intervention in tumour progression, since it allows the determination of the optimal window in time for successful treatments. 
The possibility of direct analysis of tumour and host cell properties also provides a new method for the evaluation of the effects of immunisation with tumour vaccines or of gene therapy."} +{"text": "Malawi is reassessing its HIV prevention strategy in the light of a limited reduction in the epidemic. No community based incidence studies have been carried out in Malawi, so estimates of where new infections are occurring require the use of mathematical models and knowledge of the size and sexual behaviour of different groups. The results can help to choose where HIV prevention interventions are most needed.The UNAIDS Mode of Transmission model was populated with Malawi data and estimates of incident cases calculated for each exposure group. Scenarios of single and multiple interventions of varying success were used to identify those interventions most likely to reduce incident cases.The groups accounting for most new infections were the low-risk heterosexual group - the discordant couples (37%) and those who had casual sex and their partners (a further 16% and 27% respectively) of new cases.Circumcision, condoms with casual sex and bar girls and improved STI treatment had limited effect in reducing incident cases, while condom use with discordant couples, abstinence and a zero-grazing campaign had major effects. The combination of a successful strategy to eliminate multiple concurrent partners and a successful strategy to eliminate all infections between discordant couples would reduce incident cases by 99%.concurrency and discordancy.A revitalised HIV prevention strategy will need to include interventions which tackle the two modes of transmission now found to be so important in Malawi - HIV infection prevention has been largely unsuccessful in many sub-Saharan countries including Malawi, where HIV prevalence remains high despite prevention activities ,2. In MaThree things make it timely to review national HIV prevention strategies. Firstly, epidemiologists have belatedly come to realise the importance of concurrent sex partners in the spread of HIV infection. Until recently transmission models, while taking the importance of multiple partnerships into account, failed to include the magnitude of the spread within concurrent partnerships in the early stages of infection. An example of this failure is in a paper written in 1989 which wrongly anticipated a heterosexually spread epidemic in the UK because the difference between serial monogamous multiple partnerships and concurrent partnerships was not realised [Secondly, the very high transmission risk in the early infection period was not fully realised until the paper from Rakai in 2005 . The stuThirdly, the belief that deep seated cultural practice of polygamy and concurrent sexual partnerships in Africa could not be changed was disproved . The proThe Mode of Transmission spreadsheet provides a simple means of assessing the impact of HIV prevention measures singly and in combination. This paper assesses the impact using a set of input variables to accurately reflect what is likely to be happening in Malawi. For this exercise absolute numbers are not as important as relative ones because the exercise is looking at the relative not the absolute value of different interventions.The mode of transmission model uses categories of those at risk of exposure: low risk are single partner couples; the casual sex group are either those having pre-marital sex or those having multiple partners; no risk are those not having sex. 
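The arithmetic behind a model of this kind is simple enough to sketch in code. The function below is an illustrative reconstruction of the per-group calculation, not the actual UNAIDS spreadsheet: the real workbook includes further adjustments (for example circumcision, mixing between groups, and injecting or medical exposures), and all parameter names and default values here are assumptions introduced for the example.

```python
def new_infections(group_size, hiv_prev, partner_prev, partners_per_year,
                   acts_per_partner, beta_per_act, condom_use=0.0,
                   condom_efficacy=0.9, sti_cofactor=1.0):
    """Expected new HIV infections per year in one exposure group.

    Each susceptible member runs a per-act transmission risk with partners of a
    given HIV prevalence; condom use discounts the risk and an STI cofactor
    inflates it (a simplified version of the Mode of Transmission logic).
    """
    susceptibles = group_size * (1.0 - hiv_prev)
    beta_eff = beta_per_act * sti_cofactor * (1.0 - condom_use * condom_efficacy)
    acts = partners_per_year * acts_per_partner
    # Probability that a susceptible person escapes infection in all acts of the year
    p_escape = (1.0 - partner_prev * beta_eff) ** acts
    return susceptibles * (1.0 - p_escape)
```

Summing the group totals and dividing by the adult population gives an overall incidence figure of the kind reported below.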
The mode of transmission model was populated with best estimate Malawi data for 56 of the 62 input variables required, with default estimates for Africa being used for the other six. In the intervention scenarios, the proportion of the population with a sexually transmitted infection (STI) was halved in response to improved syndromic treatment services. The use of condoms with all STIs was assessed by reducing the cofactor (4 in the model) to zero. Uniform use of condoms by bar girls was assessed by increasing condom use to 100% for bar girls (from 92%) and their clients (from 64%). The circumcision rate was doubled and trebled to assess the effect on incidence in males. A 100% successful zero-grazing campaign was assessed by converting all casual and commercial sex risk to low risk adults. Doubling condom use was assessed to identify the value of this increased protection. The effect of abstinence for one year was calculated by converting 20% of the 15-24 year olds from high and low risk to no-risk. The overall national HIV incidence per year was estimated using the Estimation and Projection Package (EPP). No participants were involved in the study and hence ethics committee approval was not required. New cases in the year were estimated to be 94,454, which is an annual national incidence of 1.6% (Table). The number of cases in each exposure group depended on its size (Figure). The results of model manipulation to assess the effect of possible HIV prevention interventions showed a limited effect for some - condom use with bar girls, circumcision, abstinence and reducing STIs. Major effect was found with others - condom use with discordant couples, a zero-grazing campaign, condom use with a sexually transmitted infection (STI) and in high risk sex (Table). The mode of transmission model is easy to use and adapt to the local situation. Sufficient local data were found to populate 90% of necessary model variables. The use of such data and a simple model gave national planners confidence to base a new HIV prevention strategy on the results. The results were sufficiently different from preconceived notions of which population groups had the most new infections to make a big shift in strategic emphasis. It appears that, while originally developed for low level epidemics, the mode of transmission model can be used in generalised epidemics if sufficient local data are available, as found in this study. The UNAIDS Estimation and Projection Package (EPP) provides estimates of overall levels of HIV transmission. The mode of transmission model provides estimates of the relative contribution of the different risk groups. The predominant data set was from the most recent DHS. This has significant problems of interpretation in relation to recall of sexual activity, on which the model relies. In this DHS, which had an HIV test refusal rate of 22%, the recall of sexual activity concerning coital frequency and multiple partners is a particular cause of uncertainty. Fortunately Malawi has some detailed sexual activity data collected by the Malawi Diffusion and Ideation Project over a number of years. However, the main limitation of the study is the sensitivity of the results to the choice of risk of infection in the different risk groups. Until evidence is available to be confident about the proportion of concurrent partners and new infections that make up the casual risk group, any analysis relies on the assumptions used in the model. The model can be re-run with revised assumptions as this evidence becomes available.
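The intervention scenarios described above amount to re-running the same calculation with one or two inputs changed. A hedged illustration follows, reusing the new_infections sketch given earlier (after the risk-group definitions) with entirely made-up baseline numbers rather than the actual Malawi inputs.

```python
# Made-up baseline for one casual-sex exposure group (not Malawi data).
baseline = dict(group_size=100_000, hiv_prev=0.10, partner_prev=0.12,
                partners_per_year=2, acts_per_partner=40,
                beta_per_act=0.001, condom_use=0.30, sti_cofactor=1.2)

base_cases = new_infections(**baseline)

# "Doubling condom use" scenario: change a single input, keep the rest fixed.
scenario_cases = new_infections(**dict(baseline, condom_use=0.60))

reduction = 100.0 * (1.0 - scenario_cases / base_cases)
print(f"baseline: {base_cases:.0f} new infections; "
      f"doubled condom use: {scenario_cases:.0f} ({reduction:.0f}% fewer)")
```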
An advantage of the model over more sophisticated ones is that public health practitioners can test quite easily the sensitivity of the model to changes in input variables about which they are unsure. Indeed the model is simple enough to be used in planning meetings where participants can suggest alternative estimates of input variables and alternative mixes of possible interventions.The choice of intervention depends on the relative importance of the mode of transmission with which it interferes, its effectiveness and its cost. The cost-effectiveness of HIV/AIDS interventions is known . While cconcurrency and discordancy.But which HIV prevention interventions are effective in the Malawian setting? The choice of appropriate interventions requires knowledge and experience of ways which are likely to change sexual behaviour in the Malawi population. Clearly interventions used to date have been inadequate, insufficient or inappropriate. A revitalised strategy will need to include more of those existing strategies that have worked and new ones such as a \"zero grazing campaign\" and couple HCT which tackle the two modes of transmission now found to be so important in Malawi - In conclusion the mode of transmission model offers a practical means of identifying in countries such as Malawi with a generalised HIV epidemic those groups within the community for which HIV prevention programmes can be targeted.The authors declare that they have no competing interests.KM undertook the compilation of data sources for input variables and populated the model. CB modified the model, undertook the analysis of possible interventions and wrote the first draft of the manuscript for publication. Both authors read and approved the final manuscript.The pre-publication history for this paper can be accessed here:http://www.biomedcentral.com/1472-6963/10/243/prepubData sources used to populate the Mode of Transmission model - Malawi 2007. Data sources referenced which were used to populate the Mode of Transmission model demonstrating the availability of most data items from local data sources and the limited number variables for which regional estimates were used.-30Data sClick here for fileSensitivity analysis of Mode of Transmission results. Sensitivity analysis assessing the effect on HIV incidence using Mode of Transmission model by varying estimates of risk of transmission variable and size of selected risk groups - Malawi 2008.Click here for file"} +{"text": "In spite of the reasonable size of the numbers forming the basis for the analysis, no statistically significant differences were found between smokers and non-smokers. However, it was found that the frequency of epithelial dysplasia is not higher among smokers than among non-smokers.The smoking habits of 345 Danish and 184 Hungarian leukoplakia patients were analysed against the histopathology of the leukoplakias,"} +{"text": "We report observations on the spread by metastasis and infiltration of a transplantable tumour in rats treated by 60Co gamma-irradiation of the primary, irradiation plus parenteral cyclophosphamide, or parenteral cyclophosphamide alone. The proportion of animals with overt disseminated disease and the extent of spread were measured with respect to the time elapsed after implantation and treatment of the primary tumour. 
The incidence of metastatic disease was broadly similar for all treatment groups, but the extent of dissemination was greater in rats whose treatment included cyclophosphamide."} +{"text": "The toxicity mechanisms of mercury and tin organic derivatives are still under debate. Generally the presence of organic moieties in their molecules makes these compounds lipophilic and membrane active species. The recent results suggest that Hg and Sn compounds deplete HS-groups in proteins, glutathione and glutathione-dependent enzymatic systems; this process also results in the production of reactive oxygen species (ROS), the enhancement of membrane lipids peroxidation and damage of the antioxidative defence system. The goal of this review is to present recent results in the studies oriented towards the role of organomercury and organotin compounds in the xenobiotic-mediated enhancement of radical production and hence in the promotion of cell damage as a result of enhanced lipids peroxidation. Moreover the conception of the carbon to metal bond cleavage that leads to the generation of reactive organic radicals is discussed as one of the mechanisms of mercury and tin organic derivatives toxicity. The possible use of natural and synthetic antioxidants as detoxification agents is described. The data collected recently and presented here are fundamentally important to recognizing the difference between the role of metal center and of organic fragments in the biochemical behavior of organomercury and organotin compounds in their interaction with primary biological targets when entering a living organism."} +{"text": "We illustrate the principle of conformal radiotherapy by discussing the case of a patient with a primitive neuroectodermal tumour of the chest wall. Recent advances in radiotherapy planning enable precise localization of the planning target volume (PTV) and normal organs at risk of irradiation. Customized blocks are subsequently designed to produce a treatment field that \u2018conforms\u2019 to the PTV. The use of conformal radiotherapy (CRT) in this case facilitated the delivery of concurrent chemotherapy and radiotherapy by significantly reducing the volume of red marrow irradiated. The lack of acute and late toxicities was attributed to optimal exclusion of normal tissues from the treatment field, made possible by CRT."} +{"text": "British Journal of Cancer (2002) 87, 28\u201330. doi:10.1038/sj.bjc.6600427 www.bjcancer.com Cancer Research UK \u00a9 2002 This is a salutary lesson that highlights the need for restraint in propounding the immediate benefits of the human genome sequence to the elucidation of disease processes and the development of new treatments. Nevertheless EBV has come a long way and this slender volume edited by Kenzo Takada nicely highlights the current state of the field and sets the scene for the challenges ahead. The study of virus-associated human malignancy has provided many important insights into the pathogenesis of cancer as well as facilitating the development and testing of novel therapeutic strategies relevant to the treatment of common cancers. Epstein-Barr virus or EBV, as the first oncogenic human virus to be identified, has been at the vanguard of these studies and even 40 years after its discovery the virus continues to fascinate and surprise as it yields up its secrets. The study of EBV intersects many disciplines including cell biology, immunology, molecular biology and pathology and this provides an important advantage in tackling the complexity of the tumorigenic process. 
The virus also provides a useful paradigm for the benefits and challenges associated with the impact of having a complete DNA sequence on our understanding of human disease. The full DNA sequence of the EBV genome, some 172 kilobases, was published in 1984 thus we are almost 20 years into the post-genomic era but the \u2018big questions\u2019 relating to such areas as the nature of virus persistence and replication myc expression, a central player in the pathogenesis of BL, is reviewed by Ruf et al. Another important contribution from Takada's group has been the use of recombinant drug-resistant forms of EBV to infect epithelial cells and thus Imai et al provide a superb review of this area highlighting the association of the virus with NPC and other carcinomas and examining over 20 years' worth of in vitro studies attempting to elucidate the nature of the virus-epithelial cell interaction.\u2018Epstein-Barr Virus and Human Disease\u2019 is a selective collection of reviews from experts in Japan and the United States covering a range of topics. While considering detailed aspects of EBV biology such as virus replication and the function of EBV-encoded glycoproteins, some key fundamental features of the virus like the function of the latent membrane proteins (LMP1 and LMP2) or the latest theories on the nature and site of EBV persistence are not covered. This reflects the interests of the various authors and with nine of the 14 chapters being provided by Japanese researchers the volume is naturally biased towards certain topics at the expense of others. This is best exemplified in Section II which covers EBV-associated malignancies and provides two chapters on those tumours which are of particularly interest and importance in Japan but relatively rare in the West. The other review on AIDS lymphoma in relation to both EBV and HHV8 seems a little out of place and some consideration of the evidence and recent work implicating EBV in the aetiology of Hodgkin's disease or nasopharyngeal carcinoma (NPC) may have been more appropriate. These minor deficiencies are more than compensated by the editor's own contribution and that of Jeff Sample's group describing their studies using the Burkitt lymphoma (BL) cell line, Akata, to examine the role of EBV in the pathogenesis of BL. Takada was the first to describe the ability of EBV-positive Akata BL cells to lose the virus in culture with consequent loss of the transformed phenotype and his chapter summarises these studies. This system provides a valuable tool for dissecting the role of individual EBV-encoded latent genes in the oncogenic process and for generating recombinant forms of the virus. The use of this approach to examine the impact of EBV infection on cell growth and on c-oriP, is described and the possible role of cellular licensing factors in controlling replication of the EBV episome is considered. Two chapters cover the other aspect of virus replication, that is the activation of the EBV lytic cycle which results in the production of progeny virus. Tsurimi examines the role of various viral proteins in this complex process and provides a model for the EBV replication fork whereas Fujiwara describes his work on the possible influence of the EBNA2 protein on lytic cycle induction. Lindsey Hutt-Fletcher's chapter on the two major EBV glycoprotein complexes is an important reminder of the critical functions of these membrane proteins in relation to EBV replication and tropism and how these might impact on EBV pathogenesis. 
The final section of the book deals with animal models and therapeutic approaches. Fred Wang reviews the EBV-like herpesviruses infecting Old World primates and describes the similarities, at both the molecular and biological level, of these so-called lymphocryptoviruses (LCVs). Rhesus LCV is a particularly good model for EBV in terms of tumour association, in vitro B cell transformation and the natural history of infection and thus provides a valuable experimental model to study key aspects of EBV pathogenesis such as the role of latent genes in acute and persistent infection, virus immune evasion strategies and virus-associated tumorigenesis. Finally, Clio Rooney and co-workers consider the use of EBV-specific cytotoxic T lymphocytes for the treatment of virus-associated tumours. This group has pioneered this form of adoptive immunotherapy for the treatment of EBV-induced lymphoproliferative disease and the chapter reviews their experience with a comprehensive assessment of the \u2018pros and cons\u2019 of this therapy. The possibility of using this approach for the treatment of other EBV-associated tumours such as Hodgkin's disease is also discussed.The volume also contains a section on the molecular mechanisms of maintenance and disruption of virus latency. Two reviews deal with the factors governing the extrachromosomal maintenance and replication of the circular EBV genome. Here the interaction of the EBNA1 protein with the latent viral replication origin, \u2018Things are seldom what they seem\u2019 when it comes to studying virus-associated human tumours. There are many challenges and paradoxes. For instance, EBV is probably one of the most common viral infections throughout the world but only a small proportion of infected individuals develop tumours and these differ dependent on where you reside. This highlights the complexity of oncogenesis and the fact that whilst EBV may be essential in this process other factors working in concert with virus infection are required to fully transform a normal cell. For many years EBV-induced transformation of B lymphocytes in culture was considered to be a model for virus-associated cancers until it was discovered that EBV adopts distinct patterns of latent gene expression in different tumours and that in certain situations such as BL the virus-encoded proteins considered to have oncogenic activity are not expressed! These are important points for post-graduate students and post-doctoral research fellows to appreciate which may not be clear from a reading of this volume. However, the book provides a useful but selective snap shot of the current state of play contrasting our detailed knowledge of certain molecular aspects of EBV biology with almost complete ignorance of fundamentally important processes such as the role of EBV in the pathogenesis of NPC or the contribution of epithelial cell infection to virus persistence and replication. The book also highlights the value of new approaches, particularly the ability to generate EBV recombinants, as well as illustrating how new treatments can be developed without a full understanding of the disease process. 
There's much still to do and, with an increasing number of viruses being implicated in the development of human cancer, EBV stands as a paradigm for both tumour virology and for the challenges of the post-genomic era."} +{"text": "We studied 273 premenopausal women recruited from mammography units who had different degrees of density of the breast parenchyma on mammography, in whom we measured height, weight and skinfold thicknesses. Mammograms were digitized to high spatial resolution by a scanning densitometer and images analysed to measure the area of dense tissue and the total area of the breast. Per cent density and the area of non-dense tissue were calculated from these measurements. We found that the mammographic measures had different associations with body size. Weight and the Quetelet index of obesity were strongly and positively associated with the area of non-dense tissue and with the total area of the breast, but less strongly and negatively correlated with the area of dense tissue. We also found a strong inverse relationship between the areas of radiologically dense and non-dense breast tissue. Statistical models containing anthropometric variables explained up to 8% of the variance in dense area, but explained up to 49% of the variance in non-dense area and 43% of variance in total area. These results suggest that aetiological studies in breast cancer that use mammographic density should consider dense and non-dense tissues separately. In addition to per cent density, methods should be examined that combine information from these two tissues."} +{"text": "An electrolytic reduction system has been developed to model the cytotoxic action of a range of nitroimidazole drugs against DNA hypoxic cells or anaerobic microorganisms. THe degree of damage induced by these drugs (measured as the release of [14C]-dT from DNA) and their relative rates of reduction have been correlated with their redox potentials. The results show that the correlation of drug-induced damage and electron affinity is related to the amount of drug reduced, and supports the hypothesis that at the molecular level the cytotoxic mechanism of reduced nitroimidazoles is identical in hypoxic mammalian cells, bacteria and protozoa."} +{"text": "Objective: Numerous published reports have linked various disease states and pregnancy-related conditions with meteorologic factors such as weather, humidity, and temperature. The purpose of this study was to determine if temperature and dew point affect the incidence of pyelonephritis during pregnancy.Methods: A retrospective chart review of a 4-year period from 1989 to 1992 was performed. The records of women who were diagnosed with pyelonephritis during pregnancy were abstracted for the dates of admission. The climatic records of the Tampa Bay area of Florida were obtained from the National Weather Service.Results: The average, minimum, or maximum daily temperature or average daily dew point during the month of admission had no significant effect on the rate of pyelonephritis during pregnancy in the Tampa Bay area.Conclusions: The rate of pyelonephritis during pregnancy per number of deliveries in the Tampa Bay area was not affected by the average, minimum, or maximum daily temperature or average daily dew point."} +{"text": "Recently, three new polyomaviruses have been reported to infect humans. It has also been suggested that lymphotropic polyomavirus, a virus of simian origin, infects humans. 
KI and WU polyomaviruses have been detected mainly in specimens from the respiratory tract while Merkel cell polyomavirus has been described in a very high percentage of Merkel cell carcinomas. The distribution, excretion level and transmission routes of these viruses remain unknown.Here we analyzed the presence and characteristics of newly described human polyomaviruses in urban sewage and river water in order to assess the excretion level and the potential role of water as a route of transmission of these viruses. Nested-PCR assays were designed for the sensitive detection of the viruses studied and the amplicons obtained were confirmed by sequencing analysis. The viruses were concentrated following a methodology previously developed for the detection of JC and BK human polyomaviruses in environmental samples. JC polyomavirus and human adenoviruses were used as markers of human contamination in the samples. Merkel cell polyomavirus was detected in 7/8 urban sewage samples collected and in 2/7 river water samples. Also one urine sample from a pregnant woman, out of 4 samples analyzed, was positive for this virus. KI and WU polyomaviruses were identified in 1/8 and 2/8 sewage samples respectively. The viral strains detected were highly homologous with other strains reported from several other geographical areas. Lymphotropic polyomavirus was not detected in any of the 13 sewage neither in 9 biosolid/sludge samples analyzed.This is the first description of a virus isolated from sewage and river water with a strong association with cancer. Our data indicate that the Merkel cell polyomavirus is prevalent in the population and that it may be disseminated through the fecal/urine contamination of water. The procedure developed may constitute a useful tool for studying the excreted strains, prevalence and transmission of these recently described polyomaviruses. Polyomaviridae family that persistently infect humans and cause disease in immunocompromised individuals. These viruses have been potentially implicated in certain cancers .The nucleotide sequences obtained were deposited in GenBank [GenBank: None of the 22 sewage, sludge and biosolid samples tested positive for LPyV although typical concentrations of JCPyV and HAdV indicated human fecal contamination (data not shown). The nPCR assay showed a sensitivity of 1-10 genomic copies/reaction when the complete LPyV genome cloned iThe observation that MCPyV DNA was much more frequently detected than that of KIPyV or WUPyV might reflect that MCPyV is a more prevalent infection or that it is a highly excreted virus.Our results on MCPyV in urine, urban sewage and river water strongly support the notion that this virus shows an excretion pattern that resembles that of JCPyV and BKPyV. Human excretion of new polyomaviruses, especially MCPyV, may lead to fecal (urine) contamination of water and food.in vitro culture of the new polyomaviruses because no cell culture systems for these viruses are available at present. Furthermore, for other human polyomaviruses, such as JCPyV, the regulatory regions of strains excreted in urine present an archetypal structure and are inefficiently cultured.In this study we did not attempt the To our knowledge, this is the first report of the presence of a virus strongly related to human cancer in sewage and river water samples. 
We propose that the methodology reported here is suitable to study the prevalence, excretion pattern and genetic variability of recently discovered human polyomaviruses in environmental matrices.The authors declare that they have no competing interests.SBM coordinated the study, concentrated urine samples and nucleic acid extractions of the urine samples, collaborated in PCR assays, typed the amplicons detected and drafted the manuscript. JRM concentrated the sewage and biosolid samples and performed the nucleic acid extractions; he also collaborated in the PCR analysis and in the sequencing of the resulting amplicons. BC concentrated river water samples and performed nucleic acid extraction of the same samples. AC collaborated in the production of standards for the quantification of HAdV and JCPyV and in the nucleotide sequence comparisons. RG participated in the development of the methodology, conception and coordination of the study and helped to draft the manuscript. All authors read and approved the final manuscript.SBM is an assistant professor at the Department of Microbiology of the Faculty of Biology, University of Barcelona. Her main research interests are the epidemiology of human and animal polyomaviruses. She addresses their transmission through the environment and their potential as indicators of the presence of human or/and animal fecal contamination."} +{"text": "Although transmissible spongiform encephalopathies (TSE) or prion diseases are neurodegenerative disorders, the immune system is also involved, at least in the early stages of their pathogenesis. Extensive studies have focused on cells targeted by the TSE agent for its replication but few on the possible involvement of macrophages in its clearance, as in more conventional diseases. This review summarises some of the experiments aimed at demonstrating a role for macrophages in TSE and presents the application to TSE of the macrophage \"suicide\" technique, which has been used to clarify the implication of these cells in the early steps of TSE pathogenesis."} +{"text": "The death certificate only (DCO) index, which quantifies the proportion of patients for whom the death certificate provides the only notification to the registry, is a widely used measure of incompleteness of population-based cancer registration. This paper provides an algebraic assessment and a quantitative illustration of the relationship between the DCO index and incompleteness of cancer registration. It is shown that the relationship between the DCO index and incompleteness of registration is strongly dependent on the case fatality rate and the misclassification rates of cancer deaths among unregistered patients. Therefore, the DCO index is a very poor indicator of incompleteness. Similar limitations apply to the DCN index (proportion of cases first notified by death certificate), which has been proposed as an alternative measure of incompleteness."} +{"text": "Esophageal resection remains the only curative option in high grade dysplasia of the Barrett esophagus and non metastasized esophageal cancer. In addition, it may also be an adequate treatment in selected cases of benign disease. A wide variety of minimally invasive procedures have become available in esophageal surgery. Aim of the present review article is to evaluate minimally invasive procedures for esophageal resection, especially the approach performed through right thoracoscopy. 
Esophagus resection may be the adequate treatment in selected cases of benign esophageal diseases, but the most frequent indications for esophageal resection are the high grade dysplasia of the Barrett esophagus and the non metastasized esophageal cancer.Prognosis of patients with esophageal cancer remains poor: only 56% of the patients presented with esophageal cancer have resectable disease with an overall five-years survival rate of 10% while the five-years survival rate among operated patients is still only 18%.3 The adv4There is still controversy about the approach and extent of necessary resection. A recent prospective randomized study, in which en bloc esophageal resections with systematic abdominal and mediastinal lymph node dissection (two fields lymphadenectomy) has been compared with the classical transhiatal approach, showed that the transhiatal approach carries lower morbidity than through right thoracotomy performed extended en-bloc lymphadenectomy. Moreover, a trend was observed with advantage for the transthoracic approach in tumors located in the mid-esophagus, for the most frequent lower esophageal cancer the median survival, disease free and quality-adjusted survival were not statistically different.In an attempt to lower the mortality and morbidity rates of the conventional esophagectomy, advantages in minimally invasive instrumentation and gained experience in minimally invasive surgery make a minimal invasive approach possible. Several minimal invasive approaches mimic the conventional procedures for esophagectomy: 1) transhiatal esophageal resection by laparoscopy; 2) esophagectomy by right thoracoscopy, the so called three-stage operation and 3) esophageal resection by means of endoscopic microsurgical mediastinal dissection.\u201311 MoreoThe aim of the present review article is to evaluate this minimal invasive approach for esophagus resection, especially the approach performed through right thoracoscopy.Benign esophageal diseases are an infrequent indication for esophageal resection. Nevertheless, important caustic and peptic stenosis not suitable for treatment by balloon dilatation may finally be considered indications for resection. End stage motility diseases of the esophagus like achalasia and Chagas's disease with important mega-esophagus and the presence of multiple esophageal epiphrenic diverticula may also be an indication for resection. Also, borderline high grade dysplasia of the Barrett esophagus may be a good indication for resection by minimal invasive techniques.Cancer of the esophagus is the most frequent indication for resection. Once diagnosed, it is important to establish a good preoperative assessment of the resectability. Enhanced CT scan of thorax and abdomen and an endoscopic ultrasonography can determine the lymph node involvement, local growth in other organs and existence of metastases. Positron emission tomography (PET's scan) is currently under evaluation in order to assess the capacity to properly exclude distant metastases. Moreover, treatment of the esophageal cancer needs a multidisciplinary approach and usually patients with a locally advanced cancer will be treated by neo-adjuvant chemo-radiation, making surgery perhaps more difficult.All operable cancers in any location can be approached by minimal invasive surgery. 
For distal tumors the transhiatal or the right thoracoscopic approaches seem both a good option whereas for higher tumors, in the proximal esophagus and around the carina the right thoracoscopic approach is the only possibility.Resection of the esophagus (and lymphadenectomy) through a right thoracoscopy. Mobilization of the stomach and creation of gastric conduit can be performed by laparotomy or by laparoscopy followed by a cervical anastomosis (three stage technique).Ivor Lewis operation: right thoracoscopy (esophagectomy and lymphadenectomy) followed by laparoscopic gastric mobilization and intrathoracic anastomosis between mediastinal esophagus and gastric tube.Transhiatal approach, total laparoscopic or laparoscopic assisted dissection of the esophagus up to the carina followed by mobilization of the stomach, creation of the gastric conduit and subsequent cervical anastomosis.Esophageal resection through mediastinoscopy: endoscopic microsurgical dissection.Robot assisted procedureset al., described in 1992 the first combined thoracoscopic and laparoscopic approach for esophageal resection for cancer with reconstruction by means of a gastric conduit anastomosed to the neck The Theet alThe authors conclude that RTE was feasible, providing an effective lymphadenctomy with low blood loss. Standardization of the technique should reduce the complication rate, which is in the range of the rate for open resection.et al., comparing the standard transhiatal (and lymphadenectomy of the celiac trunk) approach with the transthoracic approach (and lymphadenectomy of the mediastinum and celiac trunk) has not demonstrated a significant difference between the groups concerning survival at five years. An important trend exists in favour of the transthoracic group, but not for the tumors located in the distal esophagus and cardia. In order to increase visualization during the mediastinal dissection of the esophagus decreasing the operative trauma and postoperative complications, different minimal invasive approaches have been developed first as imitation of the conventional procedures: the three stage procedure , the Ivor-Lewis procedure and the transhiatal approach. Later on, but also initiated early in the nineties by Cuschieri, the right thoracoscopic approach in prone position is becoming increasingly used because the reduction of postoperative pulmonary complications. In the beginning, all procedures had long operative times and the postoperative complications rates were similar to the conventional approach. More experience of surgeons with advanced minimally invasive procedures, the introduction of more sophisticated devices for dissection and sealing of tissues, such as Ligasure, Ultracision and endostaplers and the general acceptance that oncological procedures can be performed by minimally invasive procedures without oncological disadvantage for the patient, has stimulated importantly further developments. The enthusiasm of different Japanese groups and the systematically standardization of the Pittsburgh's group of Luketich have demonstrated that the three stage procedure can be performed not only safely, but also in an comparable operating time with important advantages in the postoperative recovery of the patients and a good oncological outcome, at least as good as after conventional surgery. One of the problems of this approach will be the difficulty of the double endotracheal tube and the collapse of the right lung to visualize adequately the mediastinum. 
With short operative time, the produced shunt by the collapse is to overcome; nevertheless in elderly people and longer thoracoscopic time, pulmonary complications may rise. Luketich et al., described an incidence of 7.7% of pneumonia and 1,8% of acute respiratory disease as major complications along with 4.5% atelectasis with mucus plug requiring bronchoscopy as minor complications. Moreover these problems do not affect the hospital stay of seven days and the very low mortality rate of 1.4%. To overcome these complications, the right thoracoscopic esophageal dissection performed in prone position will permit a reasonable partial ventilation of the right lung with as consequence less postoperative pulmonary complications. Palanivelu et al., have an incidence of 1.54% of pneumonia and 0.77% of acute respiratory disease. Moreover their operative time is comparable with the standard conventional and right thoracoscopic procedures. Concerning oncological parameters, R0 -R1 resections, number of lymph nodes resected and finally the survival according to stage, the two most important series of Luketich et al. and Palanivelu et al., are similar to the published with conventional procedures. The role of the promising robotic assisted esophageal resection has to be defined in a near future.Surgical treatment of esophageal cancer, with important dissections at abdominal, thoracic and cervical spaces means a tremendous operative trauma for the patient with high postoperative discomfort and a high morbidity and mortality. The approach of the esophagus through a right thoracotomy in combination with a laparotomy and cervical incision has an important rate of complications, especially the pulmonary, that will account for the long intensive care stay. Transhiatal approach, according to Orringer will reduce this complication rate by avoiding opening of thoracic cavity and therefore reducing the number of pulmonary complications. Drawback of this approach is the blind character of the mediastinal dissection that can only be performed with some displacement of the heart with consequent hemodynamic repercussions. A randomized study from Hulscher et al., compares recently the outcomes between open and minimally invasive esophagectomy and concluded that minimally invasive techniques to resect the esophagus in patients with cancer were confirmed to be safe and comparable to an open approach with respect to postoperative recovery and cancer survival.[Smithers survival.Another important point is to define the role of the laparoscopic transhiatal approach. Important advantage will be the avoidance of the thoracoscopic stage with theoretically less respiratory complications. As for the conventional transhiatal, the approach is ideal for the very distal esophageal and gastro-esophageal junction tumors. These tumors are the most frequent in the western world. Important advantage for this approach will be the initial assessment of the abdomen for metastases, the dissection of the tumor between distal esophagus and stomach, with a good view of both sides of the tumor, the hiatus and both pleuras. The dissection can be accomplished very adequately in the anatomical planes up to the carina. The dissection and construction of the gastric conduit will be performed at the same stage without changing the positioning of the patient. Drawback of this procedure is the lack of carinal lymphadenectomy. 
The results obtained by our group with this approach in an unselected group of 50 patients are very promising with a sConcerning the survival of the patients, new protocols will give more insight in the role of chemoradiation as neoadjuvant treatment for these tumors. Probably the combination of a preoperative precise stagimg of the tumor, the use of efficacious neoadjuvant therapy and the minimally invasive surgical resection will be the adequate approach for these patients, producing less postoperative complications and a better survival."} +{"text": "Rana tigerina and thereby reduce the complications involved in the sepsis.Frog skin has been sequentially and scientifically evaluated by our group for its wound healing efficiency. Owing to the complex structure of skin, attempts were being made to analyse the role of individual constituents in different phases of healing. Our earlier papers have shown the significance of frog skin not only in wound healing but also enhancing the proliferating activity of the epidermal and dermal cells which are instrumental for normal healing process. We also have identified for the first time novel antimicrobial peptides from the skin of The current study envisages the role of frog skin lipids in the inflammatory phase of wound healing. The lipid moiety of the frog skin dominated by phospholipids exhibited a dose dependent acceleration of healing irrespective of the mode of application. The efficiency of the extract is attributed partially to the anti-inflammatory activity as observed by the histochemical and immunostimulatory together with plethysmographic studies.Thus, frog skin for the first time has been demonstrated to possess lipid components with pharmaceutical and therapeutic potential. The identification and characterization of such natural healing molecules and evaluating their mechanism of action would therefore provide basis for understanding the cues of Nature and hence can be used for application in medicine. Hoplobatrachus sp.) skin was found to be very effective. Acceleration of wound healing by the application of several biological membranes is very common owing to their efficacy in preventing infection and sepsis. However, frog skin plays a complex role in addition to helping in the haemostasis and mechanical protection to wound site as described by Purna Sai et al., [et al., [et al, [Rana tigerina. The healing or repair process can be classified into three main overlapping and interrelating phases [Wound healing is a complex phenomenon involving a highly dynamic integrated series of cellular, physiological and biochemical processes. In our search for wound healing mechanisms, a Naga based technique of using frog according to Folch The total extract obtained from the frog skin was analyzed to study the amounts of neutral lipids, phospholipids, free fatty acids, cholesterol, cholesterol esters, diglycerides and triglycerides present.2) was created under mild anaesthesia as described in detail by Purna Sai et al., [Female albino rats of Wistar strain weighing 120 g were selected. Open excision type of wounds of a standard size and Elson and Morgan(1993), respectively ,9. Uroni61 and ElFemale albino rats of the Wistar strain (120 g) were used in groups each comprising of 6 for the study. Pedal oedema was produced by injecting 0.05 ml of freshly prepared suspension of 1% carrageenan in the plantar surface of the right hind paw. 
Foot pad thickness was measured plethysmographically at regular intervals after the injection of carrageenan.In one set of experiments, the lipid extract was administered intraperitoneally 1 hour before carrageenan injection as a suspension in polysorbate 80 which serves as a vehicle. In the second set of experiments, the lipid extract was injected 1 hour after the injection of carrageenan. Three doses of the lipid extract were selected for evaluating the anti-inflammatory effect and were compared with similar doses of known anti-inflammatory agent hydrocortisone.At specific intervals (1 and 4 hrs) after the carrageenan induction of oedema, both the experimental and control rats were sacrificed and the plantar region of the paw was fixed in (10%) formalin, processed by routine histological procedures and subsequently embedded in paraffin. Serial sections were cut at 8-10 \u03bc thickness and were stained with haematoxylin and eosin.et al., [9 SRBC/ml for immunization and challenge.The antigen was prepared according to Sharma et al., . Sheep e9 SRBC/ml into the right hind foot pad on day 0 and challenged 7 days later by injecting intradermally the same amount of SRBC into the left hind foot pad. Thickness of the left hind foot pad was measured with a vernier callipers at 4 and 24 hours after challenge. Frog skin lipid extract was injected intraperitoneally in doses of 6 and 12 mg/Kg body weight on each of 2 days prior to immunization, on the day of immunization and on each of the 2 days after immunization in the experimental group, while the control group was injected similarly with polysorbate 80.Hypersensitivity reaction to SRBC was induced in rats following the method of Doherty . Groups 9 SRBC on day 0. Blood samples were collected from individual animals by the retro-orbital puncture on day 7 and day 14. Frog skin lipid extract was administered intraperitoneally on days in the group forming the experimental series, while the control group was similarly injected with polysorbate 80. Antibody response was observed by the haemagglutination technique.Two groups of six rats each were immunized by injecting intraperitoneally 0.5 ml of 5 \u00d7 10The composition of the lipid extract as determined by us in our earlier paper is indicLipid extracts treated rats both topically and intraperitoneally healed completely by 12 days while the control rats took 20 days for healing. The wound healing observed was found to be dose dependent.The contraction measurements indicate a dose dependent reduction of the wound area. The rate of contraction in the experimental series was much faster and regular compared to the control group Figures and 2.The hydroxyproline and hexosamine content was found to increase gradually up to the 8th day and then decreased gradually. The increase in the hydroxyproline Figure , hexosamA dose related decrease in the pedal oedema was observed as measured plethysmographically. The decrease in the oedema was significantly greater when the extract was injected prior to the injection of carrageenan. The reduction in pedal oedema was more in the lipid extract treated animals in comparison with the response to hydrocortisone which in turn was more efficient than the control Tables and 4.Histological and histochemical studies of the rat paws showed stratified squamous epithelium with the underlying sub-epidermal region consisting of capillaries and muscles. 
The onset of inflammation upon the injection of carrageenan is indicated in the control, hydrocortisone, lipid extract treated animals Figure . An inteIntraperitoneal administration of lipid extract for 5 days around immunization produced a dose related increase in early (4 hrs) and delayed (24 hrs) hypersensitivity reactions in rats [Oedema induced by carrageenan is a model of exudate inflammation and agents effective in decreasing carrageenan induced oedema can be used as anti-inflammatory agents in acute inflammation. Carrageenan induced oedema is mediated through histamine and serotonin in the first hour followed by kinin release up to two and half hours and lastly by prostaglandins from two and half to six hours ,16. The , (1973) support Immunostimulatory studies indicated that lipid extract stimulates both the humoral and cell mediated responses in the experimental animals. Increase in the DTH reaction in rats in response to thymus dependent antigen revealed the stimulating effect of lipid extract on lymphocytes and accessory cell types required for the expression of the reaction.The augmentation of the humoral response to SRBC by the lipid extract as evidenced by the increased haemagglutination indicated the enhanced responsiveness of macrophages and lymphocytes involved in the antibody synthesis. In view of the key role played by macrophages in co-ordinating the processing and presentation of antigen to lymphocytes, the augmentation of the humoral response to SRBC reveals that lipid extract may enhance the effect by facilitating these processes.These studies suggest that any agent capable of accelerating any of the phases of wound healing would result in more rapid healing thereby reducing scar formation.in toto. Hence, exploring the mechanisms of the bioactive constituents would enable the use of these cues to achieve multilayered cell engineering regimens to produce more complex tissue architectures. This would lead to an increase in sophisticated biotechnological strategies for the improvement of techniques to engineer bio-artificial grafts and implants for burns and wound injuries in particular.In conclusion, the frog skin lipid extract irrespective of the mode, whether topically applied or injected intraperitoneally accelerate healing. The influence of the lipid extract on acute inflammation and immunostimulatory response indicated that the mechanism of quickened healing could be attributed partly to the reduction in the inflammatory phase that corresponds to the prime phase in wound healing. Thus, the frog skin lipid extract enhances the healing by playing a pivotal role in the first two phases of healing although it did not possess any significant antimicrobial effect. These studies therefore confirm the scientific significance of frog skin and its constituents in wound healing. This may be attributed to the fact that the amphibians being the first true land vertebrates connecting land and water obviously developed certain evolutionary specializations for regeneration and repair so as to withstand the onslaught of predators causing frequent injuries if not killing them DTH: Delayed Type Hypersensitivity; SRBC: Sheep Red Blood Cells.The authors declare that they have no competing interests.VK carried out the experiments and involved in drafting the manuscript. MB was involved in reviewing the manuscript. 
RR is the head of the department and was involved in general scientific discussions, and PK conceived and designed the study and was instrumental in coordinating as well as drafting the manuscript. All the authors read and approved the final manuscript."} +{"text": "This review explores the UK CropNet site. The project is aimed at aiding the comparative mapping of cereal and other crop genomes. The site provides software tools for use by those working on genome mapping, and access to an array of databases that will be of interest to all members of the plant genomics research community, using several ACeDB interfaces. All screen views from the website are reproduced with the kind permission of Dr Sean May, Director, Nottingham Arabidopsis Stock Centre (NASC)."} +{"text": "Minisatellite DNA probes which can detect a large number of autosomal loci dispersed throughout the human genome were used to examine the constitutional and tumour DNA of 35 patients with a variety of cancers of which eight were of gastrointestinal origin. Somatic changes were seen in the tumour DNA in ten of the 35 cases. The changes included alterations in the relative intensities of hybridising DNA fragments, and, in three cases of cancers of gastrointestinal origin, the appearance of novel minisatellite fragments not seen in the corresponding constitutional DNA. The results of this preliminary study suggest that DNA fingerprint analysis provides a useful technique for identifying somatic changes in cancers."} +{"text": "The ascorbyl radical concentration has been observed, by means of electron spin resonance spectroscopy, in the blood and spleen of female Wistar rats carrying a Yoshida tumour. The ascorbyl radical concentration of the tumour tissue itself was studied as the tumour was developing, and as it was regressing after treatment with methylene dimethane sulphonate. Changes in the concentration of this radical may be related to host tumour reactions."} +{"text": "If the chloride level exceeds 250 mg l-1, then the sweetness of the beer is enhanced, but yeast flocculation can be hindered. An excess of sulphate can give a sharp, dry edge to hopped beers and excessive amounts of nitrate have been found to harm the yeast metabolism after conversion to the nitrite form. As water is a primary ingredient within beer, its quality and type is a fundamental factor in establishing many of the distinctive regional beers that can be found in the United Kingdom and is thus monitored carefully. The majority of anions found in beer are a consequence of impurities derived from the water used during the brewing process. The process of beer manufacture consists of malting, brewing and fermentation followed by maturation before filtration and finally storage. Strict quality control is required because the presence of certain anions outside strictly defined tolerance limits can affect the flavour characteristics of the finished product. The anions present were quantified using the technique of ion chromatography with the Metrohm modular system following sample preparation. The analysis produced a result of the order of 200 mg l-1"} +{"text": "The specificity of action of mature blood cell extracts on their own progenitor cells was investigated by measuring their effects on the structuredness of the cytoplasmic matrix (SCM) using the technique of fluorescence polarization.
Changes in SCM induced by the various extracts are probably closely related to the proliferation state of the cells.Saline extracts of lymphocytes, granulocytes and erythrocytes have been partially purified by ultrafiltration into selected molecular weight ranges and each tested against proliferative populations of lymphoid, granulocytic and erythroid cells. In all cases, complete specificity of effect on SCM was found, LNEs affecting only lymphoid cell populations, GCEs affecting only the granulocytic cell populations and RCEs affecting only erythroid cells. In each case, with the possible exception of the RCEs, the active fractions reside in the molecular weight ranges reported in the literature for cell extracts possessing proliferation inhibitory properties."} +{"text": "Although the rhomboid lip is a well-known structure constructing the foramen of Luschka, less attention has been directed to the structure for posterior fossa microsurgeries. The authors report two cases of the hemifacial spasm (HFS) with a large rhomboid lip, focusing on the importance of the structure during microvascular decompression.A 59-year-old female presenting with left HFS was admitted to our hospital. A preoperative magnetic resonance image demonstrated an offending artery at the root exit zone of the VII nerve. The patient underwent microvascular decompression through the lateral suboccipital approach. The intraoperative findings showed that a large rhomboid lip adhered to the IX and X cranial nerves and prevented the exposure of the root exit zone of the VII cranial nerve. The rhomboid lip was meticulously separated from the cranial nerves so that the choroid plexus of the foramen of Luschka and the rhomboid lip could be safely lifted with a spatula, and the offending artery was successfully detached from the root exit zone. In another case of a 60-year-old male, the rhomboid lip was so large that it needed to be incised before separating it from the lower cranial nerves. In each case, the HFS was resolved following surgery without any new deficits.The large rhomboid lip adhering to the cranial nerves should be given more attention in the posterior fossa surgeries and should be managed based on the microsurgical anatomy for preventing unexpected lower cranial nerve deficit. The choroid plexus protruding from the foramen of Luschka is an important landmark of the root exit zone of the VII cranial nerve during microvascular decompression surgery for hemifacial spasm (HFS).45101213 45101236A 59-year-old female presenting with left-sided HFS for 2 years was admitted to our hospital. Neurological examinations demonstrated typical HFS; it is characterized by twitching tonic spasm and synkinesis of the facial muscles. There were no other symptoms. The preoperative magnetic resonance (MR) angiogram revealed that the left posterior inferior cerebellar artery (PICA) formed a rostral loop at the level of the internal acoustic meatus . MR imagMicrovascular decompression of the left VII cranial nerve was performed through the lateral suboccipital approach. Auditory brainstem evoked response (ABR) and facial electromyography were used for intraoperative neurophysiological monitoring. The patient was placed in a lateral park bench position and craniotomy was performed below the posterior end of the incisura mastoidea along the medial boarder of the inferior half of the sigmoid sinus. After dural opening, dissection of the arachnoid membrane around the lower cranial nerves was started. 
When the inferolateral margin of the cerebellum and the flocculus were gently elevated, a large rhomboid lip was found at the dorsal side of the IX and X cranial nerves . The rhoA 60-year-old male presenting with right-sided HFS for 15 years was admitted to our hospital. Neurological examinations demonstrated typical HFS. The preoperative MR angiogram revealed that the right PICA formed a rostral loop at the level of the internal acoustic meatus. MR images revealed that the PICA compressed the root exit zone of the VII cranial nerve .Microvascular decompression of the right VII cranial nerve was performed through the lateral suboccipital approach with the aid of the same intraoperative monitoring as in Case 1. Intradurally, a large rhomboid lip prevented the visual tract for the root exit zone of the VII cranial nerve and adhered to the dorsal side of the IX and X cranial nerves . The surThe lateral recesses are narrow, curved pouches formed by the union of the roof and the floor of the fourth ventricle. They extend laterally below the cerebellar peduncles and open through the foramina of Luschka into the cerebellopontine angles. The ventral wall of each lateral recess is formed by the floor of the fourth ventricle and the rhomboid lip, which is a sheet-like layer of neural tissue that extends laterally from the floor and unites with the tela choroidea to form a pouch at the outer extremity of the lateral recess .611] Th11 Th611]36lateral suboccipital infrafloccular approach, in which the flocculus is gently retracted in a caudorostral direction perpendicular to the VIII cranial nerve To our knowledge, adhesion of the large rhomboid lip to the lower cranial nerves has not been mentioned in the literature on microvasular decompression although the situation does not seem uncommon. One of the possible reasons is that surgeons can mistake the rhomboid lip for part of the arachnoid membrane even when the rhomboid lip is large. The rhomboid lip is a slightly thicker and tougher structure than the arachnoid membrane, and usually does not seem as transparent as the arachnoid membrane. It may still be difficult to distinguish between the arachnoid membrane and the rhomboid lip, especially without anatomical knowledge of the rhomboid lip. Another possible reason for the large rhomboid lip, which has not been mentioned, is that the identification of the rhomboid lip might be difficult especially through the lateral route to the root exit zone of the VII cranial nerve. The infrafloccular approach seems suitable for identifying the large rhomboid lip from the anatomical aspect Figure ,c, and tThis report sheds light on the clinical importance of the rhomboid lip during microvascular decompression. The anatomical knowledge of the rhomboid lip and the adjacent area may facilitate safer exposure of the root exit zone of the VII cranial nerve. Further studies on the anatomical variation of the rhomboid lip in its size or distance to the cranial nerves are needed to reinforce the importance of the structure for microvascular decompression."} +{"text": "The use of DNA microarrays opens up the possibility of measuring the expression levels of thousands of genes simultaneously under different conditions. Time-courseexperiments allow researchers to study the dynamics of gene interactions. Theinference of genetic networks from such measures can give important insights for theunderstanding of a variety of biological problems. 
Most of the existing methods for genetic network reconstruction require many experimental data points, or can only be applied to the reconstruction of small subnetworks. Here we present a method that reduces the dimensionality of the dataset and then extracts the significant dynamic correlations among genes. The method requires a number of points achievable in common time-course experiments."} +{"text": "Resection of large lipomatous tumours in the subdeltoid region remains technically challenging due to the risk of injury to the axillary neurovascular bundle. We describe a novel deltoid release and reinsertion technique for resection of large lipomatous tumours of the sub-deltoid region and report the functional and oncologic outcomes of six patients who underwent this procedure. Three cases were diagnosed histologically as atypical lipoma and three cases were diagnosed as lipoma. There was one local recurrence in a case of an atypical lipoma. Rotator cuff function was comparable to that of the contralateral side in all cases and the average Constant Score adopted by the European Shoulder and Elbow Society was 84 (range 81 to 92) out of 100. We conclude that patients with large sub-deltoid lipomatous tumours who undergo resection through a previously undescribed deltoid release and reinsertion technique have excellent functional outcome with a low risk for recurrence. The upper extremity is affected by bone and soft tissue neoplasms one-third as often as the lower extremity. Lipomatous tumours of the soft tissues are the most common soft tissue tumours and these lesions can occur at every age and at almost any anatomical location. The deltoid muscle has a multipennate origin from the lateral third of the clavicle, the acromion process and the spine of scapula, and is the largest muscle of the shoulder girdle. Its function is of paramount importance in abduction and flexion. The axillary nerve exits in the quadrangular space below the lower border of teres minor, where it passes around posterior and lateral to the humerus on the deep surface of the deltoid muscle. The nerve then splits into anterior and posterior trunks, both of which run intimately with the deep surface of the deltoid muscle. Tumours in the sub-deltoid region are therefore particularly difficult to resect due to the risk of injury to the axillary nerve. The aim of this study is to report the surgical and functional outcomes of our patient population with sub-deltoid low-grade lipomatous tumours who underwent marginal resection of their tumours through extensive deltoid mobilization with preservation of the axillary nerve branches. A retrospective review of a prospectively maintained database was performed. Review of the database was approved by our institution's Research Ethics Board. We identified seven cases of large lipomatous tumours located in the subdeltoid region treated at our institution between March 1997 and July 2007. One patient was lost to follow-up, therefore a total of six patients were included in the follow-up section of the study. Patient demographics, indication for surgery, pathology and operative records were reviewed. Functional shoulder assessment was performed using the Constant scoring system adopted by the European Society for Shoulder and Elbow Surgery. With the patient under general anaesthesia and placed in a beach-chair position, we utilized a standard anterior deltopectoral approach to the shoulder.
After the incision was made through skin and subcutaneous tissue, dissection was carried out through the delto-pectoral interval after isolation of the cephalic vein medially. A section of the anterior deltoid was sacrificed with the vein if necessary. Exposure was carried proximally to the clavicle and distally to the insertion of the deltoid.Mobilization of the deltoid was performed by releasing its multipennate origin with the use of a reciprocating saw from the anterior and lateral surface of the clavicle and laterally from the lateral edge of the acromion, and extending posteriorly to the quadrangular space as deemed necessary. The deltoid was also partially released from its insertion onto the humerus if necessary. The muscle was then reflected approximately 90 degrees to allowClosure was completed with reinsertion of the deltoid muscle to the acromion and lateral clavicle using multiple drill holes and attachment using nonabsorbable suture . The insThe operative arm was placed in a shoulder immobilizer for three weeks, with limited active abduction and flexion. Progressive active assisted and active exercises were then initiated and progressed gradually to deltoid and rotator cuff strengthening exercises.From 1997 to 2007, seven patients with sub deltoid lipomas or well-differentiated liposarcomas underwent excision of the tumour . One patThere were no wound complications. Minimal atrophy of the anteromedial deltoid fibres was present in all six patients. One patient who required sacrifice of axillary nerve fibres had incomplete motor and sensory deficit of that nerve. A second patient had sensory dysfunction with persistent numbness and parasthesias in the distribution of the axillary nerve. The average Constant Score was 84 , compared to 90 in the nonoperative shoulder . All sixLocal recurrence was documented in one case after nine years of the initial surgery. Final pathologic diagnosis in this case was atypical lipoma.Lipomatous tumours are the most common soft tissue neoplasms. These tumours can occur at every age and at almost any anatomical location in the body. The shoulder is one of the most common sites where these tumours may occur. Tumour excision at this site is challenging for the surgical oncologist in terms of proximity of the tumour to vital neurovascular structures and the subsequent need to balance the oncological resection and optional functional outcome.The deltoid muscle has a multipennate origin from the clavicle, acromion, and the scapula. The axillary neurovascular bundle lies in close proximity to the muscle's under surface, rendering the approach to tumours of the sub-deltoid space at risk for neurovascular injury. The use of a conventional anterior or posterior approach to resect sub deltoid lipomatous tumours may not be adequate for complete resection of a tumour which extends beneath the muscle and adheres to the axillary nerve, or extends posteriorly to the quadrilateral space.Martini was the All patients included in the current study had minor anteromedial deltoid atrophy. The deltoid atrophy is likely the result of minor denervation or interference with vascular supply of the muscle. However, the atrophy did not result in cosmetic or functional deficits. Two patients had axillary nerve dysfunction; one of them had a large tumour involving branches of the axillary nerve which were sacrificed in order to obtain adequate resection. Therefore, an unexpected axillary nerve injury occurred in 1 of 5 patients. 
These results are comparable to those of Tamai et al., who reported normal axillary nerve function in both patients reported [Functional assessment unitizing the Shoulder Constant Score adopted by the European Society for Shoulder and Elbow Surgery showed that all six patients were free of pain and able to return to full activities. While the range of motion in forward flexion, lateral elevation, and external rotation was normal in all patients; five patients had limitation of internal rotation up to the T12 vertebra and one had symmetrical internal rotation to the interscapular at the T7 level. Minimal deltoid muscle weakness evidenced by difference in abduction strength compared to the contra-lateral side was present in half of the patients, but was not functionally significant as evidenced by the Constant Score.In conclusion, we present a novel surgical approach to resection of lipomatous tumours of the sub-deltoid region. Our proximal-release approach results in excellent functional outcomes with a low risk of local recurrence and injury to the axillary nerve."} +{"text": "It has been claimed that publicity surrounding popular celebrity Jade Goody's experience of cervical cancer will raise awareness about the disease. This study examines the content of newspaper articles covering her illness to consider whether 'mobilising information' which could encourage women to adopt risk-reducing and health promoting behaviours has been included.Content analysis of 15 national newspapers published between August 2008 and April 2009In the extensive coverage of Goody's illness (527 articles in the 7 months of study) few newspaper articles included information that might make women more aware of the signs and symptoms or risk factors for the disease, or discussed the role of the human papilloma virus (HPV) and the recently introduced HPV vaccination programme to reduce the future incidence of cervical cancer. For example, less than 5% of articles mentioned well-known risk-factors for cervical cancer and less than 8% gave any information about HPV. The 'human interest' aspects of Goody's illness were more extensively covered.Newspaper coverage of Goody's illness has tended not to include factual or educational information that could mobilise or inform women, or help them to recognise early symptoms. However, the focus on personal tragedy may encourage women to be receptive to HPV vaccination or screening if her story acts as a reminder that cervical cancer can be a devastating and fatal disease in the longer term. Her seven-month battle with cancer attracted intense media coverage across the world. In the UK, this publicity coincided with the introduction in September 2008 of the Human Papillomavirus (HPV) vaccination programme aimed at protecting girls from cervical cancer. The timing of the coverage of Goody's experience of cervical cancer is thus potentially crucial as uptake of vaccines depends on perceptions of the balance between any putative adverse effects of a vaccine and the perceived severity and consequences of contracting the disease, . Altious year The publious year. Whilst ious year these daThe diagnosis and experience of illness amongst other celebrities, such as Olivia Newton John (breast cancer), Lance Armstrong (testicular cancer), Michael J Fox (Parkinson Disease) and Katie Couric (Colon Cancer) have also been credited with raising public awareness about their illnesses-34. 
HoweIn the articles covering Jade Goody's cervical cancer story there was almost no mention of her sexual history or behaviour. The media's coyness about discussing the association between cervical cancer risk and sexual behaviour in these articles is in stark contrast to the negative scrutiny of Goody's personal life prior to her diagnosis with cancer. It also contrasts with recent newspaper coverage of cervical cancer 'framed' around tThe coverage of Goody's illness focussed more on one young woman's experience of diagnosis, treatment and death from cervical cancer and is likely to contribute to a general public awareness that cervical cancer can be a devastating and fatal disease. This knowledge could mobilise women to take action and encourage them to be more receptive to screening or HPV vaccination, as research on vaccine decision-making suggests that the perceived risk posed by the disease to be prevented is weighed against the perceived risk of the vaccines-4. The nOur study has limitations. It only considered coverage in 15 newspapers, and was not able to examine other popular media which may have covered different topics or been more widely accessed by young women. Our analysis of coverage ran up to the date of Goody's funeral, and it is plausible that any later coverage or her illness may have emphasised different aspects of the causes, consequences and experience of cervical cancer. Furthermore, it was not designed to examine audience reception and understanding of the coverage.The extent to which any 'Goody Effect' contributes to saving lives in the longer term depends on how long her story acts as a reminder of the potentially devastating impact of this disease to all women including those in the highest risk groups, and whether it motivates 'hard to reach' groups to believe that actions such as attending for cervical screening and follow up treatments can make the difference between life and death. However, the failure of newsprint coverage of her illness and death to include basic information on the signs and symptoms of cervical cancer, common risk factors for the disease, and the possibility of protecting young women against contracting the viral agent which causes cervical cancer through vaccination presents an ongoing challenge to cancer researchers and specialists in cancer education. Lewison et al highlight cancer researchers' increasing reliance on the \"diverse media for informing and engaging the public\". They suggest that \"If the aim is to achieve a 'public understanding of cancer', then we need to ask how this current interface works and whether it is achieving the aspirations of both the public and the research community\". Our anaThe authors declare that they have no competing interests. The Medical Research Council (MRC) funded this research. SH and KH are funded by the MRC.SH, KH participated in the design, analysis of the data and to the writing and redrafting the final version of the manuscript.The pre-publication history for this paper can be accessed here:http://www.biomedcentral.com/1471-2458/10/368/prepub"} +{"text": "In rats with 2-week obstructive jaundice the sensitivity to endotoxin was studied and the effect of a single dose of endotoxin on histological development in the kidney, liver and spleen was also investigated. We were tested the effect on accumulation and distribution within organs, of fibrinogen labelled with radioactive iodine 125. We showed an increased sensitivity to endotoxin in obstructive jaundice. 
The cause of death in most rats was acute circulatory failure during the course of endotoxic shock, without clinical features of disseminated intravascular coagulation. In the isotope study, after endotoxin administration there was a specific dynamic increase of fibrinogen accumulation in the kidneys of rats with obstructive jaundice. We proposed that the cause of the kidney changes during the course of obstructive jaundice could be the local activation of intrarenal coagulation."} +{"text": "The metabolic pathway called the arachidonic acid cascade produces a wide range of eicosanoids, such as prostaglandins, thromboxanes and leukotrienes with potent biological activities. Recombinant DNA techniques have made it possible to determine the nucleotide sequences of cDNAs and/or genomic structures for the enzymes involved in the pathway. Sequence comparison analyses of the accumulated sequence data have brought great insights into the structure, function and molecular evolution of the enzymes. This paper reviews the sequence comparison analyses of the enzymes involved in the arachidonic acid cascade."} +{"text": "Over the years, there has been a tremendous increase in the use of fluoroscopy in orthopaedics. The risk of contracting cancer is significantly higher for an orthopedic surgeon. Hip and spine surgeries account for 99% of the total radiation dose. The amount of radiation to patients and operating surgeon depends on the position of the patient and the type of protection used during the surgery. A retrospective study to assess the influence of surgical experience on the radiation exposure of the operating surgeon during fluoroscopically assisted fixation of fractures of neck of femur (dynamic hip screw) and ankle (Weber B) was performed at a district general hospital in the United Kingdom. Sixty patients with undisplaced intertrochanteric fracture were included in the hip group, and 60 patients with isolated fracture of the lateral malleolus without comminution were included in the ankle group. The hip and ankle groups were further divided into subgroups of 20 patients each depending on the operative experience of the operating surgeon. All patients had fluoroscopically assisted fixation of fracture by the same approach and technique. The radiation dose and screening time of each group were recorded and analyzed. The radiation dose and screening time during fluoroscopically assisted fixation of fracture neck of femur were significantly higher with surgeons and trainees with less than 3 years of surgical experience in comparison with surgeons with more than 10 years of experience. The radiation dose and screening time during fluoroscopically assisted fixation of Weber B fracture of ankle were relatively independent of the operating surgeon's surgical experience. The experience of the operating surgeon is one of the important factors affecting screening time and radiation dose during fluoroscopically assisted fixation of fracture neck of femur. The use of snapshot pulsed fluoroscopy and involvement of senior surgeons could significantly reduce the radiation dose and screening time. The use of fluoroscopy has increased tremendously in the field of orthopedics. The aim of the study was to assess the effect of surgical experience on the radiation exposure and screening time during fluoroscopically assisted fixation of undisplaced fractures of neck of femur (dynamic hip screw) and ankle (Weber B). A retrospective study was conducted at a district general hospital in the United Kingdom from 2003 to 2005.
The surgeons were divided into three different groups based on their years of operative experience. A total of 700 patients with fracture of neck of femur underwent fluoroscopically assisted fixation of the hip (dynamic hip screws), and 225 patients with ankle fracture underwent fluoroscopically assisted fixation during 2003\u20132005. Only patients with undisplaced intertrochanteric fracture were included in the hip group. Patients with multiple fractures, comminuted fracture, compound fracture, and significant morbidity were excluded. Patients with a fracture of the lateral malleolus without comminution were included in the ankle group. Patients with polytrauma, compound fracture, medial malleolar fracture, and comminution were excluded. In each subgroup of the hip group, 20 consecutive patients who satisfied the inclusion criteria and were operated by the appropriate surgeon were included. The age and sex of the patients were comparable in the hip and ankle group. Similarly in the ankle group, in each subgroup, 20 consecutive patients who satisfied the inclusion criteria and were operated by the appropriate surgeon were included. All screening was performed using a mobile C arm fluoroscopy unit. The total filtration of the x-ray tube was 6.35 mm; the focus to detector distance was 99.5 cm, and the diameter of the routinely used input field of view was 17 cm. The dose of the patient was routinely monitored and recorded in accordance with the requirements of Regulation 7 of The Ionising Radiation Regulations [IR(ME)R] 2000 by means of a dose area product (DAP) meter permanently built into the x-ray tube housing. The DAP meter was calibrated following the procedure described in the National Protocol for Patient Dose Measurement in Diagnostic Radiology. DAP is the currently accepted method of assessing the radiation exposure in complex diagnostic x-ray procedures and is measured in Gy cm2. In addition to the DAP value, the overall fluoroscopic screening time (minutes) was recorded for each patient. All the patients with fracture neck of femur underwent dynamic hip screw fixation by a lateral approach. The patients with Weber B fracture of ankle underwent fixation of the lateral malleolus using a one-third tubular plate through a lateral approach. The radiation dose and screening times of the various groups were compared using Student's t test, and P values were calculated. The radiation dose (DAP) and screening time during fixation of a hip fracture were almost three times greater with surgeons and trainees with less than 3 years of operative experience (Group I) when compared with those of the surgeons with more than 10 years of surgical experience (Group III) (P = 0.0005). The radiation dose and screening time during fixation of the ankle were not significantly affected by the surgical experience of the operating surgeon and were comparable among all three groups (P = 0.1015). In this study, overall patient doses as monitored by DAP were reassuringly well below these published values. The radiation exposure during fixation of fractures of the hip was significantly higher when they were performed by surgeons with fewer than 3 years of operative experience, and this difference was statistically significant (P = 0.0005); in the ankle group the corresponding difference was not significant (P = 0.1015). This study shows that the experience and training of the operating surgeon is one of the most important factors determining the radiation exposure to patients in fixation of hip fracture.
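As an illustration of the group comparison described above, the following sketch applies Student's t test to synthetic DAP readings; the per-patient values are invented for illustration and are not the study's data, with group means chosen only to echo the roughly threefold difference reported:

```python
# Sketch of the group comparison described above, using synthetic DAP values
# (Gy cm^2); the real per-patient readings are not reported in this extract,
# so the numbers below are illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
dap_group_i = rng.normal(loc=1.5, scale=0.5, size=20)    # trainees, <3 years experience
dap_group_iii = rng.normal(loc=0.5, scale=0.2, size=20)  # consultants, >10 years experience

# equal_var=True gives the classical Student's t test used in the study.
t_stat, p_value = stats.ttest_ind(dap_group_i, dap_group_iii, equal_var=True)
print(f"mean DAP: {dap_group_i.mean():.2f} vs {dap_group_iii.mean():.2f} Gy cm^2")
print(f"t = {t_stat:.2f}, P = {p_value:.4f}")
```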
The involvement of senior surgeons in hip fracture fixation is likely to reduce the radiation dose to patients and surgeons. We endorse the practice of using snap shot pulsed fluoroscopy, last image hold, and good set up geometry as a means of dose optimization. The practice of continuous screening during fixation of fractures of neck of femur and ankle must be discouraged."} +{"text": "Reconstruction of the facial hard- and soft tissues is of special concern for the rehabilitation of patients especially after ablative tumor surgery has been performed. Impaired soft and hard tissue conditions as a sequelae of extensive surgical resection and/or radiotherapy may impede common reconstruction methodes. Even free flaps may not be used without interposition of a vein graft as recipient vessels are not available as a consequence of radical neck dissection.We describe the reconstruction of the facial hard- and soft tissues with a free parasacpular flap in a patient who had received ablative tumor surgery and radical cervical lymphadenectomy as a treatment regimen for squamous cell carcinoma (SCC). To replace the missing cervical blood vessels an arteriovenous subclavia-shunt using a saphena magna graft was created. Microvascular free flap transfer was performed as a 2-stage procedure two weeks after the shunt operation. The microvascular reconstructive technique is described in detail. Various reconstructive options have been used in the past for the reconstruction of head and neck tissues. In a high number of patients, mainly treated because of head and neck carcinoma, tissue defects may develop following tumor therapy because of extensive resection or chronic effects of radiotherapy. In the initial years, reconstruction was limited to the use of pedicled flaps such as the pectoralis major, latissimus dorsi, and deltopectoral flap -3. IntroThis report describes the use of a vena saphena magna interposition ateriovenous subclavia-shunt for the reconstruction of complex defect in the head and neck region.In a 61-year-old male with a history of alcohol and nicotin abuse but no other serious diseases the initial diagnosis of a SCC of the right anterior floor of the mouth and cervical lymph node metastases was made. He underwent former surgery including partial resection of the tongue, the mandible and of the floor of the mouth, radical neck dissection of thr right side, selective lymphadenectomy on the left side, immediate reconstruction of the mandible with a reconstruction plate, and intraoral reconstruction with a pectoralis major flap and irradiation post surgery (60 Gy). As a consequence of woundhealing based on the postoperative radiotherapy a large extraoral soft tissue defect and exposition of the reconstruction plate occurred. In order to reconstruct the soft tissues and to establish conditions for a secondary bony defect reconstruction a vessel reconstruction procedure was performed prior to free flap surgery. The saphena magna vein was taken from the patient's right leg. One end of the vein was anastomosed in an end-to-side fashion to the right subclavia artery and the other end in an end-to-side fashion the to the right subclavia vein thus resulting in an arteriovenous loop fig. . After 1Since the introduction of a microsurgical free flap for the reconstruction of a head and neck defect in 1976 a wide variety of free flaps have been described for the reconstruction of the head and neck region . 
As sincIn situations when microvascular free flap transfer is impeded as recipient vessels are lacking because of surgical ablation or radiation damage pedicled could be used ,12 or reIn the head and neck region recipient vessels are predominantly branches of the external carotid artery and jugular vein ,14. ContOur patient had undergone previous tumor resection and soft tissue reconstruction, radical neck dissection and post-operative radiation. The usual options for recipient vessels i.e. branches of the external carotid artery and jugular vein were not available as was revealed by pre-operative angiographic examination. The creation of a saphenous arteriovenous loop using the subclavia artery and vein offered a reliable and practical alternative reconstruction method when no pedicled flap or other recipient vessels were available. The ateriovenous subclavia-shunt for such difficult cases seems to be a reliable alternative because of its advantages in the following aspects: 1. The subclavian vessels are located outside ablative surgical field, zone of radiation or injury. 2. The vessels' size is suitable for microsurgical anastomosis, especially the vein can accept high venous outflow from the graft. 3. Anatomy of the subclavian vessels is constant and offers favorable surgical approach. 4. The two stage approach with a 10- to 14-day interval between loop construction and free tissue transfer allows the vein to thick the vessel wall and to avoid venous collapse even in cases of low venous flow.Written informed consent was obtained from the patient for publication of this case report and accompanying images. A copy of the written consent is available for review by the corresponding author.The authors declare that they have no competing interests.RAD drafted the manuscript and performed the operation. CN drafted the manuscript and performed the operation. UM participated in the planning of the operation and performed it. NRK participated in the planning of the operation and performed it. JGH participated in the planning of the operation and performed it. All authors read and approved the final manuscript."} +{"text": "The toxicity produced by two courses of methotrexate separated by different intervals has been studied in matched groups of rats. The maximum degree of neutropenia reached when courses were separated by 8 days or more was no greater than that seen after a single course of methotrexate. However, when courses of neutropenia following the second course of methotrexate was directly related to the level of depression of bone marrow cell numbers at the time of the second course. Conversely the anti-leukaemic effects of 2 courses of methotrexate, in terms of time of onset of leukaemia and time of death in rats transplanted with a syngeneic T-cell leukaemia, are shown to be similar when courses of methotrexate are separated by between 2 and 12 days. Thus in this system, chemotherapeutic schedules using methotrexate may be designed on the basis of minimal host toxicity without prejudicing anti-leukaemic effects. These results are discussed in relation to toxicity and anti-leukaemic effects observed during UKALL trials of treatment in acute lymphoblastic leukaemia."} +{"text": "The chick embryo was used to study the relationship between the onset of tumour neovascularization and tumour growth. Walker 256 carcinosarcoma was implanted on the chrioallantoic membrane (CAM) of about 600 embryos aged 5-16 days. 
Tumour diameter and changes in the CAM vasculature in response to the implants were recorded daily. Representative tumours were examined by light microscopy of Epon-embedded tissue and autoradiography after injection of [3H]-thymidine. Tumours remained avascular for 72 h, after which they were penetrated by new blood vessels and began a phase of rapid growth. The rate of growth during this vascular phase was greatest for implants on 5- and 6-day-old embryos and decreased the later the day of implantation. The time of onset of tumour angiogenesis appears to be independent of the immunological state of the chick embryo, although the rate of growth after vascularization may be modified by the onset of immunity. This study suggests that the avascular and vascular phases of tumour growth are separable, and that the avascular tumour population lives under the growth constraints which limit the size of a tumour spheroid growing in soft agar or aqueous humour."} +{"text": "Autofluorescence spectra of neoplastic tissues have been reported to be significantly differentfrom those of normal tissues when excited by blue or violet light. From this concept, a light-inducedautofluorescence endoscopic imaging system for gastrointestinal mucosa has been newly developed and the clinical evaluation ofthe prototype system has been conducted in hospitals in Canada, Netherlands and Japan.We examined the clinical usefulness of the prototype LIFE-GI system for the detection ofgastrointestinal cancer and high and low grade dysplasia. The LIFE-GI system was alsoapplied to the early detection of remnant lesions after endoscopic treatment of early gastriccancer and to the detection of laterally spreading superficial colonic tumors.This system has potential application for the diagnosis of dysplastic lesions and earlycancers in the gastrointestinal tract as an adjunct to ordinary white light endoscopy. Thissystem, which needs no administration of a photosensitive agent, may be suitable as ascreening method for the early detection of neoplastic tissues."} +{"text": "A symposium discussing collaborative research work on infectious diseases dynamics was held at Queens' College, University of Cambridge on 25 October 2006. The evolution and spread of infectious diseases are determined by dynamic processes occurring at many scales from the within-cell to the host population level. There is a considerable similarity between these different scales, whether considering the colonization and spread of salmonella within a cell or organ into the Campylobacter infections when Dr Chris Coward presented some work investigating the dynamics of the disease at the within-host level in chickens, based on findings from some previous in vitro studies of Campylobacter replication.The session then swapped focus from salmonella to Campylobacter species are a major form of human food poisoning, with Campylobacter jejuni the leading cause of bacterial gastroenteritis in humans, responsible for over 40\u200a000 laboratory-confirmed cases a year in England and Wales (http://www.hpa.org.uk/infections/topics_az/campy/data_ew.htm). A major source of human infection is thought to be consumption of contaminated poultry, there being a major reservoir of infection in the poultry industry where the organism behaves as an enteric commensal. 
Understanding how the numbers of organisms in poultry populations can be reduced provides a strategy for dealing with the problem in humans.Previous research at Cambridge had focused on the identification of bacterial genetic factors important for the colonization of chickens using the technique of Signature Tagged Mutagenesis STM; . In STM,Dr John McCauley from the National Institute for Medical Research at Mill Hill followed this with a presentation describing some of the different mechanisms that underlie the variation in host tropism of different influenza viruses, addressing some of the mechanisms other than just virus binding which might be responsible for both the failure and the success of H5N1 viruses to invade the human population and respiratory tract. Interferon (IFN) responses are important components of host cells' responses to virus challenge. Having described the qualitative differences between different virus approaches to evading a cell's interferon defences, he went on to describe a series of experiments relating to how different NS1 proteins in influenza viruses have their effects. Using a combination of different wild-type virus, as well as several others constructed using reverse genetics techniques, Dr McCauley described the different levels (i) of interferon induced by different human viruses, (ii) of signals induced by different influenza NS1's, and (iii) of sensitivity of different human viruses to the effects of IFN. Interestingly, the human viruses that induced most IFN were the least sensitive to its effects. Finally, experiments were described in which replication of highly pathogenic avian influenza viruses could be markedly enhanced by IFN interfering co-infections or disabling the interferon response of cells. This raised the question of whether more recent viruses, such as the recent H5N1 viruses, had already found a means of subverting this response and whether we should be measuring the host innate immune response when considering the ability of a virus to infect different species.http://www.who.int/csr/disease/avian_influenza/country/cases_table_2006_12_27/en/index.html).The final talk of the morning session was given by Dave Balkissoon, a CIDC supported student based at the BBSRC Institute for Animal Health, who presented results from investigations into the innate immune response to avian influenza in inbred lines of chickens. Another major public health issue, the potential socio-economic effects of avian influenza have been highlighted recently with mass media attention focused on the outbreak of the H5N1 strain of the disease currently moving across Asia, Africa and Europe. The poultry industry in the UK deals with approximately 40 billion chickens per year, including broilers, layers and breeders, as well as feral and wild birds, and the disease is highly contagious within avian populations. In addition, it has a high epizootic potential, and has so far (December 2006) killed 157 people worldwide , but their role as an anti-viral agent specifically with regards to influenza in chickens is unclear. Additional interest lies in determining whether a relationship exists between the Mx gene and particular production traits, synonymous with the intensive selection processes used in commercial poultry lines. This can help to determine the levels of risk among commercial poultry stocks.Of particular interest in this project is the role of the Mx anti-viral allele between different commercial and experimental poultry populations. 
Various production traits have been identified close to the Mx gene on chromosome 1 such as thigh muscle yield, tarsometatarsal length and leg twisting score, and whether differences are due to co-selection or chance founder effects was being investigated.Initial results suggested a very heterogeneous distribution of the The afternoon session saw the scientific focus change from the biological aspects of infectious diseases highlighted in the morning, to the role that mathematical modelling can play, in both understanding the pathogenesis of diseases at a molecular level and also in aiding investigations into the dynamics of epidemic processes at the population level.The opening talk was given by Dr Olivier Restif, a mathematical modeller working in CIDC, who examined the effect of widespread vaccination policies on the potential re-emergence of diseases due to vaccine-induced antigenic variation. The notion that certain control strategies can have an effect on epidemic development opposite to that intended is a constant concern for those implementing policy in public health. However, there are various recent examples of situations in which countries with high vaccination coverage from certain diseases have experienced increasing number of cases of the disease from new strains of the antigen. The modelling strategy investigated the effects of cross-immunity (i.e. the reduction in susceptibility caused by infection from a different strain) on the invasion and persistence of a novel strain of a disease to a finite population where an original strain of the disease is endemic. A series of stochastic simulations were used to assess the impact of cross-immunity when different levels of vaccination coverage were used.The results from a series of stochastic simulations indicate that there is an important trade-off between invasion and persistence of the novel strain, with high levels of cross-immunity resulting in an increased probability of the invading strain of the disease replacing the existing one. In the presence of vaccination, the growth rate of the novel strain depends on the coverage of the vaccine and the level of cross-protection conferred by the vaccine. Based on invasion analysis only, the worst case scenario is expected in a situation where there is high coverage of a vaccine that confers little cross-protection to the novel strain. However, stochastic simulations show that epidemiological feedback can prevent the persistence of highly invasive strains in a finite-size population. As a result, the model predicts that the most successful strains (i.e. which can invade and persist) can be either very close to or very far from the vaccine strain in terms of antigenic distances . AccountBeforehand there was a very different and novel application of mathematical techniques to help quantify antigenic variation, this time between rabies virus isolates. Rabies is a lyssavirus from the Rhabdoviridae family. It is a highly contagious epidemic disease that is almost invariably fatal. The standard control procedure used by the World Health Organization is vaccination, although the potential of the virus to mutate over time makes designing and implementing effective and efficient vaccination strategies difficult.Understanding the antigenic variation in lyssaviruses could help to provide important insights into the evolution of the rabies disease and the dynamics of its spread, aiding selection of vaccine strains and prediction and control of its spread. 
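Referring back to the stochastic simulations described for Dr Restif's talk, the sketch below shows one minimal way of encoding cross-immunity in a stochastic invasion model. It is an illustrative discrete-time model with assumed parameter values, not the model presented at the meeting:

```python
# Minimal illustrative sketch (not the model presented at the meeting): a
# discrete-time stochastic SIR for a novel strain invading a population in
# which a fraction of hosts already carry immunity to the resident strain,
# acquired by past infection or vaccination; cross-immunity reduces their
# susceptibility to the invader.
import numpy as np

def invasion_probability(n_hosts=10_000, beta=0.3, gamma=0.1,
                         frac_immune_resident=0.6, cross_immunity=0.5,
                         n_runs=200, seed=1):
    rng = np.random.default_rng(seed)
    extinctions = 0
    for _ in range(n_runs):
        s_naive = int(n_hosts * (1 - frac_immune_resident)) - 1  # minus the index case
        s_cross = int(n_hosts * frac_immune_resident)
        infectious, recovered = 1, 0
        while infectious > 0:
            force = beta * infectious / n_hosts
            new_naive = rng.binomial(s_naive, 1 - np.exp(-force))
            new_cross = rng.binomial(s_cross, 1 - np.exp(-force * (1 - cross_immunity)))
            recoveries = rng.binomial(infectious, 1 - np.exp(-gamma))
            s_naive -= new_naive
            s_cross -= new_cross
            infectious += new_naive + new_cross - recoveries
            recovered += recoveries
        if recovered < 0.01 * n_hosts:   # died out without a major outbreak
            extinctions += 1
    return 1 - extinctions / n_runs

if __name__ == "__main__":
    for sigma in (0.0, 0.5, 0.9):
        print(f"cross-immunity {sigma:.1f}: "
              f"invasion probability ~ {invasion_probability(cross_immunity=sigma):.2f}")
```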
Daniel Horton, another CIDC student, this time based at the international rabies reference laboratory at VLA, presented some work on using antigenic cartography . The maps have also proven to be experimentally robust.Moving on from the application of mathematical modelling at the molecular level to its use at a somewhat larger scale, the next presentation was the first of two concerned with modelling the spatial spread of infectious disease spread across landscapes. Dr Colin Russell continued the rabies theme and presented results from a stochastic mathematical model for the spatio-temporal spread of the disease in raccoons in northeastern USA focused instead on modelling the spatial spread of infectious diseases in plants. The previous discussion highlighted the importance of accounting for landscape heterogeneity when modelling epidemics, and due to factors such as population expansion, climate change and changes in farming practices, landscapes are changing dramatically, having a major effect on the extent of epidemic progression. An important point given was that the dynamics of the disease vary over different spatial scales\u2014at the plant, within-field and farm premise levels\u2014and these differences need to be reflected in both the modelling strategy and the control mechanism.These ideas were illustrated with various examples of current work within the Gilligan group, most notably in some investigative work on the effects of different scale control policies on the propagation of rhizomania in sugar beet. This is a particularly persistent disease caused by a virus transmitted through the soil and into the roots by a carrier vector. It causes significant problems with regards to yield and is generally spread over large distances through the movement of agricultural equipment that can carry the virus from infested soil.The traditional response strategy is by a local containment policy, involving removal of the infected crop, prevention of future growth in infested fields and the disinfection of agricultural equipment. A key focus in planning many control policies is in trying to optimize the level of protection needed to prevent epidemic situations. Some interesting questions arise: for example, is there a minimum effective control neighbourhood, and should preference be given to protecting the more, or less infected regions? The idea that more intuitive control policies are sometimes much less effective than less intuitive ones echoes the results from the earlier presentation by Dr Restif. Some modelling simulations of the spread of disease in a host meta-population suggested that protection of the less infected regions proved more efficacious in controlling disease spread than protection within the infected regions. Additionally, not all sites needed to be protected to prevent an epidemic developing. It was shown that of greater importance is ensuring that the scale of the control policy matches the scale of the epidemic; in the rhizomania model for example, farm level control policies proved a much more effective preventive measure than field level policies.Economic and logistical constraints are also the keys in instigating response strategies, and some additional work was presented linking epidemiological and economic models in order to optimize resources in epidemic situations. 
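The question raised above, whether a fixed amount of control is better spent around the infected focus or in regions the epidemic has not yet reached, can be explored with a toy patch model such as the one below. This is a deliberately simplified sketch with assumed parameters, not the Gilligan group's model, and is shown only to illustrate how such allocation strategies can be compared:

```python
# Toy sketch: a patch-level susceptible/infected simulation on a line of
# fields, comparing two ways of allocating a fixed number of protected
# patches: next to the initially infected focus, or ahead of the epidemic
# front.  The same seed is used for both runs so the comparison uses common
# random numbers.
import numpy as np

def final_infected(protected, n_patches=100, p_spread=0.4, protection=0.8,
                   n_steps=200, seed=2):
    rng = np.random.default_rng(seed)
    infected = np.zeros(n_patches, dtype=bool)
    infected[0] = True                      # epidemic starts at one end
    susceptibility = np.ones(n_patches)
    susceptibility[list(protected)] = 1 - protection
    for _ in range(n_steps):
        pressure = np.zeros(n_patches)
        pressure[1:] += infected[:-1]       # nearest-neighbour spread
        pressure[:-1] += infected[1:]
        prob = 1 - (1 - p_spread) ** pressure
        infected |= rng.random(n_patches) < prob * susceptibility
    return int(infected.sum())

if __name__ == "__main__":
    near_focus = range(1, 21)               # protect patches next to the focus
    ahead_of_front = range(40, 60)          # protect patches not yet reached
    print("protect near focus:    ", final_infected(near_focus))
    print("protect ahead of front:", final_infected(ahead_of_front))
```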
This fittingly led to the final presentation of the day, given by Dr Cathy Roth of the Global Outbreak Alert and Response Network (GOARN)\u2014 part of the World Health Organization (WHO)\u2014in which she discussed the many problems facing policy makers when developing and implementing response strategies in the face of potential disease outbreaks. This drew together aspects from all the research work presented throughout the day, from the molecular scale right to population level, and the use of this knowledge in the design of coherent and efficient procedures for responding to epidemic situations.The WHO is the principal global body responsible for collating, assimilating and implementing the results of research work and applying them to real-life epidemic situations. Many of the problems facing health officials are less scientific and more practical, such as non-reporting of cases, ignorance of the methods of transmission, the financial costs of developing, producing and distributing drug treatments, limited access to decent sanitation facilities, and difficulties in educating people about important facts of disease control; for example, proper burial procedures in order to aid eradication of the virus.Policy makers are often forced to make quick and difficult decisions based upon the knowledge that they have at the time of an outbreak. Scientific rigour in research plays a vital role in influencing these decisions, by providing better understanding of the biological nature of infectious diseases and the dynamics of their spread. Well-funded research strategies are keys to increase our knowledge of infectious diseases; however, understanding the science is just one component of effective control, and one in which only collaborative effort between scientists and policy makers across many different fields can hope to achieve.The meeting highlighted the scientific benefits that come from collaboration between mathematical and laboratory-based sciences in extending our understanding of both basic infectious disease processes as well as applied issues in disease control; such an approach will continue to provide the focus for research within CIDC."} +{"text": "Lectins are proteins which have the ability to interact specifically with carbohydrate residues of glycoproteins and other glycoconjugates. The staining patterns of 10 fluorescein conjugated lectins and a protease inhibitor (F-LA) have been studied in histological sections of 11 normal or reactive lymph nodes and 6 nodes and one skin biopsy involved by Hodgkin's disease. On the basis of the patterns of lectin binding, and current knowledge of their saccharide specificities, we found that within germinal centres there is an orderly carbohydrate rich extracellular matrix which contains a higher concentration of GlcNAc and terminal Gal residues than the surface membranes of component cells. This suggests active secretion rather than simple membrane shedding, and it is possible that this pericellular domain plays a part in the regulation of the proliferative response, or controls migration of lymphocytes in and out of the germinal centre. Lectin binding in Reed-Sternberg cells suggests that the huge nucleoli contain glycoconjugates of diverse structure, which may be linked with their failure to undergo cytokinesis."} +{"text": "In the present work the cell cycle of 5 solid human tumours in vitro and the duration of their various intermitotic phases were studied using H3 thymidine and autoradiography. 
All the cell lines studied showed a longer G2-period than other normal mammalian cells. No relationship between the duration of the cell cycle and the modal chromosome number or malignancy of the tumours was observed. Reports are available on the studies of the cell cycle of several normal cell populations and of neoplastic effusions in man and experimental animals."} +{"text": "The use of large-scale microarray expression profiling to identify predictors of disease class has become of major interest. Beyond their impact in the clinical setting (i.e. improving diagnosis and treatment), these markers are also likely to provide clues on the molecular mechanisms underlying the diseases. In this paper we describe a new method for the identification of multiple gene predictors of disease class. The method is applied to the classification of two forms of arthritis that have a similar clinical endpoint but different underlying molecular mechanisms: rheumatoid arthritis (RA) and osteoarthritis (OA). We aim at both the classification of samples and the location of genes characterizing the different classes. We achieve both goals simultaneously by combining a binary probit model for classification with Bayesian variable selection methods to identify important genes. We find very small sets of genes that lead to good classification results. Some of the selected genes are clearly correlated with known aspects of the biology of arthritis and, in some cases, reflect already known differences between RA and OA."} +{"text": "Recently we reported the variable presence of hypoxia adjacent to necrosis in human glioma lines grown as subcutaneous tumours in severe combined immunodeficient (SCID) mice. To assess the basis for this observation, we examined the pattern of oxygenation in M006 and M006XLo glioma spheroids. We found a wide range of binding of [3H]misonidazole to cells adjacent to the necrotic core, analogous to the patterns seen in xenografts, indicating substantial differences in the central oxygen tension of the spheroids. Clonal selection was used to isolate single cell-derived sublines of the M006XLo line. Some sublines gave spheroids that showed narrow distributions of [3H]misonidazole binding to the cells adjacent to necrosis, whereas other sublines showed a range of binding similar to that seen in spheroids of the parent line. After additional passages in monolayer culture, clonal sublines occasionally gave rise to spheroids in which the mean oxygen tension of cells adjacent to necrosis differed substantially from that of the initial spheroids. No relationship was evident between the thickness of the rim of viable cells and the presence or absence of central hypoxia, over a wide range of rim thickness. These results indicate that different oxygenation characteristics of glioma spheroids and tumour microregions are unlikely to arise from stable genetic variants coexisting in the parent line."} +{"text": "The high effectiveness and robustness of receptor-mediated viral invasion of living cells shed light on the biomimetic design of nanoparticle (NP)-based therapeutics. Through thermodynamic analysis, we elucidate that the mechanisms governing both the endocytic time of a single NP and the cellular uptake can be unified into a general energy-balance framework of NP-membrane adhesion and membrane deformation. Yet the NP-membrane adhesion strength is a globally variable quantity that effectively regulates the NP uptake rate.
Our analysis shows that the uptake rate interrelatedly depends on the particle size and ligand density, in contrast to the widely reported size effect. Our model predicts that the optimal radius of NPs for maximal uptake rate falls in the range of 25\u201330 nm, and optimally several tens of ligands should be coated onto NPs. These findings are supported by both recent experiments and typical viral structures, and serve as fundamental principles for the rational design of NP-based nanomedicine. Viruses invade living cells via protein-mediated endocytosis The highly effective and robust adhesion-driven process has raised many fundamental questions with regard to the physical principles harnessed by the evolutionary design of viruses. While it has long been known from biochemistry that the molecular recognition of receptors and ligands allows viral invasion to be type specific, it was only recently fully understood from mechanics point of view that viral invasion is also size selective From a fundamental mechanics point of view, adhesion and membrane deformation play the roles of driving force and resistance to NP endocytosis, respectively. A rational biomimetic design of NPs should either reduce the resistance or enhance the adhesion to facilitate NP internalization. Indeed, it has both experimentally In this article, we aim to establish guiding principles for the biomimetic design of NPs with high uptake rate, one of the key parameters that assess the efficacy of NP-based therapeutics. Noting that correlating the biophysical parameters of NPs with the uptake rate may analytically be complex, we circumvent the difficulty by separately deriving the endocytic time of a single NP and the equilibrium cellular uptake when immersing the cell in a solution with dispersed NPs. The endocytic time and cellular uptake together indicate the uptake rate. From thermodynamic analyses, we reveal that particle size and ligand density interrelatedly govern the uptake rate. The interrelated effects can be interpreted from a general framework of energy balance between NP-membrane adhesion and membrane deformation. The interrelation suggests that tailoring only one design parameter may not be effective to achieve high uptake rate. We construct a phase diagram of the uptake rate in the space of particle size and ligand density, which may serve as a design map for NP-based therapeutics. Finally, we extend our discussions by including the effects of other relevant biophysical parameters.R is the NP radius and From an energetics point of view, NP engulfment by cell membrane is driven by adhesion but involves significant membrane deformation cost Wrapping also involves pulling excess membrane area toward the wrapping site, for which work needs to be done to overcome membrane tension, denoted here by Because of the nonlinearity of The total deformation energy at the fully wrapped state indicates a characteristic particle radius Several unique features arise when endocytosis is receptor-mediated. First, as adhesion is supplied by ligand-receptor binding, wrapping of NPs requires diffusing receptors to the binding sites, thereby setting a characteristic time scale of endocytosis and limiting the uptake rate 2. 
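As a hedged illustration of the adhesion-versus-deformation balance sketched above, the standard Helfrich estimate for full wrapping of a rigid sphere of radius R can be written as follows. The notation (kappa for bending rigidity, sigma for membrane tension, w for adhesion energy per unit area) is generic and is not taken from the paper's own equations, whose exact form is not reproduced in this text.

```latex
% Illustrative energy balance for full wrapping of a rigid sphere of radius R.
% kappa: membrane bending rigidity; sigma: membrane tension; w: adhesion energy per area.
\begin{aligned}
E_{\mathrm{def}} &= \underbrace{8\pi\kappa}_{\text{bending}} + \underbrace{4\pi R^{2}\sigma}_{\text{tension}},
\qquad E_{\mathrm{adh}} = 4\pi R^{2} w,\\
E_{\mathrm{adh}} \ge E_{\mathrm{def}}
&\;\Longleftrightarrow\;
R \;\ge\; R_{c} = \sqrt{\frac{2\kappa}{\,w-\sigma\,}} \qquad (w > \sigma).
\end{aligned}
```

Under this estimate, particles much smaller than R_c cannot be wrapped at any ligand coverage, consistent with the lower bound on particle size discussed in the text.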
We assume that the receptors are initially uniformly distributed on the cell membrane with an initial receptor density R and coated with ligands of surface density k may be described interchangeably at given K.To reflect the discreteness of receptor-mediated endocytosis, it is convenient to set the cross-sectional area of the receptor as the unit area, denoted by local\u201d energy balance. Such a local consideration ignores the entropic effect of receptors, which represents the \u201cglobal\u201d aspects of adhesion. Equation (2) also manifests the interrelated effects of particle size and ligand density on NP endocytosis.The dual character of receptors suggests that the adhesion strength in receptor-mediated endocytosis can be decomposed into two components, i.e., in vitro involve immersing biological cells into a solution with dispersed NPs To measure the cellular uptake rate, experiments Unless otherwise mentioned, the following parameter values are used for our analysis: The fact that wrapping necessitates receptors to diffuse from the far field to the wrapping site sets a diffusion-limited endocytic time. The endocytic time can thus be determined by formulating a diffusion problem that involves tracking the wrapping front of the cell membrane R being wrapped by cell membrane, as shown in M is the total membrane area. We consider a general stage of wrapping characterized by the degree of wrapping Similar to crack extension or healing in a crystal lattice \u22122. At Since t\u223cD is the diffusivity of the receptors. Solving R, and substituting them into Eq. (8), the characteristic impact length scale and hence the endocytic time can be obtained.Conservation of the receptors in the wrapping zone and the impacted area specifies a characteristic length N, are wrapped by cell membrane with different degrees of wrapping k, one follows We next analyze the cellular uptake of NPs when immersing the cell in a solution with dispersed NPs of bulk density It is noteworthy to point out the close similarities of the adhesion strengths The unified energy-balance framework of adhesion and membrane deformation for the endocytic time and the cellular uptake suggests that one may define the uptake rate asFrom the phase diagram of the uptake rate in The ridge line of the phase diagram in kBT) and receptor-ligand binding energy kBT), our model predicts that the optimal amount of ligands coated onto viruses falls in the range 10\u2013100 irrespective of the virus size. The extensively studied model system, the Semliki Forest virus (SFV), is about 35nm in radius, covered with 80 glycoproteins (ligands) Considering viruses as NPs optimized by nature via evolution, the number of ligands decorated on the surfaces of viruses should obey the optimal number: In addition to the particle size and ligand density, many factors may influence the phase boundaries of the uptake rate, as presented below.Several factors affect the density of receptors expressing on the cell membrane. First of all, receptors internalized by NPs may be recycled back to the host membrane; they may also be degraded in the endosomes and lysosomes. In addition, new receptors may be produced and diffuse to the cell membrane. The precise amount of receptors involved in endocytosis is currently unknown. 
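The interplay between particle radius and ligand density described above can be caricatured with a toy calculation. The sketch below is not the paper's model: it simply compares an enthalpic adhesion term (bond density times binding energy), an ideal-gas entropic penalty for concentrating receptors at the wrapping site, and the Helfrich bending cost, and adds an order-of-magnitude diffusion-limited wrapping time. All parameter values are assumptions chosen only to be physiologically plausible.

```python
# Toy phase scan over particle radius R and ligand density xi_L (not the paper's equations).
import numpy as np

kB_T  = 1.0           # energies in units of kB*T
kappa = 20.0 * kB_T   # membrane bending rigidity (typical order of magnitude)
mu    = 15.0 * kB_T   # receptor-ligand binding energy per bond (assumed)
xi_0  = 5.0e-4        # far-field receptor density, receptors per nm^2 (assumed)
D     = 1.0e4         # receptor diffusivity, nm^2 per second (order of magnitude)

radii = np.array([10.0, 25.0, 50.0, 100.0])   # nm
xi_Ls = np.array([5e-4, 2e-3, 8e-3])          # ligands per nm^2

for R in radii:
    for xi_L in xi_Ls:
        # Enthalpic adhesion per unit area: bond density (set by ligand density) times mu.
        adhesion = xi_L * mu
        # Entropic penalty per unit area for drawing receptors up from xi_0 to xi_L.
        entropy = xi_L * kB_T * np.log(max(xi_L / xi_0, 1.0))
        # Bending cost per unit area for wrapping a sphere of radius R.
        bending = 2.0 * kappa / R**2
        wrapped = adhesion - entropy > bending
        # Diffusion-limited time: receptors must be harvested from a membrane patch
        # whose area scales with (xi_L / xi_0) times the particle surface area.
        t_wrap = (xi_L / xi_0) * R**2 / D if wrapped else float("inf")
        print(f"R={R:6.1f} nm  xi_L={xi_L:.0e} /nm^2  wrapped={wrapped}  t~{t_wrap:.2g} s")
```

Scanning such a toy criterion over R and xi_L reproduces the qualitative picture of a bounded region of radii and ligand densities in which uptake is both energetically allowed and reasonably fast.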
In addition to ligand-receptor binding, receptor-mediated endocytosis may be assisted by specific proteins, such as clathrin or caveolin One notes that variation of the relative energy scale leads to the change of the enthalpic component of the adhesion strength and/or the membrane deformation energy density. As the lower bound of the uptake rate is enthalpically governed, variation of the relative energy scale modifies the lower bound of the phase diagram. On the other hand, the upper bound is entropically governed, and thus only weakly dependent on The bulk density of NPs in solution appeared as a model parameter only for computing the cellular uptake. A high bulk density yields a high surface concentration of NPs on cell membrane, leading to intensified competition for receptors among adhering NPs Through thermodynamic analyses, we revealed that the endocytic time of a single NP and the cellular uptake when immersing the cell into a solution with dispersed NPs are governed by the unified framework of energy balance between adhesion and membrane deformation. We established phase diagrams in the space of particle size and ligand density for both the endocytic time and the cellular uptake. We identified from the phase diagrams the lower (upper) bounds below (beyond) which the endocytic time goes to infinite or the cellular uptake vanishes. We further revealed that the mechanisms governing the lower and upper bounds of the endocytic time and the cellular uptake are the same: the lower bounds correspond to the enthalpic limit of the NP-membrane adhesion strength, while the upper bounds to the entropic limit.The computed endocytic time and the cellular uptake allow us to define the uptake rate. It should be mentioned that the uptake rate defined here is different from what is typically measured in experiments We further discussed the effects of other relevant biophysical parameters on the uptake rate, including the receptor density, the relative energy scale of ligand-receptor binding energy and membrane bending rigidity, membrane tension, and the bulk density of NPs. All the effects can be coherently interpreted by the variation of the enthalpic and entropic adhesion strength. The phase diagram of the uptake rate in the space of particle size and ligand density thus serves as a design map that guides the rational designs of NP-based bioagents for biosensing We consider a general stage of wrapping at which an area of Corresponding to the wrapping-size distribution k using l receptors. In the present analysis, we assumed that the density of receptors bound to NPs is independent of wrapping size. This simplification does not affect the qualitative conclusions drawn here.The thermodynamic equilibrium, expressed by Eqs. (10) and (11), can be obtained by minimizing the free energy functional with respect to its two independent variables,"} +{"text": "Wild-type measles viruses have been divided into distinct genetic groups according to the nucleotide sequences of their hemagglutinin and nucleoprotein genes. Most genetic groups have worldwide distribution; however, at least two of the groups appear to have a more limited circulation. To monitor the transmission pathways of measles virus, we observed the geographic distribution of genetic groups, as well as changes in them in a particular region over time. We found evidence of interruption of indigenous transmission of measles in the United States after 1993 and identified the sources of imported virus associated with cases and outbreaks after 1993. 
The pattern of measles genetic groups provided a means to describe measles outbreaks and assess the extent of virus circulation in a given area. We expect that molecular epidemiologic studies will become a powerful tool for evaluating strategies to control, eliminate, and eventually eradicate measles."} +{"text": "Almost one tenth of more than 370 hepatectomies, mostly for tumors, involved resection of major parts of the caudate lobe, subsegment 1. Five of them were for tumors or hemangiomas here, compressing or invading the vena cava; two were for metastases of colorectal cancer located very close to the junctions of the right and middle hepatic veins with the vena cava. We would previously have deemed these tumors unresectable. In these patients the vein was banded above and below the liver, an internal shunt tube placed in preparation for shunting of blood, and the afferent liver blood flow controlled. Control of the vena cava by tightening of the bands was needed in two cases. Tumor-invaded parts of the vein wall were resected in two other cases, in whom the presence of the tube facilitated the resection but the bands did not have to be tightened. The procedure did not cause morbidity and we conclude that tumors close to the vena cava can often be resected without complex vascular exclusion techniques, even when they invade the vein."} +{"text": "The 1962-66 cancer mortality of Polish migrants to Australia is compared with the cancer mortality prevailing in Poland and in Australia. Small numbers compel us to limit the analysis to the most frequent cancer sites only. The main findings are: (a) Stomach cancer mortality of Polish migrants to Australia is intermediate between the high mortality in Poland and the much lower one in Australia. (b) Intestinal tract and breast cancer mortality of Polish migrants is displaced upwards, from the low Polish level to the much higher Australian one. (c) Lung cancer mortality of Polish male migrants does not differ distinctly from the mortality observed both in the country of origin and of adoption of these migrants. The presented findings are compared with the results of a similar study of Polish migrants to the U.S. Aims for future studies are briefly outlined."} +{"text": "The reduction in size of the experimental ISIS 130 tumour has been investigated in LOU rats under the influence of increasing doses of cytostatic agents belonging to different classes. External temperatures of tumours as well as rectal temperatures were measured at the same time, twice daily, during the whole experiment. The greater the decrease in tumour size after drug administration, the larger was the decrease in the external temperature of the tumour. The rectal temperatures remained fairly stable, so the differences between the tumour and rectal temperatures increased. A possible correlation between the reduction of tumour size and the decrease of external temperature of the tumour was traced for every cytostatic agent, and the same linear relationship was found to link these two parameters. The decrease in external temperature of the tumour may, moreover, predict the decrease in tumour size within a term of 1-2 days. Measurement of the magnitude of the transient tumour hypothermia of ISIS 130, following chemotherapy, would represent a new method for measuring the efficiency and duration of action of cytostatic agents."} +{"text": "A Galeazzi fracture is defined as a fracture of the radius associated with dislocation of the distal radio-ulnar joint (DRUJ).
The conventional surgical technique of nailing does not give enough stability and open reduction, internal fixation with the plate is associated with numerous complications. The stacked nailing for the management of these injuries provides adequate stability, maintains the relationship of the DRUJ and promotes uneventful union by closed technique. The purpose of this study is to evaluate the results of simple, user-friendly, low cost elastic stacked nailing for the management of Galeazzi fracture dislocation.We treated 22 young adults with fresh Galeazzi fracture-dislocation of the forearm, from January 2004 to January 2008, by percutaneous fixation of fracture by stacked elastic nailing at our institute. There were 19 males and three females and the age group ranged from 20-56 years (average 35 years). Surgery was performed within 48 to 72 hours under the guidance of image intensifier. Medullary cavity was filled with two elastic titanium nails having unequal lengths and diameter. One nail acts as a reduction nail and the other acts as a stabilizing nail. The results were evaluated using Mikic criteria based on union, alignment, relationship of the DRUJ, and movements at the inferior radio ulnar joint, elbow and wrist.In six cases, following radiological union, nails in the radius were extracted between six to nine months after operation because of discomfort complained by the patient at site of insertion. After one year follow-up, 18 patients had excellent, four had fair results.Closed reduction and internal fixation of Galeazzi fracture by two elastic rods re-establishes the normal relationship of the fractured fragments and the DRUJ without repair of the ligaments. The stability is achieved by the flexibility and elasticity of the nails, crowding of the medullary canal and anchorage they gain in the radial diaphysis. Elastic nailing can produce excellent clinical results for Galeazzi fracture-dislocation. It has the advantages of technical simplicity, minimal cost, user-friendly instrumentation, and a short learning curve. The fracture of the radius and dislocation of the distal radio-ulnar joint (DRUJ), described by Sir Astley Cooper in 1824, is a very unstable and a rare injury.9Plating of the fracture of the radius is most suitable in these cases and if correctly done it provides rigid internal fixation of the bone and stable reduction of the DRUJ in most cases.10The purpose of this article is to evaluate a simple, low-cost, elastic stacked intra medullary nailing for the management of Galeazzi fracture dislocation. and the stability it can provide in this inherently unstable injury.Twenty two patients with fresh Galeazzi fracture dislocations treated from January 2004 to January 2008 at our institute constitute the study. The mean age of a patients was 35 years (range 20-56 years). Most fractures occurred at the junction of the middle and distal third of the shaft of the radius ; 11 fracTwo fractures were Grade I open injuries as per Gustilo and Anderson\u2019s classification. Nineteen fractures were operated under brachial block and the rest three were given general anesthesia. For open fractures, reduction was undertaken after debridement of the wound. All fractures were operated under tourniquet. The fracture was reduced under C-arm guidance with manual traction exerted on the hand by the assistant after abducting the arm and flexing the elbow with the help of elbow attachment . 
Each ofA 2-3 mm pre bent elastic titanium nail was introduced through the styloid process of the radius utilizing a stab incision and negotiated till it reached the subchondral bone of radial head. This acts and behaves as a reduction nail. Another nail of different size and length was then passed across the fracture site into the medullary canal till further negotiation was not possible and the stability of the construct was assured by manual rotation of the construct under C arm. The forearm was then rotated to assess for any DRUJ instability. The entry site was dressed and a compression dressing was given. We had to stitch the entry wound in only two cases during the early part of this study.n=2), just proximal to the articular surface. In a case of fracture of the base of ulnar styloid, plaster immobilization was continued for a period of six weeks.If the DRUJ was found to be stable in supination, splintage in a long arm splint was given in supination for six weeks. If the DRUJ was reducible in supination but unstable, stabilization was achieved by placing 2 mm kirschner wires (K-wires) from the ulna into the radius . The follow-up period in our series was twelve months to 18 months with an average of 14 months. Our results were excellent in 18 cases and fair in four cases as per Mikic criteria.We had one case of superficial infection at the site of the nail entry which responded well to regular dressings and healed up uneventfully. In four cases subluxation of the DRUJ with restricted pronation and supination was noticed during the follow-up period. In two cases it was noted after the cast removal, although it had a stable fixation and a normal alignment in the immediate post operative period. There was loosening of the cast in these two cases in the early part of the study. In the remaining two cases there was a progressive collapse of the radial fracture and the subluxation of the DRUJ was noticed three months after the surgery. All these four cases healed uneventfully and the patients regained good functional range of movement without necessitating further surgery. Complications were recorded in these four patients with fair results.No patient had delayed union or nonunion and nerve injury. Six patients reported discomfort at site of insertion of nail hence the nail was extracted in less than 9 months. The earliest time period necessitating nail removal was after 6 months of operation, following union of the fracture There was no difference in clinical outcome of the patient with fracture of the ulnar styloid process.Galeazzi fractures are inherently unstable injuries. They are estimated to account for six to seven per cent of all forearm fracture in adults. The brachioradialis muscle and the extensors and abductors of the thumb tend to shorten the radius while the pronator quadratus rotates the distal fragment towards the ulna.4133The radius is a curved bone with its concavity towards the ulna. 
The medullary canal of the radius is funnel-shaped in the distal third, and curved and narrow in the middle thirdIn order to address the issue, a pre bent triangular metal nail sufficiently elastic and resilient while it traverses the canal from the point of insertion, but rigid enough to withstand the torsional, rotational, and angulatory forces while progressing to union was devised which gave fewer instances of malunion and non-union compared with other methods of open reduction and internal fixation.Internal fixation by square nail in Galeazzi fracture of the distal third of the radius not only permits rotatory motion at the fracture site, but also substantial lateral movement, which may predispose to delayed union and non union. It might lead to subsequent instability at the DRUJ. The end-of-the-rush nail acts as a potential irritant to the tendons around the wrist, necessitating early removal.Open reduction and rigid internal fixation of the fracture of the radius re-establishes the normal relationship of DRUJ without repair of the ligaments. Fixation with a plate and screw is superior to square nail fixation in these fracture.Early compression-plating with a six-hole plate will give satisfactory results for most fractures. Grafting is usually not necessary, and four weeks of immobilization in neutral pronation-supination or mild supination is adequate. Resection of the distal end of the ulna or temporary fixation of the distal radio-ulnar joint with a pin through the radius and ulna is rarely, if ever, required after compression plating of a fresh unstable fracture.2The radius is usually bowed in two planes- one in the frontal plane having a lateral convexity and the other in the saggital plane with posterior convexity.1718In our series, no nerve injuries were found. A single medullary nail within the radius has no inherent stability to withstand the muscle forces. Therefore a second nail of a shorter length, engaging into the narrowest part of the radius prevents the rotation of the fracture fragments and imparts fixation on the principle of three-point fixation and Kuntschner\u2019s principle of crowding of the medullary canal. Similarly, Hackethal who had proposed nailing for the forearm fractures and achieved excellent results based the utility of his nailing on the four premises, especially the jamming of the nails in the cortical window, jamming the waist of the medullary cavity, spreading the bunch of nails in the metaphysis and filling up the conus of the medullary cavity with short nails.19An above elbow plaster cast in full supination avoids opening and repair of the inferior radioulnar joint complex.5The use of elastic stable intramedullary nailing for displaced and unstable fractures of the radius and ulna in children is a well established method,2224Elastic stable intramedullary nailing of the radius is technically simple, cost effective with user friendly instrumentation, short learning curve and a cosmetic method with minimal risk of infection, delayed union, or tendon rupture, obeying and respecting the integrity of the soft tissues around the injured site. It provides anatomical reduction with preservation of normal curves of the radius and provides a stable fixation in both angulatory and rotational planes. The stability is achieved by the flexibility and elasticity of the nails and crowding of the medullary canal and anchorage they gain in the radial diaphysis. 
Our method of percutaneous insertion of the nail through the styloid process avoids injury to tendinous and the neurovascular structures of the lower end radius. This study aims to address the alternative surgical modality available for the management of these difficult injuries."} +{"text": "The European Childhood Leukaemia - Lymphoma Incidence Study (ECLIS) is designed to address concerns about a possible increase in the risk of cancer in Europe following the nuclear accident in Chernobyle in 1986. This paper reports results of surveillance of childhood leukaemia in cancer registry populations from 1980 up to the end of 1991. There was a slight increase in the incidence of childhood leukaemia in Europe during this period, but the overall geographical pattern of change bears no relation to estimated exposure to radiation resulting from the accident. We conclude that at this stage of follow-up any changes in incidence consequent upon the Chernobyl accident remain undetectable against the usual background rates. Our results are consistent with current estimates of the leukaemogenic risk of radiation exposure, which, outside the immediate vicinity of the accident, was small."} +{"text": "Morphological identification of cell multiplication (mitosis) and cell deletion (apoptosis) within the lobules of the \"resting\" human breast is used to assess the response of the breast parenchyma to the menstrual cycle. The responses are shown to have a biorhythm in phase with the menstrual cycle, with a 3-day separation of the mitotic and apoptotic peaks. The study fails to demonstrate significant differences in the responses between groups defined according to parity, contraceptive-pill use or presence of fibroadenoma. However, significant differences are found in the apoptotic response according to age and laterality. The results highlight the complexity of modulating influences on breast parenchymal turnover in the \"resting\" state, and prompt the investigation of other factors as well as steroid hormones and prolactin in the promotion of mitosis. The factors promoting apoptosis in the breast are still not clear."} +{"text": "Kashmir valley located in the northern division of India, surrounded by Himalayas has an unique ethnic population living in a temperate environmental conditions having distinctive food habits which play an overwhelming role in the development of GIT cancers over the genetic factors.-3 The foColorectal cancer being the commonest cancer, is the major cause of mortality and morbidity worldwide, there are nearly one million new cases of colorectal cancer diagnosed world-wide each year and half a million deaths. The inci13Here we have presented three cases of colorectal cancer in form of images, obtained from the patients who visited Department of General Surgery of SKIMS, for the resective surgery for the immediate treatment of the cancer."} +{"text": "Variations in formation of the superficial palmar arch are common. A classic superficial palmar arch is defined as direct continuity between the superficial branch of the ulnar artery and superficial palmar branch of the radial artery. During routine dissection classes to undergraduate medical students we have observed formation of superficial palmar arch solely by superficial branch of ulnar artery without any contribution from the radial artery or median artery. Knowledge of the anatomical variations of the arterial pattern of the hand is crucial for safe and successful hand surgery. 
Superficial palmar arch (SPA) is an arterial arcade which lies beneath the palmar aponeurosis and in front of the long flexor tendons, lumbrical muscles and palmar digital branches of the median nerve. SPA is the major blood supply to the hand. Additional circulation for the palm may come from the median artery or interosseous arterial system. The radial and ulnar arteries form four circuits in the hand, the anterior and posterior carpal arches at the level of the carpal bones and the superficial and deep palmar arches at the mid palmar level . The anaDuring routine dissection of the right upper limb of a 45-year-old South Indian male cadaver, we observed SPA formed exclusively by superficial branch of the ulnar artery . The supSuperficial branch of ulnar artery solely forming SPA is reported -7. McCorThe use of radial arteries as an arterial bypass conduit is an invasive procedure which is becoming popular among various medical centers. The greatest risk associated with harvesting the radial artery is ischemia of the soft tissues of the hand. But in patients where ulnar artery is the main blood supply to the first web space the least number of complications may be expected.The SPA is the center of attraction for most of the surgical procedures and traumatic events in the hand. The hand surgeon should keep in mind this kind of variations before performing surgical procedures such as, arterial repairs, vascular graft applications, and free and/or pedicled flaps. SPA is an anastomosis fed mainly by the ulnar artery. When ulnar artery is occluded, the viability of the structures in palm supplied by the ulnar artery depends on the efficacy of the collateral circulation. In our finding there was no anastomosis between the ulnar artery and radial or median or interosseous arteries. So ulnar artery occlusion in cases like ours there will be no collateral flow of blood to meet the metabolic demands of the palmar tissue i.e., results in acute ischaemia, manifested by rest pain and / or gangrene.During surgical procedures of thumb in the cases similar to ours ligation of radial artery may not be sufficient to stop the profuse bleeding since major blood supply was coming from the superficial palmar arch. Several techniques are used to identify and locate any unusual vessel in the upper limb, including Doppler ultrasound, the modified Allen test, pulse oximetry and arterial angiography."} +{"text": "Individual equilibrium binding constants (KB) have beendetermined from spectroscopic titrations employing the hypochromism induced in the visibleabsorbance of the cations on interaction with the nucleic acid. These demonstrate both stereo- andenantioselectivity in the binding interactions. These KB data, together with induced circulardichroism and DNA thermal denaturation results, are all indicative of selective intercalation of thebidentate components of the cations into the nucleobase stack of the duplex. Supportiveevidence for a secondary binding mode for the picchxn complexes is provided by the differentmutagenicity profiles obtained for related cations.A study of the interaction with calf thymus DNA is described of a novel set of chiral ternary complex cations of general form [Ru(N"} +{"text": "Differences between the sexes in time trends of colorectal cancer incidence 1962-87 and mortality 1960-91 in England and Wales are examined in relation to changes in female hormonal factors. 
There was a trend in the sex ratio of this tumour, particularly marked for the descending colon, whereby the female excess in risk at young ages has almost disappeared but the male excess at older ages has increased. This trend started for cohorts born since the 1920s and coincided with the increase in the use of oral contraceptives and, to a lesser extent, with increases in fertility. The decline has been particularly pronounced for women at young ages born since 1935-39, coinciding with the spread of oral contraceptive use to younger age groups. These results are consistent with the hypothesis that female hormonal factors may play a role in the aetiology of colorectal cancer and with the possibility that oral contraceptive use might exert a protective effect in the descending colon."} +{"text": "Sir,Transcutaneous gas monitoring is a noninvasive technique of measurement of oxygen and carbon dioxide tension across the skin. Several studies have validated the accuracy of the values obtained from this monitor.2 TranscuA total of 48 patients who had cardiac surgery during April-July 2008 in our center, underwent transcutaneous gas tension monitoring measured with TINA TCPM4 . During the monitoring, two patients were noticed during routine clinical examination, to have developed vesicles at the site/s of use of the sensor for the equipment on the chest Figures and b. TThe concept of transcutaneous measurement of respiratory gases looks promising because of its noninvasive nature. The manufacturers, in their product brochure mention the potential for the occurrence of burn injuries at the site of sensor application and recommend the use of sensor at 44\u00b0C to minimize the potential risk of burn injury and pressure-induced necrosis. In two of our patients, the application of probes led to blister formation despite adhering to these instructions. The possibility of contact dermatitis was ruled out by the non-inflammatory nature, absence of itching and the occurrence of blisters within 4 hours of application of the probes. Possible mechanisms of blister formation in the patients were thermal injury caused by the increased local skin temperature and the negative pressure within the well of the sensor while the gas expelled via the skin is sampled.In the light of our experience, we recommend that clinicians wishing to use the transcutaneous measurements may need to restrict the usage time of sensor at any site for less than 3 hours and at temperatures less than 44\u00b0C. The sensors need to be applied on aesthetically less significant areas. Further, the possibility of developing such vesicles may be informed to patients during informed consent."} +{"text": "Preterm birth remains the leading cause of perinatal mortality and morbidity. Evidence suggests that intrauterine infection plays an important role in the pathogenesis of preterm labor. This article reviews the clinical data supporting this theory and the cellular and biochemical mechanisms by which intrauterine infection may initiate uterine contractions. The clinical and laboratory methods of diagnosing clinical chorioamnionitis and asymptomatic bacterial invasion of the intraamniotic cavity are also reviewed. Finally, the management of clinical chorioamnionitis and asymptomatic microbial invasion of the amniotic fluid and the use of adjunctive antibiotic therapy in the treatment of preterm labor are presented."} +{"text": "Retinopathy of prematurity (ROP) is one of the leading causes of preventable blindness in childhood. 
Early posterior pole vascular signs of severe ROP have been studied since the first description of the disease. The progressive changes that take place in the posterior pole vessels of an extremely premature baby occur in a predictable fashion soon after birth. These vascular changes are described as plus disease and are defined as abnormal dilation and tortuosity of the blood vessels during ROP that may go on to total retinal detachment. The ophthalmological community now has a better understanding of the pathology and cascade of events taking place in the posterior pole of an eye with active ROP. Despite many years of scientific work on plus disease, there continue to be many challenges in defining the severity and quantification of the vascular changes. It is believed that understanding of the vascular phenomena in patients with ROP will help in designing new treatment strategies that will help in salvaging many of the eyes with severe ROP. Earlier investigators defined the differentiation between two clinical forms of ROP which differed in prognosis and potential timing of treatment. The international classification of ROP described in 1984 and its revision in 2005 used the term plus disease to signify the vascular features of ROP that may alert the attending ophthalmologist to consider treatment of the premature avascular retina. The purpose of this review is to describe the current understanding of the clinical presentation, pathophysiology, and value of diagnostic tools, and to discuss treatment options in the management of patients with ROP/plus disease. Retinopathy of prematurity (ROP) results from important changes occurring in the posterior retinal vessels of the eyes of premature infants. Since the initial classification of retrolental fibroplasia in 1953, many attempts have been made to identify those eyes that may progress to retinal detachment and blindness if not treated in a timely fashion. There are many studies focused on the potential etiologic factors contributing to ROP. The introduction of pulmonary oxygen therapy for preterm infants in the 1940s played a major role in an epidemic of ROP that subsequently occurred. Mesenchymal cells (such as primitive astrocytes) of the ischemic, nonvascularized peripheral retina produce vascular endothelial growth factor (VEGF) which, in the absence of any regulation, stimulates the production of neovascularization in ROP. The premature retina responds dramatically to the presence of VEGF. The presence of VEGF results in the remodeling of vessels as a direct response to fluctuations in oxygen from an ischemic bed. Different mechanisms have been proposed to explain the cascade of vascular events taking place in the posterior pole in the presence of ROP. Arteriovenous shunts develop because of the reduced capillary resistance from the remodeling of cells combined with an increase in retinal blood flow. A wide spectrum of posterior retinal vascular changes exists in ROP. Plus disease describes the most severe vascular changes of dilation and tortuosity and is associated with severe ROP and visual morbidity if left untreated.
The diagnosis of plus disease is historically a dichotomous decision based on subjective comparison to reference images. According to current protocols and international standards for screening for ROP, all infants with a birth weight less than 1500 g and/or less than 32 weeks of gestational age are required to have weekly bedside retinal examination by a specialist with experience in ROP. Apart from the classic ROP presentation, which is fairly predictable in timing of progression and regression, there is a newly recognized form of ROP called \u201caggressive posterior ROP\u201d (AP-ROP). AP-ROP represents the most severe active form of the disease. AP-ROP occurs in the youngest and smallest infants and does not progress through the stages of severity of disease as in the classic form. AP-ROP is posterior in location and may occur in zone I or the posterior aspect of zone II. Plus disease develops early and is pronounced in all four quadrants. Often there is no ridge at the junction between vascularized and nonvascularized retina and only a flat network of neovascularization may be present. This can be difficult to view using indirect ophthalmoscopy; however, the massive, early and unusual presence of plus disease may be a major marker of this virulent form of ROP. Recent research has addressed potential quantitative approaches to the diagnosis of vascular changes in ROP. There is disagreement among experts on both the diagnosis of pre-plus and plus disease, showing discrepancies among observers. One group was, to our knowledge, the first to attempt a grading system for the vessel changes of ROP. The posterior pole changes may be more easily seen than zone or stage of ROP. As explained, the inconsistencies and subsequent anxieties placed on the ROP examination could be alleviated by a quantifiable system of ROP screening, potentially to be performed automatically by computer image analysis tools. One of the initial semi-automated programs available for assessing the retinal vessel changes in preterm infants was developed by Martinez-Perez at Imperial College London. Subsequent to retinal image search and analysis (RISA), computer-aided image analysis of the retina (CAIAR) was developed at Imperial College London. The user input necessary was reduced to half by more intuitive vessel location abilities based on maximum likelihood model-fitting in a scale space framework. A Gaussian profile model with parameters of height, width, and orientation is computed at each location in the image, as illustrated in the figures of the original report. Concurrently, ROPtool has been under development at Duke University, North Carolina, an extrapolation of a technique for measuring tubular objects in three-dimensional images initially used for locating the intracerebral vasculature from magnetic resonance angiography (MRA) images. Other programs have been adapted from diabetic retinal image analysis techniques for analysis of images from preterm infants. Vasculomatic a la Nicola (IVAN) is one such program. IVAN measures retinal vessel diameters of the six largest arterioles and six largest venules located 0.5\u20131.0 disc diameters from the disc margin. Recently, a completely automated system, RetVas, has been developed to detect the retinal blood vessels of images from preterm infants at risk of developing ROP. A sensitivity of 97% for locating one arteriole and one venule in each quadrant of 75 images has been reported.
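To make the CAIAR-style model fitting described above concrete, the sketch below fits a Gaussian cross-sectional profile to a synthetic one-dimensional vessel cross-section to recover a width estimate. It illustrates the general idea only; it is not CAIAR code, and the profile, noise level and parameter values are invented.

```python
# Illustrative Gaussian-profile fit for vessel width estimation (synthetic data).
import numpy as np
from scipy.optimize import curve_fit

def gaussian_profile(x, height, centre, width, background):
    """Gaussian vessel cross-section on a constant background."""
    return background + height * np.exp(-0.5 * ((x - centre) / width) ** 2)

# Synthetic cross-section: a dark vessel a few pixels wide on a bright background.
x = np.arange(0, 21, dtype=float)
true_params = (-60.0, 10.0, 2.0, 180.0)   # height, centre, width, background
rng = np.random.default_rng(0)
intensity = gaussian_profile(x, *true_params) + rng.normal(0.0, 2.0, x.size)

# Fit the model; p0 is a rough initial guess.
popt, _ = curve_fit(gaussian_profile, x, intensity,
                    p0=(-40.0, x.mean(), 3.0, intensity.max()))
height, centre, width, background = popt
# A common convention reports vessel width as the full width at half maximum.
fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(width)
print(f"estimated centre={centre:.2f} px, FWHM width={fwhm:.2f} px")
```

Repeating such a fit along the vessel, with an orientation parameter added, gives the per-location height, width and orientation estimates mentioned in the description above.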
The software is based on a model reverse engineered from human vision, using the same processes as the human visual cortex to observe images and select features. Pre- and post-automated analysis images are illustrated in the figures of the original report. Using the same principle of measuring a series of points along each vessel to calculate vessel tortuosity, Wallace et al. reported a related approach. Each vessel localization program uses its own algorithm for calculating the width and/or tortuosity of the blood vessel it has segmented. For example, CAIAR uses a multi-scale approach, successively subdividing the vessel sections into two parts. The geometric concept involves perpendicular bisection of the vessel chord at its midpoint and subsequent reapplication of the subdivision on the resultant segments until the segment lengths fall below a specified value (four pixels). Despite great advancements in the area, there are many barriers to the implementation of new concepts for real-time screening at the bedside. Firstly, most of these programs are not fully automated and require a trained operator to select a vessel. Secondly, the degree of accuracy of such a system requires rigorous testing in the form of multicenter clinical trials to ensure the small changes between normal and abnormal vessels are adequately and repeatedly detected in infants with and without ROP. Third, images, specifically high-quality images, may not be widely available due to equipment restrictions. The goal of treatment of Type I ROP is to remove the stimulus for abnormal neovascularization due to an ischemic retina. Most recently, the Early Treatment Trial for ROP (ET-ROP) has defined the severity of ROP into two types, Type I and Type II, recommending guidelines for treatment (outlined in a table in the original report). With our understanding of the pathophysiology of retinopathy of prematurity, from the role of proangiogenic factors to the cascade of events ending with dilation and tortuosity of posterior vessels, it is possible to predict the natural history of ROP and its close relationship to salvaging the eye as the disease progresses. Plus disease has been described as a factor of prognostic significance in determining stage, diagnosis, and signs of severity of the disease. Our access to current technology and the availability of objective diagnostic tools may lead us to earlier recognition and appropriate treatment. The presence of plus disease has now become an indication of severity and for determining the adequate moment of treatment. Although zones and stages of ROP are noted, they are of secondary importance in determining whether laser treatment is needed."} +{"text": "Canada's federal government has once again failed to shut North America's only authorized supervised injection facility: Insite. A majority ruling issued by the BC Court of Appeal on 15 January 2010 upheld an earlier British Columbia Supreme Court ruling in 2008 that protected the rights of injection drug users (IDUs) to access Insite as a health facility as per the Charter of Rights and Freedoms component of the Constitution of Canada. The majority decision from Honourable Madam Justices Rowles, Huddart and Smith also established a jurisdictional victory safeguarding Insite as most appropriately run under the authority of the province of British Columbia rather than the federal Government of Canada. The Federal Government has appealed the case to the Supreme Court of Canada. A hearing date has been set for 12 May 2011.
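Returning to the vessel-analysis algorithms above, the recursive chord-bisection scheme attributed to CAIAR can be illustrated with a short routine that subdivides a sampled centreline at the chord midpoint until segments fall below a length threshold and then reports a tortuosity index. This is a schematic reconstruction from the description in the text, not the published implementation, and the centreline and threshold values are purely illustrative.

```python
# Illustrative recursive chord-bisection tortuosity index (not ROPtool or CAIAR code).
import numpy as np

def _split_index(points):
    """Index of the centreline sample closest to the chord's midpoint."""
    midpoint = 0.5 * (points[0] + points[-1])
    d = np.linalg.norm(points - midpoint, axis=1)
    # Exclude the end points so the recursion always makes progress.
    return 1 + int(np.argmin(d[1:-1]))

def subdivided_length(points, min_chord=4.0):
    """Sum of chord lengths after recursive bisection of the centreline."""
    chord = np.linalg.norm(points[-1] - points[0])
    if chord < min_chord or len(points) < 3:
        return chord
    k = _split_index(points)
    return (subdivided_length(points[:k + 1], min_chord) +
            subdivided_length(points[k:], min_chord))

def tortuosity_index(points, min_chord=4.0):
    """Ratio of the subdivided path length to the end-to-end distance (>= 1)."""
    end_to_end = np.linalg.norm(points[-1] - points[0])
    return subdivided_length(points, min_chord) / end_to_end

# Synthetic centreline (pixel coordinates) of a mildly tortuous vessel.
t = np.linspace(0.0, 60.0, 200)
centreline = np.column_stack([t, 3.0 * np.sin(t / 6.0)])
print(f"tortuosity index: {tortuosity_index(centreline):.3f}")
```

A straight vessel gives an index near 1.0, while increasingly tortuous centrelines give progressively larger values, which is the kind of quantitative read-out these programs aim to provide.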
The appeal will be a legal one but even more so, it will be an appeal to humanity. Canada's federal government has once again failed to shut North America's only authorized supervised injection facility: Insite. A majority ruling issued by the BC Court of Appeal on 15 January 2010 upheld an earlier British Columbia Supreme Court ruling in 2008 that protected the rights of injection drug users (IDUs) to access Insite as a health facility as per the Charter of Rights and Freedoms component of the Constitution of Canada. The majority decision from Honourable Madam Justices Rowles, Huddart and Smith also established an important jurisdictional victory emerging from the cross appeal by the operators of Insite: the PHS Community Services Society (PHS). The ruling further safeguards Insite as most appropriately run under the authority of the province of British Columbia rather than the federal Government of Canada. Insite opened on 21 September 2003 under an exemption granting it status as a scientific pilot study until 12 September 2006. The primary goals of the program are: (1) to reach a marginalized group of IDUs with healthcare and supports who would otherwise be forced to use drugs in less safe settings; (2) to reduce dangerous injection practices (syringe sharing), thereby reducing the risk of infectious diseases like HIV and HCV; and (3) to reduce fatal overdoses in the population of people that use the facility. The program also aims to provide referrals to treatment and detoxification, reduce public disorder (public injection) and validate the personhood of a deeply stigmatized target population. The legal battle began near the end of Insite's three-year exemption for scientific study when a minority Conservative government was elected in Canada on 6 February 2006. The new government voiced opposition to the program during and after the election. On 1 September, the program's exemption under the Controlled Drugs and Substances Act was extended until 30 June 2008. A looming threat of closure by the Conservative-led government led the PHS to take the Government of Canada to court in late 2007, arguing that the Controlled Drugs and Substances Act (CDSA) in Canada is unconstitutional as it pertains to Insite because the closure of the program under the Act would impede people with addictions from receiving life saving healthcare. BC Supreme Court Justice Ian Pitfield ruled that the use of the CDSA to shut Insite would undermine the fundamental right, under Canada's Charter of Rights and Freedoms, to life, liberty and security of the person. On 2 October 2007, the project was given an additional exemption to operate under the CDSA. Since its inception, Insite has been subject to an independent review by a team of physicians and scientists put in place to provide an \"arm's length\" evaluation of the program. The results of this scientific evaluation have been published in peer-reviewed academic journals and have indicated that Insite has reduced unsafe injection practices, public disorder, overdose deaths and HIV/Hepatitis while increasing uptake of addiction services and detox. To date, despite this support from the scientific and medical community, the Conservative government of Canada remains entrenched in its position, having served the PHS with court documents indicating their intention to appeal the case of Insite to the highest court in the country: the Supreme Court of Canada.
A courtThe author declares that they have no competing interests."} +{"text": "A study of 2072 children who developed cerebral or spinal cord tumours of varying degrees of malignancy before 15 years of age has shown that there is equally good representation of fatal and non-fatal cases in official registrations. Attack rates are higher for boys than girls and the prognosis is better for girls than boys. The risk of an early death is negatively correlated with age at diagnosis, and the risk of a late death shows the opposite relationship. These observations and a relatively high incidence of hindbrain tumours are suggestive of an embryonic origin for most of the cases."} +{"text": "Electrochemotherapy (ECT) is a novel anticancer therapy that is currently being evaluated in human and pet cancer patients. ECT associates the administration of an anti-tumor agent to the delivery of trains of appropriate waveforms. The increased uptake of chemotherapy leads to apoptotic death of the neoplasm thus resulting in prolonged local control and extended survival. In this paper we describe the histological features of a broad array of spontaneous tumors of companion animals receiving pulse-mediated chemotherapy. Multivariate statistical analysis of the percentage of necrosis and apoptosis in the tumors before and after ECT treatment, shows that only a high percentage of necrosis and apoptosis after the ECT treatment were significantly correlated with longer survivals of the patients . Further studies on this topic are warranted in companion animals with spontaneous tumors to identify new molecular targets for electrochemotherapy and to the develop new therapeutical protocols to be translated to humans. Local management of solid neoplasms in humans generally involves multimodality approaches whose cornerstones are surgery combined with radiation therapy ,2. The uThe high rate of local control in our preliminary investigation that accMany tumor histotypes show a marked responsiveness to pulse-mediated chemotherapy, leading to tumor shrinkage and clinical remission. In particular, ECT seems promising at controlling oral mucosal melanomas either as a single modality therapy or in conjunction with surgical cytoreduction . MoreoveDespite the consistent number of preclinical and clinical publications on this topic, there are few data on the histopathological modifications induced by this therapy . Mir andThe lack of extensive investigation in this field prompted us to run a thorough revision of our histological samples to gather a broader picture of patterns of tumor response and eventually to identify possible prognostic factors.A total of 127 companion animals with spontaneous tumors were enrolled in different phase II ECT trials over a 7 years period and biopsies were collected at presentation, after the first session of ECT and at the completion of the treatment were collected before the beginning of ECT, after one session (one week after the treatment) and at the completion of the protocol (two weeks after the end of the protocol). More than 370 specimens were analyzed. Histopathology specimens embedded in paraffin have been cut into 6 \u03bcm sections and stained following standard protocols, using Hematoxylin/Eosin, Hematoxylin/Van Gieson, and toluidine blue staining (to identify poorly differentiated mast cell tumors) . 
Histopathological patterns were tested for significance regarding response to the first treatment, response to the second session of ECT and recurrence within the first six months of follow-up. Patients who died of unrelated disease were censored in the analysis. Variables tested were: percentage of necrosis and apoptotic index. All analyses were performed following an intention-to-treat method. The time to progression was calculated as the period from the date of starting treatment to the first observation of disease progression or to death from any cause. Survival plots were generated by the Kaplan-Meier product-limit method. The review of more than 370 bioptic specimens allowed us to determine different features of tumor histopathology following ECT. Patterns of response in the early phases of the treatment (after 1 session of ECT) involved an acute inflammatory response made up of neutrophils, lymphocytes and plasma cells, followed by extensive necrosis. At the completion of the treatment (after two weeks), the tumor samples showed a dramatic decrease in cell number, with the majority of the remaining cells in apoptosis with no inflammatory response, while most of the residual tumor mass was made up of scar tissue. In three cases of feline sarcomas that experienced local failure, the tumor recurred as a less aggressive histotype: a neurofibroma-like lesion rather than a high-grade sarcoma. Interestingly, we consistently found no evidence of damage in the normal tissues surrounding the neoplasm. ECT has several advantages over other anti-tumor techniques: ease of administration, low cost, minimal toxicity, and high response rate. Our morphological study shows, across a broad selection of tumors of companion animals, a progressive and highly selective destruction that frequently allowed conservative surgery even in the case of extensive tumors. The histopathological analysis of the treated tumors revealed that cancer cells did not undergo the typical response to bleomycin, consisting of enlargement and polynucleation. On the other hand, this pattern of tumor lysis seems to play a key role in the prevention of local recurrence and distant dissemination for melanomas treated with this technique. In fact, we recently described a vitiligo-like lesion in canine malignant melanomas of the oral cavity, where the absence of any pigment at the treatment site in the long-term survivors might imply targeting of melanin and other deep melanoma antigens by the immune system. To the best of our knowledge, this is the most extensive description of histopathological patterns of response to electrochemotherapy in tumors of companion animals, showing previously undescribed mechanisms of response. The morphological analysis further confirms the efficacy and selectivity of this novel anticancer treatment, as evidenced by the absence of damage in the normal tissues surrounding the neoplasms. Studies are currently ongoing at our laboratory to further define the nature of the immune response elicited by the ECT treatment, in order to refine and ameliorate the efficacy of our protocols. Spontaneous tumors of pets share striking clinical, histopathological and molecular similarities with their human counterparts, thus providing an invaluable bench-to-clinic bridge. The results of our studies are currently being translated to humans. The author(s) declare that they have no competing interests. All authors read and approved the final manuscript.
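For readers unfamiliar with the Kaplan-Meier product-limit method used in the survival analysis above, a minimal sketch is given below. The follow-up times and event indicators are invented purely for demonstration and are not data from this study.

```python
# Minimal Kaplan-Meier product-limit estimator in plain Python/NumPy.
import numpy as np

def kaplan_meier(times, events):
    """Return (distinct event times, survival probabilities).

    times  : follow-up time for each subject
    events : 1 if progression/death observed, 0 if censored
    """
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    order = np.argsort(times)
    times, events = times[order], events[order]

    survival = 1.0
    out_t, out_s = [], []
    for t in np.unique(times[events == 1]):
        at_risk = np.sum(times >= t)                 # subjects still under follow-up
        d = np.sum((times == t) & (events == 1))     # events occurring at time t
        survival *= (at_risk - d) / at_risk
        out_t.append(t)
        out_s.append(survival)
    return np.array(out_t), np.array(out_s)

# Hypothetical follow-up times (months) and event indicators.
t_obs = [3, 5, 5, 8, 12, 12, 15, 20, 24, 24]
e_obs = [1, 1, 0, 1, 1,  0,  1,  0,  1,  0]
for t, s in zip(*kaplan_meier(t_obs, e_obs)):
    print(f"t = {t:5.1f}  S(t) = {s:.3f}")
```

Censored subjects remain in the risk set until their censoring time, which is how the product-limit estimate uses the information from patients who died of unrelated causes without treating them as progression events.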
EPS set up the ECT protocols and treated the animals; FB, PM, FF and ADA prepared the histological samples and described the histopathological patterns; GC gave advise on the work and helped in the interpretation of the data; FB and BV performed the statistical analysis; AB supervised all the work and wrote the paper together with EPS."} +{"text": "The properties of human benign prostatic hyperplasia (BPH) and rat prostate were compared after culture in the absence of insulin and testosterone. Quantitative methods were used to assess changes in tissue composition and the height of the epithelial cells. BPH appeared less sensitive than rat prostate to withdrawal of hormone support, and the changes which occurred during culture of BPH were more typical of a repair mechanism to injury than of a castration effect. Cell kinetics was investigated using [125I] iododeoxyuridine and vincristine. Both approaches demonstrated a spontaneous surge in proliferative activity of BPH reaching a peak at about Day 4. In contrast, proliferative activity in rat prostate tended to fall over the period of 2-8 days of culture. The significance of these findings in terms of age linked effects is discussed."} +{"text": "Despite the rising interest in homeotic genes, little has been known about the course and pattern of evolution of homeotic traits across the mammalian radiation. An array of emerging and diversifying homeotic gradients revealed by this study appear to generate new body plans and drive evolution at a large scale.This study identifies and evaluates a set of homeotic gradients across 250 extant and fossil mammalian species and their antecedents over a period of 220 million years. These traits are generally expressed as co-linear gradients along the body axis rather than as distinct segmental identities. Relative position or occurrence sequence vary independently and are subject to polarity reversal and mirroring. Five major gradient modification sets are identified: (1)\u2013quantitative changes of primary segmental identity pattern that appeared at the origin of the tetrapods ; (2)\u2013frame shift relation of costal and vertebral identity which diversifies from the time of amniote origins; (3)\u2013duplication, mirroring, splitting and diversification of the neomorphic laminar process first commencing at the dawn of mammals; (4)\u2013emergence of homologically variable lumbar lateral processes upon commencement of the radiation of therian mammals and ; (5)\u2013inflexions and transpositions of the relative position of the horizontal septum of the body and the neuraxis at the emergence of various orders of therian mammals. Convergent functional changes under homeotic control include laminar articular engagement with septo-neural transposition and ventrally arrayed lumbar transverse process support systems.Morotopithecus that are still seen in humans supports establishment of a new \u201chominiform\u201d clade and suggests a homeotic origin for the human upright body plan.Clusters of homeotic transformations mark the emergence point of mammals in the Triassic and the radiation of therians in the Cretaceous. A cluster of homeotic changes in the Miocene hominoid At the dawn of modern genetics, William Bateson's Among the questions that Bateson sought to address by studying homeotics was the way in which genetic change could lead to the emergence of new body plans. 
Neither classical morphology nor standard Darwinian analysis has provided truly satisfying explanations of such major body plan innovations as the origin of the Bilaterians by symmetric right/left replication of the organism or the origin of the vertebrates by body axis inversion of the original Bilaterian design The discovery of the homeobox in the 1970s It is now reasonable to return to Bateson's project. Evolutionary change in the system of homeotic genes seems to be involved in body plan transformation. Modularity theory Can the study of homeotic change help show how morphogenetic evolution relates to the emergence of new body plans This report examines the question of whether duplications and homeotic changes have played a role in new body configuration change in three events of special biological interest-the emergence of mammals among the synapsid amniotes, the diversification of mammal groups in the Late Cretaceous, and the emergence of \u201chominiforms\u201d among the catarrhine primates in the Early Miocene.The study of axially arrayed serial homeotic characters in a group such as the mammals necessitates the study of vertebrae. This is a topic that has been relegated to limited sub-specialist and medical interest for more than 150 years. However, before Darwin, many of the major attempts to assemble a biological explanation for similarity among animals involved vertebrae explicitly. Most prominently, the widely attended zoological works of Goethe Hox, Pax and other Bilaterian homeotic and morphogenetic gene families have further increased the relevance of attention to evolution of axial structures In addition to the progress of axial skeletal fossil discoveries, the remarkable advances in our understanding of the embryologic development of axial structures and their relationships to The hominiform example is particularly compelling. Proconsulid hominoids differed from old world monkeys in having a Y-5 pattern of molar cusps but were otherwise similar to them in body form and ecological niche\u2013most appear to have been generalized quadrupeds Morotopithecus bishopiProconsul nyanzaeOreopithecus bamboliiPierolapithecus catalaunicusFor most of the past two hundred years, models of the origin of human upright posture and bipedalism have been based primarily on evidence from the appendicular and cranial skeleton, but evidence from the spine has played little or no role in our understanding. A series of discoveries of axial skeletal fossils from species including Homo sapiens lumbar vertebra in UMP 67-28, a hominoid fossil from 21.6 million years ago Given the many unique aspects of load bearing and movement requirements, it is not at all surprising that the lumbar vertebrae of modern humans are strikingly different in structure and function from typical mammalian vertebrae. However, the appearance of most of the unique features of the Morotopithecus. 
Upright bipedalism plays a significant role in all the species of a clade that share the morphogenetic transformation with Morotopithecus.For a variety of reasons, the term \u201chuman\u201d has been applied to a clade of hominoids commencing at the split from the chimpanzee lineage about six million years ago The significance of the anatomical adaptations to upright posture and varying degrees of bipedalism seem among the hominoids has been a matter of ongoing interest However, when the various components of axial anatomical specialization in hominoids are fully identified, and their context in the broader setting of mammalian homeotic evolution is made clear, an alternate sequence of events becomes increasingly compelling. This is the possibility that a distinct and ancient clade within the hominoids can be identified that share a major modification of axial architecture that underlies the upright posture and primary bipedalism of modern humans. This morph appears to persist across the succeeding 21 million yeas to be preserved in primitive form in modern humans. The various other types of specialized locomotion seen among existing hominoids are made possible by comparatively minor secondary and tertiary modifications of the original primitive upright, bipedal architecture. This is the basis for asserting a homeotic transformation is the basis of the origin of humanity.This study revealed that body configuration modification in the Mammalia often involves emergence and change of homeotic gradients. In a number of instances clusters of multiple different homeotic gradient changes occurred at the stem of a major systematic radiation .These clusters of homeotic change generally qualify as body plan changes and often relate to significant alterations in the adaptive zone of the descendant groups. These clusters of changes are often preserved as a fixed homeotic set in the descendant group across tens of millions or hundreds of millions of years.Within individual lineages many of the gradients demonstrate alterations on a sporadic basis (at the level of species or higher level clades). Some lineages show a very high frequency of homeotic change for some gradients. Other lineages show little or no homeotic change over hundreds of millions of years (Monotremata).Some homeotic alterations appear to be relatively highly conserved\u2013they fluctuate in their expression among more ancient lineages but eventually become fixed (e.g. lumbar rib suppression). A few homeotic features never change after their initial appearance .At a finer level, some gradients clearly are subject to independent alteration in rate and tempo of expression along the body axis\u2013some progress incrementally along the segmental series, some commence abruptly and then progress slowly and these properties vary across taxa. The gradients may respect medio-lateral and dorso-ventral positional relationships relative to each other or they may cross as they progress down the body axis. The segmental locations of onset of gradient change do not follow rigidly fixed sequences relative to each other.Once established, the expression pattern of these gradients and of the morphological substrates upon which the gradients act then diversify . Some apOne remarkable aspect is the mirroring and duplication of homeotic gradients. A gradient series usually seen with a given polarity and location recurs with opposite polarity at a different location. New gradients may act along the entire body axis or in replicated form within each segment. 
The emergence of new types of structures by duplication with subsequent diversification of the new version mimics the pattern of change often seen with gene duplication at the level of the genome.The basic homeotic distinction of five major spinal regions is apparThe thoraco-lumbar transition within the vertebral series of mammals, however, depends on a variety of gradients that defy simple counting and categorization \u2013this issThe lumbo-sacral boundary collectively affects multiple gradients in concert and is therefore a discreet phenomenon like the cervico-thoracic boundary. The recent advent of a molecular resolution to the deep relationship of mammalian groups Scutisorex provides the most dramatic example of morphogenetic disruption of the homeotic system among the mammals Scutisorex has twelve lumbar vertebrae.Petrodromus tetradactylus (USNM 241593)\u2013a species with a remarkably accelerated rate of morphological evolution Another informative homeotic character state is the replication of the \u201cdiaphragmatic\u201d thoraco-lumbar transition vertebra in a specimen of the macroscelid Reduction in the number of dorsal (thoracic+lumbar) segments is relatively uncommon. It is typical of the Order Chiroptera and the Order Cingulata. Among hominoids this occurs in all of the species of the hominiform clade , 4 but nThe initial reduction in number of lumbar vertebrae in the hominiforms appears to be a shift from the catarrhine modal number of seven down to a modal number of five or six . Modern Pongo, then Gorilla, and then Pan, with the longer more flexible lumbar spine retained in primitive form in hominines such as Australopithecus and Homo can have mobile ribs on all of their lumbar vertebrae. In fact, many groups of Mesozoic mammals also have mobile uniarticulate ribs on their lumbar vertebrae. It is only among the therian mammals that lumbar ribs are lost definitively Monotremes part of the vertebra that was revealed and characterized by this study. Some of these gradients appear to have profound functional significance, others seem to be best valued as windows into the morphogenetic mechanisms in play in mammalian evolution. The neomorphic element can be termed the The neomorph appears to arise by a medio-lateral duplication on the lamina of the vertebra. A single primitive extension or process seen in most tetrapods (the diapophysis) becomes two side by side extensions , 10, 11.Once established it actually becomes more constant than the primitive extension that it replicates. In the posterior thoracic region of many mammals, the diapophysis is suppressed along with the dorsal rib head, but the laminapophysis still appears. It is therefore clear that its morphology is determined by a new homeotic gradient that is not necessarily subject to events that alter the old homeotic gradient responsible for the diapophysis.The laminapophysis disassociates most of the trunk musculature from the ribs, thus significantly disengaging the rib cage from the locomotor musculature of the body . This isAt its earliest appearance there are no additional homeotic gradients affecting it. In monotremes it proceeds with monotonous uniformity of shape through all dorsal vertebrae . 
Independiaphragmatic\u201d joint along the spine (reverse polarity) in additHystrix, Hydrochoerus) and almost certainly reflects an entirely independent genetic event.Anterior mirroring also occurs in most carnivores, all pholidotans (pangolins), many artiodactyls and some perissodactyls suggesting that this is an echo of a single homeotic gene-based replication event in an ancient clade within the Laurasiatheria which took place after the divergence of the Chiroptera and the Eulipotyphyla . A similMirroring or replication of homeotic gradients also occurs in regard to several features in the Xenarthra resulting in multiple facet pairs at each articulation between lumbar vertebrae. In some species, the primary articulation takes on an unusual cylindrical shape, so the appearance of a mirror image cylinder is highly suggestive of a duplicated morphogenetic instruction .Mammalian groups appear to display a virtual collapse of the homology paradigm when their different types of lumbar transverse processes (LTPs) are examined in detail. More than fifteen different types of lumbar transverse process serial homology were observed and therThe explanation appears to be a morphological field that that varies in the site of contact of its induction point upon the vertebra. The variation affects both dorso-ventral location and antero-posterior position within the segment as it can apparently coopt a variety of different axial structures to form the lumbar transverse process (LTP) depending on the location of where its induction point impacts the forming vertebra.Nemegtbaatar (Multituberculata) A few similar antecedents appear in occasional non-mammalian synapsids Large LTPs can structurally support large body size [37] so Cetacea display two distinct types \u2013one typehominiform\u201d clade within the Hominoidea. The resulting relocation of the LTP structural support is the fundamental functional change that underlies upright posture in hominiforms. This character is first seen at 21.6 million years ago in the lumbar vertebra of Morotopithecus bishopiThe transition in LTP homology is a key basis for the proposal in this paper to establish a \u201cDivision of the chordate body into dorsal and ventral portions is defined by a rib-bearing horizontal septum in vertebrates and by dorsal and ventral divisions of the ramifying segmental spinal nerves. It is conventional to appreciate that vertebrate bilaterians have their neural tube dorsal to the horizontal septum while invertebrate bilaterians have the neural axis ventral to it. Overall, this is an issue of the fundamental patterning mechanisms of the dorso-ventral gradients of morphogenesis as well as a key point in the systematics of the Bilateria.Oddly enough, in humans the horizontal septum is actually dorsal to the neural axis in the lumbar region. In fact, this situation occurs sporadically in groups appearing in various lineages scattered throughout the therian mammal phylogeny , 17A, 18Dorso-ventral transposition of the horizontal septum and of the neuraxis occurs at a crossing point that may be termed the \u201csepto-neural inflexion point\u201d and reflects the crossing of two somewhat independent morphogenetic gradients see .The ancestral synapsid condition The details are still unclear for some Mesozoic mammal groups, but for all therian mammals there is a major shift of the pararthrum to a position near the dorsal margin of the vertebral body , 7A. 
ThiEmbryologically and evolutionarily, the ribs arise at intersection lines between the horizontal septum and segmental myosepta. Because of this, the relatively dorsal or relatively ventral position of the attachment point of a costal derived process or lumbar transverse process on the vertebra reveals the relative position of the septal and neural horizontal body planes in the animal.In Archosaurs, there is a very abrupt and complete inflexion and transposition . In the The particular type of transition seen in archosaurs almost never occurs in mammals because the synapsid/mammalian primary rib articulation tends not to cross the \u201cneuro-central suture\u201d of the vertebra . In mammals, when the horizontal septum becomes transposed to a position dorsal to the neuraxis, there may be non-costal lumbar transverse processes (as in humans) but there are almost never ribs dorsal to neuraxis. Exceptions to this occur in the form of rib articulations on the pedicles in Superorder Xenarthra (Order Pilosa) and in tSepto-neural inflexion patterns have not been previously appreciated as an important aspect of tetrapod morphologic and functional evolution. Nonetheless, they may play an important role in the emergence of large cursorial mammals at the close of the Mesozoic, the emergence of the Carnivora from the ungulates at the Cretaceous-Tertiary boundary and in the origin of the anatomical basis of upright posture in humans in the stem hominiform hominoids of the Early Miocene.th gradient set that specifies the dorso-ventral position of the diarthrum relative to the neuraxis as well as its relation to the horizontal septum.A different type of change in horizontal body planes occurs in most australodelphian metatherians. This is the transposition of the ancient more dorsal rib articulation plane to become ventral to the neuraxis in the lumbar region , 20. ThiIn australodelphians, there is never any further dorsal shift of the horizontal septum. Many eutherians including the Eulipotyphla in the Superorder Laurasiatheria show a similar stable relation of the horizontal septum and the neuraxis.Dorsal repositioning of the horizontal septum is typical of the superorder Afrotheria. In proboscideans, some members of the group display a full transposition Dorsal repositioning of the septum is universal in the Ferungulata , 22, 23.In Perissodactyls, the septum apparently undergoes a secondary and partial ventral descent. The result is the obliteration of the neural foramina since the septum and the neuraxis become co-linear. The nerve roots in perissodactyls exit the spinal canal through perforations in the pedicle and they do not have intervertebral neural foramina as in most vertebrates .Some artiodactyl groups that have secondary ventral shifting of the horizontal septum also have co-linearity with the neuraxis and thus have parallel evolution of the pedicle perforations for the nerve roots instead of intervertebral foramina . Nerve eIn the eutherian Superorder Euarchontoglires, the horizontal septum is parallel to or just ventral to the neural canal . HoweverMorotopithecus bishopi dated at 21.6 million years ago so this event may literally be the anatomic determinant of \u201chumanity\u201d. Although it is conventional to apply these criteria only to a \u201chominine\u201d clade originating about six million years ago, the understanding of the impact of this septo-neural transposition event is a formidable challenge to that framework. 
If the same feature and same genetic event that underlies human upright posture and bipedalism is simply preserved in its primitive form in the stem hominines of six million years ago, how do we exclude the original species in which it appears\u2013There is a common functional requirements of the spine in quadrupedal therians to resist hyperextension due to gravity while allowing dorso-ventral flexibility in locomotion. It is therefore not surprising that there are multiple convergent anatomical structural solutions. Most of these have not been appreciated in earlier attempts to model the mammalian spine on a global engineering basis without adequate attention to the context and detail of the specific anatomical structures actually involved Universally in the Ferrungulata, Paenungulata, Xenarthra and Ameridelphia where septo-neural transposition takes place, there are supplementary modifications of the lumbar spine that relate to resistance against extension of the spine . TypicalHystrix cristata (porcupine with weight up to 30 kg\u2013and note much larger extinct related species such as Neosteiromys pattoni) . Among pIn a number of mammalian groups including both therians and metatherians These and other types of arrays with the tip of the LTP ventral to the effective axis of rotation for lumbar extension participate in a dynamic, elastic, ligamentous system that supports the lumbar region and resists extension . This apIn this elastic system, as the vertebral column passes into extension, LTPs whose tips are below the effective intervertebral axis of rotation begin to separate from each other . This apPongo and Gorilla have bony blocks to lumbar hyperextension that mimic the situation in ungulates in Pan is not supported by bony rigidification of the lumbar region. However, Pan differs from other hominiforms such as Morotopithecus and Homo in having thin flat lumbar transverse processes held under tension by heavy ilio-lumbar ligaments suspended between high iliac crests share them because they emerged in a common hominiform ancestor and are preserved as a synapomorphic character set of the group.Many of the features attributed here to the hominiform pattern of lumbar vertebral architecture do occur more or less sporadically in other mammalian superorders although they are rare in the Superorder Euarchontoglires and are not seen in any non-hominiform primate group. It is worth considering that each of the hominiform lineages could have undergone the septo-neural transposition and consequent loss of the styloid and the ventrally tensioned LTP array on a homoplastic basis. However, this is no more convincing than the more parsimonious suggestion that all the hominiforms known to display these features provided the basis for large body size in therians of the Late Cretaceous. Changes in three homeotic gradient systems mark the Early Miocene establishment of a body plan committed to upright postures in the hominiform hominoids.Homeotic change can have major adaptive effects. When a diverse radiation of taxa shares the homeotic innovations of the stem group, there is a Morotopithecus at 22 million years (hominiform/proconsulid divergence) so it is clear that these changes happened with some temporal proximity to each other if not simultaneously. 
Future discoveries from the fossil record of this time period will no doubt reveal further details about the sequence and tempo at which this body plan generating event took place.The hominiform homeotic transformation is bracketed between the cercopithecoid/hominoid divergence around 24 million years ago and the appearance of Duplication of homeotically determined structures and gradients in the Theria clearly relate to a remarkable explosion of new mammalian body plans. Based on divergence patterns, there is considerable evidence that this took place during the ten to fifteen million years prior to the Cretaceous-Tertiary Boundary and not after it.This is an excellent candidate explanation for the odd pattern of total absence in the fossil record of any mammals much larger than one or two kilograms for the first 160 million years of the existence of this group Therian mammals deploy symmetrical gaits for rapid locomotion The appearance of large mammals at the close of the Cretaceous is at least coincident with the appearance of two major types of architectural transformation of the lumbar spine to provide non-muscular support against extension in the lumbar region. These are convergent class of rigid locking systems in groups with septo-neural transposition , and 26 These data also support the concept of a threshold effect in diversification of the mammals\u2013progress awaited morphogenetic innovation. This supports an enlarged role for a mutational view A comparative evaluation of serially repeating structures and homeotic patterning in 250 extant mammalian species and fossil forms was carried out. For extant forms, specimens in the collections of the Harvard University Museum of Comparative Zoology (MCZ), Harvard Peabody Museum, Smithsonian Museum (USNM) and Chicago Field Museum (FMNH) were selected to provide coverage of all mammalian families except for the order Chiroptera and Rodentia where coverage was at the level of the superfamily. Specimens were selected based on preparations in which a complete naturally articulated spine in which all details could be observed. The objective was to obtain a representative overview across the Class Mammalia but variation within species was not addressed extensively. In essence there simply is not sufficient material available to provide any real comprehensive assessment of variation if the full systematic array of mammals is to be covered. In addition the vertebral nomenclature of Owen"} +{"text": "The concept of using a stent to maintain patency of a lumen is not new. As early as 1969, stents werebeing investigated in the peripheral arterial system as a means of preventing restenosis after dilatationby balloon angioplasty . Since then, numerous reports have demonstrated the useof stents in both the peripheral and coronary artery systems . Concomitant with the investigation of expandable endovascularmetal prosthesis has been the development of prosthetic devices for management of tracheobronchial,gastrointestinal, and genitourinary diseases. We will review the use of endoscopicallyplaced prosthetic devices in the management of diseases affecting these systems."} +{"text": "We have reported in the preceding paper that the treatment of plateau phase mouse EMT6 tumour cells with a combination of hyperthermia and trifluoperazine greatly enhances the cytotoxicity of the antitumour drug belomycin (BLM). The cytotoxic action of BLM is thought to arise from the induction of DNA damage in a manner which reflects chromatin accessibility. 
Thus we have studied the effects of the two modifiers (HT and TFP) on chromatin structure and BLM-induced DNA damage. Co-treatment of cells with HT and TFP altered chromatin organisation by the formation and slow resolution of new DNA attachment sites at the nuclear matrix. HT increased drug-induced DNA damage by the general depression of repair rather than through the generation of new sites for drug action. TFP produced a more discrete block in the repair of alkali-labile lesions in DNA. Both processes appear to occur for the combination of BLM, HT and TFP, and we propose that the novel chromatin configuration permits the accumulation of potentially lethal DNA strand breaks. Our study indicates the potential value of chromatin/DNA repair modifying regimens for overcoming the poor responsiveness of some tumour cells to chemotherapeutic drugs and provides a rational basis for the use of calmodulin inhibitors in thermochemotherapy."} +{"text": "The data in the columns \"No. of Cases,\" \"No. of Controls,\" and \"Unadjusted Odds Ratio\" for the Rosiglitazone rows are duplicates of the equivalent data for Pioglitazone. Please view the corrected Table 4 here:"} +{"text": "We present an enzyme- and immuno-cytochemical, and ultrastructural characterization of trout thymic nurse cells (TNCs). Our data suggest that isolated trout thymic multicellular complexes are epithelial cells with acidic compartments that may be involved in the processing of antigens and in the generation of the MHC-II proteins that these cells express, and also that isolated TNCs are the in vitro equivalent of the pale and intermediate electron-lucent epithelial cells located in the inner zone of the trout thymus, constituting indirect evidence of the phylogenetical relationships of the inner zone of the teleost thymus with the thymic cortex of higher vertebrates."} +{"text": "Sir, Periodontitis is a destructive inflammatory disease of the supporting tissues of the teeth and is caused by specific microorganisms or groups of specific microorganisms, resulting in progressive destruction of the periodontal ligament and alveolar bone with periodontal pocket formation, gingival recession, or both.2"} +{"text": "Foodborne illness of microbial origin is the most serious food safety problem in the United States. The Centers for Disease Control and Prevention reports that 79% of outbreaks between 1987 and 1992 were bacterial; improper holding temperature and poor personal hygiene of food handlers contributed most to disease incidence. Some microbes have demonstrated resistance to standard methods of preparation and storage of foods. Nonetheless, food safety and public health officials attribute a rise in incidence of foodborne illness to changes in demographics and consumer lifestyles that affect the way food is prepared and stored. Food editors report that fewer than 50% of consumers are concerned about food safety. An American Meat Institute (1996) study details lifestyle changes affecting food behavior, including an increasing number of women in the workforce, limited commitment to food preparation, and a greater number of single heads of households. Consumers appear to be more interested in convenience and saving time than in proper food handling and preparation."} +{"text": "This report describes the first pathologic and immunohistochemical recognition in Australia of a rabies-like disease in a native mammal, a fruit bat, the black flying fox. 
A virus with close serologic and genetic relationships to members of the Lyssavirus genus of the family Rhabdoviridae was isolated in mice from the tissue homogenates of a sick juvenile animal."} +{"text": "The family histories of 131 patients with histologically defined Hodgkin's disease (HD) were studied and 2,517 first and second degree relatives and spouses were identified and followed up. The causes of death in deceased relatives were ascertained from death certificates. The numbers of deaths from selected causes were compared with the numbers that would be expected if the relatives had suffered the same mortality rates as the Scottish national population. A 4-fold increase in deaths due to HD was found among first and second degree relatives of patients with the disease (6 cases observed compared with 1.4 expected). Five of the 6 familial cases were related to index patients with the mixed cellularity form of the disease; the remaining case was the brother of a patient with the lymphocyte-depleted form of the disease. The increased risk was seen among relatives of both young and older patients and there was no consistent intrafamilial similarity in age of onset or time of onset of disease."} +{"text": "Two patients with mucosal cancer of the periampullary region were treated with papillocholedochectomy, which entails removal of the papilla of Vater and the whole length of the common bile duct. The neoplasm is dissected out through the plane between the duodenal circular and longitudinal muscles, deep to the sphincter of Oddi and the fibromuscular layer of the bile duct. Pathological examination showed that cancer was confined to the mucosal layer without stromal invasion, and that the operation achieved radical cure. For mucosal cancer, papillo-choledochectomy is an alternative to pancreatoduodenectomy, provided that repeated frozen-section studies confirm the completeness of excision."} +{"text": "Peers and Linsell (1973) demonstrated a significant association between the incidence of primary liver cancer and ingested aflatoxin in a study in the Muranga district of Kenya. A study of hepatitis B antigen in the same district showed no significant differences between the low altitude area, with a relatively high incidence of primary liver cancer, and the high altitude area with a lower incidence of the tumour. Current evidence is more in favour of aflatoxin playing an important role in the aetiology of primary liver cancer but hepatitis B antigen may play an ancillary role."} +{"text": "In the BUPA Study, a prospective study of 22,000 men attending a screening centre in London, serum samples were collected and stored. The concentration of beta-carotene was measured in the stored serum samples from 271 men who were subsequently notified as having cancer and from 533 unaffected controls, matched for age, smoking history and duration of storage of the serum samples. The mean beta-carotene level of the cancer subjects was significantly lower than that of their matched controls. The difference was apparent in subjects from whom blood was collected several years before the diagnosis of the cancer, indicating that the low beta-carotene levels in the cancer subjects were unlikely to have been simply a consequence of pre-clinical disease. Men in the top two quintiles of serum beta-carotene had only about 60% of the risk of developing cancer compared with men in the bottom quintile. 
The study was not large enough to be able to indicate with confidence the sites of cancer for which the inverse association between serum beta-carotene and risk of cancer applied, though the association was strongest for lung cancer. The association may be due to beta-carotene affecting the risk directly or it may reflect an indirect association of cancer risk with some other component of vegetables or with a nonvegetable component of diet that is itself related to vegetable consumption."} +{"text": "Copper(II) complexes of amino acids and peptides containing the chelating bis(imidazolyl) residues have been reviewed. The results reveal that bis(imidazolyl) analogues of these biomolecules are very effective ligands for metal binding. The nitrogen donor atoms of the chelating agent are the major metal binding sites under acidic conditions. In the presence of terminal amino group the multidentate character of the ligands results in the formation of various polynuclear complexes including the ligand and the imidazole bridged dimeric species. The most intriguing feature of the coordination chemistry of these ligands is that the deprotonation of the coordinated imidazole-N(1)H groups results in the appearance of a new chelating site in the molecules. It leads to the formation of stable trinuclear complexes via negatively charged imidazolato bridges."} +{"text": "Data were examined to determine trends in survival from cancers of the oral cavity and pharynx in Scotland between 1968 and 1987, and to analyse survival rates and the previously noted increases in the incidence of such cancers according to the level of social deprivation. Incidence data on oral cavity and pharyngeal cancer and survival rates following diagnosis were obtained from the Information and Statistics Division of the Common Services Agency for the National Health Service in Scotland, covering the period 1968-92. It was found that survival rates for cancers of the tongue, mouth and pharynx diagnosed among persons less than 65 years of age decreased between 1968-72 and 1983-87. Five year relative survival rates fell from 47% to 39% over this period, while the equivalent rates among persons older than 65 years have shown a modest improvement from 34% to 38%. When considered by level of social deprivation, survival is lower among persons from the most deprived areas, and it is among such persons that the recent increases in occurrence of cancers of the oral cavity and pharynx have primarily occurred. The poorer survival among those from more socially deprived areas, and the evidence that the largest increase in incidence has occurred in such areas may to some extent explain the non-favourable trends in mortality. More importantly it emphasises the potential benefits of targeting such a population for oral health information. An educational campaign should include both information on the risk factors for developing oral cancer, and also the importance of seeking an early professional consultation in the case of symptoms."} +{"text": "A pathological review was carried out on 600 patients with breast carcinoma entered into the 'Nolvadex' Adjuvant Trial Organisation (NATO) study. The tumours were graded histologically and these results were compared with the oestrogen receptor (ER) status of the tumours, the numbers of recurrences and the length of survival of the patients. 
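The Scottish analysis above reports five-year relative survival rates (for example, a fall from 47% to 39% in patients under 65). As a hedged illustration of how such a figure is derived, the short Python sketch below divides observed cohort survival by the expected survival of a comparable general-population group; the numbers used are hypothetical placeholders, not the registry's life-table data.

def relative_survival(observed: float, expected: float) -> float:
    # Relative survival = observed survival of the patient cohort divided by
    # the expected survival of an age- and sex-matched general population.
    return observed / expected

observed_5yr = 0.36  # hypothetical: 36% of patients alive five years after diagnosis
expected_5yr = 0.92  # hypothetical: 92% expected five-year survival in the matched population
print(f"{relative_survival(observed_5yr, expected_5yr):.0%}")  # prints 39%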
It was found that histological grading was predictive both in terms of events and survival, and correlates significantly with oestrogen receptor status; within histological grades I and II, patients receiving 'Nolvadex' had fewer events and deaths compared with patients in the control group. For patients with grade III tumours, the treatment effect was qualitatively in the same direction as the benefit obtained in patients with grade I and II tumours."} +{"text": "One hundred and thirteen biopsies of the palate in people accustomed to smoking cigars, most of them with the burning end of the cigar inside the mouth, have been studied. Thirty-eight of these showed mild to severe atypical changes in the epithelium. There were 19 lesions showing orthokeratosis and 53 showing hyperorthokeratosis. The earliest atypical change is seen in the mouths of the ducts of the glands. There were 3 cases showing microinvasive carcinomas. Pigmentation is a prominent feature in these cases. The papules with umbilication could be due to hyperplasia of the mucous glands. It is suggested that stomatitis nicotina occurring in men and women with the habit of reverse smoking is probably precancerous because of the presence of atypical changes in the epithelium and also the finding of 3 microinvasive carcinomas without any macroscopic evidence. There is no acceptable explanation why the soft palate escapes getting either stomatitis nicotina lesion or carcinoma in reverse smokers."} +{"text": "An attempt has been made to reverse cachexia and to selectively deprive the tumour of metabolic substrates for energy production by feeding a ketogenic regime, since ketone bodies are considered important in maintaining homeostasis during starvation. As a model we have used a transplantable mouse adenocarcinoma of the colon (MAC 16) which produces extensive weight loss without a reduction in food intake. When mice bearing the MAC16 tumour were fed on diets in which up to 80% of the energy was supplied as medium chain triglycerides (MCT) with or without arginine 3-hydroxybutyrate, host weight loss was reduced in proportion to the fat content of the diet, and there was also a reduction in the percentage contribution of the tumour to the final body weight. The increase in carcass weight in tumour-bearing mice fed high levels of MCT was attributable to an increase in both the fat and the non-fat carcass mass. Blood levels of free fatty acids (FFA) were significantly reduced by MCT addition. The levels of both acetoacetate and 3-hydroxybutyrate were elevated in mice fed the high fat diets, and tumour-bearing mice fed the normal diet did not show increased plasma levels of ketone bodies over the non-tumour-bearing group despite the loss of carcass lipids. Both blood glucose and plasma insulin levels were reduced in mice bearing the MAC16 tumour and this was not significantly altered by feeding the high fat diets. The elevation in ketone bodies may account for the retention of both the fat and the non-fat carcass mass. This is the first example of an attempt to reverse cachexia by a diet based on metabolic differences between tumour and host tissues, which aims to selectively feed the host at the expense of the tumour."} +{"text": "Sir, The study of virulence traits among bacteria represents an upcoming focus in the field of bacterial pathogenesis, where another serious emerging health concern today is the appearance and spread of resistant bacterial strains. 
Current knowledge of pathogens\u2019 effects on host cell function, of the secretory pathways that many pathogenic bacteria use to export virulence-related proteins, of the regulation of virulence genes and of the identification of these genes by analysis of bacterial DNA sequence data is proof of the revolution in modern science and advanced technology. Another important issue that needs serious attention is the understanding of the evolution of pathogens, considering their phylogenetic distribution, while the genetic and evolutionary factors that have contributed to their emergence are important for defining spatial virulence. The comparison of spatial virulence worldwide will help resolve the related mechanisms of virulence and uncover the strategies used by bacteria to subvert immune defences and cause disease."} +{"text": "Murine embryonal carcinoma cells, the pluripotent stem cells of teratocarcinoma, were injected simultaneously into caudal and cranial sites on the back of syngeneic recipients in order to determine whether regional anatomical differences affect their take and growth rate and differentiation. The overall tumour take rate was higher in caudal than cranial sites, but the initial weight of tumours was higher in the cranial than caudal sites. Tumours developing in the two anatomical sites grew at the same rate with a linear increase in volume. At the end of the 4-week experimental period the differences in the size of anterior and posterior tumours were negligible and no histological differences were noted between the two groups. Our data indicate that regional factors significantly affect the take rate and the initial growth of this murine teratocarcinoma, i.e. the establishment of solid tumours from injected stem cells. The growth rate of established tumours was not affected by regional factors."} +{"text": "A model has been developed for studying the capability of cells from primary murine mammary tumours to establish colonies in distant organs. The model involves the i.v. inoculation of disaggregated tumour cells into autologous and syngeneic recipients. The results show that the metastatic colonization potential of cells from a given tumour is consistent within the animals of an inoculated batch. Also, the findings are uniform in the autologous host and the syngeneic recipients. Tumours vary in their colonization potential and can be classified in 2 main groups designated high and low. These findings indicate that: (i) cells from 37% of mammary tumours can heavily colonize the lungs when inoculated i.v., even though the incidence of metastatic spread of these tumours in the undisturbed animal is almost zero. Thus, the relative infrequency of spontaneous metastasis from murine mammary tumours is not due to inability of the tumour cells to survive and colonize once free in the blood stream; and (ii) the colonization potential of the tumours is an intrinsic property of the tumour cells rather than of the host, whose prior acquaintance with the cells does not seem to confer resistance to colonization. The model presents opportunities for identification of possible differences between tumours of high and low colonization potential, and is being used to study cellular properties which favour colonization of distant organs by comparison of observations in vitro with the behaviour of cells from the same tumour in vivo."} +{"text": "Oestrous ovis deposits its larvae in the conjunctival sac of the human eye. 
The present paper reports a case study of ocular myasis among sheep farm workers caused by Oestrous ovis. The ocular myasis and the associated mucopurulent conjunctivitis are occupationally acquired in these cases. This study also suggests the treatment of patients and the recovery of the larvae.Ocular myasis and associated mucopurulent conjunctivitis in human eyes is a rare phenomenon. However, if the sheep bot fly abounds and poor hygienic environment prevails, the Cyclorhapid and Oestridae may produce myasis. The Oestrous ovis, the common sheep bot fly breeds in the nasal cavity and sinuses of sheep. The fly enters the nostrils and deposits its larvae. The larva crawls and reaches the brain cavities, they mature and fall on the ground and become adult. In the sheep farm the standard of hygiene is always very low and there are no techniques adopted to stop the activities of these insects.[In India, agriculture is a major profession. In the agricultural sector, the laborers engaged in field activities may be exposed to several organisms causing zoonotic diseases. The breeding and rearing of sheep or goats is also one such agricultural based activity in India. An unorganized labor force is deputed by the farmers for taking care of animals in sheep farm. These laborers may have the chances of zoonosis due to insects in the vicinity of sheep farm. There has been prevalence of ocular myasis associated with mucopurulent conjunctivitis among these workers. Ocular myasis is due to the infestation of the eye with maggots or bots of certain flies. The infe insects.3 We haveThe present paper describes the case study on the pathology, treatments and associated microbes for the development of mucopurulent conjunctivitis acquired occupationally in the sheep farm workers.Oestrous ovis in eye and eye discharge due to mucopurulent conjunctivitis were the cases examined. The infected eye was observed by ophthalmoscope. The collected discharge namely serous, mucopurulent was analyzed for the presence of pathogens by culture techniques. The bacterial strains isolated and characterized by staining techniques. Antibiotic sensitivity pattern of the organism was studied by Kirby-Bauer method.The sheep farm worker infected with first stage larvae of in question was done by immobilizing with the topical application of 1% xylocaine and grasped with the help of forceps. The diseased eyes were treated with suitable anti-inflammatory drugs and antibiotics determined out of the culture studies.The antibiotics such as garamycin, neomycin, erythromycin, tetracycline, ofloxacin, norfloxacin and sulfacetamide were tested. The recovery and the removal of insect larvae Oestrous ovis. These larvae caused tissue damage over the eye ball and conjunctival sac. The movement of the larvae was affected by the body bristles and barbs [Staphylococcus aureus, Staphylococcus epidermidis, Pseudomonas and Moraxella sp. The important observation was the presence of Chlamydiae trachomatis in certain cases.The observation on the infected eyes of cases (workers of sheep farm) through the slit lamp revealed the presence of larvae of nd barbs . There wnd barbs and 3. FOestrous ovis larvae is associated with mucopurulent conjunctivitis causing eye pathogens. The degree of mucupurulent conjunctivitis depends on the nature of tissue damage caused by the larva. The results of antibiotic disc sensitivity test exhibited the sensitivity to various antibiotics [These results suggested that the infection with ibiotics . 
Among tOestrous ovis is abound, there may be chances of deposition of larvae in the conjunctival sac of the human eyes. The incidences of ocular myasis associated with mucopurulent conjunctivitis described in the present cases may be occupationally contracted. The erosion of epithelial tissues on the eye ball as well as conjunctival cavity may be prone for the infection of microbes.[The ocular myasis due to the infection of first stage larvae of sheep bot fly associated with mucopurulent conjunctivitis in human eyes is a rare occurrence. However if the sheep bot fly microbes. This is The treatment suggested out of the present study for the recovery and eradication of the conjunctivitis may be curative. The occupationally acquired mucopurulent conjunctivitis among the sheep farm workers may be the first of this kind. It has been known that the activities of the said fly are high in sheep farm. The fly repellents (or) fumigation of insecticides are not very effective. It has been suggested that injection of ivermectin, doramectin or moxidectin into the nostril of the sheep may be effective against the larvae and the control of adult flies.Chlamydiae trachomatis which is recently known for causing blindness.[The significant observation in this context is the presence of lindness. It is de"} +{"text": "Anatomic obstruction by granulomas or distortion of the normal anatomy by fibrosis surrounding the reproductive tract structures is the commonest cause of infertility. The diagnosis is usually based on a suggestive history along with evidence of granulomatous infection on a tissue sample. The management depends on the site of obstruction and surgery is usually helpful only in cases with discrete ejaculatory duct obstruction. However, most other patients are candidates for Tuberculosis of the male reproductive tract can result in infertility. The infection can involve any part of the reproductive tract including the testis, epididymis, vas deferens, seminal vesicles, prostate and the ejaculatory ducts. Infertility usually results from the inflammation and scarring that follow the infection, resulting in distortion of the normal anatomy and causing obstruction. Infertility may be the first presentation of genitourinary tuberculosis and patients may have no recollection of any other symptoms. InvolvemThe site of infection and the resultant scarring determine the manifestation of reproductive tract tuberculosis. In infertile men, determination of the site is critical for deciding the line of management and the potential fertility outcome.The epididymis is one of the favored sites for tuberculous infection. In a recent review of 69 patients diagnosed with genital tract tuberculosis, the epididymis was found to be involved in over 78%. In anothUnlike other parts of the reproductive tract that are usually infected secondarily due to a retrograde spread of infection from the bladder, the globus minor of the epididymis tends to get infected primarily through a hematogenous spread of the bacilli. The high vascularity of the globus minor may be one of the reasons for this preferential involvement. Chronic epididymitis is, in fact, one of the typical manifestations of genitourinary tuberculosis.et al., however, believe that this trend of bilateral involvement in now decreasing.[Tubercular epididymitis may manifest as an acute infection, chronic infection or infertility. Acute inflammation is usually a combined epididymo-orchitis with pain, tenderness and swelling. 
This may be the commonest manifestation in up to 40% cases. The othecreasing.The testis is a rare site for tuberculous involvement. One of the reasons for this may be the presence of a bloodtestis barrier that may impede seeding of the testicular parenchyma. Testicular involvement usually occurs contiguous to the epididymal involvement.et al.,[Infertility results from either a direct obstruction by granulomatous masses in the epididymis or vas deferens or from scarring and distortion of normal anatomy. The patients may have no recollection of an acute infection and, in fact, the infection may never have had an acute manifestation. The diagnosis is suspected on finding nodules within the epididymis or nodularity in the vas deferens. This often needs confirmation with a percutaneous fine needle aspiration biopsy or excision of the epididymis. In a retrospective study of 11 cases of confirmed and treated epididymal tuberculosis, Gueye et al., found thet al.,et al.,[An ultrasonographic examination of the scrotum may reveal diffuse hypoechogenicity in epididymal and testicular inflammation. Chung et al., evaluateInvolvement of the vas deferens is considered one of the principal sources of genital tuberculosis. Retrograde spread of infection may occur due to bacterial invasion or reflux of urine through the ejaculatory ducts and into the vas deferens. Clinically, this manifests as multiple nodules in the palpable, scrotal part of the vas and is often bilateral. Infertility results from the direct obstruction caused by these nodules.The seminal vesicles, prostate and the ejaculatory ducts exist in such close association that it is usually not possible to isolate the involvement of one from the other in the causation of infertility. Clinically, this involvement may manifest in one of two manners.Inflammation and scarring may occasionally be restricted to a discrete terminal portion of the ejaculatory ducts near their opening into the prostatic urethra. Obstruction at this level results in dilatation of the proximal ductal system including the vas deferens and seminal vesicles. Seminal vesicle secretions make up the bulk of the ejaculate, contain fructose and alkalinize the ejaculate. Obstruction at the level of the ejaculatory ducts prevents seminal vesicle secretions from reaching the ejaculate. The patients thus present with azoospermia and a low-volume ejaculate or may be aspermic. Further, the ejaculate is acidic and lacks fructose. In such patients, other causes of low-volume azoospermia need to be excluded. These include congenital absence of vas deferens and retrograde ejaculation. The former can be excluded through a good clinical examination while thet al.,[Tuberculous infection of the seminal vesicles or the prostate may be diffuse and result in aspermia without a demonstrable obstruction of the ejaculatory ducts. These patients will have a clinical manifestation similar to that described for a discrete obstruction; however, the proximal ductal system will not be dilated. The disease often results in calcification of the seminal vesicles, prostate and the vas deferens. Fraietta et al., reportedet al.,[Fibrotic, atrophic seminal vesicles with ejaculatory duct obstruction have even been considered a diagnostic feature for tuberculosis. Paick et al., reviewedet al., Pryor anet al.,et al.,[Mycobacterium gastri infection of the seminal vesicles in a diabetic patient that resulted in infertility. 
Semen parameters of this patient improved after antitubercular chemotherapy.Rarely, non-tuberculous mycobacteria may also be responsible for causing seminal vesicle infection and infertility. Indudhara et al., reportedTuberculous involvement of the penile shaft and the glans penis can result in severe disfigurement, ulcers and bulbous enlargement. This mayManagement of tuberculous infertility consists of two parts. The primary aim is to treat the infection and requires antitubercular therapy. Rarely, in patients with an early diagnosis of the disease who have not developed bulky granulomas causing an obstruction, this therapy may result in restoration of fertility.In the majority of cases, the presentation is late and antitubercular therapy does not improve the fertility status. Infertility here usually results from anatomic obstruction and therapy depends on the site and feasibility of reconstruction.While the diffuse, fibrotic type of tuberculous lesions of the ejaculatory duct and seminal vesicles are not amenable to surgery, patients with discrete obstructions at the distal end of the ejaculatory duct may benefit. The surgery in these cases involves resection of the obstructed terminal segment and marsupialization of the dilated portion of the ducts into the urethra.The ejaculatory ducts open as paired structures on either side of the verumontanum. In normal individuals, these are not visible on a cystourethroscopic examination. During a transurethral resection of the ejaculatory ducts (TURED), the location of these openings can be identified by instilling a colored dye into the seminal vesicles through an ultrasound guided needle. This needle is used to first aspirate the seminal vesicle fluid to confirm the presence of sperms and then to instill the dye, immediately prior to the TURED. In the lithotomy position, the assistant places a finger in the patient's rectum and applies pressure over the seminal vesicles, forcing the dye to extrude through the opening. Once the opening is identified, it is resected with a cautery loop using a resectoscope. The resection is often carried deep and laterally into the prostatic tissue to ensure a wide-mouthed opening. An alternative method of dye instillation is through a vasotomy incision made for performing a vasogram. This technique, however, has a greater likelihood of causing inflammation within the vas deferens.An alternative technique uses a combined incision-resection (TUIRED) process to avoid the instillation of dye into the seminal vesicles. During urethroscopy, delicate cuts are made with a cold knife, lateral to the verumontanum at the expected site of the ejaculatory ducts. A finger may be placed in the patient's rectum to apply pressure over the seminal vesicles to help extrude the collected seminal secretions and also help gauge the depth of the incision and prevent rectal injury. Entry into the ejaculatory ducts is confirmed when thick secretions emanate from the incision. The opening is then widened either using the cold knife or resected using the resectoscope and loop. A Foley type catheter is left in the bladder for the first postoperative day. We usually recommend examination of the semen within the first week. Presence of sperms confirms the adequacy of incision and diagnosis.TURED or TUIRED incisions have a propensity to close over a period of time. 
We therefore routinely advise patients with a successful procedure to follow up regularly with semen analysis and proceed to assisted reproduction with an Intrauterine insemination (IUI), In-vitro fertilization (IVF) if the initial few attempts at spontaneous pregnancy fail.Obstruction to the epididymis may be due to the presence of nodules and masses or may be the result of a distorted anatomy. Patients with palpable masses usually require excision of the epididymis, both for diagnosis and therapy of tuberculosis. This precludes surgical reconstruction in most cases.Patients who have obstructive azoospermia with normal volume, fructose-positive ejaculate may have discrete obstruction either within the vas deferens or at the vaso-epididymal junction. These patients should undergo surgical exploration for a possible microsurgical reconstruction.If a nodule is palpable in the vas deferens, the vas is incised proximal and distal to the site of the nodule. Distal patency of the vas is confirmed either through the injection of saline or a formal vasogram is obtained using contrast material. Fluid from the proximal part of the vas is sampled for the presence of sperms. Presence of sperms and a patent distal vas is an indication for a vasovasal anastomosis. However, this is feasible in an extreme minority of cases as the most common lesion is a multiple site obstruction that is not amenable to reconstruction. Occasionally, it is possible to bypass multiple segmental obstructions including one at the vaso-epididymal junction by performing a vasoepididymal anastomosis between the epididymal tubule and the patent distal vas deferens.in-vitro fertilization or Intracytoplasmic sperm injection (ICSI). The site of sperm harvest depends on the site of infection and the degree of destruction. Fortunately, the testis is usually spared in these cases and testicular sperm is almost always available for aspiration. Tuberculous obstruction of the genital tract does not adversely influence the outcome of assisted reproduction techniques using epididymal or testicular sperm. In a comparison of outcomes of ICSI in seven men with tuberculous obstruction versus another 37 with other indications for ICSI, the rates of fertilization and pregnancy were found to be similar.[et al.,[The majority of patients with tuberculous infertility will require assisted reproduction. Since the most common manifestation of tuberculous infertility is azoospermia, the intervention has to be either similar. Kondoh e.[et al., similarlTuberculosis is an uncommon cause of male infertility, however, its diagnosis is important for managing not just infertility but also the systemic ramifications of the disease. In the absence of discrete nodules or granulomas, the diagnosis is based on a suggestive history and bacteriologic examination. Most cases of tuberculous infertility are not amenable to surgical correction. A rare exception is discrete ejaculatory duct obstruction that may respond to transurethral resection. Most other cases will require assisted reproduction which provides results comparable to those in men without this disease."} +{"text": "The presence in the thymus of hemopoietic cells other than thymocytes has been known formany years, but the extent of the hemopoietic activity of the thymus and the possiblefunctional implications have only recently begun to receive much attention. 
This reviewsummarizes the literature in this field, especially in the light of current cytokine andthymic-factor knowledge, and includes clinical relevance where possible."} +{"text": "The incidences of malignant melanoma recorded by 59 population-based cancer registries were investigated to determine the effects of racial and skin-colour differences. White populations exhibited a wide range of melanoma incidences and females commonly, though not invariably, had a higher incidence than males. Non-white populations experienced in general a much lower incidence of melanoma although there was some overlap of white and non-white rates. No predominant sex difference emerged among non-whites. Populations of African descent were found to have a higher incidence than those of Asiatic origin, but it was concluded that this was due largely to the high frequency of tumours among Africans on the sole of the foot. A clear negative correlation between degree of skin pigmentation and melanoma incidence emerged for the exposed body sites. These data provide strong support for the hypotheses that UV radiation is a major cause of malignant melanoma and that melanin pigmentation protects against it. Further research is required to elucidate the aetiology of melanoma of the sole of the foot."} +{"text": "This review summarizes bladder cancer studies dealing with both Thus, array technologies represent high-throughput means to identify molecular targets associated with these biological and clinical phenotypes by comparing samples representative of distinct disease states.AQMicroarrays constitute a group of technologies characterised by the common availability of measuring hundreds or thousands of items, including DNA sequences, RNA transcripts or proteins, within a single experiment using miniaturised devices. The appropriate experimental design and the use of well-characterised Hybridisation-based methods and the microarray format constitute together an extremely versatile platform provide for both static and dynamic views of DNA structure, as well as RNA and protein expression patterns in cultured cancer cells and tumour tissues. The most widespread use of this technology to date has been the analysis of gene expression are often found mixed with genes of known function.This represents the traditional approach of assigning a functional role to a gene when overexpressed, and observing the effect(s) of its expression on known pathways or processes. Such an approach has been especially useful in identifying the downstream targets of transcription factors. The genes identified as either up- or downregulated in these experiments are likely to play important roles in the signalling network in which the gene under investigation participates.This is one of the most promising and powerful applications of expression profiling with expression microarrays. The integration of gene expression patterns is providing complementary tools to histopathological criteria for classifying tumours into biologically meaningful and clinically useful categories. In addition, expression profiling of well-annotated tumour specimens has the potential of identifying target genes for novel diagnostic, prognostic or therapeutic approaches. High-throughput transcriptome analysis will become a means at improving cancer treatment by an early and accurate diagnosis of tumour subtype and determining the most effective therapeutic intervention.gene and pathway discovery in bladder cancer. 
Tumour cell growth inhibition mediated by genistein was produced to the susceptible bladder tumour line TCCSUP. Expression profiling was then analysed at various time points, using cDNA chips. Induction of genes involved in cell growth and cell cycle, such as EGR-1, was observed, and these events were related to the proliferation and differentiation effects of treatment or for assigning tumours to known classes (class prediction). Reports using different microarray platforms and analysing specific cancer subgroups are finding consensus on subclasses and signatures of the disease with predictive utility.in vitro and in vivo models are warranted to functionally characterise the pathways by which many of the targets are already identified to be involved in tumorigenesis or bladder cancer progression. The utility of the application of microarrays has not yet estimated many clinical issues. Identification of Ta-T1-IS subtypes within the superficial disease and patients more likely to develop positive lymph nodes or distant metastases are critical subclassification questions to be answered.Overall, the use of microarray technologies for the study of bladder cancer remains a new research field. The results reported so far represent preliminary data that need to be contrasted by different groups using different series of patients. Creation of international tumour banks represents an option that might facilitate interactive research among different laboratories. Further efforts using An area that will provide critical targets for clinical intervention is that of pharmacogenomics. Studies evaluating biological markers to predict the drug efficacy or the relative risk of adverse effects in individual patients are still needed for many tumour types. In the near future, gene profiling will provide an effective means of predicting the response against specific therapeutic regimes based on the molecular signatures of the tumours associated with their chemosensitivity or resistance to anticancer drugs. Moreover, the discovery of molecular pathways altered in cancer progression, as well as the identification of molecule-susceptible targets, would lead to the development of novel alternative therapies. The combined information revealed by these studies allows also identification of new molecular determinants involved in the progression of the disease with clinical diagnostic or predictive utility. The classical tumour marker concept of an individual biological determinant will be substituted by the use of cluster of genes as predictive classifiers. These genetic signatures will allow a better chance of cure by opting for the most appropriate treatment, while maintaining the quality of life."} +{"text": "Objective: Group B streptococcus is an important cause of neonatal sepsis. Prevention is possible by intrapartum screening for maternal GBS carriership and antimicrobial treatment of colonized women with risk factors during labor. The conflicting results of diagnostic performance are reported both for the newly developed rapid GBS antigen tests and Gram's stain.Methods: The value of Gram's stain in GBS screening was investigated prospectively in 1,020 women. Intrapartum Gram's stains of the cervix from these women and of the introitus from 510 of them were compared with cultures of the cervix, introitus, and anorectum in a semiquantitative way.Results: The sensitivities of the cervical and introital Gram's stains were 25% and 31%, respectively, and the specificities 99% and 98%, respectively. 
Higher sensitivities were found in heavily colonized parturients. No significant influence of rupture of the membranes was detected. There was a poor correlation between the number of gram-positive cocci in the Gram's stain and the growth density.Conclusions: We do not recommend the routine use of the Gram's stain for intrapartum GBS detection because of both the limited sensitivity and positive predictive value."} +{"text": "The system used by the National Nosocomial Infection Surveillance (NNIS) program to measure risk of surgical site infection uses a score of 3 on the American Society of Anesthesiologists (ASA)-physical status scale as a measure of underlying illness. The chronic disease score measures health status as a function of age, sex, and 29 chronic diseases, inferred from dispensing of prescription drugs. We studied the relationship between the chronic disease score and surgical site infection and whether the score can supplement the NNIS risk index. In a retrospective comparison of 191 patients with surgical site infection and 378 uninfected controls, the chronic disease score and ASA score were highly correlated. The chronic disease score improved prediction of infection by the NNIS risk index and augmented the ASA score for risk adjustment."} +{"text": "Single Nucleotide Polymorphisms, among other type of sequence variants, constitute key elements in genetic epidemiology and pharmacogenomics. While sequence data about genetic variation is found at databases such as dbSNP, clues about the functional and phenotypic consequences of the variations are generally found in biomedical literature. The identification of the relevant documents and the extraction of the information from them are hampered by the large size of literature databases and the lack of widely accepted standard notation for biomedical entities. Thus, automatic systems for the identification of citations of allelic variants of genes in biomedical texts are required.. Here we describe the development of a new version of OSIRIS which incorporates a new entity recognition module and is built on top of a local mirror of the MEDLINE collection and HgenetInfoDB: a database that collects data on human gene sequence variations. The new entity recognition module is based on a pattern-based search algorithm for the identification of variation terms in the texts and their mapping to dbSNP identifiers. The performance of OSIRISv1.2 was evaluated on a manually annotated corpus, resulting in 99% precision, 82% recall, and an F-score of 0.89. As an example, the application of the system for collecting literature citations for the allelic variants of genes related to the diseases intracranial aneurysm and breast cancer is presented.Our group has previously reported the development of OSIRIS, a system aimed at the retrieval of literature about allelic variants of genes OSIRISv1.2 can be used to link literature references to dbSNP database entries with high accuracy, and therefore is suitable for collecting current knowledge on gene sequence variations and supporting the functional annotation of variation databases. The application of OSIRISv1.2 in combination with controlled vocabularies like MeSH provides a way to identify associations of biomedical interest, such as those that relate SNPs with diseases. In the last years the focus of biological research has shifted from individual genes and proteins towards the study of entire biological systems. 
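The entity recognition step summarised above is pattern-based. As a purely illustrative sketch, the snippet below shows how variation mentions might be matched with regular expressions and normalised to dbSNP identifiers; the patterns, the lookup table and the example sentence are our own assumptions and not the actual OSIRIS rules or data.

```python
# Illustrative sketch only: the patterns, lookup table and example sentence are
# assumptions for demonstration, not the OSIRIS implementation.
import re

PATTERNS = [
    re.compile(r"\brs\d+\b"),            # dbSNP identifiers, e.g. rs1042522
    re.compile(r"\b[ACGT]\d+[ACGT]\b"),  # nucleotide substitutions, e.g. G1691A
    re.compile(r"\b(?:Ala|Arg|Asn|Asp|Cys|Gln|Glu|Gly|His|Ile|Leu|Lys|Met|Phe|Pro|Ser|Thr|Trp|Tyr|Val)\d+"
               r"(?:Ala|Arg|Asn|Asp|Cys|Gln|Glu|Gly|His|Ile|Leu|Lys|Met|Phe|Pro|Ser|Thr|Trp|Tyr|Val)\b"),
]

# Hypothetical normalisation table: (gene symbol, variant mention) -> dbSNP id.
LOOKUP = {("TP53", "Arg72Pro"): "rs1042522", ("F5", "G1691A"): "rs6025"}

def find_variants(sentence, genes):
    """Return (gene, mention, dbSNP id or None) tuples for variant-like strings."""
    hits = []
    for pattern in PATTERNS:
        for match in pattern.finditer(sentence):
            mention = match.group(0)
            if mention.startswith("rs"):       # already a dbSNP identifier
                hits.append((None, mention, mention))
                continue
            for gene in genes:                 # normalise using the gene context
                hits.append((gene, mention, LOOKUP.get((gene, mention))))
    return hits

print(find_variants("The TP53 Arg72Pro (rs1042522) polymorphism was genotyped.", ["TP53"]))

# The reported evaluation figures are internally consistent: F = 2PR/(P + R).
precision, recall = 0.99, 0.82
print(f"{2 * precision * recall / (precision + recall):.3f}")  # 0.897, reported as 0.89
```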
The advent of high-throughput experimentation has led to the generation of large data sets, which is reflected in the constant growth of dedicated repositories such as sequence databases and literature collections. For instance, MEDLINE indexes more than 17 million articles in the biomedical sciences by April 2007), and it's increasing at a rate of more than 10% each year AND \"Polymorphism, Single Nucleotide\" [MeSH] AND \"Humans\" [MeSH] AND hasabstract [text] AND English [lang] AND (\"2004/01/01\" [PDAT] : \"2005/01/01\" [PDAT]) AND \"Chemicals and Drugs Category\" [MeSH]This set is focused to a field of interest to our group and limited to years 2004 and 2005. As a first step in the annotation process, the set of abstracts was carefully reviewed to determine if it contained explicit mentions of sequence variants in the text that allow their mapping to a dbSNP identifier: the position of the variation in the sequence, one or both alleles, and the gene to which the variation is mapped to. Only 311 abstracts included this information and were retained in the corpus. Thus, these abstracts contained mentions of variations that could potentially be normalized to dbSNP identifiers . The next step consisted in the manual annotation of gene and sequence variant occurrences with their database identifiers. Since the process of manual annotation is very laborious and time consuming, a subset consisting of the first 105 abstracts (in terms of date of publication) was selected to perform the full annotation. The quality of the annotation was prioritised in front of the size of the corpus. Nevertheless, the size of the corpus (n = 105) is similar to other corpora used in other evaluations carried out in the field .The annotation of the abstracts consisted in manual identification of mentions of genes and variations and their disambiguation to database identifiers. The NCBI Gene database was used in the case of genes and HgenetInfoDB in the case of variations. The text inspection was performed at the level of title and abstract and the resulting annotations were recorded in the corpus as an XML file using the Vex editor in the fThe search queries used to select the set of articles related to the diseases of interest are detailed below. Results presented in this article correspond to searches conducted on MEDLINE database updated to August 2007.For intracranial aneurysm and subarachnoid haemorrhage: OR \"intracranial aneurysm\" [MeSH Terms] OR cerebral aneurysm [Text Word](\"subarachnoid hemorrhage\" [TIAB] NOT MEDLINE [SB]) OR \" subarachnoid hemorrhage\" [MeSH Terms] OR subarachnoid hemorrhage [Text Word]For breast cancer:(\"breast neoplasms\" [TIAB] NOT Medline [SB]) OR \"breast neoplasms\" [MeSH Terms] OR Breast Cancer [Text Word]The search query used for compiling the corpus was:\"Pathological Conditions, Signs and Symptoms\" [MeSH] AND \"Polymorphism, Single Nucleotide\" [MeSH] AND \"Humans\" [MeSH] AND hasabstract [text] AND English [lang] AND (\"2004/01/01\" [PDAT] : \"2005/01/01\" [PDAT]) AND \"Chemicals and Drugs Category\" [MeSH]LIF participated in the design of the system, implementation, evaluation and analysis of the results and prepared the manuscript document. HD and MHA contributed to the design of the system, and HD contributed to the database design and implementation. FS participated in the design and coordination of the work. 
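The MEDLINE queries quoted above can be run programmatically. The sketch below assumes Biopython's Entrez utilities purely for illustration (the authors do not state which client they used), and the contact e-mail address is a placeholder.

```python
# Sketch: executing one of the PubMed queries quoted above with Biopython's
# Entrez module. The choice of Biopython and the e-mail address are assumptions.
from Bio import Entrez

Entrez.email = "your.name@example.org"  # placeholder; NCBI requires a contact address

query = ('("subarachnoid hemorrhage" [TIAB] NOT MEDLINE [SB]) '
         'OR "subarachnoid hemorrhage" [MeSH Terms] '
         'OR subarachnoid hemorrhage [Text Word]')

handle = Entrez.esearch(db="pubmed", term=query, retmax=100)
record = Entrez.read(handle)
handle.close()
print(record["Count"], record["IdList"][:5])  # total hit count and the first few PMIDs
```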
All authors contributed to the manuscript, and read and approved its final version."} +{"text": "Increased amounts of intestinal endotoxin are absorbed in obstructive jaundice. The precise mechanism is not known but the increased absorption may arise from alterations in the luminal contents, in the intestinal flora, in the gut wall or in interactions between all three. To examine the effects of the intestinal flora we have compared the morphological changes in the small intestine in obstructive jaundice in germ free and conventional rats while the effects of bile constituents have been examined by addition of bile constituents to the diet of bile duct ligated rats. Changes in the intestine were examined, histologically, by enzyme histochemistry, and by transmission and scanning electron microscopy. The results showed no differences in response between germ free and conventional rats. Feeding of diets containing bile salts exacerbated the lesion. Feeding of diets containing cholesterol, however, reduced the degree of intestinal changes produced by cholestasis and completely antagonised the increase in damage caused by feeding of bile salts."} +{"text": "Three recent outbreaks of locally acquired malaria in densely populated areas of the United States demonstrate the continued risk for mosquitoborne transmission of this disease. Increased global travel, immigration, and the presence of competent anopheline vectors throughout the continental United States contribute to the ongoing threat of malaria transmission. The likelihood of mosquitoborne transmission in the United States is dependent on the interactions between the human host, anopheline vector, malaria parasite, and environmental conditions. Recent changes in the epidemiology of locally acquired malaria and possible factors contributing to these changes are discussed."} +{"text": "Results using the same drug in phase II studies of treatment in ovarian cancer vary widely. An analysis of five phase II studies with a total of 93 patients was carried out to determine whether factors other than the efficacy of the drug affect response. The drugs for the phase II studies were chosen on the basis of in vitro activity or previous activity in humans. Univariate analysis showed that several factors were of significance in predicting response. The most significant was interval from the end of previous treatment to entry into a phase II study. Others were the original presenting stage of the patient, the second line treatment given and the best previous response to therapy. In multivariate analysis, however, only two factors were shown to be of importance which were interval and the FIGO stage of the patient. Using these two variables the discriminant analysis predicted 89% of those who did not respond and 75% of those who did, with an overall correct prediction of 85%. The importance of interval is emphasised by the observation that the response rate for those patients who progressed on treatment or who relapsed within 3-6 months of primary therapy had a response rate of less than 10%. Future phase II studies should probably exclude patients in this category, since the chance of their responding is very low."} +{"text": "Saccharomyces cerevisiae genome. The collection was produced by the transatlantic yeastgene deletion project, a collaboration involving researchers in the USA, Canada andEurope. 
The European effort was part of EUROFAN where some of the strains could feed into various functional analysis nodesdealing with specific areas of cell biology. With approximately 40% of human genesinvolved in heritable disease having a homologue in yeast and with the use of yeast invarious drug discovery strategies, not least due to the dramatic increase in fungalinfections, these strains will be valuable in trans-genomic studies and in specialised intereststudies in individual laboratories. A detailed analysis of the project by the consortium is inpreparation, here we discuss the yeast strains, reported findings and approaches to usingthis resource.In the year 2001 a collection of yeast strains will be completed that are deleted in the 6000open reading frames selected as putative genes by the initial bioinformatic analysis of the"} +{"text": "High energy tibial plateau fractures along with calcaneal fractures individually produce several challenges for the orthopaedic surgeon. The principles of bony reconstruction include anatomic reduction and rigid internal fixation of intra-articular fractures and accurate restoration of the coronal, sagittal and transverse mechanical axes. Due to the tenuous nature of the soft tissue and devitalisation of the comminuted fragments with open reduction, external fixation of type 6 tibial plateau fractures is recommended. We report a case with ipsilateral high energy tibial plateau and calcaneal fractures both of which were managed with an ilizarov ring fixator.A 55-year-old Kashmiri female presented to our department with an ipsilateral fracture of the tibial plateau and the calcaneum. Both were closed reduced and stabilized with an ilizarov ring fixator.The circular wire fixator provides a viable method to manage such fractures especially if they are co existent. This is especially true in situations where the soft tissue is compromised. Fractures of the tibial plateau and the calcaneum sometimes occur together in the same patient because of the common causative axial load mechanism. Fractures of the calcaneus account for approximately 60% of tarsal injuries and usually are the result of a fall from height. It is necessary for other injuries of the appendicular skeleton be ruled out in all patients presenting with either a calcaneal or a tibial plateau fracture. The goal of the treatment in either fracture is the restoration of the articular congruity and axial alignment as also the achievement of joint stability and functional motion. This has to be done while allowing early range of motion and minimising wound complications. Closed manipulation restores the overall shape of the calcaneus, with emphasis on restoring the Bohler angle, obtaining facet congruency and re-establishing the normal heel width. This method is useful in patients who are not candidates for open treatment .High energy tibial fractures are often associated with a high incidence of severe complications with traditional internal fixation .We describe the management of a patient with high energy tibial and calcaneal fracture of the same limb, with an ilizarov fixator. The method provided manifest advantages in the management of the two high energy ipsilateral fractures sustained by our patient.A 55-year-old female presented to the out patient department of our hospital with a history of a fall from height. The patient had experienced immediate pain and swelling in the knee and ankle area of the limb. 
On removing the splint used to transport this patient and examining the patient, swelling of the knee, upper tibia and heel were noticed. Palpation produced crepitus of the upper tibia. Significant tenderness was elicited on palpation at the knee as well as the heel.Radiographs of the knee in the anterioposterior and lateral plane showed a type VI Schatzker fracture of the tibial plateau. Radiographic examination of the calcaneum showed a tongue type fracture of the calcaneum with complete loss of the Bohler angle & 2.The patient's limb was placed in a well padded splint and observed for compartment syndrome. Over a period of three days the patient developed significant ecchymosis around the upper tibia.In view of the compromised soft tissue envelope, age of the patient and complexity of the trauma, it was decided to manage both the fractures in a ring fixator .The patient was anaesthetised and placed on a traction table. The proximal fragments were approximated with manual pressure under image intensifier control. Ilizarov wires were placed to obtain compression in the coronal plane. A cancellous transverse lag screw was placed to obtain further stability. An additional ring was placed and affixed to the bone, below the metaphysiodiaphysial comminution. This ring was affixed to the diaphyseal bone with two Schanz pins placed at right angles to each other. The foot was immobilised with a Schanz pin attached to this fixator. This Schanz pin was placed in the first metatarsal & 5.A Schanz pin was placed into the tongue shaped calcaneal fragment, starting posteriorly. This pin was attached to the second ring by means of a plate and two threaded rods. By gradual distraction and lateral pressure under image intensifier control, the Bohler angle was restored. The Schanz pin was kept in this position in the distraction mode. The patient was allowed range of motion exercises of the knee from the first post operative day. At 10 weeks the fixator was removed and patient allowed partial weight bearing crutch walking. 12 weeks post fixation full weight bearing was allowed & 7. At High energy fractures of the tibial plateau and the calcaneum challenge the orthopaedic surgeon due to the difficulties in restoring the complex bony architecture and the tenuous nature of the soft tissues.Cotton and Wilson in 1916 wrote that the man who breaks his heel bone is done . The manClosed manipulation of the calcaneum has been recommended . Paley eLow energy tibial plateau fractures have excellent clinical outcomes with few complications with contemporary internal fixation, however in high energy fractures; these methods produce severe complications . The metThe principles of bony reconstruction, particularly in weight bearing joints, include anatomic reduction and rigid internal fixation of intra-articular fractures and accurate restoration of coronal, sagittal and transverse mechanical axes. . High enThe advantages of using hybrid fixation with a circular small wire external fixator are numerous, with minimal additional devitalisation, capture of very small metaphyseal and sub chondral fragments with lag effect ,10. HoweIn our case the fixator was able to manage both fractures simultaneously. The fixator permitted early range of motion, adjustability and observation. The Ilizarov fixator can be an important methodology in the management of such coexisting injuries.The circular wire fixator provides a viable method to manage such fractures especially if they are co existent. 
This is especially true in case the soft tissue envelop is compromised."} +{"text": "Earlier studies have demonstrated an unexplained depletion of the epidermal growth factor receptor (EGFR) protein expression in prostatic cancer. We now attribute this phenomenon to the presence of a variant EGFR (EGFRvIII) that is highly expressed in malignant prostatic neoplasms. In a retrospective study, normal, benign hyperplastic and malignant prostatic tissues were examined at the mRNA and protein levels for the presence of this mutant receptor. The results demonstrated that whilst EGFRvIII was not present in normal prostatic glands, the level of expression of this variant protein increased progressively with the gradual transformation of the tissues to the malignant phenotype. The selective association of high EGFRvIII levels with the cancer phenotype underlines the role that this mutant receptor may maintain in the initiation and progression of malignant prostatic growth, and opens the way for new approaches in the management of this disease including gene therapy. \u00a9 2000 Cancer Research Campaign"} +{"text": "Infant mortality is a major public health problem in the State of Michigan and the United States. The primary adverse reproductive outcome underlying infant mortality is low birthweight. Visualizing and exploring the spatial patterns of low birthweight and infant mortality rates and standardized incidence and mortality ratios is important for generating mechanistic hypotheses, targeting high-risk neighborhoods for monitoring and implementing maternal and child health intervention and prevention programs and evaluating the need for health care services. This study investigates the spatial patterns of low birthweight and infant mortality in the State of Michigan using automated zone matching (AZM) methodology and minimum case and population threshold recommendations provided by the National Center for Health Statistics and the US Census Bureau to calculate stable rates and standardized incidence and mortality ratios at the Zip Code (n = 896) level. The results from this analysis are validated using SaTScan. Vital statistics birth and linked infant death records obtained from the Michigan Department of Community Health and aggregated for the years 2004 to 2006 are utilized.For a majority of Zip Codes the relative standard errors (RSEs) of rates calculated prior to AZM were greater than 20%. Spurious results were the result of too few case and birth counts. Applying AZM with a target population of 25 cases and minimum threshold of 20 cases resulted in the reconstruction of zones with at least 50 births and RSEs of rates 20\u201322% and below respectively, demonstrating the stability reliability of these new estimates. Other AZM parameters included homogeneity constraints on maternal race and maximum shape compactness of zones to minimize potential confounding. AZM identified areas with elevated low birthweight and infant mortality rates and standardized incidence and mortality ratios. Most but not all of these areas were also detected by SaTScan.Understanding the spatial patterns of low birthweight and infant deaths in Michigan was an important first step in conducting a geographic evaluation of the State's reported high infant mortality rates. AZM proved to be a useful tool for visualizing and exploring the spatial patterns of low birthweight and infant deaths for public health surveillance. Future research should also consider AZM as a tool for health services research. 
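The case-count thresholds quoted above follow from the usual Poisson approximation for the relative standard error (RSE) of a rate, which is roughly 1/sqrt(number of cases). A short check, with our own function name rather than anything from the study, is:

```python
# Minimal check of the rate-stability thresholds discussed above, assuming the
# Poisson approximation RSE(rate) ~ 1/sqrt(case count); names are ours, not the study's.
from math import sqrt

def relative_standard_error_pct(cases: int) -> float:
    """Approximate RSE (as a percentage) of a rate based on `cases` events."""
    return 100.0 / sqrt(cases)

for cases in (10, 20, 25, 50):
    print(cases, round(relative_standard_error_pct(cases), 1))
# 10 -> 31.6, 20 -> 22.4, 25 -> 20.0, 50 -> 14.1
```

On this approximation, the 25-case target and 20-case minimum correspond to RSEs of roughly 20% and 22%, matching the stability criterion described above.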
Infant mortality refers to infants born alive who die within their first year of life. In 2006, Michigan's infant mortality rate was 7.6 infant deaths per 1,000 live births, with African American infants at substantially higher risk of death (17.7) than white infants (5.2). In the SaTScan validation, the test statistic for each scanning window was the likelihood ratio (c/E[c])^c ((C-c)/(C-E[c]))^(C-c) I, where c was the observed number of cases within the window, E[c] was the adjusted expected number of cases within the window under the null-hypothesis, C was the total number of cases, C-E[c] was the expected number of cases outside the window, and I was an indicator function. When SaTScan scans for clusters with high rates, I was equal to 1 when the window had more cases than expected under the null-hypothesis, and 0 otherwise. The authors declare that they have no competing interests. SCG originated the study and participated in the planning of the study, analysis of the data and writing of the brief. HE contributed to the preparation of the data, analysis of the data, and writing of the brief."} +{"text": "In recent years proteomic techniques have started to become very useful tools in a variety of model systems of developmental biology. Applications cover many different aspects of development, including the characterization of changes in the proteome during early embryonic stages. During early animal development the embryo becomes patterned through the temporally and spatially controlled activation of distinct sets of genes. Patterning information is then translated, from gastrulation onwards, into regional specific morphogenetic cell and tissue movements that give the embryo its characteristic shape. On the molecular level, patterning is the outcome of intercellular communication via signaling molecules and the local activation or repression of transcription factors. Genetic approaches have been used very successfully to elucidate the processes behind these events. Morphogenetic movements, on the other hand, have to be orchestrated through regional changes in the mechanical properties of cells. The molecular mechanisms that govern these changes have remained much more elusive, at least in part due to the fact that they are more under translational/posttranslational control than patterning events. However, recent studies indicate that proteomic approaches can provide the means to finally unravel the mechanisms that link patterning to the generation of embryonic form. Intensifying research in this direction will require close collaboration between proteome scientists and developmental researchers. It is with this aim in mind that we first give an outline of the classical questions of patterning and morphogenesis. We then summarize the proteomic approaches that have been applied in developmental model systems and describe the pioneering studies that have been done to study morphogenesis. Finally we discuss current and future strategies that will allow characterizing the changes in the embryonic proteome and ultimately lead to a deeper understanding of the cellular mechanisms that govern the generation of embryonic form. During the early period of experimental embryology, fundamental concepts were formulated despite the lack of any knowledge about the molecular basis of development. One of the most influential theories was certainly the proposal of morphogens as inducers of patterning. It appears now that in several species, including the \"classic\" model systems Drosophila melanogaster and Xenopus laevis, a reasonably \"complete\" list of genes that are involved in early pattern formation has emerged. 
This undertaking has become even more feasible in recent years, since the genomes of several model animals have been completely sequenced, including fruit fly, nematode and sea urchin, and other genomes are close to completion and/or large cDNA and EST databases have been built -19. TherTo study morphogenesis essentially means to ask how cells and tissues translate the positional information they have received into regional specific cell behaviors to give the embryo its defined shape. In animals, the first global cell rearrangements occur as gastrulation is initiated. Gastrulation is defined as the internalization of the prospective endoderm and mesoderm into the embryo and, while different species use different means and mechanisms to achieve internalization, the result is always the same: the endoderm forms the inner layer, the ectoderm remains on the outside and the mesoderm is located in between. After gastrulation, the three germ layers are in close apposition to each other; but they are clearly separated and mixing between the different layers rarely occurs, indicating that these cells can distinguish \"similar\" cells from \"different\" cells. This ability was demonstrated in dissociation and reaggregation experiments. Cells from early amphibian embryos can easily be separated by removal of calcium from the culture medium. Mixing of cells from the different germ layers and restoring of the calcium levels will lead first to a ball of mixed cells, but then cells will not only reaggregate according to their germ layer affiliation, but also assume the same position as in the embryo: ectoderm on the outside, mesoderm next and endoderm on the inside. These classical experiments led to the proposal of differential affinity between germ layer cells and later to the concept of differential adhesion to explain the ability of cells of different tissues to remain separate from each other -29. One Likewise, our understanding of the molecular mechanisms that drive the cellular movements and cell shape changes during embryogenesis is still very fragmented. In several species gastrulation movements have been characterized in great detail and employed to develop general models of coordinated cell movements and shape generation -41. ThesProteomic approaches have already strongly contributed to many research areas in the medical sciences and in biology. Such areas include for example the study of cellular organelles -60 and pDuring embryogenesis, discrete regions in the embryo display specific morphogenetic activities. The sum of these activities produces the final shape of the embryo. Regionalization of morphogenetic behavior poses an important challenge for the application of proteomic tools, mainly because sufficient amounts of a given tissue -that undergoes a specific movement- have to be obtained for subsequent analysis. Tissue formation and thereby the related cell behavior is under the control of upstream patterning signals and, to obtain large amounts of starting material, the manipulation of such signals has been employed to obtain mutant embryos that are either deficient or enriched in a certain tissue.In Drosophila gastrulation begins with the internalization of mesodermal precursor cells on the ventral side of the embryo. This movement is initiated by shape changes of ventral cells that lead to the formation of a furrow. 
To study changes in the protein composition of ventral cells during this period of development the group of Jonathan Minden compared mutant embryos which are strongly ventralized to embryos where the ventral cells have adopted a lateral fate . By diffA second study compared gastrulation stage zebrafish embryos that consisted mainly of cells of ectodermal respectively mesendodermal character . In zebrAnother possibility is to interfere specifically with signals that have been found to be involved in the control of tissue movements and to identify thereby downstream signaling events. This approach has been used to study Fyn/Yes dependent signal transduction during zebrafish axis formation . During The last decades have provided a large pool of mutants and antisense tools to manipulate patterning and morphogenetic events in the embryo. Combined with the continuous improvements of proteomic techniques and the increasing availability of proteomic facilities this will certainly lead to an increase in comparative proteome studies of mutant or knock down embryos. In addition, comparative studies on isolated tissues of an embryo might also be possible. Most notably, amphibian embryos like Xenopus laevis are known for the ease with which relatively high amounts of different tissues can be manually isolated. Manual isolation has been employed in many different contexts, e.g. to construct tissue specific libraries, to compare expression of marker genes or for antibody-based comparison of protein levels -105 and The examples cited above illustrate how comparative studies can provide novel candidates for proteins that participate in the regulation and execution of morphogenetic movements. 2D-gel-based approaches were used most commonly for good reasons, but it might be beneficial to additionally use peptide-based approaches to detect changes in protein concentration that are underrepresented in 2D-gels -82. Thisin vivo imaging, to decipher how these signals are coordinated to produce the forces that shape a complete organism from an undifferentiated ball of cells.Within just a few years proteomic techniques have been used in a variety of model organisms of developmental biology and in applications ranging from the development of species specific protein databases down to the isolation and identification of single proteins of interest. These pioneering studies demonstrate the usefulness of proteomic approaches and with increasing availability of proteomic facilities and technical expertise, such approaches offer exciting possibilities in many areas of embryology. These new possibilities -we believe- will strongly influence the study of morphogenesis. So far our understanding of the driving forces behind morphogenetic movements is largely based on detailed descriptions of changes in cell shape and protrusive activity as well as explantation/ablation techniques to elucidate their relative contribution to the forces that form the embryo. However, cell biology has already made incredible progress in the characterization of the protein components that determine the mechanical attributes of a cell, its internal architecture and its external interactions. Drawing from the immense wealth of cell biological studies, morphogenetic studies have also been extended to the investigation of some of the molecular regulators of cell mechanics. Proteomics provide now the means to systematically study the regulatory events that link patterning signals to the structural changes that determine tissue specific cell behavior. 
In the short term this will provide new candidate regulators of morphogenesis. In the longer term, the continuing improvements in proteomic techniques and data analysis and presentation will provide a more comprehensive picture of time and tissue specific protein composition and posttranslational modifications. This will allow for a systematic analysis of the active signaling networks in a given tissue at a given time point and form the necessary basis for a multidisciplinary approach that includes e.g. biomechanics and The authors declare that they have no competing interests.WER and CAM co-wrote the manuscript and approved the final version."} +{"text": "The operative methods of total uterine mucosal ablation (TUMA) as well as new abdominal and vaginal hysterectomy techniques are described. Classic intrafascial serrated edged macro-morcellator (SEMM) hysterectomy (CISH) by pelviscopy or laparotomy and intrafascial vaginal hysterectomy (IVH) are techniques that allow the nerve and the blood supply of the pelvic floor to remain intact, mainly because only the ascending branches of the uterine arteries are ligated. TUMA avoids the removal of the uterus altogether and is reserved for hypermenorrhea or menorrhagia without major enlargement of the uterus. Both CISH and IVH reduce the physical trauma of hysterectomy considerably and have the advantages of the supravaginal technique. Prophylaxis against cervical stump carcinoma is assured by coring out the cervix with the SEMM. In patients in whom both procedures are possible, IVH is preferred because it combines the minimal trauma and short operative time of vaginal hysterectomy. The decreased diameter of the cervix after coring out greatly simplifies this type of vaginal hysterectomy, the technique that has always been favored because of its short operative times and minimal trauma."} +{"text": "This special supplement to the Archives of Oral Biology reflects a collaborative initiative within the area of oro-facial growth and development. A Workshop was held in Liverpool in November 2007 to establish an International Collaborating Centre in Oro-facial Genetics and Development. This issue contains papers from that Workshop refereed by colleagues from the wider dental community.In bringing this group of colleagues together we have been inspired by the statue in Lisbon which ceAs a basis for that future development this issue contains a series of papers which are either critical perspectives on specific topics or which report new studies in the area.The initial group of papers sets the scene. The first paper considers the multifactorial, multilevel, multidimensional and time related interactions which occur during normal and abnormal dental development. In the second paper a particular aspect of the multifactorial effects, the role of human sex chromosomes in dental development is reviewed while from the multilevel effects the third paper focuses on epithelial histogenesis during tooth development. This initial overview section closes with a consideration of multidimensional and time dependant effects by developing a current, clinically relevant update of the morphogenetic fields concept, incorporating a synthesis of the clone theory and the odontogenic homeobox code.The next section of papers considers the multifactorial effects on the initiation and morphogenesis phases of dental development. It begins with a critical evaluation of studies involving twins to determine genetic and environmental influences on human dental development. 
There follows a series of new studies on variation and anomalies of human tooth number, size and shape. These begin with family studies, first on the aetiology of hypodontia and then on the tooth dimensions in a family with hypodontia associated with an identified PAX 9 mutation. The contrasting tooth dimensions in patients with hypodontia and supernumerary teeth compared with controls are examined in the next study. There follows a paper reporting the detailed investigation of multiple crown size variables of upper incisor teeth in patients with supernumerary teeth compared with controls. This group of new studies concludes with the examination of variability and patterning of permanent tooth size in four human ethnic groups.The third group of papers consists of three new studies on the differentiation and biomineralisation phases of human dental development. The first of these examines enamel defects in extracted and exfoliated teeth from patients with Amelogenisis Imperfecta using the extended Enamel Defect Index and image analysis. This is followed by a study of the patterns of enamel hypoplastic defects in two archaeological populations and the section concludes with an investigation of demineralisation rates in pre-natal, neo-natal and post-natal enamel.The fourth section considers first the crucial role of critical evaluation of measurement studies of the phenotype resulting from the dental developmental process. It concludes with a consideration of innovative 3D methodologies to allow more detailed definition of dental phenotypes in order to enhance discrimination between individuals and to provide further insight into the aetiology of dental anomalies in the future.I am grateful to several people for their input to the preparation of this issue. Rebecca Griffin has been highly efficient and supportive as Assistant Guest Editor. Karen Barnes provided substantial and valuable administrative support in the organisation of the International Workshop. The Editor-in-Chief of the Archives of Oral Biology, Rex Holland, has given firm and clear guidance on the editorial process and has carefully and appropriately overseen the refereeing process. On the production side Nick Dunwell, Account Executive for Elsevier Pharma Solutions has been helpful and encouraging.This issue both makes available papers from the International Workshop to the wider research community and acts as a basis for future collaborative work. It is made possible by generous financial support from the Wellcome Trust, for those papers arising from the Wellcome Programme on Biomineralisation, from Boots plc and from the University of Liverpool. The Workshop was funded by donations from Boots plc, the Universities of Adelaide and Liverpool and Scantron Ltd. We are most grateful for this generous support."} +{"text": "The dual properties of genetic instability and clonal expansion allow the development of a tumour to occur in a microevolutionary fashion. A broad range of pressures are exerted upon a tumour during neoplastic development. Such pressures are responsible for the selection of adaptations which provide a growth or survival advantage to the tumour. The nature of such selective pressures is implied in the phenotype of tumours that have undergone selection. We have reviewed a range of immunologically relevant adaptations that are frequently exhibited by common tumours. Many of these have the potential to function as mechanisms of immune response evasion by the tumour. 
Thus, such adaptations provide evidence for both the existence of immune surveillance, and the concept of immune selection in neoplastic development. This line of reasoning is supported by experimental evidence from murine models of immune involvement in neoplastic development. The process of immune selection has serious implications for the development of clinical immunotherapeutic strategies and our understanding of current in vivo models of tumour immunotherapy. \u00a9 2000 Cancer Research Campaign"} +{"text": "The continued northwards spread of Rhodesian sleeping sickness or Human African Trypanosomiasis (HAT) within Uganda is raising concerns of overlap with the Gambian form of the disease. Disease convergence would result in compromised diagnosis and treatment for HAT. Spatial determinants for HAT are poorly understood across small areas. This study examines the relationships between Rhodesian HAT and several environmental, climatic and social factors in two newly affected districts, Kaberamaido and Dokolo. A one-step logistic regression analysis of HAT prevalence and a two-step logistic regression method permitted separate analysis of both HAT occurrence and HAT prevalence. Both the occurrence and prevalence of HAT were negatively correlated with distance to the closest livestock market in all models. The significance of distance to the closest livestock market strongly indicates that HAT may have been introduced to this previously unaffected area via the movement of infected, untreated livestock from endemic areas. This illustrates the importance of the animal reservoir in disease transmission, and highlights the need for trypanosomiasis control in livestock and the stringent implementation of regulations requiring the treatment of cattle prior to sale at livestock markets to prevent any further spread of Rhodesian HAT within Uganda. Human African Trypanosomiasis (HAT) or sleeping sickness is a parasitic disease of humans, transmitted by the tsetse fly. There are two different forms of HAT: Rhodesian (in eastern sub-Saharan Africa), which also affects wild and domestic animals, and Gambian . Diagnosis and treatment of the two diseases differ, and disease characterisation is based on prior knowledge of known geographical disease distributions. Presently, the two forms of HAT do not overlap in any area: Uganda is the only country which sustains active transmission of both types.In recent years, Rhodesian HAT has spread into areas of Uganda that had not previously been affected, thus narrowing the gap between areas of Rhodesian and Gambian HAT transmission. This spread has raised concerns of a potential overlap of the two types of the disease, which would severely complicate their diagnosis and treatment. Earlier work indicated that Rhodesian HAT was introduced to Soroti district due to the movement of untreated cattle from affected areas. Here we show that the continued spread of HAT in Uganda (to a further 2 districts) may also have occurred due to cattle movements, despite legal requirements to treat livestock from affected areas prior to sale at markets. These findings can assist in the targeting of HAT control efforts in Uganda and show that the stringent implementation of animal treatments at livestock markets should be a priority. Trypanosoma brucei rhodesiense causes an acute disease in eastern sub-Saharan Africa and has a reservoir in wild and domestic animals while Trypanosoma brucei gambiense causes a chronic form of the disease in western and central sub-Saharan Africa. 
Uganda has had the misfortune to sustain active transmission of both types of the disease: T. b. gambiense in the north west and T. b. rhodesiense in the south east T. b. rhodesiense) was introduced into Tororo District in 1987, the disease has persistently spread northwards into previously unaffected areas of Uganda T. b. rhodesiense focus is a persistent concern. The Northwards spread of disease has narrowed the area between the active foci of Rhodesian and Gambian HAT, with an estimated 150 km now separating the two forms of the disease Human African trypanosomiasis (HAT), or sleeping sickness, is caused by two sub species of a hemoflagellate parasite that are transmitted by tsetse flies. It is essential that the dynamics of disease spread are understood if HAT is to be controlled in Uganda. A comprehensive understanding of the factors involved in the disease's spatial distribution and movements will enable more effective targeting of control efforts. The spatial distribution of HAT is driven by complex interactions of many factors. The occurrence of disease in an area is dependent on the establishment of disease transmission, which in turn is reliant on the suitability of an area for the disease. Within affected areas, a spatially varying intensity of transmission can result in the heterogeneous village level prevalence of disease. These two processes giving rise to i) the establishment of HAT transmission and ii) the heterogeneous prevalence of HAT in an area are likely to be driven by different environmental, climatic and social factors associated with the presence and density of tsetse flies Spatial analysis and geographic information systems (GIS) have been applied increasingly to infectious disease epidemiology in recent years, including to the analysis of HAT T. b. rhodesiense HAT in two newly affected districts of Uganda (Kaberamaido and Dokolo) was examined in relation to several environmental, climatic and social variables. Prevalence of HAT was then predicted spatially to highlight areas with the potential for high prevalence and to enable the targeting of future control efforts. The utilities of two different methodologies were compared: a two-step regression method and a traditional one-step regression method. The two-step regression was used to allow the separate analysis of factors governing the occurrence and prevalence of HAT. The prevalence analysis in the two-step regression model was conducted solely on areas that had a high predicted probability of occurrence. This was anticipated to provide an increase in predictive accuracy due to the exclusion of large areas with little or no HAT transmission.The spatial distribution of T. b. rhodesiense). Kaberamaido (Eastern region) and Dokolo (Northern region) districts lie to the north of Lake Kyoga with a combined area of approximately 2740 km2. The main economic activities within the study area are agriculture and fishing, with the majority of the population engaged in subsistence farming The study area included Kaberamaido and Dokolo districts in Uganda see , two of A handheld global positioning system was used to geo-reference the central point of all villages within the study area with guidance from local government staff. Coordinates were taken in the WGS84 geographical coordinate system in decimal degrees . 
Comprehensive HAT hospital records were collected in collaboration with the Ugandan Ministry of Health from the two HAT treatment centres serving the study area; Lwala Hospital (Kaberamaido district) and Serere Health Centre IV (Soroti district). To maintain anonymity of subjects and patient confidentiality and to adhere to the International Ethical Guidelines for Biomedical Research Involving Human Subjects, no patient names were recorded within the database or as part of the data collection process. The hospital records were matched with the geo-referenced villages by cross-referencing each case's village of residence with the names from the geo-referenced villages. This resulted in a spatially referenced dataset of all patients residing within the study area who had received a diagnosis of HAT .T. b. rhodesiense in the reservoir, the control programme resulted in an altered epidemiology of HAT within the study area in the subsequent year and so may have affected the results of the regression analyses.Cases occurring from February 2004 (when the first cases were reported) to December 2006 were included in the analysis. Cases diagnosed later than December 2006 were excluded because a control programme was instigated in September 2006 that involved the mass treatment of cattle in the study area and adjoining districts. By decreasing the prevalence of human infective The geo-referenced HAT case data were visualised using ArcMap 9.1 . External covariate datasets as listed in Several temporal Fourier-processed indices were obtained from Advanced Very High Resolution Radiometer (AVHRR) imagery: land surface temperature (LST), NDVI and middle-infrared . NDVI is a measure of the amount of green vegetation Predicted tsetse suitability maps were obtained from the Food and Agricultural Organization Distances to physical features (in km) were calculated. Land cover data In addition, distances to the closest livestock market and health centre (of any type) were calculated using the coordinates of each of these features that were obtained during fieldwork. The distance to the closest health centre (of any type i.e. not necessarily trained or equipped to diagnose or treat HAT) was used to deal with the confounding effect of access to health care. The distance to the closest livestock market was included to investigate the possibility that cattle movements in this area may have caused or contributed to the introduction and establishment of HAT transmission, as was found in a neighbouring district Exploratory analysis was conducted for each of the covariates: i) scatter plots to examine relationships with HAT prevalence; ii) box and whisker plots to examine the distributions of covariate data in villages which have had cases of HAT compared to villages which have not and iii) visualisation of the geographical distributions of the outcome variables in relation to the external covariates. 
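The "distance to closest livestock market" and "distance to closest health centre" covariates can be derived from the village and facility coordinates with a great-circle calculation. A minimal sketch is given below; the haversine approach and all coordinate values are illustrative placeholders rather than the study's GIS workflow.

```python
# Sketch of the "distance to closest facility" covariates using great-circle
# (haversine) distances on WGS84 decimal-degree coordinates. The coordinates
# below are invented placeholders, not the study's data.
from math import asin, cos, radians, sin, sqrt

def haversine_km(lon1, lat1, lon2, lat2):
    """Great-circle distance in kilometres between two lon/lat points."""
    lon1, lat1, lon2, lat2 = map(radians, (lon1, lat1, lon2, lat2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def distance_to_closest(village, facilities):
    """Distance (km) from a village to its nearest facility."""
    return min(haversine_km(village[0], village[1], f[0], f[1]) for f in facilities)

villages = [(33.20, 1.85), (33.35, 1.95)]  # (lon, lat) placeholders
markets = [(33.25, 1.80), (33.40, 2.05)]
for v in villages:
    print(v, round(distance_to_closest(v, markets), 1))
```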
Seventeen covariates were selected for use in the regression analyses based on the exploratory analysis described above. The statistical modelling was carried out using logistic regression: a generalised linear model used for the analysis of binomial data such as disease occurrence or disease prevalence (where the outcome is bounded between zero and one). Model results were expressed as odds ratios (OR): an OR of one indicates no association, an OR greater than one indicates a positive association with the odds of disease and an OR less than one indicates a negative association with the odds of disease. The two-step methodology comprised two logistic regression models applied sequentially, permitting separate analysis of the occurrence of HAT (whether transmission became established in an area) as well as prevalence (how intense was the transmission within affected areas), which are confounded in a one-step approach. Forwards stepwise addition beginning with the null model (no explanatory variables) was used in the model fitting. At each step the variable resulting in the greatest reduction in deviance was selected. A Chi-squared likelihood ratio test was used to compare models, and additional explanatory variables were accepted only if this test was significant and the covariate was significant within the model. Any variables that lost significance in subsequent steps were removed from the model. The stepwise addition of plausible interaction terms (if interaction is present, the effect of one variable on the odds of disease changes in relation to the effect of another variable) was then carried out in the same manner after the variables were centred. The sensitivity (true positive rate) and specificity (true negative rate) of the fitted model were calculated for a variety of cut-off points using the predicted and observed values, and plotted against the cut-off points. The cut-off point where the sensitivity and specificity crossed was selected as a suitable cut-off point for the classification of case and non-case villages: this point maximises both the specificity and the sensitivity of the classification of locations. A 10-fold cross-validation was performed using ten random sub-divisions of the dataset. The area under the receiver operating characteristic curve (AUC) was calculated; this value gives a measure of the overall performance of the model in classifying villages. An AUC of 1 indicates perfect discrimination between case and control villages, and an AUC of 0.5 illustrates a model that is in effect worthless for discrimination purposes. The resulting regression equation (probability of occurrence as a function of the explanatory variables) was used to predict the probability of occurrence of HAT across a grid with an area of 30,000 km2 (including the study region) and a 1.1 km cell size. All villages within the study area lying within an area of high predicted probability of occurrence were extracted for use in the second step of the analysis. The outcome variable for the second step of the two-step regression was defined as prevalence of HAT (number of cases divided by village population). Prevalence data from all villages within areas of high predicted probability of occurrence were included in the model, including those with no reported cases. Forwards stepwise addition was used in the model fitting procedure, as for the first step. For this section of the analysis, the distance to health centre variable was forced into the model (regardless of its significance) to ensure that access to health care was controlled for in the final results.
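The forward stepwise procedure described above, adding at each step the candidate covariate that gives the greatest reduction in deviance and accepting it only if a Chi-squared likelihood ratio test is significant, can be sketched as follows. The data frame, column names and simulated data are placeholders; this is not the authors' code.

```python
# Sketch of forward stepwise logistic regression selected by deviance reduction,
# with a chi-squared likelihood-ratio test between nested models. All data and
# column names below are simulated placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

def forward_stepwise_logit(df, outcome, candidates, alpha=0.05):
    """Greedily add the candidate giving the largest deviance drop while the LR test is significant."""
    selected = []
    current_dev = sm.GLM(df[outcome], np.ones(len(df)), family=sm.families.Binomial()).fit().deviance
    improved = True
    while improved and len(selected) < len(candidates):
        improved = False
        best = None
        for var in (c for c in candidates if c not in selected):
            X = sm.add_constant(df[selected + [var]])
            dev = sm.GLM(df[outcome], X, family=sm.families.Binomial()).fit().deviance
            if best is None or dev < best[1]:
                best = (var, dev)
        lr = current_dev - best[1]  # deviance reduction, ~ chi-squared with 1 d.f.
        if stats.chi2.sf(lr, df=1) < alpha:
            selected.append(best[0])
            current_dev = best[1]
            improved = True
    return selected

rng = np.random.default_rng(0)
df = pd.DataFrame({"dist_market": rng.uniform(0, 30, 400), "min_lst": rng.normal(30, 2, 400)})
logit_p = 1.5 - 0.15 * df["dist_market"]  # only distance to market drives risk in this toy data
df["case"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))
print(forward_stepwise_logit(df, "case", ["dist_market", "min_lst"]))
```

In the study's two-step form, the same routine would be run first on village-level occurrence (case versus non-case) and then, within the high-probability mask, on prevalence.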
The fitted model was used to predict the prevalence of Rhodesian HAT across the same area as was used in the first step. For the one-step analysis, a single prevalence model was fitted to data from all villages in the study area using the same stepwise procedure. Four covariates were found to influence significantly the occurrence of HAT across the study area. The predicted suitability for transmission across the study area using the specified model is illustrated in the corresponding map. The prediction was used to create a mask over the study area; all areas with a predicted probability of occurrence less than 0.2 were excluded. 279 villages lay within the area defined as having a high probability of occurrence. However, seven of those villages had no population data and so were excluded from the remaining analysis, leaving 272 villages. The results from the second model are shown in Table . HAT prevalence was significantly correlated with nine variables in addition to distance to the closest health centre, which was negatively correlated and of borderline significance (p = 0.05, variable forced into the model). Prevalence was negatively correlated with distance to the closest livestock market, with every additional kilometre resulting in a 20% decrease in odds of disease. This was shown to interact with distance to the closest area of woodland, which in turn showed a positive correlation with prevalence. In addition, HAT prevalence was negatively correlated with distance to the closest area of bush and maximum NDVI, and positively correlated with NDVI phase of annual cycle, NDVI annual amplitude, LST phase of annual cycle, LST annual amplitude and minimum LST. The model had a small tendency to over-predict prevalence, with a median error of 0.05%. The mean absolute error for the predicted prevalence per 100 population was 0.24%. The scatter plot of predicted prevalence against observed prevalence illustrates this tendency. Nine variables were shown to be significantly associated with prevalence of HAT across the study area using the one-step regression. The correlation between predicted and observed prevalence values was 0.58, indicating a modest linear association. The model was slightly biased, with a very small tendency to over-predict prevalence (median error = 0.02%), and the mean absolute error was 0.13%. The scatter plot of predicted prevalence against observed prevalence values illustrates this. To allow a direct comparison of the predictive accuracy of the two methodologies, the one-step model was used to calculate predicted prevalence for the villages with high predicted probabilities of occurrence from the two-step analysis (i.e. excluding areas with a predicted probability of occurrence of less than 0.2). The correlation between predicted and observed prevalence was 0.50, lower than that for the two-step regression method (0.57). Again, the model was shown to have a tendency to over-predict prevalence, with a median error of 0.05%. The mean absolute error was 0.24%, equal to the mean absolute error from the two-step regression methodology. Spatial determinants for HAT are poorly understood across small areas. This study examined the relationships between Rhodesian HAT and several environmental, climatic and social factors in two newly affected districts, Kaberamaido and Dokolo.
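The agreement statistics quoted above (correlation, median error and mean absolute error between predicted and observed prevalence) can be computed as in the following minimal sketch; the function and argument names are illustrative assumptions, not part of the original analysis code.

```python
import numpy as np

def prediction_error_summary(observed, predicted):
    """Summarise agreement between observed and model-predicted prevalence (e.g. per 100 population)."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    errors = predicted - observed  # positive values indicate over-prediction
    return {
        "correlation": np.corrcoef(observed, predicted)[0, 1],
        "median_error": np.median(errors),
        "mean_absolute_error": np.mean(np.abs(errors)),
    }
```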
The application of a two-step regression approach for the prediction of HAT prevalence in a newly affected area of Uganda allowed the investigation of factors influencing the occurrence and prevalence of HAT separately, and overall resulted in a slight increase in predictive accuracy when compared to a one-step analysis in areas with high predicted probability of occurrence. Each of the models has illustrated an increased risk of HAT in villages closer to livestock markets than in villages further away, suggesting the persistent spread of Rhodesian HAT in Uganda may have resulted from the continued movement of untreated cattle. The two-step regression model gave a slight increase in predictive accuracy in comparison with the one-step analysis, with a correlation between fitted and observed prevalence values of 0.57 for the two-step regression and 0.50 for the one-step regression analysis (when looking only at areas with a high predicted probability of occurrence). Both models tended to predict higher prevalence than was observed, particularly in villages of zero prevalence, with a median error of 0.05% for both models. The mean absolute error was equal for the two methods (0.24%). The difference in predicted prevalence of HAT from the two methods was small over the majority of the prediction area, with divergences mainly occurring in areas of high predicted prevalence outside of the study area. There were only two health centres trained and equipped to diagnose and treat HAT serving the study population during the study period. It has been shown previously that levels of geographical accessibility to treatment facilities can have an effect on the observed spatial distribution of HAT, with smaller numbers of cases reported from areas which are further from the treatment centres. Rhodesian HAT has continued to spread from the T. b. rhodesiense endemic areas in the south east, through the study area and neighbouring districts, towards the T. b. gambiense endemic areas in the far north west of Uganda and southern Sudan. Clearly, this increases the risk of overlap of the two subspecies, particularly if the regulations regarding the treatment of cattle being moved from T. b. rhodesiense endemic areas continue to be broken. The stringent implementation of regulations requiring the treatment of cattle prior to sale at livestock markets should be a priority for the Ugandan Government, and tsetse control efforts may be more efficiently targeted to areas surrounding livestock markets to prevent the establishment of transmission in previously unaffected areas, as occurred in Soroti district in the late 1990s and Kaberamaido and Dokolo districts in 2004. Distance to the closest livestock market was an important predictor in the one-step regression and in both steps of the two-step regression, with decreasing odds of infection at increasing distances. Previous research has confirmed the introduction of HAT to a previously unaffected area via the introduction of untreated, infected livestock. Other variables that were also significantly correlated with HAT prevalence and/or occurrence included distance to the nearest health centre, maximum Normalised Difference Vegetation Index (NDVI), NDVI phase of annual variation, NDVI annual amplitude, minimum Land Surface Temperature (LST), LST phase of annual variation, LST annual amplitude, mean LST, distance to the closest area of woodland and distance to the closest area of bush. The significance of these variables highlights the importance of climatic and environmental conditions for HAT transmission.
Distance to the closest health centre was also a significant factor in each model, with decreasing prevalence observed at increasing distances. This suggests a confounding relationship due to accessibility of health services, as has been previously reported. Each of the regression models (the one-step regression model and each step of the two-step regression models) included maximum NDVI (negative association) and minimum LST (positive association) as significant predictors. These are likely to relate to the habitat and environmental requirements of the tsetse fly vector of disease. The additional variables found to be significantly correlated with HAT prevalence in each analysis are probably also linked to the suitability of an area for the tsetse fly vector, and so will influence the intensity of transmission and observed prevalence of HAT. Analysis of the residual variation (after accounting for the covariate effects) indicated that there was some spatial autocorrelation in the residuals from the one-step regression and the probability of occurrence analysis. For the two-step regression, the probability of occurrence regression was carried out partially to provide a mask over areas with low predicted probability of occurrence to enable the focusing of the prevalence analysis, and so the small amount of spatial autocorrelation in the residuals is not seen as problematic, as it would have a negligible effect on the final prevalence model. However, for the one-step regression, the small amount of spatial autocorrelation in the residuals may lead to inflated statistical significance for some of the covariates. Further research is underway to address this autocorrelation in the residuals and to assess any increase in the predictive accuracy using a model-based geostatistics approach. From these and previous findings, it appears that the introduction of T. b. rhodesiense infected livestock from endemic areas through livestock markets within the study area occurs periodically. A complex interaction of factors is involved in the establishment of transmission following such an occurrence. In addition to the variables included in the current analysis, tsetse and livestock densities, human-cattle-tsetse contact and also, to a large degree, chance may play roles. Further research is planned to build upon these findings, incorporating detailed livestock market data and cattle trading networks to give a more thorough understanding of the spatial and temporal dynamics of HAT within Uganda."} +{"text": "Human genetic variation produces the wide range of phenotypic differences that make us individual. However, little is known about the distribution of variation in the most conserved functional regions of the human genome. We examined whether different subsets of the conserved human genome have been subjected to similar levels of selective constraint within the human population. We used set theory and high performance computing to carry out an analysis of the density of Single Nucleotide Polymorphisms (SNPs) within the evolutionary conserved human genome, at three different selective stringencies, intersected with exonic, intronic and intergenic coordinates. We demonstrate that SNP density across the genome is significantly reduced in conserved human sequences. Unexpectedly, we further demonstrate that, despite being conserved to the same degree, SNP density differs significantly between conserved subsets.
Thus, both the conserved exonic and intronic genomes contain a significantly reduced density of SNPs compared to the conserved intergenic component. Furthermore, the intronic and exonic subsets contain almost identical densities of SNPs, indicating that they have been constrained to the same degree. Our findings suggest the presence of a selective linkage between the exonic and intronic subsets and ascribe increased significance to the role of introns in human health. In addition, the identification of increased plasticity within the conserved intergenic subset suggests an important role for this subset in the adaptation and diversification of the human population. Although it is widely accepted that genome changes have driven evolution, there is still a lack of consensus as to which aspects of genome function are most affected to bring about phenotypic change. Many conjecture that changes within exonic coding regions are most important, whilst others argue that changes in non-coding sequences play the greater role. One method to address the question of where the majority of functional polymorphisms lie within the human genome is to examine the densities of polymorphisms within the different functional components of the conserved human genome. Thus, functional portions of the genome under the strongest selective pressure will contain fewer polymorphisms, due to removal of less fit individuals from the population at an early age. The advantage of examining the conserved genome to select for functional importance is that mechanistic bias towards particular subsets is removed and the importance of a particular sequence to survival is defined by its retention through evolution. Once these conserved sequences have been identified they can be divided into functional subsets and densities of polymorphisms within these subsets can be compared. Thus, if one portion of the conserved genome contains a lesser density of polymorphisms, despite being conserved to an identical degree, it can be assumed that this portion has been subjected to a higher degree of purifying selection within the human population and is consequently more important in maintaining species fitness and in conferring disease susceptibility prior to reproductive age if compromised. In order to determine the densities of polymorphisms, in the form of SNPs, across these components, the coordinates of SNPs were intersected with those of the conserved exonic, intronic and intergenic subsets. Using this simple but unique set theory approach we have been able to demonstrate that, in keeping with the current understanding of its importance in evolution and health, the conserved exonic subset has a significantly reduced density of SNPs and has therefore been subjected to greater selective pressures than other areas of the genome. Unexpectedly, comparison of the SNP densities between the intergenic and intronic components, both previously considered \"junk DNA\", demonstrated significant differences in SNP densities, such that the intronic portion had a statistically identical SNP density to the exonic component and the intergenic component contained a significantly higher SNP density. These observations demonstrate that the conserved intronic subset of the human genome has been subjected to identical levels of purifying selection as the exonic component within the human population. These novel and far reaching observations point to a critical role for conserved intronic sequences in the maintenance of species fitness and human health and give added weight to the analysis of intronic polymorphisms in the search for the causes of human genetic disease.
In addition, the higher SNP density within the intergenic subset is indicative of its important role in driving the adaptive changes that reflect diversity within the human population. The genomic data of chromosomal positions for transcripts and the positions of exons within those transcripts were downloaded from the UCSC genome browser through the table browser portal. These data were held on a MySQL database implemented on a 56 node High Performance Cluster (HPC) IBM blade array operated by Microsoft compute cluster server 2003. All programs were written in Visual Basic .net on the Microsoft .net 3.5 framework, in a parallel design using the database to pass messages and data to the worker nodes. The database was designed so that the queries were optimized during the analysis process, and also to optimize subsequent analysis of the results. In utilizing set theory, each chromosome was considered as a set with its members being its base pairs. Each chromosome was considered separately as an entity of DNA with an independent evolutionary path. The bases of each chromosome were categorized according to their position with respect to the different annotation information gathered. All autosomal chromosomes were analysed, although the X and Y chromosomes were removed from the analysis due to being under different selective pressures and being represented differently within the population. The mitochondrial genome was also removed from the analysis. We also removed repetitive regions, as the repetition, frequency and random nature of these repeat regions present problems when using pairwise alignment analyses. Un-sequenced regions of the genome, such as the centromere regions of each chromosome, were also removed from the analysis as no alignments or SNPs can be mapped to these regions. From the starting genomic annotations, set algebra was used to define subsets for further investigation, as described in Table . The analysis was carried out on each chromosome with pairwise alignments of each species aforementioned at three different selective \"stringencies\" of 70%, 80% and 90% over 100 base pairs. However, at the higher levels of stringency the size of the conserved genome for large evolutionary depth was small and the number of SNPs was reduced to a very small number. Therefore, in order to keep the analysis statistically valid across all species, the 70% data was selected for the majority of the analysis in this paper, although the 80% and 90% data demonstrated a similar trend. Statistical analyses of the results were carried out in MATLAB version 7.1 (Mathworks) and Microsoft Excel 2003. Tests of normality were undertaken using the Jarque-Bera test (JB test) on the mean SNP density counts for all regions as described in Table .
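As a compact illustration of the set-based design described above (the original implementation used Visual Basic .net, MySQL and MATLAB), the following Python sketch treats each functional subset as a set of base-pair positions, computes SNP densities by intersection, and applies the Jarque-Bera normality test and a one-way ANOVA to per-chromosome densities. All names and data structures are assumptions made for illustration.

```python
from scipy import stats

# Hypothetical per-chromosome inputs: conserved, exonic, intronic, intergenic and
# snp_positions are sets of base-pair coordinates for a single chromosome.
def subset_densities(conserved, exonic, intronic, intergenic, snp_positions):
    """SNP density (SNPs per conserved base) for each conserved functional subset."""
    subsets = {
        "conserved_exonic": conserved & exonic,
        "conserved_intronic": conserved & intronic,
        "conserved_intergenic": conserved & intergenic,
    }
    return {name: len(bases & snp_positions) / len(bases)
            for name, bases in subsets.items() if bases}

# densities_by_subset: lists of per-chromosome densities, e.g. {"conserved_exonic": [...], ...}
def compare_subsets(densities_by_subset):
    normality = {name: stats.jarque_bera(values)       # JB test of normality per subset
                 for name, values in densities_by_subset.items()}
    anova = stats.f_oneway(*densities_by_subset.values())  # one-way ANOVA across subsets
    return normality, anova
```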
We have compared SNP data from 2003 to that held within dbSNP in 2009, and have determined that the major reservoirs of human SNP variation within the human genome conserved at three different stringencies within amniotes are to be found within the non-coding portion of the genome (the intergenic and intronic subsets) (Table ). We sought to determine if there was a relationship between SNP density and evolutionary depth, as it has been shown that highly conserved sequences in the human genome are indicative of functionally important sequences within genes, introns and intergenic regions. The results of this analysis demonstrated that SNPs occur at a significantly lower rate in the most highly conserved regions, conserved since the common ancestors of humans and birds, amphibians and fish, as confirmed by one-way ANOVA analysis. Because they had been conserved to the same degree we initially hypothesised that SNPs would occur with equal density throughout the conserved genome, irrespective of the identity of the conserved subset. We carried out ANOVA analysis to determine whether there were significant differences in the densities of SNPs within the intergenic, intronic or exonic subsets at three different comparative stringencies. However, comparisons between the SNP densities for the exonic and intergenic subsets demonstrate significant differences in the SNP distribution between these subsets (Table ). These data suggest that, despite selecting on the basis of conservation, conserved intergenic and conserved exonic sequences appear to be subjected to different strengths of selective pressure within the human population. We considered the possibility that the differences observed in SNP density between the conserved exonic set and the conserved intergenic set reflect differences in the functional mechanism of these sets. The exonic sequences are functionally dependent on a mechanism involving the three base pair codon usage required for coding proteins, whilst intergenic regions and intronic regions are non-coding. Interestingly, comparison of the exonic and intronic subsets by ANOVA analysis (Table ) shows that these two subsets contain statistically indistinguishable SNP densities. By contrast, the SNP density within the conserved intergenic subset is consistently higher than in either the conserved coding or intronic subsets, irrespective of the species under comparison (see Table ). A number of large scale bioinformatic studies have previously explored SNP densities throughout the human genome. The current study also poses a fascinating contradiction. Before the start of this study we predicted that subsets of the genome that had been conserved to the same degree must contain similar densities of SNPs. However, we demonstrate that SNP density between the three subsets analysed is not the same, and the intergenic subset contains a significantly higher SNP density than either the intronic or exonic subset. This fascinating observation suggests that the variation seen within extant human populations has been selected for in a different way to the variation that has driven vertebrate evolution. We suggest two alternative hypotheses that might explain why the conserved human intergenic genome contains a higher SNP density than either the exonic or intronic genome. The first hypothesis is that the majority of the sequence within the conserved intergenic genome consists of \"junk\"; DNA that plays little or no functional role and has been conserved by chance.
However, a number of recent studies demonstrating the important role played by conserved intergenic sequence in gene regulation argue against this hypothesis. A second hypothesis is that the greater plasticity of the conserved intergenic subset has been tolerated, or even favoured, because this subset drives the adaptive changes that underlie diversity within the human population. The present study suggests the presence of a selective linkage between the exonic and intronic subsets and ascribes increased significance to the role of introns in human health. In addition, the identification of increased plasticity within the conserved intergenic subset suggests an important role for this subset in the adaptation and diversification of the human population. SD undertook the design, computer analysis and statistical analysis of the study. AS and AM conceived of the study, and participated in its design and coordination. All authors read and approved the final manuscript."} +{"text": "The pathological findings in 50 patients with Hodgkin's disease following laparotomy for diagnostic purposes are described. Forty-four patients had laparotomy before treatment and within a few months of the original diagnosis, while 6 patients had delayed laparotomies. The Rye histological classification was applied to the original lymph node biopsy, the abdominal lymph node and Hodgkin's tissue in the spleen. The variation in appearance both of these tissues and of the liver biopsies is discussed."} +{"text": "The influence of anthracyclines on membrane permeability functions has been investigated in HeLa cells by monitoring the efflux of fluorescein. Release of the fluorescent dye, dependent on the metabolic energy supply, occurs after the intracellular accumulation and enzymatic hydrolysis of the non-fluorescent substrate fluorescein diacetate (FDA). Flow cytometric evaluation of the efflux kinetics showed that adriamycin (ADR), N-trifluoroacetyladriamycin-14-valerate (AD-32) and daunorubicin (DNR) inhibited the permeability process. The degree of inhibition was dependent, though to different extent, on the intracellular concentration of each drug. An increase in the efflux rate was always observed when the cells were treated with the drugs in the presence of 20 mM glucose. Relationship of these effects with energetic metabolism was supported by the finding that ATP levels were lowered by the drugs and increased by glucose. Evaluation of the cytotoxicity induced by each drug showed that the intracellular amount necessary to inhibit cell survival by 50% was of the same order of magnitude as that which decreases to 50% membrane permeability to fluorescein. These results indicate a correspondence in the concentrations of anthracyclines required for inducing cytotoxicity and for inhibiting membrane permeability functions dependent on the metabolic energy supply."} +{"text": "The presence or absence of lymphocytic mucopolysaccharides (MPS) is studied in 223 subjects: 100 normals (controls); 8 cancer patients cured for more than 6 years; 30 cancer patients at the start of their treatment; and 85 relatives of first degree consanguinity of these last patients. The data are studied by statistical and genetic analysis. The results confirm the findings reported earlier and show that the difference in the probability of a high frequency of leucocytic MPS between the relatives of cancer patients and the controls is highly significant. Furthermore, this probability in a relative of first degree of consanguinity of a cancer patient is more than three times greater than in an individual of the general population.
Genetic segregation analysis shows that the high leucocytic MPS trait segregates in the families of cancer patients after a classic pattern of dominant autosomal inheritance. Applying Falconer's nomogram it is concluded that the whole of this phenotypic variation is of genetic origin. Its interrelationships with cancer are discussed and it is postulated that this disturbance of the lymphocytic MPS represents a subclinical variant, not known until now, of the clinical mucopolysaccaridoses."} +{"text": "A total of 1,834 cases of colorectal cancers were divided into two diagnostic groups and studied. The ratio of smaller, less advanced carcinomas to the total number of colorectal cancers diagnosed by electronic videoendoscopy has increased as compared to the ratio in cases diagnosed by fiberscopy. This seems to be largely influenced by concomittent developments such as the implementation of colorectal cancer screening assisted by the popularization of immunological fecal occult blood tests and the increase in number of examinees. On the other hand, the use of the electronic videoendoscopy has been accompanied by increases in the diagnosis of minute carcinomas measuring 5 mm or less and flat carcinomas which were previously difficult to diagnose. Under these circumstances, endoscopic mucosal resection has gained popularity, widening the spectrum of therapeutic options. Nonetheless, endoscopic treatment also has been associated with increases in cases requiring laparotomy when carcinoma was found in the resected margin or those demonstrating invasion in the submucosa. This may be one result of the more aggressive application of endoscopic treatment, but the histologically recognized inability to detect carcinoma in the resected intestinal tract leaves room for improvement."} +{"text": "The development of nanodevices for agriculture and plant research will allow several new applications, ranging from treatments with agrochemicals to delivery of nucleic acids for genetic transformation. But a long way for research is still in front of us until such nanodevices could be widely used. Their behaviour inside the plants is not yet well known and the putative toxic effects for both, the plants directly exposed and/or the animals and humans, if the nanodevices reach the food chain, remain uncertain. In this work we show that magnetic carbon-coated nanoparticles forming a biocompatible magnetic fluid (bioferrofluid) can easily penetrate through the root in four different crop plants . They reach the vascular cylinder, move using the transpiration stream in the xylem vessels and spread through the aerial part of the plants in less than 24 hours. Accumulation of nanoparticles was detected in wheat leaf trichomes, suggesting a way for excretion/detoxification. This kind of studies is of great interest in order to unveil the movement and accumulation of nanoparticles in plant tissues for assessing further applications in the field or laboratory. Several areas, such as medicine, materials science and electronics, have begun to benefit and apply nanotechnology for their research since some decades ago. However, only during the recent years, researchers from other disciplines start to see the potential applications of nanoscience, as it is the case of agriculture . NanosenDuring the last years, some works have been published about absorption and uptake of nanoparticles by plants, but mainly dealing with and focused on their putative adverse effects -8. 
NeverCucurbita pepo) [In a previous research, we analyzed the penetration and transportation of magnetic carbon-coated nanoparticles through the leaves and aerial part of the plant in cucumber (ta pepo) ,10. The ta pepo) applyingta pepo) . Furtherta pepo) might beHelianthus annuus) from the family Compositae; tomato (Lycopersicum sculentum) from the Solanaceae; pea (Pisum sativum), from the Fabaceae; and wheat (Triticum aestivum), from the Triticeae.In the present work, we have studied the absorption and translocation of magnetic carbon-coated nanoparticles through the root in four crop plants belonging to different families: sunflower allowing visualizing the roots . When thFirstly we assessed that bioferrofluid was able to penetrate into the treated roots and to reach the vascular cylinder in a short period of time. Study of the samples taken at the point of application showed that after only 24 hours of exposure to the bioferrofluid, nanoparticles were able to leak into the vascular tissues of the tested crops Figure . This inAt this point, there are no studies about the real mechanism by which nanoparticles can penetrate into the plant cells. However, there is a recent work dealing with internalization of gold nanoparticles using tobacco protoplasts . In suchNanoparticles were detected easily in the xylem vessels of the four crops studied, but some differences were observed among species. Pea roots accumulated higher contents of bioferrofluid Figure than sunAfter a successful uptake of the nanoparticles by the plant roots, we monitored the translocation of such nanoparticles into the aerial part. Figure Subsequent sections of upper parts of the plants confirmed that nanoparticles had spread and reached most of the aerial part after 24 hours of exposure to the bioferrofluid. Following the same pattern, accumulation of nanoparticles was detected in xylem vessels corresponding to the first Figure and secoIt is remarkable that nanoparticles strongly accumulated in leaf trichomes of wheat plants Figure . The preFinally, the presence of nanoparticles in roots not exposed directly to the bioferrofluid was checked Figure . The chaBecause these microscopic techniques allow observation only with low resolution, the bioferrofluid was usually visualized inside xylem vessels where big accumulations of nanoparticles took place. However, 48 hours after roots exposure to bioferrofluid, nanoparticles were also detected in vascular and cortical parenchimatic cells of the plants Figure . As statIn summary, in this work we have presented results showing how carbon-coated magnetic nanoparticles can be absorbed by the root system of four different crop plants and spread using the vascular system to reach the whole plant. There are differences in the speed of absorption and distribution of the nanoparticles depending on the species, being faster in pea and wheat than in tomato and sunflower. In addition, it seems that sunflower shows a lower capability for radial movement of bioferrofluid outside the vascular tissues than the other crops. Within the first 24 hour of exposure to the suspension, the nanoparticles can reach the upper part of the plants, and in the case of wheat they accumulate inside leaf trichomes. After 48 hours of exposure, the bioferrofluid is located outside the vascular tissues and has moved downwards to non treated roots. 
This fast movement of the nanoparticles inside the plants can have an important impact for the development of nanoparticles as smart delivery systems inside the plant and further studies about their distribution and accumulation. It seems clear that root application is faster and more reliable than leaf treatments ,10. ThisThe authors declare that they have no competing interests.ZC carried out the nanoparticle treatments to the plants and the microscopy study, the processing of plant samples, and wrote the first manuscript draft. LC carried out the synthesis of nanoparticles and the bioferrofluid suspension. CM and MRI participated in the design of the nanoparticle synthesis and preparation of the suspension, in the design of the study and to the writing of parts of the manuscript. JMF contributed to the experimental design of nanoparticle synthesis and to the writing of parts of the manuscript. DR participated in the design of the study and helped in experiments of nanoparticle treatments to the plants. APL conceived the study, participated in the design and coordination of the work and helped to draft the manuscript. All authors read and approved the final manuscript.Hydrodynamic size. The data show the hydrodynamic size of the nanoparticles measured by Dynamic Light Scattering technique.Click here for file"} +{"text": "Fas gene is frequently reduced or lost during the development of colorectal carcinoma. However, loss of heterozygosity at the Fas locus or Fas gene rearrangements do not account for the loss of expression of Fas, raising the possibility that methylation of the Fas promoter may inhibit gene expression in colorectal carcinomas. We have examined the Fas promoter region CpG island for evidence of hypermethylation in colorectal tumours. Forty-seven specimens of colorectal adenoma and carcinoma, as well as six samples of normal colonic mucosa, were examined by Southern blotting for methylation at Hpa II and Cfo I sites in this region. No methylation was detected in any of the specimens, suggesting that hypermethylation is not primarily responsible for the loss of expression of the Fas gene during colorectal tumorigenesis. \u00a9 2000 Cancer Research CampaignExpression of the apoptosis-promoting"} +{"text": "Optimized methods for extraction and enzyme assay in crude tissue preparations were used to determine the amounts of terminal deoxnucleotidyl transferase (TdT) in malignant lymphomas. The TdT concentration was increased only in lymphoblastic lymphomas (LL) and was as high in these tumours as in the white blood cells from untreated patients with acute lymphoblastic leukaemia (ALL). The enzymes extracted from such lymphomas and from the leukaemic lymphoblasts had the same properties. Moreover, forms of TdT with low and high mol. wt were found in the LL tumours, similar to other reports of TdT-positive leukaemias. The overall study points at some basic biochemical identity of certain lymphoblastic malignancies, irrespective of whether the transformed cells are in solid tumours or are disseminated in the blood."} +{"text": "Nanoparticles introduced in living cells are capable of strongly promoting the aggregation of peptides and proteins. We use here molecular dynamics simulations to characterise in detail the process by which nanoparticle surfaces catalyse the self-assembly of peptides into fibrillar structures. 
The simulation of a system of hundreds of peptides over the millisecond timescale enables us to show that the mechanism of aggregation involves a first phase in which small structurally disordered oligomers assemble onto the nanoparticle and a second phase in which they evolve into highly ordered Protein misfolding and aggregation are associated with a wide variety of human disorders, which include Alzheimer's and Parkinson's diseases and late onset diabetes. It has been recently realised that the process of aggregation may be triggered by the presence of nanoparticles. We use here molecular dynamics simulations to characterise the molecular mechanism by which such nanoparticles are capable of enhancing the rate of formation of peptide aggregates. Our findings indicate that nanoparticle surfaces act as a catalyst that increases the local concentration of peptides, thus facilitating their subsequent assembly into stable fibrillar structures. The approach that we present, in addition to providing a description of the process of aggregation of peptides in the presence of nanoparticles, will enable the study of the mechanism of action of a variety of other potential aggregation-promoting agents present in living organisms, including lipid membranes and other cellular components. Indeed, it is well known that colloids in vivo, nanoparticles are often covered by peptides and proteins that determine their behaviour in the cell With the advent of nanoscience much interest has arisen about the ways in which nanoparticles interact with biological systems, because of their potential applications in nanotechnology and effects on human health et al. investigated the process of assembly of amphiphatic peptides in the presence of lipid vescicles In this work we use molecular dynamics simulations to investigate the molecular mechanism of peptide self-assembly in the presence of spherical nanoparticles. Although computational studies using full atomistic models have provided considerable insight into the role of fundamental forces in promoting the self-assembly of polypeptide chains, they are restricted to relatively small systems of peptides and short timescales In the present work, we adopted an off-lattice protein model onds see . The majStarting from the experimental observation that amyloid formation is a phenomenon common to most polypeptide chains We first performed molecular dynamics simulations at a peptide concentration, We consistently observe that the presence of this hydrophobic nanoparticle effectively removes the lag-time prior to aggregation , which rncreases . Althougncreases , larger in vivo nanoparticles are always covered by biological molecules.In order to provide a detailed analysis of the structure of the clusters that form on the nanoparticle surface we calculated the liquid crystalline order parameter ordering . To inveordering . The enhIn our simulations the lag time for nanoparticle induced peptide aggregation is about a microsecond which is quite short compared to the lag times typically observed in experiments. The latter range from some hours to several days, but it should be noted that both peptide concentration (3.4 mM) and, especially, nanoparticle concentration (6.5 \u00b5M) are much higher than in experiments less effective as well in reducing aggregation lag times In our model system the binding of peptides to the nanoparticle is stronger for the more hydrophobic surface . 
As a reWe did not observe an increase of the lag time prior to aggregation by using a smaller nanoparticle diameter, As a final remark, we observe that the molecular mechanism associated with the condensation ordering transition for peptide nanoparticle association described here is independent of particle size and hydrophobicity. The structural reorganization of protein chains in the early disordered oligomeric assemblies from their native or unstructured conformation to the We have characterised the process of nanoparticle-catalysed peptide aggregation in terms of a condensation-ordering mechanism and investigated its dependence on the nanoparticle diameter and the strength of the nanoparticle-peptide interactions. A similar mechanism of aggregation has already been observed in the absence of catalysing factors We used a modified version of the tube model The quasi-cylindrical symmetry of the tube is broken by the geometric requirements of hydrogen bonds. These geometrical requirements were deduced from an analysis of 500 high resolution PDB native structures The energy of hydrogen bonds was set to We performed discontinuous molecular dynamics (DMD) simulations 9, performed in every simulation corresponds qualitatively to 0.78 milliseconds.In order to associate the number of collision steps performed in our simulation to a real time we measured the long time self-diffusion coefficient of our model peptide, Video S1Configurations obtained from the molecular dynamics trajectory that corresponds to (5.47 MB GIF)Click here for additional data file.Video S2Final configuration obtained from the molecular dynamics trajectory shown to (3.25 MB GIF)Click here for additional data file."} +{"text": "Highly specific high-throughput assays will be required to take full advantage of the accumulating information about the macromolecular composition of cells andtissues, in order to characterize biological systems in health and disease. We discussthe general problem of detection specificity and present the approach our grouphas taken, involving the reformatting of analogue biological information to digitalreporter segments of genetic information via a series of DNA ligation assays. Theassays enable extensive, coordinated analyses of the numbers and locations of genes,transcripts and protein."} +{"text": "To the Editor: Recently, Pascal Del Giudice et al. published an interesting article of dermatitis caused by aterials , panel COur data coincided with those of the French study and reinforce the specificity of this dermatologic sign. However, this was not the only coincidence; cases also occurred among the investigators after contact with the infected material in each of the outbreaks. Perhaps both signs may characteristic this dermatitis: the comet sign and \u201cthe sign of the infected investigators\u201d of the outbreaks."} +{"text": "Rehabilitation of patients after surgical removal of carcinomas in facial skeleton is one of the most difficult therapies of the stomatognathic system. Significant deformation of tissues, dysfunctions of the stomatognathic system with concurrent biological imbalance of the oral cavity environment frequently affect the treatment to become arduous. Scars and contraction of the oral crevice may cause serious psychological deficiencies that are another aspect of the treatment schedule.Three Turkish patients ages 46 , 61 and 24 who experienced similar operations were rehabilitated with maxillary obturators. The situations was ideal for patient no 1. 
Patient no 2 could not receive an immediate obturator and patient no 3 rejected using permanent obturator. The paper describes the advantages of a surgical obturator which is constructed before operation and inserted immediately following partial maxillectomy and expresses long term complications when neglecting the use of definite obturator prosthesis, in the light of three cases.The primary objective of oral-maxillofacial and plastic surgeons and prosthodontists when treating tumors is to eliminate disease and to improve the quality of life including the facial contours which influences the psychological condition of patient. Neglecting immediate obturator construction may cause serious facial appearance problems due to soft tissue contracture. When permanent obturator is rejected, serious contracture of soft tissues and facial disharmony is inevitable. Prostodontic rehabilitation of maxillectomies is the preferred treatment in most centers over autogenous tissue reconstructions -5. It reThe following case reports illustrate the benefits of obturator prostheses by emphasizing the advantages of the obturator that was constructed before operation and inserted immediately following maxillary surgery. Denial of using permanent obturator is also demonstrated.46-year old male patient presented to his local dentist complaining of pain and swelling associated with his upper teeth. Routine dental diagnostic procedures and periapical radiographs failed to determine the origin of swelling and he was referred to the department of Oral & Maxillofacial Surgery at Suleyman Demirel University, Faculty of Dentistry. A firm swelling was seen to involve on the left side of the maxillae. An incisional biopsy was performed and this revealed the mass to be a Squamous Cell Carcinoma (SCC). The decision following the consultation was to resect the tumor and to obturate the defect with an immediate prosthesis. The patient was informed about the treatment procedure and the immediate obturation which would minimize the alteration of his appearance. Prior to surgery impressions of the maxilla and mandible were obtained and the cast models were attached to a semiadjustable articulator. The predicted excision was performed on the maxillary model. An immediate obturator with 1 cm extension into the resected side was constructed with adams retention clasp on the right second molar teeth in the preserved side. Under general anesthesia the left side of the maxillae was resected together with the lower third of the nasal septum. After the removal of the tumor, tissue conditioning material was placed over the extension of the immediate obturator to fit the surgical defect accurately and to support the defect area and split-skin grafts Fig. which wa61 years old male patient was sent to the department of Maxillofacial Surgery with the suspicion of a malign neoplasm in the left side of maxillae due to causeless mobility of left molar teeth and swelling. The biopsy revealed a Squamous Cell Carcinoma (SCC) and hemimaxillectomy was planned for the patient. The patient preferred to receive the surgical operation and adjuvant radiotherapy in a different city where his children lived. The patient returned for prosthetic rehabilitation after 3 months. The history revealed that he has used neither an immediate nor an interim prosthesis. A definitive prosthesis was constructed following an interim obturator for 3 weeks period. The adversely contracted soft tissues did not permit an ideal prosthetic rehabilitation Fig. 
.24 years old female patient was referred with a swelling localized on her left cheek Fig. . The bioNeglecting timely prosthodontic cooperation may cause inappropriate facial contour which is almost impossible to reconstruct ,9,14. InThe borders of the defect may collapse more rapidly if support of the immediate obturator is neglected and in a few weeks after surgery \u2013 the healing period \u2013 especially the anterior and lateral border of the defect migrates towards the center of the defect causing facial aesthetic problems. Supporting soft tissues prevents the continuing of the migration and collapse of the soft tissues. Long term lack of support leads to slower migration but bigger problems which is hard or nearly impossible to treat or reconstruct. For the patient no 3 Levator Anguli Oris and Levator Labi Superior muscles may have been contracted and elevated the comissura as they originate from the maxilla which is already resected. Zygomaticus Major and Minor muscles may also have an effect on the elevation of the comissura considering the surgical procedure. Nevertheless the problem is more than an elevation of the comissura but a collapse of the left mid facial region which even causes the deviation of the nose hip. Therefore lack of support and soft tissue contraction due to radiotherapy is thought to have been more effective on the facial deformity.Immediate prosthetic replacement is a successful and time-saving procedure that may afford many advantages in the surgical and postoperative management of the patient . ServingThe use of immediate obturators is essential for the optimum rehabilitation of oral functions and cosmetics with prosthetics as well as the maintenance of definitive prosthesis. Immediate obturators support soft tissues after surgery and minimize scar contracture and disfigurement that may have a positive effect on the patients' psychology. Avoiding immediate obturator construction may cause serious facial appearance problems due to soft tissue contracture. When wearing the permanent obturator is neglected, the dynamics of non-supported soft tissues change towards serious contracture and facial disharmony.Written informed consent was obtained from the patients for publication of this case report and accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal.The authors declare that they have no competing interests.TB and MAA operated the patients no 1 & 3. TB also diagnosed the patient no 2. ST and MMO constructed the prosthesis for both three patients and prepared draft manuscript. All authors read and approved the final manuscript."} +{"text": "The growth in mice of subcutaneous isografts of any of 5 methylcholanthrene-induced fibrosarcomas was associated with macrophage stimulation, reflected in an increased incidence of DNA-synthesizing cells among marcophages in the uninjected peritoneal cavity. This occurred at some stage with 4 tumours that induced concomitant immunity and one that did not. Some degree of splenomegaly also occurred with all 5 tumours. The spleens of all the tumour-bearing mice showed histological evidence of increased haemopoietic activity. Histological changes in the lymphoid elements of the spleen were very different with different tumours, ranging from lymphoid stimulation to lymphoid atrophy. 
The lymph nodes draining the sites of primary isografts which induced concomitant immunity showed signs of stimulation in the paracortical areas, followed by plasmacytopoiesis in the medullary areas. Stimulation of the paracortical areas was not detected in the nodes draining sites of injection of a tumour not inducing concomitant immunity. Nodes draining the sites of challenge isografts in mice exhibiting concomitant immunity showed plasmacytopiesis."} +{"text": "A method is presented to evaluate the influence of statistical errors and inherent variation on the determination of mitotic and labelling indices of human tumours. In most of the experiments reported here, sufficient cells were counted to yield a statistical error which is small in comparison to the inherent differences in the proliferative indices, both between different sites in the same tumour and between different tumours of the same histological type. These inherent fluctuations are, theefore, a critical factor in cell kinetic studies of human tumours."} +{"text": "In a study of three socio-economic groups in Hong Kong, the high income group had a high faecal concentration of bile acids, especially the dihydroxy bile acids, compared to the low income group. The faecal bile acids were also more highly degraded. The faecal flora contained more bacteroides and fewer eubacteria. Very few of the clostridia able to dehydrogenate the steroid nucleus were isolated. An epidemiological study based on street blocks indicated that the high income group also have a higher incidence of cancer of the large bowel and of the breast. The results are discussed in terms of theories on the aetiology of large bowel cancer."} +{"text": "The prevalence of diabetes mellitus has reached epidemic proportions, this condition may result in multiple and chronic invalidating long term complications. Among these, the diabetic foot, is determined by the simultaneous presence of both peripheral neuropathy and vasculopathy that alter the biomechanics of the foot with the formation of callosity and ulcerations. To diagnose and treat the diabetic foot is crucial to understand the foot complex kinematics. Most of gait analysis protocols represent the entire foot as a rigid body connected to the shank. Nevertheless the existing multisegment models cannot completely decipher the impairments associated with the diabetic foot.A four segment foot and ankle model for assessing the kinematics of the diabetic foot was developed. Ten normal subjects and 10 diabetics gait patterns were collected and major sources of variability were tested. Repeatability analysis was performed both on a normal and on a diabetic subject. Direct skin marker placement was chosen in correspondence of 13 anatomical landmarks and an optoelectronic system was used to collect the data.Joint rotation normative bands (mean plus/minus one standard deviation) were generated using the data of the control group. Three representative strides per subject were selected. The repeatability analysis on normal and pathological subjects results have been compared with literature and found comparable. 
Normal and pathological gait have been compared and showed major statistically significant differences in the forefoot and midfoot dorsi-plantarflexion.Even though various biomechanical models have been developed so far to study the properties and behaviour of the foot, the present study focuses on developing a methodology for the functional assessment of the foot-ankle complex and for the definition of a functional model of the diabetic neuropathic foot. It is, of course, important to evaluate the major sources of variation (true variation in the subject's gait and artefacts from the measurement procedure). The repeatability of the protocol was therefore examined, and results showed the suitability of this method both on normal and pathological subjects. Comparison between normal and pathological kinematics analysis confirmed the validity of a similar approach in order to assess neuropathics biomechanics impairment. The chronic hyperglycemia of diabetes, a highly widespread chronic disease, is associated with long-term damage, dysfunction, and failure of various organs. In particular, patients experience neuropathy and blood vessels degeneration. These two complications develop into the foot disease which alters the biomechanics of gait and eventually leads to the formation of callosity and ulcerations.The social and economic burden of the diabetic foot can be reduced through early diagnosis and treatment. Diabetic neuropathy is present in 25% of the patients after 10 years of disease, and it is the most significant risk factor for the development of foot ulcers. It consists in the distal symmetrical polyneuropathy which affects the motor and sensitive systems, both involved in the pathogenesis of the diabetic foot . Callus st metatarsophalangeal and subtalar joints y = Vabs , and aftClick here for file"} +{"text": "Hormone measurements during the menstrual cycle were assessed in six premenopausal women undergoing breast cancer surgery and ten controls to determine whether the stress of diagnosis and surgery influenced cycle characteristics. There was hormonal evidence for normal ovulation in all cancer and control women, although the length of the luteal phase of the cycle was prolonged because of a delay in menstruation in two cancer patients. The timing of surgery in the cycle did not influence the hormonal data. The hormonal characteristics of the menstrual cycle thus appear to be normally preserved in women during the month in which breast cancer surgery is performed."} +{"text": "When a virus infects a cell, it must contend with a hostile environment and host machinery that is intrinsically antiviral. One of the hallmarks of herpes simplex virus (HSV) infection is the dramatic reorganization of the infected cell nucleus leading to the formation of large globular replication compartments in which gene expression, DNA replication, and encapsidation occur . 
During The formation of replication compartments follows an ordered assembly process, resulting in a drastic remodeling of the nucleus Remodeling of the nucleus also involves the reorganization of the cellular PQC machinery, including cellular chaperone proteins and components of the 20S proteosome Herpesviruses have also evolved a complex relationship with host DNA damage response pathways and the single-strand binding protein (ICP8) interact with each other, are recruited to replication compartments, and are essential for efficient virus production (During the earliest stages of viral infection, HSV transforms the cellular environment from one that is hostile to virus infection to one that supports virus growth. Work in this area has only scratched the surface in terms of defining the mechanisms by which ICP0 and other viral proteins manipulate cellular pathways and create environments that promote lytic infection. Elucidating how cellular homeostatic pathways limit viral infections will be important to our understanding of the delicate balance between lytic and latent infection and to aid in the development of new antiviral therapies."} +{"text": "The authors report a case of hormonally silent duodenal somatostatinoma. The main clinicalfeatures, the natural history and the currently available therapies of these rare neoplasms aredescribed on the basis of this case and of the scientific literature. Although the antiblastictherapies are still debated, the patient showed a surprising outcome following chemotherapy."} +{"text": "Positive selection of host proteins that interact with pathogens can indicate factors relevant for infection and potentially be a measure of pathogen driven evolution.Our analysis of 1439 primate genes and 175 lentivirus genomes points to specific host factors of high genetic variability that could account for differences in susceptibility to disease and indicate specific mechanisms of host defense and pathogen adaptation. We find that the largest amount of genetic change occurs in genes coding for cellular membrane proteins of the host as well as in the viral envelope genes suggesting cell entry and immune evasion as the primary evolutionary interface between host and pathogen. We additionally detect the innate immune response as a gene functional group harboring large differences among primates that could potentially account for the different levels of immune activation in the HIV/SIV primate infection. We find a significant correlation between the evolutionary rates of interacting host and viral proteins pointing to processes of the host-pathogen biology that are relatively conserved among species and to those undergoing accelerated genetic evolution.These results indicate specific host factors and their functional groups experiencing pathogen driven evolutionary selection pressures. Individual host factors pointed to by our analysis might merit further study as potential targets of antiretroviral therapies. Pan troglodytes troglodytes) gave rise to the HIV-1 groups M, N and O range. The same procedure was repeated on the positively selected genes inferred in the human-chimp comparison and correlated to the viral gene ranking based on the HIV-1/SIVcpz alignment.The interactions were grouped into bins defined by viral gene ranking. The ranks of host genes within bins were averaged. Bins of different sizes were slid along the viral gene ranking scale, advancing from one gene to adjacently ranked gene in each step. 
For each bin size we calculated the correlation between viral gene ranks and the averaged host gene ranks. Because of the small number of viral genes (18), a symmetrical approach of averaging over viral gene ranks was not performed. We tested all 18 possible bin sizes. We then used the permutation procedures described below to test for the significance of the correlation obtained for the averaged binned ranks of each bin size. In order to assess the statistical significance of the previous analyses we developed permutation tests of the HIV-human interactions. The HIV-human interaction data can be represented by a bipartite graph with nodes representing host and viral proteins and edges connecting interacting host and viral proteins. We designed two procedures of permuting the host-virus interaction network. The host-oriented test consists of retaining the degree of each of the host gene nodes in the network and randomly sampling a corresponding number of interacting viral genes from the set of all viral genes. The virus-oriented test consists of retaining the degree of each viral gene node and randomly sampling a corresponding number of interacting host genes. Performing two different permutation tests allowed us to test if certain results were due to the differing numbers of interactions reported for different host and viral proteins. We developed additional permutation tests allowing for random node degrees in the network and found the permutation tests conserving aspects of the network topology to be more stringent in assessing the statistical significance of our observations. We therefore used the host- and virus-oriented tests to assess the statistical significance of the results of our analyses. KB and TL conceived this project and wrote the manuscript. KB designed the study, and collected and analyzed the data. All authors read and approved the final manuscript. Supplementary material: a list of all analyzed genes, scores and annotations; detailed results of the GO term enrichment tests; and information on the alignment quality."} +{"text": "Based on the analysis of the drafts of the human genome sequence, it is being speculated that our species may possess an unexpectedly low number of genes. The quality of the drafts, the impossibility of accurate gene prediction and the lack of sufficient transcript sequence data, however, render such speculations very premature. The complexity of human gene structure requires additional and extensive experimental verification of transcripts that may result in major revisions of these early estimates of the number of human genes."} +{"text": "To evaluate the clinical significance of serum levels of hepatocyte growth factor (HGF) in colorectal cancer patients, we measured the venous and portal concentrations of HGF in 60 patients. The tissue concentrations in the tumour and adjacent normal mucosa were also determined. The serum HGF concentration for the peripheral venous blood of the patients was significantly higher than that in normal controls. The content of HGF in cancer tissue was also significantly higher than that in normal mucosa, and it was correlated with the serum HGF concentration for the peripheral venous blood. The serum concentration of HGF reflected pathological features, including tumour size and lymph node or liver metastasis, and it showed an association with various preoperative nutritional parameters and the preoperative haemoglobin level.
The serum HGF concentration was also correlated with the serum concentrations of immunosuppressive acidic protein and interleukin-6, indices of the host's immunological condition. Serum HGF seems to be a useful index of the disease status of patients with colorectal carcinoma."} +{"text": "The potential applications of operative laparoscopy have expanded with improvements in technology and instrumentation. With newly developed techniques to complete both pelvic and paraaortic lymph node dissection, the use of the laparoscope has increased in patients with pelvic malignancies. Gynecologic oncologists are currently incorporating the techniques of operative laparoscopy in the management of patients with cervical, endometrial, and ovarian cancer. Multicenter prospective clinical trials are necessary to further define the role of laparoscopy in gynecologic oncology."} +{"text": "Cystic lesions of the pancreas are relatively uncommon.We describe the case of a young man with acomplex cystic mass located within the head of thepancreas. The patient underwent exploration withresection of the mass. Pathology revealed a ciliatedepithelial cyst, a rare cystic lesion of the pancreas."} +{"text": "We report a case of severe weight loss secondary to anorexia nervosa causing bilateral superficial peroneal nerve entrapment in a young female patient who was treated successfully by bilateral surgical decompression. Among entrapment neuropathies, superficial peroneal nerve (SPN) entrapment is relatively rare -8 and onSevere weight loss, as a result of anorexia nervosa, associated with common peroneal nerve entrapment is very rare -17 and SHerein we report a case of severe weight loss secondary to anorexia nervosa causing bilateral SPN entrapment in a young female patient who was treated successfully by bilateral surgical decompression.A 20-year-old, female university student presented to our outpatient orthopaedic clinic with a two month history of vague pain on the outer border of both legs, and numbness over the dorsum of the feet and big toes. Her symptoms were exacerbated by walking and running and partially relieved by elevation. She had to stop to rest after 30 minutes of walking because of intolerable pain.There was neither history of trauma or surgery to the lower limb nor history of lower back problems. There was, however, a history of severe weight loss of (30 kg) during the previous six months and the patient was diagnosed with anorexia nervosa using criteria from the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders (DSM-IV-TR) and the World Health Organization's International Statistical Classification of Diseases and Related Health Problems (ICD).Physical examination revealed bilateral tender points approximately 11 cm proximal to the ankle joint on the outer surface of the leg, Tinel sign was also positive bilaterally. 
There were sensory deficits on the dorsum of both big toes but no muscle weakness or abnormal reflexes.Examination of the lumbar spine and lower limbs revealed no clinical abnormalities in the joints and there was neither suspicion of nerve root compression at the level of the lumbar spine nor nerve entrapment at the neck of the fibula.Radiographic examination of the lumbar spine, legs and feet were normal and EMG studies were positive for bilateral entrapment neuropathy of the SPN proximal to the ankle joint with no abnormality of the common peroneal nerves or of the proximal nerve roots.After preoperative assessment, the patient was admitted for surgical treatment with the diagnosis of SPN entrapment. The operation was done under general anaesthesia, using pneumatic tourniquet. Bilateral explorations of the site of tenderness revealed adhesions of both SPNs to the fascia with perineural fibrosis. Careful dissections were done to free the nerves and neurolysis was successfully performed Figure . The nerSymptoms of bilateral peroneal nerve entrapment were relieved immediately and completely in the postoperative period. Physiotherapy was started immediately to prevent postoperative adhesions. No recurrence was observed in the first year following the operation.Superficial peroneal nerve syndrome is an entrapment neuropathy that usually results from mechanical compression of the nerve at or near the point where the nerve pierces the fascia to travel within the subcutaneous tissue.A thorough and accurate knowledge of the course of the SPN and its relationships is essential to understand the pathophysiology, and a thorough and careful physical examination is important for diagnosing this condition. Stephens et al. described a physical sign to identify the distal subcutaneous course of the SPN below the skin, primarily by means of plantar flexion and inversion of the ankle and foot and, secondarily by a passive flexion of the fourth toe .In his study Styf, described 3 provocative tests for nerve compression at rest at rest following exercise . In the Electrophysiological studies are helpful for the diagnosis, however, normal conduction velocity may be found especially at rest which does not exclude compression of the superficial peroneal nerve .Injection of the nerve with lidocaine or Marcaine just above the site of involvement may be the most valuable diagnostic tool. The patient can define the extent of relief obtained from such an injection, which can be helpful in defining the zone of injury and expected relief from surgical release or excision.Entrapment of the superficial peroneal nerve has traumatic and non traumatic causes. Local trauma and compression are the most common causes of nerve entrapment. This may be due to recurrent stretch injuries or certain positions like prolonged kneeling and squatting, which cause perineural fibrosis ,18. OedeNontraumatic causes of SPN entrapment are commonly due to anatomical variations such as fascial defects, with or without muscle herniation about the lateral lower leg, where the nerve is entrapped as it emerges into the subcutaneous tissue or a short peroneal tunnel proximally. Nerve compression in patients with fascial defects is explained by the normal increase in muscle relaxation pressure and intramuscular pressure at rest during and after exercise. 
This increase is sufficient to cause herniated muscle tissue and this can impinge upon or compresses the nerve .Lowdon reported a case of an abnormally long course of the SPN nerve through the deep fascia which was thought to have caused compression. Exercise may have exacerbated the symptoms by producing mechanical irritation or by raising the pressure in the peroneal compartment and thus increasing compression of the nerve .In our case, the bilateral involvement forced us to think about a systemic cause of SPN entrapment. The patient had severe loss of weight in a period of few months due to previously undiagnosed anorexia nervosa which may have caused changes in the subcutaneous tissues that led to adhesions and perineural fibrosis. Although the exact cause is unknown; SPN entrapment should be kept in mind especially in patients with severe weight loss and changes in body habits.The authors declare that they have no competing interests."} +{"text": "Increasingly mRNA expression patterns established using a variety of molecular technologies such as cDNA microarrays, SAGE and cDNA display are being used to identify potential regulatory genes and as a means of providing valuable insights into the biological status of the starting sample. Until recently, the application of these techniques has been limited to mRNA isolated from millions or, at very best, several thousand cells thereby restricting the study of small samples and complex tissues. To overcome this limitation a variety of amplification approaches have been developed which are capable of broadly evaluating mRNA expression patterns in single cells. This review will describe approaches that have been employed to examine global gene expression patterns either in small numbers of cells or, wherever possible, in actual isolated single cells. The first half of the review will summarize the technical aspects of methods developed for single-cell analysis and the latter half of the review will describe the areas of biological research that have benefited from single-cell expression analysis."} +{"text": "Infant feeding by HIV-infected mothers has been a major global public health dilemma and a highly controversial matter. The controversy is reflected in the different sets of WHO infant feeding guidelines that have been issued over the last two decades. This thematic series, 'Infant feeding and HIV: lessons learnt and ways ahead' highlights the multiple challenges that HIV-infected women, infant feeding counsellors and health systems have encountered trying to translate and implement the shifting infant feeding recommendations in different local contexts in sub-Saharan Africa. As a background for the papers making up the series, this editorial reviews the changes in the guidelines in view of the roll out of prevention of mother to child transmission (PMTCT) programmes in sub-Saharan Africa between 2001 and 2010. The papers in this thematic series highlight the multiple challenges to infant feeding that have surfaced within the framework of postnatal prevention of mother to child transmission of HIV (PMTCT). The focus is in particular on the implementation of the global HIV and infant feeding guidelines at the local level in various settings in sub-Saharan Africa. The symposium which preceded this publication dwelt on lessons learnt through the past decade of experiences with the implementation of the PMTCT programme. 
The symposium took place between 2 and 4 September 2008, in Rosendal, western Norway and was an interdisciplinary endeavour with students and scholars from eastern and southern Africa, Canada and Norway. The event offered an opportunity for dissemination of results from Masters, PhD and post-doctoral studies on the PMTCT challenge. The seven research papers included in this issue report on the experiences of women and their partners, of PMTCT counsellors and of policy makers in selected countries in sub-Saharan Africa between 2001 and 2009. During this period the WHO 2001 HIV and infant feeding guidelines . 7. 3], anWith the current knowledge and technology most cases of postnatal MTCT are preventable through antiretroviral (ARV) drugs and modifications in infant feeding practices. During the last decade an increasing number of HIV-infected women have gained access to antiretroviral treatment or prophylaxis effectively reducing transmission during pregnancy and birth , but theThe initial standard PMTCT programme package which was implemented in most sub-Saharan settings between 2001 and 2009 was exclusively preventive. The sole aim of all interventions including medication of the mother was to prevent the baby from acquiring HIV. The services included voluntary and later routine counselling and testing (VCT/RCT), different regimens of ARV prophylaxis to mother and child around the time of birth, and infant feeding counselling based on the 2001 WHO guidelines [The issue of infant feeding has been particularly challenging in a PMTCT context because the options available, breastfeeding or not breastfeeding, both involve risks to child health and survival. How to weigh the benefits of breastfeeding against the risk of HIV infection is an issue that has been vigorously debated. The dilemma becomes particularly acute since the greatest proportion the global burden of HIV and AIDS lies in sub-Saharan Africa where the major causes of infant death are malnutrition and infectious diseases . BreastfWithin PMTCT programmes, research on how to make breastfeeding safer identified the increased risks of mixed feeding . The cusThe WHO HIV and infant feeding guidelines have been shifting with developments in knowledge and technology. The WHO in collaboration with UNAIDS and UNICEF have to our knowledge produced 16 documents, possibly more, on infant feeding guidelines for HIV-infected women over the last quarter century ,16-29. TIn 1997-98 the WHO published new infant feeding guidelines which advised that all mothers should be counselled about possible feeding options and thus be allowed to make their own decision about infant feeding. This was interpreted as a major policy shift towards the promotion of replacement feeding. Based on this shift the distribution of free infant formula to HIV positive women enrolled in PMTCT programmes was initiated in some countries [With the rapidly emerging evidence of the challenges of replacement feeding in low income contexts the WHO, in 2001 were intAccording to this guideline the decision about whether or not to breastfeed should be made by every mother based on full information of the options available. The guiding principle was that HIV-infected mothers should be supported in making an informed decision about how to best feed their infants . 
This poAlthough the 2001 guidelines brought in the AFASS criteria which promoted an assessment of each woman's situation and choice, they still strongly communicated that replacement feeding was the first choice of infant feeding method for HIV positive women, and that exclusive breastfeeding should be discontinued as soon as replacement feeding was AFASS. Major challenges have been that both exclusive breastfeeding and replacement feeding have been found to be extremely difficult to adhere to for the large majority of the HIV-infected mothers -36, and In 2006, after strong evidence of the risks of childhood infections and malnutrition associated with replacement feeding, and with the path-breaking documentation of a higher HIV free survival rate among exclusively breastfed than among replacement fed infants , the WHOIn 2009, WHO launched the so-called Rapid Advice building\u2022 Mothers known to be HIV-infected should exclusively breastfeed their infants for the first six months of life, introducing appropriate complementary food thereafter, and continue breastfeeding for the first 12 months of life;\u2022 Mothers who decide to stop breastfeeding should stop gradually within one month; stopping breastfeeding abruptly is not advisable;\u2022 Mothers known to be HIV-infected should only give commercial infant formula milk as a replacement feed to their HIV uninfected infants or infants who are of unknown status, when specific conditions are met (referred to as AFASS);\u2022 Mothers known to be HIV-infected should be provided with lifelong antiretroviral therapy or antiretroviral prophylaxis interventions.The 2001 and the 2010 guidelines each produced in response to the existing evidence available, stand in sharp contrast to one another both in terms of the recommended first choice of feeding method ; breastfeeding cessation and in terms of the principle of informed choice . It is however predominantly the 2001 version promoting replacement feeding as the first choice and exclusive breastfeeding only if replacement feeding is not AFASS that has informed training and infant feeding counselling in PMTCT programmes across sub-Saharan Africa during the past decade. The HIV epidemic coupled with the assumed benefits of infant feeding formula for all HIV-infected mothers have in complex ways changed public ideas about infant feeding, and has introduced an important threat to well established breastfeeding practices.The pace, frequency, and characteristics of the changes have been influenced by continuing weaknesses in the scientific understanding of the mechanisms and risks of infection, but also by failure in early political leadership to demand timely funding to conduct research to properly address these weaknesses . The suiThe history of the shifting WHO guidelines on HIV and infant feeding in the period 1998-2010 can be seen as a history of retreat from recommendations based on narrow medical research and on ideologies of individual choice, to recommendations recognising the importance of local social and cultural context as well as local knowledge on breastfeeding and survival. Most significantly the shifts in infant feeding recommendations in the 2010 guidelines promoting exclusive breastfeeding for six months as the method of choice, and complementary feeding after six months are thus more in line with the public health recommendations to the general population. They are also more in line with customary infant feeding practices in sub-Saharan Africa. 
The move away from replacement feeding as first choice brings breastfeeding back as the prime way of feeding an infant and implies a major gain in public health. Moreover, it is expected that the new formulation will also protect HIV-infected women from suspicion and stigma attached to non-normative infant feeding practices. In the long run the most important outcome will be to protect the general public against spill-over effects from the replacement feeding promotion that has indirectly taken place through PMTCT infant feeding counselling. The papers included here look back on the past decade of PMTCT implementation and discuss the lessons learnt. They reveal different dimensions of the PMTCT guidelines and how they enter into action across national and local settings. They present glimpses of the intricate challenges encountered by the targets of global policy guidelines: the HIV-infected mothers, the PMTCT counsellors, local policy makers and peer supporters. The papers add to and challenge the strictly defined biomedical evidence, and bring to the forefront experience-based and context-bound knowledge. The problems involved in translating the 2001 WHO HIV and infant feeding guidelines to national policy and programming are dealt with in Chinkonde et al.'s paper from Malawi. The confusion in relating to the guidelines is more concretely revealed in the powerful accounts of HIV-infected mothers who have to handle their own HIV-positive status while at the same time struggling with the possibility of transmitting the virus to their infant through breastfeeding. Koricho et al.'s paper describes the confusion and fear that Ethiopian mothers face trying to adhere to the advice given by the nurse-counsellors. The lack of local applicability of the PMTCT guidelines is also revealed concretely in Engebretsen et al.'s paper from Uganda, where both exclusive breastfeeding and exclusive formula feeding were deemed inappropriate and unsatisfactory by the study participants. The acknowledgement of the importance of partner involvement in PMTCT programmes has led to a strong call for partner disclosure. A woman's disclosure of positive HIV status to her partner at the vulnerable time of pregnancy or after birth has, however, been experienced as highly problematic. Njunga and Blystad's paper describes the implications of partner disclosure within a matrilineal and matrilocal kinship structure in Malawi. At the core of the 2001 guidelines lies the concept of early and rapid cessation of exclusive breastfeeding at six months. But extended breastfeeding and mixed feeding are normative in large parts of the world. Levy et al.'s study from Malawi demonstrates the many obstacles of an emotional, practical and economic kind that HIV-infected women experience in their attempts to wean their children abruptly and at a very early age, and describes the demands of the PMTCT programme as a 'downloading of responsibility' onto HIV-positive mothers who are unable to secure the baby a safe transition to other foods without support. The modifications of customary infant feeding practices required in order to adhere to the 2001 WHO guidelines have been very complex and challenging. There has been a growing recognition that without support for the women who must relate to difficult infant feeding regimes there is little chance that they will succeed. A number of different initiatives of support have been launched in the past decade and include projects like mother-to-mother support groups, home based care, peer education and peer counselling. Nankunda et al.'s study from Uganda describes how highly the women in the study valued the support for exclusive breastfeeding from the peer educators and how this became decisive for their ability to adhere to exclusive breastfeeding. However, peer PMTCT support programmes as well as other support initiatives operate in a context fraught with tension and fear. The peer counsellors are left with the challenge of convincing HIV-infected mothers that either exclusive breastfeeding or exclusive replacement feeding is a safe and feasible way of feeding a baby and of reducing the risk of HIV transmission. This is not an easy task. Nkonki and Daniels's paper from South Africa demonstrates how the counsellors struggled to 'sell their service' to a sceptical group of potential recipients through a continuous negotiation of their own credibility. The papers together highlight some of the key challenges that emerged in the wake of WHO's 2001 HIV and infant feeding guidelines. They reveal the complexity of dilemmas and the adverse effects that can be generated by global policy guidelines that are very distant from local lives. The WHO infant feeding guidelines have changed considerably since the studies in this issue were carried out. In light of new evidence, operational experiences and increased availability of ARV both for prophylaxis and treatment, one may ask how relevant the results in the present papers are to current and future PMTCT programme implementation. Recent guidelines have a broader approach than the 2001 version. The protection of maternal prenatal and postnatal health, viral load and CD4 status have been included in addition to HIV-free survival of the child. The authors declare that they have no competing interests. KMM, AB and MDP wrote the first draft of the paper. DS contributed to subsequent drafts. PVE was a key person during the workshop and wrote a summing up paper that was drawn upon in the introduction. SCL commented on later drafts. All authors approved the final paper."} +{"text": "The details of surgical techniques for laparoscopic removal of endometriosis and adenomyosis are described briefly in textbooks and gynaecological journal articles. We have described a wide variety of techniques for the various procedures required in the treatment of endometriosis and adenomyosis, excluding hysterectomy. The principles are based upon those used in removal of primary cancer lesions.
The limitations of thermal ablation are discussed, and evidence of improved results after excision of lesions has been submitted for publication."} +{"text": "Sir, The majority of us who routinely practice fine needle aspiration biopsy (FNAB) understand the value of immediate on-site evaluation of adequacy/triage and interpretation of this procedure. Since the October 2009 National Coding Corrective Initiative (NCCI) policy manual publication, many of us have faced repeated inquiries about the appropriateness of using the CPT code 88172 pertaining to this immediate cytologic evaluation. The body of evidence is strong that the time required for on-site assessments by cytopathologists is well spent. In a comprehensive cost analysis, Layfield et al. found that on-site interpretations consume between 35 and 56 min, exceeding the average time required for frozen section evaluations by approximately 16 min. Not surprisingly, others have documented that without on-site evaluation, the rate of inadequate aspirates would increase, resulting in substantial institutional cost for repeat procedures and testing. Given this background and the related questions and uncertainty, it is worth noting that the College of American Pathologists (CAP) recently added a specific question about immediate adequacy assessments and initial interpretations and their documentation in the Laboratory Accreditation Program cytopathology section checklist, effective June 2009 (# CYP. 05325). The cytopathology community has the obligation to clearly advocate for reimbursement policies that support cost-effective cytopathology services for patients. It is our hope that this editorial will encourage individual pathologists, our professional organizations and the greater medical community to adequately support cost-effective cytopathology services. Because of widespread confusion on the current correct billing among payers and members, the ASC and CAP submitted a proposed change to the CPT for code 88172 to more clearly define the unit of service. The Economic and Government Affairs Committee of the American Society of Cytology strongly supports a clear definition of the unit of service for CPT code 88172 that allows the appropriate use of multiple units when multiple procedures are separately evaluated to assure adequacy of the material for diagnosis. As of this writing, the Center for Medicare Services (CMS) is expected to issue additional written guidance on the issue around November 2010. No competing interest to declare by any of the authors. http://www.icmje.org/#author Each author participated sufficiently in the work and takes public responsibility for appropriate portions of the content of this article. Each author acknowledges that this final version was read and approved. All authors qualify for authorship as defined by ICMJE"} +{"text": "To study the impact of these factors, we measured the uptake of fluorochrome-labelled IgG using confocal laser scanning microscopy, interstitial fluid pressure by the \u2018wick-in-needle\u2019 technique, vascular structure by stereological analysis, and the content of the extracellular matrix constituents collagen, sulfated glycosaminoglycans and hyaluronan by colourimetric assays. The impact of the microenvironment on these factors was studied using osteosarcomas implanted either subcutaneously or orthotopically around the femur in athymic mice. The uptake of IgG was found to correlate inversely with the interstitial fluid pressure and the tumour volume in orthotopic, but not subcutaneous tumours. No correlation was found between IgG uptake and the level of any of the extracellular matrix constituents. The content of both collagen and glycosaminoglycans depended on the site of tumour growth. The orthotopic tumours had a higher vascular density than the subcutaneous tumours, as the vascular surface and length were 2\u20133-fold higher. The data indicate that the interstitial fluid pressure is a dominant factor in controlling the uptake of macromolecules in solid tumours; and the site of tumour growth is important for the uptake of macromolecules in small tumours, extracellular matrix content and vascularization.\u00a9 2001 Cancer Research Campaign"} +{"text": "The authors present an unusual case of a polar mass in the frontal lobe of the brain, causing acute monocular visual loss in a 50-year-old woman with a history of breast carcinoma treated with surgery, radiation and chemotherapy. Neuroimaging demonstrated herniation of the gyrus rectus into the suprasellar cistern resulting in compression of the anterior visual pathway. Frontal lobe tumors often produce impairment in cognitive and motor functions of the patient. They can produce visual loss either from chronic papilledema or by direct compression of the optic nerve and chiasm. The latter is often caused by tumors located in the posterior part of the inferior surface of the frontal lobe. A 50-year-old nondiabetic and nonhypertensive woman complained of painless, progressive loss of vision in her left eye (OS) of one week duration. She did not have any history suggestive of raised intracranial pressure. There was no history of trauma. One year earlier she had undergone radical mastectomy for breast carcinoma followed by radiotherapy and chemotherapy. Ophthalmic examination OS showed visual acuity of no light perception. There was an afferent pupillary defect. Examination of the ocular motility and fundus was unremarkable. Examination of the right eye including visual field by confrontation method showed no abnormalities. Systemic examination revealed ascites and hepatomegaly. Magnetic Resonance Imaging (MRI) of the brain showed a rounded mass in the polar region of the left frontal lobe with significant edema in the surrounding region. Our patient was a known case of breast carcinoma with metastasis to the frontal lobe, liver and peritoneum. The frontal mass was located near the anterior end of the frontal lobe and did not cause any neurological deficit. MRI of the brain showed considerable edema of the medial aspect of the frontal lobe and herniation of the right gyrus rectus into the suprasellar cistern causing its obliteration. The extensive edema around the mass had apparently extended to the gyrus rectus causing it to herniate into the suprasellar cistern with compression of the intracranial part of the optic nerve and anterior chiasma. Clinically, the patient demonstrated only features of optic nerve compression. Careful examination of the visual field OD did not show any defect despite apparent anterior optic chiasmal compression. Usually, the prognosis in cases of brain metastasis following breast carcinoma is poor. In our case, the radiological findings of significant edema around the lesion and the previous history of surgery, radiotherapy and chemotherapy for breast carcinoma led to a provisional diagnosis of brain metastasis.
In many cases, patients are not forthcoming about previous history of malignancy, leading to delays in the diagnosis. Our case emphasizes the fact that visual loss can result from a tumor of the frontal lobe located some distance away from the anterior visual system, and the importance of obtaining detailed clinical information including history of previous malignancies in arriving at the right diagnosis and for timely management."} +{"text": "The prehistory of African trypanosomiasis indicates that the disease may have been an important selective factor in the evolution of hominids. Ancient history and medieval history reveal that African trypanosomiasis affected the lives of people living in sub-Saharan Africa at all times. Modern history of African trypanosomiasis revolves around the identification of the causative agents and the mode of transmission of the infection, and the development of drugs for treatment and methods for control of the disease. From the recent history of sleeping sickness we can learn that the disease can be controlled but probably not be eradicated. Current history of human African trypanosomiasis has shown that the production of anti-sleeping sickness drugs is not always guaranteed, and therefore, new, better and cheaper drugs are urgently required. African trypanosomiasis is caused by protozoan parasites of the genus Trypanosoma that live and multiply extracellularly in blood and tissue fluids of their mammalian hosts and are transmitted by the bite of infected tsetse flies (Glossina sp.). The distribution of trypanosomiasis in Africa corresponds to the range of tsetse flies and currently comprises an area of 8 million km2 between 14 degrees North and 20 degrees South latitude. At the turn of the millennium, the scale of sleeping sickness had almost reached, yet again, the levels of the epidemics seen at the beginning of the century. In 2001, the Organisation of African Unity (OAU) launched a new initiative, the Pan African Tsetse and Trypanosomiasis Eradication Campaign (PATTEC), to eliminate the tsetse fly from Africa. In contrast to earlier, much smaller control areas infested with only one tsetse fly species, the PATTEC initiative has to deal with a vast area of sub-Saharan Africa (~10 million km2) inhabited by at least 7 different Glossina species recognised as vectors for transmission of sleeping sickness. Therefore, many scientists are sceptical that the PATTEC project will succeed, as similar eradication campaigns failed in the past because the tsetse fly infested areas could not be isolated. The only new drug candidate currently in development for treatment of sleeping sickness is the diamidine pafuramidine (DB289). In January 2007, pafuramidine had completed enrolment for Phase III clinical trials in the Democratic Republic of Congo and Angola. There is also an urgent need for accurate tools for the diagnosis of human African trypanosomiasis. The existing tests for diagnosis are not sensitive and specific enough, due to the characteristically low number of parasites found in the blood of sleeping sickness patients. Therefore, the Foundation for Innovative New Diagnostics (FIND) and the WHO launched, in 2006, a new initiative for the development of new diagnostic tests to support the control of sleeping sickness. The history of African trypanosomiasis gives an example of how a disease not only affected the evolution of humans but also the cultural and economic development of people in sub-Saharan regions. From the historical events of the 20th century one can learn that a concerted approach of systematic case detection and treatment is the appropriate method for the control of sleeping sickness and that discontinuation of these control measures will lead to re-emergence and spread of the disease. History has also shown that African trypanosomiasis always prevented the introduction of stock farming in endemic areas. A consequence of this is that much of tropical Africa is still in its natural state today and has not been converted into grassland for cattle breeding. The author declares that he has no competing interests."} +{"text": "Exactly five years ago a little flower in the garden of wisdom blossomed as the Hair Research Society of India. Early in 2004, when I registered to participate in the \"World of Hair\" - the fourth intercontinental meeting of hair research societies in Berlin, I was asked to contact the hair research society of my country for a letter of recommendation. Having discovered that such an organization did not exist in my country, I sought the help of my teacher Prof. Patrick Yesudian who had been a \"lead kindly light\" in all my endeavors. Giving no room for second thought, he said, \"why not we?\" The seed sown by him was nurtured into a sapling by Prof. S. Shivakumar and enthusiastic colleagues with the thirst for knowledge. On 13 June 2004, from the land of diverse culture, history and mystery the Hair Research Society of India was started with the aims of gaining and disseminating the knowledge of hair with the transformation of the same into safe and ethical practices in order to abolish quackery and promote science. It was decided to conduct quarterly meetings with guest lectures by inter-disciplinary specialists, clinical case discussions and to publish a newsletter which would carry the proceedings of the meeting. It was Prof. Rudolph Happle, the king of limericks who, \"when confronted with matters genetic; inclined to be somewhat frenetic; thought to be archaic; invokes a mosaic; with Blashco becomes energetic\", wanted the Hair Research Society of India to be a \"Growing Hair Society\" and not simply a hair-growing society. Our Indian hair newsletter used to be like a sumptuous dinner with a starter of a short note on the current trichological delicacy by our president Prof. Patrick Yesudian, a main course of a highly informative guest lecture and a cool dessert of challenging clinical hair disorders. Prof. Ulrike Blume-Peytavi, the past president of the European Hair Research Society (EHRS), appreciated our intention of stimulating our colleagues with the knowledge of trichology and encouraged photo documentation of clinical hair disorders in the Indian scenario. The International Journal of Trichology is the world's first dedicated peer-reviewed journal for hair. The journal is fortunate to have a genius like Prof. Desmond J Tobin as the executive editor, who designed the creative cover page of the journal so meticulously.
The editorial board glitters with the luminaries in trichology across the globe.Having conducted 20 quarterly meetings and published 20 newsletters, the Hair Research Society of India has reached yet another milestone in the journey of academic pursuit with the launch of the Thiruvalluvar says in Thirukural, a repertory of universal thoughts and truths, \"As deep you dig the sand spring flows; As deep you learn the knowledge grows.\"Inspired by the EHRS, the Mecca of hair science, the Hair Research Society of India has come a small way in this half a decade. At this moment I thank the presidents, members, researchers, colleagues and patients from whom we learn. The joy of learning continues as saint"} +{"text": "Variations in the position of the bifurcation of the common carotid artery and the origin or branching pattern of the external carotid artery are well known and documented. Here, we report the trifurcation of the right common carotid artery in a male cadaver aged about 55 years. The right common carotid artery was found to divide into the external and internal carotids and the occipital artery. High division of bilateral common carotid arteries and a lateral position of the right external carotid artery at its origin were also observed in the same cadaver. There were two ascending pharyngeal arteries on the right side - one from the occipital artery and another from the internal carotid artery. The intraarterial approach is one of the most important routes for the administration of anticancer drugs for head and neck cancers. A profound knowledge of the anatomical characteristics and variations of the carotid artery such as its branching pattern and its position is essential to avoid complications with catheter insertion. The right common carotid artery originates in the neck from the brachiocephalic trunk while the left arises from the aortic arch in the thoracic region. The cervical portions of the common carotids resemble each other very closely. The common carotid artery is contained in a sheath known as the carotid sheath, which is derived from the deep cervical fascia and also encloses the internal jugular vein and vagus nerve, the vein lying lateral to the artery and the nerve between the artery and vein on a plane posterior to both. At approximately the level of the fourth cervical vertebra, the common carotid artery bifurcates into an internal carotid artery and an external carotid artery in the carotid triangle. The external carotid artery lies anteromedial to the internal carotid artery at its origin but becomes anterior and lateral as it ascends. In the neck, the external carotid artery gives off six branches: superior thyroid, lingual, facial and occipital, ascending pharyngeal and posterior auricular arteries. Variations of the common carotid artery include the rare absence of the common carotid artery,2 the higThe carotid system of arteries were observed for variations in 25 cadavers for the period of 3 years from 2004-2007, in routine educational dissection for undergraduate students. In the academic year of 2006-2007 in our department, this variation of right common carotid was observed in a male cadaver aged about 55 years. The right common carotid artery divided into the external and internal carotids and the occipital artery. High termination of both common carotid arteries and the lateral position of the right external carotid artery at the origin were also observed in the same specimen. 
The branching pattern was also different in the right external carotid artery. In the male cadaver, on the right side, the common carotid artery divided at a higher level, coinciding with the level of the tip of the hyoid bone. The length of the right common carotid artery was 10.5 cm. The origin of the occipital artery from the carotid bifurcation has been reported by Quain, Livini, Gurburz et al. The position of the carotid bifurcation reflects the degree of embryological migration of the external carotid artery and is variable. The first report of a lateral position of the external carotid artery was that of Hyrtl in 1841. Carotid endarterectomy is the main treatment for atherosclerotic plaques of the cervical internal carotid artery. The branches of the external carotid artery are the key landmarks for adequate exposure and appropriate placement of cross-clamps on the carotid arteries. It is necessary to understand the surgical anatomy of the carotid arteries to carry out successful removal of plaques and minimize postoperative complications in a bloodless surgical field. Transcatheter embolization procedures in the external carotid artery are largely used on hypervascular tumors, epistaxis and trauma. The patterns of variability in the branches of the carotid artery are of paramount importance not only in clinical practice but also in theoretical considerations."} +{"text": "We present a case of simultaneous dorsal perilunate dislocation of both wrists, without a history of a fall on outstretched hands. In contrast, it appeared that the mechanism was the reverse. His hands were held in radial deviation with wrists in full palmar flexion. The forearms were in neutral position and elbows in mid-flexion. The wrists were suddenly and forcibly pronated. The radiographs of both wrists showed dorsal perilunate dislocation with avulsion fracture of the tip of the ulnar styloid process and avulsion fracture of the posterior horn of the lunate. Radial translation of the carpal bones was also noted. The mechanism is proposed and discussed. Fractures or dislocations of carpal bones usually result from a fall on outstretched hands with wrists in hyperextension. This usually happens in motor vehicle accidents. Dorsal perilunate dislocation of the wrist is one of the commonest patterns. The present concept of the mechanism of dorsal perilunate dislocation in the literature is a fall on an outstretched hand causing dorsiflexion and axial impaction of the carpal bones on the forearm bones with ulnar deviation and supination of the wrist over the fixed pronated forearm. A 25-year-old man, an apprentice welder in a heavy steel industry, was returning home after watching a late-night movie show. While he was walking alone, before he realized it, two robbers followed and stopped him for money. Then the two robbers, one on each side, twisted his arms toward his back. They held his forearms in neutral and elbows in 90\u00b0 flexion. His wrists were forced in full volar flexion, radial deviation, and pronation. The situation was simulated in the diagram. In the present case, the mechanism of posterior perilunate dislocation is contrary to what is being taught and described in books.
The hands in this case, held in radial deviation with wrist in full volar flexion, were forcibly pronated over steadily held forearms in neutral position. Because of this maneuver, the ulnar collateral ligament becomes tight and dorsal intercarpal ligaments with capsule get stretched. On the volar aspect of the wrist, the tough radioscaphocapitate and radiolunate ligaments along with other intercarpal ligaments with capsule become lax. If the hand in such a position is forcibly pronated, it results in the avulsion of ulnar collateral ligament of the wrist with or without fracture of the tip of ulnar styloid process. Further pronation of the hand causes rotation of the head of the capitate in lunate producing avulsion of the dorsal lunocapitate with avulsion fracture of the posterior lip or horn of lunate. Finally, the same pronation force produces tear of the scapholunate ligaments, which results in the posterior dislocation of the carpi to lie on the posterior surface of the lunate and distal articular margin of the radius.Posterior perilunate dislocation of the wrist could also happen if the individual falls forwards on the dorsum of the radially deviated hand with wrist in full volar flexion and on outstretched opposite hand, to result in forcible pronation twist of the wrist with hand in radial deviation.Unrelated to the trauma, a persistent bilateral posterior perilunate dislocation due to generalized laxity in case of Marfan syndrome was reported."} +{"text": "Exocrine cells of the pancreas of male rats bearing the Walker carcinoma show a striking accumulation of stainable neutral lipid in the form of small aggregated droplets in the base of the cells. In several cases, epithelial cells of small ducts also contained fat. Stainable lipid is sometimes present in cells of the pancreas of normal rats and in rats in which the Walker tumour has failed to grow: lipid in duct cells was confined to tumour-bearing animals."} +{"text": "Twenty-eight mammary carcinomata were maintained in organ culture in the presence of various hormones. The effects of the hormones have been assessed histologically by estimation of total dehydrogenases activity of the pentose glycolytic pathway and by the incorporation of tritiated thymidine or uridine into DNA or RNA. No significant effects on tumour cell activity due to hormones have been observed."} +{"text": "A 66-year-old man underwent abdominoperineal resection for advanced rectal cancer. On day 3 post surgery, a decompression tube was placed for postoperative ileus. Symptoms associated with ileus immediately disappeared. On day 7 post surgery, the patient vomited large amounts of fresh blood and became hemodynamically unstable. An emergency angiography revealed active bleeding from the stump of the superior rectal artery communicating with the third portion of the duodenum. Complete obliteration of the stump by proximal coil embolization was performed to achieve successful hemostasis. The postclinical course was uneventful and the patient was discharged on day 40 post surgery. The ligation of the inferior mesenteric artery (IMA) or superior rectal artery (SRA) is required for the complete dissection of regional lymph nodes in rectal cancer. Here we report a case of delayed hemorrhage arising from the stump of the SRA after abdominoperineal resection for rectal cancer. Postoperative bleeding in colorectal surgery has been commonly reported in cases of hemorrhage arising from pseudoaneurysm of the pelvirectal vessels . 
A 66-year-old man underwent abdominoperineal resection of the rectum for advanced rectal cancer after preoperative chemoradiotherapy. During the operation, the root of the inferior mesenteric artery (IMA) was not ligated. The IMA was tagged and preserved, separating the nervous and lymphatic tissues from the root to a site just peripheral to the confluence of the left colic artery (LCA), then the superior rectal artery (SRA) was ligated. One of the rare points in the present case is the site of bleeding. Delayed postoperative hemorrhage in colorectal surgery has rarely been reported. All reports, including Japanese case reports, have demonstrated that the usual source of hemorrhage is a pelvirectal space associated with pseudoaneurysm of the ramification of the iliac artery and that the mechanism for formation of pseudoaneurysm was violation or exposure of the tunica adventitia of these vessels caused by lymph node dissection or postoperative anastomotic leakage. In the present case, the patient developed delayed postoperative hemorrhage 4 days after placement of the decompression tube for postoperative ileus. Although we could not identify the formation of pseudoaneurysm on the emergency CT or angiography, it is assumed that exposure of the tunica adventitia of the IMA and contact between the decompression tube and the stump of the SRA caused a pseudoaneurysm of the stump. To avoid this severe complication, we should improve surgical procedures including the reduction of the clearance of peritoneum and expansion of mobilization of the left colon to the splenic flexure, leading to relief of excessive tension in the stump of the SRA, the mesenteric root, and the duodenum."} +{"text": "The Inter-Regional Epidemiological Study of Childhood Cancer included 43 cases of soft tissue and 30 cases of bone sarcomas, together with their 146 matched controls. Analysis of a wide range of aetiological factors revealed few risk factors relating to events during the index pregnancy, the earlier medical experiences of the case child, or parental medical, occupational and smoking history. Associations which did emerge included: lower birth weight in children with Ewing's tumour, an excess of mothers of children with soft tissue sarcoma with symptoms of toxaemia in pregnancy; and more children with rhabdomyosarcoma who received antibiotics soon after birth. There was some evidence that mothers of children with soft tissue sarcoma may have had reduced fertility with a significant excess of the case mothers having no other pregnancies. Slight excesses of congenital malformations in the case children and of malignant and benign/borderline neoplastic disease in the older mothers were consistent with the existence of a degree of genetic predisposition in the development of the tumours in this series."} +{"text": "In an attempt to rationalize the use of intraperitoneal drainage of the subhepatic space after simple, elective cholecystectomy, a prospective study was designed to compare the post-operative course with and without drainage. There was a higher incidence of postoperative fever of unknown origin and wound infection in the drained group. In the group without drainage the hospital postoperative stay was shorter and there were no complications.
The results suggest that routine surgical drainage afteruncomplicated cholecystectomy is unnecessary and could be a source of postoperative fever and a higherincidence of wound infection."} +{"text": "Dermatosis neglecta is an often misdiagnosed and under-diagnosed condition. In dermatosis neglecta, a progressive accumulation of sebum, sweat, keratin and other dirt and debris, occurs due to inadequate local hygiene resulting in a localized hyperpigmented patch or a verrucous plaque. Vigorous rubbing with alcohol-soaked gauze or soap and water results in a complete resolution of the lesion. This is the first case of dermatosis neglecta reported in a patient with multiple traumatic injuries.We report a case of a 35-year-old male Caucasian of Pakistani origin, with multiple fractures, neurological deficit and immobility sustained in a fall, leading to the development of dermatosis neglecta of the left hand.Early and prompt clinical recognition of this condition eliminates the need for aggressive diagnostic and therapeutic procedures. The term dermatosis neglecta was first coined by Poskitt et al. in 1995 to denote a condition in which formation of a localized hyperpigmented lesion occurs as a consequence of lack of cleanliness of a particular body part or region, usually due to some disability [The lesion forms due to a combination of tallow, sebum, sweat, keratin and bacteria in the unclean area. The time of evolution is usually 2 to 4 months and the patients usually have an associated chronic disease characterized by pain or immobility [A 35-year-old man presented with a 3-week history of progressive blackish discoloration of the dorsum of the left hand along with increased verrucosity and scaling over his palm Figure . There wThe patient had sustained multiple fractures of the left humerus and metacarpals, along with dislocation of the left shoulder and radial nerve palsy, in a fall about 2 months previously which had left the limb immobile and numb. He was gradually recovering the motor and sensory functions but still handled the limb very gingerly.Considering the history and clinical examination, a diagnosis of dermatosis neglecta was made. The area over the dorsum of the hand was cleaned with a methanol swab, revealing completely normal skin underneath. The patient was prescribed a keratolytic ointment for the palmer surface and advised to maintain better hygiene of the affected area despite the disability. Upon follow-up examination two weeks later, the hand was completely devoid of any pigmentation or verrucosity Figure .The term dermatosis neglecta is used to describe a condition in which localized hyperpigmentation and scaling of the skin occurs as a consequence of poor hygiene of a particular body part and the lesion can be easily rubbed off using soap and water or an alcohol soaked swab. The lack of cleanliness is usually a result of hyperesthesia or prior trauma of the affected region ,3. TerraClinically, the patient presents with a hyperpigmented patch or plaque with a variable degree of scaling and verrucosity. Adherent cornflake-like scaling has been described . 
The patCases have previously been described at the site of pacemaker insertion, mastectomy surgery and radiotherapy ; and in Differential diagnosis includes dermatitis artefacta which is an act of commission rather than an act of omission as is the case in dermatosis neglecta , verrucoTreatment includes counselling and encouraging the patient to maintain appropriate hygiene of the affected region in spite of his or her disability. Daily lightly scrubbing of the affected area with soap and water or alcohol is effective in most cases. For more resistant and verrucous lesions, application of a keratolytic agent in combination with an emollient may be required.Dermatosis neglecta should be kept in mind in the differential diagnosis of all hyperpigmented localized lesions, especially in a patient with some accompanying disability, as its prompt recognition can eliminate the need for any elaborate diagnostic or therapeutic endeavours.Written informed consent was obtained from the patient for publication of this case report and any accompanying images. A copy of the written consent is available for review by the Editor-in chief of this journal.The authors declare that they have no competing interests.SNRQ conceived the case report and prepared the initial draft of the manuscript. AE played an important role in the initial diagnosis, treatment and follow-up of the patient as well as in writing the manuscript. NR contributed significantly to the final draft of the manuscript and analysis of relevant literature. All authors read and approved the final manuscript."} +{"text": "Neisseria gonorrhoeae provides an example of relevance to the female genital tract. A review of the virulence factors of the gonococcus is presented to serve as an example of the variety of virulence properties associated with pathogenic bacteria. Molecular biology has begun to clarify one of the important paradigms of pathogenic bacteriology\u2014that bacteria change their expression of virulence properties in response to their location within a host or to the stage of infection. Thus, infection involves not only the possession of virulence factors, but also the carefully controlled use of those factors. Virulence is often controlled by the coordinate expression of many virulence-associated genes in response to one environmental signal. With regard to low-virulence organisms present in the female lower genital tract, we are beginning to identify some of their virulence attributes. Examples from the work of our laboratory include the hemolysin of Gardnerella vaginalis and an immunosuppressive mycotoxin produced by Candida albicans. Demonstrating the coordinate expression (or other control mechanisms) of virulence factors in these sometimes innocuous and sometimes inimical organisms represents the next frontier in the study of normal vaginal microbiology.The vast majority of infections involving female pelvic structures arise from organisms that are members of the normalflora. In addition, exogenous organisms that invade through the lower genital tract must interact with organisms that are part of the host's flora. In contrast to the concept that the normal flora is entirely innocuous, recent research has begun to identify what appear to be virulence attributes among these ordinarily low-virulence organisms. 
Most of our understanding of virulence has been derived from highly virulent organisms, of which"} +{"text": "For the past 20 years thickness of the primary tumour has been accepted as the most important guide to prognosis for patients with primary cutaneous malignant melanoma. The changing epidemiology of melanoma with an increasing number of patients with thin tumours has necessitated a reappraisal of this, with particular reference to interactions among tumour thickness, the patients' sex and the presence or absence of ulceration of the primary tumour. All primary cutaneous malignant melanomas diagnosed in Scotland between 1979 and 1986 were used as the test group (1978 patients). The proportional hazards model was used on all potential risk factors in the database and their two-way interactions, and the resulting models based on stepwise procedures were subsequently validated on 289 melanoma patients first diagnosed in 1987 in the same geographic area. Four distinct subgroups of males and females with ulcerated or non-ulcerated lesions were identified. For females with ulcerated lesions, tumour thickness, mitotic count and anatomical site of primary all gave valuable prognostic information, whereas for females with non-ulcerated lesions only tumour thickness was of prognostic value. For males with ulcerated lesions, level of invasion was the only prognostic guide, while for males with non-ulcerated lesions both tumour thickness and level of invasion contributed significantly to prediction of prognosis. Prognosis markedly different across subgroups of the melanoma population, even to the extent that essential prognostic factors are not the same in the distinct subgroups. Verification of these prognostic guides derived from 1979-86 patients has been achieved for all patients diagnosed with melanoma in 1987 from the same geographic area. These data will therefore be useful aids for clinicians managing patients."} +{"text": "The lability of cell surface histocompatibility antigens of 2 murine lymphomata was examined. These 2 tumours differ greatly in their capacity to metastasize in syngeneic hosts. Cells of the metastatic lymphoma released histocompatibility antigens in vivo and in vitro at a greater rate than cells of the non-metastasizing lymphoma. Antigen/antibody complexes formed by the addition of allo-antiserum to intact cells disappeared more rapidly from the surface of cells of the metastatic line. We propose that the instability of surface antigens may be an integral feature of malignant cells and that there may be a quantitative relationship between the lability of membrane components and the capacity of the tumour to metastasize."} +{"text": "Paediatricians can be empowered to address the Priority Mental Health Disorders at primary care level. To evaluate the effectiveness of a collaborative workshop in enhancing the adolescent psychiatry knowledge among paediatricians.A 3-day, 27-hours workshop was held for paediatricians from different regions of India under the auspices of the National Adolescent Paediatric Task Force of the Indian Academy of Paediatrics. A 5-item pretest-posttest questionnaire was developed and administered at the beginning and end of the workshop to evaluate the participants' knowledge acquisition in adolescent psychiatry. Bivariate and multivariate analyses were performed on an intention-to-participate basis.Forty-eight paediatricians completed the questionnaire. 
There was significant enhancement of the knowledge in understanding the phenomenology, identifying the psychopathology, diagnosing common mental disorders and selecting the psychotropic medication in the bivariate analysis. When the possible confounders of level of training in paediatrics and number of years spent as a paediatrician were controlled, in addition to the above areas of adolescent psychiatry, the diagnostic ability involving multiple psychological concepts also gained significance. However, both in the bivariate and multivariate analyses, the ability to refer to appropriate psychotherapy remained unchanged after the workshop. This workshop was effective in enhancing the adolescent psychiatry knowledge of paediatricians. Such workshops could strengthen paediatricians in addressing the priority mental health disorders at the primary-care level in countries with low human resources for health, as advocated by the World Health Organization. However, it remains to be seen if this acquisition of adolescent psychiatry knowledge results in enhancing their adolescent psychiatry practice. In most of the developing countries, training in adolescent psychiatry is largely limited to mental health specialists; paediatricians and other primary care physicians receive little training. As a result, effective diagnostic, treatment and prevention models that have been developed are not yet widely applied in primary-care paediatric settings. It has been suggested that gaining practical experience and training in adolescent psychiatry is an effective way for the paediatrician to acquire perspectives and skills that are helpful in hospital or community paediatrics. It is evident from the mental health burden and resources mismatch in developing countries that national efforts are required to restructure paediatric mental healthcare delivery. With this in mind, we conducted the Strengthening the Paediatrician Project, a collaborative workshop for paediatricians on adolescent psychiatry with psychiatrists. Here we focus on the effectiveness of the workshop in enhancing adolescent psychiatry knowledge among paediatricians. Adolescent psychiatry training in such special workshops may require evidence of effectiveness, and this type of data has been difficult to obtain. In an earlier paper we have studied the need, content and process of the workshop. We review only the relevant aspects of our methods here; an extensive description of the workshop has been presented in the accompanying paper. Data used in the present analyses are from the Strengthening the Paediatrician Project (SAPP), a seminal adolescent psychiatry workshop to explore the need, content, process and effectiveness of a workshop for paediatricians in collaboration with psychiatrists. The workshop was conducted at the Child Development Centre, Thiruvananthapuram, from the 19th to 21st of June 2006 under the auspices of the NFLLSE of the Indian Academy of Paediatrics; it ran for 27 hours spread over three days. The content facilitation was done by psychiatrists (facilitators) who were proficient in the theory and practice of psychiatry. The participants were divided into five groups of about 8-10 each, and five psychiatrists invited by the NFLLSE facilitated each group. After the pre-workshop assessment of knowledge related to adolescent psychiatry, the two morning sessions on the first day introduced adolescent psychiatry as well as the systems in mind, with their phenomena and related psychopathology identifiable in a mental status examination.
The two sessions in the afternoon focused on identifying the psychopathology with case vignettes and on interviewing 'cases' (enacted by psychiatrists) for the psychopathology. The morning session of the second day translated the psychopathology noted in the mental status examination of the 'cases' to the International Classification of Diseases: Mental and Behavioral Disorders - Tenth version (ICD-10). The third day addressed the PMHD that require medication and the pharmacotherapy of these disorders. In the post-lunch session, the post-workshop assessment was conducted. In brief, the process facilitation included didactic teaching using audio-visual materials, interaction with the child psychiatrist to clarify queries, small-group case work-up with case vignettes, presentation of the case by a participant from each group supported by the respective group members, identification of the psychopathology and diagnosis of the PMHD using role plays, and diagnostic interviews of the 'cases' based on the ICD-10 diagnosis. Finally, the video recordings of the interviews were analysed by the members of each group, and then together with their facilitators and the child psychiatrist, for interviewing skills, mental status examination and diagnostic formulation. A brief questionnaire to assess the acquisition of adolescent psychiatry knowledge by paediatricians was specifically developed and used. It had five multiple-choice items and each correct endorsement was given a score of one, and thus a score range of 0-5 was possible. Each of the five items was intended to evaluate one of five areas, namely: (1) understanding of the phenomenology, (2) identification of the psychopathology, (3) diagnosis of the PMHD, (4) selection of the appropriate psychotropic medication and (5) non-pharmacological interventions for which appropriate referrals would have to be made. The same multiple-choice questions were completed at the beginning of Day 1 before the first session began and during the last session of Day 3 before the focused group discussion. The descriptive data were presented as percentages, and the acquisition of knowledge was analysed using the Wilcoxon matched-pairs signed-ranks test. Multivariate linear regression was done to adjust for the confounding effect of the level of paediatric training and years of experience in paediatric care. To account for the 14.6% of the participants who left the workshop immediately prior to the closing session to catch commuter and intercity trains, we did an intention-to-participate analysis in which the pretest scores were brought forward as the posttest scores. Significance was set at P < 0.05 (two-tailed) and data were analysed using SPSS 11.5. The participant characteristics are described in the accompanying paper. In the bivariate analysis, the overall knowledge of the paediatricians improved statistically significantly after the workshop. The maximum gain was in the areas of understanding the phenomenology, identifying the psychopathology and diagnosing common mental disorders, as well as selecting the psychotropic medication.
The areas that did not show statistically significant gains were making diagnoses that involved multiple psychological constructs and the ability to refer for appropriate psychotherapy (Table). In the multivariate regression analysis, when the possible confounders, namely the level of training in paediatrics and the number of years of experience in treating children, were adjusted for, the acquisition in overall knowledge, understanding of phenomenology, identifying psychopathology, diagnosing the PMHD and selecting the psychotropic medication remained significant, as in the bivariate analysis (Table). We are not aware of any study documenting the effectiveness of an inter-disciplinary collaborative workshop to enhance the acquisition of adolescent psychiatry knowledge among academic and practicing paediatricians using multimodal training techniques. However, the findings from our study are consistent with those of other studies found in the pharmacy, nursing and medical education literature. Our findings suggest that this collaborative, multimodal training approach to teaching is enjoyable and effective in the acquisition of theory and clinical skills related to adolescent psychiatry. The authors speculate that paediatricians possibly displayed a positive attitude to learning adolescent psychiatry even before attending the training workshop; however, they acquired adolescent psychiatry knowledge and skill significantly after the workshop. A statistically significant increase in questionnaire responses was observed in 4 of the 5 questions, and therefore it can be concluded that there was a relationship between the training intervention and the increase in knowledge for all but one area. When the mental illnesses addressed in the workshop were viewed from a biopsychosocial perspective, the significant increase in knowledge of phenomenology, diagnosis and pharmacological management and the insignificant improvement of knowledge in psychological management and referral for psychotherapy suggest that the workshop was more successful at increasing 'medical constructs' and less successful at changing 'psychosocial constructs' of the paediatricians relevant to adolescent psychiatry. Research on the outcomes of educational improvement interventions can be utilized to strengthen the theoretical basis for required regulatory training as well as to validate interventions for health-care education. This knowledge and skill acquisition suggests that, when this adolescent psychiatry module is added to various training processes such as postgraduate training or CME, it could successfully increase knowledge of the identification of psychopathology, classificatory-system-based diagnosis of disorders, psychopharmacological management and feasible psychological interventions or referrals, as recommended by the World Health Organization. The workshop elements focused strongly on cognitive knowledge, with the assumption that an increase in knowledge would result in a concomitant improvement of attitudes and practicing skills. It may be possible to develop an additional training element that specifically addresses underlying assumptions and fears that can compromise the clinical skills that should emerge from the knowledge gained. Such a training workshop might utilize open discussions or hands-on approaches. The addition of a clinical psychologist to the multidisciplinary training team may improve the outcome. Further research is needed to focus on the specific components of the workshop.
While this study evaluated the impact of the entire multi-element workshop, no conclusions could be drawn for the individual elements of the intervention such as the didactic sessions, case-vignettes, simulated case workups, or video feedbacks. Which of these intervention elements had the most impact on increasing the knowledge is conjectural. The study should also be extended to other teaching settings (like conference and CME programs) and the teaching elements themselves could then be modified to include other methods designed to specifically address these settings . In the future, based on the positive response, this multimodal training with a collaborative approach will be continued in the Postgraduate Diploma in Adolescent Paediatric health training program at Child Development Centre. The next learning experience will occur in our Postgraduate Diploma in Developmental Neurology program in which neurologist will learn basic theory and practice of adolescent psychiatry.The main caveats of this study are the specific nature of the training subject and the nature of the population. Firstly, the study assumed that the choice of data gathering instruments was appropriate for the task at hand. While the present study utilized a specific set of knowledge evaluation questions that concentrated on what the interdisciplinary team believed represented appropriate concerns of paediatricians facing adolescent mental health issues at the primary care level, all of the specific needs of the paediatricians at different practice settings were not assessed during this study. An expanded and validated knowledge evaluation instrument could be beneficial in identifying real knowledge acquisition. Secondly, this study teases out the adolescent psychiatry component of a multicomponent workshop for measurement and therefore lack of a comprehensive measure inclusive of the various components of the workshop could have negatively affected the performance of the participant in answering the questionnaire. Finally, as this study utilized voluntary participation rather than specific random sampling, extensions of these conclusions to other paediatricians are understandably weakened as possibly paediatricians motivated to learn the discipline of adolescent psychiatry only responded.In conclusion, this model of inter-disciplinary collaborative, multimodal educational workshop is effective in enhancing the adolescent psychiatry knowledge among paediatricians. However, it remains to be seen if the paediatricians are able to retain the acquired knowledge of adolescent psychiatry and apply in their clinical practice as well. If such information retention and application follows, this model of strengthening the paediatricians can partly help reinforce the efforts of WHO in addressing the Priority Mental Health Disorders among the adolescents. Further studies to explore if the acquired adolescent psychiatry knowledge is applied and thus integrated in clinical practice are required.CME: Continuing the Medical Education; ICD-10: International Classification of Diseases: Mental and Behavioral Disorders - Tenth version; NFLLSE: National task Force on Family Life and Life Skill Education; PMHD: Priority Mental Health Disorders; SAPP: Strengthening the Paediatrician Project.The authors declare that they have no competing interests.PSSR was involved in the conception, designing, data analysis and interpretation, drafting and approving the final version. 
NMKC was involved in the conception, drafting and revising the final draft. All authors read and approved the final manuscript."} +{"text": "Dihydropyrimidine dehydrogenase (DPD) is responsible for the breakdown of the widely used antineoplastic agent 5-fluorouracil (5FU), thereby limiting the efficacy of the therapy. To identify patients suffering from a complete or partial DPD deficiency, the activity of DPD is usually determined in peripheral blood mononuclear cells (PBM cells). In this study, we demonstrated that the highest activity of DPD was found in monocytes followed by that of lymphocytes, granulocytes and platelets, whereas no significant activity of DPD could be detected in erythrocytes. The activity of DPD in PBM cells proved to be intermediate compared with the DPD activity observed in monocytes and lymphocytes. The mean percentage of monocytes in the PBM cells obtained from cancer patients proved to be significantly higher than that observed in PBM cells obtained from healthy volunteers. Moreover, a profound positive correlation was observed between the DPD activity of PBM cells and the percentage of monocytes, thus introducing a large inter- and intrapatient variability in the activity of DPD and hindering the detection of patients with a partial DPD deficiency. \u00a9 1999 Cancer Research Campaign"} +{"text": "High-quality epidemiologic research is essential in reducing chronic diseases. We analyzed the quality of systematic reviews of observational nontherapeutic studies.We searched several databases for systematic reviews of observational nontherapeutic studies that examined the prevalence of or risk factors for chronic diseases and were published in core clinical journals from 1966 through June 2008. We analyzed the quality of such reviews by using prespecified criteria and internal quality evaluation of the included studies.Of the 145 systematic reviews we found, fewer than half met each quality criterion; 49% reported study flow, 27% assessed gray literature, 2% abstracted sponsorship of individual studies, and none abstracted the disclosure of conflict of interest by the authors of individual studies. Planned, formal internal quality evaluation of included studies was reported in 37% of systematic reviews. The journal of publication, topic of review, sponsorship, and conflict of interest were not associated with better quality. Odds of formal internal quality evaluation and either planned, formal internal quality evaluation or abstraction of quality criteria of included studies increased over time, without positive trends in other quality criteria from 1990 through June 2008. Systematic reviews with internal quality evaluation did not meet other quality criteria more often than those that ignored the quality of included studies.Collaborative efforts from investigators and journal editors are needed to improve the quality of systematic reviews. The (AMSTAR) addresse(AMSTAR) -18 or bi(AMSTAR) ,15,19,20Previous research and guidelines ,21-23 foAbridged Index Medicus (119 indexed titles). 
We defined observational nontherapeutic studies as observations of patient outcomes that did not examine procedures concerned with the remedial treatment or prevention of diseases. To evaluate selection bias, we abstracted whether the authors of systematic reviews described the search strategy and reported the study flow; yes indicated that the authors reported the list of retrieved citations, the list of excluded studies, and justification for exclusion. We abstracted as dichotomous variables whether the authors of systematic reviews did any of the following:
Stated the aim of the review and the primary and secondary hypotheses of the review.
Included or justified exclusion of articles published in languages other than English.
Searched for gray literature, including abstracts and unpublished studies, to evaluate publication bias.
Described any contact with authors of the included studies.
Analyzed sponsorship of and conflict of interest in the included studies.
We abstracted whether the authors of systematic reviews described the statistical methods, with justification, and the models for pooling (fixed or random effects) in sufficient detail to be replicated. We abstracted whether the authors of pooling analyses reported statistical tests for heterogeneity and whether heterogeneity was statistically significant. We used 3 categories to classify whether the authors of systematic reviews had evaluated the quality of included studies by using developed or previously published checklists or scales. We abstracted several explanatory variables that could be related to the quality of systematic reviews:
The year of publication, defined as a continuous variable. We created categories of 4- or 5-year periods: 1990 to 1994, 1995 to 1999, 2000 to 2004, and 2005 through June 2008.
The journals of publication.
The country where the systematic reviews were performed.
The sponsorship of the reviews. Those that had either governmental or foundational support or were fellowships were defined as having nonprofit support.
The disclosure of conflict of interest by authors of reviews.
The number of disclosed relationships with industry, defined as a continuous variable.
The sponsor's participation in data collection, analysis, and interpretation of the results of the review.
The review outcomes as risk factors for prevalence or incidence of chronic conditions or diseases.
We summarized the results in evidence tables. We used prespecified categories of dependent and independent variables and did not force the data into binary categories for definitive tests of significance. We used univariate logistic regression to examine the association between internal quality evaluation and the year of publication by using the Wald test. Odds ratios (ORs) were calculated with binary logit models and the Fisher scoring method. We computed the fractions of systematic reviews meeting various quality criteria in each of the 4 time periods considered. The proportions of systematic reviews that met different levels of each quality criterion were evaluated by using \u03c72 tests and Fisher's exact tests in cases of small numbers. All calculations were performed at 95% confidence intervals (CIs) by using 2-sided P values with SAS version 9.1.3. We found 145 eligible systematic reviews of observational nontherapeutic studies. Fewer than half of the reviews reported the study flow (49%), assessed gray literature (27%), or addressed language bias (29%).
Only 2% abstracted the sponsorship of individual studies, and none abstracted the disclosure of conflict of interest by the authors of individual studies. Planned and detailed quality assessment of included studies was reported in 37% of systematic reviews, and 18% abstracted more than 1 criterion of external or internal quality; significant positive trends were reported during the evaluated time period. The quality of systematic reviews did not differ much by study location or by the journal of publication. Systematic reviews of prevalence or incidence or of risk factors of the diseases did not differ in their quality measures. Sponsorship was not associated with quality of the reviews. The role of conflict of interest was impossible to establish because the authors of 56 reviews did not disclose funding and authors of 106 reviews did not disclose conflict of interest. The journal of publication, topic of the review, and continent where the review was conducted were not associated with the likelihood of internal quality evaluation. Systematic reviews of risk factors tended to conduct internal quality evaluation of the included studies more often than reviews of incidence or prevalence or of levels of risk factors. Systematic reviews sponsored by nonprofit organizations conducted internal quality evaluations of individual studies more often than reviews that received corporate funding. Systematic reviews that disclosed conflict of interest conducted internal quality evaluation of individual studies less frequently than reviews with no disclosure. Odds of formal internal quality evaluation and either planned, formal internal quality evaluation or abstraction of quality criteria increased over time. Disclosure of conflict of interest by the authors of systematic reviews was not associated with greater odds of internal quality evaluation. Complete documentation of the literature search, including time period, databases searched, and exact literature search strings, was less common among reviews with planned, formal internal quality evaluation than among reviews without it. The association between quality of systematic reviews and sponsor participation in the data collection, analyses, and interpretation was difficult to analyze because this information was either omitted or reported in various ways. Less than 10% of systematic reviews contained a clear statement that the sponsors did not play any role in gathering the studies or analyzing or interpreting the results and did not influence the content of the manuscript. Other reviews omitted mention of the role of the sponsor in approval of the manuscript or provided a general statement that sponsors did not influence the conclusions or the content of the paper. Two reviews included statements of unconditional or unrestricted sponsorship of the meta-analyses. Our analyses showed that less than half of the systematic reviews of nontherapeutic observational studies that were published in core clinical journals met each quality criterion. Quality of systematic reviews did not improve over time. Planned, formal internal quality evaluation of the included studies was reported in less than half of systematic reviews, but the prevalence of internal quality evaluations has increased during the last decade. Our findings are in concordance with previously published methodologic analyses of systematic reviews that also found inconsistent quality and incomplete internal quality evaluation of individual studies. Journal commitment to high-quality research, however, was associated with improved reporting quality of the publications.
For example, adoption by journals of the Consolidated Standards of Reporting Trials (CONSORT) improved the quality of the publications of interventional studies ,180. An We could not identify the factors that can explain differences in quality of systematic reviews. The role of sponsorship and conflict of interest could not be estimated because of poor reporting of this information. The quality and reliability of quality evaluation of the included studies is unclear because development of the appraisals was described in a small proportion of systematic reviews (32 of 80 studies), and only 6 of 80 studies tested interobserver agreement for quality assessment. We did not evaluate all reviews of observational studies that were published in epidemiologic journals. However, it is unlikely that the quality of reviews published in other journals would be better than those in core clinical journals. Future research should investigate the factors that can explain differences in the quality of systematic reviews.Peer reviewed publications of high-quality systematic reviews can provide the best available research evidence for evidence-based public health . Evidenc"} +{"text": "The global nature of antimicrobial resistance and the failure to control the emergence of resistant organisms demand the implementation of a global surveillance program involving both developed and developing countries. Because of the urgent need for infection control interventions and for rapid distribution of information about emerging organisms, we initiated the International Network for the Study and Prevention of Emerging Antimicrobial Resistance (INSPEAR). Its main objectives are to serve as an early warning system for emerging antimicrobial-drug resistant pathogens, to facilitate rapid distribution of information about emerging multidrug-resistant pathogens to hospitals and public health authorities worldwide, and to serve as a model for the development and implementation of infection control interventions."} +{"text": "The free osteocutaneous fibula flap is an established method of reconstruction of maxillary and mandibular defects. The vascularity of the skeletal and the cutaneous components is provided by the peroneal artery via the nutrient artery and the septo- and musculocutaneous perforators. In rare situations, these perforators may arise from other major leg arteries. In such circumstances, the procedure has to be either abandoned or modified so that neither the vascularity of the flap nor the donor limb is compromised. We present a case of an anomalous musculocutaneous perforator, which originated from the proximal part of the posterior tibial artery, passed through the soleus muscle and supplied the skin paddle. The flap was elevated as a single composite unit and was managed by two separate vascular anastomosis at the recipient site, one for the peroneal vessels and the other for the anomalous perforator. The reconstruction of oro-mandibular defect after tumour ablative surgery is a challenging task in terms of anatomical and functional results and this has been fulfilled by the free fibula flap. Ever since described, the free fibula flap has become the procedure of choice for treating post tumour excision of oro-mandibular defects in majority of the cancer centers. Over the years, there are literature reports of various modifications in the surgical technique and so do the reports of variations of normal anatomy of the leg and that of the flap. 
We describe a unique situation involving the perforators supplying the skin paddle of the flap which is significant in terms of flap survival and management. A 44-year-old female was diagnosed with carcinoma of the lower alveolus with involvement of the middle third of the mandible, anterior floor of the mouth, gingivo-labial sulcus and the overlying skin. Composite resection followed by reconstruction with a free fibula osteocutaneous flap was planned. During the preoperative assessment, her dorsalis pedis and posterior tibial pulsations in both legs were normal by palpation. The left leg was selected and the fibula with a skin paddle of 22 \u00d7 9 cm was raised through a standard anterior approach under tourniquet control. After the distal and proximal osteotomies, and after ligating the distal end of the peroneal vessels and moving proximally, no septocutaneous perforators were noticed. Further dissection revealed a single musculocutaneous perforator coming out of the soleus muscle and proceeding to the skin paddle. With the possibility of an anomalous perforator in mind, the vessel was dissected along its entire length through the substance of the soleus muscle and was found to arise from the proximal part of the posterior tibial artery. The osteocutaneous fibula flap was described by Taylor. When the perforators supplying the skin paddle prove anomalous, the surgeon has the following options: remove the skin paddle from the flap; use the fibula for bony reconstruction and another flap, such as the radial forearm flap, for skin and soft tissue; or abandon the procedure and use the other limb. In our case, we were prudent enough to dissect and include the anomalous perforator after confirming that it was the only source of blood supply to the skin paddle. There are reports of anomalies involving the axial arteries of the leg. To conclude, the anomalies involving the perforators may be minor when compared to those of the major leg vessels, but awareness of the possibilities will enable the surgeon to salvage the free fibula osteocutaneous flap as a single unit or as two separate (skin and bone) units with the appropriate number of anastomoses, as required for successful reconstruction."} +{"text": "Fractures of the neck of the femoral component have been reported in uncemented total hip replacements; however, to our knowledge, no fractures of the neck of a cementless forged titanium alloy femoral stem coated in the proximal third with hydroxy-apatite have been reported in the medical literature. This case report describes a fracture of the neck of a cementless forged titanium alloy stem coated in the proximal third with hydroxy-apatite. The neck of the femoral stem failed from fatigue, probably because of a combination of factors described analytically below. Fracture of the femoral stem was a frequent complication of first-generation, forged stainless steel or cast cobalt chrome femoral components used for total hip arthroplasty. Charnley estimated a prevalence of 0.23%, although the prevalence was as high as 11% with other stem designs. With the development of femoral stems made of forged cobalt chromium-molybdenum or titanium alloys in the 1980s, this complication became rarer.
Fractures of the neck of the femoral component have been reported in uncemented total hip replacements, however, to our knowledge, no fractures of the neck of a cementless forged titanium alloy femoral stem coated in the proximal third with hydroxy-apatite have been reported in the medical literature.A 64 year-old man of weight 70 kg and height 1.65 m underwent a total hip arthroplasty (THA) in January 2003, due to severe osteoarthritis of his left hip following avascular osteonecrosis . These markings lie also on the anterior aspect, when the stem is implanted on a left hip. Scanning electron microscope figures of the fracture surface of the part supporting the head clearly show that the fracture has ended on the posterior side. Typical beach marks can be seen in the middle of the surface high stresses in the stem due to increased patient weight, a high level of activity, or a relatively undersized prosthesis; b) poor proximal bone support or fixation, which may be due to the absence of the calcar; c) varus orientation of the stem; d) cantilever bending resulting from good distal fixation in the presence of an inadequate proximal cement mantle; e) the presence of a stress riser; and finally f) material defects in the stem itself due to either inadequate design or the fabrication process.Porosity can be generated within the cast during the solidification of the metal, which is accompanied by shrinkage, leaving voids within the metal . Voids can also be generated during expulsion of gases as part of the solidification process .The grain size is another factor closely related to the fatigue properties of the stem . HistoriFinite-element analysis has shown that the highest stress concentrations are around the lateral aspect of the middle third of the femoral stem. A fracture of the stem usually originates from its antero-lateral aspect correspoFracture of the neck of the femoral stem is a rare phenomenon, and few have been documented ,6,11-13.Burstein and Wright reportedLee et al reportedVatani et al reportedThe neck of the femoral stem failed from fatigue. The stereomicroscope and scanning electron studies cannot give evidence of the exact location of the initiation of the fracture, due to iatrogenic marks of tools at the external surface of the fractured neck as well as due to attrition of the fracture surfaces.2), and the potential stress riser effect of either the laser marking or the edge of neck machining.The implant failed probably because of a combination of factors: the high demands of the patient , the reduced section of the neck declare that they have no competing interests.TBG was the principal investigator of the study, operated upon the patient, conducted the collection of data and was involved in drafting the article. ODS was involved in drafting the article and in collection of the literature; SAP helped in manuscript drafting and in the collection of the literature; PFB performed the interpretation of stereomicroscopic information, compiled the technical report and was involved in drafting the article; GT, IK, and PA were involved in collection of the literature. All the authors read and approved the final manuscript.Written patient consent was obtained for publication of the report."} +{"text": "Flavonoids can exert beneficial health effects through multiple mechanisms. In this paper, we address the important, although not fully understood, capacity of flavonoids to interact with cell membranes. 
The interactions of polyphenols with bilayers include: (a) the partition of the more non-polar compounds in the hydrophobic interior of the membrane, and (b) the formation of hydrogen bonds between the polar head groups of lipids and the more hydrophilic flavonoids at the membrane interface. The consequences of these interactions are discussed. The induction of changes in membrane physical properties can affect the rates of membrane lipid and protein oxidation. The partition of certain flavonoids in the hydrophobic core can result in a chain breaking antioxidant activity. We suggest that interactions of polyphenols at the surface of bilayers through hydrogen bonding, can act to reduce the access of deleterious molecules (i.e. oxidants), thus protecting the structure and function of membranes."} +{"text": "We tested the hypothesis that modulation of monoaminergic tone with deep-brain stimulation (DBS) of subthalamic nucleus would reveal a site of reactivity in the ventromedial prefrontal cortex that we previously identified by modulating serotonergic and noradrenergic mechanisms by blocking serotonin-noradrenaline reuptake sites. We tested the hypothesis in patients with Parkinson's disease in whom we had measured the changes of blood flow everywhere in the brain associated with the deep brain stimulation of the subthalamic nucleus. We determined the emotional reactivity of the patients as the average impact of emotive images rated by the patients off the DBS. We then searched for sites in the brain that had significant correlation of the changes of blood flow with the emotional impact rated by the patients. The results indicate a significant link between the emotional impact when patients are not stimulated and the change of blood flow associated with the DBS. In subjects with a low emotional impact, activity measured as blood flow rose when the electrode was turned on, while in subjects of high impact, the activity at this site in the ventromedial prefrontal cortex declined when the electrode was turned on. We conclude that changes of neurotransmission in the ventromedial prefrontal cortex had an effect on the tissue that depends on changes of monoamine concentration interacting with specific combinations of inhibitory and excitatory monoamine receptors. We have shown that activity in a circumscribed region of the medial prefrontal cortex undergoes a change of activity when the region is challenged by administration of the serotonin-noradrenaline reuptake inhibitor clomipramine. In this previous study , the cha1A, 5-HT1B, 5-HT2A, 5-HT3 and 5-HT4) receptors facilitates dopamine (DA) release, with the exception of the 5-HT2C receptors that strongly inhibit DA release in the ventral tegmental area but have no effect on DA release in the frontal cortex 1 and D2 receptors that respectively facilitate and inhibit the excitability of the neurons on which these receptors reside.Serotonergic, noradrenergic and dopaminergic mechanisms interact closely in the frontal cortex. Stimulation of the majority of serotonergic receptors To test if emotional reactivity relates to changes of neurotransmission in the prefrontal cortex, we obtained PET images of the effect of DBS of the subthalamic nucleus on the cortical activity in seven patients suffering from Parkinson's disease. 
The patients were approached by one of the authors (JG) of this study and gave written informed consent to the procedures specified in a protocol approved by the official Central Danish Regional Science Ethics Committee.We correlated the changes of activity in these images with the ratings of emotional impact of standardized emotive images recorded by the patients. Selected details of the patients' condition including medication are given in Details of the PET measurements of cerebral blood flow with oxygen-15-labeled water were given by Geday et al. We correlated the change of blood flow with the emotional impact of the emotive images in the seven patients. We recorded the emotional impact of the images in a separate session with the DBS stimulation off and on, respectively. As in the study reported by Geday and Gjedde 3 for all cortical gray matter at an FWHM of 12 mm.We analyzed the PET images by voxel-wise regression of PET volumes with local voxel SD estimates against the ratings of emotional impact, as implemented in the Dot statistical parametric mapping of the Montreal Neurological Institute 3 at an FWHM of 12 mm.In addition to the global analysis of all cortical gray matter (excluding the cerebellum), we restricted a search to a volume of interest (VOI) consisting of one 6-mm-radius sphere centered on a site in the right inferior medial prefrontal cortex (IMPC) previously shown to be deactivated by emotional content We completed the regression analysis of blood flow changes elicited by STN stimulation against emotional impacts ratings, equal to the difference between ratings of pleasant images and ratings of unpleasant images, on a scale from \u22123 to +3 in the seven subjects. The regression revealed several sites in which the correlation between the two measures reached significance . The correlation is shown as negative in One of the sites at which blood flow underwent this change associated with the DBS in correlation with emotional impact coincided with the area at which blood flow equally underwent a change during the clomipramine challenge reported by Geday and Gjedde We specifically determined the change of relative blood flow values at this site in a specific volume-of-interest with a radius of 6 mm, centred on the previously identified coordinates, as shown in The results support the predicted link between the emotional impact of emotive images and changes of activity in the prefrontal cortex associated with specific drug challenge or electrode stimulation. Thus, in subjects with a low emotional impact, activity measured as blood flow at a site in the ventromedial prefrontal cortex rose in patients with Parkinson's Disease and DBS electrode turned on, as it did in healthy subjects challenged with a serotonin-noradrenaline reuptake inhibitor, while in patients of high impact, the activity at this site in the ventromedial prefrontal cortex declined when the electrode was turned on, as it did in healthy subjects of high impact challenged with the drug. 
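As a concrete illustration of the analysis just described, the sketch below computes, for each subject, the change in normalized blood flow inside a 6-mm-radius spherical volume of interest and regresses it against the emotional-impact rating. It is only a simplified, hypothetical stand-in for the authors' voxel-wise statistical parametric mapping pipeline: the grid dimensions, VOI coordinates, ratings, and randomly generated images are placeholder assumptions.

```python
# Illustrative sketch only: a simplified VOI-based version of the analysis described
# above (per-subject change in relative blood flow regressed against emotional-impact
# ratings). All names, coordinates, and data are hypothetical placeholders, not the
# authors' actual PET pipeline, which used voxel-wise statistical parametric mapping.
import numpy as np
from scipy import stats

def voi_mean(volume, center_vox, radius_vox):
    """Mean image value inside a spherical volume of interest (voxel units)."""
    zz, yy, xx = np.indices(volume.shape)
    dist2 = (zz - center_vox[0]) ** 2 + (yy - center_vox[1]) ** 2 + (xx - center_vox[2]) ** 2
    return volume[dist2 <= radius_vox ** 2].mean()

rng = np.random.default_rng(0)
n_subjects = 7
shape = (40, 48, 40)              # hypothetical PET grid
center = (20, 30, 18)             # hypothetical VOI center (voxel coordinates)
radius = 3                        # roughly 6 mm if voxels are 2 mm

impact = np.array([-1.5, -0.5, 0.0, 0.5, 1.0, 1.8, 2.5])  # placeholder ratings on the -3..+3 scale
delta_cbf = []
for _ in range(n_subjects):
    flow_off = rng.normal(50, 5, shape)   # normalized blood flow, stimulator OFF (synthetic)
    flow_on = rng.normal(50, 5, shape)    # normalized blood flow, stimulator ON (synthetic)
    delta_cbf.append(voi_mean(flow_on - flow_off, center, radius))

slope, intercept, r, p, stderr = stats.linregress(impact, np.array(delta_cbf))
print(f"slope = {slope:.2f}, r = {r:.2f}, p = {p:.3f}")
```

A negative slope in such a regression would correspond to the pattern reported here, with flow rising in low-impact subjects and falling in high-impact subjects when the stimulator is switched on.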
We explain the findings in both groups by the claim that monoamines are released from terminals in the ventromedial prefrontal cortex and that this release has an effect on the emotional reactivity that depends both on the monoamine concentration and on the relative distributions of inhibitory and excitatory dopamine receptors. The 5-HT1A receptors mediate lower excitability and most other serotonin receptors mediate higher excitability of the cells where they reside, while among the dopaminergic receptors the D1 receptors when occupied contribute to raised excitability and the D2 receptors when occupied lower the excitability of the cells on which they reside. The direction of the effect (inhibitory or excitatory) of an increment of endogenous ligand occupation then depends on the baseline ligand concentration from which the increment occurs. At low concentrations, the effect of an increment is in one direction, depending on the distribution of excitatory and inhibitory receptors. At concentrations above a certain threshold, the effect of the increment is in the opposite direction. The net effect of the transmitter release therefore reflects the pre-existing transmitter concentration in relation to the receptor distribution, as well as the magnitude of the increment. In the previous study, Geday et al. measured the binding of the D2 receptor radioligand raclopride in the striatum. What is the evidence that there is monoamine release in the ventromedial prefrontal cortex or elsewhere during DBS? One PET study of the binding of the D1 receptors indicated that the increased excitability is due to release of dopamine. During high-frequency stimulation of the STN in rats, Lee et al. obtained intracellular recordings from dopaminergic neurons in the substantia nigra pars compacta. However, considerable circumstantial evidence links dopamine release in the prefrontal cortex to the regulation of emotive and cognitive functions subserved by this part of the brain. Together these studies provide a strong basis for the claim that DBS of the STN directly or indirectly leads to release of monoamines in the ventromedial prefrontal cortex, the effect of which is contingent upon the quantity of monoamines released and the relative densities of excitatory and inhibitory monoamine receptors, with important consequences for the emotional impact of emotive sensations. File S1: Related paper by the authors, a recently published paper to which the submitted manuscript refers (PDF)."} +{"text": "Sir, Tracheal intubation is older than the recognized history of general anesthesia itself. In the late 18th century, the Royal Humane Society of London used tracheal intubation for resuscitating the near-drowned. The first reported use of a lighted stylet to facilitate intubation came in 1957, when Sir Robert Macintosh described it. Current lightwands are largely fiberoptic in design and have either external or internal light sources. Newer devices have been developed that use trans-illumination or transmission of sound, in combination with other intubating guides. The newer ones combine a lighted stylet with a fiberoptic scope. This combination is intended to provide trans-illumination and internal visualization of the larynx either directly or on a screen. Lighted stylet tracheal intubation requires practice, but is easy to learn. The distal end of the stylet should be lubricated with a water-based surgical lubricant and the tracheal tube loaded onto it.
For oral intubation, the nondominant hand is used to open the mouth with the thumb placed against the mandibular molars while the opposing index finger is pressed against the ramus of the mandible; this hand should be kept as far lateral as possible to allow unobstructed midline placement of the lighted stylet. A firm anterocaudad jaw thrust elevates the epiglottis, improves tactile sensation, and facilitates intubation via gentle lighted stylet motions with the dominant hand. The lighted stylet is introduced into the oropharynx from the side and brought into the midline following the midsagittal plane transecting the tongue. The tip of the lightwand should be passed under the tongue, and with gentle anterior traction, a bright, well-circumscribed circle of light at the level of the hyoid indicates that the tip lies in the vallecula. Gentle forward pressure may displace the epiglottis, allowing immediate tracheal intubation. If a bright red glow is seen off the midline, the tip of the lightwand may lie in one of the pyriform fossae, and the unit should be withdrawn slightly and rotated back toward the midline. If resistance is felt preventing passage into the trachea, the obstructing epiglottis may be circumvented by a series of rocking or scooping movements, redirecting the tip to the thyroid prominence using the light as a guide. After careful withdrawal of the stylet following the contour of the oropharynx, the 15-mm adapter is reconnected and tracheal tube placement is confirmed in the usual manner. The technique has a few drawbacks. There are chances of dislodgement of the bulb attached at the distal end while removing the lighted stylet. The lighted stylet can be used only for adult patients. The bulb might get heated and may be a source of thermal injury, and the possibility of heat damage to the tracheal mucosa during prolonged intubation attempts should be kept in mind. Minor trauma to the upper airway may lead to bleeding, sore throat, hoarseness, and dysphagia. The most severe upper airway damages reported after lighted stylet intubation are cases of arytenoid cartilage dislocation."} +{"text": "This paper reports a survey of nasal cancer in Northamptonshire during the period 1950-79. An increased risk of various histological types of nasal tumour has been observed within the footwear manufacturing industry, which seems to be limited to the minority of men and women exposed to the dust of leather soles and heels. In Northamptonshire this exposure has usually occurred in the preparation, press and finishing rooms of factories making boots and shoes by the welted process. This type of leather is tanned by treatment with vegetable extracts, not chrome salts. Although the population of workers involved has diminished over the period of the study, there has been no evidence of a decline in incidence of these tumours within it."} +{"text": "During studies of the ability of antilymphocyte serum (ALS) to suppress the immune mechanism of mice and thereby allow HeLa cells to grow into a large tumour in the mice, it was observed that many tumours continued to grow even after the ALS treatment had been stopped and full immunological competence of the mice had returned. The HeLa cells of such tumours appeared to be unchanged in their ability to induce further tumours in ALS-treated mice to which they were transferred and, furthermore, the mice which were carrying such tumours in the presence of immunological competence were able to reject additional injections of HeLa cells or other human tumour cells.
The four possible explanations for this phenomenon, (i) depression of cellular response; (ii) local reaction at the graft site; (iii) the presence of a blocking factor; and (iv) the elevation of the humoral response, have been investigated."} +{"text": "Chromosome damage in vitro after bleomycin treatment during the late S and G2 phases of the cell cycle was studied in the peripheral lymphocytes of 19 untreated patients with primary testicular tumours and 22 age-matched healthy men with no excess of cancer incidence in the families. The occurrence of spontaneous chromosome aberrations was not shown to be different in the studied groups. However, in the lymphocytes treated with bleomycin, cancer patients exhibited higher numbers of break events per cell and increased frequency of cells with aberrations than control group. Aberrant cells of cancer patients had more aberrations than cells of the control sample . The frequency of chromosome 1 aberrations, often encountered in cancer cells of testicular and other solid tumours, was significantly higher in lymphocytes of patients with testicular cancer , the long arm of this chromosome being predominantly affected . These results support the view that a genome disposed to testicular cancer is less effective in the ability to repair non-specific DNA damage in this region, more susceptible to damage, or both."} +{"text": "The presence of pancreatic islets alone in the peripancreatic region and splenic hilum is an uncommon occurrence. Herein, we describe their presence in this rare location. Ectopic pancreatic tissue may occur from displacement of small amounts of pancreas during embryonic development, resulting in formation of a nodule which is independent of the pancreas. It often has a proper ductal system and circulation ,2. In maRare case reports of pancreatitis occurring in the ectopic pancreatic tissue in the mesentry of the small intestine of a child have been described in the literature . Ectopic"} +{"text": "The records of 1243 patients with myasthenia gravis (M.G.) have been reviewed in a retrospective study of the incidence of extrathymic neoplasms. Ninety-four malignant neoplasms were traced.The onset of the disease (M.G.) coincided with a marked increase in the incidence of extrathymic neoplasms. The observed number of neoplasms in the year of onset of M.G. was three times higher than the expected in a control group. This was in sharp contrast to the lower than expected incidence in the years preceding the onset of M.G.The incidence remained at higher than the expected levels throughout the course of the disease in patients who did not undergo thymectomy, while in those patients who had thymectomy the incidence decreased to the levels of the general population after the second postoperative year.These observations suggest an oncogenic thymic influence. The possibility is discussed of the potential oncogenic role of abnormal clones of immunocompetent small lymphocytes of thymic origin."} +{"text": "Sir,+FOXP3+ regulatory T cells (Tregs) (Carcinogen-induced tumours in intact mice exhibit a substantial enrichment of CD4Evidence collected from mice suggests that depletion of Tregs enhances immunosurveillance of tumours and uncovers new responses to tumour antigens in patients (The optimal vaccination/depletion strategy needs to be established by defining the impact of preoperative chemotherapy and surgery on the development of antitumour immune responses. 
Surgical removal of malignant disease is likely to remove the bulk of Treg TIL and reduce the production of suppressive signalling networks, which would undoubtedly improve the likelihood of successful antitumour immune responses to clear residual malignant cells. However, the impact of major surgery on the capacity of the immune response needs to be established. It is also important to weigh up the benefit of postoperative immunotherapy and adjuvant chemotherapy. Chemo/radiotherapy might reduce the capacity of the immune system to mount antitumour immunity. In light of conflicting reports describing the ability of certain treatments to deplete Treg efficiently ("} +{"text": "Crotalus durissus terrificus venom into the foot pad of mice did not induce a significant inflammatory response as evaluated by oedema formation, increased vascular permeability and cell migration. The subcutaneous injection of the venom, or its addition to cell cultures, had an inhibitory effect on the spreading and phagocytosis of resident macrophages, without affecting the viability of the cells. This effect was not observed when the venom was added to cultures of thioglycollate elicited macrophages, but it was able to inhibit these macrophage functions when the cells were obtained from animals injected simultaneously with the venom and thioglycollate. These observations suggest that the venom interferes with the mechanisms of macrophage activation. Leukocyte migration induced by intraperitoneal injection of thioglycollate was also inhibited by previous venom injection. This down-regulatory activity of the venom on macrophage functions could account for the mild inflammatory response observed in the site of the snake bite in Crotalus durissus terrificus envenomation in man.The injection of"} +{"text": "A detailed casenote review was performed on all 65 patients registered with testicular non-seminomatous germ cell tumours (NSGCT) during 1989 under the Scottish Cancer Registration Scheme. Details of management at presentation and 2 years following diagnosis were recorded and analysed. In a small number of patients an unacceptable delay in diagnosis was noted. Variation was found in the frequency and type of investigations performed on patients placed on surveillance, types of chemotherapy regimens used and numbers of patients entered into trials. Three per cent of patients had a biopsy of the contralateral testis and 27% of patients defaulted from clinic attendance. Considerable variation in the management of testicular NSGCT in Scotland has been identified. The introduction of management guidelines should result in a more consistent approach to the care of these patients. Support, both financial and psychological, may reduce the unacceptable rate of default."} +{"text": "Intrapleural growth of transplanted rat tumours was prevented or retarded by intrapleural administration of double-stranded RNA. A similar suppression of growth was achieved with peitoneal tumours by the intraperitoneal injection of the compound. These studies indicate the possible potential of this form of treatment of thoracic and peritoneal tumours for clinical application in the treatment of mesothelioma."} +{"text": "Both primary leiomyosarcoma of bone and sarcoma arising in association with a bone infarct are rare events. In this casereport we describe for the first time a case of leiomyosarcoma arising in a bone infarct. The tumour arose in a medullaryinfarct in the proximal femur of an elderly patient. 
As in other cases of sarcoma arising in a bone infarct, the prognosis was poor, the patient dying within 6 months of diagnosis."} +{"text": "Within 48 hours of the institution of severe phenylhydrazine-induced anaemia in mice bearing ascites tumours or generalised leukaemia, a substantial proportion of the malignant cells disappeared respectively from the peritoneal cavity or infiltrated liver. The results of radiobiological experiments permitting determination of the proportion of viable leukaemia cells which were severely hypoxic and relatively radioresistant in the livers of leukaemic mice, showed that induction of anaemia was associated with a several hundredfold increase in the proportion of such cells. The proportion of hypoxic cells was greatly reduced when the anaemic leukaemic mice were transfused with packed erythrocytes or allowed to breathe oxygen under high pressure. Similar experiments with solid sarcomas indicated that a high proportion of the tumour cells were hypoxic in non-anaemic mice breathing air. The hypoxic fraction was not significantly reduced when tumour-bearing mice were made severely anaemic during growth of the tumour and were later transfused. Thus, the hypoxic cells in leukaemic livers and those in solid tumours are markedly different in their capacity for oxygenation following the induction of relative hyperoxaemia."} +{"text": "Necrotizing enterocolitis (NEC) is a severe disease of the gastrointestinal tract of pre-term babies and is thought to be related to the physiological immaturity of the intestine and altered levels of normal flora in the gut. Understanding the factors that contribute to the pathology of NEC may lead to the development of treatment strategies aimed at re-establishing the integrity of the epithelial wall and preventing the propagation of inflammation in NEC. Several studies have shown a reduced incidence and severity of NEC in neonates treated with probiotics. The objective of this study is to use a mathematical model to predict the conditions under which probiotics may be successful in promoting the health of infants suffering from NEC. An ordinary differential equation model is developed that tracks the populations of pathogenic and probiotic bacteria in the intestinal lumen and in the blood/tissue region. The permeability of the intestinal epithelial layer is treated as a variable, and the role of the inflammatory response is included. The model predicts that in the presence of probiotics health is restored in many cases that would have been otherwise pathogenic. The timing of probiotic administration is also shown to determine whether or not health is restored. Finally, the model predicts that probiotics may be harmful to the NEC patient under very specific conditions, perhaps explaining the detrimental effects of probiotics observed in some clinical studies. The reduced, experimentally motivated mathematical model that we have developed suggests how a certain general set of characteristics of probiotics can lead to beneficial or detrimental outcomes for infants suffering from NEC, depending on the influences of probiotics on defined features of the inflammatory response. Necrotizing enterocolitis (NEC) is a severe disease of the gastrointestinal (GI) tract that is characterized by increased permeability of the intestine and is primarily observed in pre-term babies. Although the causes of this disease are not fully known, most studies conclude that prematurity is the greatest risk factor.
NEC affects Although its pathophysiology is not entirely understood, NEC is thought to be related to the physiological immaturity of the GI tract and altered levels of normal flora in the intestines. A mature intestine contains many defense mechanisms that act as barriers to harmful bacteria. Many of these defense mechanisms, such as peristalsis and tight junctions between intestinal epithelial cells Bifidobacterium and Lactobacillus is necessary for the normal development and protective function of the newborn intestine Bifidobacteria than the amount found in breast-fed infants, and indeed, pre-term infants fed formula have significantly higher rates of NEC than those fed breast milk An abnormal pattern of bacterial colonization in pre-term infants may also contribute to the pathogenesis of NEC. Colonization by normal flora such as Recently, Toll-like receptor-4 (TLR-4) has been shown to be significantly increased in mice and humans with NEC compared with healthy infants Bifidobacterium and Lactobacillus. Probiotics compete with pathogenic bacteria for host binding sites and nutrients while also stimulating host defense mechanisms and enhancing intestinal maturation. Probiotic bacteria can protect against systemic bacterial invasion by decreasing the permeability of the gastrointestinal wall Given this growing understanding and identification of the factors that contribute to NEC, it seems important to develop treatment strategies aimed at bolstering the integrity of the epithelial wall, preventing excessive inflammation, and limiting the presence of pathogenic bacteria. One proposed treatment method is the administration of probiotics, which are defined as non-pathogenic species of bacteria that promote the health of the host Lactobacillus acidophilus and Bifidobacterium infantis. Infants treated with a probiotic mixture in two separate studies Lactobacillus were shown to have an increased incidence of sepsis, and the observed decrease in NEC incidence was not statistically significant. Similarly, Land et al. Lactobacillus sepsis in infants treated with probiotics. However, lactobacillemia can occur naturally and thus may or may not have been related to probiotic treatment.Several studies have shown a reduced incidence and severity of NEC in neonates treated with probiotics Experimental studies have shown a potential clinical benefit of probiotics in NEC patients but have not identified the mechanisms underlying the efficacy of probiotic treatment. It is hypothesized that probiotics improve the barrier function of the intestine by increasing transepithelial resistance, protecting against cell death, inducing specific mucus genes, and stimulating the production of nonfunctional receptor decoys in the intestinal lining We hypothesized that the protective potential of these mechanisms can be analyzed using a mathematical model. Building upon insights established by theoretical models of the acute inflammatory response A system of ordinary differential equations is used to track both pathogenic and probiotic bacteria in two compartments: an intestinal lumen compartment and a combined blood/tissue compartment see . 
The majority of the parameter values for this model are taken directly from two previous models of the inflammatory response. In the intestinal lumen, pathogenic bacteria are tracked, and equations (4)\u2013(6) represent the evolution of pathogenic bacteria in the blood/tissue. To investigate various features of probiotic treatment for NEC, we first consider equations (1)\u2013(6) in the absence of probiotics for varying levels of initial pathogenic insult. Inspection of system (1)\u2013(6) shows that model steady states can take two forms, one with baseline bacterial permeability. For example, studies have shown that breast-fed infants acquire a more desirable intestinal flora than formula-fed infants, since breast milk contains many antimicrobial products and factors that promote the colonization of helpful bacteria in the infant intestine. Studies have also shown that infants born vaginally tend to be colonized earlier with beneficial species of bacteria, while infants delivered by cesarean section have a delayed colonization by desirable bacteria. Determining the correct probiotic dosing strategy is a key question for the realization of effective probiotic treatment for infants suffering from NEC. Our mathematical model predicts that probiotics will be most effective for low rates of pathogenic growth. Our main motivation in designing this study was to incorporate experimental observations of probiotics into a mathematical model that can be used to gain insight into key interactions of pathogens, probiotics, and the inflammatory response in the context of NEC. In this way, we hope to improve clinical translation, as part of our larger Translational Systems Biology framework. Our basic modeling assumption is that the inflammatory response that takes place at the lumen/blood interface, and that involves an interplay among intestinal flora, intestinal epithelial cells, and inflammatory cells in the blood, serves to maintain a dynamic equilibrium that defines the health steady state. It is likely that an effective inflammatory response requires some small, baseline rate of efflux of luminal bacteria into the blood/tissue. The ensuing minor, self-limiting inflammatory response may serve to maintain the mostly beneficial population of intestinal bacteria while providing a sampling of intestinal contents that could lead to an early warning of changes in the proportion of pathogenic bacteria in the intestinal lumen. For a developing infant, this equilibrium may require a constant influx of factors present in maternal breast milk. To incorporate such a baseline inflammatory response, which we currently omit, the model should be augmented to include the roles of pro- and anti-inflammatory cytokines in the inflammatory response. One important effect of anti-inflammatory cytokines is the reduction of damage to the epithelium caused by the inflammatory response. In our current model, the omission of anti-inflammatory cytokines provides a worst-case scenario with respect to the harmful effects of the inflammatory response.
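The equation bodies of system (1)\u2013(6) were lost in extraction, so, as an illustration only, the sketch below shows the general shape of such a two-compartment model: a luminal pathogen and probiotic, a blood/tissue pathogen, an inflammatory variable, and a permeability variable that inflammation can raise. All functional forms and parameter values here are hypothetical placeholders, not the authors' published equations.

```python
# Minimal illustrative sketch (NOT the published model): a two-compartment ODE
# system in the spirit of the NEC description above. Every parameter value and
# functional form is a hypothetical placeholder.
from scipy.integrate import solve_ivp

def nec_rhs(t, y, r_p=0.5, r_b=0.6, K=1.0, c=1.2, leak=0.05,
            kill=2.0, damage=0.4, eps_induce=0.8, eps_relax=0.3):
    Pl, Bl, Pb, M, eps = y  # luminal pathogen/probiotic, blood pathogen, inflammation, permeability
    dPl = r_p * Pl * (1 - (Pl + Bl) / K) - leak * eps * Pl          # logistic growth, competition, efflux to blood
    dBl = r_b * Bl * (1 - (Bl + c * Pl) / K)                        # probiotic competes for the same luminal niche
    dPb = leak * eps * Pl - kill * M * Pb                           # influx through leaky epithelium, inflammatory clearance
    dM = Pb - 0.5 * M                                               # inflammation driven by blood-borne pathogen, first-order decay
    deps = eps_induce * damage * M * (1 - eps) - eps_relax * (eps - 0.1)  # inflammation raises permeability; relaxation toward baseline
    return [dPl, dBl, dPb, dM, deps]

# Compare an untreated pathogenic insult with the same insult plus an early probiotic dose.
for label, B0 in [("no probiotic", 0.0), ("probiotic at t=0", 0.5)]:
    sol = solve_ivp(nec_rhs, (0.0, 100.0), [0.2, B0, 0.0, 0.0, 0.1], max_step=0.5)
    print(f"{label}: final luminal pathogen = {sol.y[0, -1]:.3f}, permeability = {sol.y[4, -1]:.3f}")
```

The record also notes that probiotics can reduce gut-wall permeability and protect the epithelium directly; those effects are omitted from this sketch for brevity, and any steady-state conclusions would of course have to be re-derived for the authors' actual equations.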
The qualitative relationships established in this study that indicate both beneficial and harmful effects of probiotics are still expected to hold in the presence of cytokines, but additional insight into the interplay of the immune response and probiotic treatment will require future modeling of cytokine populations The number of experimental and clinical studies that have been performed for NEC is limited due to the nature of the disease and the complexity of carrying out studies and obtaining samples in pre-term infants, and thus we used a combination of human and animal studies to provide an experimental grounding for the model presented. Additional experimental data would help to determine some of the parameter values estimated in this study and may also highlight additional factors contributing to NEC that have not been explored by this model. Interactions between bacteria and the inflammatory response are defined within the context of two lumped compartments that are assumed to be well-mixed, and thus there is no spatial component in this present model. Immune mechanisms specific to regions such as the gut mucosa and lamina propria In conclusion, based on experimental and clinical studies, we have developed a simplified mathematical model of the complex host-pathogen interaction that occurs in the setting of NEC and used it to analyze the impact of probiotic administration on the ensuing dynamics. The predictions derived from this computational study may help to explain the diverse outcomes that may arise in this setting and may be useful for guiding future experimental and clinical studies."} +{"text": "The following recollections are taken from a series of 14 extended interviews conducted with Richard W. (Dick) Hornabrook between February 1995 and February 1998, as part of a larger ethnographic study of the kuru investigation and examined one of the largest groups of patients suffering from kuru , and gav"} +{"text": "Twelve enzymes related to the direct oxidative and glycolytic pathways of glucose metabolism were assayed in 88 cancers of the cervix and 48 cancers of the endometrium of the human uterus, and the activities compared with those obtained from a group of control tissues. Significant increases for all but one of the enzymes studied were found in cancer of the cervix, when compared with normal cervix epithelium. Hexokinase, phoshofructokinase, and aldolase appear to be rate-limiting in normal cervix epithelium; however, since the increase in activity of the first two in cancers was least of all the glycolytic enzymes, redundant enzyme synthesis probably occurs in the malignant cell for the enzymes catalysing reversible reactions. There was virtually no correlation between the activity of any enzyme measured in the cancer sample and histological assessments of the degree of malignancy of the tumour, or the clinical stage of the disease. All enzymes except pyruvate kinase had significantly higher activity in normal endometrium than in normal cervix epithelium, presumably reflecting the greater metabolic requirements of the former tissue. Only phosphoglucose isomerase and pyruvate kinase were significantly higher in endometrial cancer than in normal endometrium, and there were few significant differences between cancers of the cervix and of the endometrium, despite the marked differences in their tissues of origin. 
These results suggest that changes occur during malignant transformation in the activities of both regulatory enzymes and those catalysing reversible reactions, in a manner justifying the conclusion that the general metabolism of tumours is convergent."} +{"text": "The growth pattern and morphology of two transplantable acute leukaemias which arose spontaneously in pure line rats are described. They differ morphologically and on the basis of their behaviour in vivo, such as infiltration of lymphoid organs and presence in thoracic duct lymph; the leukaemia syngeneic to the August strain (referred to as the SAL) appears to be of myeloid type whereas the leukaemia syngeneic to the Hooded strain (referred to as the HRL) resembles acute lymphoblastic leukaemia. The HRL cells, but not the SAL cells, are lysed by murine anti-theta serum plus complement. These two transplantable acute leukaemias appear to be useful animal counterparts of the human acute leukaemias and may be valuable models for studies on chemotherapy and immunotherapy."} +{"text": "The immunohistochemical expression of tenascin was examined in the normal adult mucosa of the stomach, primary tumours and lymph node metastases of gastric cancer patients. In normal gastric tissue tenascin was expressed in the muscularis mucosae, muscularis propria and vessel walls; however, it was not expressed in either the mucosal connective tissue or the stromal tissue in the submucosal layer. In gastric cancer, tenascin was expressed in 35 of 85 primary tumours, and in 8 of 25 metastases in lymph nodes. Tenascin was located in the fibrous stroma surrounding foci of cancer. The expression of tenascin in the primary tumour did not correlate with the depth of invasion, lymph node metastasis or prognosis. Tenascin appears during the process of either malignant transformation or tumour progression in gastric cancer, and the positive expression of tenascin may be useful as a stromal marker for the early detection of gastric cancer."} +{"text": "The area of osteonecrosis of the head of femur affected by the disease process varies from a small localized lesion to a global lesion. Without specific treatment 80% of the clinically diagnosed cases will progress, and most will eventually require arthroplasty. Therefore the goal is to diagnose and treat the condition in the earliest stage. A number of surgical procedures have been described to retard or prevent progression of the disease and to preserve the femoral head. An implant made of porous tantalum has been developed to function as a structural graft to provide mechanical support to the subchondral plate of the necrotic femoral head, and possibly allow bone growth into the avascular region. Porous tantalum implant failure with associated radiological progression of the disease is reported in the literature; however, there is no report of clinical failure of the implant without radiological progression of the disease. We report a case of clinical failure of porous tantalum implant, seven months after surgery without any radiological progression of the disease, and with histopathological evidence of new bone formation around the porous tantalum implant. The patient was successfully treated by total hip arthroplasty. The area of osteonecrosis of the head of femur affected by the disease process varies from a small localized lesion to a global lesion.
Without specific treatment 80% of the clinically diagnosed cases will progress, and most will eventually require arthroplasty. A 42-year-old male patient presented to the outpatient department (OPD) with a complaint of severe pain in the right hip joint. After routine clinical and radiological examination, he was diagnosed to be suffering from advanced osteonecrosis of the right femoral head. Total hip arthroplasty (THA) was planned for the right hip. At the same time he had mild pain in the left hip joint. Radiographs of the left hip joint were normal. Taking into consideration the patient's complaint and osteonecrosis of the right femoral head, a magnetic resonance imaging (MRI) scan was performed for the left hip joint to rule out early osteonecrotic changes as a cause of pain. MRI revealed a large osteonecrotic lesion involving more than 80% of the geographical area of the articular surface with an MR crescent sign. On gross examination, the implant was well placed in the center of the necrotic area. These findings could be compared with failure cases in our study, which had associated radiological progression. The pores of the failed implants had dead marrow tissue with fat necrosis and infiltration of chronic inflammatory cells. Porous tantalum has demonstrated bone ingrowth and rapid fixation in animal models. In our case, the porous tantalum rod had provided effective mechanical support to the joint surface as there was no collapse of the articular surface on MRI or HPE of the retrieved femoral head with implant in situ. A retrieval study has reported small shards of bone stacked up on the rounded tip of the implant in nine of the fifteen cases. It seems that this continuous shell of new bony ingrowth might have nullified the effect of core decompression, by blocking the porous tantalum. A thick mantle of cancellous bone around the tip, which appeared to be reactive new bone formation on MRI, might have added to the blocking effect. Though the porous tantalum rod provided good structural support to the articular cartilage and helped maintain the integrity of the articular surface, the new bone formation blocked the porous rod completely, probably leading to a gradual increase in the intramedullary pressure, and the patient started experiencing pain. The pain was severe enough to regard this case as a clinical failure and plan THA for the affected hip. The current case suggests a new mode of clinical failure of the porous tantalum rod, by which the new bony ingrowth leads to blockage of the core decompression effect of the porous tantalum, leading to a rise in the intramedullary pressure and reappearance of the clinical symptoms of osteonecrosis. The iatrogenic part seems to be the cancellous bone around the tip of the tantalum rod, which is not new bone, but may be cancellous bone from the neck that was pushed along the tip of the implant during insertion. In the suggested scenario, steps should be taken to avoid this iatrogenic factor in the clinical failure of the porous tantalum rod."} +{"text": "Fifteen percent of all tobacco-related cancers are bladder cancers. Curiously, in Northern India this disease seems to affect younger patients. The majority of the patients present with superficial disease and are treated by transurethral resection of the bladder tumor. More than half of these patients experience recurrence, with about 20% progressing to muscle invasive disease. Outcomes in superficial and muscle invasive disease have improved over time.
Intravesical chemotherapy has been found to prevent recurrence in superficial bladder cancer and the immediate postoperative instillation of mitomycin C has become the standard of care for small superficial tumors that have been resected completely without perforation. With improvements in surgical technique, and the evolution in urinary diversion procedure from ureterosigmoidostomy to orthotopic neobladder, the outcome in muscle invasive carcinoma of the bladder has improved remarkably with respect to both cancer control and quality of life.According to the Delhi Cancer Registry, in 2003, bladder cancer was the 6This edition of the Indian Journal of Urology focuses on carcinoma of the bladder.Theodorescu has discussed the molecular aspects of carcinoma bladder. The molecular level understanding of the mechanism of cancer genesis is increasing and carcinoma bladder is no exception. The authors have analyzed, at the molecular level, the factors responsible for two types of carcinoma bladder that have distinct behaviors. They discuss their potential implications (either therapeutic or prognostic) in the clinical management of the disease.Dr. Jagdish N. Kulkarni has discussed in detail the changes in the staging of carcinoma bladder, with emphasis on the importance of cystoscopy and bimanual examination even in this era of MRI and CT scans.Ashish Kamat (Prashant) has discussed the management of T1G3 bladder cancer and the role of radical cystectomy in these patients. They have also explored the role of molecular markers such as epidermal growth factor receptor (EGFR) in stage progression of non-muscle-invasive carcinoma of the urinary bladder and the potential therapeutic role of cell cycle inhibitors.The article by Dr. Rakesh Kapoor of the Sanjay Gandhi Post-Graduate Institute of Medical Sciences (SGPGI), Lucknow, explores the role of BCG in non-muscle-invasive carcinoma bladder and its mechanism of action; he also discusses their experiences at the center.Dr. Markand Kochikar has reviewed the existing literature in detail for his article on the role of surgery and adjuvant and neoadjuvant chemotherapy in the treatment of locally advanced and metastatic bladder cancer.Marcus Kcukuz and Axel D. Merseburger have analyzed their results in 21 patients of T4 carcinoma bladder who were offered primary surgery in the form of radical cystoprostatectomy. Finally, there is the article by Dr. Deepak Jain from our center, which discusses the type of urinary diversion done after cystectomy in our country at various centers.Invasive bladder cancer continues to be a very difficult disease to manage. All of us who are engaged in managing these individuals realize the limitations of aggressive therapy, as most of them do not live long and would succumb to bladder cancer. It is obvious that only surgical or radiation therapy is not the answer.There is a need to develop better molecular markers, effective chemotherapy and hopefully a gene therapy to alleviate sufferings of patients who get cancer of bladder.Finally, I take this opportunity to thank all my contributors who have helped me in organizing this symposium. I am grateful to the Editor, Indian Journal of Urology for having provided me this opportunity.With best wishes for Happy New Year."} +{"text": "Outbreak investigation is a core function of public health agencies. Suboptimal outbreak investigation endangers both public health and agency reputations. 
While audits of clinical medical and nursing practice are conducted as part of continuous quality improvement, public health agencies rarely make systematic use of structured audits to ensure best practice for outbreak responses, and there is limited guidance or policy to guide outbreak audit.A framework for prioritising which outbreak investigations to audit, an approach for conducting a successful audit, and a template for audit trigger questions was developed and trialled in four foodborne outbreaks and a respiratory disease outbreak in Australia.The following issues were identified across several structured audits: the need for clear definitions of roles and responsibilities both within and between agencies, improved communication between agencies and with external stakeholders involved in outbreaks, and the need for development of performance standards in outbreak investigations - particularly in relation to timeliness of response. Participants considered the audit process and methodology to be clear, useful, and non-threatening. Most audits can be conducted within two to three hours, however, some participants felt this limited the scope of the audit.The framework was acceptable to participants, provided an opportunity for clarifying perceptions and enhancing partnership approaches, and provided useful recommendations for approaching future outbreaks. Future challenges include incorporating feedback from broader stakeholder groups, for example those of affected cases, institutions and businesses; assessing the quality of a specific audit; developing training for both participants and facilitators; and building a central capacity to support jurisdictions embarking on an audit. The incorporation of measurable performance criteria or sharing of benchmark performance criteria will assist in the standardisation of outbreak investigation audit and further quality improvement. Outbreak investigation is a core function of public health agencies. Suboptimal outbreak investigation endangers both public health and agency reputations. Surprisingly, there is little guidance on enhancing the quality of outbreak investigation and control provided to public health agencies. Audits of clinical medical and nursing practice are conducted as part of continuous quality improvement and particularly where significant events have occurred, such as unexpected deaths. They arWhile recommendations for practice improvement in outbreak investigation may be found in the discussion section of journal articles reporting on outbreak responses and published reports may critique limited aspects of outbreak response performance, a comprehensive review of outbreak investigation practice often only follows high profile events subjected to an independent government audit, or a coronial inquiry-5. PubliA framework for prioritising which outbreak investigations to audit and how to conduct an audit of outbreak investigation practice is presented. 
The application of this approach in four foodborne outbreaks and a respiratory disease outbreak in an Australian context is reviewed.The audit methodology has evolved since its initial trial during a national workshop to test outbreak response guidelines in 1997 which focused primarily on the rationale, selection of outbreaks, and development and use of the Audit Trigger Questions The success of a structured audit is dependent upon the appropriate:\u2022 Selection of outbreaks to be audited\u2022 Engagement and preparation of stakeholders\u2022 Process of the audit\u2022 Confidentiality agreement, where necessary\u2022 Implementation and dissemination of recommendationsPublic health activities should be evaluated as part of quality assurance and outbreaks almost always hold lessons for service improvement. However, some outbreaks, characterised by criteria in Table A benefit of normalising structured audit as part of routine practice is that audits associated with inter-jurisdictional conflict will be less threatening to stakeholders if the process is regarded as routine. It is important that audits not only be conducted following perceived system \"failures\" but also to identify and promote good practice. Ideally, agencies responsible for outbreak investigation should audit at least one or two investigations annually, even where specific selection criteria are not satisfied, as there can be significant learning from small or \"routine\" investigations.While there are costs involved in conducting a structured audit, proactively addressing communication and system performance issues may well be cost saving from an organisational perspective, however, awareness of the opportunity costs involved in conducting an audit should influence the careful selection of outbreaks to be audited and the efficient conduct of the audit. We believe most structured audits can be conducted within two to three hours and should be restricted to this time limit.It is essential that the focus of the audit is on improving future practice rather than on laying blame or identifying individual people or agencies for criticism. Participants may have endured a stressful experience during the outbreak period and extensive media scrutiny. In some outbreak investigations, legal and media scrutiny can lead to criticism of key personnel investigating outbreaks. Staff may be sensitive to an audit of their work. It is important that all participants come to the audit with the expectation of a positive outcome that will improve future practice, rather than fearing further criticism.Participants should be drawn from the pool of stakeholders who can assist in the process of the audit or those who's future practice will benefit from participation including both higher level managers and frontline staff. Including participants from external but collaborative agencies will bring more divergent viewpoints to the audit and extend the range of issues explored and resolutions available. We have conducted audits with up to 20 participants, however, the number of participants should be balanced with the scope of the audit, the issues to be reviewed and the time available.Generally, it is preferable to use a skilled facilitator. Using an external facilitator has the advantage of independence and bringing a fresh perspective. However, an external facilitator may not know the roles of key people and agencies involved in the outbreak response. 
A facilitator should have some experience of outbreak investigation but their key expertise should be in the process of facilitation. The facilitator should ensure constructive framing of discussion and reorient interpersonal conflict to address system issues if possible through interest-based negotiation that focuses on the underlying interest of the parties rather than their competing claims or positions.The facilitator is responsible for 1) explaining the aims, ground rules, and principles of the audit, 2) maintaining the structure of the audit, 3) facilitating the process including seeking agreement on key themes and scope of the audit, encouraging contributions broadly across participants, managing time, clarifying and summarising issues, clarifying assumptions, 4) maintaining an impartial perspective, 5) summarising the outcome of the audit and assisting in writing a report, and 6) checking on progress of actions approximately six weeks after the audit.The lead agency in the investigation will usually call for an audit of an investigation and define the expectations of the audit outcome at the outset. The terms of reference, the scope of the audit, attendees, duration and the expected product should be defined in consultation with participants and in advance so that participants are supportive and prepared for the meeting. The major objective of the audit should be framed as a neutral system performance statement or question.We use the term \"structured\" to describe two aspects of this methodology - first to denote the structure used in the Audit Trigger Questions to ensure a comprehensive range of issues are addressed and secondly to denote the structure for audit meeting preparation and conduct. The Audit Trigger Questions was the process/methodology clear? 2) did the structured methodology assist or inhibit the debrief? 3) do you have any suggestions for the facilitator to improve facilitation of debriefs in the future? and 4) other general comments.Five outbreaks have undergone structured audit between 1997 and 2008, with participants ranging from a single regional health protection unit through to inter-agency and multi-jurisdictional audits. These identified a broad range of outbreak response quality improvement measures at national, state and local level (Table We trialled qualitatively rating outbreak investigation performance against the Audit Trigger Questions using criteria such as \"adequate\" or \"needs improvement\", however, it was found to be cumbersome and slowed down the preparation for the audit and subsequently this was dropped in favour of placing a tick beside those questions that should be addressed in the audit.Initial results from confidential written evaluations from eight participants from the last two structured audits demonstrate that participants consider the process and methodology to be clear and useful. Some participants find that while the structure of the audit assists and helps to make the process \"neutral\" and \"non-threatening\" it may also limit the discussion of issues that arise during the audit because these are considered \"out of scope\". The circulation of a document summarising the outbreak, the scope of the audit and issues suggested for discussion prior to the audit was considered valuable. 
It was suggested that where possible the facilitator should be a neutral party and that sometimes limiting the audit to two hours inhibited exploration of issues such as future prevention measures.The following issues were repeatedly identified across several structured audits: the need for clear definitions of roles and responsibilities both within and between agencies, communication between agencies and with external stakeholders involved in outbreaks, and the need for development of performance standards in outbreak investigations - particularly in relation to timeliness of response.The methodology used in the audit is acceptable to participants and there is support for continued use and development of the tool. We have found that the focus on common interests of the parties, positive reframing of issues, and compliance with the structure of the audit has minimised the potential for interpersonal conflict during the audit meetings. Agencies welcome guidance in approaching the evaluation of complex outbreak responses and this methodology has been chosen by the Public Health Laboratory Network of Australia to review the laboratory response to the 2009 influenza pandemic.The methodology for the audit has evolved over time, while it was initially structured around the Audit Trigger Questions in Additional file The importance of a supportive environment for structured audit cannot be over emphasised. In clinical audit, the most frequently cited barrier to successful clinical audit is the failure of organisations to provide sufficient protected time for healthcare teams and it likely the same would apply to public health agencies. In our Kipping et al highlighted the lack of published standards for auditing outbreak response and emphasised that further development and promotion of such standards was required.http://groups.google.com/group/outbreak-audits to promote use of the methodology and the development of a collaborative network to share learning and modifications of the structured audit methodology.This methodology is evolving with practice and we encourage feedback and modification of the process from practitioners. Future challenges include incorporating feedback from broader stakeholder groups, for example those of affected cases, institutions and businesses; assessing the quality of audit; developing training for both participants and facilitators; and building a central capacity to support jurisdictions conducting audits. The incorporation of measurable performance criteria or sharing of benchmarkable performance criteria will assist in the standardisation of audit and further quality improvement. A Google Group has been initiated at The framework was acceptable to participants, provided an opportunity for clarifying perceptions and enhancing partnership approaches, and provided useful recommendations for approaching future outbreaks. Future challenges include incorporating feedback from broader stakeholder groups, for example those of affected cases, institutions and businesses; assessing the quality of a specific audit; developing training for both participants and facilitators; and building a central capacity to support jurisdictions embarking on an audit. 
The incorporation of measurable performance criteria or sharing of benchmark performance criteria will assist in the standardisation of outbreak investigation audit and further quality improvement.The authors declare that they have no competing interests.CBD developed the initial drafts of the Audit Trigger Questions and criteria for auditing outbreaks, introduced the alternative dispute resolution principles to the audit and drafted the initial manuscript. TDM and SAM developed the template for reporting the outcome of the audits and further refined the audit process. DND and MDK provided intellectual input into design and conduct of the audit. All authors helped draft and revise the manuscript.The pre-publication history for this paper can be accessed here:http://www.biomedcentral.com/1471-2458/9/472/prepubAppendix 1 - Audit Trigger Questions. Tabular checklist of questions to trigger further exploration in the structured audit.Click here for fileAppendix 2 - Post-audit Action Plan Template. Tabular template for recording post-audit actions and recommendations.Click here for file"} +{"text": "Sixteen groups, each of 50 Swiss female SPF mice, were treated thrice weekly with various combinations of 3,4-benzopyrene (BP) and/or the neutral fraction of cigarette smoke (NF) in acetone applied to the skin. Some groups received one carcinogen, some the other and some a mixture of the two. Skin tumour incidence rates were found to increase both with the dose of NF and with the dose of BP. With BP alone a threshold dose was found beyond which a very heavy incidence rate of malignant skin tumours occurred. After correction of the results for intercurrent deaths it was found that when NF and BP are applied together as a mixture they do not act independently in the production of malignant skin tumours but interact positively. This suggests that some of the components of NF act as cocarcinogens rather than as complete carcinogens. Treatment with NF appeared to increase the incidence of malignant lymphomas. The data were not suitable for deciding whether the various treatments influenced the rates of incidence of internal tumours of other types, for example, lung tumours."} +{"text": "Patient. We describe a case of chondroblastoma of the os calcis which metastasized to the tibia, soft tissues and lung. A complete response of the lung lesions was noted with chemotherapy.Discussion. Review of the published literature shows that metastatic chondroblastoma only arises following local recurrence of the tumour."} +{"text": "Vitamin A and its biologically active derivatives, retinal and retinoic acid (RA), together with a large repertoire of synthetic analogues are collectively referred to as retinoids. Naturally occurring retinoids regulate the growth and differentiation of a wide variety of cell types and play a crucial role in the physiology of vision and as morphogenic agents during embryonic development. Retinoids and their analogues have been evaluated as chemoprevention agents, and also in the management of acute promyelocytic leukaemia. Retinoids exert most of their effects by binding to specific receptors and modulating gene expression. The development of new active retinoids and the identification of two distinct families of retinoid receptors has led to an increased understanding of the cellular effects of activation of these receptors. 
In this article we review the use of retinoids in chemoprevention strategies, discuss the cellular consequences of activated retinoid receptors, and speculate on how our increasing understanding of retinoid-induced signalling pathways may contribute to future therapeutic strategies in the management of malignant disease. \u00a9 1999 Cancer Research Campaign"} +{"text": "In an attempt to probe nucleic acid structures, numerous Ru(II) complexes with different ligands have been synthesized and investigated. In this contribution we focus on the DNA-binding properties of ruthenium(II) complexes containing asymmetric ligands that have attracted little attention in the past decades. The influences of the shape and size of the ligand on the binding modes, affinity, enantioselectivities and photocleavage of the complexes to DNA are described."} +{"text": "The metal-binding abilities of a wide variety of bioactive aminophosphonates, from the simple aminoethanephosphonic acids to the rather large macrocyclic polyaza derivatives, are discussed with special emphasis on a comparison of the analogous carboxylic acid and phosphonic acid systems. Examples are given of the biological importance of metal ion \u2013 aminophosphonate interactions in living systems, and also of their actual and potential applicability in medicine."} +{"text": "The National Registry of Childhood Tumours contains over 51000 records of children born in Great Britain who developed cancer under the age of 15 years. Patterns of childhood cancer among families containing more than one child with cancer have been studied. A total of 225 \"sib pair\" families have been ascertained from interviews with parents of affected children, from hospital and general practitioner records and from manual and computer searches of names and addresses of patients. A number of special groups have been identified, including those with a known genetic aetiology such as retinoblastoma, twins and families with three or more affected children. A further 148 families not in any of the above groups contain two children with cancer: in 46 families the children had tumours of the same type, most commonly leukaemia. Some of the families are examples of the Li-Fraumeni syndrome; some are associated with other conditions, including Down's syndrome. There is clearly a genetic element in the aetiology of cancer in some families discussed here; shared exposure to environmental causes may account for others and some will be simply due to chance."} +{"text": "Despite an exponential development of the understanding of the disease, with availability of good therapy and feasibility of good control along with availability of globally accepted guidelines, there remains a significant gap between the guidelines and prevailing practice behavior for treating asthma all over the globe. This perhaps stands as the single greatest deterrent to good asthma care worldwide. The objective of the study is to analyze the asthma prescriptions to find out the available status of the practice behaviour and the deviations from the guideline in asthma practice. The asthma prescriptions of the referred patients presenting to the OPD services of the IPCR, Kolkata were photocopied and collected. They were further analyzed based on the available information upon a format being prepared on four major areas as qualifications, clinical recording habit, practice of evaluating patients, and treatment habit that stands apparent from the prescribed medications.
The doctors were divided into three categories as a) MBBS, b) MD/DNB (medicine and respiratory medicine), and c) DM and statistical analysis has been performed comparing the three groups as per the performance in the four pre-decided areas.All the groups fall short of any guideline or text of asthma care in all the areas involved.The practice behaviour of our doctors for asthma care appears deficient in several areas and seems far from guideline recommendations. This needs further evaluation and adoption of appropriate interventions. Despite tremendous improvement in knowledge and treatment of asthma and availability of guidelines, the asthma practice behaviour of the physicians concerned is perhaps the most important issue to determine the quality of care for asthma. There is always a gap between the guidelines and practice. UnderstaIt has been a cross sectional and observational study. Patients being referred to our OPD services and diagnosed as asthma on spirometry at the institute were requested to give consent and, thereafter, allowed to make photocopies of the prescriptions being carried by them for the study. These prescriptions were preserved and the available information was charted categorically as a) the qualification of the doctors b) the documentation habit that included the documentation of the diagnosis, vitals , co-morbidities (if any), and the clinical examination of the respiratory system in whatever form available, c) the investigation habit from the available record that includes advice for spirometry, chest X-ray, X-ray of the para-nasal sinuses, blood sugar, routine hemogram, IgE etc and d) the prescribing habit marked by advice for oral or inhaled medications, oral steroid, and inhaled corticosteroids.The doctors were divided into three categories as (i) MBBS, (ii) MD or equivalent including MD in respiratory medicine, and (iii) DM in any non respiratory sub-speciality of medicine. The different habits of the three different categories of the doctors were charted systematically and information derived was expressed in percentages. Furthermore, the performance of the three categories was compared statistically with unpaired\u2018t test\u2019.n = 28), (B) post graduates (n = 46), and (C) post doctoral (n = 26). The comparison between the different categories and the overall situation is tabulated below in Table The distribution of doctors in different categories as described above was as follows: (A) MBBS compared to the other categories (A and C). However, the latter group has practiced advising chest X-ray (PA) and prescribed oral medication (p<0.03) in a significantly higher proportion. Although not significant, blood pressure measurement is apparently done more frequently by MBBS doctors compared to the other two categories. The evaluation habit of the doctors also falls short of the expected; overall the routine hemogram and chest X-ray were advised by about one fourth (24 %) and one fifth (21 %) of the prescribers. The post doctoral doctors were most smart to ask a chest X-ray and the overall consideration of allergic rhinitis appears low from the fact that only 5 % doctors asked a X-ray of the para nasal sinus. The prescription of the use of inhales has scored out higher than advice for oral medication alone (58% versus 19%). 
When looked for, the use of inhaled medication alone, combination products (ICS+LABA) outscore the use of ICS+ SABA (44% versus 6 %); inhalers are prescribed more by the post graduate doctors.The overall impression from the available data is that there is still a huge dearth between the published guideline for asthma therapy and the expressed practice behaviour of doctors in this part of the developing world. The deficit is known as universal and seemingly unrelated to the level of qualification in our study. Thus, the behaviour noticed may not necessarily indicate the lack of knowledge regarding asthma but certainly points to the failure of achievement of guideline provided standard. It is not possible to implicate the reason which could be largely external and circumstantial for the doctors to observe a better documentation, evaluation, and prescription habits but, nevertheless, one cannot exclude a certain degree of lacunae in understanding the disease. The exceedingly low use of peak flow measurement or /and spirometry , failure to suspect asthma in about one fourth (24 %) of patients (were kept off any anti asthma medication) probably points to low level of understanding or motivation among the doctors. On the contrary, the level of the use of inhalational products either alone or in combination with oral medications was relatively impressive (58 %), along with the use of inhaled corticosteroid in combination with LABA; this suggests that the fundamental basis of use of anti-inflammatory medications in asthma has been already incorporated in the practice behaviour of the physicians concerned. If this is taken as a marker of guideline acquaintance, we may need to put more stress on the circumstantial and patient-related factors for such deficient practice behaviour.Several recommended asthma guidelines are available to help the physicians to treat the disease better on evidence-based information. However, the ability of guidelines to change a physician\u2019s behavior or the patient outcomes has been limited. For asth9The prescribing habits of different categories of drugs are different at different places. A study to explore and compare treatment decisions and the influence of specific patient characteristics on the management of asthma in five different European countries revealed that there is a significant difference in the prescribing habit of oral or inhaled corticosteroid and antibiotics. PhysiciaThis small study has a lot of limitations; several factors as the number of the prescriptions studied (was not large), the disease control status and the lung function at the presentation, the reason for presenting to us (being referred or not), the educational and socioeconomic status of the patients, the practicing area of the doctors etc. were not taken into consideration. The dose issue of the inhaled medication (ICS or non-ICS) is also not incorporated. It is not possible to ensure that the available record is the true one since some of the physicians may have been keeping their personal record separately and have written only the medications in the prescription. Moreover, the exact status of the patients presenting at the doctors is not known although the reason of attending our OPD was presumably a non satisfactory response to therapy for most of the patients. It is also possible that, in some occasions, the referring physician had just given a set of advice before referring the patient to us. 
Therefore, the prescriptions scanned may not be the exact representation of the actual status of the prescribing habit in our community and have some deviations from the exact prescription habit of our doctors.In our observation, the available information provides limited insight into the causes of non adherence but it amply elaborates the reality in asthma care in this part of the world and impresses upon the need of well planned and elaborate effort to look further into the dimension and factors for non adherence.Our experience from this small data suggests that there is huge gap between the guideline and the practice behaviour of our physicians. Further studies are needed to assess in-depth the factors responsible for the inappropriate and inadequate practice behaviour of our doctors to find out ways to eliminate the deficiencies to help millions of our patients."} +{"text": "Dermatofibrosarcoma protuberans (DFSP) is an uncommon, slow growing and locally aggressive tumor of the skin witha high rate of recurrence even after supposedly wide excision. The reports of regional lymph node metastasis and distantmetastasis are very rare. Because of the extreme rarity of these cases with metastasis, the experience with management ofsuch patients is very limited. A case of recurrent DFSP of scalp, with metastasis to the regional lymph nodes, in a 17-year-oldboy is reported here. This is the second case of DFSP involving scalp and 16th case of DFSP of all sites metastasizing to theregional lymph nodes reported in literature. The patient was treated with wide excision of the lesion and ipsilateral radicalneck dissection (including excision of overlying involved skin)."} +{"text": "We report a case of traumatic inguinal hernia following blunt abdominal trauma after a road traffic accident and describe the circumstances and technique of repair. The patient suffered multiple upper limb fractures and developed acute swelling of the right groin and scrotum. CT scan confirmed the acute formation of a traumatic inguinal hernia. Surgical repair was deferred until resolution of the acute swelling and subcutaneous haematoma. The indication for surgery was the potential for visceral strangulation or ischaemia with the patient describing discomfort on coughing. At surgery there was complete obliteration of the inguinal canal with bowel and omentum lying immediately beneath the attenuated external oblique aponeurosis. A modified prolene mesh hernia repair was performed after reconstructing the inguinal ligament and canal in layers.To our knowledge, this is the first documented case of the formation of an acute direct inguinal hernia caused as a result of blunt abdominal trauma with complete disruption of the inguinal canal. Surgical repair outlines the principles of restoration of normal anatomy in a patient who is physiologically recovered from the acute trauma and whose anatomy is distorted as a result of his injuries. Blunt abdominal trauma may cause both crush and shearing effects on healthy abdominal wall and viscera . Acute oThe inguinal canal extends from the anterior superior iliac spine to the pubic tubercle. A defect in the posterior wall results in a direct hernia. 
In our case, all boundaries of the inguinal canal including the floor, posterior, inferior, medial walls and deep and superficial rings were obliterated causing traumatic herniation of the terminal ileum and caecum beneath an attenuated external oblique aponeurosis.We describe the timely reconstruction of the abdominal wall in the inguinal region and the importance of the restoration of normal anatomy with definitive repair after resolution of swelling and haematoma.A 24 year old man was admitted to hospital following a road traffic accident after his motorcycle collided with a lorry. The speed of collision was 35 mph and abdominal injuries were sustained as a result of impact against the motorcycle handle bars.On arrival to the Emergency Department the patient was haemodynamically stable and fully conscious. Primary survey revealed a soft abdomen with tenderness, swelling and bruising in right groin and scrotum. There was no previous history of groin hernia.Secondary survey, plain X ray and CT scan confirmed a fracture dislocation of the right shoulder, open fracture of right radius and ulna, multiple right lung contusions and a new right inguinal hernia. Internal fixation of the upper limb injuries was performed.Reconstruction of the abdominal wall was deferred, in the absence of obvious visceral damage, until resolution of groin swelling and bruising Fig. .12 days after admission, repair of the inguinal hernia was performed. At surgery, the external oblique aponeurosis overlying the inguinal canal was contused inferiorly, and the inguinal ligament was found to be sheared off the full length of its attachment from the anterior superior iliac spine to the pubic tubercle, with all boundaries of the canal obliterated Fig. . As a reThe edge of the peritoneum was sutured to the lacunar and pectineal ligaments and pectineal line. The overlying external oblique aponeurosis was re-attached as the inguinal ligament Fig. . A largeHere we discuss the first reported case of the formation and successful repair of an acute direct inguinal hernia resulting from blunt abdominal trauma where the inguinal canal was completely obliterated causing bowel to lie immediately beneath an attenuated external oblique aponeurosis. Technically there was no direct or indirect hernia as there was no inguinal canal. Traumatic injuries do not respect abdominal planes; normal anatomy is frequently distorted. Delayed repair afforded the resolution of haematoma and oedema that may have resulted in more challenging surgery.As the defect was unilateral and the procedure was exploratory in the first instance an open approach was undertaken. The size of the defect afforded easy inspection of the peritoneal cavity for visceral injury. As primary repair was feasible without tension this was undertaken by reconstructing the inguinal region in layers. An alternative technique of repair would have been a laparoscopic intraperitoneal approach rather than extraperitoneal due to the location of abdominal viscera beneath the skin and obliteration of the abdominal wall in the right inguinal region. After reduction of the abdominal viscera composite mesh would be fixed to edges of the defect rather than direct suture of the cranial and caudal borders of the defect together.Written informed consent was obtained from the patient for publication of this case report and any accompanying images. 
A copy of the written consent is available for review by the Editor-in-Chief of this journal.The authors declare that they have no competing interests.SB carried out the operation detailed in this report and drafted the case presentation section of the report. MV and GH drafted and compiled the document. AL gave approval of the manuscript before publishing. All of the above authors were involved in the care of the patient whilst in hospital."} +{"text": "A rare malposition of central venous catheter in the left superior intercostal vein is described. The diagnostic features and the possible ways to prevent this complication are discussed. Central venous catheterization is an essential component of modern day critical care. But the insertion of central venous catheters is not free of complications. Numerous complications described both during placement of the catheter and later in the long-term maintenance, are both hazardous to the patients and expensive to treat. Malposition of the catheter tip is one of such complications, which usually involves placement of the catheter in various large tributaries of superior vena cava. This case report illustrates a rare malposition of a central venous catheter tip in a small tributary of left brachiocephalic vein.A 28-year-old male, was admitted to our ICU with surgical site infection and severe sepsis. Ten days prior to the present admission he underwent ileal resection and ileostomy for ileal perforation with underlying ileocaecal tuberculosis. On examination, he was in respiratory distress with tachypnoea, tachycardia and bilateral crepitations on chest auscultation. Arterial blood gas showed severe hypoxia. Possibilities of fluid overload as a result of aggressive fluid resuscitation and ARDS were considered and it was decided to place a central venous catheter for monitoring of central venous pressure. A catheter was placed through the left internal jugular vein with all aseptic precautions using the Seldinger technique. The catheter was gradually advanced up to the 13 cm mark without difficulty. After free return of venous blood was obtained, the catheter was felt to be placed correctly in the superior vena cava. Only unusual thing observed was patient complaining of left sided back pain on flushing the catheter with heparinized saline.An anteroposterior chest radiograph, obtained to confirm the position of the catheter, revealed the left paramedian location of the catheter following the aortic knob and pointing laterally . The inaet al, with their experience of 2104 central venous catheterization could find only one incidence of misdirected catheter in the smaller tributary.[Malposition of central venous catheters was reported to be between 1 to 33 percent by different investigators.[et al,[Thoracic pain syndromes on flushing of misplaced central venous catheters in the smaller tributaries have been described in the literature. Webb et al, reportedet al,\u20133 Thoughet al,5 A propeet al, In the pet al, In the let al, in whichBecause of the lack of experience, the importance of back pain on flushing of the misplaced catheter in our patient was appreciated only retrospectively. The malposition was evident only on review of the postprocedural chest X-ray. The characteristic location of the catheter in the frontal view and the associated classical back pain on flushing the catheter makes the left superior intercostal vein as the most likely position of the catheter. 
A lateral film and a venogram could have established the exact location of the catheter beyond any doubt. Because of the longer course and more transverse lie of the left brachiocephalic vein and its more frequent smaller tributaries, malposition of the venous catheter is commoner when a cannulation attempt is made via the left brachiocephalic vein rather than its right-sided counterpart. After placement of all central venous catheters, a chest radiograph should be obtained. A posteroanterior or anteroposterior film is usually adequate; if not, a lateral view may be taken. If uncertainty still exists, a venogram through the catheter should be performed for precise localization."} +{"text": "Animals bearing metastatic fibrosarcomas were treated with cyclophosphamide (CY) alone or in combination with flurbiprofen (FP), an inhibitor of prostaglandin synthesis. FP did not affect local growth of fibrosarcomas, and the incidence of distant metastases after resection of the \"primary\" implants was comparable in treated and control groups. Treatment with CY retarded growth of the fibrosarcomas and reduced the proportion of animals which succumbed to metastases, but this was not altered significantly by additional treatment with FP. FP did not affect the survival of rats bearing a lymphoid leukaemia. The lifespan of animals treated with CY was increased significantly, but the concomitant administration of FP did not enhance this effect."} +{"text": "There is no case of simultaneous ipsilateral proximal interphalangeal and metacarpophalangeal dislocation of a finger in the literature. A 61-year-old male patient sustained an ipsilateral dorsal dislocation of the PIP joint of his fifth finger and dorsal dislocation of the metacarpophalangeal joint. Closed reduction of the proximal interphalangeal joint was achieved, while open reduction of the metacarpophalangeal joint was carried out. The single most important element preventing reduction of the metacarpophalangeal joint was an interposition of the volar plate between the proximal end of the phalanx and the head of the metacarpal. In this report we aimed to investigate and discuss an extremely rare injury, a bipolar dislocation of the fifth proximal phalanx due to a fall on the hand. Although simultaneous dislocation of the metacarpophalangeal and carpometacarpal joints of the same digit had been previously reported, we did not find any case of simultaneous ipsilateral proximal interphalangeal and metacarpophalangeal dislocation of a finger in the English-speaking literature. A 61-year-old male patient fell from a height and sustained an ipsilateral dorsal dislocation of the PIP joint of his fifth finger and dorsal dislocation of the metacarpophalangeal joint. X-ray examination of the finger revealed no fracture, and closed reduction under wrist block was attempted. Irreducible metacarpophalangeal dislocations are extremely rare injuries with buttonholing of the metacarpal head into the palm. The most important element preventing reduction is an interposition of the volar plate between the base of the proximal phalanx and the head of the metacarpal. 
In this case report, taking advantage of direct access to the metacarpal head, we preferred a volar approach for open reduction of a complex metacarpophalangeal dislocation. In our view, the volar approach also has the advantage of allowing evaluation of the integrity of the neurovascular bundle, which is tented tightly over the metacarpal head. Although simultaneous dislocation of the metacarpophalangeal and carpometacarpal joints of the same digit had been previously reported, simultaneous ipsilateral proximal interphalangeal and metacarpophalangeal dislocation of a finger had not. PIP: Proximal interphalangeal. The authors declare that they have no competing interests. KU, KO and EU contributed to conception and design, carried out the literature research, manuscript preparation and manuscript review. FUO was involved with the case and writing of the manuscript. AE revised the manuscript for important intellectual content. KB supervised the writing and the general management of the patient. Written consent was obtained from the patient for publication of the study."} +{"text": "The ingestion of carbon and benzpyrene particles in vitro by rat peritoneal macrophages, baby hamster kidney fibroblasts (BHK-21) and mouse L-cells has been shown to be significantly stimulated by the inclusion of histone or polylysine in the culture medium. Parallel studies using methylated bovine albumin did not significantly stimulate carbon or benzpyrene uptake relative to untreated control cultures. Incubation of carbon particles with histone before inclusion in the culture medium of macrophages resulted in the same degree of uptake as in the cultures where carbon and histone were added independently of each other. The implications of these findings for in vivo chemical carcinogenesis are examined."} +{"text": "Nitroimidazole markers of tumour hypoxia bind to normoxic liver and the question has been raised whether this is due to low oxygen concentration or microregional activity of specialised nitroreductases. To answer this question, the binding patterns of the 2-nitroimidazole, pimonidazole, were compared following perfusion of surgically isolated rat livers in anterograde and retrograde directions. Perfusion at low flow rates in anterograde or retrograde directions can be used intentionally to alter oxygen gradients without altering enzyme distributions. Perfusion by means of the portal vein (anterograde direction) produced pimonidazole binding in the pericentral region of liver similar to that observed for pimonidazole binding in vivo. A complete reversal of this binding pattern occurred when the isolated liver was perfused by way of the central vein (retrograde direction). In this case, pimonidazole binding occurred in the periportal region. The extent and intensity of binding in the periportal region during perfusion in the retrograde direction was similar to that in the pericentral region during perfusion in the anterograde direction. It is concluded that low oxygen concentration rather than the non-homogeneous distribution of nitroreductase activity is the primary determinant of 2-nitroimidazole binding in liver."} +{"text": "The thymus is considered to play an important role in the pathogenesis of Myasthenia gravis, an autoimmune disease characterized by antibody-mediated skeletal muscle weakness. However, its role is yet to be defined. The studies described herein summarize our efforts to determine how intrathymic expression of the neuromuscular type of acetylcholine (ACh) receptors is involved in the immunopathogenesis of this autoimmune disease. 
We review the work characterizing the expression of neuromuscular ACh receptors in the thymus and advance a new hypothesis that examines the intrathymic expression of this autoantigen in disease pathogenesis."} +{"text": "We describe a case of an 8-year-old boy who developed a combined motor and sensory neuropathy of the distal ulnar nerve, after sustaining a superficial injury to the right flexor carpi ulnaris tendon at the level of the distal wrist crease. Guyon's canal syndrome is a very rare entity during childhood. We have noted only one prior description of this syndrome in the pediatric age group in a review of the English literature. The distal ulnar tunnel, Guyon's canal, is 4\u20134.5 cm long. It begins at the proximal edge of the palmar carpal ligament and extends to the fibrous arch of the hypothenar muscles. The tunnel has frequently changing boundaries and does not have four distinct walls throughout its course. From proximal to distal, the roof consists of the palmar carpal ligament, the palmaris brevis, and the hypothenar fibrous and fatty tissue. The floor of the tunnel is made up of the flexor digitorum profundus, the transverse carpal ligament, the piso-hamate and piso-metacarpal ligaments and the opponens digiti minimi. The flexor carpi ulnaris, the pisiform, and the abductor digiti minimi constitute the medial wall. The lateral wall is composed of the tendons of the extrinsic flexors, the transverse carpal ligament, and the hook of the hamate. There are four levels at which the ulnar nerve may be compressed at the wrist and hand: 1) The main trunk of the nerve at the entrance to, or within, Guyon's canal. These lesions produce sensory loss in the distribution of the superficial terminal branch and weakness of all the ulnar-innervated intrinsic muscles. 2) The deep terminal motor branch of the ulnar nerve distal to Guyon's canal but proximal to the branches that innervate the abductor digiti minimi (hypothenar muscles). This produces weakness of all ulnar-innervated muscles of the hand without sensory loss. 3) The deep motor branch distal to the branches that innervate the abductor digiti minimi and the hypothenar muscles. This produces no sensory loss but there is weakness of all the ulnar-innervated intrinsic hand muscles except the hypothenar muscles. 4) The superficial terminal sensory branch, which produces sensory loss without muscle weakness. Guyon's syndrome in the paediatric age group is extremely rare; a search of the literature in English yielded one case. An 8-year-old boy presented to our clinic complaining of numbness of the little finger and the ulnar aspect of the ring finger. Ten days prior to presentation, the patient sustained a 1 cm laceration at the level of the distal wrist crease after falling on a piece of broken glass. On examination, he had weakness of abduction and adduction of the fingers. Movement of the thumb was unaffected. The injury was managed at the emergency department by thorough wound irrigation. There was a partial irregular cut of about 30% of the radial aspect of the FCU with intact ulnar nerve and ulnar artery. The skin was sutured. After the primary management, the patient was sent to our orthopaedic clinic for further follow-up. The initial examination one week after the injury revealed a clean wound, no hematoma or swelling, normal sensation of the fifth and ulnar side of the fourth finger, and normal abduction and adduction of the digits. 
However, a gradual numbness and weakness of the intrinsic hand muscles was noted after 10 days, which gradually worsened. On subsequent follow-up, a total ulnar nerve deficit was noted distal to the injury at the wrist level, involving motor and sensory branches. Three weeks after the initial injury he developed marked weakness of all ulnar-supplied intrinsic muscles with total sensory loss over the fifth finger and the ulnar side of the fourth finger. Due to the progressive nature of his symptoms, exploration and decompression of Guyon's canal was done under general anaesthesia. Exploration revealed normal healing of skin and subcutaneous tissue with excessive scar tissue at the radial edge of the FCU which spanned the ulnar nerve, narrowing the entrance of Guyon's canal and causing severe compression and cicatricial constriction of the nerve. The ulnar nerve was completely intact (Fig.). Intrinsic lesions as well as extrinsic pathologies (chronic repetitive trauma) can damage the terminal superficial and/or deep branches of the ulnar nerve at the wrist and at the hand, leading to distinct clinical features. The most common lesion, at the proximal Guyon's canal (Type 1), is characterised by sensory loss at the ulnar portion of the hand and weakness of all ulnar intrinsic hand muscles (mixed sensory-motor dysfunction), whereas a more distal lesion within Guyon's canal (Types 2 and 3) causes an isolated palsy of the deep terminal motor branch without sensory loss (pure motor dysfunction). To our knowledge this is the first case published in the English-language literature which reports Guyon's syndrome secondary to excessive healing tissue following partial tendon injury in the paediatric age group. This may be attributable to the high healing potential in paediatric patients. In addition, the experience of this case presents a challenge to the current dogma of withholding repair of tendon lacerations of less than 60% of the tendon's cross-sectional area. In the paediatric age group, penetrating glass injuries in proximity to neurovascular structures are best explored irrespective of distal neurologic deficits. Clearly the surgical exploration should not supplant a thorough preoperative clinical examination. The potential for otherwise missing incomplete tendon and vascular injuries is high. It also gives one the opportunity to more thoroughly irrigate the wound and evaluate hematomas. A neuroma-in-continuity found at delayed exploration is more difficult to treat than the original acute injury. Surgical exploration is indicated in all lacerations of the hand and upper extremity unless the level of injury is sufficiently superficial to enable exclusion of damage to vital structures in the emergency department. The experience of this case presents a challenge to the current dogma of indications for tendon repair, especially in the paediatric population. The authors declare that they have no competing interests. AK carried out the operation, YD participated in the sequence alignment and drafted the manuscript, TTS and ANY participated in the design and coordination of the manuscript. All authors read and approved the final manuscript. Written informed consent was obtained from the patient for publication of this case report and accompanying images. A copy of the written consent is available for review by the editor-in-chief of this journal."} +{"text": "Mucinous carcinomas are defined on the basis of the amount of the mucus component in the tumour mass. 
Apart from this quantitative criterion, a number of clinicopathological parameters and genetic alterations differentiate these tumours from the non-mucinous ones. Since a different set of genetic lesions implies different inducing agents, these observations suggest that there may be a 'mucinous pathway of carcinogenesis'. Further identification of genetic changes characteristic of the mucinous phenotype will help to understand the aetiology of these tumours and possibly establish markers for detection of the high-risk group."} +{"text": "Since the discovery of the epidermal growth factor more than 40 years ago by Stanley Cohen and colleagues, scientific research in this area has expanded to near exponential rates. The epidermal growth factor receptor family and its associated ligands have revealed much in terms of the molecular biology of receptor tyrosine kinases and definitely proved the first link of an activated oncogene to malignant disease. On the basis of this bedrock of science, the pharmacological development of targeted therapies against this signalling system has proved to be one of the most important areas of translational research over the last two decades. Now in the clinic, agents directed against the epidermal growth factor receptor are hugely important tools in the treatment of several different solid tumours. Ongoing clinical trials seek to refine the use of these therapies and further scientific research reveals additional applications such as the use of soluble receptor.An impressive set of experts active in the field have contributed towards the 14 chapters in this book, across a spectrum from epidermal growth factor receptor protein isolation to complex techniques such as sensitive assay for monoubiquitination of receptor tyrosine kinases. A comprehensive, yet succinct introductory chapter provides a highly readable historical perspective to those coming in this field of study. An entertaining final chapter underlines clinical and pharmacological issues in the use of monoclonal antibodies and small molecule tyrosine kinase inhibitors against this signalling system. Each chapter is well organised with a summary section followed by a brief introduction, outlining how the technique is useful while explaining its relevant application for epidermal growth factor. Materials and equipment are logically and meticulously listed and the methodology is provided with sample results using clear diagrams and simple figures. An impressive notes section at the end of each chapter, drawing upon the authors\u2019 expertise, is sufficiently detailed lending highly useful practical advice at the cutting edge, as if the expert was present in the very same laboratory.The editors are to be applauded for the structure, presentation and choice of methods presented which are relevant to the ongoing research. The references are comprehensive and clearly laid out. The information is easily digestible and very hands-on and would provide a useful laboratory resource for planning and reference. For the clinicians, this book would provide an interesting tour into much of the biology of epidermal growth factor receptor and its increasing relevance to clinical application. 
Overall, this is a well-planned and well-written book covering a variety of techniques applicable to the study of epidermal growth factor receptor and would find a place in those laboratories both already established and those starting out in this ever expanding field of research."} +{"text": "The effect of cysteine on the immunosuppressive activity of three alkylating agents was tested. Cysteine strongly inhibited the ability of cyclophosphamide and nitrogen mustard to depress the direct plaque-forming cell response of mice to sheep red cells. In contrast, the activity of busulphan was usually potentiated by administration of cysteine. This throws doubt on the hypothesis that busulphan exerts its cytotoxic effects by alkylating thiol groups."} +{"text": "As assessed by decrease in tumour volume and inhibition of tumour cell respiration and glycolysis, hyperthermia (intra-tumour temperature 42\u00b0C for one hour) potentiated the destructive effect of radiotherapy (1000 rad) on the allogeneic VX2 carcinoma in the hind limb of rabbits, and chemotherapy (methotrexate) produced a similar potentiation of irradiation. The resulting regression of the primary tumour in each case after dual therapy was comparable to that occurring after 3 applications of local hyperthermia, which has been shown to cure 50% of animals with this carcinoma. Combination therapy did not increase the survival time of the rabbits, however, all of which had lung and lymph node metastases at autopsy. The results focus attention on the relationship between a primary tumour and its metastases. The histological picture and the animal survival data suggest that the mechanism of tumour cell death and resorption of necrotic material following treatment may be important in enabling the host to deal with metastatic cells. After combination therapy, many metabolically and mitotically active cancer cells remained in the tumour mass, and the incomplete destruction of the primary tumour may have left the host with a burden of tumour cells too large to be destroyed by the immune system."} +{"text": "Creutzfeldt-Jakob disease (CJD) has been considered infectious since the mid-1960s, but its transmissibility through the transfusion of blood or blood products is controversial. The causative agent's novel undefined nature and resistance to standard decontamination, the absence of a screening test, and the recognition that even rare cases of transmission may be unacceptable have led to the revision of policies and procedures worldwide affecting all facets of blood product manufacturing from blood collection to transfusion. We reviewed current evidence that CJD is transmitted through blood."} +{"text": "Zoo authorities of Central Indian State of Chattisgarh reported a hyena suffering from the fatal anthrax disease in Nandanvan zoo in capital city Raipur. After laboratory confirmation of anthrax, the zoo was closed for 15 days as precautionary measure. According to the veterinarians of the zoo, all symptoms of anthrax were found in one of the hyenas. The Forest department officials decided to vaccinate animals within the 5 km radius of zoo to contain the disease. 
All employees of Nandanvan zoo had been put on prophylactic antibiotic medicine. Several features make Chattisgarh comparable to Orissa: like Orissa, Chattisgarh has a very high concentration of tribal population (>35%); both states have extensive forest cover; the indigenous populations of both states depend less on agriculture and more on forest and animal produce for food; and the overwhelming majority of people in both states live below the poverty line. Both states also have poor public health infrastructure. This is a perfect amalgamation of risk factors conducive to zoonotic transmission of anthrax to the human population. Hence the probability of an anthrax outbreak occurring in the human population would increase on a logarithmic scale each time an episode of anthrax is detected in the animal population in a state like Chattisgarh. So epizootology and epidemiology should both be applied, not only for anthrax but for every zoonotic disease; hence the importance of veterinarians and doctors putting their heads together. Anthrax needs to be prevented with proper legislation for meat handling as well as effective immunization of animals. Coordination with the veterinary and animal husbandry departments is needed for surveillance and a livestock vaccination drive covering all cattle. Behavioral change communication for anthrax prevention is also needed: educating the inhabitants of all the hamlets with one very important line of health education, namely not to consume raw meat but to cook it well before eating, or else risk contracting anthrax. Inter-state cooperation between Chattisgarh and Orissa is needed on the management of anthrax prevention and control. Orissa is carrying out an annual livestock vaccination drive in the affected districts like Koraput, Kalahandi, Malkangiri, etc. Chattisgarh state could look to Orissa for sources of funding for such a program. It is important to take up livestock vaccination against anthrax. It is one of the most credible ways of providing health security to the vast majority of underprivileged sections of society, to prevent them from further impoverishment by breaking the cycle of poverty and infection. The actual incidence of anthrax in India is not known accurately, mostly due to underreporting."} +{"text": "Malarial anaemia is an enormous public health problem in endemic areas and occurs predominantly in children in the first 3 years of life. Anaemia is due to both a great increase in clearance of uninfected cells and a failure of an adequate bone marrow response. Odhiambo, Stoute and colleagues show how the age distribution of malarial anaemia and the haemolysis of red blood cells may be linked by an age-dependent increase in the capacity of red blood cells to inactivate complement components absorbed or deposited directly on to the surface of the red blood cell. In this commentary, we discuss what has been established about the role of complement deposition on the surface of red blood cells in the pathology of malarial anaemia, how genetic polymorphisms of the complement control proteins influence the outcome of malaria infection and how the findings of Odhiambo, Stoute and colleagues and others shed light on the puzzling age distribution of different syndromes of severe malaria. 
In the accompanying article, Odhiambo, Stoute and colleagues show how the age distribution of malarial anaemia and the haemolysis of red blood cells (RBCs) may be linked by an age-dependent increase in the capacity of RBCs to inactivate complement components absorbed or deposited directly on to the surface of the RBC . The worMalaria remains an enormous problem in public health around the world . Over 2 Furthermore, there remain major unsolved problems about the fundamental pathophysiology of all syndromes of severe malaria. The rapid drop in haemoglobin during acute infection and the slower decline in chronic infection appear to be due to increased extravascular haemolysis of RBCs with a concomitant failure of the bone marrow to increase red cell production to compensate for these losses .The increased clearance of infected cells is readily explained by the rupture of cells after completion of the parasite's intra-erythrocytic life cycle and opsonisation and clearance of intact infected RBCs. Rather less obvious is why and how uninfected cells are also cleared. It has been estimated that approximately 10 uninfected cells are cleared from the circulation for every infected cell and so the clearance of uninfected cells is of crucial importance for the development of malarial anaemia .Why are uninfected RBCs cleared in such large numbers? Certainly the number and activation of splenic and other macrophages for phagocytosis of red cells is greatly increased during malarial infection -9. The iUninfected RBCs have a reduced deformability leading to enhanced clearance in the spleen and a severe reduction in red cell deformability is also a strong predictor for mortality measured on admission, both in adults and children with severe malaria ,11. SecoThe role of immunoglobulin and complement in marking uninfected RBCs for clearance by phagocytes was first studied by Facer and colleagues ,13 in ThThe story of how absorbed immune complexes may contribute to increased clearance of uninfected RBCs lay dormant for 20 years when Waitumbi, Stoute and colleagues based in Western Kenya began to study how immune complexes caused haemolysis . AppreciHere, a number of proteins are involved in the control of complement activation. Complement receptor 1 (CR1 or CD35), decay accelerating factor (DAF or CD55) and the membrane inhibitor of reactive lysis (MIRL or CD59) may enhance binding of C3b in immune complexes (CR1), enhance inactivation of C3 convertases (CR1 and CD55) and interfere with the assembly of the terminal components of complement that form the membrane attack complex (CD59) . Immune in vitro than RBCs from controls. Decline in CD35 or CR1 expression and increases in immune complexes bound on uninfected RBCs were associated with anaemia but these declines in CD35 (CR1) and CD55 expression were only transiently associated with malaria infection and levels returned to normal after infection had been cleared [In previous papers, Waitumbi, Stoute and colleagues have shown that the amount of red cell surface IgG is increased but red cell surface CR1 and CD55 reduced in children with severe malaria compared with asymptomatic and symptomatic controls . The dif cleared . These dHow does this relate to the age-dependent incidence of malarial anaemia? 
Population studies in Europe and Africa showed that CR1 expression was strongly age-dependent: increases of both CR1 and CD55 were seen after 4 years of age, and low levels of CR1 and CD55 expression were seen in cases of severe malarial anaemia compared with slightly older children with cerebral malaria. How do these findings help explain the age distribution of syndromes of severe malaria? If clearance of the immune complexes absorbed on to the surface of uninfected cells is reduced in younger children expressing lower levels of CD35 and CR1, then clearance of these cells would be increased, leading to more anaemia in these younger age groups. Genetic polymorphisms also affect the expression levels, sequence and domain structure of CR1 in Africans and other populations. It is possible, therefore, that age-related and genetically determined reductions in the levels of CR1 expressed on RBCs are associated with an increased susceptibility to anaemia but protection from other forms of severe malaria, and may provide an example of how innate resistance to one syndrome of malaria may be at the expense of susceptibility to other pathophysiological pathways involved in malaria infection. These hypotheses should ideally be tested in longitudinal studies using genetically and phenotypically well-characterised children to ascertain the levels of these age-dependent complement regulatory proteins prior to infection and determine their association with haemoglobin levels during acute infection and with the incidence of severe disease through childhood. Such studies can only be done in a few sites in Africa where large populations can be followed before and after infection and presentation to a health facility using a demographic surveillance system. In assessing the role of these age-related changes to the complement regulatory proteins in the differential presentation of anaemia and coma, one would also have to consider the recently described findings of increased levels of erythropoietin (EPO) seen in younger children with anaemia and malaria. Finally, we would ask where these studies will lead in the quest for new methods to prevent or treat malarial infection. The findings of Odhiambo, Stoute and colleagues presented in the accompanying article in BMC Medicine provide us with a clearer understanding of the causes of anaemia in children with malaria. Translating this new understanding of pathology into fruitful avenues to investigate prevention and treatment of malaria is perhaps the greatest challenge of clinico-pathological studies in this and other disciplines. CR1: complement receptor 1; DAF: decay accelerating factor; DCT: Direct Coombs' Test; EPO: erythropoietin; MIRL: membrane inhibitor of reactive lysis; RBC: red blood cell. The pre-publication history for this paper can be accessed here:"} +{"text": "Slices of human prostatic adenocarcinoma obtained by transurethral resection were maintained in organ culture for 4 days. Preservation of histological appearance was good with little evidence of necrosis within the viable tissue. Slices of tumour cultured in the presence of testosterone showed a morphological change to a more differentiated type of neoplasm whereas explants cultured in the absence of steroid hormone, or with stilboestrol diphosphate, showed no change. 
In the case of a relatively anaplastic tumour, testosterone produced a significant increase in the number of mitotic figures seen."} +{"text": "Human lactate dehydrogenase is a tetramer made up of two types of subunits, either H (heart) or M (muscle). Combination of these subunits gives rise to the five isoenzymes of lactate dehydrogenase which are found in mammalian tissues. The relative proportions of the individual isoenzymes found in the serum of patients are related to the severity of the lesion in the organ or tissue from which they originate and the half-life of the individual tissue-specific enzymes. Thus, one cannot predict the relative proportions of the different isoenzymes in any one patient sample. Lactate dehydrogenase catalyses the reversible oxidation of lactate to pyruvate and either reaction can be measured readily. However, in this method, the lactate to pyruvate reaction has been selected for the following reasons: the time-course of the reaction is more linear, the reaction results in an increase in absorbance and optimization of substrates is possible (see appendix A). The principles applied in the selection of the conditions of measurement are those stated in previous publications by the IFCC\u2019s Committee on Enzymes [1]. Human serum and tissue extracts have been used as the sources of enzymes. The final concentration of substrates and the pH have been selected on the basis of experiments and empirical optimization techniques and have been confirmed by calculation from rate equations. The catalytic and physical properties of the isoenzymes differ, but because of the importance of the heart-specific isoenzyme (LD1) in the assessment of coronary heart disease and as a tumour marker, this method has been optimized for this isoenzyme. However, the method is also suitable, although less optimally, for the determination of the other isoenzymes of lactate dehydrogenase which may be present in serum."} +{"text": "Daylight saving time affects millions of people annually but its impacts are still widely unknown. Sleep deprivation and the change of circadian rhythm can trigger mental illness and cause higher accident rates. 
Transitions into and out of daylight saving time change the circadian rhythm and may cause sleep deprivation. Thus it seems plausible that the prevalence of accidents and/or manic episodes may be higher after transitions into and out of daylight saving time. The aim of this study was to explore the effects of transitions into and out of daylight saving time on the incidence of accidents and manic episodes in the Finnish population during the years 1987 to 2003. The nationwide data were derived from the Finnish Hospital Discharge Register. From the register we obtained information about hospital-treated accidents and manic episodes during two weeks before and two weeks after the transitions in 1987\u20132003. The results were negative, as the transitions into or out of daylight saving time had no significant effect on the incidence of accidents or manic episodes. One-hour transitions do not increase the incidence of manic episodes or accidents which require hospital treatment. Daylight saving time (DST) is used to match the seasonal changes in daylight exposure to the activity peaks of a population. It is important to explore the effects of transitions into DST on public health, as DST affects millions of people annually and its impacts are still widely unknown. Turning the clock forward (in spring) or backward by one hour presumably impacts our circadian rhythms. The study of DST can provide interesting knowledge about the importance of individual differences in the adjustment to changes in circadian rhythms. DST studies can also provide a useful indication of which type of people will generally find it easy to adjust and which will not. Circadian clocks regulate the endogenous rhythms of vital functions. Circadian rhythms are not able to adjust instantaneously to sudden changes in the sleep-wake cycle, and thus sudden changes such as transitions into DST can cause circadian rhythm disruptions. Both the Parliament and Council of the European Union (EU) have stated that personal injuries are one of the central health problems in societies. According to the Ministry of Social Affairs and Health of Finland, accidents cause annual costs of approximately 3.3\u20135 billion euros (counted for accidents involving those aged fifteen years and older). It is presumable that if functioning systems to prevent accidents can be developed, the number of accidents will decrease. Even a small reduction in the number of accidents can yield major savings; if the number of hospital-treated accidents can be reduced by even 5%, 10 billion euros would be saved in treatment and social expenditures within the EU area. In our earlier studies we have shown that fall and spring transitions into DST may have disruptive effects on the rest-activity cycle of healthy adults. We hypothesized that transitions into and out of daylight saving time may influence the incidence of hospital-treated accidents and manic episodes. This hypothesis was based on earlier findings that sleep disruption can cause accidents and manic episodes. We assumed a peak in the number of hospital-treated manic episodes and accidents after DST transitions. The material was derived from the Finnish Hospital Discharge Register, which is supported by the National Research and Development Centre for Welfare and Health. Permission for the study was authorized by the National Research and Development Centre for Welfare and Health (STAKES), Helsinki, Finland, which is the holder of the nationwide data for the National Hospital Discharge Register. 
For the analysis, all the data were anonymous. The Finnish Hospital Discharge Register covers all mental and general hospitals, as well as in-patient wards of local health centers, military wards, prison hospitals and private hospitals in Finland. From the register we obtained the information about the hospital-treated accidents and manic episodes during two weeks before and two weeks after the transitions in 1987\u20132003. The into-transitions took place on the last Sunday of March during the study period. Prior to 1996, the out-of-transitions took place on the last Sunday of September. Since 1996, Finland as a member of the European Union adopted the last Sunday of October as the out-of-transition date.Accidents and manic episodes were classified according International Classification of Diseases (ICD). Before 1995 the diagnoses were coded according to ICD-9, and for 1996\u20132003 they were coded according to ICD-10 . All periods of hospital treatment due to accidents and manic episodes appearing under the two preceding or two following weeks of spring and fall transitions were mapped out and from this data we gathered information as follows: the start and end days of treatment, diagnosis, the year of birth, sex and personal identity number.We constructed a contingency table of frequencies of accidents and manic episodes in respect to year (1987\u20132003), season , age , sex , geographical location , and period (before or after transitions). This table was modeled using Poisson regression with the cell frequency as response variable and the table marginal as the explanatory variable . The sigTransitions into or out of DST had no significant effect on the incidence of accidents or manic episodes in the Finnish population during the years of 1987 to 2003 .The reason why not seen any impact of fall and spring transitions to manic episodes and accidents can derive from following factors; Genetic contributions are important explainers of the appearance of manic episodes and it seems that in a Finnish population the prevalence of manic episodes is quite low thus theConcerning accidents the spring transition into DST might increase the number of minor accidents, such as small pedestrian casualties and tin crashes. However, since such accidents are unlikely to lead to admission, they will not be recorded in statistics. Accidents took place slightly more often in spring than in fall table . This fiThis study indicated no effect of DST transitions on admissions due to manic episodes or accidents.The author(s) declare that they have no competing interests.TAL made contributions to the analysis and interpretation to the drafting and writing of the manuscript.JH made contributions to statistical modeling and analysis and to the drafting of the manuscript.JL participated in the planning of drafting of the manuscript.TP participated in the planning of the study, in the analysis of data, and in the drafting of the manuscript.All authors red and approved the manuscript.The pre-publication history for this paper can be accessed here:"} +{"text": "Insects are not only of immense economic and medical importance, they provide important models for the advancement of knowledge and many are of ascetic value. They are also beset by an array of pathogens, despite their well developed defence systems. Interest in these pathogens revolves around their potential use as biological control agents for insect pests or the desire to protect beneficial insects against infection. 
The adoption of molecular approaches to the investigation of insect pathogens and pathogen-insect interactions has had a large impact on research and this volume provides a timely review of how these techniques are being applied.The multi authored book, consisting of sixteen chapters, is broken down into four themes; identification and diagnostics, evolutionary relationships and population genetics, host-pathogen interactions and genomics and genetic engineering. The chapters follow a standard format and consist of several short sections covering a wide range of topics related to the chapter heading. In some cases this results in a superficial overview, but all chapters contain a comprehensive list of references so that readers can follow up areas of specific interest. Most chapters deal with particular taxa of entomopathogenic organisms thus the reader may wish to focus upon chapters pertaining to their particular study group or take a comparative approach. Emphasis throughout is on molecular techniques, often discussed and evaluated from the point of view of the authors' personal experience. Although a useful glossary is provided the extensive use of acronyms warrants, in addition, a list of abbreviations.In the first section of the book authors of all the chapters emphasise the importance of molecular techniques to the revision of taxonomy and classification on a phylogenetic basis. Likewise in the second section authors emphasise the significant changes made to hypothesis concerning the evolutionary biology of insect pathogens following the application of DNA sequencing and analytical methods. Chapters on host-pathogen interactions cover an eclectic range of topics including the development of a baculovirus expression vector, tsetse fly immune responses and gene expression in tripartite nematode-bacterium-insect symbiosis. In the last chapter in this section and those in the final section on genomics and genetic engineering, detailed overviews of molecular strategies are given, a case study approach is frequently taken and future prospects and pitfalls discussed.As someone not directly involved with the fields of work presented in this book I found aspects of all chapters interesting. The overviews of techniques may prove valuable to PhD students or researchers considering taking a molecular approach to their work on insect pathogens however, consultation of detailed technical manuals is likely to be required before investigations can begin.The author declares that they have no competing interests."} +{"text": "Minimally displaced fractures of the surgical neck of the humerus are rarely associated with axillary artery injury. The innocuous appearance of the x-rays can be misleading and a missed arterial injury in these fractures could potentially lead to disastrous consequences. We report the case of a patient who sustained a minimally displaced fracture of the proximal humerus with vascular compromise requiring immediate investigation and referral to vascular surgeons. Despite spontaneous resolution of the vascular insult, it is important to remember the association of such fractures with vascular injuries in order to diagnose them early and prevent serious complications including amputation. Minimally displaced fractures of the neck of the humerus are rarely associated with injury to the axillary artery . 
In thisA relevant case is discussed and mechanisms related to vascular injuries in association with proximal humerus fractures are described with the emphasis on having a low threshold of suspecting and immediately treating such injuries in order to prevent catastrophic results.A 74-year-old white British lady attended the Accident and Emergency department after having fallen from a low height at home. She complained of pain around the right shoulder and had bruising extending from the shoulder to the elbow. There was no gross deformity of the shoulder. The fingers were cold to the touch with a delayed capillary refill. The nail beds were cyanosed. Further examination of the right upper limb revealed loss of pulsations in the brachial and radial arteries with preservation of sensations in the hand. A Doppler ultrasound also failed to detect distal pulsations. X-rays of the shoulder showed a minimally displaced fracture of the surgical neck of the humerus Figure . VasculaThe patient was admitted for monitoring of the distal pulses, but no further loss of pulsations was documented. At 12 months of follow-up, she had good functional outcome with a normally perfused limb.Fractures of the proximal humerus account for 4-5% of all fractures . AxillarThere are several mechanisms by which the axillary artery can be injured in proximal humeral fractures. A direct injury to the artery by a sharp bony fragment can cause laceration and rupture , violentDiagnosis of axillary artery injury may be difficult as peripheral pulses may remain intact initially and later disappear. As a result, vascular injury can occasionally manifest several days after a fracture of the proximal humerus .Paraesthesia is probably the most reliable symptom of inadequate distal circulation and should always be taken seriously. Collateral circulation around the shoulder is effective, and depending on the level of injury to the axillary artery, distal circulation might remain adequate and the patient asymptomatic . In DrapArterial injuries associated with proximal humeral fractures can be easily missed. Our case illustrates that minimally displaced fractures of the surgical neck of the humerus can be associated with vascular compromise and this can occur in the elderly with lesser degrees of force because of atherosclerotic rigidity of the artery. Potential complications can be avoided by careful examination of the patient and avoiding treatment of the x-ray alone.Once a vascular injury is suspected, Doppler examination is necessary to establish the magnitude and quality of the arterial signal. Angiography should be performed immediately if arterial compromise is suspected after Doppler examination, and vascular surgeons should be consulted early in the management of such patients.Written informed consent was obtained from the patient for publication of this case report and accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal.The authors declare that they have no competing interests.MS co-ordinated the entire effort and wrote the manuscript. GV helped with the draft of the case report, and obtained consent from the patient. NS was the treating and supervising consultant in charge. All authors read and approved the final manuscript."} +{"text": "Despite semen being the main vector of human immunodeficiency virus (HIV) dissemination worldwide, the origin of the virus in this bodily fluid remains unclear. 
It was recently shown that several organs of the male genital tract (MGT) are infected by HIV/simian immunodeficiency virus (SIV) and likely to contribute to semen viral load during the primary and chronic stages of the infection. These findings are important in helping answer the following questions: (i) does the MGT constitute a viral reservoir responsible for the persistence of virus release into the semen of a subset of HIV-infected men under antiretroviral therapy, who otherwise show an undetectable blood viral load? (ii) What is the aetiology of the semen abnormalities observed in asymptomatic HIV-infected men? (iii) What is the exact nature of the interactions between the spermatozoa, their testicular progenitors and HIV, an important issue in the context of assisted reproductive techniques proposed for HIV-seropositive (HIV+) men? Answers to these questions are crucial for the design of new therapeutic strategies aimed at eradicating the virus from the genital tract of HIV+ men \u2013 thus reducing its sexual transmission \u2013 and for improving the care of serodiscordant couples wishing to have children. This review summarizes the most recent literature on HIV infection of the male genital tract, discusses the above issues in light of the latest findings and highlights future directions of research. Shortly after the first cases of acquired immuno-deficiency syndrome (AIDS) were described in 1981 in the United-States, two main populations at risk were identified, homosexual men and haemophiliacs. Based on these observations, the cause of AIDS was hypothesized to be due to a sexually and blood transmitted pathogenic agent, even before the discovery of its aetiological agent, the human immunodeficiency virus (HIV). Twenty six years later, the spread of HIV has become a global phenomenon and infected more than 65 million people of whom 25 million are deceased, according to the latest estimates. In 2007, 2.7 million people were newly infected, 80% of whom were through sexual transmission.Semen represents the main vector of HIV dissemination, evidenced by transmission occurring more efficiently from men to women and men than from women to men ? It is established that the viral strains present in semen do not solely arise from the blood compartment but is/are the MGT organ(s) responsible for the seminal viral load?What is the nature of the interactions among HIV, the spermatozoa and their progenitor cells the testicular germ cells?Does the MGT constitute a viral reservoir resistant to current anti-HIV therapy? Highly Active Antiretroviral Therapy (HAART) does not always eradicate the virus from semen, even when achieving an undetectable viral load in blood and can favour the sexual transmission of drug-resistant strains;What is the cause of the semen abnormalities recently described in HIV+ men?Answering these questions is crucial for the design of new therapeutic strategies aimed at eradicating the virus from the genital tract of HIV+ men and for improving the care of serodiscordant couples wishing to have children. The aim of this review was to summarize the current knowledge on HIV infection of the male genital tract, discuss the above questions in view of the latest findings and highlight future directions of research.HIV is present in semen as free viral particles and infected cells. It was originally believed that the only source of HIV in semen was infected lymphocytes and macrophages coming from the blood. 
It has now been shown that the HIV strains present in semen evolve separately from the strains in the blood or in other anatomical compartments ]. Thus iAs infected leucocytes in semen produce viral strains that are different from those in blood leucocytes in spermatozoa purified using a gradient of Percoll without swim-up (which further separates motile spermatozoa from non-motile), these positive finding were assumed to result from contaminations of this fraction by a few remaining infected leucocytes or from false positives . In the In conclusion, spermatozoa display several receptors that could allow HIV specific binding during their progression through the male genital tract. Thus it is likely that spermatozoa can act as a carrier for viral particles encountered within the testis or epididymis, i.e. in the absence of seminal plasma, an inhibitor for some of the HIV receptors present on spermatozoa. It is established that spermatozoa do not produce HIV particles. Whether they can support the early steps of HIV replication as proposed by some authors remains speculative, as the vast majority of studies did not evidence any HIV genetic material and spermatozoa are considered as metabolically inert cells. However, non-specific mechanisms such as foreign RNA uptake could be at play and explain the detection of HIV DNA in a subset of abnormal spermatozoa. The exact nature of the interactions between HIV and spermatozoa, and their impact on spermatozoa morphology, are far from fully understood and require further studies.The infection of the testis by HIV can have important consequences for the eradication of the virus from the MGT by antiretroviral therapies. Thus, the existence of the blood testis barrier and of the drug efflux pumps of the ABC transporter family expressed by a wide range of testicular cell types, restrict the drug access to this organ, as shown for some HIV replication inhibitors . The in During the later stages of the disease, the testis morphology is severely damaged , with diTo conclude, recent data show that the testis is infected early during the course of HIV infection. This infection is not associated with either any apparent change in testicular morphology or inflammation of the organ or a cellular (e.g. the latently infected resting memory lymphocytes) site, impermeable to the action of one or several antiviral drugs and within which the virus replicates or persists despite treatment. Such sanctuaries are called reservoirs when they replenish the body in free virus or infected cells. Thus when HAART is discontinued, the blood plasma viral load that was undetectable under the pressure of the antiretroviral drugs systematically rises again from these reservoirs to have children without contaminating the partner and embryo. 
ART uses spermatozoa isolated from the infected components in semen and tested negative for viral DNA/RNA determining whether one or several of the infected MGT organs constitute a viral reservoir, which could explain the persistence of HIV in the semen of men under effective treatment with undetectable blood viral load; (ii) determining the aetiology of the semen parameter modifications in HIV+ men under HAART; (iii) deciphering the exact nature of the interactions between HIV, the testicular germ cells and the spermatozoa, both important issues in the context of ART; (iv) analysing the effect of HIV infection on the seminal plasma composition and its impact on HIV infectivity, which may reveal new mechanisms that could be useful in the fight against the AIDS pandemic as a few studies suggest that seminal plasma factors may influence HIV sexual transmission."} +{"text": "Recent studies support the hypothesis of a close aetiological and pathogenic association between the presence of patent foramen ovale (PFO) and cryptogenic stroke. The therapeutic options currently used in the treatment of these patients range from standard antiaggregation and standard-dose anticoagulation to the percutaneous occlusion of the PFO. The use or recommendation of treatment is based both on clinical risk factors associated with PFO, such as age, detection of states of hypercoagulability and previous history of stroke, and on the risks associated to right-to-left shunt (RLSh) and PFO, such as the size of PFO, magnitude of RLSh and the presence of atrial septal aneurysm (ASA). However, there is currently no consensus regarding the most suitable treatment and it is surprising to observe the widespread use of certain therapeutic approaches which are not supported by clinical evidence. In this revision, we analyse the relevance of PFO in cryptogenic stroke, consider the main evidence available for determining the best management of these patients and make diagnostic and therapeutic management recommendations. I). Some of these studies have shown an association between the size of PFO, the magnitude of right-to-left shunt (RLSh), and the presence of atrial septal aneurysm (ASA) with increased stroke risk vs. 6.5 ml [1.3-16.6]) and suggests that the mechanism of stroke in patients with and without RLSh/PFO is different. This hypothesis is supported by the greater prevalence of risk factors in patients with cryptogenic stroke without RLSh. The mechanisms of the stroke involved in RLSh appear less severe than those involved in patients without RLSh. We must be careful not to fall into the logical error of confusing the identification of a specific aetiology for a condition with the need to aggressively combat it. One aspect which has been little evaluated but which is very interesting when deciding which type of therapeutic intervention to undertake is the analysis of the consequences of suffering a stroke associated with RLSh/PFO. The fact that cryptogenic stroke associated with a RLSh presents a low annual risk of recurrence together with the lesser severity of stroke associated with RLSh shown by the CODICIA study ,60 advisBefore indicating PFO closure, anticoagulation or antiplatelet therapy, we need to identify the subgroup of patients at high risk of stroke recurrence which may benefit from the application of these therapies. 
Several stroke associations, including the American Heart Association, American Stroke Association, American Academy of Neurology ,61 and tAnticoagulant therapy may be used in selected cases with high risk of thromboembolic events such as hypercoagulable states or evidence of deep venous thrombosis . In clinical practice, PFO closure should be individualized and considered in young patients with recurrent stroke receiving medical treatment or in previously mentioned situations where anticoagulant treatment is considered . Ongoing trials have a low recruitment rate with a risk of bias if younger patients or those with severe PFO or associated anomalies are treated out of the trials. It is therefore important that effort should be made to randomize patients systematically in these trials and to improve the collaboration between neurologists, basic scientists and cardiologists so that reliable results regarding the best treatment options can be established for our patients as soon as possible."} +{"text": "We report from the second ESF Programme on Functional Genomics workshop onData Integration, which covered topics including the status of biological pathwaysdatabases in existing consortia; pathways as part of bioinformatics infrastructures;design, creation and formalization of biological pathways databases; generatingand supporting pathway data and interoperability of databases with other externaldatabases and standards. Key issues emerging from the discussions were the need forcontinued funding to cover maintenance and curation of databases, the importanceof quality control of the data in these resources, and efforts to facilitate the exchangeof data and to ensure the interoperability of databases."} +{"text": "The use of restrictive measures such as quarantine draws into sharp relief the dynamic interplay between the individual rights of the citizen on the one hand and the collective rights of the community on the other. Concerns regarding infectious disease outbreaks have intensified the need to understand public perceptions of quarantine and other social distancing measures.We conducted a telephone survey of the general population in the Greater Toronto Area in Ontario, Canada. Computer-assisted telephone interviewing (CATI) technology was used. A final sample of 500 individuals was achieved through standard random-digit dialing.Our data indicate strong public support for the use of quarantine when required and for serious legal sanctions against those who fail to comply. This support is contingent both on the implementation of legal safeguards to protect against inappropriate use and on the provision of psychosocial supports for those affected.To engender strong public support for quarantine and other restrictive measures, government officials and public health policy-makers would do well to implement a comprehensive system of supports and safeguards, to educate and inform frontline public health workers, and to engage the public at large in an open dialogue on the ethical use of restrictive measures during infectious disease outbreaks. Prior to the 2003 outbreak of severe acute respiratory syndrome (SARS), it had been more than 50 years since mass quarantine measures had been invoked in North America and 'infectious disease' . 
At the conclusion of the survey, respondents were asked to supply general demographic information.After the response format was explained and before the first survey item was asked, all participants were provided standardized definitions of 'quarantine' : (a) parking in a no-parking zone; (b) driving way above the speed limit on a busy street; or (c) physical assault.\" Fully 59% responded that breaking quarantine is most like 'physical assault,' whereas 27% selected 'driving above the speed limit' and 8% chose 'parking in a no-parking zone' (6% did not answer).Principal components factor analysis of the survey data yielded an underlying factor structure of four independent factors. Based on a subjective analysis of the content of items loading on each individual factor, the four factors were labelled as follows: 'Justification,' 'Sanctions,' 'Burdens,' and 'Safeguards' , p < .001], thereby indicating greater agreement that the use of quarantine is justified in the context of an infectious disease outbreak. With respect to age, older respondents (>65 yrs) indicated greater agreement that use of quarantine is justified than did the young (18-35 yrs) . Also, older respondents agreed more strongly that the use of sanctions for quarantine absconders is appropriate when compared both with the young and with the middle-aged (36-65 yrs) . There were no significant differences by region.The quarantine of exposed persons has been properly described as the most complex and most ethically and legally controversial intervention within the jurisdiction of public health . ComplexData on public attitudes toward quarantine in the wake of SARS are scarce. Public opinion polls have indicated high levels of acceptance of quarantine among samples of Toronto-area residents (97%) and US citizens (93%) . These fComparative data from international studies do lend support to the theory that cultural values and societal norms impact upon quarantine compliance rates. Researchers at the Harvard School of Public Health and the U.S. Centers for Disease Control and Prevention surveyed residents of Hong Kong, Taiwan, Singapore, and the U.S. and found significant regional variability . The proIn view of this inter-region variability, it is not surprising that the global community of public health experts is itself conflicted about the use of quarantine and other restrictive measures that impinge upon the intrinsic rights of individuals. Those who favour the consideration of quarantine during infectious disease outbreaks maintain that it is prudent public health policy , whereasWith a view to fostering further deliberation and constructive debate, we are proposing a conceptual framework for the ethical use of restrictive measures in public health emergencies (see Figure Much has been learned from the unexpected arrival of SARS in the spring of 2003 ,27. LikeOwing to the global threat of pandemic influenza, considerable planning and preparation for infectious disease outbreaks has been undertaken . There rWhile we believe the data reported here contribute to the goal of better planning and better preparedness, the present study is limited by its sample of respondents who were drawn only from the Greater Toronto Area. Our goal was to assess the attitudes and perceptions of those living in an area significantly impacted by the SARS outbreak, but further research is now required to determine the generalizability of the present findings to other geographic regions and other populations. 
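As a rough, illustrative sketch of the kind of analysis reported above (principal components extraction of four factors from Likert-scale survey items, followed by comparison of factor scores across age groups), the following Python snippet shows how such data could be processed. The file name, item columns, age-group variable and the use of plain PCA plus a one-way ANOVA are assumptions for illustration only, not the study's actual instrument or statistical code.

```python
# Illustrative sketch only: hypothetical survey data with Likert items q1..q12
# and an age_group column; not the actual survey instrument or dataset.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from scipy import stats

df = pd.read_csv("quarantine_survey.csv")              # hypothetical file
items = [c for c in df.columns if c.startswith("q")]   # Likert-scale items

# Principal components extraction (4 components, mirroring the four factors
# labelled Justification, Sanctions, Burdens and Safeguards in the text).
X = StandardScaler().fit_transform(df[items])
scores = PCA(n_components=4).fit_transform(X)
factor_cols = ["justification", "sanctions", "burdens", "safeguards"]
df[factor_cols] = pd.DataFrame(scores, index=df.index, columns=factor_cols)

# Compare mean factor scores across age groups (e.g. 18-35, 36-65, >65).
groups = [g["justification"].values for _, g in df.groupby("age_group")]
f_stat, p_val = stats.f_oneway(*groups)
print(f"Justification factor, one-way ANOVA: F={f_stat:.2f}, p={p_val:.4f}")
```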
Also, our survey was conducted after the conclusion of the outbreak; it is conceivable that public perceptions and attitudes toward the use of restrictive measures could be different during the course of an outbreak. Finally, a relatively small proportion of our survey respondents were directly affected by quarantine during SARS, which precluded any analysis of differences between those who were directly affected and those who were not.The use of restrictive measures such as quarantine draws into sharp relief the push and pull of opposing forces that characterize the dynamic interplay between the personal autonomy of the citizen on the one hand and the collective rights of the community on the other. As Bensimon and Upshur have argThe authors declare that they have no competing interests.CST performed the statistical analysis of the survey data, drafted the first version of the manuscript, and contributed to subsequent revisions. ER initiated the study, participated in the design of the survey instrument, and contributed to the revising of the manuscript. REGU participated in the statistical analysis, contributed to the revising of the manuscript, and will act as guarantor. All authors have read and approved the final version of the manuscript.The pre-publication history for this paper can be accessed here:http://www.biomedcentral.com/1471-2458/9/470/prepub"} +{"text": "Ectopic internal carotid artery (ICA) is a very rare variation. The major congenital abnormalities of the ICA can be classified as agenesis, aplasia and hypoplasia, and they can be unilateral or bilateral. Anomalies of the neck artery may be vascular neoplasms or ectopic position. Carotid angiograms provide absolute confirmation of an aberrant carotid artery, while EcoColorDoppler (ECD) gives also important information about the evaluation of carotid vassels. Nevertheless Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) of the neck provide spatial information about the adjacent pharyngeal anatomy and are less invasive than angiogram. Injuries to the ICA during simple pharyngeal surgical procedures can be catastrophic due to the risk of massive bleeding. We report a case of a 56 year-old male patient suffering from dysphagia associated with aberrant ICA manifesting itself as a pulsative protruding of the left lateral wall of the oropharynx. The congenitally tortuous internal carotid artery (ICA) is an uncommon but important anomaly for the otolaryngologist, to recognize. Numerous descriptions of the anomalies of the greatest vessels of the head and neck, as well as of the ICA have been presented in the literature. The deformities of the ICA have been reported with a large variability of pattern and degree. Some of them determine a dislocation of the ICA that can be found at the level of the pharyngeal wall in some cases. Because of this dislocation, the ICA may cause a widening of the retropharyngeal and lateropharyngeal soft tissues. The ectopic ICA poses a risk during both major oropharyngeal tumor resection and less extensive procedures, such as tonsillectomy, adenoidectomy, and uvulopalatopharyngoplasty. We report a case of a 56 year-old male patient suffering from dysphagia associated with aberrant ICA manifesting itself as a pulsative protruding of the left lateral wall of the oropharynx.A 56 year-old male patient was admitted to our service with dysphagia, and malaise that had progressed over the last week. 
Oral examination revealed an edema at the gingiva and the soft palate area, as well as a redness and pulsative protruding of the left lateral wall of the oropharynx. The rest of the clinical evaluation, as well as the blood tests, was normal. Because of the palatal edema, he was administered methylprednisolone per os. No other medication was given. A Computed Tomography (CT) of the neck was then performed, which revealed the helicoid, ectopic course of the right internal carotid artery (ICA) at the level of the oropharynx (figure). The abnormal extension of the ICA was subsequently confirmed by Magnetic Resonance Angiography (MRA) of the neck (figure). An ectopic internal carotid artery is a very rare variation. Venous anomalies are relatively more frequent than arterial ones. The ICA ascends within the carotid sheath towards the skull base. It is first crossed laterally by the hypoglossal nerve as this nerve passes forward from its position behind the internal carotid. The ICA then crosses the occipital artery, as this artery passes posteriorly from its origin at the external carotid artery. Near the skull base the ICA crosses laterally towards the posterior belly of the digastric muscle and the muscle attached to the styloid process. Laterally to the carotid canal is the deep lobe of the parotid gland. Medially to the carotid are the retropharyngeal space and the superior constrictor muscle. Other vital structures located close to the ICA are the internal jugular vein, the cranial nerves IX to XII, and the external carotid artery. Inferiorly the internal jugular vein lies laterally to the ICA. The glossopharyngeal nerve passes forward between the internal and external carotid artery at the bifurcation. The hypoglossal nerve passes forward laterally to the internal carotid artery just above the bifurcation. The external carotid artery travels anterior to the ICA throughout its entire course. The major congenital abnormalities of the ICA can be classified as agenesis, aplasia and hypoplasia, and they can be unilateral or bilateral. Absence of the ICA is referred to as agenesis or aplasia. Anomalies of the ICA in the neck may be vascular neoplasms or an ectopic position. Vascular neoplasms are more common in children, but two relatively rare neoplasms that occur in adults are the angiosarcoma and the hemangiopericytoma. The ectopic carotid artery usually occurs in the temporal bone. While the reports of fatal posttonsillectomy hemorrhage and the dissections of Kelly clearly describe the unusually laterally placed ICA, midline carotid arteries are even less commonly reported. Ectopic ICAs should be differentiated from other vascular lesions, such as angiosarcoma and hemangiopericytoma. Peritonsillar abscess, masses such as lymphomas, and other tumors must be taken into consideration when a panicula in the oropharynx is detected. We prefer the use of CT or MRI since they are less invasive than angiography and provide spatial information about the adjacent pharyngeal anatomy. In MRA the resolution of details is not as precise as in angiograms, and imaging artifacts due to turbulent flow or patient movement may be a major limitation.
Another examination for the evaluation of the carotid vessels is EcoColorDoppler (ECD), which is easy to perform and gives quick and important information that MRI and CT do not provide. Transposition of the ICA bulging the posterior pharyngeal wall constitutes a risk factor for impressive intraoperative and postoperative hemorrhage in surgical procedures such as adenoidectomy, tonsillectomy, uvulopalatopharyngoplasty and incision of peritonsillar abscess, which are often performed by young and inexperienced ENT doctors. The surgeon should be careful in performing routine surgical procedures in the area of the upper pharynx, which generally represent the most frequent interventions carried out by inexperienced surgeons as the first steps of their surgical training. The hidden presence of an asymptomatic anomaly of the internal carotid artery may cause impressive and life-threatening hemorrhage. A case of massive blood loss during tonsillectomy in a child with a congenital vascular malformation of the lips and the oropharynx has been reported in the literature. In our case the referring physician thought that the panicula in the lateral wall of the oropharynx was edema. Otolaryngologist surgeons must use caution in evaluating patients with masses in the pharynx and augment a careful and complete head and neck examination with appropriate imaging studies before operating. A thorough ocular and digital exploration of the pharynx for arterial pulsations should never be omitted."} +{"text": "Most modern commercial graphite furnace atomic absorption spectrometers have built-in microprocessors for this purpose but they often have limited capability for extensible user programs and limited data storage facilities. In this communication we describe the use of an Apple IIe microcomputer for the acquisition of data from a Pye Unicam SP9 graphite furnace atomic absorption spectrometer. Details of the interface, which utilizes an in-house designed AD converter, and an overview of the Pascal and assembler programs employed are given. The system allows the user to record, store and dump the graphical display of the furnace signals for all analyses performed. Files containing details of peak height and area are formatted on an eight-column spreadsheet. Details of sample type, concentrations of standards, dilutions and replication are entered from the keyboard. The calibration graph is constructed using a moving quadratic fit routine and the concentrations of the analyte in unknown solutions calculated. In addition to this, greater processing power and integration of the data into other analytical schemes can be achieved by exporting the data to other software packages and computers. Details of data transfer between the Apple IIe and an Amstrad PC 1512 are given. Some examples of the use of the system in the development of an analytical method for silver in plant material are given. Since its inception as an analytical technique some 30 years ago, atomic absorption spectrometry has become a firmly established method for the analysis of trace metals. Graphite furnace atomic absorption spectrometry provides the analyst with the capability of analysis of solutions containing \u03bcg l"} +{"text": "The high prevalence and incidence of prostate cancer is a global phenomenon. Over the past 15 years, there has been a notable change in the clinical presentation of prostate cancer, with more organ confined disease.
StudIn the Bahamas, a country where 85% of the population are of African ancestry, prostate cancer represents both the highest incidences of male malignancy occurrences and cancer specific deaths. Unfortunately, despite the increased campaigns for early detection since the introduction of PSA testing, there has been no down stage migration of clinical presentations of this malignancy in the Bahamas, as has occurred in the developed countries . The culWith this high incidence of advance disease and noting that hormonal therapy remains the first treatment of choice, we sought to determine the most common treatment modality employed in our institution with regards to surgical versus medical castration. Emphasizing the need for cost effective and affordable care in our developing country, would men of African ancestry in a macho dominated society opt to have surgical castration as the preferred treatment?All men presenting with advanced prostate cancer at the government-owned public health facility, the Princess Margaret Hospital, are informed by the Consultant Urological Surgeon of the various medical and surgical hormonal options and their advantages and disadvantages. They are informed also that the institution would provide the surgical option of bilateral orchiectomies at no charge, but the cost of the medical treatment option must be borne by the patient.At the only two hospitals on New Providence Island in the Bahamas, the Princess Margaret Hospital, (450 beds) and the privately-owned Doctors Hospital (70 beds), all pathology reports for biopsy proven cancers and the operative log for the number of surgical castration procedures were reviewed during a thirteen years period from 1987 to 2000. The data base is compiled from that of a solo urology service providing care in both the private and public sectors in the Bahamas; this service represents 70% of the urological health care delivered in the country. It is important to note that almost 70% of the population of the Bahamas resides on New Providence Island on which the capital city of the Bahamas is located.There were 535 pathology-diagnosed cases of prostate cancer identified. 275 bilateral orchiectomies were performed in patients presenting with advance prostate cancer during this period, an average of 21.5 bilateral orchiectomies performed annually.For the five years period 2003 to 2007 at the government's public hospital, all cases of pathology proven prostate cancer were reviewed. There were 363 documented cases of prostate cancer. During this period, there were 103 cases of bilateral orchiectomies recorded in the operative log of the hospital, averaging 20.6 cases per year. The frequency of bilateral orchiectomies performed annually was similar to that of the thirteen year period.This high rate of hormonal treatment is an indication of the continuing trends of advance disease as the initial presentation of males diagnosed with prostate cancer in the Bahamas. The trend of increasing annual mortality rates for prostate cancer has continued unabated for the past 15 years, contrary to that of the developed countries; this is well documented in the annual cancer mortality reports by the Health Information and Research Unit of the Ministry of Health and Social Services of the Bahamas.This study concludes that men in the Bahamas with advanced prostate cancer would opt for surgical castration when presented 'positively' as the preferred treatment. 
These findings are contrary to the perception of the macho-male image of the Caribbean male and invite further studies into the complex psyche of our Bahamian males.The author declares that they have no competing interests."} +{"text": "Similar manifestations of density dependent inhibition were found in the isolated cultures of normal and neoplastic cells: at saturation densities these cultures had low labelling indices; these indices considerably increased when the cells migrated into the wound from the dense sheet, prelabelled cells seeded on the dense sheets of unlabelled homologous cells did not proliferate. However, proliferation of neoplastic cells was not inhibited when they were seeded on the dense sheet of normal fibroblasts. Thus, neoplastic hamster fibroblasts of both lines retained sensitivity to the inhibiting effect of homologous neoplastic cells but completely lost sensitivity to the inhibiting effect of normal fibroblasts. The possible significance of this selective loss of the sensitivity to normal cells is discussed briefly."} +{"text": "The relationship between ageing and transformation has been investigated by a serial study of the changes in cell-surface morphology as normal and carcinogen-treated cells progressed in culture. A progressive increase in the density of cell surface microvilli occurred in association with the adoption of a more rounded profile and concomitant increase in the rate of cell detachment. These changes occurred earlier after carcinogen treatment, which appeared to indicate a carcinogen-induced acceleration of ageing. The alterations have also been described as characteristic of the transformed state. The observations suggest that the expression of in vitro transformation may be the result of continuous selection from a population with genetic instability and variable morphology."} +{"text": "The cross sectional optical coherence tomography images have an important role in evaluating retinal diseases. The reports generated by the Stratus fast macular thickness scan protocol are useful for both clinical and research purposes. The centerpoint thickness is an important outcome measure for many therapeutic trials related to macular disease. The data is susceptible to artifacts such as decentration and boundary line errors and could be potentially erroneous. An understanding of how the data is generated is essential before utilizing the data. This article describes the interpretation of the fast macular thickness map report, assessment of the quality of an optical coherence tomography image and identification of the artifacts that could influence the numeric data. The introduction of Stratus optical coherence tomogram (OCT) has advanced the knowledge and understanding of retinal diseases. Apart fr246The Stratus OCT comes with various scanning programs; the most commonly used protocols for retinal pathologies are line scan, cross hair scan, the macular thickness map protocol (MTM) and the fast macular thickness map protocol (FMTM). Both MTMLongitudinal assessment of the foveal thickness on OCT is an important outcome measure for many therapeutic trials. The fundus photograph reading center (FPRC), Madison WI, USA, serves as a central laboratory to receive and analyze OCTs for multicenter clinical trials. The scans are taken by trained and certified operators and evaluated at the reading center by ocular disease evaluators (ODE). 
Quality of the OCT is one of the most important evaluation procedures used by the ODE since it determines if the software-generated numeric data is erroneous or not. An understanding of the software algorithms and how each of the elements of the report is generated is essential for interpreting the thickness measurements. The purpose of this article is to assist with interpretation of the reports generated by the FMTM protocol and understanding of the artifacts generated by the OCT.3. The six radial B-scans are comprised of 128 A-scans each and the centerpoint of each of these scans is represented by A-scan no. 64 [The combined data from the six radial scans of the FMTM scanning protocol is represented in the FMTM retinal map analysis report. The individual component B-scans are available on retinal thickness reports. The map report should always be analyzed in conjunction with the six individual thickness reports. The map report has six components to it . (1) Then no. 64 . The algThe quality of the OCT scan needs to be evaluated before interpreting any of the components. There are two boundary lines identified by the software in a cross-sectional retinal scan-the first hyper-reflective band is the internal limiting membrane (ILM) and the second hyper-reflective band is considered to be the retinal pigment epithelium (RPE). Studies have shown that the second narrow hyper-reflective band in normal eyes is the photoreceptor inner segment-outer segment (IS-OS) junction and that the true RPE is missed by the segmentation algorithm.16 ErrorsBoundary line errors can easily be identified using the six individual B-scans. Wedge or bowtie artifacts on the pseudocolor map and a high standard deviation of the CPT are additional clues to the presence of this artifact . On occaCPT is an important parameter for evaluation in macular disease because it represents foveal thickness. An important aspect to quality assessment is to confirm whether the CPT truly represents the foveal thickness. If the radial lines of the scan pattern have not been centered on the fovea, the scan is considered decentered and the CPT would not represent the foveal thickness. The center of the macula corresponds to A-scan 64 as shown in Decentration artifact can be assessed using a combination of clues . In scanDecentration is mostly an operator-dependent error. Poor patient fixation or inability to identify the fovea in a distorted retina could be contributory factors. However, if identified, rescanning could help avoid this artifact. In decentered scans, remeasurement of foveal thickness using the software calipers (Stratus review software) is a possibility. The retinal thickness report with an identifiable fovea is selected and the calipers are placed at the inner and outer boundary lines of the point identified as fovea. This gives the true foveal measurement. The remaining data in the subfields cannot be reclaimed.In summary, artifacts on OCTs performed using the FMTM scan protocol of the Stratus OCT are common. Identification of operator-dependent artifacts and retaking images if required reduces the frequency of poor quality OCTs in clinical trials. Simple measures such as centering the scan in a well-dilated pupil, focus and z offset adjustment, and having the patient close their eyes to wet the cornea before taking the image also improves image quality. 
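To make the geometry described above concrete (six radial B-scans of 128 A-scans each, with A-scan no. 64 at the centre of each scan and retinal thickness taken as the distance between the two segmented boundary lines), the sketch below estimates the centerpoint thickness and its standard deviation from hypothetical segmentation output. The axial scaling, the simulated boundary positions and the variability check are illustrative assumptions and do not reproduce the Stratus software's actual algorithm.

```python
# Illustrative sketch, not the Stratus OCT algorithm: estimate centerpoint
# thickness (CPT) from six radial B-scans of 128 A-scans each.
import numpy as np

N_SCANS, N_ASCANS = 6, 128
CENTER = 63                       # A-scan no. 64 (0-based index 63)
AXIAL_UM_PER_PIXEL = 3.9          # assumed axial sampling, hypothetical value

# Hypothetical segmentation output: boundary positions in pixels per A-scan.
ilm = np.random.randint(100, 120, size=(N_SCANS, N_ASCANS))          # inner boundary
outer = ilm + np.random.randint(55, 70, size=(N_SCANS, N_ASCANS))    # outer boundary

thickness_um = (outer - ilm) * AXIAL_UM_PER_PIXEL   # thickness at every A-scan

# CPT is derived from the six central A-scans (one per radial scan).
central = thickness_um[:, CENTER]
cpt, cpt_sd = central.mean(), central.std(ddof=1)
print(f"CPT = {cpt:.1f} um +/- {cpt_sd:.1f} um")

# A large SD across the six central measurements is one clue to boundary-line
# errors or decentration, as discussed in the text; the 10% cut-off is arbitrary.
if cpt_sd > 0.1 * cpt:
    print("Warning: high centerpoint variability - check for artifacts")
```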
Poor quality OCTs with erroneous numeric data may still be useful for qualitative analysis of morphological abnormalities."} +{"text": "This study describes the endoscopic findings about the size of the adenoid tissue and the condition of the nasopharyngeal orifice of the eustachian tube. Results confirmed that only fiberscopic examination allows a thorough inspection of the nasopharyngeal anatomy to make a correct diagnosis and design therapeutic planning. When the presence of adenoid hypertrophy resulting in nasal obstruction, snoring, and/or otitis media was confirmed endoscopically, adenoidectomy proved to be highly efficacious in relieving these symptoms."} +{"text": "The implementation of whole-genome sequencing in food safety has revolutionized foodborne pathogen tracking and outbreak investigations. The vast amounts of genomic data that are being produced through ongoing surveillance efforts continue advancing our understanding of pathogen diversity and genome biology. Produced genomic data are also supporting the use of metagenomics and metatranscriptomics for detection and functional characterization of microbiological hazards in foods and food processing environments. In addition, many studies have shown that metabolic and pathogenic potential, antimicrobial resistance, and other phenotypes relevant to food safety can be predicted from whole-genome sequences, omitting the need for multiple laboratory tests. Nevertheless, further work in the area of functional inference is necessary to enable accurate interpretation of functional information inferred from genomic and metagenomic data, as well as real-time detection and tracking of high-risk pathogen subtypes and microbiomes. Microbiological food safety has traditionally been monitored using culture-based phenotypic methods for detection, characterization, and identification of target foodborne pathogens. Pathogens have primarily been studied at a species level and out of the context of microbial communities in which they reside. The breakthrough in high-throughput sequencing in the 2000s enabled the development of precise subtyping methods for tracking of foodborne pathogens and microbial communities. The implementation of these methods is profoundly changing foodborne pathogen surveillance and is expected to increasingly inform the control of foodborne pathogens in the coming years. I discuss three areas of opportunities that have tremendous potential for a paradigm shift in the detection and control of foodborne pathogens. The primary area pertains to the definition of a foodborne pathogen and associated ramifications for public health and the food industries. The next area considers opportunities for surpassing whole-genome sequencing to detect unknown pathogens, and the third area highlights the opportunities for improved control of foodborne pathogens through informed manipulation of food processing environmental microbiomes. For example, some subtypes of Salmonella are highly risky for humans, while others typically cause the disease in animals and are far less frequently reported to cause illness in humans.
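As a generic illustration of the WGS-based subtyping and cluster detection mentioned above, and not a description of any actual surveillance pipeline, the sketch below groups isolates whose pairwise allele differences fall below a chosen threshold using single-linkage clustering. The isolate names, allele profiles and threshold are hypothetical.

```python
# Illustrative sketch of WGS-based cluster detection: group isolates whose
# pairwise allele differences fall below a chosen threshold. The profiles,
# isolate names and threshold are hypothetical, not a real surveillance pipeline.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

isolates = ["iso1", "iso2", "iso3", "iso4", "iso5"]
# Hypothetical cgMLST-style allele profiles (rows = isolates, columns = loci).
profiles = np.array([
    [1, 2, 1, 3, 1, 1],
    [1, 2, 1, 3, 1, 2],   # 1 allele difference from iso1
    [1, 2, 2, 3, 1, 2],   # close to iso2
    [4, 5, 6, 7, 8, 9],   # unrelated lineage
    [4, 5, 6, 7, 8, 8],   # close to iso4
])

# Pairwise distance = number of differing loci (Hamming proportion * n_loci).
dist = pdist(profiles, metric="hamming") * profiles.shape[1]
clusters = fcluster(linkage(dist, method="single"), t=2, criterion="distance")

for name, c in zip(isolates, clusters):
    print(f"{name}: cluster {c}")
```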
The positive impacts of real-time WGS surveillance are evident from the outcomes of the Listeria monocytogenes surveillance program, which has resulted in more frequently detected but smaller outbreaks since the implementation of WGS in routine pathogen surveillance. The implementation of WGS in foodborne pathogen surveillance in the United States has also resulted in a rapid increase in the public availability of foodborne pathogen whole-genome sequences. Despite outstanding successes of WGS in food safety, microbial isolation workflows coupled with WGS still overlook an estimated 38.4 million cases of domestically acquired foodborne illnesses in the United States that are caused by unspecified agents. Metagenomic sequencing has been shown to detect Escherichia coli and Salmonella at a low contamination level and with a strain-level resolution directly in foods, without isolation. While strategies that manipulate food processing environmental microbiomes are promising, they should complement rather than replace chemical and physical cleaning and sanitation practices to ensure that hygienic conditions are maintained in the food processing environment. Successful implementation of whole-genome sequencing in food safety has driven remarkable advances in foodborne pathogen tracking and surveillance. The next step toward advancing food safety is utilizing the vast amounts of available foodborne pathogen genomic sequences to extract functional information and identify biomarkers predictive of isolates\u2019 pathogenic potential and other phenotypes relevant to food safety. Functional prediction will be critical for the optimization of detection methods and food safety risk assessment on a subspecies level that can have positive impacts on public health and the economic viability of food industries. Furthermore, going forward, we need to take a step beyond whole-genome sequencing and test the performance of metagenomic sequencing of food and clinical samples to facilitate identification of unknown causative agents of the vast majority of foodborne illness cases. Lastly, it is becoming increasingly important to understand the entire microbial landscape found in food systems to pinpoint the interactions and metabolic roles of individuals comprising microbial communities. This knowledge is necessary for the development of informed and biological pathogen control strategies. I anticipate establishment of metagenomic sequencing approaches in the food safety space with significant impact in the years ahead."} +{"text": "There is a lack of information regarding the safety of flexible bronchoscopy (FB) in patients with severe acute respiratory distress syndrome (ARDS). We read with great interest the letter by Kalchiem-Dekel and colleagues. The insertion of the bronchoscope through the endotracheal tube (ETT) increases airway resistance by obstructing the ETT, thereby limiting respiratory flows and ultimately increasing inspiratory and expiratory pressures. If the peak inspiratory pressure is greater than the high-limit pressure set on the ventilator, the tidal volume is not fully delivered. This leads to alveolar derecruitment with well-known consequences in ARDS. We thank Drs. Guillon, Nay, and Kamel for their interest in our Letter to the Editor. Our data are limited by the retrospective nature of the study and the lack of intra-procedural pressure measurements. The primary objective of our case series was to highlight the feasibility and possible benefit of secretion clearance in ARDS patients while in the prone position.
We point out that all patients who underwent bronchoscopy while in the prone position in our case series had mild to moderate ARDS by the Berlin definition , whereasFinally, data regarding optimal performance of bronchoscopy in patients with ARDS is scarce. We read with great interest the report by Nay et al. describing a novel flexible bronchoscope designed specifically for preservation of lung protective ventilation in mechanically ventilated patients . We hope"} +{"text": "This session addresses approaches to strengthening the capacity of family caregivers in the context of intense and complex care. Recognizing the increasing role that families play in delivering complex care at home to individuals with multiple conditions, this symposium highlights approaches to enhancing support and increasing the power of family caregivers. In the first half of the symposium, the papers elucidate characteristics of the caregiving situation that put caregivers at risk and suggest potential areas for intervention by health systems. The second half explores system level approaches to enhancing capacity for family caregivers. On the demand side, the first paper will examine the social network of family caregivers, highlighting effects of social isolation on caregiver health. The second paper uses national data to understand the relationship between higher demand caregiving situations and the strain and challenges that caregivers experience. On the potential solutions side, the third paper addresses a collaborative design of an intervention to enhance supports for family caregivers of persons with dementia at a critical time, during hospitalization. The last paper provides an overview of evidence-based technological solutions to support family caregiving. Taken together, these papers establish some of the demand characteristics of the caregiving situation and provide potential health system solutions to improve capacity of family caregivers. In the final segment, three discussants will reflect on the implications of these papers for clinical practice, education, research and policy."} +{"text": "Changes in climate and environmental conditions could be the driving factors for the transmission of hantavirus. Thus, a thorough collection and analysis of data related to the epidemic status of hemorrhagic fever with renal syndrome (HFRS) and the association between HFRS incidence and meteorological factors, such as air temperature, is necessary for the disease control and prevention.Journal articles and theses in both English and Chinese from Jan 2014 to Feb 2019 were identified from PubMed, Web of Science, Chinese National Knowledge Infrastructure, Wanfang Data and VIP Info. All identified studies were subject to the six criteria established to ensure the consistency with research objectives, (i) they provided the data of the incidence of HFRS in mainland China; (ii) they provided the type of air temperature indexes; (iii) they indicated the underlying geographical scale information, temporal data aggregation unit, and the data sources; (iv) they provided the statistical analysis method that had been used; (v) from peer-reviewed journals or dissertation; (vi) the time range for the inclusion of data exceeded two consecutive calendar years.A total of 27 publications were included in the systematic review, among them, the correlation between HFRS activity and air temperature was explored in 12 provinces and autonomous regions and also at national level. 
The study period ranged from 3 years to 54 years with a median of 10 years, 70.4% of the studies were based on the monthly HFRS incidence data, 21 studies considered the lagged effect of air temperature factors on the HFRS activity and the longest lag period considered in the included studies was 34 weeks. The correlation between HFRS activity and air temperature varied widely, and the effect of temperature on the HFRS epidemic was seasonal.The present systematic review described the heterogeneity of geographical scale, data aggregation unit and study period chosen in the ecological studies that seeking the correlation between air temperature indexes and the incidence of HFRS in mainland China during the period from January 2014 to February 2019. The appropriate adoption of geographical scale, data aggregation unit, the length of lag period and the length of incidence collection period should be considered when exploring the relationship between HFRS incidence and meteorological factors such as air temperature. Further investigation is warranted to detect the thresholds of meteorological factors for the HFRS early warning purposes, to measure the duration of lagged effects and determine the timing of maximum effects for reducing the effects of meteorological factors on HFRS via continuous interventions and to identify the vulnerable populations for target protection. China has the largest number of hemorrhagic fever with renal syndrome (HFRS) cases in the world. With the acceleration of China\u2019s urbanization process, especially in the process of rapid transition of China\u2019s agriculture-related landscapes to urban landscapes, the dual role of climate change and environmental change has led to a leap in the epidemic area range of HFRS. Exploring or clarifying the relationship between HFRS epidemic and those environmental factors may help to grasp the spread and epidemic pattern of HFRS and then the pattern could serve as the partial basis of accurate HFRS incidence prediction and the corresponding allocation of public health resources. The present systematic review first described the heterogeneity of geographical scale, data aggregation unit and study period chosen in the ecological studies that seeking the correlation between air temperature indexes and incidence of HFRS in mainland China during the period from January 2014 to February 2019. Raising the awareness of the appropriate adoption of geographical scale, data aggregation unit, the length of lag period and the length of incidence collection period is of great importance when exploring the relationship between HFRS incidence and meteorological factors such as air temperature. Three duplicated articles were subsequently removed, after intensive reading the titles, abstracts and full-texts of these article, 81 publications were also excluded. Thus, 27 publications (18 in Chinese and nine in English) were finally included in the systematic review, and among them 21 studies were with low risk of study bias and six studies were with moderate risk of study bias. The literature selection process is shown in A total of 111 articles related to the topic that published between Jan 1Among the 27 publications included, there were 22 journal articles and five dissertations; Those 22 journal articles scattered over 17 kinds of journals, with the journal \u2018PLoS Neglected Tropical Diseases\u2019 was identified as the most active journal about the topic during the study period. 
These 17 journals could be grouped into two categories, public health , and natural sciences, including environmental sciences .Among the included 27 studies, one study analyzed the HFRS data at the national level which included the data from 31 provinces, autonomous regions and municipalities in mainland China, and the remaining 26 studies involved 12 provinces and autonomous regions. Studies from Shandong Province accounted for 37.0% of all the included studies. Five studies collected HFRS data at provincial level, 17 studies collected HFRS data at municipal level, and four studies collected HFRS data at county level. The study period of the data nested in the included studies ranged from 3 years to 54 years with a median of 10 years, as shown in Regarding the temporal unit of data aggregation, in the included studies, fifteen studies were based on monthly HFRS incidence or the number of monthly incident HFRS cases, five studies were based on the number of daily reported HFRS cases, two studies based on annual HFRS incidence, four studies based on both monthly and annual HFRS incidence, and one study was based on the number of weekly reported HFRS cases. As for the corresponding air temperature indicators, seven studies adopted all the three indicators of average air temperature, average maximum air temperature and average minimum air temperature, and the rest of the studies were based on either average air temperature only or average maximum air temperature only. Twenty-one studies considered the lagged effect of air temperature factors on the HFRS activity and the longest lag period considered in the included studies was 34 weeks.With regards to the statistical methods adopted by the researchers to explore the correlation between air temperature and HFRS incidence, Spearman correlation analysis, Pearson correlation analysis, generalized additive model, seasonal differential autoregressive moving average model, negative binomial multivariate regression analysis, distributed lag nonlinear model conditions, conditional logistic regression analysis and wavelet analysis have been indicated in the included studies, as shown in The associations observed at one scale were not present at another one. Fifteen studies indicated the negative correlation between air temperature and HFRS activity, while seven studies found the positive correlation between air temperature and HFRS activity. There were also two studies that defined a certain temperature as the dividing point between HFRS activity and air temperature, the correlation curve was inverted \u2018U-shaped\u2019 ,29. AlsoBecause the current China\u2019s National Notifiable Infectious Disease Reporting system (NIDR) is still unable to distinguish the type of hantaviruses infected by HFRS patients, the direction and magnitude of the effects of temperature on HFRS activity in different seasons were also inconsistent. The study based on the monthly temperature index at county scale in Shandong Province found that the increase of average air temperature in spring was the risk factor of SEOV-type HFRS outbreak, but similar results could not be found in other seasons and in the HTNV-type HFRS . The stuIn addition to collecting data on the incidence of HFRS in the whole population, a certain study also collected data on HFRS incidence from different populations based on the national legal infectious disease surveillance system. 
The study in Huludao City, Liaoning province found that HFRS activity in the population aged 35\u201359 years was significantly affected by air temperature, but this phenomenon could not be found in the population of other age groups ; the autIt is clear from the major part of the included studies that air temperatures are indirectly associated with HFRS activity, however, the temperature-HFRS association findings were inconsistent and location-dependent. Our systematic review indicated that the ecological effects of air temperature on HFRS incidence could be affected by the spatial or temporal scale of the data and also the study period involved, which might help to partly understand the contradicting observations in the included studies. The researchers need to consider or identify which temperature indicator and data aggregation unit are more appropriate to explain the correlations between HFRS incidence and air temperature at different geographical scales .Seeing that the effect of air temperature on the HFRS activity varied among different populations , it is sThe ecological correlation analysis is not only data-driven, but also technology-concentrated, the integration of the collection of high-quality HFRS incidence data and the multi-discipline development can open vast vistas for the correlation analysis techniques\u2019 application in the field of infectious diseases epidemiology . In the The present systematic review might be improved if the following information could be considered in the future HFRS-related ecological studies. All the included studies in the present systematic review are descriptive, further exploratory analysis, explanatory analysis and statistical inference are needed. The distribution of Hantaan and Seoul type of HFRS was not available in the China\u2019s surveillance system of the legally mandated notifiable infectious diseases. The data of HFRS vaccine coverage could not be obtained in the included studies, and HFRS vaccine coverage did affect the magnitude of HFRS incidence. Only one single included study investigated the correlation between air temperatures and HFRS incidence stratified by age group, thus the characteristic of the HFRS vulnerable population that was most affected by air temperature could not be obtained. Besides the different data scale and the differences in the hantaviruses type of HFRS, the factors relating to the dynamics of the rodent hosts and human activities, such as urbanization indicators, should be considered when understanding the results from these ecological studies, given the fact that China is a topographically heterogeneous country. It should also be emphasized that air temperature as an isolated indicator that cannot explain the HFRS incidence fully, confounding factors should always be considered. Caution should be used when studying the associations between HFRS activity and the isolated or the combination of meteorological variables, because of the possible multicollinearity. 
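To illustrate the simplest form of the lagged correlation analyses referred to above (Spearman correlation between monthly mean air temperature and HFRS case counts at increasing lags), here is a minimal Python sketch. The data file, column names and maximum lag are hypothetical, and the approach is deliberately much simpler than the generalized additive and distributed lag nonlinear models used in several of the included studies.

```python
# Minimal sketch of a lagged Spearman correlation between monthly mean air
# temperature and HFRS case counts. The file, column names and lag range are
# hypothetical; real analyses in the reviewed studies used GAMs, DLNMs, etc.
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("hfrs_monthly.csv", parse_dates=["month"])  # hypothetical data
df = df.sort_values("month")

MAX_LAG = 6  # months of lag to examine (illustrative; some studies used longer lags)
for lag in range(MAX_LAG + 1):
    temp_lagged = df["mean_temp"].shift(lag)   # exposure 'lag' months earlier
    valid = temp_lagged.notna()
    rho, p = spearmanr(temp_lagged[valid], df.loc[valid, "hfrs_cases"])
    print(f"lag {lag} months: Spearman rho = {rho:.2f}, p = {p:.3f}")
```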
Therefore, meteorological factors and the impact of climate changes on the pathogenesis of HFRS still need to be further deepened, especially in the process of rapid transition of China\u2019s agriculture-related landscapes to urban landscapes .In summary, the present systematic review first described the heterogeneity of geographical scale, data aggregation unit and study period chosen in the ecological studies that seeking the correlation between air temperature indexes and the incidence of HFRS in mainland China during the period from January 2014 to February 2019. The appropriate adoption of geographical scale, data aggregation unit, the length of lag period and the length of incidence collection period should be considered when exploring the relationship between HFRS incidence and meteorological factors such as air temperature. Further investigation is warranted to detect the thresholds of meteorological factors for the HFRS early warning purposes, to measure the duration of lagged effects and determine the timing of maximum effects for reducing the effects of meteorological factors on HFRS via continuous interventions and to identify the vulnerable populations for target protection.S1 Table(DOC)Click here for additional data file.S2 Table(XLSX)Click here for additional data file."} +{"text": "This special issue focuses on the current approaches for diagnosis, evaluation, and management of autoimmune diseases of the anterior segment of the eye, which range from immune keratoconjunctivitis to anterior uveitis. Immune diseases of the anterior segment of the eye may be caused by several local or systemic conditions and may present in a wide range of diseases including dry eye syndrome, ocular cicatricial pemphigoid (OCP), graft versus host disease (GVHD), and some forms of anterior uveitis often associated with systemic autoimmune diseases such as ankylosing spondylitis. These conditions represent some of the most difficult conditions to diagnose and manage in ophthalmology and often require a multidisciplinary approach.Specifically, in this special issue, Tao and colleagues evaluated reliability and validity of the most commonly used quality of life questionnaires in Chinese patients with dry eye disease; Szepessy et al. evaluated the alterations of central retinal and choroidal thickness and the severity of anterior chamber inflammation in patients with acute anterior uveitis associated with seronegative spondyloarthropathy; Qui et al. described the characteristics of ocular manifestations of a large cohort of patients with a diagnosis of acute or chronic ocular GVHD; Nebbioso et al. evaluated the potential role of vascular endothelial growth factor (VEGF) and mucins in patients with vernal keratoconjunctivitis (VKC). In addition, Leuci et al. reported the long-term clinical outcome of 6 patients with OCP treated with intravenous immunoglobulin therapy (IVIg), which maintained remission of the disease and did not show progression, for a total follow-up period of 9 years after the end of IVIg treatment.The topics presented in this special issue evaluated few aspects of the large and heterogeneous group of disorders which may be included in the group of autoimmune diseases of the anterior segment of the eye. However, it is clear that these conditions require a multidisciplinary approach and often represent a challenge for clinicians due to the lack of specific diagnostic criteria and specific treatments. 
In fact, most immune diseases are currently treated with local or systemic corticosteroids or immunosuppressive drugs, which may be effective in controlling the inflammatory reaction but are commonly associated with important local and systemic side effects. By this point of view, the increasing understanding of novel pathogenic mechanisms of autoimmune diseases will lead to the development of novel, more specific drugs designed to target the molecules involved in the inflammatory reaction.In addition, autoimmune diseases of the anterior segment of the eye may have complications that extend beyond the anterior segment and can impair visual function. In fact, visual function may be directly impaired in dry eye disease or immune keratoconjuctivits due to corneal damage and scarring, or, alternatively, other autoimmune diseases such as anterior uveitis may induce glaucoma with damage to the optic nerve or macular changes, which cause decrease of visual acuity. Not only the diseases but also some of their treatments may induce long-term adverse effects impairing visual function; for example, chronic use of corticosteroid eye drops in patients with immune diseases of the eye is a well-known cause of cataract and glaucoma.Based on these considerations, it is clear that a further progress in understanding the pathogenesis of ocular immune reactions is highly sought after, together with more standardized diagnostic procedures and protocols for several autoimmune diseases of the anterior segment of the eye. Such progresses will hopefully also lead to the development of novel and more targeted therapeutic approaches that can improve clinical outcome and quality of life of patients with these diseases."} +{"text": "A model of analysis and environmental evaluation was applied to 11 stretches of the Adige River, where an innovative procedure was carried out to interpret ecological results. Within each stretch, the most suitable methods were used to assess the quality and processes of flood plains, banks, water column, bed, and interstitial environment. Indices were applied to evaluate the wild state and ecological quality of the banks and the landscape quality of wide areas of the fluvial corridor . The biotic components were analysed by both quantitative and functional methods . The results achieved were then translated into five classes of functional evaluation. These qualitative assessments have thus preserved a high level of precision and sensitivity in quantifying both the quality of the environmental conditions and the integrity of the ecosystem processes. Read together with urban planning data, they indicate what actions are needed to restore and rehabilitate the Adige River corridor."} +{"text": "Microelectromechanical systems (MEMS) have established themselves within various fields dominated by high-precision micromanipulation, with the most distinguished sectors being the microassembly, micromanufacturing and biomedical ones. This paper presents a horizontal electrothermally actuated \u2018hot and cold arm\u2019 microgripper design to be used for the deformability study of human red blood cells (RBCs). In this study, the width and layer composition of the cold arm are varied to investigate the effects of dimensional and material variation of the cold arm on the resulting temperature distribution, and ultimately on the achieved lateral displacement at the microgripper arm tips. 
The cold arm widths investigated are 14 Microgrippers are an example of microelectromechanical systems (MEMS) that are extensively applied in fields dealing with the handling, positioning and assembly of micromechanical parts ,2, as weThis paper focuses on a horizontal electrothermally actuated \u2018hot and cold arm\u2019 MEMS microgripper design developed for the manipulation and deformability study of human red blood cells (RBCs). Thermally activated beam flexure has been one of the leading actuation principles within the MEMS domain ,20, wither study , the autThe presented microgripper design is aimed to study the deformability characteristics of RBCs whose pathophysiological relevance allows them to serve as an important marker of the health status of patients ,23. RBCsocedures . The mai tissues . The expThis work presents a number of microgripper structures whose design is based on the U-shape \u2018hot and cold arm\u2019 electrothermal actuator configuration . Such a One of the main factors that affects the performance of the presented microgripper design is the resulting temperature difference between the hot and cold arms under an actuation voltage. The larger this temperature difference, the more efficient a horizontal microgripper is expected to be. A higher efficiency of a horizontal microgripper is defined by a larger in-plane displacement obtained at the microgripper arm tips for the same applied voltage. A number of studies ,29,30 haThe microgripper test structures presented in this work were fabricated by MEMSCAP, Inc. , using one of their standard and commercially available Multi-User MEMS Processes was developed in CoventorWareA number of electrical, thermal and mechanical boundary conditions were implemented within the numerical model, as also shown in A critical aspect in the modelling of electrothermally actuated structures is accounting for the temperature dependency of certain material properties. For this reason, temperature-dependent material properties were used within the microgripper models in CoventorWarex-y-z positioning system. A power supply was used to apply a potential in increments of 0.5 V up to 12 V across the probe pads of each fabricated microgripper structure. The resulting gap opening between the microgripper arm tips was then characterised through the use of the optical microscope-based vision system embedded within the probe station.Experimental testing was performed on the ten microgripper design variants using the setup shown in The electrothermomechanical performance of the ten microgripper design variants has been studied both numerically and experimentally in this work. A lumped analytical model for the SOIMUMPs\u2122 microgripper structure has already been developed and presented in the authors\u2019 previous work where thThe temperature distribution developed within a microgripper structure during actuation is an important design criterion as it significantly influences the microgripper\u2019s performance. A larger temperature difference developed between the hot and cold arms is expected to result in an increase in the gap opening achieved between the microgripper arm tips due to a larger difference between the thermal expansion of the two arms. 
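The link between the hot and cold arm temperature difference and the tip displacement can be illustrated with a first-order estimate: each arm elongates freely by dL = alpha * L * dT, and the elongation mismatch between the two arms is what drives the lateral deflection of the U-shaped actuator. The sketch below computes this mismatch for assumed dimensions, temperature rises and an assumed geometric amplification factor; it is not the authors' lumped analytical model or the CoventorWare simulation, and all numerical values are illustrative.

```python
# First-order estimate of the thermal elongation mismatch that drives a
# 'hot and cold arm' actuator. Dimensions and temperatures are illustrative
# assumptions, not the SOIMUMPs design values reported in the paper.
ALPHA_SI = 2.6e-6          # thermal expansion coefficient of silicon [1/K]
ARM_LENGTH = 1000e-6       # arm length [m] (assumed)

def elongation(avg_temp_rise_k: float) -> float:
    """Free thermal elongation of an arm for a given average temperature rise."""
    return ALPHA_SI * ARM_LENGTH * avg_temp_rise_k

# Assumed average temperature rises of the hot and cold arms under actuation.
hot_rise, cold_rise = 350.0, 120.0                 # [K], illustrative
mismatch = elongation(hot_rise) - elongation(cold_rise)

# In a U-shaped actuator the small elongation mismatch is geometrically
# amplified into a lateral tip deflection; the amplification factor depends
# on the flexure and gap geometry and is taken here as an assumed constant.
GEOMETRIC_AMPLIFICATION = 20.0
tip_deflection = mismatch * GEOMETRIC_AMPLIFICATION

print(f"elongation mismatch: {mismatch * 1e6:.3f} um")
print(f"estimated tip deflection: {tip_deflection * 1e6:.2f} um")
```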
Another important aspect of an electrothermal microgripper designed for the handling of biological cells is the temperature developed at the cell gripping zone.Thermal characterisation of the microgripper as a function of the applied voltage has not been conducted experimentally in this study due to the absence of the necessary equipment to obtain such data. A micro radiometric thermal imaging microscope with adequate temperature and spatial resolutions would be able to measure and display the temperature distribution over the microgripper\u2019s surface, enabling the quick detection of hot spots and thermal gradients, as well as the possibility of numerical benchmarking. In the absence of such experimental data, the current work presents and compares the numerical thermal characterisation of the different microgripper designs under electrothermal actuation. The predicted temperature distribution on an actuated SOIMUMPs\u2122 microgripper structure, as obtained with CoventorWareIt can be noted that the value of the maximum temperature on the hot arm is the same for all the microgripper designs shown in cold arm b. It canThe process of adding a metal layer on the cold arm affects the temperature distribution on the hot arm as can be seen by comparing the design variants without and with metal deposition on the cold arm in In the \u2018hot and cold arm\u2019 microgripper design, the lateral displacement achieved at the microgripper arm tips is influenced by the temperature difference developed between the hot and cold arms, with this displacement expected to increase as the stated temperature difference increases for the same applied voltage. Different ways to maximise this temperature difference include designing the hot arm with the smallest possible cross-sectional area as allowed by the capability of the fabrication process (the resulting effect is to maximise the temperature on the hot arm due to a higher current density within the hot arm), increasing the width of the cold arm (the resulting effect is to minimise the temperature on the cold arm due to a decrease in the resistance of the cold arm), and depositing a metal layer on the cold arm . The SOIMUMPs\u2122 fabrication technology recommends a minimum feature width of 3 process , and thiThe resulting numerical temperature difference between the hot and cold arms is shown in cold arm b, conseqThe design of a horizontal MEMS microgripper should be such that it maximises the achieved lateral end-effector displacement as required for the application in consideration, while minimising the temperature developed at the arm tips in the case of biomedical applications. The predicted lateral displacement distribution of the designed microgripper structure is shown in The numerical models of the ten microgripper design variants were validated through experimental tests performed under atmospheric pressure using the setup described in As already highlighted in This paper has presented a number of design variants of a horizontal electrothermal MEMS microgripper that was developed for the micromanipulation and deformability study of RBCs. The different microgripper structures were achieved by varying the cold arm width (14 The electrothermomechanical performance, specifically the temperature distribution and the lateral deflection at the microgripper arm tips, were investigated and compared for the different microgripper designs. 
Steady-state numerical simulations were performed with the commercial software package CoventorWareIt could be observed that the increase in cold arm width in the case of the design variants of the SCS microgripper structure without metal deposition on the cold arm resulted in a larger temperature difference between the hot and cold arms and in an increase in the achieved gap opening at the microgripper arm tips. Moreover, for a certain cold arm width, metal deposition on the cold arm further increased the temperature difference between the hot and cold arms. It could, however, be noted that, in the case of the design variants with metal deposition on the cold arm, the benefit of depositing metal on the cold arm to maximise the resulting temperature difference decreased as the cold arm width increased, resulting in similar gap openings achieved for the different cold arm widths. This is due to the resulting reduction in both the hot arm and cold arm temperatures with increasing cold arm width in the case of the design variants with metal deposition on the cold arm, causing a much smaller effective temperature difference achieved between the hot and cold arms.All fabricated microgripper structures were actuated under atmospheric pressure and the achieved displacement at the arm tips was investigated via optical microscopy studies. Good agreement was achieved between the tip displacement results from the actuation testing and the numerical predictions. The temperature at the cell gripping zone was also studied numerically and compared with the gap opening for each design variant. It could be observed that the optimal combination of gap opening and temperature at the arm tips for RBC manipulation was obtained from a microgripper structure with a cold arm width of 70 Future work will thus focus on the enhancement of the presented microgripper design to include force feedback, on the modification of the microgripper arm geometry to limit the temperature developed at the cell gripping zone for biomedical applications as well as on the attainment of thermal experimental results via thermal microscopy studies. All these factors will build on the current work in order to futher optimise the SOIMUMPs\u2122 microgripper structure for the successful deformability study of RBCs."} +{"text": "Agrochemicals are essential but hazardous inputs being utilized at different stages in cocoa production. Safeguarding the health of workers handling these chemicals is therefore of utmost importance. Although Ghanaian government implemented mass spraying of cocoa with every essential occupational safety being followed, non-workability of the programme in many parts of the cocoa producing areas necessitates supplementary application of agrochemicals by many farmers. Therefore, a survey was conducted in Ahafo Ano North district of the Ashanti region in 2015 to understand the compliance of farmers to safety guidelines in handling agrochemicals. The survey was conducted with structured questionnaires that were written in English language and translated into the local language in the course of the interviews. A total of 246 cocoa farmers were interviewed using stratified sampling procedures. The questionnaire, which was divided into four sections solicited information on farmers\u2019 socioeconomic characteristics, safeguard measures being taken by the farmers in the course of handling agrochemicals, health complaints after handling agrochemicals and stress and occupational hazards. 
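The abstract above notes that the 246 respondents were selected with a stratified sampling procedure, and the data description that follows explains that sample sizes were allocated in proportion to the estimated number of cocoa farmers in each selected community. A minimal sketch of such a proportional allocation is given below; the community names and farmer counts are illustrative placeholders, not the study's figures.

```python
# Minimal sketch of proportional allocation for a stratified survey sample.
# Community names and farmer-population figures are purely illustrative; the
# actual study allocated its 246 respondents using counts supplied by extension officers.

est_farmers = {   # estimated cocoa farmers per selected community (assumed numbers)
    "Community A": 1250, "Community B": 420, "Community C": 370, "Community D": 140,
    "Community E": 90,   "Community F": 90,  "Community G": 60,  "Community H": 40,
}
total_sample = 246

total_pop = sum(est_farmers.values())
allocation = {c: round(total_sample * n / total_pop) for c, n in est_farmers.items()}

# Rounding can leave the total slightly off target; adjust the largest stratum.
diff = total_sample - sum(allocation.values())
allocation[max(allocation, key=allocation.get)] += diff

for community, n in allocation.items():
    print(f"{community}: interview {n} farmers")
print("total:", sum(allocation.values()))
```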
The dataset is herewith made available and is considered highly useful given the serious policy implications of occupational health hazards among cocoa farmers. The aim of the survey was to understand the safety precautions taken by cocoa farmers in the course of handling agrochemicals and the associated health complaints. The dataset provides researchers with variables that can be used to explore research topics on occupational hazards from the mishandling of agrochemicals and on health complaints among cocoa farmers. The subject of agrochemical usage in agricultural production is vital and, for cocoa, it is of critical relevance given the spectrum of pests and diseases associated with the cocoa plant. The data also contain information on the types of agrochemicals that were being used by cocoa farmers, awareness of precautionary measures to be taken in the course of handling agrochemicals, ownership and use of basic safety kits while handling agrochemicals and understanding of the right way of agrochemical application and disposal of containers/leftovers. The survey that resulted in the generation of this dataset was conducted in June 2015 in the Ahafo Ano North district of the Ashanti region of Ghana. The district was purposefully chosen because it is among the top cocoa growing areas in the region. With the assistance of resident extension officers, we employed a stratified random sampling procedure with sample sizes selected in proportion to the estimated number of cocoa farmers in each stratum. The district was stratified into twenty main communities based on the prominence of farming. Out of these twenty strata, eight were randomly selected and sample sizes were allocated based on the estimated number of cocoa farmers as provided by the extension officers. The sampled communities were Akwasiase (125), Bonkrom (42), Tepa (37), Abonsuaso (14), Jacobu (9), Kwekwewere (9), Dwahoo (6) and Anyinasuso (4). The enumerators were largely farm extension officers who were working directly with cocoa farmers in the district. Prior to the commencement of the survey, the enumerators were properly trained on the requirements of the survey and a pre-test of the questionnaire was undertaken among a few cocoa farmers. In each of the selected communities, the leaders of the cocoa farmers\u2019 groups and/or the chiefs assisted the extension agents in informing cocoa farmers of the purpose of the survey. Although the questionnaire was designed in the English language, interviews were conducted with the majority of the farmers in their local language (Akan-twi). The funding for this dataset was obtained from the 2014 IREA of the researcher, as approved by the North-West University Management Council."} +{"text": "The microbubble is an emerging modality in the field of medicine for treatment and imaging. Ultrasound-guided microbubble therapy is an effective diagnostic and treatment technique as it can reduce the systemic toxicity of chemotherapeutic drugs. It is also used for targeted gene delivery in gene therapy. The objective of this review article is to formulate a narrative review on the emerging importance of microbubbles in the diagnosis and treatment of cancer and their future in cancer management. The article focuses on the effectiveness of ultrasound-targeted microbubbles in the treatment of malignancy. Microbubbles are useful for targeted drug delivery. 
MicrobuThe diameter of a microbubble, being less than 10 \u03bcm approximately, equals the size of a red blood cell and it exhibits a similar rheology in the blood vessels and capillaries in the body . Recent A case report published in China described the use of low-frequency ultrasound combined with microbubbles in the treatment of inoperable hepatic cancer . After tA clinical case study conducted on patients with pancreatic cancer demonstrated the effectiveness of the combination of ultrasound, microbubbles, and gemcitabine in decreasing the tumor size.\u00a0Being a very aggressive tumor, it is very rare to see a decrease in the size of the tumor by cancer drugs. Surprisingly, combination therapy using ultrasound and microbubbles showed a decrease in tumor growth. The patients in the treatment group were able to undergo more cycles of treatment and the overall quality of life improved. Furthermore, the patients did not notice any discomforts during the treatment cycle .Many studies utilized the blood vessel\u00a0destructing properties of microbubbles and ultrasound in the treatment of prostate cancer. One remarkable study explored the effect of low-frequency low-intensity ultrasound with microbubble on the hypoxia environment of prostate cancer . AnotherDifferent studies demonstrated the effectiveness of ultrasound targeted microbubbles in the treatment of ovarian cancer. One experiment demonstrated the effectiveness of paclitaxel-loaded microbubbles coated with Luteinizing hormone releasing hormone analogue and ultrasound-targeted microbubble destruction greatly enhanced the therapeutic effect of paclitaxel . AnotherUltrasound-targeted microbubble destruction can be used as a noninvasive way in the treatment of breast cancer. A study noted that there was tumor regression by using ultrasound targeted delivery of miR\u2010133a\u2010MB without significant side effects . US targStudies have shown that Doxorubicin, one of the widely used antineoplastic drug, can be delivered to the tumor site as Doxorubicin-liposome-containing microbubbles with the use of ultrasound . UltrasoMicrobubble-mediated ultrasound therapy with Cisplatin or Cetuximab has shown to decrease the tumor size significantly in head and neck squamous cell carcinoma .Microbubble is a treatment and diagnostic modality that is gaining popularity in the field of cancer treatment. The above-mentioned studies demonstrate the efficacy and scope of microbubbles in cancer detection and treatment. Microbubbles can greatly reduce the side effects of the highly toxic chemotherapeutic drugs as the drug is encapsulated inside the microbubble and will only be released at the desired site. To conclude, ultrasound-guided microbubble is a promising new technique for diagnosing and treating cancer. Further clinical studies and research should be needed to completely know the effectiveness and safety profile of the method. So far, the study results have been encouraging."} +{"text": "Manis crassicaudata) is the only pangolin species present in Sri Lanka. There is no comprehensive assessment of its ecology or conservation status carried out in the Sri Lankan context. The dataset described herein is a compilation of information on the distribution, habitats and conservation status of Indian pangolins in Sri Lanka which is collected from a variety of primary and secondary data sources. All information included in the dataset has been recorded between January 2000 and December 2018. 
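The dataset described above compiles distribution, crime and rescue records for Indian pangolins, each with approximate GPS coordinates. As a rough illustration only, the sketch below shows one way such records could be represented; the field names and example values are assumptions inferred from the article's description, not the authors' actual schema.

```python
# Illustrative record structures for a dataset of this kind. Field names and
# example values are assumptions, not the published schema.

from dataclasses import dataclass
from typing import Optional

@dataclass
class DistributionRecord:
    date: str            # any record between 2000 and 2018, e.g. "2014-06-03"
    district: str
    habitat: str         # habitat type at the recorded locality
    latitude: float      # approximate GPS coordinates
    longitude: float

@dataclass
class CrimeRecord:
    date: str
    latitude: float
    longitude: float
    nature_of_crime: str
    action_taken: Optional[str] = None   # fine or other action, if recorded

@dataclass
class RescueRecord:
    date: str
    latitude: float
    longitude: float
    health_condition: str   # condition at the point of rescue
    post_rescue_status: str # e.g. released or retained in care

example = RescueRecord("2017-02-11", 7.2906, 80.6337, "uninjured", "released")
print(example)
```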
The data on distribution, crimes and rescue activities involving Indian pangolins all over the country were collected from the registries maintained by the Department of Wildlife Conservation, Department of National Zoological Gardens and non governmental organizations committed to the conservation of wildlife in Sri Lanka. Verified records from mass media and reliable field data gathered by the authors and their contact networks were further included in the dataset. The data on the distribution can be analyzed to identify the different habitats of the Indian pangolins and their abundance in different climatic zones. The data on distribution include the recorded area, habitat and approximate GPS coordinates of the recorded locality. The data on crimes involving pangolins was extracted from the offices of the Department of Wildlife Conservation which record the crime, date of crime, approximate GPS coordinates of localities where crimes occurred, nature of the crime and fines/actions taken against the offenders. Data on the rescue events include approximate GPS coordinates of the places where the Indian pangolins were rescued, health conditions at the point of rescue and post rescue status [Indian pangolin ( Data on1.1Data on the distribution include 1.2Data on crime records involving Indian pangolins include 1.3Data on rescue records of Indian pangolins include 2Manis crassicaudata in Sri Lanka. Indian pangolin is the only pangolin species in Sri Lanka [Collection of ecological data on nocturnal elusive mammals such as pangolin is difficult due to lower number of records per significant time period . Thus inri Lanka and it hri Lanka . The datri Lanka . Thus, r"} +{"text": "In the original article, there was an error. Near infrared spectroscopy (NIRS) is described under the heading Invasive Multimodality Monitoring of Comatose Patients After Cardiac Arrest and the subheading Use of Invasive Multimodality Monitors when it is a non-invasive modality of monitoring. A correction has been made to the heading, which now reads Other Multimodality Monitoring of Comatose Patients After Cardiac Arrest and the subheading which now reads Use of Invasive and Noninvasive Multimodality Monitors. The authors apologize for this error and state that this does not change the scientific conclusions of the article in any way. The original article has been updated.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "This paper deals with the recent levels of this radionuclide in seawater and with the link between an Arctic fjord, Kongsfjorden, and the Western Spitsbergen Current (WSC), investigated using 99Tc results. By means of the WSC, the 99Tc radionuclides ultimately reach the eastern Fram Strait west of Spitsbergen . Results from oceanographic modelling and sea ice observations indicate a direct coupling between Kongsfjorden and the area west of it. The findings in connection with new radionuclide results presented in this paper concur with these assumptions. Furthermore they indicate that the inner part of Kongsfjorden is also well linked to the WSC. Surface seawater from the central part of the WSC, sampled during a cruise with RV Polarstern in the summer of 2000, shows a higher level of 99Tc than those measured in Kongsfjorden in spring 2000. However, all levels measured in surface water are of the same order of magnitude. 
Data from sampling of deeper water in the WSC area provide information pertaining to the lateral distribution of 99Tc. The results, along with additional data from spring 2001, indicate that Kongsfjorden is suitable for monitoring the levels of 99Tc arriving in the European Arctic and that the sheltered setting of this fjord does not necessarily provide protection against pollution from the open sea.Seawater from the western coast of Svalbard was sampled in the spring and summer of 2000 to determine levels of technetium-99 ("} +{"text": "Recent advances in neural circuitry techniques, like optogenetics and chemogenetics, have allowed for a greater understanding of the periaqueductal gray (PAG) and its importance in predator and prey behaviors. These studies in rodents have highlighted the role of the rostrolateral PAG in hunting behaviors, and have demonstrated functional differences across the dorsal-ventral/rostral-caudal axes of the PAG associated with defensive behaviors. Human imaging studies have further demonstrated that the PAG is active during situations involving imminent threat suggesting that the function of the PAG is likely largely conserved across species. This mini-review article highlights some of the recent advancements towards our understanding of the functional neuroanatomy of the PAG and its importance in the predator and prey behaviors that are critical for survival. The periaqueductal gray (PAG) is essential for the expression of both the hunting behaviors performed by predators and the defensive behaviors performed by prey. Anatomically, it is largely bordered dorsally by the superior colliculus, and ventrally by the dorsal raphe (DR) and midbrain reticular nucleus. It can be further sub-divided into four columns arranged around the cerebral aqueduct . Environmental factors, like the presence of escape routes, and the proximity to the threat, contribute to the type of defensive behavior elicited. In the case of distal threats, rodents will engage in freezing behaviors, and will switch their defensive response to escape-related behaviors like flight and jumps as the likelihood of attack increases identified distinct subsets of dorsal PAG neurons that are responsible for risk assessment, flight, and freezing with a very small percentage of cells firing in association with more than one of these behaviors is a protein that regulates expression of many synaptic plasticity-related genes, and thus its expression can be used as a marker of neural plasticity. pCREB expression is increased transiently 20 min following predator exposure in the lateral PAG activity in the PAG increases as innately threatening stimuli becomes more imminent technology, much of this research has not sub-divided the human PAG into its dorsal and ventral sub-areas (Linnman et al., TF is the sole author of this review article.The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Air-sea flux exchanges influence the climate condition and the global carbon-moisture cycle. It is imperative to understand the fundamentals of the natural systems at the tropical coastal ocean and how the transformation takes place over the time. Hence, latent and sensible heat fluxes, microclimate variables, and surface water temperature data were collected using eddy covariance instruments mounted on a platform at a tropical coastal ocean station from November 2015 to October 2017. 
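The abstract above describes latent (LE) and sensible (H) heat fluxes measured with eddy covariance instruments. As a hedged illustration of the underlying calculation, the sketch below estimates H and LE from the covariance of vertical wind with temperature and specific humidity; the constants and the synthetic 10 Hz series are assumptions, and the corrections applied in real eddy covariance processing (despiking, coordinate rotation, WPL and frequency-response corrections) are omitted.

```python
# Minimal sketch of eddy-covariance sensible (H) and latent (LE) heat flux estimates.
# The 10 Hz series below are synthetic stand-ins, not the station's measurements.

import numpy as np

RHO_AIR = 1.18     # moist-air density [kg m-3] (assumed for a tropical site)
CP_AIR  = 1005.0   # specific heat of air [J kg-1 K-1]
LV      = 2.45e6   # latent heat of vaporisation [J kg-1]

rng = np.random.default_rng(0)
n = 10 * 60 * 30                                    # 30 min of 10 Hz samples
w  = rng.normal(0.0, 0.25, n)                       # vertical wind [m s-1]
Ts = 29.0 + rng.normal(0.0, 0.3, n) + 0.4 * w       # air temperature [deg C]
q  = 0.018 + rng.normal(0.0, 4e-4, n) + 6e-4 * w    # specific humidity [kg kg-1]

def covariance(a, b):
    """Mean product of fluctuations about the block average (Reynolds decomposition)."""
    return np.mean((a - a.mean()) * (b - b.mean()))

H  = RHO_AIR * CP_AIR * covariance(w, Ts)   # sensible heat flux [W m-2]
LE = RHO_AIR * LV * covariance(w, q)        # latent heat flux [W m-2]
print(f"H  = {H:6.1f} W m-2")
print(f"LE = {LE:6.1f} W m-2")
```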
The research data were collected to gain the needed knowledge of the energy exchanges in the tropical climatic environment and to further improve predictive algorithms or models. Therefore, it is intended that this data report will offer appropriate information on the Monsoonal and diurnal patterns of latent (LE) and sensible (H) heat fluxes and hence establish the relationship between microclimate variables and the energy fluxes at the peninsular Malaysian tropical coastal ocean. Value of the data: \u2022These data showed that the tropical coastal ocean energy exchanges between the atmosphere and the ocean biospheres are dynamic in nature and thus not easily predicted by the application of the existing TOGA-COARE models.\u2022Data of this nature are very important for researchers working on the relationship between microclimate variables and the energy budgets.\u2022The research data are related to the government policy of improving the environmental and health conditions of coastal and estuarine areas in any part of the world. The tropical coastal ocean plays a significant role in the energy exchange between the air and sea compared to higher and lower latitude regions due to increased and persistent solar radiation and high and constant water surface temperature. Therefore, the data have helped to determine the Monsoonal and diurnal patterns of latent (LE) and sensible (H) heat fluxes. Air-sea flux exchanges drive the climate and the global carbon-moisture cycle. Climate models forecast an increase in sea-surface temperature because of global warming, which subsequently intensifies the oceans' capacity to deliver heat to the atmosphere in the form of fluxes. Energy flux exchange in the tropical ocean requires further study as the magnitude of these fluxes has generally been estimated and seldom directly measured, except for a handful of studies. Planet-scale studies of air-sea interactions ubiquitously depend on indirect flux quantification methods such as the bulk flux algorithm"} +{"text": "We present an unusual case of a five-month-old neglected anterior dislocation of the right elbow joint in a 19-year-old man. The patient had been initially treated by a traditional bone setter, but the elbow remained unreduced. He presented to us with pain, deformity and limited range of motion of his right elbow joint. Radiographs revealed an unreduced anterior dislocation of the right elbow joint. We describe the problems encountered during open reduction and rehabilitation and the result one year after the operation, with the patient having a stable elbow and a functional range of motion. Acute uncomplicated anterior dislocation of the elbow joint is rare compared to its posterior counterpart and good treatment outcomes have been reported4. Old unreduced anterior dislocation of the elbow joint has not been reported. Here we discuss the likely mechanism, clinical features, disability at presentation, challenges anticipated, problems encountered and the result after open reduction of a five-month-old unreduced anterior dislocation of the elbow joint in a 19-year-old male patient. The elbow is the second most common joint that sustains dislocation in adults, and the treatment is simple. Old unreduced posterior dislocation of the elbow joint is common in developing countries. The reasons ascribed are the lack of awareness, inadequate access to health care facilities and easy access to traditional bone setters. 
Acute anterior dislocation of the elbow joint associated with fracture of olecranon have been reported in adults and childrenA 19-year old male presented to us in the outpatient department with complaints pain on lifting weight with the right arm, deformity and limited range of motion of the right elbow for five months. The patient had fallen down and sustained the injury to his right elbow while hanging from the rootlets of a Banyan tree, following which, he had pain, swelling, and deformity of the right elbow. He had sought treatment from a local bone setter for four weeks following which pain and swelling decreased, but the deformity and elbow stiffness had persisted, for which he attended our hospital.On examination, the Beighton hyperlaxity score of the patient was 5/9. There was flexion deformity of the elbow joint and wasting of muscles of the arm and forearm. The olecranon process was displaced from the olecranon fossa of the right humerus and an abnormal bone mass was palpable on the anterior aspect of the distal humerus. There was a flexion deformity of 40 degrees of the elbow joint with further flexion of 70 degrees. Pronation and supination were normal. There was a valgus laxity of the right elbow joint. The differential diagnoses were neglected dislocation of the elbow joint (posterior/anterior) and mal-united supracondylar fracture.Antero-posterior and lateral radiographs of right elbow demonstrated an anterior dislocation of the elbow joint with an anterior bone mass at the distal humerus. The bony anatomy of the elbow appeared unclear on radiography, and a Computed Tomogram (CT) with 3D reconstruction confirmeWe performed an open reduction of the elbow by combined medial and lateral approach based on findings of the CT scan. We were successful in excising the bone mass but failed to reduce the elbow joint. There was some early degeneration of the articular cartilage of the distal humerus and olecranon. It was impossible to reduce the olecranon posteriorly. We extended the approach through the subcutaneous plane to the posterior aspect and performed an olecranon osteotomy. The humerus was reduced into the osteotomy, and it was fixed with tension-band wiring. Indomethacin was started at 25mg eight hourly after surgery for three weeks after the operation. We did not immobilise the elbow and started active assisted mobilisation of the elbow joint after surgery as tolerated by the patient. The patient was discharged after wound inspection on the 5th post-operative day and advised to attend the rehabilitation department for physiotherapy for six weeks.At review one year postoperative he had a painless range of motion of 30 degrees to 120 degrees at the elbow joint. He has excellent pronation and supination and could perform light activities. The olecranon osteotomy healed well though t4. Venkatram et al have described the mechanism of anterior dislocation in a 1-year old child. They hypothesised that a sudden pull on the forearm in an attempt to stop the child from running and falling down on the floor might cause an anterior dislocation at the elbow joint2. We believe that a mechanism similar to the one described by them operated in our case. An association of fractures of the olecranon, medial epicondyle, lateral epicondyle and pulled elbow has been reported with anterior dislocation of the elbow joint5.Acute anterior dislocations of the elbow joint have been described in the literature5. The case described here presented with similar clinical and anatomical findings. 
The delay in seeking treatment and presence of bony mass anteriorly at the distal humerus posed a challenge in arriving at the diagnosis of elbow dislocation. Good outcome has been reported in neglected posterior elbow joint dislocations after open reduction utilising the posterior approach with or without V-Y plasty. We had several challenges in deciding the surgical approach, extent of soft tissue release and duration of postoperative immobilisation in our case. We planned open reduction of the elbow by utilising combined medial and lateral approaches because of the position of olecranon and radial head in an antero-medial location. A posterior approach in itself would have severely hampered the access anteriorly. We believe that antero-medial displacement of olecranon and radial head in relation to the distal humerus led to stretching of the triceps. A triceps split or a V-Y plasty of the triceps would have been probably insufficient and challenging to bring the trochlea into the olecranon. However, the inability to reduce the humero-ulnar articulation despite complete release made us apply the unconventional step of an olecranon osteotomy through the existing medial limb of the combined medial and lateral approach.Clinical features described for an acute anterior dislocation of the elbow joint are a flexed attitude, pain, swelling, deformity and painful restriction of range of motion. Anatomically there is disruption of the bony relationship between the olecranon, the lateral and the medial epicondylesTo summarise, anterior dislocation of the elbow joint is infrequently reported. Treatment of unreduced anterior dislocation of the elbow joint can be a challenging problem. A good outcome can be expected in adequately treated cases.The authors declare no conflicts of interest."} +{"text": "In this context, multiscale simulations can contribute to deciphering the intricacies of the splicing mechanism by assessing the chemical details of the pre-mRNA cleavage, and the role of the extraordinarily convoluted protein/RNA environment in creating the appropriate structural scaffold that finely modulates introns removal complex , and electrostatic calculations disentangled the cooperative motions underlying the SPL functional dynamics, unraveling the role of electrostatics in modulating these movements (Casalino et al., A detailed comprehension of the molecular terms of eukaryotic splicing has entailed implications for revolutionary gene modulation therapies and drug discovery studies aimed at fighting the over 200 human diseases associated with splicing defects. Upon the deposition of the first SPL structure from yeast in 2015, many human cryo-EM maps have been solved, thus opening new opportunities to dissect detailed aspects of this machinery (Kastner et al., Although the reported results from all-atom simulations\u2014and all the possible future applications\u2014appear to be very encouraging (Casalino et al., In this scenario, we expect that new methodological advances in computer simulations, modeling and analysis techniques will foster atomic-level studies of the SPL, contributing to an utter comprehension of this fundamental step of gene expression. 
This will also be of service for a better understanding of the allosteric signaling between distal sites, which occurs via the entangled protein/RNA networks characterizing the SPL, and for the discovery of druggable allosteric sites (Palermo et al., LC and AM designed research and wrote the paper.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "We therefore conducted scaffold diversity and comparison analysis of natural products with in vitro antiplasmodial activities (NAA), currently registered antimalarial drugs (CRAD) and malaria screen data from Medicine for Malaria Ventures (MMV). The scaffold diversity analyses on the three datasets were performed using scaffold counts and cumulative scaffold frequency plots. Scaffolds from the NAA were compared to those from CRAD and MMV. A Scaffold Tree was also generated for each of the datasets and the scaffold diversity of NAA was found to be higher than that of MMV. Among the NAA compounds, we identified unique scaffolds that were not contained in any of the other compound datasets. These scaffolds from NAA also possess desirable drug-like properties making them ideal starting points for antimalarial drug design considerations. The Scaffold Tree showed the preponderance of ring systems in NAA and identified virtual scaffolds, which may be potential bioactive compounds.In light of current resistance to antimalarial drugs, there is a need to discover new classes of antimalarial agents with unique mechanisms of action. Identification of unique scaffolds from natural products with Plasmodium is highly desirable and needed. Since the molecular scaffolds as well as the pharmacophore features of a compound define the uniqueness of a compound, exploration of scaffolds of NAA may lead to identification of new antimalarial chemotypes. The term molecular scaffold is used to describe the core structure of a molecule and it determines the spatial orientation within the binding pocket of biological targets , a robust artificial neural network algorithm, was used to organize Murcko scaffolds of CRAD and bioactivity subgroups of NAA based on scaffold structural similarity. Scaffold structural similarity was assessed with \u201cFragfp descriptors\u201d from DataWarrior . Fragfp in vitro antiplasmodial activities (NAA); currently registered antimalarial drugs (CRAD) and malaria box from Medicine for Malaria ventures (MMV). The scaffold diversity of these antimalarial compound datasets were computed and compared. The scaffold count and cumulative scaffold frequency plots (CSFP) showed that CRAD is the most scaffold diverse dataset while NAA displayed more scaffold diversity than MMV. Amongst the bioactivity subgroups of NAA, the high active (HA) compound set had the highest scaffold diversity. The Scaffold count and Cumulative scaffold frequency plots (CSFP) were useful indicators of scaffold diversity of the antimalarial and antiplasmodial datasets studied.Murcko scaffolds and Scaffold Trees were generated from natural products with It was evident that many of the scaffolds from the NAA were not similar to those from CRAD, thereby highlighting the novelty of these scaffolds in the antimalarial chemical space. 
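The analysis described above generates Bemis-Murcko scaffolds and summarizes scaffold diversity with scaffold counts and cumulative scaffold frequency plots (CSFP). The sketch below shows how such quantities can be computed with RDKit; the SMILES strings are illustrative placeholders rather than compounds from the NAA, CRAD or MMV datasets, and this is not the authors' exact pipeline.

```python
# Minimal sketch of a Murcko-scaffold count and cumulative scaffold frequency
# calculation with RDKit. The SMILES below are illustrative placeholders only.

from collections import Counter
from rdkit import Chem
from rdkit.Chem.Scaffolds import MurckoScaffold

smiles = [
    "CC(=O)Oc1ccccc1C(=O)O",     # aspirin-like molecule
    "c1ccc2[nH]ccc2c1",          # indole
    "c1ccc2[nH]ccc2c1CC",        # substituted indole (same scaffold as above)
    "C1CCC(CC1)N1CCOCC1",        # cyclohexane linked to morpholine
]

scaffolds = []
for smi in smiles:
    mol = Chem.MolFromSmiles(smi)
    if mol is None:
        continue
    core = MurckoScaffold.GetScaffoldForMol(mol)   # Bemis-Murcko framework
    scaffolds.append(Chem.MolToSmiles(core))

counts = Counter(scaffolds)
print("unique scaffolds:", len(counts), "for", len(scaffolds), "molecules")

# Cumulative scaffold frequency: fraction of the library covered when scaffolds
# are added from the most to the least populated.
ordered = sorted(counts.values(), reverse=True)
cumulative = 0
for i, c in enumerate(ordered, start=1):
    cumulative += c
    print(f"top {i}/{len(ordered)} scaffolds cover {cumulative/len(scaffolds):.0%} of compounds")
```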
Moreover, most of the scaffolds have desirable drug-like properties and these novel scaffolds may be used as frameworks to design new antimalarial focused compound libraries.Scaffold Tree was also used to explore the scaffolds present in CRAD and bioactivity subgroups of NAA . The presence of scaffolds at increasing levels of the Scaffold Tree for HA, A, MA and LA indicate the prevalence of ring systems and increasing molecular complexity of the compounds in these datasets. More importantly, virtual scaffolds present at different levels of the Scaffold Tree of the NAA compounds were identified, which are chemically significant, and may also provide starting points for new potential antimalarial compounds.Overall, the exploration and comparison of the scaffolds from CRAD, MMV and NAA enabled us to find novel scaffolds and chemotypes that may result in progress towards design of new compound libraries and development drug candidates to combat malaria. This study also underscores the potentially significant contributions from nature to antimalarial drug development."} +{"text": "There are considerable individual differences in the rates of cognitive decline across later adulthood. Personality traits are one set of factors that may account for some of these differences. The current project explores whether personality traits are associated with trajectories of cognitive decline, and whether the associations are different before and after a diagnosis of dementia. The data will be analyzed using linear mixed effects regression. Across these goals is a focus on replicability and generalizability. Each of these questions will be addressed in four independent longitudinal studies of aging , then meta-analyzed, thus providing an estimate of the replicability of our results. This study is part of a registered report of existing data that is currently under stage 1 review."} +{"text": "The complete or the partial absence of pericardium is a rare congenitalmalformation for which the patients are commonly asymptomatic and the diagnosisis incidental. The absence of the left side of the pericardium is the mostcommon anomaly that is reported in the literature while the complete absence ofpericardium or the absence of the right side of the pericardium are uncommon andtheir criteria are still unrecognized given their rare occurrence in clinicalpractice. This paper aims to report a case of 19-year-old male with thecongenital partial absence of both sides of the pericardium and to highlight thesymptoms and the different cardiac imaging modalities used to confirm thediagnosis of this defect. A 19-year-old male patient was admitted to the radiology department of the MilitaryHospital of Tunis in July 2017 for a physical fitness test. His past medical historywas not remarkable. He had no chest pain, no dyspnea or other specific signs. Theelectrocardiogram (ECG) indicated a normal sinus rhythm with a heart rate of 65 bpmand diffuse T wave inversion in anteroseptal leads. The transthoracicechocardiography and the transesophageal echocardiography showed an enlarged rightventricular dimension with no evidence of atrial septal defect. The Chest X-rayrevealed a lucent area due to the interposition of lung tissue between the aorta andpulmonary artery . 
In addiFurthermore, the exam showed an interposition of lung tissue between the diaphragmand the base of the heart and between the aorta and the superior vena cava and D wi2 and 58.6 % which is normal.All these findings revealed by cardiac imaging examinations confirmed a diagnosis ofpartial agenesis of right and left pericardium.Right end-diastolic volume index (RVEDi) and right ventricular ejection fractionobtained by MRI were respectively 87 ml/m. Among the different classes of CAP, the absence of theleft side of the pericardium is the most common defect with a prevalence of 70 %while the incidence of the total absence of pericardium or the absence of the rightside of the pericardium is still relatively uncommon.The absence of pericardium is a rare congenital malformation generally characterizedby non-specific symptoms. The majority of clinical cases reported in the literatureshowed that the congenital absence of pericardium (CAP) includes a total absence ofpericardium and complete or partial absence of the left or the right side of thepericardium. The possible embryological origins of this pericardial anomaly are notwell understood, but most studies demonstrated that it is due to defectivedevelopment of the pleuropericardial membranes. In themajority of reported cases of CAP, the clinical presentation is not specific.Patients are asymptomatic and the findings of this congenital disease are generallyoccurring incidentally. In this regard, the advent of different cardiac imagingmodalities has significantly improved the specificity of the diagnosis of CAP byproviding valuable information and indications that confirm the presence of thiscongenital heart disease.For this reason, the majority of literature reviews have focused on the possiblesymptoms, indications, and management algorithm that could be associated with theabsence of the left side of the pericardium. Among the typical clinical signs, wecould observe chest pain, dyspnea, the episode of acute respiratory distress leadingto syncope, palpitation.In the current paper, a case of partial absence of right and left sides of thepericardium is presented. The findings of our study showed that the present case wasasymptomatic without a remarkable medical history. In addition, the ECG showed aregular rhythm with diffuse T wave inversion in anteroseptal leads. While physicalexamination and ECG are not specific for the diagnosis of partial agenesis of thepericardium, the echocardiography contributes to the identification of severalfeatures related to this defect. The typical echocardiography findings of partialagenesis of left pericardium include cardiac levoposition, abnormal septal motionand increased mobility of the heartassociated with this type of defect, an enlarged right ventricle and hypertrophiedright atrium with severe tricuspid regurgitation could be observed.A few case reports in the literature have shown the findings of echocardiography inpatients with partial agenesis of the right pericardium. Among the echocardiographyfindings described by Shah et al2 which is normal. This finding indicated that echocardiographyshowed a what appears to be a dilated right ventricle due to its anterior location.This would tend to yield a larger measured right ventricular dimension. As a result,the patient might be falsely labeled as affected by arrhythmogenic right ventriculardysplasia.In the current case, the echocardiography showed an enlarged right ventriculardimension without an atrial septal defect. 
RVEDi obtained by MRI was 87ml/mIn addition to the echocardiography, chest X-ray and cardiac CT play an essentialrole in the confirmation of the diagnosis and to the exclusion of some complicationsassociated with partial agenesis of the pericardium. The chest X-ray showed that theinterposition of lung tissue causes a lucent area between the aorta and pulmonaryartery.While echocardiography and chest X-ray exams are helpful in the extraction of somepartial agenesis features as well as in the exclusion of other cardiac diseases,they are not able to confirm the diagnosis of partial absence of the pericardium.For this reason, a Chest CT or a Magnetic Resonance Imaging are always required inorder to provide a definitive diagnosis and to assess the extent of theabnormality.. All These CTfindings were shown in our case with a non-visualization of the superior portion ofthe right pericardium. Furthermore, the outcome of the cardiac CT showed aninterposition of lung tissue between the aorta and the superior vena cava, whichstrongly confirms a diagnosis of partial agenesis of right and left pericardium.The chest CT features of the absence of right pericardium include visualizingherniation of right structure while the partial agenesis of the left pericardium iscommonly revealed by the interposition of lung tissue between the aorta andpulmonary arteryThe management of partial agenesis of right and left pericardium depends on the typeand the extent of the pericardial defect. Usually, an intervention is not needed incase of patients with a complete absence of the pericardium. Complications are moreoccurring for patients with partial agenesis. Among the major complications thatrequired a surgical intervention, we can note necrosis due to the herniation of theleft atrial appendage, myocardial strangulation and incarceration of cardiacstructures. In our case, the patient does not have these complications. Therefore,an intervention is not needed unless significant complications occur.In the current paper, we presented a case with the congenital partial absence of bothsides of the pericardium. The outcomes of this study showed that this defect isusually asymptomatic. For this reason, a combination of several cardiac imagingmodalities is needed to establish an accurate diagnosis of partial agenesis of rightand left sides of the pericardium."} +{"text": "The role of entropy in materials science is demonstrated in this report in order to establish its importance for the example of solute segregation at the grain boundaries of bcc iron. We show that substantial differences in grain boundary chemistry arise if their composition is calculated with or without consideration of the entropic term. Another example which clearly documents the necessity of implementing the entropic term in materials science is the enthalpy-entropy compensation effect. Entropy also plays a decisive role in the anisotropy of grain boundary segregation and in interface characterization. The consequences of the ambiguous determination of grain boundary segregation on the prediction of materials behavior are also briefly discussed. All the mentioned examples prove the importance of entropy in the quantification of grain boundary segregation and consequently of other materials properties. Grain boundary segregation is a phenomenon that influences the behavior of the whole material under external conditions . 
Experimental data on grain boundary segregation have been most frequently obtained from measurements by Auger electron spectroscopy (AES) but recently also from analytical transmission electron microscopy or 3D atom probe tomography. Thermodynamic quantities have been calculated by theoretical models such as density functional theory (DFT), molecular statics and dynamics, and the Monte Carlo methods. In the present paper, it is discussed whether the differences caused by neglecting the entropic contribution are important for the determination of the grain boundary solute concentration or not. Other aspects of the entropic term, such as the enthalpy-entropy compensation effect and the reversed anisotropy of grain boundary segregation, are discussed. The effects of an incorrect determination of the grain boundary segregation on related materials properties are also briefly listed. Based on all findings, it is shown that the entropy of grain boundary segregation is an important parameter that cannot be neglected. As mentioned above, the most popular expression for the description of the solute concentration at the grain boundary is the Langmuir\u2013McLean segregation isotherm, X_I^GB/(X_0^GB - X_I^GB) = [X_I/(1 - X_I)] exp(-\u0394G_I/(RT)), where X_I^GB and X_I are the molar fractions of the solute I at the grain boundary and in the volume of the host material, M, respectively, X_0^GB is the saturation level of the grain boundary, and an additional interaction term accounts for the I-I interaction in the host material M. The standard term is constructed by the enthalpy (H) and entropy (S) terms, \u0394G_I^0 = \u0394H_I^0 - T\u0394S_I^0. From the experimentally measured temperature dependence of the chemical composition of the grain boundaries, the values of all three parameters required for a complete description of grain boundary segregation can be determined. There is a simple thermodynamic relationship (Equation (6)) between the Gibbs energy and the Helmholtz energy of solute segregation, in which P is the pressure; theoretical calculations result in the values of the Helmholtz energy of solute segregation. At T = 0 K, the entropic term is zero, so that the calculated energy corresponds to the enthalpy of segregation of I in M. Equation (7) represents the basis for comparison of the theoretical data with the experimental results. A considerable number of published values of segregation energy and enthalpy exist. These values have been obtained either theoretically or experimentally, and much of the available data for selected host materials has been summarized in the literature. A comparison of experimental results and theoretical calculations of the enthalpy and/or energy of grain boundary segregation found in the literature shows excellent agreement in cases where all physical prerequisites are fulfilled [13,14]. The importance of entropy in grain boundary segregation was already demonstrated in the case of the enthalpy-entropy compensation effect. In fact, the compensation effect is a linear relationship between the segregation enthalpy and the segregation entropy (Equations (8) and (9)), in which T_CE is the compensation temperature. The existence of the enthalpy-entropy compensation effect has another very important consequence. The combination of Equations (4) and (8) shows that the grain boundary concentration is the same for all grain boundaries at T_CE. This can be documented by the reversed character of phosphorus segregation at the {013} and {058} grain boundaries: the sign of the differences in chemical composition which occur at temperatures lower than T_CE is reversed at temperatures above it. In the above analysis, we saw that the consideration or negligence of entropy in the quantification of grain boundary segregation as a representative of intergranular properties gives very different results. 
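To illustrate this point numerically, the sketch below evaluates the Langmuir-McLean isotherm for a hypothetical solute with and without the entropic term. The segregation enthalpy, entropy and bulk concentration are illustrative round numbers, not the phosphorus- or silicon-in-iron values discussed in the text, and the simple form with unit saturation coverage is assumed.

```python
# Minimal sketch of the Langmuir-McLean isotherm evaluated with and without the
# entropic term. All numerical inputs are illustrative assumptions.

import math

R = 8.314        # gas constant [J mol-1 K-1]
DH = -40e3       # segregation enthalpy [J mol-1] (assumed)
DS = 30.0        # segregation entropy [J mol-1 K-1] (assumed)
X_BULK = 5e-4    # bulk solute molar fraction (assumed)

def gb_fraction(T, include_entropy=True):
    """Grain boundary solute fraction X_GB from
    X_GB / (1 - X_GB) = X / (1 - X) * exp(-(dH - T*dS) / (R*T))."""
    dG = DH - T * DS if include_entropy else DH
    k = X_BULK / (1.0 - X_BULK) * math.exp(-dG / (R * T))
    return k / (1.0 + k)

for T in (600.0, 800.0, 1000.0):
    with_s = gb_fraction(T, include_entropy=True)
    no_s = gb_fraction(T, include_entropy=False)
    print(f"T = {T:6.1f} K   X_GB(dH - T*dS) = {with_s:.3f}   X_GB(dH only) = {no_s:.3f}")
```

With these illustrative inputs the predicted boundary coverage differs severalfold between the two treatments, mirroring the qualitative conclusion drawn in the text.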
Large differences are apparent between the grain boundary concentrations determined in these two ways, and the enthalpy-entropy compensation effect cannot be considered if entropy is neglected. These differences led us to conclude that calculations of the grain boundary composition that neglect the entropy term are incorrect.An incorrect determination of the grain boundary segregation can have important consequences for practical applications. It is known, for example, that phosphorus segregation at grain boundaries induces the temper embrittlement of ferritic steels, which can result in brittle fracture ,19. If tGrain boundary segregation of impurities in steels also affects other materials properties controlled by interfaces . PhosphoAs mentioned in the introduction, grain boundary segregation can also have an important effect on the reduction of grain boundary mobility and, conTo assess the segregation effect on material behavior correctly, we must know the precise value of the grain boundary concentration of an impurity. An incorrect determination of the grain boundary concentration can thus result in misleading practical conclusions. For example, the systematically lower values of grain boundary concentrations of phosphorus as determined in Both model calculations, as well as phenomena such as the temperature and concentration dependences of grain boundary segregation, clearly prove that the entropy term is irreplaceable in all considerations of grain boundary segregation. This conclusion is supported by several examples: (1) the comparison of the temperature dependence of phosphorus and silicon segregation at two differently oriented grain boundaries, calculated with and without the entropic term; and (2) the enthalpy-entropy compensation effect and its consequence in changing the character of grain boundary segregation as compared with the two defined grain boundaries. These examples clearly illustrate that the entropy of grain boundary segregation cannot be neglected in any treatment that deals with this phenomenon. As the entropy of segregation can be obtained from experimental studies on the temperature dependence of grain boundary chemistry at present, it is a great challenge to find new approaches for theoretical calculations of this parameter in order to make significant progress in understanding the phenomenon of grain boundary segregation."} +{"text": "Foot-and-mouth disease (FMD) is a highly infectious transboundary disease that affects domestic and wild cloven-hoofed animal species. The aim of this review was to identify and critically assess some modelling techniques for FMD that are well supported by scientific evidence from the literature with a focus on their use in African countries where the disease remains enzootic. In particular, this study attempted to provide a synopsis of the relative strengths and weaknesses of these models and their relevance to FMD prevention policies. A literature search was conducted to identify quantitative and qualitative risk assessments for FMD, including studies that describe FMD risk factor modelling and spatiotemporal analysis. A description of retrieved papers and a critical assessment of the modelling methods, main findings and their limitations were performed. Different types of models have been used depending on the purpose of the study and the nature of available data. The most frequently identified factors associated with the risk of FMD occurrence were the movement and the mixing of animals around water and grazing points. 
Based on the qualitative and quantitative risk assessment studies, the critical pathway analysis showed that the overall risk of FMDV entering a given country is low. However, in some cases, this risk can be elevated, especially when illegal importation of meat and the movement of terrestrial livestock are involved. Depending on the approach used, these studies highlight shortcomings associated with the application of models and the lack of reliable data from endemic settings. Therefore, the development and application of specific models for use in FMD endemic countries, including those in Africa, is encouraged. FMD is caused by a virus of the genus Aphthovirus in the family Picornaviridae, called foot-and-mouth disease virus (FMDV). The primary mode of transmission of FMDV is via direct contact from infected to susceptible animals."} +{"text": "Li J et al. conducted a sufficiently large cohort study and showed that the risk of gestational diabetes mellitus (GDM) is inversely correlated with the height of the pregnant women. First, short stature can be associated principally through the mechanism of greater risk of obesity/fat mass. GDM, as a form of diabetes, is multifactorial in origin. Several factors such as greater prepregnancy BMI, age, weight gain and a parental history of diabetes mellitus are independently associated with GDM. GB wrote the first draft and reviewed all drafts of the commentary. AN reviewed and provided inputs for the finalization of the commentary. DJ reviewed and provided inputs for the finalization of the commentary through all stages. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "It is increasingly clear that the intestinal microbiota plays key roles in the pathogenesis of the conditions known as Crohn disease and ulcerative colitis (jointly known as the inflammatory bowel diseases). Perturbations of the microbiota, termed dysbiosis, are present at diagnosis and likely reflect earlier environmental influences along with interactions with intestinal immune responses. Over the last two decades, there has been increasing interest in the use of a nutritional therapy to induce remission of active Crohn disease. Amongst a number of recent studies focusing on the putative mechanisms of action of enteral nutrition in Crohn disease, there have been several reports illustrating profound interactions between this nutritional therapy and the intestinal microbiota. Although at present it is still not clear how these changes relate to concurrent improvements in inflammation, it has become an area of increasing interest. This review article focuses on the impacts of nutritional therapy in individuals with active Crohn disease and overviews the most recent data arising from international studies. Although the two main subtypes of IBD, Crohn disease (CD) and ulcerative colitis (UC), share similar features, CD is differentiated from UC by disease location, histological changes and disease behaviour. The pathogenesis of IBD is still not completely understood. 
At present, the best accepted hypothesis is that environmental factors lead to key changes in the intestinal microbiota that then trigger dysregulated innate and acquired immune responses in individuals with genetic risk factors At present IBD is incurable: management involves induction of remission followed by maintenance of remission 2.EEN involves the exclusive use of a liquid diet for a period of time , with exclusion of normal diet over this time Although the main role of EEN is the induction of remission, this intervention also has various other benefits. These include enhancing nutritional status and improving bone health Following a period of EEN, many units recommend ongoing maintenance enteral nutrition (MEN), involving supplementary enteral formulae intake supplied in addition to standard or habitual diet. This intervention is associated with maintenance of remission, with reduction in relapse 3.Campylobacter species, Mycobacterium paratuberculosis (MAP) and Escherichia coli (e.g. adherent invasive E. coli). As one example, a number of studies illustrate higher rates of CD developing after Campylobacter infections. In addition, data shows a significantly greater prevalence of specific Campylobacter species (C. concisus) in children with CD compared to well control children Many studies have focused on looking at a single organism as the causative factor for the development of IBD Faecalibacterium prausnitzii is paramount Most recent data illustrate that the presence of distinct changes in the patterns of the flora, so called dysbiosis, are more important in the development of IBD. This dysbiosis is considered to be present prior to and at the onset of IBD and is characterised by variations in the balance between bacterial groups (e.g. Firmicutes and Bacteroidetes). Some recent data also indicate that a reduction in one particular group of bacteria, Several animal and human observations confer a critical role for the intestinal microbiota in the development of IBD. Firstly, the presence of microbiota is required to promote the development of inflammation in various animal models of IBD. An example is the interleukin (IL)-10 knock out mouse model In the setting of individuals with CD, a defunctioning surgical procedure is typically followed by resolution of inflammation in the bypassed section of gut 4.Although EEN has been utilised to induce remission in individuals with active CD for many years, it has only been in the last few years that investigators have worked to elucidate the mechanisms by which this intervention leads to reduced inflammation. Several mechanisms have been demonstrated and proposed. These include enhanced barrier function with increased tight junction performance and direct anti-inflammatory effects during EEN therapy Clostridium leptum group was associated with change in disease activity and change in levels of S100A12 .The first reports that considered the direct effects of EEN on the intestinal microbiota utilised temperature temporal or density gradient gel electrophoresis (TTGE or DGGE respectively) methods. These reports demonstrated significant alterations consequent to the nutritional intervention F. prausnitzii species, contrary to the authors' hypothesis and contrary to expectations arising from the earlier observations made in adult patients with CD In a more recent report that also utilised TTGE (augmented with quantitative PCR), Gerasimidis et al. 
As well as assessing the impact of EEN on the structure of the intestinal microbiota, these authors also characterised functional aspects subsequent to the period of nutritional intervention A number of subsequent studies have employed 16S rRNA high throughput sequencing and some reports have also employed whole genome or shot-gun sequencing techniques. Some additional reports have also utilised methods to further describe the functional implications of the changes in the microbiota.One of the first studies to utilise 16S rRNA sequencing showed dysbiosis at the time of diagnosis of the five subjects with CD, which was not apparent in control children Quince et al. Lewis and colleagues This study also delineated the patterns of the intestinal microbiota after 8 weeks of therapy and stratified according to response or non-response. Response in the groups treated with EEN or a biologic therapy was associated with the microbiota reverting closer to the patterns seen in the control children.Bacteroidetes phylum, and with increases in bacteria in the Firmicutes phylum. These changes were evident after just two weeks of EEN. Interestingly the changes in the children who were recently diagnosed differed from those with longstanding disease. In contrast to the findings described by Kaakoush et al. More recently, Schwert and colleagues This study also characterised specific and detailed changes in innate and acquired immune status Dunn et al. F. prausnitzii alone. EEN was noted to reduce several types of this bacterium, similar to and supporting the changes described in the afore-mentioned paediatric study Each of the studies reported above were conducted in children. To add to this pattern, two studies have been reported to date that included adult subjects. Jia et al. Bacteroides fragilis group reduced by one tenth: there were no significant changes in the other bacterial groups assessed.Shiga and co-authors The focus of the work reviewed above was on the impact of the EEN on the intestinal microbiota in the context of CD. Interestingly, there have been a series of reports demonstrating that this nutritional intervention is also efficacious and also leads to alteration of the structure and function of the intestinal microbiota in rheumatological disease. Swedish authors reported that a course of EEN given over 3 to 8 weeks in 13 children with juvenile idiopathic arthritis (JIA) promptly resulted in marked reduction of joint inflammatory scores and decreased levels of key pro-inflammatory cytokines implicated in joint inflammation 5.There are now a number of reports that have utilised various techniques to elucidate the specific changes in the intestinal microbiota following the intervention of EEN in the setting of CD. Additional reports have also characterised some functional implications of the changes in the microbiota.F. prausnitzii, which has been characterised as a defensive bacterium in adults with CD. Whilst this reduction may just reflect the change in nutrient intake for this organism, it is intriguing that this reduction is concurrent with improved inflammation and disease activity, which is contrary to the previously-reported activity of this bacterium.Overall, these reports have all shown that EEN leads to profound alterations in the intestinal microbiota, with these changes being rapid (within one week) and generally maintained throughout the dietary intervention. The reports have also emphasised significant interindividual variations. 
Interestingly, at least two reports have specifically demonstrated changes in F. prausnitzii, as noted in several reports, may reflect the change in nutrient supply for this bacterium. None of the enteral nutrition formulae utilised in the recent EEN studies contain dietary fibre. The lack of this may contribute to the consequent reduction in abundance of organisms that have fermentative activities. Consistent with this are the studies that have measured specific short chain fatty acids: each of these reports have indicated a reduction in levels of faecal butyrate, a product of fermentation of dietary fibre. This feature was also shown in the studies involving children with joint disease (without known gut inflammation) reflecting that the response is more likely secondary to the nutritional change.The changes in the abundance of Several reports have also demonstrated that the changes in the structure in the intestinal microbiota may be reflective of the duration of disease (newly-diagnosed versus long-standing disease) and also provide predictive pointers to the response to EEN itself. Other reports have demonstrated that the changes seen in response to EEN are subsequently reversed in those who have an exacerbation of their disease. Together these findings indicate that characterisation of the structure of the intestinal microbiota may have significant prognostic relevance in children with CD. Prompt assessments of key aspects of the intestinal microbiota may be able to be utilised to predict the response to EEN, and also to predict the risk of subsequent relapse of disease. Inclusion of these bio-indicators in a panel with other specific bio-markers may be even more advantageous.At present the reports of the impact of EEN on the intestinal microbiota do not fully illuminate the mode of action of this intervention in the context of active CD. The network modelling of Schwert et al. Although the data outlined in this report have focused on individuals with CD, it is intriguing to note that EEN has also been demonstrated to have benefits in other inflammatory conditions, such as joint disease. Combined studies of dietary interventions in various inflammatory conditions may also help to further advance the roles and mechanisms of EEN.In conclusion, there is no doubt of the role of the intestinal microbiota in the development and pathogenesis of IBD, especially CD. These data demonstrate that nutritional intervention in individuals with CD is associated with wide and profound changes in the patterns of the microbiota. These data give strong indications that these changes may be employed in prognostic or predictive models to enhance the outcomes of individuals with CD."} +{"text": "Connection between the duplication of the middle cerebral artery (DMCA) and the presence of multiple aneurysms has been described in a small number of cases.The presence of a rare type of DMCA associated with cerebral aneurysms was diagnosed in 56\u2009year old woman after a rupture of an aneurysm on the dorsal segment of the DMCA. .. The presence of equal diameters of branches of the DMCA and anterior cerebral artery (ACA) could be recorded as trifurcation of the carotid internal artery (ICA). However, due to the anastomosis of the DMCA branches in the area of the M2 segment, the recorded anatomical change represented a segmental duplication of MCA. Three aneurysms that were directly related to the segmental DMCA were diagnosed.Anatomical variation by type of segmental DMCA is a rare subtype of DMCA. 
The presence of multiple aneurysms associated with this type of anatomical variation in MCA indicates their high hemodynamic instability. Anatomical variations of the middle cerebral artery (MCA) are significantly less common than those of other intracranial arteries , 2. The The presence of aneurysms directly related to anatomic variation by type of DMCA is rarely described, and they have an extremely high frequency of rupture , 9.A 56-year-old woman was admitted as an emergency, after a sudden loss of consciousness. The multislice CT (MSCT) of the brain performed in a local medical centre showed the presence of intracerebral hematoma 7x4cm in the temporobasal region, with the penetration of blood into the venticular system Fig.\u00a0. On admiThe DMCA, as we know, is classified into two types. Type A is diagnosed when branches are separated from the very tip of the ICA together with the ACA, and Type B is diagnosed when one branch occurs between the top of the ICA and the onset of AchA , 6. The The association between the duplicated MCA and brain aneurysms has been described in 32 cases. Of these, 65% were detected after previous rupture of the brain aneurysm (9). If we know that the presence of cerebral aneurysms in the general population is between 5 and 9% and their rupture occurs in 10 out of 100,000 [The specificity of our case is that the segmental DMCA is associated with the presence of multiple brain aneurysms that are directly related to the anatomical variation itself. A ruptured aneurysm located in the middle segment of the dorsal branches DMCA indicates an extraordinary haemodynamic load in that area. Providing adequate perfusion in the parts of the brain supplying MCA through two anomalous blood vessels directly leads to increased hemodynamic stress on their walls. In addition, the wall structure of the anomalous DMCA segments will have greater sensitivity to increased perfusion requirements, which in the first part leads to aneurysmal extensions, and later to their rupture.The association between segmental DMCA with multiple aneurysms and their rupture points to high hemodynamic instability of such anatomical variation."} +{"text": "Surprisingly we remain ignorant of the function of the majority of genes in the human and mouse genomes. The dark genome is a major obstacle to the interpretation of the function of human genetic variation and its impact on disease. At the same time, pleiotropy, how individual variants influence multiple phenotypes, is key to understanding gene function and the role of genes and genetic networks in disease systems. Both understanding the genetics of disease and developing new therapeutic approaches and advances in precision medicine are all compromised by our limited knowledge of gene function and pleiotropic effects. Illuminating the dark genome and revealing pleiotropy across the genome requires a highly coordinated and international effort to acquire and analyse high-dimensional phenotype data from model organisms. We describe briefly how the International Mouse Phenotyping Consortium is addressing these challenges and the novel features of the pleiotropic landscape that are revealed by functional genomics programmes at genome-wide scale. 
Numerous studies have employed GWAS and genome sequencing approaches to identify loci involved with a wide variety of human traits and diseases is building a catalogue of mammalian gene functions by generating and phenotyping a knockout mouse line for every protein-coding gene enable an unprecedented view of the mammalian genome landscape, particularly in revealing novel loci, many of them thus far unstudied, associated with various disease states (Meehan et al. The plethora of new genetic disease models generated by IMPC and others, together with knowledge on basic gene function and pleiotropy, will inform and underpin studies on rare diseases and Mendelian disorders, illuminate GWAS studies and ultimately help provide a more profound understanding of the function of human genetic variation and its involvement in disease. The study of the dark genome alongside the generation of a comprehensive map of the pleiotropic functions of all genes is critical to informing a deeper\u00a0understanding of the mammalian genome. Future advances in genomic and precision medicine will depend upon the success of this endeavour."} +{"text": "The torcular Herophili is formed by the joining of the straight sinus, superior sagittal sinus, and transverse sinus. The anatomic configuration of the torcular Herophili is highly variable. In the current literature, classification systems define up to nine subtypes of the torcular Herophili. The frequency of prevalence of these anatomical variants is also variable. Herein\u00a0is a case report of a circularly-shaped torcular Herophili found during cadaveric dissection. The confluence of sinuses also called the torcular Herophili lies near the internal occipital protuberance and receives venous drainage from various regional dural venous sinuses . ClassicDuring the routine dissection of an adult male cadaver aged 87 years at death, an unusual arrangement of the dural venous sinuses was identified Figure . This spRecently, Matsuda et al. reexamined anatomical variations of the torcular, focusing on venous flow and the continuity of the superior sagittal and transverse sinuses. They reported that venous flow from the superior sagittal sinus to the transverse sinus could be either symmetric or asymmetric . The finRecently, a circular variant of the torcular was reported on magnetic resonance imaging from a patient suffering from chronic headaches and questionable papilledema . In the The embryology of the dural sinuses further elucidates the genesis of anatomical variants. The torcular Herophili is an area of interest for neurosurgical and interventional procedures. Given the high variability of the region, an awareness of normal anatomy and variations such as seen in the case presented herein is crucial for preoperative planning and during the interpretation of cranial imaging."} +{"text": "Editorial on the Research TopicEditorial: Intraoperative Radiotherapy (IORT)\u2014A New Frontier for Personalized Medicine as Adjuvant Treatment and Treatment of Locally Recurrent Advanced MalignancyIntraoperative radiotherapy (IORT) is a treatment delivery technique with reports starting in the early twentieth century with the use of orthovoltage energy with limited applicability due to the energy characteristics . This tevia tumor bed devascularization, elimination of inter-fraction tumor cell repopulation, and possibly providing a systemic immune effect (There are numerous advantages of IORT in oncology. 
IORT has the benefit of delivering a tumoricidal radiation dose in a single treatment, while targeting the therapy to the region of highest risk of disease recurrence with direct visualization in the operating room. This provides a high relative biological effectiveness while limiting dose to normal tissue e effect . In addiSethi et al. describe the technical and dosimetric considerations for the various applicators now available to treat patients with disease intraoperatively in various locations including with flat, spherical, and even a needle applicator. Valente et al. discuss their experience with IORT from the surgical perspective and how their group decreased operative times in patients receiving breast IORT with increased utilization. Paunesku and Woloschak provide a review of the history of IORT as well as an engaging discussion of how IORT can be used in the future. Herskind et al. extend this discussion into the theoretical usage of large radiation fraction size in brain metastasis and the potential combination with immunotherapy. The series then reviews the use of IORT in various malignancies, including head and neck cancer, pancreas cancer, and brain metastasis. Three articles finally review the use of IORT in breast cancer a highly prevalent cancer with numerous radiation treatment options available. Jacobson and Sochi provide a review of the various types of partial breast therapy and the toxicities associated. Chin et al. describe their experience using IORT for patients with prior thoracic radiation exposure. Harris and Small provide a comprehensive review of the data in support of the use of breast IORT as well as the toxicities, cosmesis, and quality of life with use of this treatment modality touching on both the use of electron and photon-based IORT.In this series we focus on the use, radiobiology, and physics of IORT with an emphasis on the Intrabeam system. The Organization for Economic Cooperation and Development evaluated the spending, supply, utilization, and price of health care across 13 high income countries and found that as a percent of GDP from 1980 to 2013 health care spending is approximately 17% in the United States versus 10% in the other countries evaluated . In the Both authors contributed to the editorial.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Non-polio enteroviruses are emerging viruses known to cause outbreaks of polio-like infections in different parts of the world with several cases already reported in Asia Pacific, Europe and in United States of America. These outbreaks normally result in overstretching of health facilities as well as death in children under the age of five. Most of these infections are usually self-limiting except for the neurological complications associated with human enterovirus A 71 (EV-A71). The infection dynamics of these viruses have not been fully understood, with most inferences made from previous studies conducted with poliovirus.Non-poliovirus enteroviral infections are responsible for major outbreaks of hand, foot and mouth disease (HFMD) often associated with neurological complications and severe respiratory diseases. The myriad of disease presentations observed so far in children calls for an urgent need to fully elucidate the replication processes of these viruses. 
There are concerted efforts from different research groups to fully map out the role of human host factors in the replication cycle of these viral infections. Understanding the interaction between viral proteins and human host factors will unravel important insights on the lifecycle of this groups of viruses.This review provides the latest update on the interplay between human host factors/processes and non-polio enteroviruses (NPEV). We focus on the interactions involved in viral attachment, entry, internalization, uncoating, replication, virion assembly and eventual egress of the NPEV from the infected cells. We emphasize on the virus- human host interplay and highlight existing knowledge gaps that needs further studies. Understanding the NPEV-human host factors interactions will be key in the design and development of vaccines as well as antivirals against enteroviral infections. Dissecting the role of human host factors during NPEV infection cycle will provide a clear picture of how NPEVs usurp the human cellular processes to establish an efficient infection. This will be a boost to the drug and vaccine development against enteroviruses which will be key in control and eventual elimination of the viral infections. Enterovirus (consisting of 15 species); family Picornaviridae pyrimidines have also been evaluated against enteroviruses; CV-B3 and EV-A71 virus infections where they inhibited their infections but the exact mechanism was not established . More reNatural products have recently gained much interest in drug development studies. Of these; plant secondary metabolites; flavonoids have been of interest in drug therapy screens against viral infections given that they are freely available and form better part of human dietary. Screening of plant metabolites for possible use as antiviral therapy has been reported as reviewed by Zakaryan and colleagues and theiLittle success has been achieved in terms of antiviral therapy against enteroviruses. Given that drug discovery process is an expensive and time-consuming venture, most researchers have relied on the FDA approved drugs or drugs that are already in use for possible re-purposing. Not much success on drug therapy has been recorded in viral infections due to the high mutation rates observed during viral replication. Combination therapy of the drugs with different mode of actions targeting different stages of viral infections would be an alternative in targeting different stages of the enteroviral infection cycle. This will only be achieved with a complete map out of the human host factors hijacked by these viruses during infections. Thus, there is need for continued elucidation of molecular mechanisms of the already postulated viral targets as well as identifying other underlying factors and process. Vaccines have shown much success against viral infections and the success story of vaccination against poliovirus infection in the world which is a picornavirus; points for the need of continued studies towards identifying vaccine candidates against the enteroviral infections. With outbreaks of enteroviruses being recorded in different parts of the world, if not checked they might have a potential threat to the global health; just soon after near- eradication of poliovirus infection.The emergence of outbreaks of enteroviral infections in different parts of the world point to the need of mapping all the host factors involved in the infection paradigm. 
Given that viruses need host factors in every step of their infection, from attachment, entry and replication to virion assembly and eventual egress, there is a need to elucidate all the host factors involved for an improved understanding of the molecular dynamics of enteroviral infections. This will be a big boost towards the long overdue antiviral and vaccine development against these epidemiologically important viruses. Much remains to be elucidated on the formation of the NPEV replication complex, as the existing mechanisms do not wholly explain the processes and steps involved in this important stage of viral replication. The nuclear host factors involved in enteroviral replication also need to be fully described, as this is a vital step in maintaining viral replication and the eventual life cycle. Viral entry studies need to be carried out, as the known receptors and viral entry requirements do not fully explain the myriad of disease features observed during viral infections. The role of cellular processes such as autophagy, apoptosis, necroptosis and pyroptosis, as well as post-translational modifications, in enteroviral infections also needs to be fully elucidated. This will be specifically important in explaining the little-known stages of viral infection, such as non-lytic egress, which allow a continuous viral cycle within the host. The paucity of information on the infection dynamics of these viruses calls for concerted efforts to elucidate the viral-human cell interactions. There is still a lot to be investigated to fill the gaps that exist on the life cycle of non-polio enteroviruses. With new cases emerging in different parts of the world, it is only a matter of time before a global outbreak of non-poliovirus enteroviral infections occurs. There is also an urgent need for further studies, especially in the field of vaccine development as well as antiviral therapy against enteroviruses."} +{"text": "Asbestos is classified as a hazardous pollutant among the airborne particles that cause diseases such as lung fibrosis (asbestosis). This protocol describes an integrated method for determination of asbestos fibre concentrations and their temporal-spatial trends in the air of urban areas. To do this, 60 samples were gathered from various areas of Yazd city with low, moderate and high traffic. For analysis of asbestos fibres in the samples, scanning electron microscopy (SEM) and energy-dispersive X-ray (EDX) spectroscopy were utilized. The spatial and temporal variation of asbestos fibre concentrations was analysed in ArcGIS 10, and the Inverse Distance Weighting (IDW) method was used to draw asbestos fibre distribution maps. The IDW interpolation indicated that the distributions of the fibres in summer and winter followed almost the same pattern. However, the asbestos fibre concentrations along the southeast-to-northwest axis of the city were higher than those in the other areas due to high vehicular traffic. Yazd city, at an altitude of 1230 m above sea level and located in the center of Iran, was used as the study area. The geometric mean of asbestos fibre concentrations at the sampling stations is reported in the accompanying data. The morphology and chemical composition of asbestos fibres obtained by EDX showed that about 70% of all identified fibres were asbestos and 30% were non-asbestos fibres.
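Returning to the spatial analysis step described earlier in this protocol, the sketch below is a minimal, self-contained illustration of how Inverse Distance Weighting predicts a concentration surface from scattered station measurements. It is not the ArcGIS 10 workflow used in the study, and the station coordinates, concentration values and power parameter are hypothetical.

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
    """Inverse Distance Weighting: each station contributes with weight 1/d**power."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power              # eps avoids division by zero at a station
    return (w * z_known).sum(axis=1) / w.sum(axis=1)

# Hypothetical sampling stations (x, y in km) and fibre concentrations (fibres/mL)
stations = np.array([[0.0, 0.0], [1.5, 0.3], [0.8, 2.1], [2.4, 1.7]])
conc = np.array([0.012, 0.031, 0.018, 0.044])

# Regular grid covering the (assumed) study area
gx, gy = np.meshgrid(np.linspace(0.0, 2.5, 6), np.linspace(0.0, 2.5, 6))
grid = np.column_stack([gx.ravel(), gy.ravel()])

surface = idw(stations, conc, grid, power=2.0).reshape(gx.shape)
print(np.round(surface, 4))                   # interpolated concentration surface
```

A smaller power parameter gives a smoother surface, while larger values make each prediction resemble its nearest station more closely; seasonal maps can be produced by running the same interpolation separately on the summer and winter data.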
For example, the specific peaks of the elements such as silica, magnesium, calcium, and iron correspond to the chemical properties of the tremolite type of asbestos fibres . Also, t"} +{"text": "The disappearance of the soft-bodied Ediacara biota at the Ediacaran\u2013Cambrian boundary potentially represents the earliest mass extinction of complex life, although the precise driver(s) of this extinction remain unresolved. The \u2018biotic replacement\u2019 model proposes that an evolutionary radiation of metazoan ecosystem engineers in the latest Ediacaran profoundly altered marine palaeoenvironments, resulting in the extinction of Ediacara biota and setting the stage for the subsequent Cambrian Explosion. However, metazoan ecosystem engineering across the Ediacaran\u2013Cambrian transition has yet to be quantified. Here, we test this key tenet of the biotic replacement model by characterizing the intensity of metazoan bioturbation and ecosystem engineering in trace fossil assemblages throughout the latest Ediacaran Nama Group in southern Namibia. The results illustrate a dramatic increase in both bioturbation and ecosystem engineering intensity in the latest Ediacaran, prior to the Cambrian boundary. Moreover, our analyses demonstrate that the highest-impact ecosystem engineering behaviours were present well before the onset of the Cambrian. These data provide the first support for a fundamental prediction of the biotic replacement model, and evidence for a direct link between the early evolution of ecosystem engineering and the extinction of the Ediacara biota. Untangca 550 Ma) and the Ediacaran\u2013Cambrian boundary [In summary, our study provides the first robust test for a key prediction of the biotic replacement model for the Ediacaran\u2013Cambrian transition\u2014namely, an increase in metazoan ecosystem engineering in the latest Ediacaran. Our trace fossil data from the Nama Group of southern Namibia illustrate a gradual increase in bioturbation intensity throughout the Nama Group, but a dramatic increase in the EEI of trace fossils in the latest Ediacaran. These increases in diversity and EEIs pre-date or are at least contemporaneous with the appearance of low-diversity and potentially ecologically stressed communities of soft-bodied Ediacara biota in the same basin ,28 and aboundary \u201373. TherFinally, our findings highlight that further work is needed to spatially constrain Nama Group trace fossils in the context of the contemporaneous soft-bodied Ediacara biota, to understand changes in bioturbation during the Avalon and White Sea assemblages and to determine a precise biogeochemical mechanism(s) associated with bioturbation that could have led to, or arisen from, environmental changes. Despite this, our study provides a link between ecosystem engineering and the extinction of the Ediacara biota, provides insights into the evolution of bioturbation prior to the Cambrian and illustrates that early metazoans were capable of achieving levels of ecosystem engineering approaching those that appear in the earliest Cambrian by the latest Ediacaran."} +{"text": "The role of the cytochrome P450 superfamily of heme-thiolate enzymes in oxidative metabolism is discussed in the context of evolutionary development. Concordances between the rise in atmospheric oxygen content, elaboration of the P450 phylogenetic tree and the accepted timescale for the emergence of animal phyla are described. 
The unique ability of the P450 monooxygenase system to activate molecular oxygen via the consecutive input of two reducing equivalents is explored, such that the possibility of oxygen radical generation and its toxic consequences can be explained in mechanistic terms, together with an appreciation of the ways in which this oxygen-activating ability has been utilized by evolving biological systems in their adaptation to an increasing atmospheric oxygen concentration over the past two billion years."} +{"text": "This Special Issue on lung diseases is aimed at giving emergent researchers and clinicians an important forum to share their original research and expert reviews on key topics within respiratory diseases. This Special Issue will be of interest to general physicians and respiratory specialists and will equip the reader with up-to-date knowledge on a wide array of lung diseases, including interstitial lung diseases, COPD, and asthma. Our objectives are to showcase emergent researchers working at the cutting edge of scientific and clinical advances in the broad field of respiratory diseases. Lung diseases are amongst the leading causes of mortality worldwide. One-sixth of total deaths are attributed to lung disease, accounting for over 9.5 million deaths worldwide. This burden of disease is the prime reason for this timely Special Issue on lung diseases in the Journal of Clinical Medicine. Here, we provide the reader with comprehensive reviews that would interest those with general medical and specialist respiratory interests. We have two reviews covering chronic obstructive pulmonary disease (COPD), the leading cause of respiratory mortality worldwide, including the contribution by Dey et al. This Special Issue on lung diseases also presents a number of primary research articles spanning all aspects of respiratory diseases, showcasing emergent researchers at the forefront of advances in respiratory medicine. We present a variety of themes, from novel aspects of lung physiology testing in pulmonary hypertension to the prevalence of fatigue in asthmatic patients and the use of macrolide therapies during hospitalisation of children with childhood disease, amongst other interesting primary research articles, which would be of interest to our growing number of readers of the Journal of Clinical Medicine."} +{"text": "Miniaturized and integrated analytical devices, including chemical sensors, are at the forefront of modern analytical chemistry. The construction of novel analytical tools takes advantage of contemporary micro- and nanotechnologies, as well as materials science and technology. Two electrochemical techniques were used in the experiments: electrochemical impedance spectroscopy and cyclic voltammetry. The goal of this study was to investigate electron transfer resistance in a model solution containing the Fe2+/3+ redox couple. Over the years, many methods have been invented or used in the investigation of protein layers immobilized on a surface. The most commonly used are techniques based on immunological and fluorescent tests. Presently, the most popular method is the Enzyme-Linked Immunosorbent Assay (ELISA). Electrochemical impedance spectroscopy (EIS) and cyclic voltammetry (CV) are electrical techniques in which electrochemical cells are used. These cells consist of three individual electrodes placed inside a vessel. In EIS, the electrical response of the investigated system to a small-amplitude periodic alternating voltage (AC) signal is measured.
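To make the EIS measurement principle concrete, the sketch below simulates the impedance spectrum of a generic Randles-type equivalent circuit, a solution resistance in series with a charge-transfer resistance in parallel with a constant phase element, over the 1 Hz to 100 kHz range used in this work. The circuit topology and every parameter value are assumptions chosen for illustration; they are not the fitted values reported for the integrated cells.

```python
import numpy as np

# Hypothetical equivalent-circuit parameters (for illustration only)
R_s   = 150.0      # solution resistance, ohm
R_ct  = 2.0e4      # charge-transfer (electron transfer) resistance, ohm
Q     = 2.0e-6     # CPE coefficient, S*s^n
n_cpe = 0.9        # CPE exponent (1.0 would correspond to an ideal capacitor)

f = np.logspace(0, 5, 61)                         # 1 Hz ... 100 kHz
w = 2.0 * np.pi * f
z_cpe = 1.0 / (Q * (1j * w) ** n_cpe)             # constant phase element impedance
z = R_s + 1.0 / (1.0 / R_ct + 1.0 / z_cpe)        # series resistance plus parallel R_ct/CPE

for fi, zi in zip(f[::12], z[::12]):
    print(f"{fi:9.1f} Hz   |Z| = {abs(zi):10.1f} ohm   phase = {np.degrees(np.angle(zi)):6.1f} deg")
```

In a model of this kind it is the fitted charge-transfer resistance that changes when, for example, a protein layer forms on the working electrode, and a quantity of that type is what is tracked later in this study as the indicator of adsorption.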
The anaIntegration of the electrochemical cell electrodes on the surface of the common substrate allows for the cell miniaturization and facilitates the increase of the number of simultaneously conducted experiments in multiplexed measurement system. Integrated electrochemical cells can be fabricated on various substrates such as silicon , Low TemIntegrated electrochemical cells (IEC) used in our measurements were fabricated using LTCC technology. The devices made using LTCC technology can be characterized by their chemical inactivity, hermeticity, high reliability and high temperature stability. This technology allows fabrication of microfluidic systems such as flow sensors, micropumps, microvalves, micromixers, microreactors and polymerase chain reaction (PCR) devices ,18,19.Electrochemical cells fabricated using LTCC technology were used in many chemical and biological measurements such as devices for the detection of cortisol , heavy mIn this work, the preliminary experiments with the integrated electrochemical cells fabricated in the LTCC technology were conducted. The results of the test measurements with model solution containing IEC were fabricated using LTCC technology. Schematic drawing depicting the LTCC manufacturing process steps is shown in The IECs used in the experiments presented in this paper were made on rectangular substrate (20 \u00d7 3.5 mm) and were formed using three layers of green tape . Nine types of IECs were fabricated with electrodes varying in size which diFabricated IECs are meant to be used in the protein adsorption measurements done in the presence of the surface . Results surface ,25. It a surface .The eight-channel potentiostat of our own design was used in the measurements. It was designed as the EIS extension of the IMP-STM32 impedance analyzer . Its simOur system was designed to be fully compatible with the integrated electrochemical cells. IECs dimensions and contact pads placements were designed in such way that they fit into typical micro-USB connectors. The measurement head connected to the multichannel potentiostat has eight such connectors and may be placed on top of the titrate plate with sensors positioned vertically in eight of 24 wells. Impedance analyzer IMP-STM32 was used in the measurements and the range of the frequency sweep was from 1 Hz to 100 kHz. The measurement system is shown in the Q (expressed in n (dimensionless) are the parameters and The EIS spectra were analyzed using the Electrical Equivalent Circuit (EEC) method with the circuit shown in requency . The infThe SEM images of the IEC electrodes are shown in The preliminary tests of the Ag/AgCl reference electrode on LTCC ceramic formed using both highly concentrated NaCl solution and elecThe Bode and Nyquist graphs \u2014dots shoAll types of the IECs were tested using EIS in the experiments which were carried out in the PBS solution with the presence of various concentrations of cFe2+/3+ . It can The influence of the electrode geometry on the components of the Equivalent Electrical Circuit is shown in Protein adsorption to the WE surface was investigated using EIS for five type of sensors (k calculated as:It was observed that the obtained value of the electrical double layer capacitance The CV experiments were conducted in two steps. 
In the first step the cyclic voltammograms for five type of sensors (Cyclic voltammetry is a simple and easy means of showing the changes of electrode behavior after each assemble step and CV experiments further confirmed that the BSA was successfully adsorbed on gold WE surface. The results of experiments of k for the same reason as mentioned in The normalized cyclic voltammograms of Miniature integrated electrochemical cells fabricated using LTCC technology were used in measurement with model solution containing The results of the presented preliminary work confirmed that the IECs were used in the measurement with the presence of"} +{"text": "Hort and Simpson and in the SMGL supplement.See related articles by On behalf of the Saving Mothers, Giving Life (SMGL) Technical Working Group, we would like to thank Holt and SimpsonWe agree with the authors on the importance of bringing attention to the valuable experiences and lessons learned during the implementation and course of the SMGL activities, including the activities directly related to health systems strengthening (HSS). We generally agree with the points raised regarding a lesser focus on the evaluation of the process of implementation of the SMGL initiative in the supplement in favor of highlighting the outcomes and impacts of the initiative on maternal and newborn health. However, we beg to differ that the implementation experiences of SGML are \u201cnot well documented.\u201dThe articles constituting the supplement do not represent an exhaustive account of all aspects of the SMGL initiative. The select articles published in the supplement have largely focused on the outcomes of the initiative at its conclusion after 5 years of implementation. They add to already published accounts about the initial planning, implementation, and monitoring and evaluation of the SMGL interventions, including a comprehensive external evaluation of inputs and processes undertaken during the first year of the initiative.In the context of describing extraordinary, effective, multisectoral, and large-scale interventions that reduced maternal mortality in the SMGL-supported districts, the supplement includes numerous examples of HSS. The articles describing the comprehensive district system strengthening approaches that led to reductions in the \u201cThree Delays\u201d give ample details about strategies employed at the individual, community, health facility, and district levels. Successes and challenges to implementation of these strategies and increased accountability demanded by the initiative are thoroughly documented.The fact that the authors were able to identify examples in the supplement to discuss SMGL\u2019s successes at the macro, meso, and micro levels attests to the wealth of implementation details provided by the articles in the supplement. We echo the value of examining HSS through a more structured and formalized lens, though \u201cthere is little consensus on what health systems strengthening (HSS) entails, what the drivers of successful HSS initiatives are, and how they can be measured.\u201dWe thank Holt and Simpson for noting the importance of HSS in global health programming and research and recognize the value of continuing to share the experiences and lessons learned during the course of planning, implementing, and evaluating the SMGL initiative. We echo their thoughts on the need for future research to tease out more firmly the critical components of HSS in Uganda and Zambia. 
There is a wealth of qualitative and quantitative evidence that captured these crucial experiences. Continued analyses and documentation of these aspects may include bringing forward country- and district-level insights and experiences of the SMGL initiative related to HSS. Uganda and Zambia have already embarked on a road of scaling up components of the SMGL model. Policy makers and program managers in other low- and middle-income settings where similar approaches could be used to rapidly reduce maternal mortality may greatly benefit from learning about the SMGL\u2019s role in improving health systems."} +{"text": "We read with great interest the review article written by Bibb\u00f2 et al. entitledNASH has become one of the most common causes of liver disease in industrialized countries and is cConsidering that gut microbiota dysbiosis is a driver of inflammation in the development of NAFLD, the reverse modulation of intestinal dysbiosis may alter the disease process. There is emerging interest in the modulation of gut microbiota to induce benefits in inflammatory intestinal disorders, such as probiotic use, antibiotic treatment, and fecal microbiota transplantation. Although diet can significantly influence the composition of gut microbiota, clinical trials investigating the effects of dietary interventions on the gut microbiota of NAFLD patients are lacking. To ascertain the exact mechanisms of action of gut microbiota in NAFLD, additional human studies with larger patient populations and animal studies are needed. Unraveling the relationship between gut microbiota and the development of NAFLD may then allow for the identification of relevant targets for future therapeutic intervention."} +{"text": "The health state of rotating machinery directly affects the overall performance of the mechanical system. The monitoring of the operation condition is very important to reduce the downtime and improve the production efficiency. This paper presents a novel rotating machinery fault diagnosis method based on the improved multiscale amplitude-aware permutation entropy (IMAAPE) and the multiclass relevance vector machine (mRVM) to provide the necessary information for maintenance decisions. Once the fault occurs, the vibration amplitude and frequency of rotating machinery obviously changes and therefore, the vibration signal contains a considerable amount of fault information. In order to effectively extract the fault features from the vibration signals, the intrinsic time-scale decomposition (ITD) was used to highlight the fault characteristics of the vibration signal by extracting the optimum proper rotation (PR) component. Subsequently, the IMAAPE was utilized to realize the fault feature extraction from the PR component. In the IMAAPE algorithm, the coarse-graining procedures in the multi-scale analysis were improved and the stability of fault feature extraction was promoted. The coarse-grained time series of vibration signals at different time scales were firstly obtained, and the sensitivity of the amplitude-aware permutation entropy (AAPE) to signal amplitude and frequency was adopted to realize the fault feature extraction of coarse-grained time series. The multi-classifier based on the mRVM was established by the fault feature set to identify the fault type and analyze the fault severity of rotating machinery. 
In order to demonstrate the effectiveness and feasibility of the proposed method, the experimental datasets of the rolling bearing and gearbox were used to verify the proposed fault diagnosis method respectively. The experimental results show that the proposed method can be applied to the fault type identification and the fault severity analysis of rotating machinery with high accuracy. Rotating machinery is one of the most common mechanical equipment, which plays an important role in industrial applications. It generally operates under tough working environments, which can eventually result in mechanical breakdown that lead to high maintenance costs, severe financial losses, and safety concerns ,2. As roThe vibration signal is widely used in fault diagnosis of rotating machinery because it is easy to collect and monitor online. For example, when there is a local fault in the running process of the rolling bearing, each contact causes an instantaneous shock and stimulates the rolling bearing to conduct high-frequency free vibration attenuation according to its inherent frequency. The instantaneous impact caused by the failure has obvious periodicity, the impact frequency depends on the bearing speed, and the impact amplitude depends on the bearing fault size. Therefore, the impact characteristics caused by local damage should be extracted by signal analysis technology and then, fault identification should be conducted by the artificial classifier.As rotating machinery works in the industrial environment, its vibration signal often contains the inherent vibration signal of rotating machinery, the fault impact signal and background noise. The vibration signals collected by the accelerometers have the characteristics of non-linearity, non-stationarity and impact . TherefoDue to the nonlinearity and non-stationarity of the vibration signals of rotating machinery a time-frequency analysis method is often used to solve the problem of feature extraction of the vibration signals of rotating machinery. The fast Fourier transform (FFT) is a classical time-frequency analysis method, but it is only suitable for solving the problem of stationary signal analysis. The wavelet transform (WT) is also a classical time-frequency analysis method that can preset the time and frequency window of interest. However, the WT is not an adaptive signal decomposition method, and it requires the kernel function and its parameters to be set in advance. The wavelet packet transform (WPT) can select the frequency resolution and the WPT is more flexible than the WT. However, the WPT is still not an adaptive time-frequency analysis method. The empirical mode decomposition (EMD) is a self-adaptive time-frequency method that can adaptively decompose the vibration signal into a set of intrinsic mode functions (IMFs) that contain the amplitude and frequency characteristics. However, the EMD has the end effect and mode mixing problem in that the stability of the IMFs is poor, which affects the subsequent feature extraction process. In order to solve the problems existing in EMD, EEMD and a complete ensemble empirical mode decomposition (CEEMD) are proposed .In recent years, due to the fact that fault information contained in the vibration signals can be extracted more effectively at different time scales, a large number of scholars have applied a multiscale entropy (MSE) algorithm and its variants to fault feature extraction of rotating machinery ,11. 
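To make the coarse-graining operation at the heart of such multiscale entropy methods concrete, the sketch below implements the classical non-overlapping averaging for a scale factor tau alongside a moving-average variant. Because the improved coarse-graining used later in this paper is described only qualitatively, the moving-average version is offered purely as an assumed example of one common refinement, not as the authors' exact procedure, and the toy signal and sampling rate are invented.

```python
import numpy as np

def coarse_grain_classic(x, tau):
    """Classical MSE coarse-graining: mean of consecutive, non-overlapping blocks of length tau."""
    n_blocks = len(x) // tau
    return x[: n_blocks * tau].reshape(n_blocks, tau).mean(axis=1)

def coarse_grain_moving(x, tau):
    """Overlapping (moving-average) coarse-graining: one averaged point per starting index."""
    return np.convolve(x, np.ones(tau) / tau, mode="valid")

# Toy vibration-like signal: a tone plus noise (illustrative only, assumed 12 kHz sampling)
rng = np.random.default_rng(0)
t = np.arange(2048) / 12_000.0
x = np.sin(2.0 * np.pi * 1600.0 * t) + 0.3 * rng.standard_normal(t.size)

for tau in (1, 2, 4, 8):
    print(f"tau={tau}: classic length {len(coarse_grain_classic(x, tau))}, "
          f"moving-average length {len(coarse_grain_moving(x, tau))}")
```

The classical procedure shortens the series by a factor of tau, which tends to destabilise entropy estimates at large scales; keeping more averaged points per scale is one way such instability can be reduced.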
In aAfter the fault features are extracted from the vibration signals of rotating machinery, a high performance classifier is needed to identify the fault types and fault severity. Many artificial intelligence techniques have been adopted to realize the fault diagnosis of rotating machinery, such as the artificial neural network (ANN) , supportIn view of the above problems in fault diagnosis of rotating machinery based on the pattern recognition method, this paper presents a novel fault diagnosis method based on the improved multiscale amplitude-aware permutation entropy (IMAAPE) and the mRVM for rotating machinery. The main contributions of this paper are summarized as follows:(1) As the AAPE is very sensitive to the amplitude change of the vibration signal, the vibration of rotating machinery needs to be pre-processed before the feature extraction to minimize the interference of external noise to the vibration signal. The intrinsic time-scale decomposition (ITD) was used to decompose the vibration signal of rotating machinery into a group of proper rotation components stably, among which the optimum PR component can highlight the main time-frequency characteristics of the vibration signal so as to facilitate the subsequent fault feature extraction.(2) The performance of the AAPE improved. A fault feature extraction method of rotating machinery based on the IMAAPE is proposed for the first time. The IMAAPE improves the coarse-graining procedure in a multiscale analysis and adopts the characteristics of the AAPE sensitive to the amplitude and frequency changes of the vibration signal. The IMAAPE can calculate the AAPE values in different time scales and construct the feature vectors, which can effectively describe the fault features contained in the vibration signals of rotating machinery.(3) The mRVM multi-classifier is trained to realize fault identification and fault severity analysis of rotating machinery. In this paper, two different realization methods of the mRVM and the effect of parameter selection on the identification accuracy of rotating machinery fault types are discussed by comparing experiments.The organization of the rest of this paper is as follows. The ITD is an algorithm for the efficient and precise time-frequency-energy (TFE) analysis of signals. The ITD can decompose a complex time series into a series of proper rotation (PR) components and accurately extract the intrinsic instantaneous amplitude, frequency information and other morphological characteristics of the complex time series, which is suitable for the analysis of non-stationary and nonlinear signals .Let The main steps of the ITD algorithm are as follows:Assuming that \u03b1\u2208 .According to Equation (2) and Equation (3), the PR component The baseline signal ith PR component and ith PR component and After the ITD, the time series The entropy analysis of the time series from a single scale may lose some important information of the original signal. The multiscale entropy (MSE) was proposed by Costa M. to represent the complexity of a signal. The MSE relies on the computation of the sample entropy over a range of scales to extract the characteristic information of the complex signal in different time scales . The MSE(1) The coarse-graining procedure derives a set of time series representing the system dynamics on different time scales. 
The coarse-graining procedure for scale (2) Then, the sample entropy of each coarse-grained time series is calculated and Bandt put forward the concept of basic permutation entropy in 2002 . At presd-dimensional space to obtain the reconstruction vectors ith permutation is called as Assume the given time series However, there are two main problems in describing the complex time series by the PE. First, the traditional PE only considers the ordinal structure of a time series, but ignores the amplitude information of the corresponding elements in the time series. Second, the effect of the elements with equal amplitude on the PE value in the time series is not clearly explained. In view of these, Azami and Escudero proposed the amplitude-aware permutation entropy (AAPE) to improve the sensitivity of the PE to the amplitude and frequency of the time series. The flow chart of the AAPE algorithm is shown in Assuming that the initial value of The AAPE calculation of the time series can be expressed as follows:The traditional RVM is a binary classifier which cannot directly solve the multi-classification problem. The multiclass relevance vector machine (mRVM) effectively solves the multi-classification application problem of the traditional RVM. The basic principle of the mRVM is described below.The input training data sample set is denoted as The continuous nature of In order to ensure the sparsity of the mRVM, similar to the RVM, a normal prior distribution with mean value of 0 and variance of The regressors When category For a certain category, the posterior expectation of auxiliary variables is:For The posterior probability distribution of the prior parameters of the weight vector is:1 follows the construction process, starting with an empty sample set, gradually adding samples according to their contribution to the method, or deleting samples with a low contribution to the method. The mRVM1 has two convergence principles: conv1 and conv2. The mRVM1_conv1 follows the principle described by [1_conv2 adds the limit of the minimum number of iterations to the mRVM1_conv1. The mRVM2 follows a top-down process, first loading the entire training sample set and then removing the unnecessary samples during the training process. The mRVM2 has two convergence principles: convA and convN. For the mRVM2_convA principle, 2_convN, the number of iterations is limited to Psorakis proposed two training methods of the mRVM in the literature and the ribed by . The mRVThe main process of the proposed fault diagnosis method of rotating machinery includes signal preprocessing, fault feature extraction and fault identification. The principle of the fault diagnosis method proposed in this paper is introduced below.Due to the fault of rotating machinery, the vibration signal has impact characteristics and the impact amplitudes are obviously different with different fault severity. In order to reduce the influence of external interference on the vibration signals and highlight the fault features of the vibration signals, it is necessary to preprocess the vibration signals before the feature extraction.Although different from the time-frequency analysis method such as the EMD, EEMD and LMD, the ITD is used to highlight the major amplitude variations in the vibration signals. The ITD algorithm is adopted to decompose the vibration signal into a sum of proper rotation components, for which instantaneous frequency and amplitude, as well as a monotonic trend, are well defined. 
The ITD can effectively suppress the mode mixing and end effect. The optimum proper rotation component is selected for further fault feature extraction because it contains the most obvious fault features. The calculation process of signal preprocessing based on the ITD can be referred to In order to effectively extract fault features of the vibration signals, an improved multi-scale amplitude-aware permutation entropy (IMAAPE) algorithm is proposed in this paper. This method improves the coarse-graining procedures in a multi-scale analysis and improves the stability of the fault feature extraction. In the classical MSE algorithm, when the scale factor rocedure .Supposing that the time series to be analyzed is For each scale factor The high-performance multi-classifier can realize the fault type identification and further, a fault severity analysis of rotating machinery. The mRVM is adopted to analyze and identify the fault features of rotating machinery in this paper. After the feature extraction of the vibration signal samples with different fault types and fault severity by IMAAPE, a fault feature set is formed to model the mRVM classifier. The established mRVM classifier can identify the fault type and analyze the fault severity of rotating machinery by extracting the IMAAPE fault feature from the vibration signals.The fault diagnosis procedure of rotating machinery proposed in this paper is shown in In order to verify the feasibility and effectiveness of the fault diagnosis method of rotating machinery proposed in this paper, the rolling bearing and gearbox are taken as examples to carry out the experiments and analysis. The rolling bearing experiment adopts the famous public data set provided by Case Western Reserve University Bearing Data Center . The geaThe experimental platform designed by Case Western Reserve University Bearing Data Center is shown in A vibration signal waveform of the rolling bearing with ball elements fault under the fault diameter of 7 mils with the load 0 hp is shown in The three dimensions of the IMAAPE feature vectors for different fault types under different fault diameters with load 0 hp is shown in 1_conv1 is higher than other classification methods, and at the same time, this method has reasonable operation efficiency. Therefore, the mRVM1_conv1 was used as the multiple classifier of the rolling bearing fault diagnosis method in this paper.In order to illustrate the fault identification accuracy of the fault diagnosis method based on the IMAAPE and the mRVM proposed in this paper, the samples under different loads were used to verify the effectiveness of the proposed method. The selection of the experimental samples is shown in 1_conv1 under different fault severity with the load 0 hp is shown in The fault identification accuracy of mRVMIn this paper, the effectiveness of the different fault extraction methods combined with the mRVM classifier for the rolling bearing were compared. The experimental results are shown in The experimental platform QPZZ-II was manufactured by Jiangsu Qingpeng Diagnosis Engineering Co., Ltd. A picture of QPZZ-II is shown in The vibration signal waveforms of the gearbox in different fault conditions are shown in The vibration signal is decomposed by the ITD and the ITD decomposition results (PR components) are shown in As shown in This paper presents a novel diagnosis method for rotating machinery, which can further analyze the fault severity of rotating machinery on the basis of accurately identify the fault types. 
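As an end-to-end illustration of the feature-extraction-plus-classification pipeline described above, the sketch below builds a multiscale entropy feature vector for each signal segment and trains a classifier on it. It is deliberately simplified: ordinary permutation entropy stands in for the AAPE/IMAAPE measure, a support vector classifier stands in for the mRVM, and the segments are synthetic, so every numerical choice is an assumption rather than the configuration used in the experiments.

```python
import numpy as np
from itertools import permutations
from sklearn.svm import SVC

def perm_entropy(x, m=4, delay=1):
    """Normalised ordinal-pattern (permutation) entropy; a simplified stand-in for AAPE."""
    patterns = list(permutations(range(m)))
    counts = np.zeros(len(patterns))
    for i in range(len(x) - (m - 1) * delay):
        window = x[i : i + m * delay : delay]
        counts[patterns.index(tuple(int(k) for k in np.argsort(window)))] += 1
    p = counts[counts > 0] / counts.sum()
    return float(-np.sum(p * np.log(p)) / np.log(len(patterns)))

def multiscale_features(x, max_tau=8, m=4):
    """Entropy of the coarse-grained series at scales 1..max_tau -> feature vector."""
    feats = []
    for tau in range(1, max_tau + 1):
        cg = np.convolve(x, np.ones(tau) / tau, mode="valid")   # moving-average coarse-graining
        feats.append(perm_entropy(cg, m=m))
    return np.array(feats)

# Synthetic segments: "faulty" ones carry a strong periodic component standing in for a fault signature
rng = np.random.default_rng(1)
def segment(faulty):
    x = rng.standard_normal(1024)
    if faulty:
        x += 2.0 * np.sin(2.0 * np.pi * np.arange(1024) / 32.0)
    return x

labels = np.array([0, 1] * 30)
X = np.array([multiscale_features(segment(bool(lab))) for lab in labels])

clf = SVC(kernel="rbf", gamma="scale").fit(X[:40], labels[:40])   # stand-in for the mRVM classifier
print("held-out accuracy:", clf.score(X[40:], labels[40:]))
```

In the paper itself the feature vector is the IMAAPE value at each scale and the classifier is the mRVM trained on labelled bearing and gearbox samples; the sketch only mirrors that structure.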
The experiments were conducted to illustrate the validity and feasibility of the fault diagnosis method for rotating machinery. The following conclusions can be drawn: (1) The improved multiscale amplitude-aware permutation entropy (IMAAPE) proposed in this paper improves the coarse-graining process of the MSE, addresses the problems existing in the PE, and can effectively extract the fault information contained in the vibration signals. Moreover, compared with other fault feature extraction methods, the IMAAPE has higher execution efficiency. (2) The multiclass relevance vector machine (mRVM) is suitable for the multi-classification of rotating machinery and has high identification accuracy on the basis of a reasonable selection of kernel parameters. (3) The rolling bearing experiments and gearbox experiments show the effectiveness of the proposed method. The experimental results on the rolling bearing and gearbox show that the proposed fault diagnosis method for rotating machinery has a high fault identification accuracy of over 99%. In particular, the rolling bearing experiments show the potential application of the proposed method in fault severity analysis."} +{"text": "OBJECTIVES/SPECIFIC AIMS: The objective of this research is to determine under what conditions endpoints based on estimated glomerular filtration rate (eGFR) slope or on relatively small declines in eGFR provide valid and useful surrogate endpoints for pivotal clinical trials in chronic kidney disease (CKD) patients. METHODS/STUDY POPULATION: We consider 2 classes of surrogate endpoints. The first class includes endpoints defined by the average rate of change in eGFR during defined portions of the follow-up period of the trial, following initiation of the randomized treatment interventions. The second class includes composite endpoints defined by the time from randomization until the occurrence of a designated decline in eGFR or kidney failure. The true clinical endpoint is considered to be the time from randomization until kidney failure, irrespective of the trajectory in eGFR measurements prior to kidney failure. We apply statistical simulation to determine conditions under which alternative endpoints within the 2 classes are (1) valid surrogate endpoints, in the sense of preserving a low probability of rejecting the null hypothesis of no treatment effect on the surrogate endpoint when there is no treatment effect on the clinical endpoints, and are also (2) useful surrogate endpoints, in the sense of providing increased statistical power that allows significant reductions in sample size and/or duration of follow-up. Input parameters for the simulations include (a) characteristics of the joint distribution of the longitudinal eGFR measurements and the time to occurrence of renal failure, (b) characteristics of the short-term and long-term effects of the treatment, and (c) design parameters, including the duration of accrual and follow-up and the spacing of eGFR measurements during the follow-up period. We use joint analyses of 19 treatment comparisons across 13 previous clinical trials of CKD patients to guide the selection of input parameters for the simulations. We apply longitudinal mixed effects models for analysis of endpoints based on eGFR slope, and Cox regression for analyses of the composite time-to-event endpoints.
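To give a flavour of the type of simulation described in the methods, the sketch below generates linear eGFR trajectories with random intercepts and slopes, applies a treatment effect on slope, and compares how often the surrogate endpoint (a 40% eGFR decline) and the clinical endpoint (eGFR below 15 mL/min/1.73 m2, taken here as kidney failure) occur within follow-up. All distributions, effect sizes, visit schedules and thresholds are invented for illustration and are not the input parameters derived from the 19 treatment comparisons.

```python
import numpy as np

rng = np.random.default_rng(42)
n_per_arm = 2000
visits = np.arange(0.0, 3.0001, 0.5)          # eGFR measured every 6 months for 3 years

def simulate_arm(slope_shift):
    base  = rng.normal(45.0, 12.0, n_per_arm)                       # baseline eGFR
    slope = rng.normal(-4.0, 2.5, n_per_arm) + slope_shift          # mL/min/1.73 m2 per year
    egfr  = (base[:, None] + slope[:, None] * visits[None, :]
             + rng.normal(0.0, 3.0, (n_per_arm, visits.size)))      # visit-to-visit noise
    decline40 = (egfr <= 0.6 * base[:, None]).any(axis=1)           # surrogate: 40% decline
    failure   = (egfr <= 15.0).any(axis=1)                          # clinical: kidney failure
    return decline40.mean(), failure.mean()

for arm, shift in [("control", 0.0), ("treated", 1.5)]:
    d40, kf = simulate_arm(shift)
    print(f"{arm:8s} 40% decline: {d40:.3f}   kidney failure: {kf:.3f}")
```

In a fuller version of such a simulation the same trajectories would feed a mixed-effects slope analysis and a Cox model for the composite endpoint, and the exercise would be repeated across many parameter settings to map out where the surrogate preserves the type I error of the clinical endpoint while gaining power.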
RESULTS/ANTICIPATED RESULTS: We have previously shown that surrogate endpoints defined by eGFR declines of 30% or 40% can provide valid and useful alternative endpoints in CKD clinical trials for interventions that do not produce short-term effects on eGFR which differ from the longer-term effects of the interventions. Other factors influencing the validity and utility of these endpoints include the average baseline eGFR, the mean rate of change in eGFR, and the extent to which the size of the treatment effect depends on the patient\u2019s underling rate of eGFR decline. We will extend these results by presenting preliminary results describing conditions under which outcomes based on eGFR slope provide valid and useful alternatives to the clinical endpoint of time until occurrence of kidney failure. DISCUSSION/SIGNIFICANCE OF IMPACT: The statistical simulation strategy described in this research can be used during the design of clinical trials of chronic kidney disease to assist in the selection of endpoints that maximize savings in sample size and duration of follow-up while retaining a low risk of producing a false positive conclusion in the absence of a true effect of the treatment on the time until kidney failure."} +{"text": "Special Issue on the topic of Electron Crystallography, now available in the August 2019 issue of Acta Crystallographica, Section B, contains contributions which we hope will interest readers of IUCrJ.A IUCrJ has published 111 articles with the term \u2018electron crystallography\u2019 as one of the keywords, with these articles attracting 450 citations a year and comparable activity continuing into 2019. This level of relevant activity provides the context and justification for the more specialized, in-depth, Special Issue on Electron Crystallography now available in the August 2019 issue of Acta Crystallographica, Section B, which contains important contributions from many of the major players in the field. We therefore believe that this Special Issue will prove attractive and relevant to the readership of IUCrJ and we wish to recommend the articles therein, as described in the Guest Editors\u2019 Introduction reproduced below.Since its launch in 2014, Introduction by the Guest Editors of the Special Issue on Electron Crystallography Structure analysis of micro- and nanocrystalline materials has witnessed immense progress in the last decade thanks to the development of electron diffraction techniques. The automation of data collection, development of new data collection modes and improvements in the data treatment have allowed unprecedented progress in most aspects of crystallography dealing with very small crystals. Probably the most notable change of paradigm is observable in the structure determination of unknown phases by electron diffraction. Three-dimensional diffraction techniques now allow almost routine solution and refinement of structures from single crystals as small as a few tens of nanometres, providing access to hitherto unsolvable crystal structures or to previously unattainable levels of structural detail. Scanning diffraction techniques allow phase and orientation mapping with nanometre resolution and even three-dimensional reconstruction of phase and orientation distributions.et al., 2019et al., 2019et al., 2019et al., 2019This special issue features a collection of original contributions covering a broad range of aspects of electron crystallography. 
An interested reader will find papers describing the foundations and methodological basis of structure solution by electron diffraction (Eggeman, 2019The collection of contributions in this special issue showcases the diversity of applications of current electron diffraction techniques, demonstrates the state of the development of the technique and also features work that further advances the electron diffraction methods. We believe that this special issue can serve as a starting point for anybody interested in electron crystallography and we are convinced that the contributions in this issue will become reference points for future research in this exciting field."} +{"text": "This study attempts to establish the relationships that exist between the different variables of organizational climate and job satisfaction among academic staff in some selected private Universities in South-West Nigeria, to ascertain related factors in organizational climate that can cause dissatisfaction among academics; and to determine if there is a significant difference in the way senior academics and junior academics perceive the existing organizational climate. A total of 384 copies of questionnaires were administered to selected five (5) private Universities in the South-West Zone of Nigeria but a total of 293 questionnaires were returned fully and appropriately filled. The study made use of appropriate statistics such as measurement model and Multiple Regression to obtain results. Specifications TableValue of the Data\u2022The data can produce useful highlight on the factors that university lecturers view as enhancing job satisfaction within the organizational climate.\u2022The management of schools will find the data helpful in improving staff morale and bringing about job satisfaction of their employees.\u2022The data will be of great value in recommending policies and strategies for mitigating organizational correlates of job dissatisfaction.\u2022To help in gaining understanding that the climates of an organization and job satisfaction vary together.\u2022The questionnaire attached can be modified, adopted or adapted for further comparative researches in private and public universities and other industries aside from educational industry.1Survey method was used mainly by questionnaire to collect the data from University lecturers in Southwest Nigeria. Respondents were requested to respond to questions with self-administered and structured questionnaire. The researcher utilized one structured questionnaire for both the senior academics and junior academics. This was presented personally to all respondents by the researcher in the sampled universities. This enhanced uniformity of response bearing in mind the degree of variations in perception of what the organizational climate may be referred to The study populations from which the sample was drawn consist of eighteen (18) private universities in the Southwest Nigeria. Out of these private universities, five (5) were taken as the study sample through judgmental sampling method and questionnaires were administered to the academic staff ranging from the Professors, Associate Professors, Senior lecturers, Lecturers 1, Lecturers 2, Assistant lecturers and Graduate Assistants. The total number of academic staff in the selected private universities is 754. 
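Because the analysis rests on multiple regression of job-satisfaction scores on organizational-climate variables, a minimal sketch of such a fit is given below. The predictor names, coefficients and responses are synthetic placeholders standing in for the coded questionnaire data, not the study's dataset.

```python
import numpy as np

# Synthetic stand-in for the coded questionnaire data: four climate predictors
# and a job-satisfaction score for the 293 returned questionnaires.
rng = np.random.default_rng(1)
n_returned, n_distributed = 293, 384
climate = rng.normal(size=(n_returned, 4))   # e.g. leadership, reward, workload, autonomy
satisfaction = climate @ np.array([0.5, 0.3, -0.2, 0.1]) + rng.normal(0.0, 0.8, n_returned)

X = np.column_stack([np.ones(n_returned), climate])   # intercept + predictors
beta, *_ = np.linalg.lstsq(X, satisfaction, rcond=None)
fitted = X @ beta
r2 = 1.0 - np.sum((satisfaction - fitted) ** 2) / np.sum((satisfaction - satisfaction.mean()) ** 2)

print("response rate: %.1f%%" % (100.0 * n_returned / n_distributed))
print("regression coefficients:", np.round(beta, 3), "| R^2 = %.3f" % r2)
```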
The private universities chosen for this study are Covenant University, Bells University of Technology, Crawford University, Babcock University and Bowen University.2The evolving competition in the higher education environment in Nigeria brought about by increase in the number of new Universities has necessitated the need for good organisational climate that will enable these Universities retain their best employees. Reports by NUC (2008) revealed that though Universities are increasing, yet the number of qualified teachers is not increasing proportionately. Thus, surveys are necessary to establish the relationships that exists between the different variables of organisational climate and job satisfaction among academic staff of selected private Universities in Southwest Nigeria.Out of 384 copies of questionnaire administered, only 293 copies of questionnaires were returned representing 76.30%. Majority of the questions used were adapted with some modifications from a job satisfaction questionnaire. Questionnaire for the study were sorted and those that were not properly filled were removed. To minimize errors, data from questionnaire were coded so as to pave way for editing of data before the use of SPSS-Statistical package for Social Sciences-software.For the purpose of efficiency and thoroughness two field assistants were recruited and trained. The training focused on the pertinent objectives and importance of the study, how to administer/conduct the study instruments and how to secure respondents\u2019 informed consent. The researchers ensured that respondents were well informed about the study and the objectives of this research and they were encouraged with the participation process. Respondents were offered the opportunity to stay anonymous and their responses were treated confidentially.Hence, this study has extensive implications for the institutions, academic staff, government, educators and researchers in this regard. It can be concluded that the success of these universities depend on the ability to impact on the motivation and job satisfaction of academic staff with a wide range of benefits to promote retention and reduce job-hopping. To this end, the data presented in this article is imperative for more comprehensive analysis as presented in"} +{"text": "This study aims to present the feasibility of the open approach of hemilevator excision (HLE) as a promising alternative of the laparoscopic and/or robotic ones for the treatment of low rectal cancer extending to the ipsilateral puborectalis muscle.A 60-year-old male patient with a high-grade differentiated rectal adenocarcinoma at the right side of the lower rectum invading puborectalis muscle. The proposed operation consists of a combination of extralevator abdomino-perineal excision (ELAPE), intersphicteric resection (ISR), and low anterior resection (LAR) since it resects the ipsilateral to tumor levator ani muscle (LAM) from its attachment at the internal obturator fascia and the deep part of ipsilateral external anal sphincter (EAS), while the distal part of dissection is completed in the intersphincteric space taking out the internal anal sphincter (IAS). At the contralateral side of the tumor, the dissection plane follows the classic route of LAR.Pathology proved the oncologic adequacy of resection. MRI at the fourth postoperative week showed clearly the right aspect of anorectal junction free of tumor. 
Anorectal manometry revealed a fair anorectal function which is in accordance with the findings of clinical assessment of patient after restoring large bowel continuity .This is the first case of the open HLE that seems to be a good alternative compared to ELAPE or conventional APR, as it offers oncologic adequacy and a fair anorectal function. The treatment of cancer of the rectal lower third has been a challenging issue over time. Back in 1908, Ernest Miles first described the abdomino-perineal excision (APE) . HoweverA 60-year-old male patient was referred to our hospital with a high-grade differentiated rectal adenocarcinoma. The pelvic MRI revealed a tumor at the lower rectum that invaded puborectalis muscle to a length of 9\u2009mm on the right side. Moreover, the CT scan proved the absence of any distant metastasis. Given the tumor location and the absence of distant metastases, the patient went through manometric evaluation of anorectal function and clinical assessment with the Wexner scale score for incontinence . MRI at the fourth postoperative week showed clearly the right aspect of anorectal junction free of tumor and the absence of ipsilateral LAM Fig.\u00a0.Fig. 3a In the earlier days of colorectal surgery for malignant tumors of the lower third of rectum, the operation of choice was the abdomino-perineal resection (APR) in which the sigmoid, the rectum, and the anus were excised leaving the levator ani muscle complex intact in both sides. In this way, the specimen resembles an hourglass due to the characteristic \u201cwaist\u201d in the middle . HoweverThis is the first attempt at Greece to perform a technique which targets the saving of anal sphincter for very low rectal cancers with extension to the puborectalis muscle. This is the first procedure with removal of puborectalis muscle and partial excision of external sphincter with preservation of anal function. This innovative procedure requires full knowledge of pelvic anatomy. The surgical team must have experience to the standard TME. This procedure is the hope for a life without colostomy for patients with these tumors. Undoubtedly, a larger number of cases is demanded to draw firm conclusions since we have to take into account that anatomic characteristics such as gender, body mass index, etc. might affect the feasibility of the procedure."} +{"text": "The paper aims to present the reconstructive surgical approach in the case of a patient with complex soft tissue lesions of the calf. The patient was the victim of a road accident resulting in the fracture of the right tibia for which screw-plate osteosynthesis was performed. The chosen therapeutic solution was represented by covering the soft tissue defects using a complex algorithm that involved the use of a reverse sural flap associated with a medial hemisoleus muscle flap and a split-thickness skin graft.Considering functional recovery and the degree of patient satisfaction, the result of the therapeutic conduct was appreciated as very good. The association of the reverse sural flap with the medial hemisoleus flap can be a solution for solving complex cases with multiple soft tissue defects located in the middle and lower third of the calf. Fractures of the calf caused by road accidents often raise problems regarding the reduction and fixation of the fragments through the use of osteosynthesis materials. 
The intensity of the traumatizing agent, the high degree of contamination, and the associated lesions represent negative prognostic factors, which must be carefully integrated into the therapeutic protocol to avoid the structural, septic, and functional complications.The material presents the case of a 60 years old polytraumatized patient, the victim of a road accident that resulted in multiple injuries including minor cranial-cerebral trauma, thoracic and abdominal trauma with multiple rib fractures and fracture of the right tibia for which reduction and fixation were performed by using a plate and screws. The patient was admitted to the hospital 12 months after the trauma with calf infections associated with the exposure of the osteosynthesis material in the middle and inferior parts of the calf and local contamination with worms.The therapeutic protocol consisted of surgical debridement in order to eliminate the worm contamination, in combination with the administration of the specific antibiotic therapy after obtaining the result from the antibiogram . The next step was the ablation of the osteosynthesis material, followed by the surgical reconstruction in one stage with a medial hemisoleus flap for the coverage of the soft-tissue defect located in the middle third of the calf and a reverse sural fasciocutaneous flap for the defect in the lower third of the calf, finally the muscular flap being covered by split-thickness skin graft.The patient was reassessed periodically for 15 months, during the consults being determined that the flaps were integrated and that the functional recovery was completed, demonstrated by the socio-professional reintegration without limitations.Eradication of the septic outbreak and worm contamination was achieved within seven days after the osteosynthesis material ablation by surgical excisional debridement, daily dressing changes with antiseptic solutions (chlorine-based compounds) and administration of a drug combination that included a third-generation cephalosporin and gentamicin.A medial hemisoleus flap was used for the coverage of the soft-tissue defect from the middle third of the calf that was associated with bone exposure . In the The soft tissue defect in the lower third of the calf was covered with a reverse sural fasciocutaneous flap, followed by grafting the donor site with a free split-thickness skin graft . The locThe hospitalization time in the plastic surgery department was 14 days, the patient being discharged during the healing process with a fully functional lower limb, subsequently being reassessed regularly for 15 months after the intervention. Postoperative follow-up showed that the flaps were fully integrated without The reconstructive options for covering the soft-tissue defects of the calf have been carefully studied within the international scientific community, the specialized literature presenting a multitude of interesting articles in this respect. The current approach is to cover the defects with flaps based on the perforating arteries, which offers structural similarity with the anatomical region to be reconstructed, in the conditions of an increased safety profile and reduced morbidity of the donor site . HoweverThe hemisoleus muscle flap represents a durable solution to cover the soft-tissue defects, especially in the lesions associated with bone exposure after the healing of a septic outbreak. 
The generous vascular component characteristic to this type of flap -6, constThe reverse sural flap represents a firm solution for the coverage of the soft-tissue defects located in the lower third of the pelvic limb , the redThe association of reconstructive techniques that involves the usage of the reverse sural flap and the hemisoleus flap is a feasible solution for solving cases that associate multiple soft tissue defects of the calf.In this case, the use of the medial hemisoleus flap for the coverage of the soft-tissue defect located in the middle third of the calf was not associated with a significant impairment of the patient\u2019s locomotor capacity, the patient being able to move normally and carry out his daily and professional activity without limitations.The association between the hemisoleus muscle flap and the reverse sural flap may be a solution for complex cases where the use of perforator flaps is not possible.Muscle flaps represent the \u201csafety belt\u201d in the cases that require complex reconstructions of the calf, often being the optimal solution for patients with complex fractures associated with neglected septic processes and soft tissue defects of considerable size.The authors confirm that there are no conflicts of interest."} +{"text": "Since understanding how the human brain generates neural commands to control muscles during motor tasks still remains an untapped question, great interest is shown in the validation and application of muscle synergies among research groups focused on the electromyography (EMG). In the last decades, the factorization of the EMG signals by means of muscle synergies has been proposed to understand the neurophysiological mechanisms related to the central nervous system ability in reducing the dimensionality of muscle control. For this reason, we planned a special issue on validation and application of the muscle synergy theory to discuss the methodological issues and to propose novel applications in clinics, robotics, and sports. The special issue achieved success among researchers as demonstrated by the large amount of submitted papers and the scientific impact of the published ones.The special issue is composed of twelve manuscripts. Three systematic reviews are included: (i) the first one is focused on the meaning of the muscle synergy theory to understand its applicability as a neurorehabilitation tool ; (ii) the second one is useful to understand the applications of muscle synergies in the investigation of muscle coordination during walking of poststroke patients ; and (iii) the third one offers a complete overview on the tangible applications of muscle synergies in clinics, robotics, and sports .As concerns clinics, the effects of upper limb weakness and task failure, which is the inability to maintain a certain level of force during a task, on the muscle synergies are evaluated by Roh et al. and Castronovo et al., respectively. As regards robotics, the feasibility to use a muscle synergy approach to implement the control system of an upper limb exoskeleton is presented by Chiavenna et al. Moving to the sports, two papers are focused on understanding the muscle synergy organization during the execution of specific technical actions of the badminton , one paper shows the muscle synergy structure involved in stability exercises of rhythmic gymnastics , while the motor control underlying the throwing movement is studied by Cruz-Ruiz et al. 
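The synergy extraction that runs through these contributions is usually performed with non-negative matrix factorization of an EMG envelope matrix; the sketch below illustrates that step with synthetic data. The muscle count, the number of synergies and the preprocessing are assumptions for the example, not values taken from any of the cited studies, and the init argument is exactly the kind of initialization choice examined in the methodological work discussed next.

```python
import numpy as np
from sklearn.decomposition import NMF

# Synthetic EMG envelope matrix (muscles x time samples); a real pipeline would
# rectify, low-pass filter and amplitude-normalize the recorded EMG first.
rng = np.random.default_rng(2)
n_muscles, n_samples, n_synergies = 12, 2000, 4
emg = rng.random((n_muscles, n_samples))

model = NMF(n_components=n_synergies, init="nndsvd", max_iter=500, random_state=0)
W = model.fit_transform(emg)     # muscle weightings (synergy vectors), 12 x 4
H = model.components_            # temporal activation coefficients, 4 x 2000

# Variance accounted for (VAF) is a common criterion for choosing the synergy number.
vaf = 1.0 - np.sum((emg - W @ H) ** 2) / np.sum(emg ** 2)
print(f"variance accounted for by {n_synergies} synergies: {vaf:.3f}")
```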
Finally, two papers investigate some fundamental methodological issues; in particular concerning the influence of initialization techniques for the application of non-negative matrix factorization and the reliability and repeatability of the methodology for extracting muscle synergies during daily life activities .We hope that this special issue can represent an important step to strengthen the use of muscle synergies to explain how the human brain organizes the muscle activation both in clinics and robotics, as well as in sports applications."} +{"text": "Following publication of the original article , the autIn this Correction the incorrect and corrected version of Fig. Originally Fig. The corrected version of Fig."} +{"text": "Peripheral neuropathies of the shoulder are common and could be related to traumatic injury, shoulder surgery, infection or tumour but usually they result from an entrapment syndrome. Imaging plays an important role to detect the underlying causes, to assess the precise topography and the severity of nerve damage. The key points concerning the imaging of nerve entrapment syndrome are the knowledge of the particular topography of the injured nerve, and the morphology as well signal modifications of the corresponding muscles. Magnetic Resonance Imaging best shows these findings, although Ultrasounds and Computed Tomography sometimes allow the diagnosis of neuropathy. Peripheral neuropathies of the shoulder are common and represent an important cause of morbidity and disability in patients. They could be related to traumatic injury, shoulder surgery, infection or tumour, but usually they result from an entrapment syndrome, a condition in which the nerve is stretched into an incompressible space .Although all the nerves of the shoulder could be affected, we will focus our topic on the major nerves that could be involved in this area, principally suprascapular nerve, followed by the axillary, musculocutaneous, spinal accessory and long thoracic nerves. The sensory and motor innervations of these nerves as well as the etiology and usual site of entrapment are summarized in Table edema signal of the denervated muscle. This finding is related to an increased extracellular free water and muscle blood volume. In acute and subacute stages, high signal intensity fluid is found in T2-weighted with fat suppression sequences associated with normal signal in T1-weighted sequences could add more information than X-ray by allowing morphologic evaluation of suprascapular and spinoglenoid notches, assessing the muscular trophicity and fatty degeneration, and showing extrinsic compression of the injured nerve by labral, mucoid cysts or tumours. Ultrasonography (US) is a useful, non-invasive modality that allows bilateral with dynamic comparison of the periscapular soft tissue. US does sometimes allow morphologic assessment of the nerves, essentially on the basis of focal thickening, and can also guide diagnosis on the basis of atrophy and fatty degeneration, characterized by a loss of muscle volume and muscle hyperechogenicity. The key points concerning the imaging of nerve entrapment syndrome are the morphology and the signal changes of the innervated muscles, best demonstrated by Magnetic Resonance Imaging (MRI). 
The earliest MRI sign of an impairment of the relationship between motor neurons and muscle fibers is the The suprascapular nerve is a mixed nerve providing motor innervation to the supraspinatus and infraspinatus muscles and sensory innervation of the coracohumeral ligament, the coracoclavicular ligament, the subacromial bursa, the acromioclavicular joint and upper and posterior glenohumeral joint. The nerve arises from the upper trunk of the plexus brachial and is formed by the ventral rami of C5 and C6 roots and occasionally from the C4 root. Then, it crosses the posterior cervical triangle in the supraclavicular fossa, deep to the omohyoid muscle Figure 23]. In3. In23].The clinical presentation of suprascapular neuropathy may be various. The pain is usually chronic and dull, in the superior and posterior shoulder often radiation to the neck or lateral arm. Weakness, loss of function and atrophy of the shoulder are other clinical manifestations.The causes of the nerve entrapment are multiple. The suprascapular notch is the most frequent point for the suprascapular nerve to be entrapped by compression or traction. In addition, the anatomic configuration of the suprascapular notch may represent a predisposing factor to the development of entrapment. Several morphologic variations in size and shape of the suprascapular notch have been reported. Rengachary et al., described six types of suprascapular notch and their incidence, based on the shape of the notch . Natsis 911Extrinsic compression by labral or muco\u00efd cyst Figure , tumour Microtraumatisms by repetitive movements, especially with overhead activities, for example in volleyball and baseball athletes, expose the nerve to tension especially with a predisposing anatomy. As the shoulder is in extreme external rotation and abduction, the muscles of supraspinatus and infraspinatus impinge upon the scapular spine compressing the motor branch of the infraspinatus, resulting to an isolated infraspinatus muscle weakness .At the level of the spinoglenoid notch, the nerve entrapment may be due to an increased tension in adduction and internal rotation with a hypertrophied inferior transverse scapular ligament .A possible association between the suprascapular nerve entrapment and extensive rotator cuff tears has been reported. The mechanism of this entrapment is based on cadaveric study that showed that the medial retraction of supraspinatus tendon stretches the nerve by changing the angle between the nerve and its first motor branch . This re16bullseye sign as an indicator of peripheral nerve constriction in Parsonage Turner syndrome [Imaging, especially MRI, plays an important role to detect the underlying causes of nerve damage , the precise topography of injury MRI is also useful to determine the severity of the nerve injury (edema and/or atrophy) and to diagnose other cause of neuropathy such as cervical radiculopathy or Parsonage Turner syndrome. This syndrome is a rare idiopathic disorder, often attributed to viral infections or immunological reaction after vaccination, characterized by a sudden onset of pain of the shoulder girdle affecting one or several nerves of brachial plexus . There isyndrome .The axillary nerve is a mixed nerve, providing motor innervation of the deltoid and teres minor muscles, supplying collateral branches to the subscapularis and coracobrachialis muscles. 
This nerve provides also sensory innervation of the shoulder joint capsule (shared with suprascapular nerve), the inferior glenohumeral ligament and the posterolateral skin of the shoulder and the arm. The nerve is the terminal branch of posterior cord of the plexus brachial and is formed by the ventral rami of C5 and C6 roots, lying superior to the radial nerve and behind the axillary artery and vein 22. Five 23The quadrilateral space, also called quadrangular space or lateral axillary hiatus, is bounded by the teres minor muscle superiorly, the upper border of the teres major muscle inferiorly, the long head of the triceps brachii medially, and the surgical neck of humerus laterally . In most22The differences in innervation between the anterior and posterior branches of the axillary nerve explain the occurrence of an isolated palsy of the deltoid or the teres minor muscles with or without sensory loss in the axillary nerve distribution .unhappy triad is the condition in which multiple peripheral nerves injury occurs in combination with rotator cuff tear. In this triad, the axillary nerve is the most commonly affected [The axillary nerve injury is one of the most commonly injured nerves during surgical procedures of the shoulder making up to 10% of all brachial plexus injuries. The anterior branch of the nerve, ascending around the surgical neck of humerus is at risk during deltoid splitting or humeral nailing or prosthetic procedures . It may nerves) .Extrinsic compression by hematoma, posteroinferior labral cyst, bone callus, tumour, and accessory subscapularis muscle are other common etiologies of axillary nerve injury 28.Chronic compression of the axillary nerve, known as quadrilateral space syndrome is a rare and misdiagnosed neurovascular syndrome. This syndrome is defined as compression or mechanical injury of the axillary nerve or posterior circumflex artery as they pass through the quadrilateral space. The clinical manifestation is various including nondermatomal neuropathic pain, numbness and weakness in the shoulder or vascular manifestations such as thrombosis, digital or hand ischemia. The electromyography is often normal. This syndrome has been reported in overhead or throwing athlete including volleyball, baseball, swimming but also in yoga or window cleaning that involve abduction and external rotation . ImagingThe musculocutaneous nerve is a mixed nerve, providing motor innervation to the coracobrachialis, biceps brachii, and brachialis muscles and sensory innervation of lateral forearm. The nerve arises from the lateral cord of the plexus brachial and is formed by the ventral rami of C5 and C6 roots and occasionally from the C4 or C7 roots. There are numerous anatomical variations. Located lateral to the median nerve and axillary artery, the nerve runs frequently downward and passes through the coracobrachialis muscle and descends obliquely between the biceps brachii and the brachialis muscles, which it innervates. Then, the nerve emerges along the lateral margin of biceps aponeurosis and continues in the forearm as the lateral antebrachial cutaneous nerve .Weakness of biceps brachii muscle and, sometimes, sensory deficit in the forearm are the main symptoms of musculocutaneous neuropathy. Isolated musculocutaneous nerve injuries are rare and the reported causes are penetrating traumas, anesthetic blocks, anterior shoulder surgery especially in coracoid abutment (Latarjet procedure), and sport related entrapment such as windsurfing, rowing or weightlifting athletes. 
Proximal nerve injury can occur when the nerve pierces the coracobrachialis muscle during violent extension of the arm in throwing athletes or by entrapment between the biceps brachii and the brachialis muscles, in forced abduction and external rotation 3031. Ima30The long thoracic nerve (Charles Bell nerve) is a pure motor nerve arising directly from the ventral rami of C5, C6, and C7 roots and occasionally from the C4 root. The roots from C5 and C6 runs downward and pierce through the scalenus medius, while the root from C7 runs over this muscle. In the supraclavicular region, the upper division of the long thoracic nerve runs parallel and posterior to the brachial plexus close to the supra- scapular nerve. In the axilla, the upper and lower portion merge and extends along the side of the thorax to the lower border of the serratus anterior, supplying filaments to each digitation of this muscle .Traumatic injuries to the nerve after motor vehicle accidents, or after falls have been reported . The ner34Imaging shows denervation signs (edema and atrophy) of the serratus anterior and may help to eliminate other causes of scapular winging. In case of multiple muscle denervation sites, imaging could suggest a Parsonage Turner syndrome.The spinal accessory nerve provides motor innervation of the trapezius and sternocleidomastoid muscles. It is a motor nerve arising from both the medulla and the spinal cord. The cranial fibers innerve the pharyngeal and laryngeal muscles and the spinal fibers arise from the anterior horn of the upper five (or six) cervical vertebra. The spinal fibers enter the posterior cranial fossa, merge with the cranial fibers and then exit the skull via the jugular foramen, along with the vagus and glossopharyngeal nerves. The nerve passes deep into the posterior belly of the digastric muscle to supply the sternocleidomastoid muscle. Then, it passes through this muscle and runs obliquely across the posterior cervical to end in the deep surface of trapezius muscle via several terminal branches to supply upper, middle and lower trapezius muscle 37.droopy shoulder characterized by the inability to raise the affected arm above the level of the shoulder.The nerve is vulnerable in the posterior triangle of the neck, during radical neck surgery, tumour dissection and cervical lymph node biopsy. Traumatic injury by direct impact or deep tissue massage and water-skiing injuries has been reported but occurs rarely. These injuries lead to a Imaging of this nerve is quite challenging but MRI shows denervation of the trapezius and sternocleidomastoid muscles Figure . In patiDiagnosis of shoulder neuropathy is difficult and challenging because of overlapping symptoms with various origins. Imaging plays an important role to detect the underlying causes to assess the precise topography and the severity of nerve damage. The key points concerning the imaging of nerve entrapment syndrome are the knowledge of the precise topography of the injured nerve and the morphological and signal change of the innervated muscle, best shown by MRI, although US and CT may allow the diagnosis of neuropathy."} +{"text": "Identification and assessment for cognitive impairment is a difficult task further complicated by the need to determine capacity. Issues related to cognitive impairment and capacity create ethical dilemmas potentially spanning all four ethical principles: autonomy, beneficence, non-malfeasance, and justice. 
This paper uses a case scenario to describe different types of cognitive impairment and to demonstrate ethical issues that commonly arise when treating patients with cognitive impairment in the clinical setting. The authors also recognize the complexity of capacity as an issue that spans both the medical and legal fields and provide explanations and distinctions. The overall goal of this paper is to raise awareness of the impact of cognitive impairment on the vulnerability of older adults, to describe the complex ethical issues that cognitive impairment and capacity raise, and to underline the importance of defining capacity in the context of the legal and medical fields."} +{"text": "Objective: The impact of substance abuse on violent behavior in patients suffering from schizophrenia is well-known. However, the association between the pattern of substance abuse and certain aspects of criminal behavior, such as the severity of the offense, the previous history of violence and the age at onset of the criminal career, is still unclear.Method: To assess the relationship between substance abuse, schizophrenia and violent behavior, we examined healthy non-offenders, healthy offenders, non-offenders suffering from schizophrenia, and offenders suffering from schizophrenia with respect to different patterns of substance abuse.Results: Healthy offenders as well as offenders and non-offenders suffering from schizophrenia are characterized by increased rates of alcohol and illicit drug abuse. Multiple substance abuse in particular appears to lower the threshold for aggressive and illegal behavior. This effect is more pronounced in subjects suffering from schizophrenia. In both offender groups the abuse of psychoactive substances is associated with an earlier onset of the criminal career, but has no impact on the severity of the offenses.Conclusion: Our results point to the need for a differentiated view of the contribution of substance abuse to the criminality of subjects suffering from schizophrenia. During the last 20 years various studies have confirmed a moderate though statistically significant association between schizophrenia and violence. However, the aforementioned association remains unclear for several reasons: Compared with the general population, subjects suffering from schizophrenia exhibit significantly higher rates of substance abuse. This primarily concerns alcohol and cannabis, while opiates and hallucinogens are of minor importance. Independent of a subject's mental health status, substance abuse is by itself a major criminogenic factor. Since 1996 the World Health Organization (WHO) has compiled the Global Alcohol Database to provide a standardized reference source of information for a global epidemiological survey of alcohol use and its related problems.
Since 1Our study addresses the following questions:Are there differences concerning the prevalence rates and patterns of substance abuse between offenders suffering from schizophrenia and non-offenders suffering from schizophrenia?Are there differences concerning the prevalence rates and patterns of substance abuse between offenders with schizophrenia and healthy offenders?Do the rates of substance abuse in these three groups differ from the rates in healthy controls?Is there an association between substance abuse and patterns of criminal behavior independent of the mental status of the offenders?In order to assess the impact of the patterns of substances abuse on criminal behavior of male patients with schizophrenia, we used a case-control-study-design with four groups.One hundred and three male offenders suffering from schizophrenia (OS) according to DSM-IV , admitteOne hundred and three healthy male offenders (OH), matched for age and severity of offense with the offenders suffering from schizophrenia.One hundred and three male non-offenders suffering from schizophrenia according to DSM-IV (NoS) matched for age and duration of illness with the offenders NGRI.One hundred and three male non-offenders without major mental disorders or personality disorders (NoH), matched for age with the other groups. The social stratification and the town/rural distribution of this group are representative for the overall population of Eastern Austria .The protocol of the study was approved by the ethic commission of the Medical University Vienna. All patients gave written informed consent in accordance with the Declaration of Helsinki.All subjects underwent the Structured Clinical Interview for DSM Disorders (SCID 1 + 2). The Inter-rater reliability tested for 25 patients was very satisfying . We distinguished three types of harmful substance abuse:AlcoholIllicit DrugsMultiple substances The assignment to social classes was done according to Kleining and Moore . The patThe frequency of substance abuse was analyzed by means of Chi-Square Tests, associations between the patterns of violent behavior and the different types of substance abuse by means of One-Way ANOVA (df = 3).The social and clinical data of the four groups are listed in Table Table Finally we compared the relationship between the different types of substance abuse and the patterns of criminal behavior of the two offender groups, which showed substantial similarities Table . By meanComparison of the four groups shows that a statistically significantly higher prevalence of substance abuse is associated with both psychiatric illness and offending behavior Table . Only 4.Non-offending patients with schizophrenia differ from the other groups by a relatively high prevalence of illicit drug abuse (25.2%) which is nearly exclusively confined to cannabis. This is in line with previous findings of studies on the use of cannabis in schizophrenic patients , 55, 56.Compared with the offenders suffering from schizophrenia, significantly fewer cases of multiple substance abuse were identified among the non-offenders suffering from schizophrenia allows to draw robust and differentiated conclusions concerning the impact of substance abuse on criminal behavior in offenders suffering from schizophrenia. However, Eisner stated tTherefore we have to take into account the globally different rates of criminality and substance abuse. As they are substantially higher in the USA compared to Europe, the results of US-studies apply only partly to European countries. 
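To illustrate the two statistical procedures named in the methods of this study (Chi-Square tests for prevalence and one-way ANOVA with df = 3 across the four groups), a minimal example follows. The counts and age values are hypothetical placeholders, not the paper's tabulated data.

```python
import numpy as np
from scipy.stats import chi2_contingency, f_oneway

# Hypothetical 4 x 2 contingency table (rows = groups, columns = no abuse / abuse).
table = np.array([[40, 63],    # offenders with schizophrenia
                  [48, 55],    # healthy offenders
                  [70, 33],    # non-offenders with schizophrenia
                  [98,  5]])   # healthy non-offenders
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.1f}, dof = {dof}, p = {p:.2g}")

# One-way ANOVA (df between groups = 3), e.g. age at onset of the criminal
# career for the three abuse types versus no abuse -- synthetic values only.
rng = np.random.default_rng(3)
no_abuse, alcohol, drugs, multiple = (rng.normal(mu, 4.0, 40)
                                      for mu in (29, 27, 25, 22))
F, p_anova = f_oneway(no_abuse, alcohol, drugs, multiple)
print(f"ANOVA: F = {F:.2f}, p = {p_anova:.2g}")
```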
Austria with its low rates of criminality and substance abuse is in line with most of the other Western European countries. As a consequence, methodologically similar studies in other regions are necessary to investigate the variations of the impact of the culture-specific types of substance consumption on the criminal behavior of patients suffering from schizophrenia.TS and HS planned the study and collected the data of the offender groups. TS did the statics and wrote the paper. KR collected the data of both non-offending groups.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "The aim of this paper is the discussion of quality control (QC) criteria for environmental monitoring of organic contaminants at trace levels in water. In addition, QC criteria in the identification and confirmation of target analytes have been considered."} +{"text": "FRC), in the absence and presence of piracetam (gHSAFRC-PIR), was performed by fluorescence quenching of macromolecules. On the basis of obtained data we concluded that under the influence of glycation, association constant (Advanced Glycation End-Products (AGEs) are created in the last step of protein glycation and can be a factor in aging and in the development or worsening of many degenerative diseases . Albumin is the most susceptible to glycation plasma protein. Modified albumin by AGEs may be more resistant to enzymatic degradation, which further increases the local accumulation of AGEs in tissues. The aim of the present study was to analyze in vitro glycation of serum albumin in the presence of piracetam (PIR) and the gliclazide (GLZ)-glycated albumin interaction. The analysis of PIR as an inhibitor and GLZ interaction with nonglycated human albumin (HSA) and glycated by fructose human albumin (gHSA The tangent to the Scatchard curve at the intersection with the X-axis represents the average number of moles of ligand bound to one mole of albumin in the analyzed binding site. A linear Scatchard curve indicates the existence of one, independent class of binding sites in the albumin molecule. The Scatchard curve can also have a nonlinear curve. The course of the curve similar to hyperbole indicates the nonspecific nature of ligand binding, negative cooperativeness or the existence of many classes of binding sites. A \u201cconical\u201d curve indicates positive cooperativity or instability of the ligand .Hill\u2019s coefficient was determined on the basis of Hill\u2019s method (Equation (5)) :(5)log(rThe primary objective and novelty of this study was to estimate the inhibition properites of piracetam (PIR) and its impact on the gliclazide (GLZ)\u2013glycated albumin interaction. Based on the conducted in vitro data we concluded that piracetam (PIR) used in eldery simultaneously with gliclazide (GLZ) inhibits the formation of Advanced Glycation Ends Products (AGEs) and increases the binding strength of GLZ to glycated albumin that weakens its therapeutic effect. Although the studies are preliminary and cannot be directly used in clinical practice, the results highlight the novelty and validity of the studies and suggest using other research methods as a continuation of this work."} +{"text": "The prestressed near-surface mounted reinforcement (NSMR) using Fiber Reinforced Polymer (FRP) was developed to improve the load bearing capacity of ageing or degraded concrete structures. 
The NSMR using FRP was the subject of numerous studies of which a mere portion was dedicated to the long-term behavior under fatigue loading. Accordingly, the present study intends to examine the fatigue performance of the NSMR applying the anchoring system developed by Korea Institute of Construction and Building Technology (KICT). To that goal, fatigue test is performed on 6.4 m reinforced concrete beams fabricated with various concrete strengths and developed lengths of the Carbon Fiber Reinforced Polymer (CFRP) tendon. The test results reveal that the difference in the concrete strength and in the developed length of the CFRP tendon has insignificant effect on the strengthening performance. It is concluded that the accumulation of fatigue loading, the concrete strength and the developed length of the tendon will not affect significantly the strengthening performance given that sufficient strengthening is secured. Prestressed concrete (PSC) eases the control of the deflection and cracks in concrete structures by reducing the tensile stress through the introduction of a compressive force by means of tendons embedded in the tension zone of the structure. Ageing bridges may experience loss of their function and performance that should be recovered or improved by strengthening the structure. The strengthening of PSC girder bridge is achieved by enlarging the girder, external prestressing, carbon fiber bonding or steel plate bonding. The external bonding reinforcement (EBR) methods were applied to improve the performance by bonding the reinforcement on the tensile zone of the concrete member using an adhesive. This reinforcement took first the form of a plate made of steel that started to be replaced by Fiber Reinforced Polymer (FRP) since the mid of 1980s.The near-surface mounted reinforcement (NSMR) resembles the above mentioned external bonding reinforcement but embeds the FRP plate or bar in a groove cut with a definite depth in the concrete surface. De Lorenzis et al. reportedIn view of the efficiency of FRP, both NSMR and EBR cannot exploit fully the maximum performance of FRP due to the premature occurrence of debonding at the reinforcement-concrete interface. As passive strengthening methods, their effect appears only under the application of additional live loads without clear improvement of the serviceability in term of cracking and deflection recovery. Accordingly, numerous researchers attempted the prestressed NSMR achieving the synergy of both NSMR and prestressing . For theIn particular, FRP is checked to apply in many sites because of strong points such as high strength, light weight, chemical resistance and need to secure reliability through real size test because of low applications, low construction cases, problems with design criteria ,5,6,7,8.Despite of the numerous studies related to NSMR using FRP, a very few of them studied the long-term behavior under fatigue loading . The repAccordingly, the present study intends to examine the fatigue behavior of the prestressed NSMR applying the anchoring device developed by KICT. The cumulated effect of the fatigue load on the strengthening performance is examined by means of a series of fatigue tests performed on 6.4 m reinforced concrete beams fabricated with various concrete strengths and developed lengths of the Carbon Fiber Reinforced Polymer (CFRP) tendon. The fatigue tests were conducted in two stages. 
The first stage applied 2 million loading cycles and the second stage applied static loading until failure of the specimens to measure the residual strength after the accumulation of fatigue.This study intends to examine the fatigue performance of the prestressed NSMR. To that goal, the specimens are designated as shown in As shown in The structural tests were executed in two stages. The first stage applied 2 million loading cycles and the second stage applied static loading until failure of the specimens to measure the residual strength after the accumulation of fatigue.y and 0.7fy, where fy is the steel yield stress. Accordingly, the present study adopted fatigue load ranging between 60 kN and 100 kN. In addition, the loading rate was set to 2.0 to 3.0 Hz. The fatigue test conducted using a dynamic actuator with capacity of 1000 kN to apply 2 million loading cycles. Static load of 100 kN was applied after 1, 1000, 5000, 10,000, 100,000, 1 million and 2 million cycles to measure the deflection, strain and cracks and examine the eventual progress of damage according to the accumulation of fatigue.The size of the fatigue load was determined with reference to the stress of the tensile member using the calculation method suggested by ACI, AASHTO and CSA ,17,18,19Static loading was performed by 4-point loading on all the specimens using static actuators with capacity of 2000 kN to measure the deflection and strain according to the gradual increase of the load until failure. As shown in \u22126 and 6500 \u00d7 10\u22126 during the tensioning work.The tendon used in the fabrication of NSM specimens is a CFRP rod with diameter of 10 mm and its mechanical characteristics are arranged in The residual strength can be measured using the static loading results listed in The ductility of the specimens with respect to the concrete strength is shown in This study examined the strengthening performance of the prestressed NSMR according to the accumulation of fatigue loading. To that goal, fatigue test was performed on 6.4 m reinforced concrete beams fabricated with various concrete strengths and developed lengths of the CFRP tendon. The fatigue tests were conducted in two stages. The first stage applied 2 million loading cycles and the second stage applied static loading until failure of the specimens to measure the residual strength after the accumulation of fatigue. The following conclusions can be derived.The observation of the fatigue behavior with respect to the concrete strength revealed that the deflection exhibited the largest increase rate below 1000 cycles, increased gradually until 100,000 cycles and stabilized until 2 million cycles. The strain measured at the center of the CFRP tendon showed also similar tendency according to the accumulation of fatigue cycles. Moreover, the same tendency was also observed for the fatigue behavior according to the developed length of the tendon.The analysis of the strain developed during tensioning and the strain caused by the accumulation of fatigue revealed that all the specimens presented similar behavior and strain at the rupture of the tendon. In addition, the tendon could reach its tensile performance before rupture under the loading applied after the compressive failure of concrete. 
This indicated that the accumulation of the fatigue load had poor effect on the loss of the performance like the tension force of the CFRP tendon.The results of the static loading test with respect to the concrete strength showed that the accumulation of the fatigue load did not provoke damage nor performance loss of the prestressed NSMR specimens. Moreover, the test results of the specimens with concrete strengths of 30 MPa and 40 MPa appeared to be practically identical to those of a previous experiment without fatigue load accumulation. This indicated that the strengthening performance of the prestressed NSMR is insensitive to the accumulation of fatigue loading and the concrete strength given that sufficient strengthening is secured.The insignificance of the change in the performance provided by the specimens with developed lengths of 67%, 80% and 93% of the CFRP tendon demonstrated that the developed length of the tendon has practically no effect on the strengthening performance given that appropriate tensioning has been secured. However, an appropriate and sufficient developed length should be secured to prevent the anchor be installed outside the effective depth d in which case shear failure would occur due to the increase of inclined cracks.The analysis of the ductility with respect to the concrete strength and the developed length of the tendon provided results complying with those of previous research. All the specimens exhibited nearly the same ductility according to the concrete strength and the ductility appeared to increase with longer developed length of the tendon. Accordingly, adopting longer developed length of the tendon was recommended for securing stable ductile behavior."} +{"text": "Catechols are widely found in nature taking part in a variety of biological functions, ranging from the aqueous adhesion of marine organisms to the storage of transition metal ions. This has been achieved thanks to their (i) rich redox chemistry and ability to cross-link through complex and irreversible oxidation mechanisms, (ii) excellent chelating properties, and (iii) the diverse modes of interaction of the vicinal hydroxyl groups with all kinds of surfaces of remarkably different chemical and physical nature . TherefoThis Special Issue collects contributions from different laboratories working on both basic research and applications of bioinspired catechol systems presented by cutting edge specialists in this growing field. Taking advantage of its open access publication, this collection of papers, influenced by biomimetic approaches, will bring about new avenues for new research and innovative solutions in biomedicine and technology. Main topics addressed in the field of basic catechol chemistry include (i) a computational investigation by Barone et al. of noncoIntegrating more than replacing the many excellent reviews, the present collection will provide the reader with a concise panorama of the status quo and perspectives in the increasingly expanding field of basic and applied research on bioinspired catechol systems. It is clear that the interest for catechol-based materials is experiencing a steady burst, perfectly represented by polydopamine . Several patents based on bioinspired catechol systems and different products are already commercialized and available the market. 
We believe that this special issue may fulfill an important function in promoting biomimetic catechol chemistry for an increasing range of applications."} +{"text": "Cancer cells are mainly dependent on glycolysis for their growth and survival. Dietary carbohydrates play a critical role in the growth and proliferation of cancer and a low-carbohydrate diet may help slow down the growth of tumours. However, the exact mechanisms behind this effect are unclear. This review study aimed to investigate the effect of fat mass and obesity-associated (FTO) gene in the association between dietary carbohydrates and cancer. This study was carried out using keywords such as polymorphism and/or cancer and/or dietary carbohydrate and/or FTO gene. PubMed and Science Direct databases were used to collect all related articles published from 1990 to 2018.Recent studies showed that the level of FTO gene expression in cancer cells is dramatically increased and may play a role in the growth of these cells through the regulation of the cellular metabolic pathways, including the phosphoinositide 3-kinases/protein kinaseB (PI3K/AKT) signaling pathway. Dietary carbohydrate may influence the FTO gene expression by eliminating the inhibitory effect of adenosine monophosphate-activated protein kinase (AMPK) on the FTO gene expression. This review summarised what has been recently discovered about the effects of dietary carbohydrate on cancer cells and tried to determine the mediating role of the FTO gene in these effects. Recent studies suggested that healthy diet can play a major role in the prevention of cell malignancy, apoptosis of cancer cells and reduced tumour size \u20136. For eUntil recently not a lot has been found about the existing mechanisms by which dietary components affect the formation of cancer cells. The results of most studies suggest that some part of this effect may be due to the effects of dietary intake on the expression of some of the genes involved in the cell metabolism and division. The relation between gene variations and risk of cancer is well documented , 14. CanPubMed, PsycInfo and the Cochrane databases were searched to identify articles published in relevant fields. Appropriate keywords including carbohydrate, diet, FTO expression, FTO genotype, cancer, cell and metabolism were used to collect the papers. All articles published in English from June 1990 to July 2018 were studied. Of the total 180 articles, 109 articles were excluded because they failed to address the role of the FTO gene in breast cancer and/ or obesity and 63 articles for lack of sufficient information on the mechanism of the effects of FTO gene on the breast cancer and obesity. Finally, eight articles were included. Of these studies, five studies were on the relationship between dietary carbohydrate and cancer, and five were related to the molecular mechanisms of dietary carbohydrate on FTO gene.Unlike normal cells, most malignant cells are dependent on the availability of sugar in the blood constantly for energy supply and to meet demands for their metabolism. These cells are unable to metabolise fatty acids and ketone bodies due to mitochondrial dysfunction. Previous studies reported the benefits of a low-carbohydrate diets on human body weight and general health . Ho et aMoulton et al. evaluated the effects of two different ratios of carbohydrate and protein in the early development of breast tissue carcinogenesis in an animal study . After tP < 0.05). 
The risk of breast cancer was associated with the consumption of carbohydrates with higher GIs (P for trend < 0.001).Sieri et al. conducted a cohort study and examined whether the glycemic load (GL) and the glycemic index (GI) are associated with the risk of breast cancer in women. The association between hyperglycemia and cancer risk was examined in 33,293 women and 31,304 men in northern Sweden. Tan-Shalaby et al. examined the impact of the modified Atkins diet on cancer development. A total of 17 patients with advanced cancer, who had not undergone chemotherapy, were enrolled for the study. Moreover, the effect of the ketogenic diet as a well-known low-carbohydrate diet on tumour growth was investigated and confirmed in several studies. The FTO gene is known for its role in obesity and diabetes. Recent studies on the relationship between macronutrient intake and the level of FTO gene expression in the hypothalamus identified that dietary carbohydrates can affect the FTO gene expression level. However, the results in this area are contradictory: in some studies carbohydrate intake up-regulated the FTO gene expression, while in other studies it suppressed FTO gene expression. The possible mechanism behind the effect of the FTO gene on cancer risk has been studied recently. The FTO gene may act as a mediator for the phosphorylation of serine at position 473 of the AKT protein, and its activation has a key role in the proliferation and differentiation of cells. Cancer cells are dependent on glycolysis for their growth and proliferation. Dietary carbohydrate is a significant factor and the association between blood glucose levels and cancer is well documented. But the exact mechanisms of this relationship remain unclear. Recent studies showed that dietary macronutrients may have an impact on cancer by altering the expression level of the genes associated with the metabolism of cancer cells (such as FTO). If we could improve the expression levels of the genes involved in growth, metabolism, and the function of cancer cells through changing our diet, we can hope to find a nutritionally applicable solution for the treatment and control of cancer in the future. Further research on the exact mechanism of the influence of the FTO gene on the growth and proliferation of cancer cells may clarify the importance of this issue and the possibility of therapeutic use of dietary components in cancer."} +{"text": "The aim of this paper is to study experimentally the effect of marble powder and green sand as partial substitutes for fine aggregate on the strength and durability of M40 grade concrete. The use of metakaolin as a pozzolanic admixture, as a partial binder replacement, is also studied to assess the properties with respect to the fresh and hardened state. Several formulations were prepared with a constant water-binder ratio of 0.4 and varying percentages of marble powder and green sand. The results indicated that the properties of concrete were much enhanced by the partial incorporation of marble powder and green sand as fine aggregate and metakaolin for cement when compared to normal concrete. The microscopic studies also confirmed the viability of using green sand and marble powder as fine aggregates. Recent decades have witnessed a rapidly growing demand for river sand as fine aggregate, which is one of the most essential ingredients in the production of concrete. 
Previous research works have investigated the effects of using foundry sand as a fine aggregate replacement. Hence the value of the present study lies in fulfilling the practical design requirements, such as strength and durability, of concrete mixes manufactured using replacements of fine aggregate by both marble powder and green sand. In addition, the synergistic effect of marble powder and green sand together with a pozzolanic admixture (metakaolin) on the concrete properties is addressed in detail, thereby not only slowing the exhaustion of natural aggregates but also improving the efficiency of using industrial wastes as fine aggregate replacements.The main objective of this research is to investigate the feasibility of using marble powder and green sand as fine aggregate in concrete production. The effect of metakaolin replacement in cement is also investigated. The mechanical properties of the hardened concrete made of green sand and marble powder were studied at various ages. The microstructure and hydration products were also investigated to assess the impact of using green sand and marble powder as partial replacement for fine aggregate.Ordinary Portland cement (OPC) 43 grade conforming to Indian standard BIS 8112-1989 is used. The concrete mixes were manufactured in the proportion of 1:2.08:3.57 (binder : fine aggregate : coarse aggregate), as per BIS 10262-2009.The fresh state properties of the concrete mixes were determined using the slump test conforming to IS 1199-1959, and the hardened state properties were analyzed by conducting compressive strength tests on 150 mm concrete cube specimens, flexural strength tests on 100 mm \u00d7 100 mm \u00d7 500 mm prism specimens and split tensile strength tests on 150 mm diameter \u00d7 300 mm height cylindrical specimens at several ages conforming to BIS 516-1959, together with an impact strength test. The workability of the concrete mixes, the flexural strength development at the ages of 14, 28, 56 and 90 days, the splitting tensile strength test setup on cylinder specimens, the impact strength apparatus and specimens at failure, and the water absorption values of the concrete mixes with and without the fine aggregate replacement are presented in the corresponding figures. The XRD studies and the morphology of the concrete mixes as obtained from the scanning electron microscope are also reported. With the help of the experimental data, graphs were plotted between split tensile strength and flexural strength of concrete mixes at various ages. The linear relationship between split tensile strength and flexural strength is obtained through the predicted equation and the corresponding regression coefficient R2 values.This study mainly aims at utilizing marble powder and green sand as partial substitutes for fine aggregate and metakaolin for cement. From the experimental results obtained the following conclusions can be drawn.The workability of the concrete was very much influenced by the addition of green sand and marble powder. The relative decrease in workability was due to the increase in the amount of water required by these aggregates compared with normal sand. Moreover, the decrease in workability may also be caused by the agglomeration and compounding effect exhibited by the metakaolin particles.An increase in the compressive strength of all the mixes occurred at various ages. 
The filler effect and the pozzolanic reaction of metakaolin with calcium hydroxide also contributed to the increase in the strength of the concrete. This enables the use of marble and green sand as construction materials by minimizing the utilization of river sand as fine aggregate and reducing the production of cement by replacing it with metakaolin.The flexural strength of the concrete mixes was enhanced due to the fine aggregate replacement and metakaolin replacement. The use of marble powder and green sand within 15% was found to be optimum to increase the flexural strength of the concrete when used in combination with metakaolin as cement substitute.The splitting tensile strength of the concrete was also improved up to 15% replacement, beyond which a reduction in the split tensile strength was observed. This shows that using 15% green sand and marble powder was found to be adequate as far as the strength parameters are concerned.The impact strength of the concrete was much enhanced for the concrete containing metakaolin as cement replacement and marble powder and green sand as fine aggregate replacement, which proved to be efficient in absorbing the impact energy acting on the concrete.The use of marble powder and green sand as fine aggregate replacement created a favorable effect on reducing the porosity of the concrete. The reduction in the water absorption caused a decrease in the voids and is responsible for the improvement in the mechanical strength and impermeability properties of concrete. The addition of metakaolin caused stronger bonding of the cement paste and strengthens the adherence of the cement and the aggregates by the formation of CSH gel, which is also responsible for the reduction in the porosity values.The X-ray diffraction studies also showed the minimal presence of calcium hydroxide, which confirmed the consumption of calcium hydroxide in the hydration reaction and is responsible for the dense microstructure of the concrete through the addition of CSH gel. The reduced calcium hydroxide peaks indicate the formation of CSH gel that is responsible for the improved strength and durability of the concrete.The SEM images of the concrete also showed a reduction in the voids in the concrete and the well distributed CSH gel that effectively bonded the fine aggregates with the cement matrix. The SEM micrographs are coherent with the obtained mechanical properties and the morphology of the concrete mix and show that the marble powder and green sand in combination with metakaolin can be used for making good quality concrete with higher strength.The prediction models obtained from the experimental data showed the relationship between various strength parameters of the concrete, and the obtained R2 values indicate the conformity of the interrelationship between the variables.The final conclusion can be drawn that when green sand and marble powder are used as fine aggregate replacement and metakaolin as partial substitute for cement, a concrete mixture with high strength properties compared to that of normal concrete can be obtained. The incorporation of these powdered fine aggregates instead of normal river sand proves to be beneficial to obtain concrete with higher strength. 
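As an aside on the prediction models mentioned in the conclusions above, the following minimal Python sketch shows how a linear relationship between split tensile strength and flexural strength can be fitted and its regression coefficient R2 reported. The strength values below are hypothetical placeholders, not the measured data of this study.

```python
# Illustrative only: fits a linear model between split tensile and flexural
# strength and reports R^2, mirroring the type of prediction model described
# above. The strength values are hypothetical, not the study's measured data.
import numpy as np

split_tensile = np.array([2.8, 3.1, 3.4, 3.6, 3.3])  # MPa, hypothetical
flexural = np.array([4.1, 4.5, 4.9, 5.2, 4.8])        # MPa, hypothetical

# Least-squares fit: flexural = a * split_tensile + b
a, b = np.polyfit(split_tensile, flexural, 1)
predicted = a * split_tensile + b

# Coefficient of determination R^2
ss_res = np.sum((flexural - predicted) ** 2)
ss_tot = np.sum((flexural - np.mean(flexural)) ** 2)
r_squared = 1 - ss_res / ss_tot

print(f"flexural = {a:.3f} * split_tensile + {b:.3f}, R^2 = {r_squared:.3f}")
```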
Moreover this type of concrete produced is also environment friendly and economically feasible by large utilization of the waste marble powder and green sand."} +{"text": "Drosophila (25%), rodents (65%), and humans (90%) suggest that increasing complexity of the brain is accompanied by growing numbers of glial cells . The latter has recently emerged as an excellent model of the oscillatory network , as the normal diurnal oscillation of the key neurotransmitters like glutamate, GABA, dopamine and ATP adenosine, which underlie the circadian homeostasis, can be disrupted by excessive alcohol consumption. Considering the regulation of purinergic signaling and circadian oscillations by both neurons and astrocytes, as well as their interactions, they review the diverse mechanisms by which purinergic malfunction may contribute to circadian disruption or alcohol abuse.Another in-depth review, given by Duhart et al. show effects of human glioma cells in the hypothalamic region on the circadian behavioral output in mice. Their report might be of relevance for glioma diagnosis as it provides the foundation work for future research aimed to understand the pathological consequences of astrocytic dysfunction in the circadian time-keeping, which have recently come to light as a major player in SCN , including the neuron-glia interactions that may impact the processing in the cerebellum. The authors show possible relation between neurons expressing the clock protein PERIOD and Bergmann glia, which are demonstrated to express melatonin receptors.The research by Drosophila melanogaster . G\u00f3rska-Andrzejak et al. bring attention to the glial cells located in the output of the circadian pacemaker in the neuropil of optic medulla and infer the possibility of interactions between Pigment Dispersing Factor (PDF) releasing clock neurons and the medulla glia expressing PDF receptors. Further characterization of the distal medulla glia by Krzeptowski et al. reveals two populations of cells that differ with respect to expression level of PER and the glial marker REPO. Interestingly, the authors have observed that the elevated levels of PER are characteristic for the ensheathing glia, but not the astrocyte-like ones , even though the latter are well-known to influence the circadian rhythms of Drosophila locomotor activity . Long and Giebultowicz additionally demonstrate that the rhythmic expression of PER dampens with advanced age in most of the glial subtypes. Thus, their study bring into focus the glia-related circadian changes that may significantly contribute to the loss of homeostasis in the aging brain.Functioning of both neuronal and glial oscillators is the foundation of the circadian plasticity of the visual system of Drosophila emphasizes the important role of neuron-glial interactions in the structural plasticity of the circadian network. The article by Ceriani group reveals the involvement of glial cells in the structural remodeling of neuronal pacemakers . It explains how the dynamic morphological changes of dense astrocytic meshwork might modulate the output activity of the neuronal oscillators to produce dimorphic behavior of females and males.Another study on Drosophila corroborates the hypothesis that the circadian clock may also adopt post-transcriptional mechanisms regulating transposable elements (TEs) in order to ensure proper rhythmicity. 
Its authors argue that BELLE, a conserved DEAD-box RNA helicase that acts as an important piRNA-mediated regulator of TEs, is a putative clock component present in both the clock neurons and the glial cells of Drosophila brain and influences circadian rhythmicity of Drosophila locomotor activity.Eventually, the last paper on All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Cytomegaloviruses (CMV) reorganize membranous system of the cell in order to develop a virion assembly compartment (VAC). The development starts in the early (E) phase of infection with the reorganization of the endosomal system and the Golgi and proceeds to the late phase until newly formed virions are assembled and released. The events in the E phase involve reorganization of the endosomal recycling compartment (ERC) in a series of cellular alterations that are mostly unknown. In this minireview, we discuss the effect of murine CMV infection on Rab proteins, master regulators of membrane trafficking pathways, which in the cascades with their GEFs and GAPs organize the flow of membranes through the ERC. Immunofluorescence analyzes of murine CMV infected cells suggest perturbations of Rab cascades that operate at the ERC. Analysis of cellular transcriptome in the course of both murine and human CMV infection demonstrates the alteration in expression of cellular genes whose products are known to build Rab cascades. These alterations, however, cannot explain perturbations of the ERC. Cellular proteome data available for human CMV infected cells suggests the potential role of RabGAP downregulation at the end of the E phase. However, the very early onset of the ERC alterations in the course of MCMV infection indicates that CMVs exploit Rab cascades to reorganize the ERC, which represents the earliest step in the sequential establishment of the cVAC. The seqThe development of cVAC is initiated immediately upon infection. It involves reorganization of the Golgi and the endosomal system in a proper sequence of required membranous structures around the cell center . The reoThe ERC represents one branch in the endosomal maturation and highly dynamic router of membrane flow that undergoes through a series of transitions regulated by the cascade recruitment of Rab proteins and their effectors . Thus, iA study in MCMV infected cells demonstrated that the endosomal rearrangement and dislocation of the Golgi are initiated already at 3\u20135 hpi . AnalysiMultiple studies on the organization of the cVAC during HCMV infection suggested that CMV infection reorganize REs into a perinuclear cluster that form the core of the cVAC . These sThe ERC represents a complex of heterogeneous subsets and functionally linked populations of REs that include relatively large perinuclear structures, tubular REs and a number of small transport intermediates . SeveralRab GTPases are master regulators of membrane traffic that control distinct steps in membrane flow by recruiting diverse effector proteins rev. by . 
InactivThe membrane flow into the ERC involve a transition of Rab5-positive EE/SEs to Rab11-REs and Arf6/Rab8-REs and the The sequences of regulatory networks that control membrane flow through the ERC in uninfected cells as well as the sequence of alterations that lead to the final establishment of the cVAC in CMV infected cells are far from being completely understood. A small piece of evidence indicates that alteration of Rab recruitment is exploited by CMVs as an integral part of a complex mechanism that leads to the development of the cVAC.In uninfected fibroblasts, Rab proteins and their effectors are mainly cytosolic, and only those that decorate major endosomal organelles display distinguishable structures visible by conventional confocal microscopy Figure . Many trC. elegans epithelial cells Rab5 recruits an effector which promote interaction of Rab10 GEF with Rab10 when the cut-off value was adjusted to p < 0.1, whereas none of them was significant at the cut-off p < 0.05. Even upregulation of TBC1D30 observed at 18 hpi is insignificant at p < 0.05, because of the very low level of transcript in uninfected cells. However, these alterations, as well as most of the alterations observed in the previous study (One approach used by CMVs could be manipulation with the amount of proteins that shape membranous organelles, either by up- and down-regulation of transcription, translation or protein degradation. Transcriptome analysis of MCMV infected cells at 3 and 18 hpi Figure demonstrus study , correlaus study .Given that most of altered Rab protein genes are upregulated, downregulation of Rab gene expression is not a mechanism that could explain membrane reshaping in the E-phase of infection. On the contrary, upregulation of some genes encoding GEFs and GAPs may correlate with the low recruitment of Rab13, Rab22a, and Rab35 and could explain some of the alterations observed in the E phase of MCMV infection. Thus, manipulation with the expression level of regulatory proteins could be a potential target of CMVs. This conclusion may be supported by observations from temporal quantitative proteome analysis in HCMV infected cells at the end of E phase , which dAltogether, immunofluorescence, proteome, and transcriptome analysis suggest that the main alteration of CMVs in the E phase of infection could be the recruitment of components of Rab cascades and that targeting of RabGAP proteins could be a mechanism exploited by CMVs in order to reshape membranous system of the cell.in vivo under physiological conditions (Analysis of the perinuclear endosomal aggregate that is established at the end of the E phase of CMV infection indicates that CMVs exploit Rab cascades to take over the control at the ERC trafficking routes and thereby initiate the establishment of the cVAC. Although a plethora of data in the last decade provided clues about Rab cascades, many components remain unidentified and functional networks that construct the cascades poorly characterized nditions . The anaPL conceived and coordinated the study, carried out image analysis, conceived figure presentation, and drafted the manuscript. HL and GZ coordinated the study, established immunofluorescence protocols, and carried out recycling analysis. LK, VP, NV, and SiJ carried out immunofluorescence and imaging studies. BL and StJ performed the transcriptome analysis. 
All authors read and approved the final manuscript. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "How to cite this article: Ali U. The Burden of Acute Kidney Injury in Indian Pediatric Intensive Care Units. Indian J Crit Care Med 2019;23(8):349. Acute kidney injury is a major game changer when it occurs in the setting of critical illness in children. It increases the risk of death independent of the severity of illness. In addition, one third of the AKI survivors carry the risk of future progression to chronic kidney disease. Zero deaths due to AKI by the year 2025 (0 by 25) is the ambitious global vision of the International Society of Nephrology. However, in many countries of the world including India we have not yet reached the starting post of understanding the total burden of AKI in children.The burden of AKI could be higher in Indian children when compared to developed nations due to a higher incidence of infectious illnesses, wider use of both allopathic and non-allopathic nephrotoxic medications and the probability of having a higher subset of children born with lower nephron numbers secondary to premature births or intrauterine growth retardation that make them more vulnerable to AKI during critical illness.The need of the hour is to have national or regional AKI prevalence studies to understand the extent of the problem and the regional differences in the predisposing conditions. Comparison of data is hampered by the prevalence of different methodologies for measuring serum creatinine and the use of different criteria for the diagnosis of AKI. As creatinine levels are physiologically low in infants and children due to lower muscle mass, it is important that serum creatinine is measured by enzymatic methods that measure true creatinine and avoid interference by other non-creatinine chromogens. Laboratories should validate their methodology by referencing it to isotope dilution mass spectrometry (IDMS). This is already being done by some laboratories, but the standards need to become universal.Several diagnostic criteria are currently used in pediatrics for defining AKI. The most common ones are the AKIN, the pRIFLE and the KDIGO criteria. The KDIGO criteria do not need GFR calculations. They allow the use of either relative or absolute increases in serum creatinine and can be used for both adults and children. The uniform adoption of KDIGO criteria for defining AKI will allow valid comparison of data from different institutions. Until we are able to get the big picture, these snapshots from different parts of the country contribute important jigsaw pieces to complete the larger puzzle of the etiology and outcome of AKI in critically ill Indian children.There have been several small, single centre reports from different parts of India describing the prevalence and outcome of AKI in children. This issue carries an interesting snapshot from a PICU in Eastern UP where 50% of the cohort had viral encephalitis as the underlying illness."} +{"text": "Cancer is an important public health issue around the world. Among types of cancer, lung and colorectal cancer are the most common in men while breast and cervical cancer are the most common in women. Detection of early stage cancer via screening can significantly reduce the mortality and prolong life. 
Although cancer prevention and control has been identified as a national priority, individuals\u2019 utilization of cancer screening services is low due to limited knowledge of cancer screening and ineffective patient-provider communication, especially in minority populations. In this symposium, we will examine three scenarios that highlight the challenges of cancer screenings in minority populations. First, we will share the results from a mixed method study that investigated the knowledge of and attitudes towards Low Dose Computed Tomography lung cancer (LDCT) screening and assessed the smoking cessation needs of African Americans who receive LDCT screening, in an effort to reduce the health burden of lung cancer. The next study will discuss how the characteristics of older Chinese adults from the United States and Taiwan are associated with cancer screening communication with physicians. Lastly, we will share the results from a cross-sectional study that analyzed 10 years of data from the National Health Interview Survey to examine the difference in LDCT screening eligibility among Asian American smokers. The discussant will summarize with an overview of the topic, and comment on the disparities of cancer screening in older minority populations."} +{"text": "As it approaches a decade since Frontiers in Endocrinology was launched, the Chief Editors have commissioned a series of articles to reflect the continuing dynamic evolution of the science at the Frontiers of Endocrinology. These articles highlight recent breakthroughs or advances, new technologies, or challenges in the field of endocrinology. As with any dynamic field the frontiers are ever changing and these articles exemplify some of the recent developments together with some of the new questions and challenges for the future. The articles cover many different areas of endocrinology including issues involved in some of the biggest health challenges facing today's society such as stress, obesity, reproduction, cancer, and aging.Carpentier et al. review the challenges that have been encountered in the subsequent decade of research into BAT in humans. They address key questions such as the true extent of BAT in humans; whether the original imaging techniques underestimated the total mass of BAT and to what extent \u201cbeiging\u201d of white adipocytes, with the induction of UCP1, can occur in humans. These are critical questions that could establish whether beige and brown adipocytes in humans could be an effective target to bring about changes in energy expenditure that are of therapeutic benefit.The greatest challenge to health provision in virtually every country across the globe is obesity; the scale of the epidemic threatens to outstrip resources in even the richest of societies. In addition to the advances in our understanding of energy expenditure, there has been major progress in our knowledge of the endocrine controls of energy intake via hormones that control hunger and satiety and hence determine food intake, as reviewed by Stengel and Tach\u00e9. Ma and Vella and Laferr\u00e8re and Pattou have reviewed the potential new insights into metabolic endocrinology that observations from bariatric surgery may provide. The clues from the anatomical differences between the different surgical procedures are reviewed by Ma and Vella. They describe how such anatomical distinctions can provide insights into the various hormonal pathways and in particular the interactions between regions of the gut and the pancreas. 
They also touch on other interesting, less well-appreciated effects, such as how the surgery and endocrine changes can alter the perception of sweet-taste and hence alter calorie intake. The most studied surgical intervention is Roux-enY gastric bypass and Laferr\u00e8re and Pattou review these studies with an emphasis on what these studies have revealed regarding the gut endocrine system. They highlight the many new questions raised in relation to the role of satiety hormones, incretins, and bile acids. Our concepts of bile acids have been transformed from being regarded as just soaps that aid in the uptake of dietary fats to being a previously unappreciated complex endocrine system.The management of obesity remains a huge health challenge. Numerous dietary and lifestyle changes have been proposed and all have achieved modest weight loss that is invariably soon regained. The pharmaceutical industry has invested considerably in developing medications to treat the huge potential market with a long trail of failures with many concerns regarding adverse effects and long-term safety . While nJacobsen et al. provide an overview of the development of strategies to prevent this and avoid the need for lifelong treatment. In order to prevent type I diabetes it is important to understand the natural history of the development of the autoimmunity that results in pancreatic beta cell destruction and the onset of type 1 diabetes. The challenges of studying populations prior to the disease onset and how this is being addressed around the world are described. These studies have informed the various trials for primary prevention in subjects at risk and secondary prevention in those already exhibiting evidence of autoimmunity. To date these studies have had limited success and new and future strategies are discussed.While the obesity epidemic has added to the focus on type 2 diabetes it has also become clear that there has been a 3% annual increase in the incidence of type 1 diabetes and Eiden and Jiang review how new observations of adrenal chromaffin cells have contributed to our understanding of how we coordinate response to stress. The adrenal medulla has conventionally been considered the source of epinephrine to coordinate the cardiovascular, neuronal, and metabolic responses to stress. In this overview they describe recent observations of sympathetic nervous system regulation of chromaffin cell function and its secretion of not just epinephrine but also a rich cocktail of novel bioactive peptides. This new evidence is synthesized into a broader understanding of how metabolic, cardiovascular, and inflammatory responses are integrated. They also highlight interesting new questions that have arisen from this work; such as whether the sensory nervous system and immune/inflammatory systems are looped-in together via the adrenal medullary stress response and what are the broader endocrine functions of the many bioactive peptides secreted from the chromaffin cells.In a fine exposition of how studying one component can help inform on how inter-connected the endocrine system has become; Franks and Hardy. They focus on recent advances regarding the role of androgens in the development of polycystic ovary syndrome (PCOS), which remains the most common endocrine disorder in women of reproductive age.Population control and reproductive health remain major health issues globally. 
The important role of androgens both in ovarian follicle selection to ensure mono-follicular ovulation in women and in the normal cyclical secretion of estrodiol is reviewed by Kristensen and Andersen discuss the many issues and challenges with extending this technique to more broader applications. Using this technique to restore fertility to women with anovulatory PCOS is discussed. The many issues surrounding the more controversial application of the technique to enable healthy women to postpone childbearing into their more advanced years is also addressed.One of the recent advances in techniques for maintaining fertility in women has been ovarian tissue cryopreservation (OTC) and transplantation. Originally developed to assist prepubertal girls and young women faced with reductions in the ovarian reserve due to pathologies, such as malignancies, or due to aggressive therapies that damage the ovary, Karras et al.. The ongoing questions regarding the role of VDBP in important clinical issues such as preeclampsia, preterm birth, and gestational diabetes are discussed.During pregnancy a woman's endocrine system kicks into overdrive with most hormones adapting to enable the mother to meet the additional metabolic demands and to provide an optimal environment in which the fetus can develop. Among all of these hormonal changes vitamin D plays an under-appreciated role both in ensuring adequate calcium availability for fetal bone development and in enhancing maternal tolerance to the presence of paternal and fetal alloantigens. Recent advances in our understanding of the part played by vitamin D-binding protein (VDBP) in facilitating these roles is reviewed by Moody et al.. The development of agents that target receptors for bombesin, neurotensin, vasoactive intestinal peptide, and somatostatin are described in relation to the detection and treatment of both endocrine and non-endocrine cancers.Neuropeptide G protein-coupled receptors (GPCRs) are over-expressed in many different cancers; not just the relatively rare neuroendocrine tumors but also in some common cancers such as small cell lung cancers. The potential targeting of specific GPCRs for the development of novel cancer therapies is reviewed by Alrezk et al.. These are challenging cancers to treat and although most can be cured by surgery on rare occasions they metastasize and for these there are currently no approved treatments. The potential of systemic therapies, that have largely been developed to treat other cancers, is reviewed by Jimenez. Different strategies that have been designed to target each of the accepted \u201challmarks\u201d are discussed in relation to their application to treating PCCs and PGLs. These include different strategies to inhibit angiogenesis, cell proliferation, invasion, and metastasis, to enhance the induction of cell death and the recently developed immunotherapies.Recent advances in the genetics, biochemical characterization, and imaging of pheochromocytomas (PCCs) and paragangliomas (PGLs) are reviewed by Sahli et al.. The limitations of current genetic markers, their cost-effectiveness and the next generation of tests currently being evaluated are discussed.As more and more people survive into advanced ages the problems of the elderly become an increasing burden on clinical services. 
The prevalence of thyroid nodules in people over the age of 60 years is extremely high (50\u201370%) and although most of these are benign (85\u201395%) it is important to distinguish the few that can become malignant and require surgery. The current status of molecular markers for the differential diagnosis of malignant vs. benign thyroid nodules is also reviewed in this collection.Another common ailment associated with aging is osteopenia, which leads to the high prevalence of fractures seen in the elderly population. The challenges of identifying individuals at risk of fractures and the uses and limitations of current therapies for osteopenia are discussed by Ramchand and Seeman. The relative merits of antiresorptive and anabolic therapies are discussed, as are the alternative strategies of combining these therapies or using them sequentially.The increasing speed of technological advances provides endocrinologists with ever more powerful tools for investigation, diagnosis and treatment. As the pressures of modern lifestyles involve major changes in how we live and eat, and demographics markedly increase the elderly population, the challenges endocrinologists face in the clinic are constantly evolving. This collection of articles illustrates the variety of these challenges across the different specialties within endocrinology and the dynamic nature of modern endocrinology.Both authors have made a substantial, direct and intellectual contribution to the work and approved it for publication.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "A selection of polar organic compounds was investigated for their biodegradation in a laboratory-scale fixed-bed bioreactor, and the decline of the parent compounds as well as the formation of metabolites was monitored. Of particular interest was the investigation into the degradation of pesticides, especially isoproturon (IPU), surfactants and industrial by-products of chemical synthesis. The results from the laboratory degradation experiments are compared to findings in groundwater."} +{"text": "Monteggia described a fracture of the proximal third of the ulna with anterior dislocation of the radial head from both the proximal radioulnar and radiocapitellar joints. The key treatment principle in Monteggia fractures is stable anatomic alignment of the ulna. We present an uncommon case of a Monteggia fracture-dislocation with an irreducible anterior dislocation of the radial head associated with a lesion of the lateral collateral ligament of the elbow. The patient in our report had a successful clinical outcome and functional range of motion after rigid fixation of the ulnar shaft fracture and exploration of the elbow joint, reduction of the radial head and repair of the lateral collateral ligament. This case is unusual because of the association of a complete tear of the external collateral ligament of the elbow. The original fracture pattern described by Monteggia is a fracture of the proximal third of the ulna with anterior dislocation of the radial head. The proximal radius was dislocated from both the proximal radioulnar (RU) and radiocapitellar (RC) joints. The main step in the surgical treatment of Monteggia fractures is anatomic reduction and stable osteosynthesis. Once this step is properly achieved, the radial head is automatically reduced in most of the cases. 
Open reduction is only necessary in the rare cases of incarceration of soft parts or a bone fragment. We report a case of a Monteggia fracture where open reduction of the radial head was performed. This case is unusual because of the association of a complete tear of the external collateral ligament of the elbow.A 42-year-old patient presented to the emergency ward with a fresh closed trauma of the left forearm following a road accident. Physical examination showed swelling of the elbow and upper forearm and an obvious deformity. The patient was neuro-vascularly intact in that extremity. Initial plain radiographic studies of the left forearm revealed a displaced fracture involving the ulnar shaft and a dislocation of the radial head. This radiological finding, also called Monteggia fracture-dislocation type 1, was associated with a radial styloid fracture. In 1814 Monteggia reported a particular injury pattern associating a fracture of the proximal third of the shaft of the ulna with a dislocation of the radial head from both the superior radio-ulnar and the radio-humeral joints. We have described the management of a rare lesion in which there was buttonholing of the radial head through the anterior capsule associated with a rupture and incarceration of the annular ligament, causing the radiocapitellar dislocation to be irreducible (even after fixation of the ulnar fractures). The anatomic reduction of the ulna leads to spontaneous reduction of the radial head in 93% of cases. In the other 7%, open reduction typically needs to be performed.The authors declare no competing interests."} +{"text": "The data was obtained from a field survey aimed at measuring the patterns of utilization of mental healthcare services among people living with mental illness. The data was collected using a standardized and structured questionnaire from People Living with Mental Illness (PLWMI) receiving treatment and the care-givers of People Living with Mental Illness. Three psychiatric hospitals in Ogun State, Nigeria, were the population from which the samples were taken. The chi-square test of independence and correspondence analysis were used to present the data in analyzed form. Specification Table. Significance of the data: \u2022The central theme is the study of utilization of mental healthcare facilities among people living with mental illness.\u2022The data could be useful in monitoring the extent to which the mental health services are available and utilized.\u2022The study can be replicated in other countries with similar demographic factors.\u2022The data can be used in the overall study of mental health. 1. The data is a summary of responses from a field survey. Structured questionnaires were administered to People Living with Mental Illness (PLWMI) and their caregivers, and the aim is to measure the patterns of utilization of mental healthcare services among PLWMI. Only those receiving treatment and the care-givers (in the case of very unstable patients) were considered. Also, those residents in the study areas that are of Yoruba origin were considered. Adults younger than 18 years were excluded from the study. The pattern of utilization of mental healthcare services in this context was determined by the perceived use of the mental healthcare services by the respondents, frequency of use, frequency of taking prescribed medications and the perceived obstacles to using the available mental healthcare services. 
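The record above reports that a chi-square test of independence was used to relate utilization of mental healthcare services to respondents' socio-demographics. The sketch below is a minimal, hypothetical illustration of such a test on an invented utilization-by-income contingency table, assuming scipy is available; it is not the study's actual data or code.

```python
# Illustrative chi-square test of independence (assumes scipy is installed).
# The contingency table (utilization frequency by income group) is hypothetical
# and stands in for the survey cross-tabulations described above.
from scipy.stats import chi2_contingency

# Rows: utilization frequency (rarely, sometimes, often)
# Columns: income group (low, middle, high) -- hypothetical counts
observed = [
    [30, 18, 7],
    [22, 25, 14],
    [10, 20, 24],
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
# Following the record's decision rule: p < 0.05 is taken to imply association.
print("Association indicated" if p_value < 0.05 else "No association indicated")
```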
These are shown in 2Mental illness has been believed by numerous experts to be caused amongst others by depression, alcohol and substance abuse, stress, violence against women or minors, post-traumatic stress disorder, women\u05f3s infertility and biological factors. Mental health in particular requires special help, care and management. The treatment may come as psychotherapy and medications which are available in mental healthcare services. The availability of mental health services determines their patterns of usage or utilization Utilization is connected with ease of use, excellence service, good customer relations, affordable fees charge, management and socio-economic factors.Questionnaire was used in this article to measure the pattern of utilization of mental healthcare services in Psychiatric hospitals located in three local Government areas of Ogun state, Nigeria. The utilization of the mental healthcare services in the demographics of the study area in particular and Nigeria in general are historically low due to long distance, unavailability of medications, stigmatization, epileptic or skeletal services, poor road networks, poverty and dearth of skilled psychiatrics 2.1Chi-square test of independence was used to determine the association between the measure of utilization of mental healthcare services and the socio-demographics of the respondents and is presented in Remarks: P-value less than 0.05 imply association.2.2The correlational studies are important to reveal the strength and nature of the observed linear relationship that exist between the measure of utilization and the socio-demographic variables. These are presented in 2.3Correspondence analysis is performed to visually display the contributions of the income of the respondents to the hindrance from using mental health services. Details on correspondence analysis can be found in The results are presented as follows: Correspondence table , model sRemarks: The data was explained by two dimensions. Distance seems not to be perceived hindrance to utilization of mental healthcare services in the studied area."} +{"text": "The implementation of these approaches impels its potential effect on the economy of particular countries and also reduces unnecessary overburden on the environment. This contribution aims to provide an overview of some of the most recent trends, challenges, and applications in the field of biomaterials derived from sustainable resources.Biomaterials and sustainable resources are two complementary terms supporting the development of new sustainable emerging processes. In this context, many interdisciplinary approaches including biomass waste valorization and proper usage of green technologies, The interplay between biomaterials and renewable resources has provided a window of opportunity for the development of novel sustainable emerging strategies within the past few years. Humans used biomaterials in ancient times without actually knowing it. In recent times, a great deal of attention has been paid to their development from sustainable resources driven by the need to develop increasingly more sustainable alternatives to traditional materials. 
The urgent need for sustainable energy development depends on the advancements of green technologies and increasingly on biocompatible materials with properties comparable to existing materials because they are the way toward a more sustainable future.2 sequestration )In view of the relevant properties and biological activities of these extracts and exopolysaccharides, advanced applications for extracted polysaccharides and related compounds from tobacco as well as from other sources were envisaged .The first initiative related to the utilization of such polysaccharides as sacrificial templates for the production of a wide range of nanomaterials including nanocrystals of metal oxides. Preliminary research results from the group indicated that the utilization of pure polysaccharides including starch and alginic acid as templates in a dry ball milling methodology could lead to advanced nanocrystals of metal oxides (ZnO) in high purities and with a highly crystalline nature 38]. Im. Im38]. These results were initial proof-of-concept from pure polysaccharides but in some cases algae-extracted polysaccharides show a remarkable potential as replacement of pure compounds in the proposed technologies .i.e., biomass and residues) to produce controllable and well defined nanostructures is the way forward in this regard. In some cases, the challenge lies in the possibility to carefully control the properties of the synthesized biomaterials , but these issues have been occasionally circumvented in cases depending on the biomaterial synthetic protocol and/or the selected future application. As an example, the presence of heteroatoms including Nitrogen or Boron can improve the electrochemical performance. In some other cases, the formation of nanocomposites or the presence of another substrate/starting material can lead to advanced materials with high-end applications such as electrodes in energy storage devices where physical properties like conductivity, flexibility, transparency, and mechanical strength are compulsorily needed. These biomaterials with customized composition and porosity can be suitable as electrodes in supercapacitor cells as well as adsorbents for CO2 sequestration. From personal experience, a deep knowledge in the structure and composition of the starting material is key to lead future advances in the field of sustainable biomaterials for different applications.The development of economic and environmentally benign processes for the scale-up production of materials, chemicals, and fuels is one of the challenges for the 21st century. Selected examples for bio(nano)material designs in view of their future applications clearly illustrates the potential of such bio-derived materials including carbonaceous materials and biocompatible nano-composites from natural sources in a wide range of applications. The use of low cost and alternative renewable precursors structures, therefore technologies and chemistries still require a significant amount of work in the future. 
In any case, innovative and emerging green technologies for the design of biomaterials can lead the way toward an economical and sustainable society for the betterment of mankind as illustrated by the key examples included in this contribution, which we sincerely hope can stimulate further work within the scientific community."} +{"text": "The author Elena Scherrer was inadvertently omitted from the list of authors during the submission process of the paper \"Diagnosis and Prediction of Neuroendocrine Liver Metastases: A Protocol of Six Systematic Reviews\" :e60). The author Elena Scherrer should have been added after Tobias Buerge in the original published manuscript. This error was corrected in the online version of the paper on the JMIR Research Protocols website on April 28, 2014 along with the publication of this correction notice. This correction notice has been sent to PubMed and the correct full-text has been resubmitted to Pubmed Central and other full-text repositories."} +{"text": "As the Australian population ages the demand for nursing care which focuses on responding to the needs of the older person will increase. Few newly graduated Registered Nurses (RNs) currently enter the aged care workforce and few select a career in caring for older people; yet older people are the largest patient group in most health care environments. This research, conducted by the Australian Hartford Consortium of Gerontological Nursing Excellence (Aus-HCGNE), explored how care of the older person is currently taught in Australian schools of nursing (SoN). The interview guide included questions about: whether care of the older person is taught in separate subjects or integrated across the curriculum; academics\u2019 qualifications; subject content; and aged care clinical placements. The head of each of the 33 Australian schools of nursing was contacted, invited to participate and asked to nominate the appropriate academics (undergraduate/curriculum co-ordinators) who would be the most appropriate person to participate in the interview. These academics were then contacted, written informed consent was obtained, interviews were scheduled and completed. This research is timely given the current Royal Commission into Aged Care Quality and Safety in Australia, one focus of which is nurses in residential aged care in respect to numbers, education and competence. This research will be completed by mid-2019. The results will be fed back to SoN to inform the development of their curricula and the preparation of future RNs who will undoubtably need to be expert in the care of older people across the health sector."} +{"text": "Economic analysis can be a guide to determining the level of actions taken to reduce nitrogen (N) losses and reduce environmental risk in a cost-effective manner while also allowing consideration of relative costs of controls to various groups. The biophysical science of N control, especially from nonpoint sources such as agriculture, is not certain. Widespread precise data do not exist for a river basin (or often even for a watershed) that couples management practices and other actions to reduce nonpoint N losses with specific delivery from the basin. The causal relationships are clouded by other factors influencing N flows, such as weather, temperature, and soil characteristics. Even when the science is certain, economic analysis has its own sets of uncertainties and simplifying economic assumptions. 
The economic analysis of the National Hypoxia Assessment provides an example of economic analysis based on less than complete scientific information that can still provide guidance to policy makers about the economic consequences of alternative approaches. One critical value to policy makers comes from bounding the economic magnitude of the consequences of alternative actions. Another value is the identification of impacts outside the sphere of initial concerns. Such analysis can successfully assess relative impacts of different degrees of control of N losses within the basin as well as outside the basin. It can demonstrate the extent to which costs of control of any one action increase with the intensity of application of control."} +{"text": "This paper presents data associated with the benchmarking of the Fair Trade USA (FT USA) Capture Fisheries Standard and the Marine Stewardship Council (MSC) Fisheries Standard against the Food and Agriculture Organization's Voluntary Guidelines for Securing Sustainable Small-Scale Fisheries in the Context of Food Security and Poverty Eradication (FAO Voluntary Guidelines). Benchmarking was used to determine the extent to which these standards, which promote sustainability in different ways, align with the FAO Voluntary Guidelines. The data represent a comprehensive analysis of these standards and are useful for beginning to understand the appropriateness of these standards for small-scale fisheries in developing regions of the world. For further interpretation and discussion please see \u201cA tale of two standards: A case study of the Fair Trade USA certified Maluku handline yellowfin tuna (Thunnus albacares) fishery\u201d. For guidelines that were comprised of several aspects, the guidelines were further broken down into sub-guidelines. For each guideline and sub-guideline, both the FTUSA and MSC standards were assessed to determine if there were requirements or components of the certification program equivalent to the FAO Voluntary Guidelines. A stoplight methodology was used to indicate if the respective standard did or did not fulfill a guideline of the FAO Voluntary Guidelines. Green was used to indicate explicit alignment, yellow partial fulfillment of the guideline, and red that the standard did not meet the respective guideline at all. Thus the production of the benchmarking data involved the authors judging the requirements of the standards against the FAO Voluntary Guidelines. Note that criteria that were considered outside the scope of a marine certification program were ignored. For example, the FAO includes land aspects in the first guideline (5.2), but they were ignored in the analysis."} +{"text": "In the biomedical field, human organ repair and regeneration represent very important and challenging tasks. Key challenges include (1) control of intrinsic multifunctional properties for enhanced bioactivity and drug/gene delivery; (2) control of bioactivity and biodegradation; and (3) understanding the molecular mechanisms by which nanomaterials control tissue regeneration. This Research Topic has attracted a series of papers that show the recent advances in the synthesis and biomedical application of bioactive nanomaterials, especially bioactive glass nanoparticles, and provide new insights on designing bioactive nanomaterials. 
In this Research Topic, The editors hope that the Research Topic \u201cMultifunctional Bioactive Nanomaterials for Tissue Regeneration\u201d will contribute to the progress of research and development activities in the field of novel bioactive nanomaterials for regenerative medicine, inspiring future work leading to the expansion of the biomedical applications of such bioactive nanomaterials.BL proposed the Research Topic and editorial and in charge of 2 manuscript for review process. AB revised the topic and editorial. XC was in charge of 1 manuscript for review process. All authors listed have made a substantial, direct and intellectual contribution to the work.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Tetanus is a vaccine-preventable disease of significant public health importance especially in developing countries. The WHO strategy for the elimination of maternal and neonatal tetanus recommends the promotion of clean delivery practices, systematic immunization of pregnant women and those in the reproductive age (15-49 years) and surveillance for neonatal tetanus. Implementation of the recommended strategy with the support of WHO, UNICEF and other partners has led to significant decline in number of cases and deaths due to NT over the last decades. The coverage with the second or more dose of tetanus toxoid-containing vaccines (TT2+) a proxy for Protection at Birth (PAB) for the WHO African region has risen from 62% in 2000 to 77% by 2015 Reported cases of NT declined from 5175 in 2000 to 1289 in 2015.The goal of eliminating maternal and neonatal tetanus by 2015 was missed, but some progress has been made. By the end of 2016, 37 out of 47 (79%) of the WHO AFR member states achieved elimination. The 10 member states remaining need additional support by all partners to achieve and maintain the goal of MNTE. Innovative ways of implementing the recommendations need to be urgently considered. Clostridium tetani bacteria whose resistant spores are present in soil and the environment. The causative organism is harboured by humans and animals that excrete the bacteria and spores. Once in a suitable anaerobic environment such as a contaminated wound, the bacteria multiply, releasing tetanus toxin, which is responsible for the symptoms and outcomes of the diseaseTetanus is a non-communicable disease caused by a potent neurotoxin produced by Despite decades of efforts to control and eliminate the disease, it still remains a major public health problem especially in remote areas with poor health care delivery and among the poor and illiterate populations. For instance a study in Benin City, Nigeria, identified lack of awareness of antenatal care services among the target population, under-utilisation of antenatal service, non-immunisation with tetanus toxoid vaccines, negative cultural beliefs, primordial cord care, lack of economic and decision making empowerment of the target population and lack of government commitment towards elimination of neonatal tetanus, as contributing towards the burden of neonatal tetanus in the areaIn the late 1980s, the World Health Organization (WHO) estimated that about 787,000 newborn babies died from tetanus within the first 28 days of their lives. 
The most recent estimate for 2015 has shown a decline in the estimated NT deaths to 34,000, which is a 96% decline. The WHO defines neonatal tetanus elimination as the occurrence at the district level of less than 1 case of NT per 1000 live births annually. Maternal and neonatal tetanus is thus considered eliminated when neonatal tetanus cases are below the defined threshold. The WHO strategy recommends the promotion of clean delivery practices, aimed at minimizing bacterial contamination. It also recommends systematic immunization of pregnant women and those of reproductive age (15-49 years) with a tetanus toxoid-containing vaccine. Additionally, it recommends the provision of at least 3 doses of tetanus toxoid to women of reproductive age who reside in areas classified as being at high risk, through supplemental immunization activities (SIAs). Vaccination will ensure that women have high enough antibody levels to pass to their unborn babies during pregnancy to prevent the disease in the first few weeks after delivery, when the risk is highest, in the event of bacterial contamination. Finally, case-based surveillance is used to identify NT cases and deaths as well as for risk assessment of populations. Progress in elimination has been slow and the AFR region missed its target by end-2015, due to slow implementation of the recommendations of the elimination strategy. As of the end of 2016, ten of the remaining 18 countries where NT remains a public health problem are in the African region. The manuscript summarizes the progress made in MNTE, highlights the challenges and suggests ways to accelerate the elimination of the disease. The review of the progress made in the elimination of maternal and neonatal tetanus in the African Region of the World Health Organization is based on the evaluation of the implementation of the WHO strategy for MNTE by Member States. The key components of the strategy are the promotion of clean delivery practices, systematic immunization of pregnant women and those of reproductive age (15-49 years) with a tetanus toxoid-containing vaccine in routine immunization, or provision of at least 3 doses of tetanus toxoid through SIAs to women of reproductive age who reside in areas classified as being at high risk for MNT, and the use of case-based surveillance to identify NT cases and deaths. The coverage with the second dose of tetanus toxoid (TT2+) in pregnant women serves as a proxy for Protection at Birth (PAB). WHO and UNICEF obtain data from countries, which together with surveys are used to establish an estimate of the coverage with TT2+. The results presented reflect data obtained on the implementation of these activities by Member States as reported in the WHO/UNICEF Joint Reporting Form. The high-risk approach targets areas prioritized as being at risk of maternal and neonatal tetanus. These countries have been conducting at least three rounds of SIAs. Between 2000 and 2015 a total of 78,985,175 women of reproductive age received at least two doses of tetanus toxoid in 31 countries of the WHO African Region. The level of skilled delivery remains low and needs to be increased to sustain the gains achieved towards elimination.
Skilled delivery rates rose from 36% in 2000 to 49% in 2015 in the East and Southern Africa regions, while the rate moved from 36% to 54% within the same period in the West and Central African regions of UNICEF, indicating that the skilled delivery rate is still low in countries of the WHO African region. The member states of the AFR region have made remarkable progress towards achievement of the goal of MNTE. Of the 41 countries that attained MNTE from 2000 to 2016 out of 59 priority countries globally, 28 (68%) are in the AFR region. This is in addition to the nine countries that were already classified as having achieved elimination in 1999. These countries are also being supported to sustain their efforts so as to maintain their MNT elimination status. This support and guidance include a shift from the use of TT-only vaccine to Td vaccine given as a booster in schools and to pregnant women during antenatal care. The remaining 10 member states that are yet to attain MNTE have their plans of action as part of the comprehensive multi-year plan for immunization and are at different levels of strategic implementation of their planned activities to achieve elimination. Despite the apparent progress made in MNTE, some challenges have been identified, including the lack of awareness of antenatal services, under-utilisation of services and the sub-optimal integration of services with immunisation. These challenges lead to missed opportunities, including non-immunisation with tetanus toxoid during antenatal care services. Other challenges, such as negative cultural beliefs, primordial cord care, lack of economic and decision-making empowerment of the target population and lack of government commitment towards elimination of neonatal tetanus, all affect the pace of elimination. Efforts are continuing through the scale-up of the Reaching Every District approach in member states, where most have adopted the 5-dose schedule in their immunization programmes to improve coverage with at least two protective doses of TT-containing vaccine. Based on the WHO strategic guidance, countries have carried out surveillance and prioritized high-risk districts for NT cases. Using the surveillance and other core and surrogate data obtained, the countries conduct at least three rounds of TT SIAs targeting women of reproductive age (15-49 years) in all districts classified as being at high risk for MNT. The aim of vaccination is to reach at least 80% of all women of reproductive age with three doses of the vaccine. Vaccination will ensure that the women have high enough antibodies against tetanus toxin to pass on to unborn children during pregnancy, which will protect against disease in the first few weeks of life when the risk is highest. Member states have implemented this strategy to ensure that targeted women in high-risk districts were reached. About 79 million women of reproductive age in the AFR region received at least two doses of TT from 2000 to 2015. An important component of the MNTE strategy is surveillance for cases and deaths due to NT. The Integrated Disease Surveillance and Response (IDSR) is the main strategy that is being followed for disease notification, reporting, and action in the region. Efforts are ongoing to integrate NT surveillance into the active acute flaccid paralysis (AFP) surveillance for polio using the vast infrastructure already in place. However, more cases are being documented through the IDSR than through the case-based surveillance for NT, and cases are not followed by the appropriate response.
Additionally, a significant number of the NT cases being reported through routine surveillance have been found not to be truly NT cases during programme reviews, pre-validation assessments or validation surveys, but more cases compatible with neonatal infections especially neonatal sepsis. More needs to be done in surveillance to obtain reproducible data which will inform programme implementation.Although vaccination of pregnant women or women of reproductive age stands out as the most important intervention, regular antenatal check-ups, safe and clean deliveries also significantly contribute to the prevention of neonatal tetanusThe shortage of midwives, cultural preferences of the location of births, economic factors and attitude of health staff are, among others, some of the reasons for the low skilled attendants at birth rate in AFR region.The goal of eliminating maternal and neonatal tetanus by 2015 has been achieved by 35 out of 47 (75%) of the WHO African Region member states. Two more countries attained MNTE in 2016, and the remaining 10 member states need to be supported to use the high-risk approach, including increasing funding, to achieve the elimination goal while those that have achieved the goal need to sustain the gains through the implementation of appropriate strategies depending on their local context.The manuscript has provided a brief update on the progress made in MNTE. However the paper is limited to only information obtained from the JRF, including coverage data which is not obtained from coverage surveys. The degree of accuracy of data is therefore limited. In addition, data may not have come from all parts of the countries including remote hard to reach or populations with poor access to health services, which are also the high risk areas for MNTE. Rumours have circulated in some countries of the regions in the past, which have alleged that the vaccine against tetanus is meant to control birth. Attention should therefore also be paid to communicating factually and effectively about the immense value of vaccination of women of childbearing age in preventing disease and deaths from NT. Effective reporting about vaccine safety particularly among adolescents will minimize hesitancy which can cause reduction in coverage.Additionally, surveillance for NT needs to improve substantially to include local response vaccination in the area that identified NT cases and community surveillance. This will help to bring the countries back on track to meeting the overall goal of MNT elimination."} +{"text": "Reports on dengue outbreaks at hospitals are extremely rare. Here the authors analyze a dengue outbreak at the Teaching Hospital-Kandy (THK), Sri Lanka. Our hypothesis was that the present outbreak of dengue was due to nosocomial infections. Our objectives were to illustrate epidemiological evidence for nosocomial dengue infections among THK workers and comparison of dengue incidence of hospital workers of wards that treat dengue patients with workers of other wards, to ascertain whether most nosocomial dengue incidences occur closer to where dengue patients are treated and vector larvae were detected, and to draw the attention of the medical community to the significance of hospital outbreaks, making suggestions on how to improve dengue preventive work at the THK. We calculated weekly dengue incidences for the hospital workers and for the surrounding Kandy district population, plotted epicurves, and compared them. 
We also compared these with the temporal changes of the numbers of patients who were admitted for other illnesses and then diagnosed with dengue, and the numbers of containers with vector mosquito larvae found on hospital premises. Dengue incidence of the hospital workers for the 24-week study period (2388 per 100000 population) was significantly higher than the incidence of the district (151 per 100000 population). Peaks of dengue incidence in hospital workers, the numbers of patients hospitalized for other illnesses contracting dengue, and numbers of containers with vector larvae occurred in the same week. The peak dengue incidence of the Kandy district happened six weeks later. There was no evidence to indicate blood contact causing dengue among hospital workers. The outbreak was controlled while dengue was rising in the district. This evidence indicates a probable nosocomial dengue outbreak. This outbreak adversely affected hospital workers, patients, and the community. We propose some measures to prevent such outbreaks. Dengue is a mosquito-borne emerging viral fever that can be life-threatening. The vector mosquitoes, genus Aedes, breed mainly in manmade water containers. Using data from the Sri Lanka Department of Census and Statistics and the Ministry of Health, we estimated the notified dengue case incidence of the Kandy district for 2015 (the previous year) to give the reader an idea of the background. It was 93 per 100000 population. THK is usually the hospital that manages the largest number of dengue cases in the Central Province. Dengue incidence in Kandy is usually high during mid-November to mid-February and again in May-September. The Aedes vector indices are high in Kandy, and the presence of the Aedes vector has been demonstrated by past studies. Workers of the THK and other large Sri Lankan hospitals in cities are at a greater risk of contracting dengue than the general population based upon these facts. The THK premises cover 14.6 hectares. During the month of May 2016, 57107 inpatients were managed at THK, the outpatient department (OPD) treated 32553 patients, and the clinics treated 77258 patients. The total number of employees in the same month was 5569, including student nurses. Dengue patients are usually treated in two pediatric and six internal medicine wards. In the THK, air conditioning is limited mainly to intensive care units (ICUs) and operating theatres, and mosquito screens are not in place. Therefore vectors can fly into and out of wards. Most dengue patients treated here are residents of the Kandy district. A hospital dengue preventive program has been active since 2012, in addition to the national and municipal council programs. This is a retrospective, descriptive study. Our hypothesis was that the present outbreak of dengue among hospital workers was due to nosocomial infections. Our objectives were as follows: (1) the main objective, illustrating epidemiological evidence for nosocomial infections among THK workers; (2) comparison of the weekly dengue incidence of THK workers with that of the Kandy district population; (3) comparison of the dengue incidence of hospital workers of wards that treat dengue patients with that of workers of wards without dengue patients,
expecting more workers to get dengue in the former; (4) mapping the distribution of probable nosocomial dengue cases among hospital workers and among patients hospitalized for other illnesses who then contracted dengue, together with the sites where Aedes larvae were detected within the hospital, and observing whether most nosocomial dengue occurred closer to where dengue patients were treated and larvae were detected; (5) drawing the attention of the medical community to the significance of hospital outbreaks of dengue by discussing how the present outbreak affected our hospital and the community; and (6) making suggestions on how to improve dengue preventive work at the THK. Dengue cases are diagnosed and managed at the THK based on the criteria of the 2012 national guidelines of the Health Ministry of Sri Lanka. Records of the Aedes mosquito larvae found on the hospital premises by the surveys conducted in the hospital during our study period were obtained from the Medical Officer of Public Health and the Public Health Inspector (PHI) of the THK. Those surveys covered the entire THK premises and were conducted with the help of the national Anti-Malaria Campaign (AMC) personnel according to their standard procedures. Identification of the species of larvae was confirmed by trained personnel of the AMC. To get a basic idea of the impact of this outbreak, we gathered information on how it affected THK workers, patients, and the community by speaking to some THK workers, patients, and the Medical Officer of Health of Kandy city. We gathered data from patients' notes and records at the infection control unit of the hospital for the period April 1st 2016 to September 16th 2016. The number of THK workers treated for dengue at THK, the buildings they work in most of the time, and the number of patients who contracted dengue while hospitalized for another illness (and the buildings where they were managed) were noted. The number (count) of hospital employees in May 2016 was obtained from the THK office and each relevant ward, and the estimated midyear population of the Kandy district for 2015 was obtained from the Sri Lanka Department of Census and Statistics. The reported number of dengue cases of the Kandy district during this period was obtained from the weekly epidemiology reports of the health ministry of Sri Lanka. The locations of the containers positive for Aedes larvae at THK provide good supportive evidence. Thus we studied temporal changes in the numbers and locations of such patients and larvae for the period of the outbreak. We created a graph employing Microsoft Excel charts and also created a spot map for that purpose. Then we estimated the dengue incidence for the THK workers, for THK workers who work at units treating dengue patients and the rest of the THK workers, and for the population of the Kandy district, and looked for statistically significant differences between them by calculating odds ratios and comparing proportions between those groups. We employed Openepi 3.01 software for that.
Patients hospitalized for other illnesses developed dengue while at the hospital during the same period and presence ofEpicurves of The peak of the dengue incidence of THK workers occurred six weeks before the peak incidence of the Kandy district population. The differences in magnitudes of the dengue incidences of the two populations are also clearly seen in 133 THK workers and 2132 people of the Kandy district contracted dengue infections during the study period. The dengue incidences of THK worker population and the Kandy district population for the period of study (24 weeks) were 2388 and 151 per 100000 population respectively. When the comparison of proportions of dengue patients of the THK and the Kandy district was performed using Openepi 3.01 software, the P value was <0.001.The temporal sequence of dengue incidences of the THK workers, the Kandy district population, the counts of patients admitted to THK for other illnesses and then diagnosed of dengue, and the counts of containers positive for Aedes mosquito larvae found in the hospital premises are illustrated in Aedes larvae was found at THK premises during our study period.The graph of the count of patients admitted to THK for other diseases and then diagnosed of dengue each week shows the same temporal sequence pattern as the graph of weekly dengue incidences of the THK workers and both have peaks in the week ending June 3rd favoring the idea that both groups contracted dengue from the THK premises. Both graphs differ from the pattern of weekly dengue incidences of the Kandy district population. The peak of the dengue incidence of THK workers coincides with the week where the highest number of containers with Aedes larvae were detected, also showing the duration of outbreak .Monthly total numbers of dengue patients warded at THK are depicted in 2. During the period of April 1st\u2013September 16th, 2016, 47% dengue cases of the THK were serologically (dengue NS1 antigen or IgM antibody for dengue) confirmed. The job categories of the 133 affected THK workers were as follows; 18 doctors, 39 nurses, 43 minor staff members , 19 student nurses and nursing tutors, and 14 workers of various other categories.The eight wards that manage dengue patients plus the OPD and emergency treatment unit have 504 workers. Out of that 17 had dengue. Out of 5065 other workers of the THK, 116 had dengue during our study period. The dengue incidence per 100000 population of workers of units treating dengue and rest of the THK workers were 3373 and 2291, respectively. Nevertheless there was no statistically significant difference in dengue incidence between the two categories . Out of 2352 beds of the THK (that include few cots and incubators) 115 (<5%) were in the areas relatively protected from mosquito access such as ICUs and the premature baby unit. They were distributed among buildings A, B, C, P, X, and MWhen asked about their concerns and views regarding this outbreak during first week of June 2016, 100 THK workers of all levels of hierarchy said that they were very worried about possibility of them contracting dengue. The first author in July 2016 asked all 48 staff members of one internal medicine ward how many times they contracted an illness that needed hospitalization as a probable result of working at the hospital (according to the best of their knowledge) during the last one year. No one had any such illness during the last year other than four members who were hospitalized with dengue in June. 
THK administration had to mobilize additional manpower from other wards to wards overcrowded with dengue patients, where a few workers were also on leave with dengue. The first author observed, and some of these workers agreed, that it took some time for them to become familiarized with their new wards. In June 2016, the first author asked 33 patients who presented in late stages of several illnesses to an internal medicine ward about the reason for the delay. One-third said fear of contracting dengue prevented them from coming earlier, but aggravation of symptoms and inability to afford private treatment made them come late. In the same month, some nondengue patients in internal medicine wards overcrowded with dengue patients left against medical advice, citing fear of contracting dengue as the reason. In July 2016, 14 clinic patients on regular medical clinic follow-up who defaulted that clinic in June said fear of contracting dengue kept them away. The Medical Officer of Health of Kandy city personally communicated that she had noticed a rise in reported dengue cases from the immediate neighborhood of the hospital in May/June and believed that dengue had spread from the hospital. At the THK and most other hospitals of Kandy, dengue is diagnosed clinically as well, by considering the patterns of the serial changes in platelet count, leucocyte count, hematocrit, and liver transaminases. As described above, the different temporal patterns of dengue incidences in the epicurves indicate nosocomial transmission. Containers with Aedes larvae also peaked in the same week as the peak dengue incidence of THK workers, indicating high vector density. We think the plausible explanation for the whole picture is an outbreak of nosocomial dengue. The intrinsic incubation period (IIP) of dengue virus is usually considered 4-6 days. During the first week of June the THK dengue control program was intensified with outside help. Surveillance for mosquito breeding places was conducted more frequently and vigorously in June and July. That also contributed to the identification of many containers positive for Aedes larvae. Some student nurses appear to have been infected when coming for ward work or from another student nurse or a worker with dengue viremia. Most patients diagnosed with dengue while being managed for another illness were also from the buildings near the building where dengue patients were managed. The number of patients exceeds the number of beds in internal medicine and pediatric wards, and many patients do not always utilize the nets provided. Once a probable nosocomial dengue case is identified, thermal fogging is employed to kill infected vectors in the vicinity. From the first week of June all these control measures were intensified with outside help; for example, fogging of the THK premises was performed twice a week for two months. Placement of ovitraps in selected locations to monitor the vector was introduced later. Considering the results of some past studies, we think that the creation of additional barriers against dengue transmission inside the hospital, by installation of screens against mosquitoes especially in units where dengue patients are managed and application of topical mosquito repellents to all potential dengue patients at all times and to the hospital workers during epidemics, may also be useful. Even though a much larger number of dengue patients were managed at the hospital in July and the weather was conducive for dengue transmission, infections among THK workers were controlled.
The decline of the dengue incidence of THK workers while that of the surrounding Kandy population rose further supports the idea that nosocomial dengue occurred, and indicates that even during the height of an epidemic in the community, vigorous preventive methods can control nosocomial dengue at hospitals; that is an important lesson to remember. Further strengthening of the THK dengue preventive program, with wider participation of all stakeholders based upon IMV principles, may be useful in the prevention of further outbreaks. A dearth of funds for preventive work is a key issue in implementing additional preventive measures in Sri Lanka and in most other countries severely affected by dengue, although prevention is better than treatment. According to one study, hospital management of an adult with dengue fever and with severe dengue hemorrhagic fever, respectively, cost the health ministry of Sri Lanka about 196 and 887 US dollars, but the expenditure on dengue prevention per year per reported case of dengue was only 97 US dollars in 2012. One Indian team in 2008, responding to an article in the Lancet journal, reported 21 serologically confirmed cases of nosocomial dengue in healthcare workers and suspected possible aerosol transmission. In 2009, 37 hospital workers and 21 patients being treated for other diseases acquired confirmed dengue from the National Hospital of Sri Lanka-Colombo. The magnitude of this outbreak motivated us to give a basic idea of its impact. The information we gathered by talking to those who were affected is very basic. However, it may help others to foresee what to expect in future outbreaks in other hospitals. The media highlighted this outbreak and sensationalized the death of one hospital worker as negligence. That led to anxiety among THK workers and to the practice of defensive medicine by some of them. Further studies may help identify dengue as an occupational hazard of workers in large hospitals in endemic countries. During the height of this outbreak, the THK administration had to mobilize additional manpower to wards treating dengue patients. Some Kandy residents had a transient fear of utilizing the hospital. According to the Medical Officer of Health of Kandy city, dengue spilled over to the immediate neighborhood of the hospital. Transportation methods and hubs are known contributors to the global spread of dengue. Vector breeding sites were found all over the THK premises. Dengue virus can be vertically transmitted in the vector. Hospitalized dengue cases are only a fraction of total dengue cases. THK workers who were treated for dengue at other hospitals were not counted. Some patients who appeared to have been infected while being treated at THK for other illnesses (and some bystanders who stayed with their sick children) went home and were then diagnosed with dengue within less than six days; they were not counted. Some hospital workers and patients who developed fever at the hospital may have been infected outside the hospital. The diagnosis of dengue was not serologically confirmed in all cases at the THK and in the Kandy district. Other factors, such as the window space available for vectors to fly in and out of each building and practices like using mosquito repellents, may also have influenced the number of cases in different buildings, but these factors were not assessed.
Investigations of each and every hospital staff member and all dengue patients for a longer period may give a better picture of the situation but that task is beyond the capability of the authors.The mid-2016 dengue outbreak at the THK affected hospital workers and users severely and is probably a nosocomial dengue outbreak. Such major outbreaks are likely to adversely affect the hospital workers, the smooth functioning of the hospital, and the communities they serve especially at a height of dengue epidemic, as was the case at the THK. Nonetheless such outbreaks can be controlled with additional effort. We have to generate more information on this issue from similar hospitals to confirm the findings of the present study and take steps to prevent dengue outbreaks at hospitals."} +{"text": "Remarkable progress in a range of biomedical disciplines has promoted the understanding of the cellular components of the autonomic nervous system and their differentiation during development to a critical level. Characterization of the gene expression fingerprints of individual neurons and identification of the key regulators of autonomic neuron differentiation enables us to comprehend the development of different sets of autonomic neurons. Their individual functional properties emerge as a consequence of differential gene expression initiated by the action of specific developmental regulators. In this review, we delineate the anatomical and physiological observations that led to the subdivision into sympathetic and parasympathetic domains and analyze how the recent molecular insights melt into and challenge the classical description of the autonomic nervous system. The \u201cgreat sympathetic\u201d... \u201cwas the principal means of bringing about the sympathies of the body\u201d. With these words Langley , 2 summaIn this review we first describe the anatomical and physiological findings that led to the formulation of the classical model of the autonomic nervous system, subdivided into sympathetic and parasympathetic subsystems, acting partly in antagonistic manner. The heart as a prime target of autonomic innervation is discussed with respect to the historical unfolding of the physiological function of both autonomic nervous pathways regulating heart activity, their anatomical trajectories and the positions of the neuron cell bodies involved. We then consider the electrophysiological and neurochemical features of autonomic neurons, to illustrate neuron diversity even within each of the autonomic subsystems and to compare the cranial, thoracolumbar and sacral autonomic domains, their constituent cells and targets. This paves the way to delineate neuron development and factors regulating the acquisition of neuron subtype- specific features determining functional properties. We highlight transcription factor fingerprints of preganglionic and postganglionic neurons at different axial levels that suggest a sympathetic rather than parasympathetic developmental profile of the sacral spinal cord outflow, which stands in stark contrast to the classical model of autonomic neuron domains. Then we discuss the limitations of our understanding of the mechanisms responsible for the selective innervation of postganglionic neuron populations by the appropriate preganglionic neurons. 
Together with the more detailed characterization of a range of autonomic neuron populations so far underrepresented in the molecular and developmental analysis, a comprehensive understanding of the cellular composition and connectivity of the autonomic nervous system is expected to emerge.During the last two decades of the 19th century a series of keystone publications on structure and function of autonomic nerves were released from the Gaskell and Langley labs that provided the foundation for the thinking about the \u201cautonomic\u201d nervous Gaskell attempted to replace the nomenclature of the efferent nerves, which to him in part appeared entirely artificial or hypothetical, by fundamental divisions of the nervous system where physiological and structural properties can be grouped together. In a series of landmark papers on the nerves innervating the heart , the visFor Langley , 14 it aAn important issue in Langley\u2019s synthesis is the dThe classical model of the sympathetic and parasympathetic nervous system provided an amazingly constructive framework for results coming in from the biomedical disciplines at increasing speed. The division into two subsystems acting at least in part in an antagonistic manner based on two neurotransmitter systems provided a very attractive framework for considering system biological problems and to confront a vast range of therapeutic challenges. The opposite action of sympathetic and parasympathetic stimulation on the ciliary muscle, the heart and the reproductive organs were but three examples where the attraction of this approach became apparent. Histological, electrophysiological, pharmacological and neurochemical approaches became the main motors to complete an anatomical and physiological description of cellular structure and function of the autonomic nervous system and its target structures , 17 as wThe interrogation of the neural control of the heart at the turn of the 19th century resolved the problem of whether the heart was able to move independently of the presence of the nervous system and the question for the contribution of the nervous system to the modulation of heart activity . Work onThe first convincing report on antagonistic regulation of the heart activity by the vagus and the sympathetic nerve is attributed to the brothers Ernst Heinrich and Eduard Weber using the electromagnetic rotation apparatus for experiments performed in frogs and confirmed in birds and mammals. Ernst Heinrich Weber reported in 1845 that galvanic excitation of the vagus nerve weakens the heart and slows down or interrupts the heartbeat, while excitation of the sympathetic restores, enhances and enforces the movement of the heart.At the time Langley and Gaskell released their key papers, Bayliss and Starling publisheA quantitative model for the regulation of heart rate by the parasympathetic \u2013 sympathetic antagonistic action was developed , yet theThe anatomical course and physiological impact of the cardio \u2013 inhibitor and cardio \u2013 accelerator fibers were studied in different mammalian species. They are exemplified by studies in dogs where electrical stimulation and surgical interruption of different cardiac nerve branches and the paravertebral sympathetic trunk were combined to determine the course of the preganglionic vagal and postganglionic sympathetic neurons \u201332. 
PostThe balanced interaction of the two tracks of the autonomic nervous system during regulation of heart function as reflex action under different stimulus settings and stressor regimes and its relation to heart dysfunction and autonomic conflict remain the focus of continuing interest , 48. UndEven though the anatomical connection of the vagus nerve, the sympathetic trunk and the heart was already recognized in the 18th century , anatomiLabeling of the entire vagus nerve or the target structures heart, lungs and stomach was performed in the cat by Kalia and discUpon HRP application to the heart and aortic arch of the dog, the greatest number of labeled postganglionic cell bodies is detected in the medium cervical ganglia, in addition in the cranial poles of the stellate ganglia and occasionally in the superior cervical ganglia . Upon HRThe connectivity between preganglionic and postganglionic neurons was analyzed by electrophysiology in the superior cervical ganglia of the guinea pig . This prCharacterization of the electrophysiological properties of preganglionic sympathetic neurons and their reflex regulation by sensory stimuli demonstrated a diversity of neuron populations that may subserve different functions \u201368. ThisCharacterization of the electrophysiological properties in combination with morphometric analysis and histochemical classification complemeIn addition to the location of the cell bodies of the autonomic neurons, their histological characterization provided increasing insight into their nature. In particular the neurons of the sympathetic ganglia became the subject of histological and molecular analysis that provided insight into the neurotransmitter phenotype \u201378, theiSuch a detailed knowledge is not yet available for preganglionic sympathetic or pre\u2013 and postganglionic parasympathetic neurons. Characterization of the postganglionic vagal neurons innervating the heart is still incomplete. Histochemical characterization of the heart ganglia demonstrated the presence of a cholinergic neuron population, considered to represent the postganglionic parasympathetic neuron population, and a population of small intensely fluorescent cells whose function is not fully characterized . The comOf similar interest will be the characterization of the preganglionic neurons in the sympathetic and parasympathetic systems. A very important finding was the discovery of the Phox2 transcription factors expressed in preganglionic motoneurons of the brainstem . In the Of note, application of HRP to the vagus nerve not only labels preganglionic parasympathetic neurons in the brainstem but also postganglionic sympathetic neurons in the cervical sympathetic ganglia. Even HRP application to the cervical vagus reveals labeling in the sympathetic trunk . MoreoveAfter HRP application to the duodenum and jejunum in the cat and the guinea pig, sympathetic neurons are not only labeled in the celiac ganglion but also in the cervical and stellate ganglia of the sympathetic trunk . Since cThus, the abdominal parts of the digestive tract are innervated by several domains of the sympathetic and of the parasympathetic nervous system. The cervical and thoracic domains of the sympathetic system target the duodenum and jejunum via postganglionic neurons from cervical and thoracic ganglia running in the vagus nerve. In addition, neurons from the celiac ganglia and\u00a0the splanchnic nerves are involved. 
Lumbar domains of the sympathetic nervous system target the colon via postganglionic neurons in colonic nerves from the mesenteric ganglia \u201396. The Postganglionic sympathetic neurons innervating abdominal viscera are located in the paravertebral chain of ganglia, in prevertebral ganglia, in isolated clusters of neurons found in the aortic plexus and plexuses accompanying arterial vessels as well as the superior hypogastric plexus, and in the pelvic ganglion or pelvic plexus as it is called in species where the condensation into a well demarcated ganglion is not so prominent.The prevertebral ganglia \u2013 the celiac, superior and inferior mesenteric ganglia \u2013 were subject to morphological, neurochemical and electrophysiological characterization as described for the cervical and thoracic sympathetic ganglia , 105\u2013107The pelvic ganglion or plexus is unique due to its dual composition reflecteFostered by the remarkable progress in electrophysiological instrumentation, histochemical techniques and pharmacological approaches, research on the autonomic nervous system in particular during the second half of the 20th century established detailed knowledge of the cellularity and connectivity of the sympathetic and parasympathetic system. The technique of retrograde neuronal tracing by HRP application to sectioned nerves or into target tissues provided a breakthrough in the localization of preganglionic and postganglionic autonomic neurons. Extracellular recordings allowed the characterization of neuronal behavior under control and experimental conditions thus demonstrating the presence of different populations of preganglionic\u00a0and postganglionic neurons providing pathways to distinct target tissues as exemplified by the sympathetic supply of different vascular beds and other targets. Intracellular recording techniques provided access to the electrical properties of neurons and again demonstrated the presence of different populations of postganglionic neurons in the accessible sympathetic ganglia. In addition they allowed the study of synaptic input from preganglionic neurons to address questions concerning the synaptic integration of autonomic neuronal activity. With these and other techniques a detailed picture of the cellular structure of the autonomic nervous system was developed for adult mammals and assembled into a model of functional organization of two partly antagonistic systems mediating a homeostatic control of key organs.During the first half of the 20th century the basic science approach towards the autonomic nervous system was largely restricted to mature mammalian organisms such as cats and dogs. This changed dramatically since the 1950\u2018s. The discovery of nerve growth factor (NGF) has put the development of postganglionic sympathetic as well as primary sensory neurons at center stage of Developmental Neuroscience. In the 1970s, immunohistochemical techniques provided specific access to the rate limiting enzyme of catecholamine biosynthesis. Developmental analysis at that time focused on postganglionic sympathetic neurons, expression of the noradrenergic marker enzyme tyrosine hydroxylase and the role of neurotrophic factors in their development. During the 1990s, this focus was considerably strengthened and widened by in situ hybridization for mRNA detection in developing tissues and the inclusion of different markers of neuron populations such as several enzymes of the noradrenaline biosynthesis cascade and transporter proteins. 
Growth factor receptor protein subunits and transcription factors responsible for cell type specific gene expression and its regulation during development could be analyzed. At the turn of the 21st century, this approach in combination with viral overexpression studies in chick embryos and mutational inactivation studies in mice provided critical insight into the differentiation of the noradrenergic transmitter phenotype in sympathetic neurons and the growth factors and transcription factors involved. In recent years this analysis has been extended to compare sympathetic neuron development at thoracolumbar levels to parasympathetic neuron\u00a0development at cranial levels. In addition, autonomic neuron development was studied at sacral levels. This resulted in the critical finding that both preganglionic as well as postganglionic neurons at sacral levels are related in the regulation of their differentiation with thoracolumbar sympathetic but not cranial parasympathetic neurons.The discovery and characterization of NGF and its action on peripheral neurons initiated the molecular interrogation into the sympathetic nervous system \u2013114. DifUncoupling of the survival function of NGF by deletion of the proapoptotic Bax2 gene allowed the analysis of additional functions in vivo . The finThe introduction of a range of histochemical and biochemical techniques in the second half of the 20th century boosted developmental analysis of autonomic neurons with particular emphasis on sympathetic neurons. Accessibility of catecholamines and the rate limiting enzyme in their synthesis, tyrosine hydroxylase (Th), by formaline\u2013 induced histofluorescence, enzyme activity measurements and immunohistochemistry established Th as phenotypic marker for catecholaminergic neurons, their biochemical activity state, protein expression and gene transcription. The histofluorescence technique was applA remarkable advantage of the histofluorescence technique was the possibility for combination with nucleolus staining in \u201cquail-chick\u201d chimeras resulting in the breakthrough in the analysis of the neural crest contribution to the postganglionic sympathetic and parasympathetic system . TransplAnother critical finding was the observation that sympathetic neuroblasts are still able to divide after noradrenergic and neuronal differentiation. Combination of catecholamine detection by histofluorescence techniques and demonstration of cell division by incorporation of radiolabeled thymidine into newly synthesized DNA demonstrated that sympathetic neuroblasts in chick embryos are still dividing when they have already acquired neuronal and noradrenergic properties , 135. ThWith in situ hybridization a highly specific as well as selective method was introduced to allow detection of gene expression by mRNA labeling in tissues and even single cells. Comparison of gene expression onset for noradrenergic markers and transcription factors demonstrates an early onset of Th and dopamine beta \u2013 hydroxylase (Dbh) transcript detectability following the paired homeodomain protein transcription factors Phox2a and Phox2b in mice and in chick , 140. 
MuGene overexpression studies in chick embryos as well as mutational inactivation in mouse embryos demonstrates that a set of transcription factors including Phox2a and 2b, Gata2 and 3, Hand1, 2 and 3 and Ascl1 interact as a network to accomplish sympathetic neuronal differentiation , 145 are required for the initiation of differentiation in both systems , 156\u2013158With short delay after initiation of the noradrenergic marker gene induction and several days before target tissue innervation expression of synaptic protein genes commences in chick sympathetic ganglia. This is shown for synaptotagmin I, a critical calcium sensor of the transmitter vesicle membrane, and neurexin 1, a crucial organizer of protein complexes within the pre-synaptic membrane and binding partner to post\u2013synaptic neuroligins . During The development of synapses and transmission in chick ciliary ganglia has been investigated by electrophysiological and ultrastructural analysis . InitialThese studies outline aspects of synapse formation in a selected set of autonomic ganglia. They demonstrate induction of the genes coding for synaptic proteins before the onset of preganglionic and postganglionic contact formation. On the other hand they show the modulatory role of preganglionic innervation as well as target contact on the specification of the synaptic machinery. RNA sequencing analysis demonstrates enormous variation in transcript levels between neurons within sympathetic ganglia , which iPostganglionic parasympathetic neurons are generated later than sympathetic neurons and are located often at sites near their target tissues. The unexpected finding that these parasympathetic neurons are generated from glial progenitor-like cells associated with nerves , 171 expComparison of marker gene expression in autonomic postganglionic neurons of the mouse embryo from cranial to sacral levels discloses a similarity between cells in the paravertebral sympathetic chain and the pelvic ganglion and critical differences to those of cranial parasympathetic ganglia, strongly suggesting a homologous developmental origin of thoracolumbar and sacral postganglionic autonomic neurons . While nMutational inactivation of Olig2 required for motoneuron differentiation results in the lack of preganglionic nerves, which surprisingly did not affect the size of the pelvic ganglion . In addiAnalysis of marker genes at the sites of preganglionic neuron development in the mouse embryo demonstrates two distinct patterns of transcription factor expression at cranial as compared to thoracolumbar and sacral levels . While TBy electrophysiological recording and cell labeling techniques synaptic input from parasympathetic and sympathetic preganglionic neurons to individual postganglionic cells was quantified to study the features of ganglionic information processing. Analysis of the rodent submandibular ganglion , 179, thDuring postnatal development the number of preganglionic fibers innervating individual neurons in the submandibular ganglion decreases while the number of synaptic boutons increases resulting in ganglion cells innervated by single preganglionic neurons in the rat . CompariIn the guinea pig superior cervical ganglion, a preganglionic fiber is estimated to contact 50 to 200 postganglionic neurons . An indiCrucial progress came from NGF mutant mice where the loss of sympathetic neurons was prevented by the additional mutation of the proapoptotic gene Bax . 
In newbThus, innervation and synapse formation in sympathetic ganglia are controlled by different families of growth factors, in particular neurotrophin signaling. The corresponding factors involved in synapse formation in parasympathetic ganglia are not resolved. In addition, the factors that regulate the establishment of specific connections in different autonomic pathways are unknown.The question of how the specific connections from preganglionic neurons to the target tissue are brought about is unresolved. This entails a number of different problems of distinct interest. A very productive approach has been applied to somatic motoneuron development where trMolecular understanding is, however, emerging concerning the question of the formation of the paravertebral sympathetic strands and their projections. Neuropilin 1 and 2, receptors for semaphorins and vascular endothelial growth factor, are expressed in postganglionic sympathetic neurons . MutatioTwo key unresolved problems are the transition from axon outgrowth to target innervation and the establishment, competition for and maintenance of synapses. These critical events during the development of a neural network composed of diverse target\u2013specific pathways are incompletely understood for the autonomic nervous system. To what extent population specific cell surface markers may play a role in this process is still unknown. The critical issue of how the neural circuits to the different, in part closely associated autonomic targets such as sweat glands and the neighboring blood vessels are selectively innervated is still open for analysis.A range of cell biological, molecular and surgical techniques established a detailed knowledge of the development of the noradrenergic transmitter phenotype in postganglionic sympathetic neurons including the growth factor signaling systems and the transcription factors required for induction, differentiation and maintenance of the cells. The differentiation of the cholinergic postganglionic parasympathetic neurons is not characterized in its molecular mechanism. Yet, the transcription factors expressed in these neurons during development are known at least in part. This is also the case for the preganglionic neurons in both the sympathetic and parasympathetic system. With the increasing characterization of the transcription factors expressed and required in sympathetic and parasympathetic neurons, the close relation of thoracolumbar and sacral autonomic neurons and the difference from cranial autonomic neuronal development at both preganglionic and postganglionic levels is recognized and provokes the renaming of the sacral division of the autonomic nervous system as sympathetic.With electrophysiological and histochemical techniques, subpopulations of postganglionic sympathetic neurons have been characterized and their target tissues described. RNA sequencing techniques are now complementing this approach to molecularly specify the efferent sympathetic outflow pathways. This approach is still lacking for the postganglionic parasympathetic neurons and the preganglionic neurons of both branches of the autonomic nervous system.A major quest concerning the development of the autonomic nervous system remains the understanding of process outgrowth from both preganglionic and postganglionic neurons and the establishment of neuronal specificity. Some molecular players have been identified but the picture is far from complete. 
The question how the outgrowing neuronal processes are directed to and choose among alternative target structures and how the strength of the synaptic connections is established and regulated are key problems for the coming years.The classical scheme of the autonomic nervous system as delineated by Langley and Gaskell included a cranial and a sacral parasympathetic domain divided by a thoracolumbar sympathetic domain. One critical argument for the classical subdivision of the sympathetic and parasympathetic systems was the anatomical segregation along the body axes of the preganglionic outflow from the cranial, the thoracolumbar and the sacral level by gaps devoid of white communicating rami containing myelinated preganglionic fibers. In addition to the anatomical location of the preganglionic neuronal cell bodies, a range of arguments are brought forward which do not provide unequivocal criteria. These weaker arguments include the neurotransmitter phenotype, the distance from postganglionic cell bodies to target tissue and the opposite action of sympathetic and parasympathetic stimulation on a range of target organs. Yet, the classical subdivision of the domains in the autonomic nervous system is only partially supported by the molecular signatures specifying cellular differentiation.With the analysis of the transcription factors expressed in the precursors and differentiating neurons of the cranial, thoracolumbar and sacral domains involved in the generation of preganglionic and postganglionic autonomic neurons , strong Alternatively, the term spinal autonomic appears appropriate. Indeed, based on comparative anatomical analysis between vertebrate classes, the term \u201cspinal autonomic\u201d was proposed earlier to include the sympathetic and the sacral, then called parasympathetic, autonomic system Fig. 3)3). It reAs already emphasized by Langley , the terThe recent description of developmental regulators involved in the differentiation of autonomic neurons and the proposed renaming of the \u201csacral autonomic outflow\u201d initiated a heated dispute \u2013208. TheFrom the rebuttal discussions , 209 it The visceral motoneurons at hindbrain level are derived from different progenitor populations than the somatic motoneurons whereas both are derived from the same progenitor population in the spinal cord . This isThe amazing progress with RNA sequencing techniques that allow quantitative detection of message transcribed from each gene within the cellular genome provided a quantum leap in the characterization of gene expression patterns within cell populations and single cells. Among autonomic neurons detailed data are available for postganglionic sympathetic neurons derived from stellate and thoracic ganglia , 83. GenMore challenging but at least as important will be the characterization by RNA sequencing of preganglionic sympathetic and parasympathetic neurons at all levels of the body axis. Again a host of questions will be addressed and the comparison to somatic motoneurons on the one hand and the subpopulations of preganglionic autonomic neurons on the other hand can be expected to provide crucial progress. A key problem linked to this approach is the molecular understanding of the pathways taken by preganglionic neurons from either system. 
Aspects of this topic are the understanding of the differences between preganglionic sympathetic neurons targeting postganglionic neurons in the paravertebral sympathetic chain, those targeting neurons in the prevertebral ganglia and those destined to innervate more distal plexuses. Another crucial point is the understanding of the molecular control of target innervation in cases where the sympathetic and the parasympathetic system innervate different sites within the same target tissue, i.e. the ciliary body, the heart and the pelvic organs.Already some 300\u00a0years ago the sympathetic paravertebral chain and vagus nerve, then called the great and the medium sympathetic , were cThe molecular and physiological characterization of the autonomic neurons has provided very refined knowledge of neuron subpopulations distinguished by their neurochemistry, electrical activity and reflex behavior upon sensory stimulation. The integration into autonomic networks is only partially understood, however. The molecular identity of the preganglionic and postganglionic neurons synaptically linked in distinct autonomic pathways is still undefined at the level of the molecular players mediating the specific synaptic connection. In addition, the postganglionic sympathetic neurons targeting major organs such as heart, lung and kidney are not yet characterized by their full transcriptional fingerprint. Thus, a significant gap remains between the cellular characterization of autonomic neurons and the understanding of their embedding in neural networks mediating homeostatic control. Two key systems coordinating organ functions, the cardio \u2013 respiratory balancing of respiration and perfusion , 214 andTaken together, the molecular and developmental characterization of selected autonomic neuron subgroups awaits tThe synthesis of physiological characterization and molecular identification in combination with developmental analysis of autonomic neurons promises not only a comprehensive understanding of the neural networks underlying homeostatic regulation of the body functions but also of their emergence during vertebrate development and evolution."} +{"text": "Anatomic variations involving arterial supply of the large intestines are of clinical significance. Variations range from the pattern of origin, branching and territorial supply. The colon, the part of the large intestine, usually receives its arterial blood supply from branches of the superior and inferior mesenteric arteries. However, anatomic variation in this vascular arrangement has been reported, with vascular anatomy of the right colon being described as complex and more variable compared with the left colon. During routine cadaveric dissection of the supracolic and infracolic viscera, we encountered an additional mesenteric artery originating directly from the anterior surface of the abdominal aorta between the origins of the superior and inferior\u00a0mesenteric\u00a0arteries. This additional \u201cinferior mesenteric artery\u201d ran obliquely superiorly toward the left colon giving rise to two branches supplying the distal part of the ascending colon, the transverse colon and the proximal part of the descending colon. Awareness and knowledge of this anatomic variation are important for radiologists and surgeons to improve the quality\u00a0of surgery and avoid both intra- and postoperative complications during surgical procedures of the colon. 
The abdominal aorta usually gives rise to three anterior unpaired branches, the celiac artery (CA), superior mesenteric artery (SMA) and inferior mesenteric artery (IMA). The SMA, through the ileocolic, right colic and middle colic arteries, supplies structures derived from the midgut. These structures include the cecum, ascending colon and proximal two-thirds of the transverse colon. The IMA, through its branches, the left colic, sigmoid and superior rectal arteries, supplies structures derived from the hindgut, including the distal third of the transverse colon, descending and sigmoid colon and rectum. Anatomical variation in this classical arrangement has been reported. Awareness of these and other vascular variations is of clinical significance during various surgeries involving the colon to avoid complications such as intraoperative hemorrhage and colonic ischemia. The variant artery described in the present case report arose from the ventral aspect of the aorta, between the SMA and IMA, and gave rise to several branches that supplied the distal part of the ascending colon, the transverse colon and the proximal part of the descending colon. Normally, these colonic regions are supplied by the middle and left colic arteries from the SMA and IMA, respectively. We discuss the possible embryological basis and the surgical implications of this vascular variation. In this case report, we describe a variant mesenteric artery originating directly from the abdominal aorta between the origins of the SMA and IMA in an 85-year-old Caucasian female body donor during routine dissection of the supracolic and infracolic regions. The cause of death in the donor was indicated as myocardial infarction. No past medical records were available for review. During the exposure of the celiac artery and the superior and inferior mesenteric arteries, the intestinal loops were moved to the right and the peritoneum covering the posterior abdominal wall was removed. The celiac artery and the superior mesenteric artery originated independently from the ventral surface of the abdominal aorta about 1\u00a0cm apart. Between the SMA and IMA, an additional mesenteric artery was found to originate from the ventrolateral surface of the distal aorta, about 2\u00a0cm proximal to the origin of the IMA. The finding in this case represents a variation in the branching pattern of the arteries supplying parts of the colon. The branches supplying the distal part of the ascending colon, the transverse colon and the descending colon were found to originate from a variant branch of the abdominal aorta. Usually, the middle and left colic arteries, branches of the superior and inferior mesenteric arteries, respectively, supply these parts of the colon. The variant artery reported in this case took its origin from the abdominal aorta between the SMA and IMA before giving rise to two branches that correspond to the middle and left colic arteries. Case reports of variant arteries arising directly from the aorta between the SMA and IMA have previously been documented. The variations in the vascular supply of the small and large intestines can best be understood by appreciating the early development of the small and large intestines and their vascular supply. During the embryological period, each of the paired dorsal aortae gives rise to three sets of paired arterial branches: the dorsal, lateral and ventral intersegmental vessels.
There is great similarity in the origin, course and branching pattern of the third mesenteric artery observed in our case report and a similar case reported by Benton and Cotter. The pattern of blood supply to the colon has implications for the surgical techniques and treatment outcomes in various pathological conditions involving the colon. In the surgical treatment of invasive colonic malignancies, for example, resection of the involved colon is usually accompanied by ligation of the corresponding arterial branches. An in-depth knowledge of the vascular anatomy of the colon and the associated pattern of collateral variation is necessary for surgeons to avoid both intra- and postoperative complications in surgical procedures involving the colon. Making sure that patients undergo radiological investigations such as selective angiography, CT and MDCT angiography, which provide better visualization of vascular variations, can avert some of these complications."} +{"text": "Older Americans living in the community who need help with basic activities of daily living overwhelmingly rely on unpaid care provided most commonly by working-age family members. Because unpaid family care limits the demand for nursing facilities and reduces expenses paid by Medicaid and other government programs, previous estimates of its economic value have mostly focused on estimating the benefits of unpaid family care. However, to assess accurately the overall economic value of unpaid family care and define better the scope for policy intervention, it is also important to account for the costs of such care, yet our knowledge of their magnitude remains limited. This study assesses the impact of unpaid family caregiving on the likelihood of working and hours worked for caregivers, and calculates the related cost of forgone earnings today and in 2050. To do so, it matches family caregivers from the National Study of Caregiving with non-caregivers from the Panel Study of Income Dynamics, and uses projections from the Urban Institute\u2019s DYNASIM microsimulation model to inform calculations of future costs of forgone earnings. Results suggest that the cost of forgone earnings attributable to caregiving is currently about $67 billion. By mid-century, it will likely more than double, outpacing the growth of the disabled older population as the share of better-educated caregivers with higher earning capacity increases. Policymakers can use these results to inform their current and future policy efforts aimed at assisting family caregivers who are facing the challenge of balancing work and caregiving responsibilities."} +{"text": "Plant breeding is based on phenotyping, not only because of tradition but also because of essence. A plant phenotype is the result of the interactions between the genome of a stationary plant and all the micro- and mega-environments encountered during its life span.
Over the recent years, we have witnessed an explosion in state-of-the-art technologies developed through collaborative efforts of multidisciplinary teams to assist the process of high-throughput plant phenotyping in plant breeding , first oYet, plant phenotyping is still the bottleneck for breeding and farming and the What so far has not been seriously considered, but we assert it should occupy a central part in the relevant discussions, is the choice of the appropriate unit of plant phenotyping in the field, so that the efficiency of selection in plant breeding programs and the corresponding measurable genetic gain are maximized. Should the community continue using the multi-plant, densely grown field plot as the unit of phenotyping and evaluation for plant breeding purposes or should we consider more efficient approaches based on the maximization of a plant\u2019s phenotypic expression and differentiation?To increase efficiency in plant breeding, we advocate that the most appropriate unit of plant phenotyping for selection purposes should correspond to the individual plant grown unhindered in the absence of competitive interactions so that phenotypic expression and the corresponding phenotypic variance are maximized, the coefficient of variation (CV) of single-plant yields is minimized, and spatial heterogeneity is effectively controlled. These conditions are met when plants are allocated in the field according to one of the honeycomb selection designs (HSD) . In this opinion paper, we present a list of some commonly encountered barriers during a plant breeding program, including the so-called pre-breeding activities that exploit the potential of crop wild relatives (CWR), and discuss how these are successfully faced once the unit of plant phenotyping becomes the individual plant grown as described. Results from our long-term research focusing on the application and further development of the principles related to the HSD and the prognostic breeding paradigm in varioex situ materials in gene banks need to be phenotyped. Seed supplies are not an issue when working with individual plants in HSDs until enough homogeneous seed is gradually generated during the next few years, so as to permit replications, ranging between two and six, of densely grown plots. Additional testing locations are possible to be included only in the latter stages of the program. A similar problem of limited seed supply is encountered when crop wild relatives and in HSDs Figure 1This barrier refers to the masking effects of interplant competition on selection efficiency and the per se and the plant stability index, the latter being quantified on a single-plant basis for the first time. The whole-plant prognostic equation PPE incorporates the two components:Barrier 2 is overcome through a) the partitioning of CYP into components measured concurrently and precisely at the single-plant level under conditions excluding interplant competition and b) twhere x is the plant yield in grams, ensities .We advocate that the critical question is not whether the entry ranking in densely grown plots corresponds to the ranking of individual plants in wide distances, but whether we truly need genotypes that behave differently under the two conditions. New varieties should have the genetic makeup that renders them density-neutral or density-independent . This isThis barrier relates to the masking effects of soil and spatial heterogeneity. 
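As an illustration of the selection criterion invoked above, namely minimizing the coefficient of variation (CV) of single-plant yields, the following minimal Python sketch computes the CV for two hypothetical sets of single-plant yields; the variable names and numbers are illustrative assumptions, not data from the studies cited here.

```python
import statistics

def coefficient_of_variation(yields_g):
    """CV (%) of single-plant yields: standard deviation expressed relative to the mean."""
    return 100.0 * statistics.stdev(yields_g) / statistics.mean(yields_g)

# Hypothetical single-plant yields in grams (illustrative values only).
dense_stand  = [12.1, 30.4, 8.7, 25.9, 15.3, 41.0]   # interplant competition inflates the spread
wide_spacing = [88.2, 95.1, 79.6, 91.4, 84.3, 90.0]  # competition-free plants yield more uniformly

for label, data in (("dense stand", dense_stand), ("wide spacing", wide_spacing)):
    print(f"{label}: CV = {coefficient_of_variation(data):.1f}%")
```

Under the argument above, the lower CV obtained for the widely spaced plants is what makes the individual, competition-free plant a more reliable unit of phenotyping for selection.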
The development of HSDs enables the effective sampling of soil heterogeneity, ensuring that all plants and sibling lines are allocated under comparable growing conditions in both fertile and non-fertile spots. In all HSDs, each plant in the trial is found in the middle of a i) circular, ii) complete, and iii) moving replicate and each sibling line belongs to a moving and triangular grid that covers uniquely the whole spectrum of spatial heterogeneity plants advanced to the next generation, the higher will be the response R to selection. The same is expected when decreasing the generation time interval that is also related to the greater number of progenies.Barrier 5 relates to the conditions that satisfy the so-called breeder\u2019s equation, originally derived from the practice of animal breeding , that deIn the practice of animal breeding, the unit of evaluation for selection has always been the individual animal raised with no competitive interactions for resources , as eachThis important barrier concernsThis barrier relates to the reasons underlying the lack of automation in plant breeding coupled with the lack of standards in the phenotyping trials. The adoption of automated phenotyping in plant breeding is still in its infancy . The neeEach plant in a HSD trial possesses a unique position identification number, for example 8-12-7, where number 8 gives the number of the horizontal row, number 12 gives the plant position on the row, and number 7 gives the number of the design code corresponding to the particular sibling line. Attached to this number is the corresponding unique value of the plant phenotyping or prognostic equation. The intrinsic properties of the designs that provide a matrix of standardized motifs across environments and the wide distances between plants facilitate the use of geo-referencing methods. Plants are ranked according to the value of their phenotyping equation and selection of the \u201cbest\u201d in each environment is an unbiased process, rendering the same results regardless of the person performing the analysis. It is thus amenable to full automation and robotization.Following the discussion on the barriers, we would also like to draw some attention to some recent references and work that support the above and highlight the novel possibilities that unfold. There is a recent acknowledgment that \u201cit has become measurably harder to generate ideas and new approaches that result in real gains\u201d and the Importantly, there is good convergence of the concepts described above and the highly successful agronomic practices of the System of Rice Intensification (SRI) . Thus, tThe field phenotyping community can benefit by giving due consideration to the suggested innovations towards overcoming current barriers in plant phenotyping and phenomics, while serving also the developments in plant (epi)genomics. The involvement of international, multidisciplinary teams will contribute to the deeper understanding of the methodology. 
This will, in turn, unfold additional options and facilitate the successful automation, standardization, and robotization of large-scale phenotyping for plant breeding.DF and MO conceived the work and all authors contributed to the final form of the manuscript and securing relevant funding.The Cyprus Research Promotion Foundation in the frame of the research project MAGNET (INFRASTRUCTURES/1216/0032) supported this work.The authors declare that the opinion article was written in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "J Dent Res Dent Clin Dent Prospect 2017; 11(3):189-194, the name of the first author was misspelled. The correct name of the first author is Farzad Esmaeili. The authors\u2019 affiliation with Dental & Periodontal Research Center of Tabriz University of Medical Sciences was also missing from the article. The original version of the article has been updated to reflect these corrections.In the article entitled \"Efficacy of radiographic density values of the first and second cervical vertebrae recorded by CBCT technique to identify patients with osteoporosis and osteopenia\" which appeared in"} +{"text": "The systematic review published by Naylor et al. in April 2018 highlights methodological assumptions and biases that occur in studies investigating the burden of antimicrobial resistance (AMR). They note that, due to both the large diversity of statistical approaches and perspectives chosen, the current evidence base of the burden of AMR is highly variable. Certainly, these conclusions are valid and the authors present a very thorough analysis of the currently published literature with a broad array of drug-bug combinations. But readers are left with limited direction of estimating the current best available estimate of the health and economic burden of AMR. Such estimates are desperately needed to inform clinical management and for priority setting activities and initiative to curbing the global threat of AMR. It was with great interest that I read the recent systematic review by Naylor et al.(2018). This comprehensive systematic review investigated the approaches to estimating the burden of AMR for both gram negative and gram positive bacteria. They found 214 articles, comparing cases with resistant infections to both susceptible and uninfected controls.n\u2009=\u20097; 63%) show odds ratio (OR) of mortality to cross the line of significant (OR\u2009<\u2009=1), compared to the other groups which show greater odds when compared to un-exposed controls. Hence, selections of comparator groups that will answer the research question best should be clearly stated and be consistent to ensure estimates remain comparable between studies.The systematic review visualizes the eligible studies in a forest plots without performing a meta-analysis, which appears appropriate given the large heterogeneity of study design and outcomes measured. Despite this, authors go on to state that the majority of studies \u2018..found resistance to be associated with higher mortality .\u201d Figure 2a depicts 24 comparison groups with only 11 studies directly measuring the impact of resistance on mortality, by comparing resistant cases with susceptible. From these direct comparisons, majority , demonstrate the variety of methodologies used in studies generating excess length of stay (LOS) estimates ranging from as little as an additional 2.5\u00a0days 8, demons. 
Timing All studies investigating the burden of AMR should be assessed for risk of bias. We have developed a tool to prioritize the quality of studies based on previously identified methodological caveats when measuring the burden of AMR . This toIn combination to the recommendations listed by Naylor et al. (2018) a prioritization of the quality of included studies would provide the reader with an appreciation of the strength of the estimates and allow a more informed judgement of the validity of AMR-attributable mortality , excess length of stay and costs (Table 3). Such solutions should be in place to ensure the higher up the hierarchy the study design is positioned, the more rigorous the methodology and hence the more likely it is that the study design can minimize the effect of bias on the results. This would provide a level of clarity as to the ideal methodologies to use when designing studies and to guide the readers in the direction of deciding the best currently available estimates of burden of AMR."} +{"text": "This dataset focuses on the causes and effects of sick building syndrome among users of selected facilities in Lagos. A mixed research approach of field measurement and cross-sectional survey was adopted. Descriptive statistics were implemented on the data acquired and are reported on tables and figures. The significance of this data leverages on providing insight and consciousness of sick building syndrome to users and occupants of constructed facilities. The survey dataset when analyzed can show direction on physical quantities levels that can be experienced in public buildings in tropical region. In achieving the objectives of the dataset, opinions of 30 staff of three different banks and 46 users and worshippers in the university\u05f3s worship centers in different locations on campus were sampled through structured questionnaire. Personal data characteristics of the respondents are shown and summarized in 2The dataset adopted cross-sectional survey design and physical measurement methods. The data purposively sampled 100 respondents who were users and worshippers in the church and mosque and staff of three commercial banks within the University of Lagos, Akoka campus. The sample frame consists of 76 valid questionnaires comprising 30 bank staffers and 46 worshipers. Recent studies"} +{"text": "Systemic lupus erythematosus (SLE) and anti-phospholipid syndrome (APS) are frequently discussed together and perceived as two closely related diseases . Indeed,Caneparo et al.; Han et al.; Knight et al.; Sakata et al.; Weeding and Sawalha) report pathogenic pathways which appear to operate primarily in patients with SLE. The discussed mechanisms suggest that multiple heterogeneous pathways operate preferentially in patients with SLE rather than in patients with primary APS. For example, the clinical and histological characteristics of renal involvement in patients with APS definitely differentiate the two entities. In particular, a thrombotic vasculopathy involving medium/large and in some cases small vessels is the main pathogenic mechanism in renal APS in contrast with the inflammatory vasculitis which is characteristic of lupus nephritis . 
Furthermore, involvement of the central nervous system (CNS) is frequent in patients with APS and is mainly linked to vascular thrombotic events while a heterogeneous panel of pathogenic mechanisms contribute to the expression of CNS manifestations in patients with SLE including the presence of NMDR antibodies and the activation of microglia by interferon type I . It is obvious that patients need tailored treatment to address the involved pathogenetic mechanisms.In this collection five manuscripts (Rekvig). Therefore, the classification of patients along the lines of clinical manifestations cannot serve the patient and definitely has not served the multitude of failed clinical trials .The complexity of the pathogenesis of lupus looms even larger in children with SLE in whom hormonal or extensive environmental factors are not yet major contributors but distinct single gene defects explain the development of SLE. Indeed, as discussed by Lo the list of monogenic SLE patients continues to expand (Radic and Pattanaik).In contrast, the clinical manifestations of patients with APS are easily attributed to thrombophilic events orchestrated by aPL although additional non-thrombotic mechanisms may account for the increased rate of miscarriages . SLE \u201cmolecular characterization\u201d would be useful for clinicians for a personalized medicine and for better inclusion criteria in clinical trials. In fact, the common biomarkers are not informative enough and we need to enroll more homogenous, along molecular and biochemical lines, populations in the studies and to identify more specific tools for the evaluation of the efficacy of the therapy .A lot of attention has been paid to aberrant T cell activation pathways in SLE in addition to the tissue damage mediated by immune complex deposition. Several manuscripts in the session of the Journal have actually addressed this issue .Complement is central in SLE pathogenesis at two levels: luck of the early components C2 and C4 account for the incomplete elimination of autoreactive B cells and lack of C1q for the poor clearance of apoptotic debris whereas excessive activation and generation of the membrane attack complex and the production of C3a and C5a are directly responsible for the execution of tissue damage. APS experimental models support that complement activation takes place in APS as well and it represents a critical step for both aPL-mediated thrombosis and miscarriages. Moreover, there is preliminary evidence for complement activation also in patients. However, the characteristics of complement activation are quite different in SLE and APS further supporting the differences between these two disorders had a substantial contribution to the conception or design of the work, or the acquisition, analysis or interpretation of data for the work; (ii) drafted the work or revised it critically for important intellectual content; (ii) provided approval for publication of the content, and (iii) agreed to be accountable for all aspects of the work.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "The presence of a new type of brain cannabinoid receptor is also indicated. 
Important advances have been made in our understanding of cannabinoid receptor signaling pathways, their modulation of synaptic transmission and plasticity, the cellular targets of cannabinoids in different central nervous system (CNS) regions and, in particular, the role of the endogenous brain cannabinoid (endocannabinoid) system. Cannabinoids have widespread actions in the brain: in the hippocampus they influence learning and memory; in the basal ganglia they modulate locomotor activity and reward pathways; in the hypothalamus they have a role in the control of appetite. Cannabinoids may also be protective against neurodegeneration and brain damage and exhibit anticonvulsant activity. Some of the analgesic effects of cannabinoids also appear to involve sites within the brain. These advances in our understanding of the actions of cannabinoids and the brain endocannabinoid system have led to important new insights into neuronal function which are likely to result in the development of new therapeutic strategies for the treatment of a number of key CNS disorders.Cannabis has a long history of consumption both for recreational and medicinal uses. Recently there have been significant advances in our understanding of how cannabis and related compounds (cannabinoids) affect the brain and this review addresses the current state of knowledge of these effects. Cannabinoids act primarily via two types of receptor, CB"} +{"text": "Photoluminescence detection of latent fingerprints has over the last quarter century brought about a new level of fingerprint detection sensitivity. The current state of the art is briefly reviewed to set the stage for upcoming new fingerprint processing strategies. These are designed for suppression of background fluorescence from articles holding latent prints, an often serious problem. The suppression of the background involves time-resolved imaging, which is dealt with from the perspective of instrumentation as well as the design of fingerprint treatment strategies. These focus on lanthanide chelates, nanocrystals, and nanocomposites functionalized to label fingerprints."} +{"text": "Aim: to determine the values of biomechanical parameters in patients with keratoconus and their first degree family members. The purpose of the present study was to investigate the importance of assessing corneal biomechanics in subjects at risk of developing the primary ectasia. Materials and methods: 48 participants divided into three groups were analyzed in an observational study after a complete ophthalmological exam with the primary focus on Ocular Response Analyzer.Results: The mean values of CH, CRF, and KMI in the group of relatives were lower compared with the controls but higher when compared with keratoconus patients. We noted significant differences of CH and CRF between all three groups, while in the case of KMI, only the keratoconus group presented statistically significant differences compared with the relatives, respectively with the healthy subjects. Conclusions: the decreased values of CH and CRF may raise the question whether corneal biomechanics could be an adjuvant tool in the screening of a first-degree family member of a keratoconus patient in the attempt of the early detection of a possible forme fruste keratoconus. Keratoconus is a progressive, mostly bilateral disorder which results in the thinning of the corneal stroma, thus modifying its normal architecture into a conical shape [3]. 
The disease starts to develop usually in the early puberty and continues to progress until approximately the fourth decade of life [4]. Described as a noninflammatory disease, the pathophysiology of keratoconus remains enigmatic. It is considered to be a multifactorial corneal ectasia triggered by external factors such as eye-rubbing or contact lens wear and endogenous stimuli through the interplay of inflammatory tear mediators, a dysregulation of oxidative stress and proteolytic enzymes resulting in corneal remodeling and keratocytes apoptosis, as well as the presence of atopic patient history [5-8]. Furthermore, the condition has also a genetic pathway, most cases being sporadic, isolated, or associated with other ocular or systemic diseases like Down, Marfan, Ehler-Danslos syndromes [9]. Many studies reported an important number of cases with familial inherited keratoconus, either through an autosomal dominant or recessive transmission, suggesting the higher risk in first-degree family members of developing the disease [10-12].This can lead to irregular astigmatism and corneal scarring with a significant impact on the visual acuity [13-16]. Classically, keratoconus was defined as a noninflammatory corneal condition, yet in the last decade, many authors have debated the possible role of inflammation. Recent studies have published results that highlight the cytokine overexpression in the tear film of keratoconus patients [17]. From a histopathological point of view, a keratoconic cornea has certain changes such as the thinning of the epithelium, the presence of breaks in Bowman\u2019s layer, disorganized and reduced collagen fibrils that have repercussions on the corneal biomechanics [18]. The cornea is a viscoelastic material and exhibits the property of recovering the initial form after the applied stress with a lag between the application of the force and the response. This property is called hysteresis [19].An essential parameter in monitoring the disease progression is the corneal biomechanics, which offers in vivo measurements of the cornea while being deformed when a mechanical stress is applied [20-23].Given the fact that the corneal stroma and Bowman\u2019s layer are responsible for the corneal strength, their disintegration may cause instability of the tissue. Proteolytic enzymes and inflammatory cytokines released in the tear film of keratoconus patients generate a corneal thinning through the alterations produced at the level of the extracellular matrix and collagen fibrils. These factors interfere in the mechanical stability and imbalance the viscous behavior of the cornea [24-26]. In the last years, ORA has been used in the attempt of early detection of subclinical keratoconus and for monitoring its progression. According to many studies, both parameters (CH and CRF) have been reported to be lower in keratoconic corneas compared to healthy ones .Multiple methods have been used to measure corneal biomechanics beginning with the ex vivo studies reflected by Young\u2019s modulus and continuing with the in vivo assessment of biomechanical properties. The Ocular Response Analyzer is a noncontact tonometer that indents the cornea through an air puff producing two distinct peaks: P1 moving inwards and P2 outwards, representing the necessary pressures to deform the cornea respectively to recover from the applanation while the air pulse decreases. The difference between P1 and P2 is an indicator of the corneal viscosity and is called hysteresis (CH). 
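Since CH is defined above as the difference between the two applanation pressures, the arithmetic can be sketched as follows; the pressure values are hypothetical and only illustrate the direction of the change reported for keratoconic corneas (the ORA itself reports CH directly).

```python
def corneal_hysteresis(p1_mmHg, p2_mmHg):
    """CH = inward applanation pressure (P1) minus outward applanation pressure (P2), in mmHg."""
    return p1_mmHg - p2_mmHg

# Hypothetical applanation pressures (mmHg); not measurements from this study.
print(corneal_hysteresis(24.0, 13.5))  # 10.5 -> higher CH, the pattern expected for a healthy cornea
print(corneal_hysteresis(19.0, 11.0))  # 8.0  -> lower CH, the pattern described for keratoconus
```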
As for the other parameter, the corneal resistance factor (CRF) highlights the overall resistance and is referred to as the indicator of corneal elasticity [28-30].Another waveform analysis is represented by the keratoconus match index (KMI), which is believed to be a reliable index in keratoconus diagnosis and staging, especially in the early diagnosis of a subclinical keratoconus and results from the comparison of the waveform of the patient\u2019s eye compared with the waveform of the normal population in the database . The findings highlighted the altered viscoelastic capacities in keratoconic corneas as well as in the group of relatives.Taking into account the higher risk of developing keratoconus in relatives, we compared corneal hysteresis, corneal resistance factor and keratoconus match index in patients with keratoconus with their first-degree family members and a control group and observed lower values in the relatives when compared with the controls. These results were in accordance with the study conducted by Kara et al., which stated that these biomechanical properties could be used in assessing a subject at risk, suggesting that these may detect early changes of a diseased cornea even before the topographic indices [32]. Transposing this hypothesis into our study, the group of relatives presented decreased values of CH, CRF, and KMI compared with the controls, indicating that even though the topographic indices were inside normal limits, the corneal biomechanical properties may be affected in some degrees. Many studies evaluated the dynamics of biomechanical properties after corneal collagen crosslinking (CXL) using ORA and observed an improvement of some parameters after the intervention but no significant changes in CH or CRF long-term after CXL [33-35].Furthermore, Schwitzer et al. evidenced the finding of the lower values of CH and CRF in forme fruste keratoconus compared with healthy corneas with the hope of detecting subclinical keratoconus [The screening in keratoconus remains a clinical challenge, especially in the case of first-degree family members of the keratoconus patients who are at higher risk of developing the disease. These subjects should be closely and regularly assessed not only by corneal topography but also by Ocular Response Analyzer, since even a discretely modified biomechanical parameter could be an indicator of corneal sufferance and implicitly an early premonitory sign of a subclinical keratoconus."} +{"text": "Additionally, there are reports that suggest possible claustral involvement in focal epilepsy, including MRI findings of bilaterally increased T2 signal intensity in patients with status epilepticus (SE). Although its cytoarchitecture and connectivity have been studied extensively, the precise role of the claustrum in consciousness processing, and, thus, its contribution to the semiology of dyscognitive seizures are still elusive. To investigate the role of the claustrum in rats, we studied the effect of high-frequency stimulation (HFS) of the claustrum on performance in the operant chamber. We also studied the inter-claustral and the claustro-hippocampal connectivity through cerebro-cerebral evoked potentials (CCEPs), and investigated the involvement of the claustrum in kainate (KA)-induced seizures. We found that HFS of the claustrum decreased the performance in the operant task in a manner that was proportional to the current intensity used. 
In this article, we present previously unpublished data about the effect of stimulating extra-claustral regions in the operant chamber task as a control experiment. In these animals, stimulation of the corpus callosum, the largest interhemispheric commissure, as well as the orbitofrontal cortex in the vicinity of the claustrum did not produce that same effect as with claustral stimulation. Additionally, CCEPs established the presence of effective connectivity between both claustra, as well as between the claustrum and bilateral hippocampi indicating that these connections may be part of the circuitry involved in alteration of consciousness in limbic seizures. Lastly, some seizures induced by KA injections showed an early involvement of the claustrum with later propagation to the hippocampi. Further work is needed to clarify the exact role of the claustrum in mediating alteration of consciousness during epileptic seizures.The neural mechanisms of altered consciousness that accompanies most epileptic seizures are not known. We have reported alteration of consciousness resulting from electrical stimulation of the claustrum The neural correlates of consciousness are not fully understood. Altered consciousness is the hallmark of focal dyscognitive seizures, which are characterized by loss of perception of external and internal stimuli during wakefulness with functional MRI (fMRI) assessing interictal epileptiform discharges in individuals with focal epilepsy found increased blood-oxygen-level-dependent (BOLD) signal in the piriform area in association with spikes , and the direct clinical evidence through electrical stimulation, as well as the animal data we have collected so far all continue to be insufficient to formulate a solid hypothesis about the function of the claustrum. Therefore, more controlled experiments in animals and prospective data in humans need to be collected before a clearer picture about the function of the claustrum can be attained.Also, the KA model focuses on focal epilepsy and thus studies using a wide-range of animal models as well as a genetic model for generalized epilepsy are required to extend the generalizability of the findings. Future studies should also focus on the selection of different stimulation parameters and assessing whether low-frequency stimulation of the claustrum can have anti-seizure effects. With the advancement of direct access to the claustrum using modern techniques such as optogenetics (Wang et al., MK outlined the manuscript and all authors contributed equally. LK generated the first draft.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "This data article contains information on a new intelligent bandwidth allocation model for future network (Smart Allocation). The included data describe the topology of the network testbed and the obtained results. Obtained data show the effectiveness of the proposed model in comparison with the MAM and RDM bandwidth allocation models. In relation to the performances evaluation, a variety of flows are used such as: voice over IP (VoIP), video, HTTP, and Internet Control Message Protocol (ICMP). The evaluation criteria are: VoIP latency and jitter, Peak Signal to Noise Ratio (PSNR) video, retransmission video, goodput, HTTP response page, and the Round-Trip Time (RTT) ICMP delay. The presented data are extracted based on simulation. 
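One of the evaluation criteria listed in the abstract above is the PSNR of the received video. The data article does not state which tool was used to compute it, so the sketch below simply applies the standard definition, PSNR = 10*log10(MAX^2/MSE), to two toy 8-bit frames; the frame values are made up for illustration.

```python
import math

def psnr(reference, received, max_value=255):
    """Peak Signal-to-Noise Ratio (dB) between a reference frame and a received/decoded frame."""
    if len(reference) != len(received):
        raise ValueError("frames must contain the same number of pixels")
    mse = sum((r - d) ** 2 for r, d in zip(reference, received)) / len(reference)
    return float("inf") if mse == 0 else 10.0 * math.log10(max_value ** 2 / mse)

# Toy flattened 8-bit luminance frames (illustrative values only).
ref = [52, 55, 61, 66, 70, 61, 64, 73]
rec = [54, 55, 60, 67, 69, 62, 65, 70]
print(f"PSNR = {psnr(ref, rec):.1f} dB")
```

A higher PSNR means the received video is closer to the transmitted reference, which is the sense in which the allocation models are compared on video quality.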
Specifications TableValue of the data\u2022Dynamic bandwidth allocation is one of the major concerns in the networking and telecommunications sector.\u2022The proposed model allows equitable distribution of bandwidth resources over different flows with different priorities.\u2022The proposed model is tested for the next generation computer networks. It can be deployed in industrial networks.\u2022These simulation data can be used as references for future work related to adaptive bandwidth management.\u2022The proposed model is deployed at a controller; this allows researchers in the Software-Defined Network (SDN) axis to adopt it in order to optimize the performance of their networks.1With the emergence of new flows , optimal bandwidth management has become one of the major concerns for both the telecommunication industry actors and the researchers. The Quality of Service (QoS) has become a necessity in the Next Generation Networks (NGN). Several algorithms have been proposed for dynamic bandwidth management while offering clients the solicited QoS level and guaranteeing operators the optimal use of their network infrastructures. The Maximum Allocation Model (MAM) 2The materials used in the experiment are the Cisco XR and Cisco 7200 IOS routers. The Smart Allocation model was deployed on a centralized controller, connected to routers through a 100 Megabit UTP link. The server contains a video sequence of 720 pixels\u2019 resolution. This video will be broadcast to the users to measure the quality of the video in the three bandwidth adaptation models.\u2022VoIP latency: the delay between sending a packet and receiving it.\u2022VoIP jitter: the duration between the sending of two successive packets.\u2022Page response time: the session opening delay and the delivery of all web content.\u2022Round-Trip Time (RTT): the delay since sending an ICMP packet to the receipt acknowledgment.The experimental data shown in The values are represented in milliseconds. The evaluation methodology consists of activating IP SLA probes on routers and programming fully meshed communications. The results were obtained by repeating the same measures ten times in order to ensure the relevance of the obtained results. The parameters of the data used in the experiments are: the G.711 alaw codec for VoIP, version 1.1 of the HTTP protocol, and ICMP Echo type with the Don\u2019t Fragment (DF) flag set.We cannot talk about the QoS without dealing with the video quality. The data in In order to qualitatively evaluate the data, Other data was generated to compare the QoS adaptation models; we mention the goodput and the retransmission. The goodput means the amount of information received by the application layer of the OSI model without including the size of the lower layer headers. The higher the goodput data, the better the quality of the intercepted video. The packets retransmission is one of the factors that influence the transmissions quality in general and specifically the video quality. 
The effective retransmission represents the number of packets successfully retransmitted."} +{"text": "A reference is omitted from the second sentence of the third paragraph in the Introduction section.The sentence should read: The hard palate of DS adolescents is often described as high arched and constricted or narrow , but data in the literature are contradictory , depending on patient age [23] of the studied DS group."} +{"text": "This is an interesting case that highlights the variability of the clinical course of sarcoidosis and the importance of the knowledge of the clinical and radiologic features of the disease for its diagnosis and management. The patient also reported dyspnea under moderate stress. Physical examination showed hepatomegaly and erythematous plaques on the face Figure\u00a0A and lefSarcoidosis is an immune\u2010mediated systemic inflammatory disease of unknown etiology, characterized by noncaseating epithelioid\u2010cell granulomas. Sarcoidosis may affect virtually any organ system, although 90% of patients present with pulmonary involvement.Extrapulmonary disease is reported in 30% of patients, with the liver and spleen being the most frequently affected abdominal organs. Homogeneous hepatomegaly often associated with splenomegaly and enlarged lymph nodes is the typical imaging feature of abdominal sarcoidosis.All the authors made substantial contribution to the preparation of this manuscript and approved the final version for submission. YMR: contributed to write the case and identify the images. GZ: performed literature search and helped in identifying appropriate images. EM: reviewed and edited the case report and helped in identifying appropriate images.Consent was obtained from the patient for publication of case details.The authors declare that they have no conflict of interests to express."} +{"text": "Due to the lack of anatomical studies concerning complexity of the tibiofibular syndesmosis blood supply, density of blood vessels with further organization of syndesmotic vascular variations is presented in clinically relevant classification system. The material for the study was obtained from cadaveric dissections. We dissected 50 human ankles observing different types of arterial blood supply. Our classification system is based on the vascular variations of the anterior aspect of tibiofibular syndesmosis and corresponds with vascular density. According to our study the mean vascular density of tibiofibular syndesmosis is relatively low (4.4%) and depends on the type of blood supply. The highest density was observed among ankles with complete vasculature and the lowest when lateral anterior malleolar artery was absent . Awareness of various types of tibiofibular syndesmosis arterial blood supply is essential for orthopedic surgeons who operate in the ankle region and radiologists for the anatomic evaluation of this area. Knowledge about possible variations along with relatively low density of vessels may contribute to modification of treatment approach by the increase of the recommended time of syndesmotic screw stabilization in order to prevent healing complications. There is a lack of studies presenting the complexity of its arterial blood supply. Only McKeon et al. provided detailed anatomical description of the vessels supplying syndesmotic area5. However, this study is focused mainly on the supply of the anterior aspect of the tibiofibular syndesmosis with the consideration of the caliber of the vessels. 
Huber (1941) in his study mentioned a perforating (anterior) branch of the fibular artery , which pierces the interosseous membrane and runs across the anteroinferior region of the tibiofibular syndesmosis6. Bartonicek also reported a perforating branch of the peroneal artery that penetrates the interosseous membrane1. According to most studies the popliteal artery gives rise to anterior tibial artery and common tibiofibular trunk, which divides into posterior tibial artery and fibular artery. In most cases origin of the fibular artery is described as just below the level of the neck of fibula, about 5\u2009cm below the bifurcation of the popliteal artery. Fibular artery in its distal part gives rise to anterior branch, posterior branch and communicating branch joining posterior tibial artery, and then continues its passage, finally spreading into the calcaneal branches. In general, three main arteries of the leg may have contribution in tibiofibular syndesmosis blood supply: posterior tibial, anterior tibial and fibular artery11. Understanding the anatomical variations in blood supply of tibiofibular syndesmosis is crucial for orthopedic surgeons operating in the ankle region and may contribute to the modification of the surgical technique to avoid complications as a direct consequence of accidental vessel injury. Among the most common complications is the malreduction resulting with decreased joint motion and chronic pain, both can lead to long-term disability. Another less prominent complication with yet poorly understood pathophysiology is heterotopic ossification. It can lead to ankle synostosis, resulting in pain and abnormal ankle kinematics12. Injury of the tibiofibular syndesmosis is associated with slower healing time in comparison with injuries of other ankle ligaments. Typically it results in significantly longer time away from ambulation and sport activities. The limited blood supply is the commonly recognized factor which may have a negative impact on healing of number of anatomical structures. Knowledge about this limitation is essential at the time of preoperative planning and selection of the potentially most successful treatment17. Insufficient vascular supply to an area of injury usually leads to delay in the healing process and increased overall morbidity5. The management of injury to the distal tibiofibular syndesmosis remains controversial in the treatment of ankle fractures. Operative fixation usually involves the insertion of a metallic diastasis screw. There are a variety of options for the position and characterization of the screw, the type of cortical fixation, and whether the screw should be removed prior to weight-bearing18. This paper reviews the relevant anatomy, the clinical and radiological diagnosis and the mechanism of trauma and alternative methods of treatment for injuries to the syndesmosis. The complex nature of vascular anatomy of the distal tibiofibular syndesmosis results with variation and disagreement in treatment strategies. The aim of this study is to present the complexity of the tibiofibular syndesmosis blood supply, further organization of its vascular variations into classification system and examination of vascular density of this region with clinical correlation.Tibiofibular syndesmosis is a fibrous connection localized between the fibular notch of the tibia and medial surface of the lateral ankle. It is stabilized by two ligaments: anterior and posterior tibiofibular ligament. 
Some authors distinguish three ligaments stabilizing this joint: anterior, posterior and interosseous tibiofibular ligament5 and our findings, variants of tibiofibular syndesmosis blood supply were organized into unified and clinically useful classification system was described in 36 cases 72%). Its characteristic feature was complete blood supply of anterior aspect of the syndesmosis by lateral anterior malleolar artery from anterior tibial artery and by anterior branch of fibular artery. Type I was divided into three subtypes: IA when lateral anterior malleolar artery was present and supplied anterior aspect of syndesmosis indirectly connecting to the anterior branch of fibular artery. It occurred in 26 cases (52%) Fig.\u00a0. In type%. Its chType II was found in 14 cases (28%). In this type, anterior aspect of tibiofibular syndesmosis was supplied exclusively by anterior branch of fibular artery, with lack of lateral anterior malleolar artery. Similarly to type I, type II was divided into three subtypes. In type IIA which was represented by 12 cases (12%), there was lack of lateral anterior malleolar artery but anterior tibial artery was present Fig.\u00a0. In typeAccording to our analysis, mean vascular density of tibiofibular syndesmosis was relatively low (4.4%) and closely correlated with the specific type of its blood supply according to the classification system introduced in this study. The percentage of combined lumen of the blood vessel was 4.5% in type IA, 5.8% in type IB, 4.2% in type IC, 3.8% in type IIA and 3.5% in type IIB. The highest density was observed in type IB and the lowest in type IIB .19. Despite the amount of syndesmotic injuries, there is a lack of presentations in literature of all possible variations of its fairly complex vascular anatomy. Attinger et al. in his study of utilization of angiosomes of the foot in order to salvage the limb presented, that anterior tibial and fibular arteries are connected by anterior branch of fibular artery and lateral anterior malleolar artery20. According to authors of this study, disruption of that connection can put the lateral ankle soft tissue at risk. In our study this kind of connection was observed in 56% of cases (in types IA and IC)18. McKeon et al. study seems to be the most complete description5. In this analysis anatomical variations of tibiofibular syndesmosis blood supply were divided into three main types. In type I (63% of cases) \u2013 anterior branch of fibular artery was the main source of blood supply, with occasional anastomoses with anterior tibial artery. This type corresponds to our type IA (52% of cases enrolled in our study). In the second pattern (21%) the fibular artery gave rise of multiple branches to the anterior ligaments of syndesmotic area and was supplemented by branches of a lesser caliber arising from the anterior tibial artery which supplied syndesmosis directly5. In the third pattern (16%) the anterior tibial artery supplied branches of a larger caliber than that of the branches from the perforating branch of the fibular artery5. Second and third pattern of McKeon et al. study are similar to our type IB (16% of cases)5. In the classification system presented in our study we did not take into consideration the caliber of the vessels, only the total density of vessels calculated as the percentage of combined blood vessel lumen in the syndesmosis. 
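Because the density figures quoted above are defined as the percentage of the combined blood-vessel lumen within the syndesmosis cross-section, the computation reduces to a ratio of areas. The sketch below shows that calculation on hypothetical lumen measurements; the variable names and numbers are assumptions for illustration, not values from the histological sections of this study.

```python
def vascular_density_percent(lumen_areas_mm2, section_area_mm2):
    """Percentage of a tissue cross-section occupied by the combined vessel lumina."""
    return 100.0 * sum(lumen_areas_mm2) / section_area_mm2

# Hypothetical measurements from a single histological section (areas in mm^2).
lumen_areas = [0.012, 0.008, 0.020, 0.015, 0.009]
section_area = 1.6
print(f"vascular density = {vascular_density_percent(lumen_areas, section_area):.1f}%")  # 4.0%
```

Averaging such percentages over the upper, middle and lower sections of each specimen gives per-type means comparable to the 3.5-5.8% range reported above.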
Observations presented in McKeon\u2019s study are complement with our findings that blood supply to the posterior aspect of syndesmosis is rather constant. However due to the limited number of specimens, cases with predominantly posterior blood supply, as the consequence of the lack of branches supplying anterior aspect of the syndesmosis, were omitted in McKeon\u2019s study. This aspect is described in our study and completes the description proposed by McKeon et al.5.Tibiofibular syndesmosis in majority of cases is supplied by three branches of two main arteries of the leg. Anterior aspect of tibiofibular syndesmosis is supplied by lateral anterior malleolar artery from anterior tibial artery together with anterior branch of fibular artery, whereas posterior aspect by posterior branch of fibular artery. In the following study this complex vascular anatomy was organized into clinically useful classification system along with examination of vascular density of this area. Awareness of various types of tibiofibular syndesmosis arterial blood supply is crucial for orthopedic surgeons who operate in the ankle region and radiology specialists for the anatomic evaluation of this commonly injured structure. Approximately 5% to 10% of all ankle sprains and 23% of ankle fractures involve trauma to the tibiofibular syndesmosiset al. the vascular supply to the posterior syndesmotic area originated completely from the fibular artery in 63% of cases5. In 37% of cases the posterior tibial artery also provided small branches to supply the posterior syndesmosis. According to our study blood supply to posterior aspect of tibiofibular syndesmosis was rather constant and was supplied by posterior branch of fibular artery in 96% of cases and in 4% of cases by branch of posterior tibial artery (type IC). McKeon et al.5 reported that there were no specimens in which due to the lack of anterior tibial artery, the posterior tibial artery contribution was the dominant supply to the posterior syndesmotic region. According to our study branch from posterior tibial artery was dominant in 4% of cases (type IC). Our classification system which is complement with McKeon et al.5 corresponds well with syndesmotic vascular density in order to obtain general estimation of vascular density. Although this approach was enough for general orientation in density of blood vessels in examined area, further analysis of syndesmotic composition is needed. As the next step authors would like to examine histologic composition of tibiofibular syndesmosis utilizing specific tissue staining for better visual differentiation in order to increase the accuracy of estimations and provide detailed structure of tibiofibular syndesmosis.A clear understanding of arterial blood supply of the tibiofibular syndesmosis is essential for proper identification and avoidance of accidental injury of the arteries during procedures in the ankle region. Although fairly complex vasculature, the overall density of blood vessels in tibiofibular syndesmosis is low and is one of the factors which make surgical treatment of this area highly challenging.The material for the following study was obtained from routine cadaveric dissections in the Anatomy Department and Forensic Medicine Department, Jagiellonian University Medical College, Krakow, Poland. We dissected 50 human ankles of both sexes in the age from 35 to 76, observing different types of tibiofibular syndesmosis arterial blood supply Table\u00a0. 
Access for proper visualization of the arterial blood supply of the tibiofibular syndesmosis was obtained by removal of the fibula. For this purpose a vertical incision was made on the posterior aspect of the leg, from the region of the popliteal fossa to the plantar tendon. The incision was made along the axis of the bone, a finger breadth posterior to the fibula, and was similar to the incision made in order to harvest a fibular flap for reconstruction purposes. The skin, the muscles of the posterior and lateral groups of the leg and a distal part of the fibula were removed. The small vessels supplying the lateral malleolar region were exposed and described. Images of the dissected specimens presented in this study were recorded with a digital camera and then analyzed with Photoshop software. In order to examine the vascular density of the tibiofibular syndesmosis, the material was evaluated under a microscope in the Department of Pathology, Jagiellonian University Medical College, Krakow, Poland. Three sections were taken from the upper, middle and lower parts of the tibiofibular syndesmosis. Regarding the experiments involving human participants (including the use of tissue samples), we have obtained informed consent for study participation and approval from the Jagiellonian University Medical College Bioethics Committee (registry no. KBET/167/B/2009) for routine cadaveric dissections in the Anatomy Department and Forensic Medicine Department, Jagiellonian University Medical College, Krakow, Poland. This study adhered to the Declaration of Helsinki and its later amendments. Due to the limited number of enrolled specimens, statistical analysis was not performed."} +{"text": "The epimutation concept, that is, malignancy is a result of deranged patterns of gene expression due to defective epigenetic control, proposes that in the majority of adult cancers the primary (initiating) lesion adversely affects the mechanism of vertical transmission of the epigenetic pattern existing in the stem cells of differentiated tissue. Such an error-prone mechanism will result in deviant gene expression capable of accumulation at each mitosis of the affected stem cell clone. It is argued that a proportion of these proliferation products will express combinations of genes which endow them with malignant properties, such as the ability to transgress tissue boundaries and migrate to distant locations. Since the likelihood of this occurrence is dependent on the proliferation of cells manifesting the defective epigenetic transmission, the theory predicts that cancer incidence will be strongly influenced by factors regulating the turnover rate of the stem cells of the tissue in question. Evidence relating to this stipulation is examined. In addition, it would be anticipated on the basis of the selection of genes involved that the susceptibility to malignant transformation will vary according to the tissue of origin and this is also discussed. Recently there has been considerable interest in the role of epigenetic mechanisms in cancer. The cardinal abnormality exhibited by the majority of adult cancers is chromosomal instability (CIN) with widespread alterations in gene expression. Some of the relevant factors involve obvious differences such as the number of susceptible cells and the degree of mutagenic exposure.
For each tissue the probability of the initiation phase taking place is influenced by the size of the population, the number of genes that have to be mutated to bring about the defect, the exposure to mutagenic events, and the elimination or repair efficiency of the relevant stem cells, i.e., stem cell numbers, the number of crucial genes, and their effective mutation rate , 10. DifAn important question with regard to the epigenetic theory of carcinogenesis concerns the problem of which genes are necessary and sufficient to bring about the defective copying of the epigenetic pattern and whether the same genes are involved in all cases. It can be argued that in developmental neoplasms the underlying problem is the failure of evocation of some gene silencing mechanism involved in differentiation, and in these cases reversal is possible . In adulHowever, assuming that the initiation stage has been accomplished, the factors implicated in the secondary carcinogenic events, i.e., the failure of fidelity of epigenetic copying resulting from the initiating lesion, have hitherto received relatively little discussion. In essence the likelihood of the acquisition of epigenetic errors that result in malignancy will depend on two criteria: (1) the rate of stem cell proliferation and (2) the ease of activation of the genes that determine the metastatic phenotype. In the absence of clear evidence of which genes are involved in the processes that result in the manifestation of the malignant phenotype, the second factor is difficult to assess. Possibly, if reactivation of the most recently silenced genes occurs more readily, it might be speculated, for example, that migratory properties would be more likely to be expressed in melanocytes, thus accounting for the earlier age-specific incidence of melanoma . An inteFrom the theoretical point of view, the proposed central role of stem cell proliferation in bringing about the epigenetic errors that lead to the malignant phenotype makes a number of testable predictions, several of which are known to be the case. For example, it follows that malignant tumours are not found in nonproliferating tissues such as the central nervous system and rarely occur in slowly proliferating cells such as striated muscle. Moreover, since the greatest turnover occurs in epithelia, it accounts for the high proportion of epithelial cancers. The central significance of stem cell proliferation has been emphasised by the statistical association between cancer risk and the total number of stem cell divisions in different tissues as shown by Tomasetti and Vogelstein and the Also consistent with the epigenetic model is the evidence of increased risk of malignancy associated with factors increasing the proliferation rate of tissues. This includes the stimulatory effects of chronic inflammation and specific growth factors such as hormones, e.g., the increased breast cancer risk associated with hormone replacement therapy . At the The influence of the stem cell proliferation rate on the acquisition of malignant characteristics importantly predicts that initiated cells can be prevented from becoming malignant by suppression of proliferation. 
This phenomenon has been described in the case of hepatocytes experimentally initiated by aflatoxin \u201330.In brief, the preventative possibilities presented by the epimutation model of cancer concern the identification of mutant genes responsible for the error-prone epigenetic copying and their repair or selective elimination of cells bearing these mutations. In this respect the scenario differs from the conventional theories of carcinogenesis only in focusing on genes implicated in the epigenetic copying mechanism. It is a moot point which genes might be most significant in this respect but possible candidates include genes involved in regulating the activities of DNA methylases, histone deacetylases, p53 and related processes involved in apoptosis, and other possible gatekeeping and editing functions. For example, polycomb group proteins such as EZH2 the H3K29 methylase UTX and components of the chromatin remodelling complex such as SWI/SNF are modified in several cancers and therAssuming that malignant behaviour in all cases is derived from a specific genetic composition, a promising approach might lie in the identification of the genes giving rise to the crucial malignant properties. This would potentially enable the design of agents causing epigenetic suppression of these malignant genes or drugs to selectively eliminate cells expressing those genes. At present, the nature of such crucial genetic targets is not clear and, moreover, it seems probable that the route to malignancy varies according to the tissue of origin. For the present it appears that the most accessible target is the evidence that the carcinogenic risk is a function of the stem cell proliferation rate of the tissue in question. Hence, if a tissue has undergone carcinogenic induction, any means that diminishes the rate of stem cell proliferation will suppress the emergence of a malignant variant clone and, of course, many of the currently effective chemotherapeutic agents target the proliferation rate of the affected tissue.The epigenetic theory of carcinogenesis proposes that the acquisition of the malignant phenotype results from error-prone copying of the epigenetic pattern when tissue stem cells divide. This failure of fidelity of epigenetic transmission is due to initiating mutations affecting the set of crucial genes involved in the normally stable copying of the epigenetic pattern. The manifestation of error-prone epigenetic copying is the generation of clones showing a diversifying range of abnormalities including structural and functional anomalies, abnormal mitoses, alterations of ploidy, and the occurrence of bizarre cytological features. It is argued that these anomalies give rise to cells some of which exhibit malignant characteristics and are able to transgress tissue barriers and spread to distant sites.Whilst it may be impossible to avoid the initiating mutagenesis, in principle cancer could be prevented if cells bearing the initiating lesion(s) could be identified and the faulty epigenetic process corrected or the affected cells eliminated. This eventuality remains a hope for the future.However, in view of the important role of mitosis in enabling the perpetuation of epigenetic errors, ensuring the minimum turnover rate of tissue stem cells seems to offer a basis for a general cancer prevention strategy."} +{"text": "Epiphytic bryophyte communities in the Amazon forest show a vertical gradient in species composition along the trunk of the host trees. 
The investigation of species traits related to this pattern has focused on the physiology of selected taxa with a clear preference for one of the extremes of the gradient. Although some species are indeed only found on the tree base or in the outer canopy, the vertical gradient is composed mainly by the variation in the abundances of species with a broader occurrence along the height zones. Therefore, this study approaches the differences among community assemblages, rather than among species, to test the role of morphological and dispersal traits on the establishment of the vertical gradient in species composition.A character state matrix was built for 104 species of the family Lejeuneaceae recorded as epiphytes in the Amazonian terra firme forests, and six binary traits supposed to influence species occurrence: dark pigmentation on leaves; ability to convolute leaves when drying; possession of thickened cell walls; reproduction mode (monoicous or dioicous); occurrence of asexual reproduction; and facultative epiphyllous habit. Based on a previous dataset on community composition along the vertical gradient, trait occurrences in random draws of the metacommunity was compared to trait occurrences in field data, in order to detect significant deviations in the different height zones.Four out of the six traits tested showed significantly higher or lower occurrence in the species composition of canopy and/or understory communities. Traits related to high dispersal ability did not vary much along the vertical gradient; although facultative epiphylls were overrepresented on tree base. Dark pigmentation and convolute leaves were significantly more frequent in the canopy communities, but also significantly less frequent in communities at the base of the tree.Dark pigmentation and convolute leaves seem to be advantageous for the establishment in the canopy zones. They may, respectively, prevent light damage and allow longer periods of photosynthesis. Interestingly, these traits occur randomly along the trunk, but are wiped out of communities on the tree base. In the relatively deep shade of the first meters of the understory, they possibly hamper net carbon gain, the first by darkening the leaf surface and the second by delaying desiccation\u2014which can be damaging under high temperatures and low light. The fact that production of asexual propagules is not overrepresented in the most dynamic microenvironment along the gradient, the canopy, challenges current views of bryophyte life strategy theory. Watson\u2019s implicit assumption of niche assembly\u2014in the broad sense that species features have a role on species occurrence\u2014as well as his emphasis on environmental filtering, rather than on species interactions, illustrates the common sense in bryophyte ecology , but not in the outer canopy (zone 6) or along the trunk (zones 2 and 3). The occurrence of this trait in communities at the base of the tree (zone 1) was significantly lower than expected by chance. Convolute leaves were significantly more represented in the outer canopy (zone 6), and significantly less in the first two understory zones (zones 1 and 2). 
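The over- and under-representation results reported here come from the comparison of field data with random draws of the metacommunity described above. The following is a minimal sketch of such a randomization (null-model) test; it is not the author's code: the species names, trait values, community composition, number of draws, and two-sided significance test are illustrative assumptions, and it uses species presence/absence rather than the abundance data analysed in the study.

```python
import random

def trait_randomization_test(metacommunity, community, trait, n_draws=10000, seed=1):
    """Compare the observed frequency of a binary trait in one height-zone
    community with frequencies obtained from random draws of equally sized
    communities from the metacommunity (a simple null model).

    metacommunity: dict mapping species name -> dict of binary traits (0/1)
    community:     list of species recorded in the height zone
    trait:         name of the trait to test
    Returns (observed frequency, two-sided randomization p-value)."""
    rng = random.Random(seed)
    pool = list(metacommunity)
    k = len(community)

    observed = sum(metacommunity[sp][trait] for sp in community) / k

    null = []
    for _ in range(n_draws):
        draw = rng.sample(pool, k)                       # random community of the same size
        null.append(sum(metacommunity[sp][trait] for sp in draw) / k)

    p_high = sum(f >= observed for f in null) / n_draws  # over-representation
    p_low = sum(f <= observed for f in null) / n_draws   # under-representation
    return observed, min(1.0, 2 * min(p_high, p_low))

# Hypothetical data: six Lejeuneaceae species scored for one trait, one canopy community.
metacommunity = {
    "sp_a": {"dark_pigmentation": 1}, "sp_b": {"dark_pigmentation": 0},
    "sp_c": {"dark_pigmentation": 1}, "sp_d": {"dark_pigmentation": 0},
    "sp_e": {"dark_pigmentation": 1}, "sp_f": {"dark_pigmentation": 0},
}
zone6_canopy = ["sp_a", "sp_c", "sp_e"]
freq, p = trait_randomization_test(metacommunity, zone6_canopy, "dark_pigmentation")
print(f"observed frequency = {freq:.2f}, randomization p = {p:.3f}")
```

Repeating a test of this kind for each trait and each height zone, with an appropriate correction for multiple testing, mirrors the structure of the analysis summarized in this record; the remaining trait results below follow the same comparison.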
Both the monoicous reproduction system and the facultative epiphylly were significantly more represented only in communities at the base of the tree (zone 1), with the latest significantly less represented only in communities in the bifurcation zone.For bryophytes, the vertical gradient along the host trees reflects the two extremes of a water balance axis: in the canopy, plants may die from desiccation due to evaporation and to the unavailability of water for photosynthesis and growth; in the understory, from the lack of enough light to achieve net carbon gain, given the relatively high temperature. Interestingly, in this study, the traits supposed to protect against the harsh conditions of the canopy were not only significantly more frequent there, but showed also significantly lower occurrence in the darkest zone of the understory, the tree base.Sphagnum species due to photoinibition (Bryophytes show light saturation of photosynthesis at modest irradiance when compared to most vascular plants , and lignibition . It was nibition , but thenibition .In the Amazonian lowlands, low light levels during the day combined with moist and warm conditions at night promotes high respiration rates, which in turn causes a limitation of net carbon gain . That isThe reproductive traits supposed to be relevant for the assembly of canopy communities met the expectations poorly, which could be a shortcoming of the data, because the characters treated in this study are only indirectly related to high dispersal ability. Still, I believe that a better explanation for the results obtained is that dispersal features have simply little influence on community assemblage. This relatively less deterministic role of dispersal in bryophyte assemblages has been supported especially by recent studies with a mechanistic approach of species assemblage that take into account the relationship between metacommunity and local communities . For insProduction of asexual propagules and dioicous reproductive mode are frequently taken as associated features , and eveSeveral relevant traits that play a role on plant dispersal, establishment and growth are not represented by presence/absence data. For instance, the responses of photosynthesis to irradiance of epiphytic bryophytes show compensation points over a wide range among species , as muchThe presence of dark pigmentation and the presence of convolute leaves seem to have a relevant influence on the occurrence of species at both extremes of the forest vertical microenvironmantal gradient, either favouring\u2014in the canopy\u2014or hampering\u2014at the base of the tree\u2014the number of individuals assembled in the communities. Species traits related to morphological features showed greater influence on the occurrence of species than traits related to reproduction and dispersal. Further advances in this field will profit from the study of traits with continuous variation, such as the ones mentioned in the discussion."} +{"text": "Casting is the first step toward the production of majority of metal products whether the final processing step is casting or other thermomechanical processes such as extrusion or forging. The high shear melt conditioning provides an easily adopted pathway to producing castings with a more uniform fine-grained microstructure along with a more uniform distribution of the chemical composition leading to fewer defects as a result of reduced shrinkage porosities and the presence of large oxide films through the microstructure. 
The effectiveness of high shear melt conditioning in improving the microstructure obtained with the casting processes used in industry illustrates the versatility of the high shear melt conditioning technology. The application of the high shear process to the direct chill and twin roll casting processes is demonstrated with examples from magnesium melts. Light weighting has become more important for achieving increased fuel efficiency, as the weight of automobiles rises with the batteries or fuel cells associated with electric vehicles. The development of high-strength alloys and composites requires a refined microstructure with a reduced defect distribution together with a homogeneous composition profile in the final product. Solidification and casting are essential for processing metallic materials regardless of whether the final product is used in the cast or wrought form. The quality of the casting, and in turn the quality of the melt, is crucial in determining the final properties. Oxides, gas, and other inclusions usually degrade the quality of the melt and in turn the properties and quality of the castings. The presence of micro- and macro-scale defects, such as gas porosity and oxide defects, in the solidified billets can lead to reduced property profiles in castings and wrought products. The presence of pores and other defects reduces the strength and ductility of the castings. Thus, the yield strength used in the design process is set significantly lower than the average yield strength of the alloy to account for these defects. In applications where light weighting is of importance, such design allowances reduce the weight savings envisaged and, in the case of automotive or aerospace applications, reduce fuel efficiency and consequently increase CO2 emissions. A reduction in defect density and the resultant increase in strength would contribute to lighter parts, and this may be achieved through melt conditioning, especially with intensive melt shearing. Light alloys based on aluminum and magnesium are used in many industrial applications where light weighting is of paramount interest. Melt treatment ensures a high-quality melt by treating the liquid metal prior to casting. There are numerous existing methods, including electromagnetic stirring, mechanical stirring with an impeller, melt filtering, and rotary degassing, which are used to treat liquid metals. Recently, melt conditioning of Al and Mg melts in liquid and semisolid states, using a twin-screw device, refined the microstructure and improved the mechanical properties of both cast and wrought alloys. The rotor\u2013stator high shear device provides macro-flow in a volume of melt for distributive mixing and intensive shearing near the tip of the device for dispersive mixing. The enhanced kinetics associated with chemical reactions and phase transformations, uniform dispersion and reduction of particle size and gas bubbles, uniform distribution of composition and temperature fields, and forced wetting of solid particles in the liquid metal are some of the main advantages associated with the rotor\u2013stator. Thus, the high shear device is used effectively for grain refinement by dispersing native oxides or grain refiners, for degassing aluminum melts, for the preparation of metal-matrix composites, and for the preparation of semisolid slurries. 
As a result of its size and versatility, the rotor\u2013stator high shear device can be used in many different industrial casting processes for both aluminum and magnesium alloys, including direct chill casting and twin roll casting. The rotor\u2013stator device can be inserted into a crucible containing molten metal and used to apply intensive shearing without disturbing the melt surface. The casting process is shown in Fig.\u00a0. To understand the grain refining achieved via intensive melt shearing, temperatures were recorded during DC casting by attaching thermocouples to the head of the high shear device. Twin roll casting is an important industrial process used to produce magnesium alloy sheets economically. TRC alloys generally have relatively large grains and contain defects such as center line segregation and inverse segregation along the cast strip. Center line segregation combined with large grain size prevents further processing of the TRC sheet without additional rolling, which introduces strong basal textures and associated asymmetries between tensile and compressive yield strength. The addition of reinforcing particles to develop high-strength metal-matrix composites has been investigated for the light metals for specialist applications. Achieving a uniform distribution of reinforcing particles in the melt is difficult with mechanical stirring. The intensive melt shearing was used to engineer the melt pool through the dispersion of oxides and other inclusions through the melt using a rotor\u2013stator device, achieving a high-quality melt that in turn resulted in improved quality of the castings. The rotor\u2013stator high shear device provides distributive and dispersive mixing that significantly enhances the kinetics of phase transformations; improves uniform dispersion, distribution, and size reduction of particles; improves uniformity of composition and temperature in the melt pool; and, most importantly, provides physical grain refinement through the dispersion of naturally occurring oxides. Therefore, the rotor\u2013stator high shear device may be used to condition light alloy melts, and it may be implemented with various conventional casting processes such as direct chill casting and twin roll casting."} +{"text": "PLOS ONE article. The Journal of Health and Development article [PLOS ONE article [PLOS ONE article provides additional validation information including the following: detailed question items, scoring protocols, and an explanation of the interpretation method. There is some overlap between work reported in this article and a pr article presents article reports PLOS ONE uses the same participants as the earlier study reported previously. Both articles report use of the same study sites, sampling method, and number of participants in describing validation and intervention aspects of this work. 
The authors confirm that they used an independent cohort of participants for questionnaire development and validation ,2. The authors apologize for not citing the earlier article in the PLOS ONE publication. S1 File: This flowchart illustrates the relationship and overlap between the PLOS ONE article and the Journal of Health and Development article."} +{"text": "The purpose of this study was to compare the stress and fatigue index between demented older adults and non-demented older adults and to identify the correlation between the behavioral and psychological symptoms of the demented older adults and the pain of the main caregiver. A total of 100 participants (80 demented older adults and 20 non-demented older adults) were selected. The demented older adults, who visited a hospital neurology department as outpatients, were paired with the caregivers who provided their care. The non-demented older adults, who had normal cognitive function without limitations in daily life and an MMSE score of 24 or higher, were selected. The stress and fatigue indices were measured using an autonomic nervous system analyzer that measures heart rate variability (HRV), and structured questionnaires were used to examine behavioral and psychological symptoms. The collected data were analyzed using the independent t-test and Pearson\u2019s correlation analysis. The analysis showed that the demented older adults had a higher average stress index than the control group (non-demented older adults), and the difference was statistically significant. In addition, the results indicated a statistically significant positive correlation between the behavioral and psychological symptoms of the demented older adults and the level of pain of the main caregivers. The results of this study suggest that managing both the physical and psychological aspects of stress in the main caregivers is necessary, as well as the development of a stress management program for the demented older adults."} +{"text": "This review emphasizes the events that take place after the chylomicrons are secreted by the enterocytes through exocytosis. First, we will discuss how chylomicrons cross the basement membrane to enter the lamina propria. Then the chylomicrons have to travel across the lamina propria before they can enter the lacteals. To understand the factors affecting the trafficking of chylomicron particles across the lamina propria, it is important to understand the composition and properties of the lamina propria. With different degrees of hydration, the pores of the lamina propria (sponge) change. The greater the hydration, the greater the pore size and thus the easier the diffusion of the chylomicron particles across the lamina propria to enter the lacteals. The mechanism of entry into the lacteals is discussed in considerable detail. We and others have demonstrated that intestinal fat absorption, but not the absorption of protein or carbohydrates, activates the intestinal mucosal mast cells to release many products, including mucosal mast cell protease II in the rat. The activation of intestinal mucosal mast cells by fat absorption involves the process of chylomicron formation, since the absorption of medium- and short-chain fatty acids does not activate the mast cells. Fat absorption has been associated with increased intestinal permeability. 
We hypothesize that there is a link between fat absorption, activation of mucosal mast cells, and the leaky gut phenomenon . Microbiome may also be involved in this chain of events associated with fat absorption. This review is presented in sequence under the following headings: (1) Introduction; (2) Structure and properties of the gut epithelial basement membrane; (3) Composition and physical properties of the interstitial matrix of the lamina propria; (4) The movement of chylomicrons across the interstitial matrix of the lamina propria and importance of the hydration of the interstitial matrix of the lamina propria and the movement of chylomicrons; (5) Entry of the chylomicrons into the intestinal lacteals; (6) Activation of mucosal mast cells by fat absorption and the metabolic consequences; and (7) Link between chylomicron transport, mucosal mast cell activation, leaky gut, and the microbiome. The absorption and transport of lipids by the gastrointestinal tract involve the uptake of digested lipids by the enterocytes and the formation and secretion of chylomicrons. These different steps of intestinal fat absorption have been so ably reviewed by a number of reviews over the past few years . In mostvia exocytosis by the enterocytes to the final entry of the chylomicrons into the lymph lacteals of the intestinal villi. After exiting from the enterocytes, the chylomicrons accumulate in the intercellular space. The basement membrane with which the enterocytes are attached to offers considerable resistance for the passage of chylomicrons from the intercellular space into the lamina propria. We will discuss how we think the chylomicrons cross the basement membrane to enter into the lamina propria. We will also discuss the properties of the lamina propria and the factors influencing the diffusion of the chylomicrons across the lamina propria and how the chylomicrons subsequently enter the lacteals located in the central core of the villus. The lacteals transporting the chylomicrons drain initially into the intestinal lymph duct, then the thoracic duct, and finally empty into the left subclavian vein.In fact, many events are involved from the secretion of the chylomicrons by the enterocytes to the subsequently transport of these triglyceride-rich lipoproteins by the lymphatic system. One of the goals of this review is to cover those events that occur from the secretion of chylomicrons Interest in the lymphatic system increased dramatically recently in its role in lipid metabolism and gastrointestinal function. We and others have demonstrated that in addition to chylomicrons, the lymphatics of the gastrointestinal tract also carry molecules secreted by the mucosal mast cells [mucosal mast cell protease II in the rat ] and mucDuring active lipid absorption, monoglycerides and fatty acids produced from the digestion of triglycerides are taken up by enterocytes. Here, they are re-esterified to produce triglycerides and are packaged into chylomicrons for export into the lymphatic system. For a more comprehensive discussion of these processes, readers are referred to the following excellent treatises on the subject .\u03b1, \u03b2, and \u03b3 laminin subunits and the formation of laminin heterotrimers through the intercellular space. During fluid absorption, the volume of the interstitium increases, resulting in the expansion and disentanglement of the matrix components. 
Under these conditions, the average porosity increases to 1,000 \u00c5 which liIt has also been proposed that chylomicron transport could be facilitated by the convective fluid movement of lymph associated with fluid absorption. During lipid absorption, the subsequent increase in fluid uptake results in an increase in interstitial fluid formation and lymph flow. Previous studies have suggested that the rate of lymph formation has a major effect on chylomicron transport . Tso et It is still unclear if the increase in chylomicron transport associated with fluid absorption is a result of increased permeability of the interstitial matrix or the convective fluid movement. However, in both cases, it is apparent that hydration of the interstitial matrix plays an important role in the movement of chylomicrons across the lamina propria.There is some controversy regarding the mechanism of chylomicron entry into the intestinal lacteals . The firThese conclusions were challenged by Dobbins . With ovA more recent study by In addition to its role in the digestion and absorption of nutrients , the gut also serves an important function in host defense. The gastrointestinal tract, a tube-like structure that is covered by mucosa from the oral cavity to the anus, is continuous with the external environment. Therefore, the gastrointestinal tract is a major entry site for many bacterial and viral pathogens and has a vastly diverse microbial community . Despitede novo synthesized mediators in a conscious lymph fistula rat model (via the hepatic portal vein, while LCFAs are preferentially packaged into chylomicrons in the enterocytes and transported through the lymphatic system (Previous studies have suggested a close link between these intestinal immune cells and dietary fat absorption . In partat model . They foat model . MCFAs aat model . Furtherc system .The difference in the intestinal handling of LCFAs and MCFAs and their differential effect on MMC activation suggest that the formation and secretion of chylomicrons is potentially linked to MMC activation. To test this possibility, Ji et al. administered L-81, an inhibitor of chylomicron formation. As expected, lymphatic transport of chylomicrons was completely abolished when L-81 was present in the intestinal lumen. However, eliminating the formation and transport of chylomicrons did not completely abolish the release of mediators by MMC . It is iin vitro studies have shown that mast cell mediator RMCPII directly increases epithelial permeability by decreasing the expression of the tight junction-associated proteins occludin and zonula occludens-1 (Although results from Ji et al. demonstrated that fat absorption stimulates MMC activation, evidence from previous studies suggest that MMC activation is also important in enhancing the absorption and transport of lipids. It has been reported that intestinal permeability is increased during fat absorption, which would facilitate uptake of fluid and electrolytes . This coludens-1 . It has ludens-1 . Thus, Mludens-1 . The undIn conscious lymph fistula rats, we demonstrated that treating the animals with antibiotics greatly suppressed the lymphatic transport of chylomicrons as well as the activation of mucosal mast cells normally associated with fat absorption . The antin vitro, as there are less variables to content with than in the in vivo scenario. Of course, we cannot rule out in our experiment whether there is a direct effect of microbiome on apolipoprotein synthesis and secretion. 
In our study (It is not clear how antibiotic treatment affects apolipoprotein output in enterocytes because our dose of penicillin and streptomycin is not known to affect mammalian cells. The question of antibiotics treatment on the transcriptional and the translational production and secretion of apolipoproteins is probably best studied ur study , when wePT provided guidance on the overall direction of the manuscript. AZ and PT wrote the manuscript. JQ, ML, and PT edited and proofread the manuscript. All authors critically reviewed the final version of the paper.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Astrocytes have historically been considered structural supporting cells for neurons. Thanks to new molecular tools, allowing specific cell ablation or over-expression of genes, new unexpected astrocytic functions have recently been unveiled. This review focus on emerging groundbreaking findings showing that hypothalamic astrocytes are pivotal for the regulation of whole body energy homeostasis. Hypothalamic astrocytes sense glucose and fatty acids, and express receptors for several peripheral hormones such as leptin and insulin. Furthermore, they display striking sexual dimorphism which may account, at least partially, for gender specific differences in energy homeostasis. Metabolic alterations have been shown to influence the initiation and progression of many neurodegenerative disorders. A better understanding of the roles and interplay between the different brain cells in regulating energy homeostasis could help develop new therapeutic strategies to prevent or cure neurodegenerative disorders. The brain is the organ responsible for the centralized control of the other organs\u2019 functions and, in higher vertebrates, of reasoning. Such tasks are achieved thanks to the interconnections of billions of neurons and glia cells. Due to their role in receiving, processing, and transmitting information neurons are considered to be the primary cell types of the central nervous system, and the only repository of reasoning and awareness. 
Astrocytes are the most abundant type of glial cells. The hypothalamus is the portion of the brain that integrates sensory inputs from the external environment with hormonal and neural signals from the body, allowing ad hoc short- and long-term homeostatic adjustments. The possibility that astrocytes may participate in the regulation of body homeostasis in ways other than direct nutrient sensing is inferred from the fact that they express receptors for hormones involved in energy homeostasis such as leptin. In vitro experiments show different responses to saturated fatty acids in astrocytes isolated from males and females. In vivo, a long-term high fat diet increases the levels of estradiol in females, while in males its levels are unchanged and associated with a significant decrease of ER\u03b1 in the hypothalamus. Neurological and neurodegenerative disorders are often characterized by sexual dimorphism in terms of either incidence or severity and progression of the pathology. Metabolic alterations influence the initiation and progression of many neurodegenerative disorders."} +{"text": "The use of passive samplers in extensive monitoring, such as that used in national forest health monitoring plots, indicates that these devices are able to determine both spatial and temporal differences in ozone exposure of the plots. This allows for categorisation of the plots and the potential for cause-effect analysis of certain forest health responses. Forest exposure along a gradient of air pollution deposition demonstrates large variation in accumulated exposures. The efficacy of using passive samplers for in situ monitoring of forest canopy exposure was also demonstrated. The sampler data produced weak relationships with ozone values from the nearest \"continuous\" monitor, even though data from colocated samplers showed strong relationships. This spatial variation and the apparent effect of elevation on ozone exposure demonstrate the importance of topography and tree canopy characteristics in plant exposure on a regional scale. In addition, passive sampling may identify the effects of local pollutant gases, such as NO, which may scavenge ozone locally only to increase the production of this secondary pollutant downwind, as atmospheric reactions redress the equilibrium between concentrations of this precursor and those of the generated ozone. The use of passive samplers at the stand level is able to resolve vertical profiles within the stand and edge effects that are important in exposure of understorey and ground flora. Recent case studies using passive samplers to determine forest exposure to ozone indicate a great potential for the development of spatial models on a regional, landscape, and stand level scale."} +{"text": "Resection of all tumor implants with the aim of maximal cytoreduction is the main predictor of overall survival in ovarian carcinoma. However, there are high-risk sites of tumor recurrence, and the perihepatic region, especially the point where the ligamentum teres hepatis enters the liver parenchyma under the hepatic bridge (pont hepatique), is one of them. This video demonstrates the resection of the ligamentum teres hepatis both in a cadaveric model and in a patient with ovarian cancer. The falciform ligament divides the liver into the right and left lobes on the antero-superior part of the portoumbilical fissure, where the ligamentum teres hepatis attaches to the visceral surface. 
Due to the distribution pattern of the portal vein and hepatic veins, the liver is divided into eight functional segments . The umbMucinous ovarian or gastrointestinal carcinoma, appendiceal carcinoma, mesothelioma or a serous ovarian cancer may have a widely disseminated recurrence on the peritoneal surfaces. The complicated surgical anatomy of the liver and perihepatic tissues limits the easy detection of tumor implants; eventually, good exposure of the abdominal cavity is needed to excise all the visible tumor implants, especially on high-risk fields such as the end part of the ligamentum teres hepatis under the hepatic bridge . There is no risk of injuring any structures while cutting the hepatic bridge. However, if the ligament is deeply attached to the bottom of the liver parenchyma, while dissecting the end point, care should be taken not to damage the left hepatic artery or the left hepatic duct over the hepatoduodenal ligament, which is covered by the peritoneal lining of lesser sac ,5. RoutiThis video consists a cadaveric surgical demonstration of ligamentum teres hepatis resection over the portoumbilical fissure and a live patient video of 56 years old woman who had a recurrent high-grade serous ovarian cancer with widespread peritoneal implants. There were tumor implants at the perihepatic region on the umbilical ligament, which were resected."} +{"text": "InAction. It emerged from a series of presentations delivered during a workshop in Cascais , and from the research activities carried out during Short Term Scientific Missions supported through the COST Action FA1301. The overall aim is to fill some lacunae in knowledge of the digestive tract of cephalopod molluscs. In contrast to other areas of cephalopod biology such as the central nervous system and behavior and the visual system (see Hanke and Osorio), relatively little research has been done on this topic during the last 30 years.The collection of papers included in this Research Topic represents the outcome of some of the activities of the COST Action FA1301, CephsCephalopods are active marine predators counting more than 800 species. Understanding the physiological adaptations of these fascinating and complex molluscs poses important challenges for several disciplines. Knowledge of the normal functioning of the digestive system has wide ranging implications for fisheries, aquaculture, and for the care and welfare of cephalopods in the laboratory and in public displays. Alterations in digestive tract functionality are also a sensitive indicator of gastrointestinal and systemic infections, disease, and external stressors in the broadest sense. Most of the available knowledge on the cephalopod \u201cgut\u201d and physiology of digestion is based on assumptions by analogy with the vertebrate digestive system.This Research Topic includes 17 papers from more than 70 authors representing a contribution to the outcomes of COST FA1301. The papers present original data and/or reviews on: nutritional requirements and challenges offered by early-life stages, predatory behavior, anatomy and physiology of the cephalopod digestive system, and possible implications with animal care and welfare.Octopus vulgaris, is a prime species for cephalopod aquaculture but its potential is limited by poor survival during the paralarval stage. 
Limited knowledge of feeding habits and the digestive tract physiology are considered major barriers to progress and these areas are reflected by the nine paralarvae papers included here.Among other species, the common octopus, Nande et al. studied the predatory behavior and related movements of the digestive tract in 3-days post hatching (dph) O. vulgaris paralarvae hatched in the laboratory and fed on eighteen different types of wild caught prey. Capture and ingestion of decapod prey was less efficient (60%) than cladocerans or copepods (100%). Overall, paralarvae spent only ~5 min in contact with prey. The temporal sequence of digestive tract motility changes following food ingestion was quantified and pigmented food particles appeared in the digestive gland ~5 min after the crop had reached maximum volume.Fern\u00e1ndez-Gago et al. provide a 3D reconstruction of the digestive tract during the first 35 days of life, identifying four developmental periods , suggesting that the radula and digestive gland may take longer to mature than other regions. Despite the limitations of a morphological study, this paper provides background information against which the more functional studies can be considered.Estefanell et al. of wild caught with captive bred hatchlings highlights the potential utility of measurement of fatty acids such as n\u22123 highly unsaturated fatty acids from neutral and polar lipids in elucidating the nutritional requirements of O. vulgaris paralarvae. Louren\u00e7o et al. analyzed the lipid class content and fatty acid profiles of wild paralarvae and their potential prey and proposed that monounsaturated fatty acids (particularly C18:1n7) and the DHA:EPA ratio are trophic markers of the diet of paralarvae. The search for nutritional imbalance biomarkers is explored further by Morales et al. who measured changes in anaerobic and aerobic metabolism, fatty acid oxidation, and gluconeogenesis (from glycerol and amino acids) in O. vulgaris paralarvae during an extended part of this life-stage. Authors' findings suggest that phospholipid and n-3 HUFA-enriched Artemia reduced mortality and increased paralarval growth, thus contributing to the understanding of the ontogeny of metabolic pathways, an essential requirement for optimizing the diet of paralarvae in culture. A similar dietary enrichment was used by Garc\u00eda-Fern\u00e1ndez et al. to investigate the epigenetic regulation by diet and age of octopus paralarvae. An age-related demethylation was observed during the first 28 days of life and was accelerated by dietary n-3 HUFA enrichment. A proteomic approach allowed authors to identify specificity in the diet , and allowed comparison of fed and food deprived paralarvae suggesting that arginine kinase, NAD+ specific isocitrate dehydrogenase and S-crystallin 3 may be useful as biomarkers of nutritional stress . Metagenomics provided a different approach to assessing diet in wild paralarvae, by analysis of DNA from the dissected digestive gland to identify Molecular Taxonomic Units recognizing decapods, copepods, euphausiids, amphipods, echinoderms, molluscs, and hydroids as part of the natural diet. Some paralarvae showed a preference for cladocerans and ophiuroids and overall seasonal variability was shown in the presence of copepods and ophiuroids in the diet.A comparison by Roura et al. investigated the paralarval microflora (microbiome). 
Both wild caught paralarvae and those newly hatched in captivity had similar microbial communities which the authors termed the \u201cCore Gut Microflora,\u201d the presence of which they considered indicative of healthy O. vulgaris paralarvae. A finding of particular relevance to aquaculture was that after 5 dph, in comparison to newly hatched paralarvae the number of bacterial species was reduced by ~50% with two families (Mycoplasmataceae and Vibrionacea) dominating. The importance of the microbial diversity provided by zooplankton in the wild in contrast to the typically used Artemia diet in captivity is discussed by the Authors.Villanueva et al.) including an account of the relative roles of photo-, mechano-, and chemo-reception in the detection of diverse prey types in relation to living habits . A variety of hunting strategies are employed by different cephalopod species and the authors make an interesting comparison with marine and terrestrial vertebrates. Attention is drawn to the neglected area of the ontogeny of predation by reference to the feeding behavior of both hatchlings and senescent cephalopods.A short overview of cephalopod predatory habits is also included in this Research Topic and digestion (by enzymes). The subsequent steps in digestion are described in detail in Octopus maya and Octopus mimus by Gallardo et al. by highlighting novel data on the temporal pattern of absorption and assimilation, and providing preliminary evidence that lipid mobilization is dependent upon habitat water temperature.A contribution to the anatomy and physiology of the digestive system of cephalopods is given by five papers. Rodrigo and Costa. High concentrations of both essential and non-essential metals with metal homeostasis involving spherulae formation, chelation and metallothionins characterize the DG. The authors also discuss the involvement of the DG in the storage and metabolism of organic toxicants including amnesic shellfish toxins , polycyclic aromatic hydrocarbons and polychlorinated biphenyls, and comparisons made with the mechanisms operating in the vertebrate liver including biotransformation, conjugation and elimination with a focus on the cytochrome P450 system.The digestive gland in cephalopods is the main organ of metabolism and is analogous to the vertebrate liver. It secretes a range of digestive enzymes into the lumen of the digestive tract, receives digested nutrients from the caecum which it assimilates and subsequently transfers to the haemolymph (glucose and lipids). The digestive gland (DG) is also the main site of detoxification and storage of ingested marine pollutants as reviewed by Capaz et al. reported that exposure of adult cuttlefish to sea water with a 50% decreased oxygen for 1 h markedly increased breathing frequency (85%) and reduced oxygen consumption (37%), but there was only a small increase in mantle muscle octopine levels indicative of anaerobic metabolism. Complementary in vitro studies of protein turnover and Na+/K+ATPase activity (responsible for ionic gradient maintenance) enabled the authors to hypothesize that the reduced oxygen consumption in hypoxic animals was primarily due to reduced protein synthesis and Na+/K+ATPase activity.Understanding the metabolic adaptations of cephalopods to environmental changes is of growing importance because of predicting the effects of climate change and the consequences of coastal eutrophication and assessing the impact of intensive aquaculture. O. vulgarisBaldascino et al. 
utilized RT-PCR to reveal the neurochemical complexity of the gastric ganglion with evidence for putative peptide and non-peptide neurotransmitters and/or their receptors . A comparison of gene expression in the gastric ganglion of animals with relatively high or low levels of infection with the common digestive tract parasite Aggregata octopiana showed differential gene expression .In The regulation in European Union states of scientific research utilizing cephalopods has necessitated the development of guidelines for their care and welfare in the laboratory are challenges that can possibly be overcome by using tools such as ultrasound to monitor movements of the digestive tract or fecal analysis as a \u201creporter\u201d of digestive tract function.Sykes et al. Authors discuss the challenges of feeding cephalopods in captivity and particularly issues around: live food and prepared diets, feeding frequency and quantity, the impact of a range of experimental interventions on the digestive tract, and a discussion of the impact of food deprivation on the overall health and welfare of the animal.A wide-ranging overview of the relevance of understanding digestive tract functionality to the welfare of cephalopods in the laboratory and aquaculture is given by All authors have made a substantial, direct and intellectual contribution to the editorial.The authors of this editorial are co-authors of one or more of the publications discussed in this Editorial. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "We report an unusual case of fatal air embolism into the superior mesenteric artery in a patient, who underwent replacement of the ascending aorta for aortic dissection type A. CT performed twice on the first postoperative day showed abundant air in the superior mesenteric artery and its branches indicating air embolism with no signs of bowel necrosis. On the second postoperative day, the patient underwent extensive bowel resection due to bowel ischemia and died on the third postoperative day on MODS/SIRS. A 50-year-old male patient with aortic dissection originating just above the aortic valve and extending down to the common iliac arteries and signs of acute renal insufficiency (creatinine 292 \u03bcmol/l). CT showed good postoperative result in the ascending aorta, but large amount of air in the branches of the superior mesenteric artery up to the arcades was found . The CT showed distribution of the intra-arterial gas more into the periphery and into the wall of the bowel loops that still did not display signs of ileus Figure . The nexVascular air embolism, however rare this event is, can be potentially fatal . It is a3Air embolism into the SMA is a very rare event that results in mesenteric hypoperfusion. Signs of bowel paralysis and gas in the portal vein may be absent due to residual perfusion of the bowel that may be grossly insufficient. Unfortunately, because of extensive distribution of gas in the mesenteric arteries up to the arcades and later in the bowel wall, endovascular aspiration cannot be expected to yield any results."} +{"text": "A non-ossified unilateral subcutaneous fibroma was diagnosed in the distal femoral region of a 5-year-old Nooitgedacht mare. Histopathological examination of the excised mass revealed long interweaving bundles of semi-mature monotonous collagenous connective tissue with fusiform nuclei without mitotic figures. 
The mare made an uneventful recovery following surgical removal of the neoplasm. Subcutaneous fibromas should be considered in the differential diagnosis of skin swellings associated with the limbs of horses. Neoplasia of the appendicular skeleton in the horse is unusual .A radiographic examination of the distal right femur was unremarkable and ultrasonographic evaluation revealed a well-defined subcutaneous homogenous mass with moderate echodensity. No abnormalities were palpable or visualised ultrasonographically at the local or regional lymph nodes to indicate metastasis.The mass, which was of firm consistency and grey to white on cut surface as previously reported Hendrick , was surImmunohistochemical examination was performed by the avidin\u2013biotin complex technique into the proliferating immature dermal fibroblasts in the horse. Subcutaneous fibromas should be considered in the differential diagnosis of skin swellings associated with the limbs of horses. Surgical excision of the mass in this case report resulted in a full resolution of the clinical signs and is recommended in the treatment of this form of benign neoplastic mass in horses."} +{"text": "Background: The availability of a number of agents that are efficacious in patients with metastatic prostate cancer (mPC) has led to them being used sequentially, and this has prolonged patient survival. However, in order to maximize their efficacy, clinicians need to be able to obtain a reliable picture of disease evolution by means of monitoring procedures. Methods: As the intensive monitoring protocols used in pivotal trials cannot be adopted in everyday clinical practice and there is no agreement among the available guidelines, a multidisciplinary panel of Italian experts met to develop recommendations for monitoring mPC patients using a modified Delphi method. Results: The consensus project considered methods of clinically, radiographically, and biochemically monitoring patients with metastatic hormone-sensitive and metastatic castration-resistant prostate cancer undergoing chemotherapy and/or hormonal treatment. The panelists also considered the methods and timing of monitoring castration levels, bone health, and the metabolic syndrome during androgen deprivation therapy. Conclusions: The recommendations, which were drawn up by experts following a formal and validated consensus procedure, will help clinicians face the everyday challenges of monitoring metastatic prostate cancer patients. The prognosis of patients with metastatic prostate cancer (mPC) has dramatically improved over the last 10 years as a result of the introduction of a number of agents that are capable of significantly improving the overall survival of castration-resistant patients in everyday clinical practice. 
Patients with metastatic castration-resistant prostate cancer (mCRPC) can now be managed using two chemotherapeutic agents, docetaxel ; two newIn addition, the sequential use of these therapeutic options has further prolonged patient survival in comparison with both historical and current therapies ,11,12, tThe intensive monitoring protocols used in pivotal trials clearly cannot be adopted in everyday clinical practice, but there is still no agreement concerning the frequency or methods of monitoring CSPC and CRPC patients on and off treatment during the evolution of their disease treatment among the guidelines issued by the European Association of Urology (EAU), the American Urology Association (AUA), the European Society of Medical Oncology (ESMO), the Italian Association of Medical Oncology (AIOM), and the National Comprehensive Cancer Network (NCCN).The situation is further complicated by the growing use of new imaging methods such as choline or PSMA positron emission tomography (PET) and whole-body magnetic resonance imaging (wbMRI). These seem to be more sensitive than traditional bone scintigraphy (BS) and computed tomography (CT), which are significantly limited in assessing disease burden particularly in the case of skeletal involvement. The new imaging methods could therefore improve our ability to detect and quantify the burden of bone and soft tissue metastases, but there is still no agreement as to how they can be used to evaluate and classify treatment responses, and the effective management of patients with advanced prostate cancer requires accurate, reproducible and validated methods.In this complex clinical scenario, and with the participation of the Italian Association of Medical Oncology (AIOM), the Italian Association of Radiobiology (AIRB), the Italian Association for Radiation Oncology (AIRO), the Italian Society of Community Urologists (AURO.it), the Italian College of Chief Medical Oncologists (CIPOMO), and the Italian Society of Urology (SIU), the Italian Society of Uro-Oncology (SIUrO) organized a multidisciplinary expert consensus project with the aims of reviewing the available guidelines and evidence-based data, and making practical recommendations concerning the monitoring of patients with mCSPC and mCRPC.In this section the results are described via the statements discussed by the consensus panelists, which were summarized in the With a consensus of 86%, the panelists recommended that:The standard monitoring plan for an mCSPC patient who is a candidate for ADT alone should include a clinical \u00b1 biochemical assessment every 12 weeks for the first 12 months, and every 24 weeks thereafter.With a consensus of 84%, the panelists recommended that:Imaging assessments (preferably CT and BS) of an mCSPC patients who is a candidate for ADT alone should only be made in the case of a biochemical and/or clinical relapse.With a consensus of 90%, the panelists agreed that:The factors that can individually change the initial monitoring plan of an mCSPC patient who is a candidate for ADT alone are age at the time of diagnosis, Gleason score, symptoms, the number and site(s) of metastases, the time of onset of metastases , and the time interval between radical local treatment and the onset of metastases.With a consensus of 90%, the panelists agreed that:The factors that may modify the monitoring schedule of an mCSPC patient being treated with ADT alone are the trend of PSA levels and disease-related symptoms, such as a worsening in performance status, the occurrence of a 
skeletal event, or a change in analgesic treatment.Published evidence concerning the monitoring of such patients has been provided by the CHAARTED and LATITUDE trials ,13. CHAAWith a consensus of 93%, the panelists therefore recommended that:An mCSPC patient who is a candidate for treatment with ADT + docetaxel should be clinically evaluated at of each treatment cycle , whereas PSA levels should be repeated after at least the third and sixth treatment cycle.The CHAARTED trial did not provide any indications concerning imaging assessments during or after chemotherapy , but theWith a consensus of 91%, the panelists therefore recommended that:An imaging assessment of a patient with mCSPC who is a candidate for treatment with docetaxel and ADT should be made at the end of docetaxel treatment using the same methods as those used at the time of the initial evaluation (preferably CT and BS).There is very little evidence indicating whether there are factors that can be assessed before the start of treatment that could change previously planned biochemical, radiographic and clinical monitoring procedures. Disease volume is a well-established prognostic factor, as is the definition of high risk used in the LATITUDE trial, which was based on the number of bone metastases, Gleason score, and the presence of visceral metastases . A retroWith a consensus of 81%, the panelists therefore agreed that:That there is no factor that should modify the standard monitoring schedule of an mCSPC patient who is a candidate for treatment with ADT + docetaxel.Published evidence, which mainly relates to mCRPC patients, indicates that an increase in PSA levels alone is not sufficient to indicate disease progression as a sigWith a consensus of 85%, the panelists therefore agreed that:Increasing PSA levels and worsening disease-related symptoms may require an earlier re-evaluation of an mCSPC patient being treated with ADT + docetaxel than that laid down in the initial monitoring plan.The rate of progression in the STAMPEDE study, which tested the benefit of adding docetaxel to ADT alone in patients with mCSPC, was about 25% after 12 months of treatment and 40% after 24 months of treatment . After cWith a consensus of 92%, the panelists consequently recommended that:The standard monitoring plan of an mCSPC patient without progressive disease who has concluded docetaxel treatment but is continuing ADT should include clinical and biochemical assessments at least every 12 weeks.In accordance with the strategy adopted in the CHAARTED trial and with a consensus of 89%, the panelists recommended that:A radiographic assessment (preferably CT and BS) of an mCSPC patient who has concluded docetaxel treatment but is continuing ADT is only required in the case of clinical and/or biochemical progression.On the basis of experience and with a consensus of 92%, the panelists agreed that:The factors that may modify the monitoring schedule of an mCSPC patient who has concluded docetaxel but is continuing ADT are PSA level, the appearance of symptoms, and the biological/clinical aggressiveness of the disease.The occurrence of events such as an increase in PSA levels and/or the onset of pain in an mCSPC patient undergoing ADT who has been previously treated with docetaxel should obviously lead to the timing of the initially scheduled assessments being brought forward. 
At the same time, the presence of features indicating particularly aggressive disease, such as low PSA values in the presence of a tumour with a high Gleason score, should suggest regular radiographic as well as clinical and biochemical assessments.With a consensus of 85%, the panelists accordingly agreed that:The factors that may modify the monitoring schedule of an mCSPC patient undergoing ADT who has been previously treated with docetaxel are an increase in PSA levels and/or the onset or worsening of disease-related symptoms such as a worsening performance status, the occurrence of a skeletal event, an increase in pain therapy.On the basis of experience and with a consensus of 85%, the panelists recommended that:The standard monitoring plan of an mCRPC patient who is a candidate for chemotherapy should include a clinical assessment at every cycle.On the basis of experience and with a consensus of 85%, the panelists recommended that:The standard monitoring plan of an mCRPC patient who is a candidate for chemotherapy should include a PSA assessment at least every 6\u20138 weeks.On the basis of experience and with a consensus of 85%, the panelists recommended that:The first imaging assessment of an mCRPC patient who is a candidate for chemotherapy should be made after about 12 weeks using the same methods as those used for the baseline assessment (preferably CT and BS).On the basis of experience and with a consensus of 93%, the panelists agreed that:The factors that can modify the monitoring schedule of a patient with mCRPC receiving docetaxel treatment are an increase in PSA levels and the onset or worsening of disease-related symptoms such as a worsening performance status, the occurrence of a skeletal event, and an increase in pain therapy.On the basis of experience and with a consensus of 84%, the panelists recommended that:Imaging assessments of an mCRPC patient who has completed chemotherapy and shows no signs of progression should not be pre-planned, but depend on the results of clinical/biochemical assessments; in any case, it is recommended to use the same methods as those used for the baseline assessment (preferably CT and BS).ARTA-treated patients with mCRPC are usually clinically evaluated every four weeks in order to monitor toxicity and assess the onset of treatment-related adverse symptoms. In relation to biochemical monitoring, the two pivotal studies of ARTAs in this setting ,18 plannWith a consensus of 89%, the panelists accordingly recommended that:The standard follow-up schedule of an mCRPC patient who is candidate for ARTA should include a PSA assessment every 12 weeks and a clinical evaluation every four weeks.The two pivotal trials ,18 plannThe need for imaging assessments should be based on the findings of clinical/biochemical assessments.Evidence of more aggressive disease can certainly change the initially planned monitoring schedule of an mCRPC patient undergoing systemic ARTA treatment. The benefits of ARTA in docetaxel-na\u00efve mCRPC patients are generally greater than those recorded in patients who have previously received docetaxel ,4,5,20. The site of metastases and disease-related symptoms should be considered factors that may modify an initial monitoring schedule.The frequency of assessments initially planned during systemic ARTA treatment may be changed in the case of suspected disease progression in order to stop potentially ineffective treatment. In this regard, the EAU guidelines highlight the importance of the presence of disease-related symptoms. 
In addition, the panelists at the 2017 St. Gallen Consensus Conference stressed that, regardless of its kinetics, rapid PSA progression combined with other factors may indicate a worse prognosis . For theWhen deciding on changes in the frequency of assessments of an ARTA-treated mCRPC patient, the trend of PSA levels and the onset of disease-related symptoms should be considered.The aim of ADT is to maintain suppressed testosterone levels of <50 ng/dL (1.7 nmol/L). The initial phase of chemical castration is closely related to the reduction in PSA levels and so, as underlined by the EAU guidelines, testosterone suppression should be assessed in the case of biochemical progression. With a consensus of 89%, the panelists recommended that:The standard monitoring plan of a patient with advanced prostate cancer undergoing ADT should include a testosterone evaluation every time there is an increase in PSA levels.All mPC patients undergo ADT, which may have adverse effects on bone health and the cardiovascular system. It is estimated that a patient undergoing ADT may have a lumbar mineral density loss of 4.6% per year, and that there is a possibility of developing a bone fracture in up to 14% of cases . With a The standard monitoring plan of a patient with advanced prostate cancer (mCSPC/mCRPC) should include regular bone health assessments.Hypogonadism secondary to ADT may lead to insulin resistance and the consequent onset of metabolic syndrome . FurtherWith a consensus of 94%, the panelists therefore recommended that: Patients with advanced prostate cancer (mCSPC-mCRPC) treated with ADT should undergo regular metabolic assessments, particularly those at increased cardiovascular risk.Disease monitoring is one of the greatest challenges facing the clinicians who treat mPC patients because the possibility of sequentially administering the agents that have proved to be efficacious requires monitoring methods capable of providing a reliable picture of disease evolution. The ability to detect disease progression is crucial to enabling clinicians to stop an ineffective treatment that could lead to unnecessary side effects, and allowing them to propose a further treatment line that may be more efficacious in controlling the disease.The concept of disease progression in patients with mCRPC has been clearly defined by the Prostate Cancer Working Group, which has modified its definition over time in order to keep up with advances in our knowledge of disease biology and the introduction of new therapeutic options . The PCWThe disease status of an mCRPC patient is defined on the basis of three factors: clinical status , PSA levels (the trends of which need to be cautiously interpreted), and radiographic changes (which require the definition of clear progression criteria).Over the last 20 years, the assessment of PSA levels has been the mainstay of the management of prostate cancer patients, and biochemical responses or progression have driven the therapeutic choices of clinicians. It has already been established that changes in PSA levels reflect changes in the disease during the early stages of prostate cancer, but this is questionable in the castration-resistant phase. Previous editions of the PCWG guidelines have highlighted the fact that PSA flares may occur during the early courses of chemotherapy, and it is recommended that discontinuing treatment on the basis of increasing PSA levels alone should be avoided during the first 12 weeks . 
More reAs a result, clinical and imaging assessments have taken on a greater role in defining disease status. The latest version of the PCWG guidelines distinguishes the first evidence of progression from a clinical need to change or discontinue treatment by introducing the concept of clinical benefit; moreover, they also underline the importance of separating progression in existing lesions from the development of new lesions .In this complex scenario, planning careful and regular monitoring is a crucial means of ensuring that patients receive active agents for as long as they really control the disease, and that clinicians view the monitoring of progression status in the light of adopting new therapeutic options.Unfortunately, the PCWG guidelines do not provide any indications concerning the optimal timing of planned monitoring assessments, and the same is true of the prostate cancer guidelines issued by the main scientific societies. Furthermore, the monitoring plans used in the pivotal trials of agents active in mCSPC and mCRPC were designed to respond to trial and regulatory needs, and cannot be directly translated into everyday clinical practice. Finally, as the evidence provided in the literature is very limited and often confusing, clinician choices are frequently only based on their personal experience, which clearly does not guarantee optimal patient management as it may lead to the early discontinuation of a treatment in the absence of true progression, or the needless continuation of ineffective treatment because true progression is not recognized.The use of a formal method of developing recommendations on the basis of the consensus of experts is therefore one of the best ways of addressing some aspects of a scenario devoid of evidence, such as that of monitoring mPC patients.The experts\u2019 recommendations concerning the evaluation of clinical status and PSA levels varied mainly on the basis of treatment: it is suggested that patients with mCSPC treated with ADT alone (at least in the first year of treatment) or receiving ADT after docetaxel treatment should undergo a clinical assessment every 12 weeks, whereas patients receiving docetaxel should be clinically evaluated at each treatment course and undergo less frequent biochemical assessments.There were similar differences in the recommendations made for mCRPC patients treated with an ARTA or docetaxel. In the case of chemotherapy, the recommendations clearly reflect the everyday practice of clinically evaluating patients at each course in order to assess not only disease-related symptoms, but also treatment-related side effects. In the case of ARTA-based treatment, the suggestions underline the need to avoid the risk of monitoring the disease simply on the basis of serial PSA assessments, and indicate that, albeit less frequently, clinical assessments should be regularly planned in order to be able to capture signs of progressive disease.The recommendations concerning imaging monitoring also depend on the therapeutic context. No pre-planned imaging monitoring was recommended in the case of mCSPC patients treated with ADT (as the only treatment or after docetaxel administration) or mCRPC patients treated with an ARTA or after having received docetaxel because such monitoring was considered necessary only if a patient experiences a clinical and/or biochemical relapse. 
The possibility of using regular twice yearly imaging monitoring in order to capture the best imaging response to treatment of ARTA-treated mCRPC patients was discussed, but the degree of consensus did not reach the threshold of acceptance. It was recommended that an imaging assessment should only be repeated at the end of docetaxel treatment in patients with mCSPC, and after 12 weeks\u2019 treatment in patients with mCRPC. It is worth noting that, regardless of therapeutic context, it was always strongly recommended to use the same imaging techniques as those used at baseline, and all of the recommendations indicate that the preferred techniques are CT and BS. This preference reflects caution concerning new imaging techniques that are expected to be more sensitive than traditional techniques, but do not have standardized criteria for evaluating response. In any case, PET-PSMA and wbMRI are still only available at very few centers and are not widely used.It is also worth noting that, in all but one clinical context, a number of factors were identified whose presence at baseline may indicate a different degree of disease aggressiveness, and should therefore be considered when planning monitoring frequency because some modifications to standard monitoring programs may be necessary. The only situation in which no such variables were identified was in the case of mCSPC patients who were candidates for docetaxel treatment, meaning that the presence of de novo metastases is per se a sign of aggressiveness and that no other factors need to be considered.There are also some statements concerning the factors that may change an initially defined monitoring schedule: regardless of disease status (mCSPC or mCRPC) or the therapeutic context , these always included biochemical progression and the appearance or worsening of disease-related symptoms such as a worsening performance status, the occurrence of a skeletal event, and an increase in pain therapy.Other aspects of mPC patient monitoring that are not strictly related to evaluating the course of the disease were also discussed, and there was strong agreement that testosteronemia should be evaluated whenever PSA levels increase, that standard monitoring plans should include regular assessments of bone health, and that metabolic factors should be regularly assessed, particularly in the case of patients at increased cardiovascular risk.Clearly the clinical settings and treatment options addressed by the present Consensus are not able to fully cover all therapies that the quick evolution of PC management is progressively making disposable, requiring new editions of the Consensus. For example, monitoring procedures of treatment with abiraterone, which is approved for mCSPC in several European Countries but still not in Italy, or apalutamide and enzalutamide, which should have the same indication in the next future, will require specific discussions. Additionally, the use of apalutamide, darolutamide, and enzalutamide in a new disease setting, such as non metastatic CRPC, will open new challenges for the clinicians in defining the optimal monitoring procedures, which should be specifically addressed by new specific statements.Considering the fields that were not addressed by the Consensus statements, the present paper is to be considered as having limitations. 
These limitations will be overcome by a new edition of the Consensus covering the fields that the introduction of new active agents into the PC landscape will open. To this end, the results of the present Consensus Conference may help clinicians in managing their PC patients. For example, the different approach followed in evaluating the monitoring of patients treated with docetaxel, and of those who receive an ARTA, could be valuable; by contrast, the St. Gallen Consensus considered monitoring strategies regardless of the treatment strategy. The consensus process was structured around the Delphi technique. On the basis of the questions deserving clarification/in-depth analysis identified in phase 1, the Board members: (1) undertook a systematic review of the literature in the Medline, Embase and Cochrane databases, the ASCO, EAU, AIOM, and NCCN guidelines, and the St. Gallen Consensus Conference recommendations; and (2) carried out an on-line survey of urologists and medical and radiation oncologists, who were asked to choose from among various mPC management strategies. A specific statement was produced for each of the previously defined questions deserving clarification/in-depth analysis. Using a modified mini-Delphi approach, the statements were independently developed by each Board member, harmonized, and discussed during a final face-to-face meeting. Selected clinicians belonging to the involved scientific societies took part in a Consensus Conference panel to which the Board members presented the final statements and explained the reasons for their choices on the basis of evidence or experience. All of the panelists then voted on each statement with the aim of reaching a consensus threshold of 80%; if this threshold was not reached, the statement was discussed, revised, and voted on again up to four times until it was. The Consensus Conference panel included experts covering all of the specialties involved in treating mPC patients. Over the last ten years, the increasing availability of efficacious agents that can be sequentially used has made the management of mPC highly complex. In this scenario, clinicians have very little evidence on which to base the planning of an efficient monitoring program capable of capturing real disease progression in everyday clinical practice. It is hoped that the recommendations made above, which were drawn up by experts following a formal and validated consensus procedure, will help clinicians face the everyday challenges of monitoring mPC patients."} +{"text": "The presence of endoleaks remains one of the main drawbacks of endovascular repair of abdominal aortic aneurysms, leading to enlargement of the aneurysmal sac and, in most cases, to repeated interventions. A variety of devices and percutaneous techniques have been developed so far to prevent and treat this phenomenon, including sealing of the aneurysmal sac, endovascular embolisation, and direct sac puncture. The aim of this review is to analyse the indications, the effectiveness, and the future perspectives for the prevention and treatment of endoleaks after endovascular repair of abdominal aortic aneurysms. 
The detection rate of endoleaks depends on the imaging modalities used. Only a small percentage of endoleaks will require re-intervention. Treatment may include both endovascular and percutaneous routes. The endovascular aortic repair (EVAR) of abdominal aortic aneurysms was first described nearly three decades ago and has offered a crucial shift in the management of patients with aortic disease, particularly when open repair was not an option \u20133. A variety of EVAR devices were developed over the years offering a range of outcomes. There has been a substantial evolution in design and technology, from the initial tube grafts to the custom-made fenestrated and branched devices that are used today. EVAR has offered some benefits over traditional open surgical repair; however, there is a cost to pay, mainly the need for closer patient follow-up and sometimes the necessity of re-interventions \u20139. Type I occurs due to incomplete proximal (Ia) or distal (Ib) seal. This could be due to either inappropriate device selection, incorrect graft deployment, or disease progression. Imaging modalities are being refined, mainly for the early detection and characterisation of endoleaks, aiming for radiation-free modalities such as contrast-enhanced ultrasound (CEUS) and magnetic resonance imaging. Version 1.3 of the study on the early detection of endoleaks with CEUS (NCT02688751) was recently completed. The primary outcome is to assess the ability of CEUS to detect type I/III endoleaks, as defined by presence/absence on time-resolved CTA. The secondary outcome is the detection of type II endoleaks and the ability of CEUS to predict the likelihood of a secondary intervention. The study has also assessed healthcare costs related to each imaging modality, considering that EVAR follow-up carries an important economic impact. The results have not been made public yet. Considering the impact that endoleak prevention and treatment have on health economics, there is continuous research in the field, with an exponential increase predicted over the next 5 years. The main areas that will be developed are the following: Biomarkers that would predict aneurysm evolution. The best example is matrix metalloproteinase (MMP) activity, which has been associated with the process of aneurysm development. In essence, if there is a lack of balance between MMPs and their inhibitors, degeneration of the aortic wall is induced. It was previously shown that the serum level of MMP-9 is significantly higher in patients with abdominal aortic aneurysm and in patients with inadequate aneurysm exclusion after EVAR. A multicentre trial of serum levels of MMP-9 as a biomarker of endoleak (NCT01965717) has recently been completed. The aim of the study was to establish the correlation of MMP-9 with specific types of endoleaks and the requirement for re-intervention. The results have not been made public yet. Endostaples have offered satisfactory results after the completion of the pivotal study of the Aptus Endovascular AAA Repair System (NCT00507559). The ANCHOR study is currently recruiting patients (NCT01534819), aiming for a primary completion date in 2020. The primary outcome measures are the prevention of graft migration and the treatment of Type Ia endoleak. Navigation systems in CT offer more accurate needle placement, and as the number of direct sac interventions increases, accurate needle placement under CT fluoroscopy will be necessary. 
The Endoleak Repair Guided by Navigation Technology study (NCT01843322) is a small study of 27 patients that is recently completed and is aiming to delineate whether the treatment of type II endoleaks can be improved by adding navigation technology in terms of precision and reduction of radiation exposure.Novel polymers will be developed after the Nellix system, regardless of the fact that the results until today have not been as expected. The novel ANEUFIX system in the treatment of endoleaks is assessed in a feasibility study (NCT02487290). The study is a non-randomised, multi-centre safety and feasibility trial of Aneufix ACP-T5 to treat patients with isolated type II endoleaks in the presence of a non-shrinking AAA sac following an EVAR procedure; however, it has only recruited 4 patients at the moment.Radiation reduction can also be achieved with dual-energy CT that acquires two different photon spectra in a single acquisition. It can be used to detect endoleaks with good accuracy and at a reduced radiation exposure and some preliminary data is already available .BiomarkWe may conclude that as the treatment options for endovascular repair of abdominal aortic aneurysms and the complexity of devices increase, there will be an increased necessity of prevention and management of endoleaks. Radiology is crucial in the management of such a phenomenon and needs to offer a number of solutions in the endoleak prevention and management."} +{"text": "In total, within a period of 4.5 years of the plant operation, 1853 Mg of fuel was produced and successfully co-combusted with coal in a power plant. The research demonstrated that in the waste water treatment sector there exists energy potential in terms of calorific value which translates into tangible benefits both in the context of energy generation as well as environmental protection. Over 700,000 Mg of bio-sewage sludge is generated annually in Poland. According to findings of the study presented in the paper, the proposed solution could give 970,000 Mg of dry mass of biomass qualified as energy biomass replacing fossil fuels.This paper aims to analyze the economic feasibility of generating a novel, innovative biofuel\u2014bioenergy\u2014obtained from deposit bio-components by means of a pilot installation of sewage sludge bio-conversion. Fuel produced from sewage sludge biomass bears the potential of being considered a renewable energy source. In the present study, 23 bioconversion cycles were conducted taking into consideration the different contents, types of high carbohydrate additives, moisture content of the mixture as well as the shape of the bed elements. The biofuel was produced using post fermentation sewage sludge for industrial energy and heat generation. Based on the presented research it was concluded that the composite biofuel can be co-combusted with hard coal with the optimal percentage share within the range of 20\u201330% w/w. Sewage sludge stabilized by means of anaerobic digestion carried out in closed fermentation chambers is the final product. The average values of the CO The development of sewerage infrastructure and household sewage connections has become a priority objective in addressing the issue of preventing soil and water pollution in Poland. Households, along with other sources of sewage, are connected to sewerage systems which route the effluents to a waste water treatment facility in order to remove the contaminants. 
Sewage sludge is a by-product generated in the process of industrial and municipal waste water treatment. However, with the development of sewage infrastructure, the volume of sewage sludge increases and, as a consequence, there arises the problem of its proper management.Sewage sludge is a dispersive system in which the non-dispersive phase is a liquid phase in the form of water with dissolved substances while the dispersed phase constitutes a solid phase in the form of insoluble parts or a gaseous phase in the form of a gas dissolved in liquid . Sewage Currently, sewage sludge can be utilized also as a soil improver or for engineering purposes in degraded land reclamation after determining the permissible level of heavy metals concentrations in compliance with the regulation of the Minister of Environment on municipal sewage sludge . MoreoveIn light of biding legal regulations, the landfilling of sewage sludge is the least desirable method of its management. Due to its changeable content, the sludge generated during the process of wastewater treatment constitutes a major problem in terms of its stabilization and utilization. So far, there does not exist a single optimal method of sewage sludge management. The selection of the technology is made on an individual basis and depends on the size of the particular wastewater treatment facility as well as on the characteristic of the processed sewage .A construction of an incineration plant on the premises of the wastewater treatment facility in Rybnik Orzepowice, Poland, would be a capital-intensive project. Besides, the combustion process produces ash of a high concentration of heavy metals which is considered hazardous waste .In addition, the incineration process is accompanied by heat generation for which there is no demand on the premises of the wastewater treatment facility. It is biogas combustion that provides the heat necessary for technological purposes, central heating and hot water. Excess heat would have to be released to the atmosphere.The activities aiming at reducing the volume of landfilled sewage sludge and increasing the degree of sewage sludge conversion as well as the development of thermal conversion technologies are in accordance with the preferred European standards and legal regulations .Fuel produced from sewage sludge biomass bears the potential of being considered a renewable energy source (RES) ,18,19,20The aim of this paper is to analyze the economic feasibility of biofuel production using an innovative technology of sewage sludge management which may constitute an alternative to thermal utilization of sewage sludge.In light of the existing limitations concerning natural management of sewage sludge which result from legal regulations and the scarcity of potential areas suitable for alternative utilization, thermal disposal acquires more and more importance. The thermal methods of sludge stabilization and utilization include incineration, co-incineration as well as pyrolysis ,18,19. OThe research was focused on the development and implementation of a cost-effective environmentally safe technology of waste recycling and utilization, including energy recovery, by means of thermal and biochemical conversion processes ,22. 
In t2 combustion process and, in consequence, the emission of dioxins and NOx.The physical and chemical parameters which characterized the stabilized sewage sludge are presented in The bioconversion installation processes sewage sludge to produce a surrogate to be used for the purpose of professional energy generation see . The obtWithin the research framework, 23 bioconversion cycles were conducted taking into consideration the different contents, the types of high carbohydrate additives, moisture content of the mixture as well as the shape of the bed elements. The research was conducted for an optimal content of the mixture presented in The analyses of bioconversion, stabilization , combustion and emissions confirmed that a sphere of 20 mm diameter constitutes an optimal shape and size of the surrogate. Moldings of such shape and size characterize of optimum porosity of the bed within the bioconversion and stabilization processes, good mechanical resistance, uniform combustion and the size of the molding fitting into the pea coal category, which in the case of co-combustion with hard coal has a beneficial effect on the bed structure.Based on the research findings, it was observed that the composite biofuel can be co-combusted with hard coal while the optimal percentage share is within the range of 20\u201330%. 2 emission and absorption in the process of photosynthesis and combustion. As distinct from simple forms of biomass, the only difference is the location of the said biomass in the food chain.The obtained results contributed to the development of a new generation, composite biofuel technology dedicated for industrial energy generation and district heating, including various forms of energy co-generation. Sewage sludge stabilized by means of anaerobic digestion carried out in closed fermentation chambers is the final product. The organic substance in the municipal waste water is an element of food chain; therefore, it must be classified as biomass of zero COBiogas mainly composed of methane, carbon dioxide, hydrogen sulfide and water vapor used directly for the purpose of heating or energy co-generation; Post fermentation biomass which after dewatering forms sewage sludge categorized as waste;Filtrate rerouted to the preliminary node of waste water purification.As a result of anaerobic digestion of waste water the following three streams are produced:Streams 1 and 3 are not classified as waste. Stream 2 which does not have a direct application is classified as waste according to traditional technology of waste water treatment. The obtained research results enabled to redefine the categorization of the sewage sludge into \u201cpost fermentation biomass\u201d which in the innovative waste water treatment technology is utilized as a component of a composite biofuel constituting a final fuel product dedicated for the energy and energy co-generation sectors. The sequential bioconversion of post fermentation biomass allowed to yield a market value composite biofuel.The primary objective of the research was the production of a fuel suitable for green energy generation by means of energy biomass combustion in industrial power plants and heating plants. 
Equally important was the issue of complying with the requirements stipulated in the EU Directive 2009/28/EC of the European Parliament and of the Council of 23 April 2009 on the promotion of the use of energy from renewable sources.Nationwide implementation of the method enabled the production of approximately 830\u2013950 thousand Mg of dry energy biomass with calorific value in the range of 14\u201318 GJ/Mg. The implementation of the technology significantly improved the available schemes of obtaining plant biomass for energy purposes and at the same time mitigated the sequestrated fuel balance in Poland. Additionally, the novel product, as an alternative to traditional plant biofuels, has tremendous environmental implications because it replaces monoculture farming and contributes to sustaining forest resources which are scarce in Poland. Other important benefits include the improvement of local ecosystems in the context of sewage sludge management, the elimination of interim repository sites characterizing of odor nuisance as well as the necessity of hygienization. Research projects which aim at improving the thermal methods of sewage sludge management bear the potential of decreasing its volumes discharged to the environment. The fact that the above activities also result in producing electricity and heat, reducing in this way the combustion of conventional energy carriers, for example hard coal, creates a synergy effect which can be translated into tangible economic as well as environmental profits .The profitability analysis of the product encompasses cost savings resulting from decreasing the expenditures connected with hygienization and transportation of sewage sludge and the income from selling the final product of bioconversion as well as the costs of energy biomass production. The analysis is based on the assumption that the final product will be contracted by a major industrial electricity and/or heat generation plant. The form of the fuel allows long distance transportation; however, it has an impact on the price.In regard to the calculation of operational costs, it was assumed that 5475 Mg/year of sewage sludge characterized by 80% moisture content is placed in the homogenizer together with a total volume of 471 Mg/year of deposit biomass, activated bacteria cultures as well as reactive components. The ingredients are next mixed in the homogenizer; the obtained mixture of 5946 Mg/year is routed to the bio-converter where the process of an effective maceration takes place. After the bioconversion process has been completed, the biomass is subject to shaping and thermo inclusion to achieve the final energy biomass in the volume of 2245 Mg/year. 2 emissions through the production of energy biomass dedicated for professional electricity and heat generation.Within the framework of the development phase, risk assessment and sensitivity analyses were carried out against the changes of market and economic conditions concerning the sale of the bioconversion product. A reassessment demonstrated that the variability of market parameters falls within the predetermined range, which proves the economic viability of the project. The conducted analyses also confirm the beneficial environmental impact due to eliminating the need of repository and transportation. Another important aspect is the compliance with the requirements of the EU Directive of reducing COThe method may be applied in any waste water treatment facility with a modest capital expenditure. 
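The operational figures quoted in the cost calculation above lend themselves to a quick arithmetic check. The short Python sketch below, written for illustration only, reproduces the reported mass balance of the homogenizer input and fuel output and converts the reported national biomass potential into an indicative energy range using the stated calorific values of 14-18 GJ/Mg; the derived percentage and petajoule figures are not taken from the study.

```python
# Back-of-the-envelope check of the mass-balance figures quoted above.
# The input tonnages and the calorific-value range come from the text; the
# retained-mass share and the energy estimate are illustrative derivations.

SLUDGE_IN = 5475.0        # Mg/year, dewatered sewage sludge (80% moisture)
ADDITIVES_IN = 471.0      # Mg/year, deposit biomass, bacteria cultures, reagents
MIXTURE = SLUDGE_IN + ADDITIVES_IN
FUEL_OUT = 2245.0         # Mg/year, shaped energy biomass after bioconversion

print(f"Homogenizer input: {MIXTURE:.0f} Mg/year (text reports 5946 Mg/year)")
print(f"Mass retained as fuel: {FUEL_OUT / MIXTURE:.0%} of the wet mixture")

# National-scale potential reported in the paper.
NATIONAL_BIOMASS = 970_000.0   # Mg of dry energy biomass per year
LHV_RANGE_GJ = (14.0, 18.0)    # GJ/Mg calorific value
low, high = (NATIONAL_BIOMASS * v for v in LHV_RANGE_GJ)
print(f"Indicative energy content: {low / 1e6:.1f}-{high / 1e6:.1f} PJ/year")
```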
Energy biomass constitutes the final product to be used in the energy sector obliged to co-firing of biomass with fossil fuels in accordance with the EU policy.The volume of bio-sewage sludge currently generated in Poland converted to dry mass accounts for over 700,000 Mg. Using the proposed method, it is possible to produce, on the national level, biomass in the amount of 970,000 Mg of dry mass qualified as energy biomass replacing fossil fuels and directly dedicated for professional electricity and heat generation.At the same time, the processed sewage sludge is not landfilled. The fact that renewable resources of sewage sludge biomass are utilized for energy purposes instead of exploiting crop and forest cultivations may be considered as yet another advantage. The product is not a competitor in relation to agricultural food production.The research demonstrated that in the waste water treatment sector there exists energy potential in terms of calorific value which translates into tangible benefits both in the context of environmental protection and professional energy generation.The conducted economic analysis confirmed that the sewage sludge bioconversion product may constitute an alternative biofuel to achieve the optimal composition of the required energy mix, and most importantly, a product which does not increase the unit cost of producing 1 MWh of electrical energy."} +{"text": "Patellar tendinopathy (PT) is an overuse injury of the knee. The mechanism of injury is associated with repetitive stress on the patellar tendon of the knee as a result of explosive movement. Patellar tendinopathy is prevalent in all populations and is associated with intrinsic and extrinsic risk factors.Primarily, the objective was to report on the intrinsic and extrinsic risk factors for PT, entailing a systematic review of the literature; the secondary objective was to use these risk factors to compile a proposed PT screening tool from the review and standard outcome measures.A systematic review was undertaken according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Elimination criteria of the articles included duplicates, titles, abstracts and methodological quality. The evidence was collected, characterised with regard to the intrinsic and extrinsic risk factors and summarised descriptively.The search yielded 157 feasible articles prior to commencement of article elimination. Six articles were included with a mean methodological quality score of 69%. Eight intrinsic and five extrinsic risk factors were identified. These identified risk factors are all relevant to the pathology and formed the basis for a proposed PT screening tool. The Victorian Institute of Sports Assessment for Patellar Tendinopathy Questionnaire, Visual Analog Scale and the Pain Provocation Test are also included in the proposed test.Intrinsic and extrinsic risk factors for PT were identified, and consequently, the proposed PT screening tool was formulated for possible future testing in appropriate studies.Prevention of PT through intrinsic and extrinsic risk factor identification, and implementation in the clinical setup as a possible outcome measurement tool with which to verify functional improvement in PT rehabilitation. 
Patellar tendinopathy (PT), an overuse injury Reinking often reThe physical diagnosis of PT is based on clinical and predominantly ultrasound examination, although findings may not necessarily be associated with the severity of the symptoms by Brukner and Khan between the searches by the authors. This ensured that results cross-referenced and that all eligible articles were included in the review. This interval was chosen to be a supplementary addition to a previously published systematic review on the causative risk factors and rehabilitation for PT in which articles were selected only up to October 2015 AND (rehab* or \u2018return to sport\u2019 or \u2018return to play\u2019 or \u2018motor re-educat*\u2019) and (exercise* or train* or sport*)Each author undertook the study selection process according to the inclusion and exclusion criteria in Methods for the Development of NICE Public Health Guidance checklists. The individual methodology scoring for each article is displayed in The eligibility and quality of the articles were appraised by the authors using two checklists . More thThe methodological quality scoring was performed on 13 articles that met the inclusion criteria, with 6 articles remaining for the systematic review.The data contained in the eligible articles were extracted and incorporated into a customised Microsoft Excel data spread- sheet developed by the authors and 3. TCombining of the data for the formulation of a meta-analysis was not the intention of this systematic review because of the differences of the results in terms of the variety of articles with different study populations. All the empirical evidence was collected, characterised with regard to the intrinsic and extrinsic risk factors for PT and summarised descriptively.Ethics approval was obtained from the Ethics Committee of the Faculty of Health Sciences, University of the Free State .The collective results of both independent searches yielded 157 feasible articles for inclusion prior to the commencement of article elimination . Six artn = 5) of the included articles provided detail on the level of participation. A combination of elite (66%) and recreational participants (50%) was described in four articles, with one article having a general study participant population.The demographic information obtained from the systematic review shows that all six of the included articles described the study population. Two articles consisted of exclusively either male or female participants, whilst the other four articles described both male and female participants. Eighty-three per cent , impaired lower limb muscle flexibility and muscle strength. The other identified intrinsic risk factors were body composition, leg length variances, anatomy of the foot, lower patellar pole and age of the study participants.Five extrinsic risk factors for PT were identified with the main extrinsic risk factor being the common prevalence of PT in sports that involve jumping (50%). The additional four extrinsic risk factors were heavy physical work in combination with jumping sports, level of sport participation , physical activity and type of sport.PT is a well-recognised pathology with an inclusive aetiology of intrinsic non-modifiable and intrinsic and extrinsic modifiable risk factors that are directly linked to overloading of the patellar tendon . 
The rationale behind this is that lower leg muscle strength, especially weakness surrounding the knee joint, contributes to patellar tendon strain by the abnormal distribution of load and malalignment of patellar tracking are all potentially relevant to the pathology and were used as the basis for the proposed PT screening tool . To broaInclusion of a pain provocation test (Malliaras et al. According to the authors\u2019 knowledge, there are no other PT screening tools. The proposed PT screening tool may possibly be useful in rehabilitation, as it includes a dual function of outlining likely intrinsic and extrinsic risk factors for the development of PT, the estimation of pain and functional impairments and is not indicated for any specific population. The value of the proposed PT screening tool will only be verified if properly tested in appropriate studies Bishop .The strength of this systematic review is that the intrinsic and extrinsic risk factors for PT have been identified. Risk factor identification promotes the development and implementation of prevention strategies in the management of this condition. The evidence on the risk factors was used to suggest a proposed PT screening tool which will need to be tested in appropriate studies. A limitation of this systematic review was the limited number of included articles and a probable reason might be the explicit exclusion of youth populations. Another limitation was the lack of randomised clinical trials to validate the results.Intrinsic and extrinsic risk factors for PT were identified in this systematic review. This evidence, as well as appropriate literature, formed the basis for the formulation of a proposed PT screening tool which will require testing to determine its usefulness."} +{"text": "The blunt edges of worn molds can cause the edge of the sheet metal to form a burr, which can seriously impede assembly and reduce the efficiency of the resulting motor. The overuse of molds without sufficient maintenance leads to wasted sheet material, whereas excessive maintenance shortens the life of the punch/die plate. Diagnosing the mechanical performance of die molds requires extensive experience and fine-grained sensor data. In this study, we embedded polyvinylidene fluoride (PVDF) films within the mechanical mold of a notching machine to obtain direct measurements of the reaction forces imposed by the punch. We also developed an automated diagnosis program based on a support vector machine (SVM) to characterize the performance of the mechanical mold. The proposed cyber-physical system (CPS) facilitated the real-time monitoring of machinery for preventative maintenance as well as the implementation of early warning alarms. The cloud server used to gather mold-related data also generated data logs for managers. The hyperplane of the CPS-PVDF was calibrated using a variety of parameters pertaining to the edge characteristics of punches. Stereo-microscopy analysis of the punched workpiece verified that the accuracy of the fault classification was 97.6%.The geometric tolerance of notching machines used in the fabrication of components for induction motor stators and rotators is less than 50 Notched workpieces with jagged edges (burrs) can seriously undermine the assembly of the stator and rotor [N) as well as the cutting shear force (S), which is a product of the shear strength of the sheet metal (G), the thickness of the steel workpiece (T), and the geometry of the cutting edge (L). 
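Because the introduction above expresses the cutting shear force as the product of the sheet-metal shear strength (G), the workpiece thickness (T), and the length of the cutting edge (L), a short worked example may make the relationship concrete. The sketch below is illustrative only; the material values are hypothetical and do not come from the study.

```python
# Illustrative sketch of the cutting-force relationship stated above:
# the shear force S is modelled as S = G * T * L.
# The numerical values below are hypothetical examples, not data from the study.

def cutting_shear_force(shear_strength_mpa: float,
                        thickness_mm: float,
                        cut_edge_length_mm: float) -> float:
    """Return the nominal cutting shear force in newtons (S = G * T * L)."""
    # MPa * mm * mm = N, so no extra unit conversion is needed.
    return shear_strength_mpa * thickness_mm * cut_edge_length_mm


if __name__ == "__main__":
    # Hypothetical electrical-steel sheet: 300 MPa shear strength,
    # 0.5 mm thickness, 120 mm total notch edge length.
    s = cutting_shear_force(300.0, 0.5, 120.0)
    print(f"Nominal shear force: {s / 1000:.1f} kN")
```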
The speed of the stroke (V) and the geometry of the punch (P) can also affect the life of a mechanical mold. Sampling inspections and scheduled maintenance require downtime, particularly for the estimation of the mechanical performance of the die. Researchers have used mathematical models, numerical simulations, and experimental procedures to investigate the mechanical deformation of sheet metal [In the fabrication of induction motors, the mechanical tolerance for component variation is very low due to the customized physical layer, cyber layer, and application layer. The notching machine provided 290 strokes per minutes (SPM), and the sampling rate of the PVDF film was 10,000 points per s. The embedded PVDF sensors could sustain over 10,000 strokes and demonstrated the potential of industrial measurement. This work upgraded the mechanical mold from a sensorless device to an Internet of things (IoT) component. The edge computer evaluated the performance of the die mold instead of manual diagnosis. The CPS-PVDF system cooperated with different language environments: MATLAB collected spectrum data from the PVDF sensor; LabVIEW recorded waveform from the PVDF films; and Python encrypted, compressed, and uploaded the experimental results. The customized format of the experiment data provides high flexibility for various sensors and the database on the cloud server is expandable for streaming data. The Internet can create active server pages (ASP) and would allow the calculation of the gross productivity of a factory located in a suburban region. This would allow general users to subscribe to standard information with limited conditions, and authorized analysists all around the world to receive rapid and complete information, even in different time zones.Hz. The drift ratio of the first three peaks obtained from the measured frequencies was small and was related to the mechanical performance of the punch/die during operations. We employed evidence-based features for fault detection as an alternative to multiple layers of invalid features, which would have required extensive training data and would have imposed a heavy computational burden. The program classified the condition of the mold in terms of sharpness or abrasion. The SVM function was designed to extract waveforms from the contact force measurements F denotes the operation of feature extraction:k denotes the notable features related to contact force in the temporal domain and in the frequency spectrum as well. The sizes of i, j, and k are indicated by I, J, and K, respectively. Edge/fog computation was conducted using zero mean normalized cross-correlation (ZNCC) to compute the degree of similarity between the measured signals and optimal/sub-optimal signals. The mathematical model of ZNCC is written as follows:b is the distance between selected points and the SVM hyperplane. The SVM-PVDF can be used for the online monitoring of a variety of mechanical molds.The results obtained under optimal and sub-optimal conditions yielded In this study, we customized PVDF sensors and retrieved contact force measurements during the stamping of a steel workpiece. A cam mechanism in the notching machine controlled the working displacement in the cutting of the sheet metal. An SVM program extracted patterns from this raw data to differentiate between situations involving sharp cutting tools and those performed using blunt cutting tools. 
Steel workpieces are core components of induction motors, and the geometric tolerance of the notched workpieces dominates the gross performance of the electromechanical machines. This research proposed a CPS-PVDF system with SVM criteria to evaluate the mechanical performance of notching machines. The in situ measurement of contact force enabled the classification of the shape/blunt status of the punching molds. The contribution of the article was to provide cost-effective metrologies for the production lines. The peak-to-peak values corresponded to the physical features of the contact force. The feature used in the SVM program was based on the direct measurement of contact force and this study decrypt the physical meaning related to sharp or blunt punch for the industrial applications. The high accuracy and small differences detected in the frequency domain were subsequently used in the SVM program. Each experiment involved 42 strokes; therefore, the SVM program sliced the measured waveforms into 42 individual segments for the computation of ZNCC. This work contributes to the early detection of abrasion seen in the mechanical molds and avoids downtime costs from unexpected failures. The ultimate objective was to maintain cutting quality while optimizing maintenance schedules. Stereo microscopy verified that the accuracy of the proposed scheme exceeded 95%. The proposed system is a highly cost-effective approach to real-time inspections and the long-term monitoring of industrial machinery."} +{"text": "With this research topic we provide an overview of the main tools regenerative medicine and stem cells research have to better understand and modulate bone and cartilage cell fate, both during natural healing processes and during the development of joint pathologies. Moreover, the contribution to the research topic with original research articles allow a further exploration toward the most advanced research in the field.What is the role of mechanical loading to determine cell fate during bone development? How can we modulate biomaterial proprieties to drive cellular differentiation of stem cells toward cartilage and bone? What we can learn about bone and cartilage cell fate by following natural healing processes and osteoarthritis development? This Research Topic explores these and other crucial questions in the field of bone and cartilage regeneration, with the intention to trigger the development of new research lines and further increase of knowledge.Hendrikson et al. show how different scaffold architectures have significant influence on stress and strain distribution, but also on the effective pore size and shape, which subsequently influence the fluid shear stress distribution. Angelozzi et al. discuss how the use of microfibrous alginate scaffolds containing gelatin or the more innovative urinary bladder matrix (UBM) are able to stimulate dedifferentiated chondrocyte to re-acquire their natural phenotype. Mesenchymal stem/progenitor cells (MSC) are often use to recapitulate the endochondral ossification process. For this reason Carroll et al. used MSC as a model to explore the role of cyclic tensile strain during their differentiation showing that this specific mechanical stimuli can play a role in promoting both intramembranous and endochondral ossification of MSC in a context-dependent manner. Dynamic mechanical compression is also one of the most used strategies to regulate cellular phenotype. 
However, as highlighted by Anderson and Johnstone, the lack of standardized methods and analysis to study chondrogenic differentiation and maintenance under this mechanical regimes make the comparison between the current literature difficult.Scaffold manufacturing and specific types and regimes of mechanical stimulation are known to be essential for supporting and promoting cellular differentiation. Specifically, Lo Sicco and Tasso provide an overview on the novel findings that impact bone fracture healing, with a particular focus on the role of inflammation, progenitor cell recruitment and their differentiation. Lozito TP's group, on the other hand, used the lizard tail regeneration model to determine the cellular origin of regenerated cartilage and muscle following tail loss, interestingly showing how cartilage cells can contribute to the regeneration of both muscle and cartilage tissue .Understanding the role and the mechanism of action of endogenous bone and cartilage repair by progenitor cells is pivotal to increase the knowledge around endogenous progenitor cell function and therefore improve the development of tissue repair strategies. Wong et al. reviewed the molecular pathways that (may) play a role in modulating cellular fate during endochondral fracture healing, Lesage et al. discussed the current methodologies to analyze cell differentiation during endochondral ossification using computational modeling and Javaheri et al. reviewed the interesting recent findings supporting the idea that hypertrophic chondrocytes have pluripotent capacity and may transdifferentiate into osteoblastic cells. To complete the overview on fracture healing, the group of Correa D proposed a comprehensive summary on tissue engineering strategies for fractures and bone defects, discussing not only the role of the cells but also the impact of the use of different biomaterials and grow factors to stimulate the healing process .In order to have a whole overview on endochondral fracture healing, Raman et al. addressed the key inflammatory factors and the main epigenetic changes that occur in chondrocytes during OA and how we may be able to reverse them. In parallel, the group of Welting TJM discussed the current knowledge regarding the cartilage endochondral changes occurring during OA , overall covering most of the literature available in the field up to date. It is also interesting to observe how knowledge on the endochondral ossification process can also derive by the observation of physiological processes apparently non related to it. One great example are vascular diseases, such as atherosclerosis, where vascular calcification is observed, with many aspects of the process of endochondral ossification apparent. Leszczynska and Murphy reviewed this scenario focusing on the (circulating) cellular players contributing to form ectopic bone and cartilage during atherosclerosis.Sometimes a pathological situation can be very useful to study physiological processes. Osteoarthritis (OA), a pathology of the diarthoroidal joints, is a diseased state where all the joint tissues are involved, leading to cartilage and bone changes. The issues discussed in this Research Topic address several important biological aspects that determine the fate of cells in the cartilage and bone leading to the repair of damaged tissues or the onset of disease. 
Improving our understanding of these processes will allow us to further refine regenerative medicine based approaches to the treatment of many bone and cartilage related pathologies.All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "This symposium describes the development and implementation of an interdisciplinary and novel person-centered care (PCC) communication tool in nursing homes (NH). PCC is a philosophy that recognizes \u201cknowing the person\u201d and honoring individual preferences. The communication tool is based on an assessment of NH resident likes and dislikes via the Preferences for Everyday Living Inventory (PELI). The PELI is an evidenced-based, validated instrument that can be used to enhance the delivery of PCC. In 2016, the Ohio Department of Medicaid (ODM) mandated NHs use the PELI as one of the factors that determine the quality portion of their daily Medicaid reimbursement rate. The Preferences for Activity and Leisure (PAL) Card was developed to communicate important resident preferences across care team members. In 2018, the PAL Card Project was approved by the Ohio Department of Aging as a Quality Improvement Project. The first presentation will describe the implementation of PAL Cards with n=43 NH providers. The second presentation will present data regarding the acceptability, feasibility, and appropriateness of the communication tool as rated by providers. The final presentation explores provider qualitative responses regarding the characteristics of the PAL Card communication tool related to effective implementation. The Discussant, Dr. Howard Degenholtz will discuss the implications of initiatives to address the quality of resident care."} +{"text": "SFP) is a serious problem in the egg production industry with regard to animal welfare and performance. The multifactorial causes of SFP are discussed in the areas of genetics, feeding, husbandry, stable climate and management. Several studies on the influence of manipulable material on the incidence of SFP in different environments and housing systems have been performed. This review presents current knowledge on the effects of litter and additional enrichment elements on the occurrence of SFP in pullets and laying hens. Because SFP is associated with foraging and feed intake behaviour, the provision of manipulable material in the husbandry environment is an approach that is intended to reduce the occurrence of SFP by adequate exercise of these behaviours. As shown in the literature, the positive effect of enrichment and litter substrate on SFP in a low\u2010complexity cage environment is evident. On the other hand, consistent results have not been reported on the influence of additional enrichment material in housing systems with litter substrate, which represent the most common type of husbandry in Northwestern Europe. Thus, further research is recommended.Severe feather pecking ( SFP) is a serious problem in the egg production industry with regard to animal welfare and performance. This review presents current knowledge on the effects of litter and additional enrichment elements on the occurrence of SFP in pullets and laying hens. 
While the positive effect of enrichment and litter substrate on SFP in a low complexity cage environment is evident, consistent results have not been reported on the influence of additional enrichment material in housing systems with litter substrate.Severe feather pecking ( Scientific papers were included if the effects of litter and/or enrichment elements on FP in pullets and/or laying hens housed in cage or barn systems were investigated in experimental or field studies and if a control group was implemented in the study. In Germany, stabling of beak\u2010trimmed pullets has been abandoned since 2017 by a voluntary agreement between the poultry industry and the Federal Government or whether the animals were kept in an environment with litter and, thus, within conditions of alternative housing regarding the presence of a floor substrate. Most previous investigations on the influence of manipulable material on SFP compared litter\u2010free systems on perforated floors (cages or enriched cages) to husbandry on different litter substrates but did not examine the effect of additional enrichment material in housing systems with litter.2In a series of studies, the presence of manipulable material was shown to improve the plumage condition in pullets and laying hens Figure\u00a0. BlokhuiWith the knowledge that manipulable materials can reduce behavioural disorders, several groups compared the applicability of different litter substrates. Huber\u2010Eicher and Wechsler kept chiIn a study on the preferences of different substrates for pecking, scratching and dust\u2010bathing, chicks preferred sand to straw and wood shavings to feathers in the first weeks of life Can the incidence and the severity of SFP be reduced by permanent or transient offers of additional enrichment material? (b) What role does the provision of these materials play during the rearing period and in possible switches (addition or omission) between the rearing and laying period? (c) What effects can be expected on the biological performance and on the economics of egg production? and (d) What is the suitability of the different groups of enrichment material with regard to the effects on SFP and does a combined use of several substrates increase the effects on behaviour?The authors declare no conflicts of interest.This work is based on a review of the literature. The authors confirm that they have adhered to the ethical policies of the journal, as noted in the author guidelines for publication."} +{"text": "If I should die, think only this of me:That there\u2019s some corner of a foreign fieldThat is forever England.Rupert Brooke, 1914Most military cemeteries overwhelm one with the vast number of markers that represent once living soldiers now buried far from home. Occasionally one finds a lonely, single grave of an unremembered death and wonders what must have happened many years ago. Most people will know of the famous poet Rupert Brooke who died of bacterial sepsis just prior to the Gallipoli landings, who is buried on the Greek island of Skyros and whose quotation appears above. However, not long before Gallipoli, another amphibious operation had taken place on the other side of the world and both the landing and its casualties are now largely forgotten. 
During antimalarial drug testing on Bougainville sixteen years ago, the staff of the Australian Army Malaria Institute came upon a lonely grave in Keita and its photograph appears as The Australian Naval and Military Expeditionary Force (ANMEF) was one of the first military actions of the First World War, deploying from Sydney in August 1914. The ANMEF was a rapidly raised, independent Australian force consisting of a mixed contingent of 2000 men sent north to capture the German colony of New Guinea [The several hundred men of the residual Australian forces in New Guinea then had to establish a civil-military administration across many islands with little infrastructure other than scattered coastal plantations. Small detachments were sent to the outlying areas to keep the appearance of government functioning including Madang on the northern coast, Lorengau on Manus, Angorum up the Sepik River, and Kieta on Bougainville. Usually these isolated outposts consisted of one to three officers , 20 other ranks, and some local policemen initially with no radio capability or dedicated boats for transport . BougainP. falciparum infection killed the last known Australian soldier to die of malaria in December 1965 in South Vietnam, mid-way in time between PTE Read\u2019s death and the present [P. falciparum remains one of the few infectious diseases capable of rapidly killing an otherwise healthy adult and requires a high level of awareness following visits to endemic areas, as recently demonstrated during a successfully treated malaria outbreak on the HMAS Newcastle while on patrol in the Indian Ocean.In January 1915 a long drought was broken and the subsequent rains initiated a malaria epidemic which infected most of the Rabaul garrison and filled the hospital. The outpost at Angorum was abandoned following malaria deaths of two soldiers. Daily compulsory quinine administration for paraShortly after the end of the war, the Australian War Memorial\u2019s Roll of Honour was formed and postal inquiries were sent to family members to obtain further details of those who had died. PTE Joseph Read was a 50-year-old plasterer living in South Australia, originally from England having immigrated to Australia in 1912. PTE Read would have been one of the few experienced soldiers in the ANMEF, having served in the Second Battalion of the Dorsetshire (39th) Regiment during the South African War in 1900\u20131901, and he had been awarded the Queen\u2019s South Africa medal with five bars. There was no mention of surviving family members other than a brother in England.PTE Read\u2019s grave is a solitary one being the only Commonwealth War Graves Commission burial in the Kieta cemetery. Other ANMEF malaria deaths occurred in Madang and Lae. Disease deaths seem to be often discounted against the apparently more noble image of death by missile injury in the face of the enemy. There were all too many of such traumatic deaths to follow during the First World War, as the massive cemetery at Tyne Cot, France with 11,956 burials and 34,946 memorials for those with no known grave, attests. As medical persons interested in tropical medicine we should strive, particularly during the events commemorating the centenary of the First World War, to remember those whose deaths were caused by infectious diseases and not let their equal sacrifice go un-noticed. 
Our struggle against infectious diseases is far from over and Bougainville remains one of the most malarious islands of the Pacific."} +{"text": "The posterior parietal cortex (PPC) of humans and non-human primates plays a key role in the sensory and motor transformations required to guide motor actions to objects of interest in the environment. Despite decades of research, the anatomical and functional organization of this region is still a matter of contention. It is generally accepted that specialized parietal subregions and their functional counterparts in the frontal cortex participate in distinct segregated networks related to eye, arm and hand movements. However, experimental evidence obtained primarily from single neuron recording studies in non-human primates has demonstrated a rich mixing of signals processed by parietal neurons, calling into question ideas for a strict functional specialization. Here, we present a brief account of this line of research together with the basic trends in the anatomical connectivity patterns of the parietal subregions. We review, the evidence related to the functional communication between subregions of the PPC and describe progress towards using parietal neuron activity in neuroprosthetic applications. Recent literature suggests a role for the PPC not as a constellation of specialized functional subdomains, but as a dynamic network of sensorimotor loci that combine multiple signals and work in concert to guide motor behavior. Humans and non-human primates make skillful reaching-to-grasping movements that are tightly coordinated in space and time recorded caudally and responses coding locations in head-, body- and hand-centered frame rostrally has been instrumental in understanding the relationship in neural activity across brain areas. The LFP is composed of synaptic and spiking activity in the vicinity of the recording electrode movements offer some hope in helping remedy these difficulties. A BMI is a device that can record neural activity from the brain while subjects think about a certain task, and then Most commonly, electrodes are implanted in the primary motor and premotor areas while patients use motor imagery to provide the necessary input to these BMIs (Markowitz et al., However, Musallam et al. went on However, soon after this, trajectory information was successfully decoded from the medial bank of the IPS as well as the dorsal convexity to allow control of a 2-dimensional (2D; Mulliken et al., The clinical relevance of the PPC to neural prosthetics was demonstrated in the first human trial of a BMI that utilized neural signals from the PPC (Aflalo et al., Despite decades of research, a definitive understanding of how individual brains areas are defined, perform distinct computations, and interact with other brain areas remains elusive. The PPC has proved an ideal test bed for understanding how the underlying neural architecture supports a range of sensory, motor and cognitive functions. Anatomy and physiology provide distinct lines of evidence for characterizing the brain areas of the PPC less as a cluster of finite regions and more as a network of integrated areas that may flexibly form the neural basis for diverse functions. 
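As a rough illustration of the kind of linear decoding used in the BMI work discussed above (not the specific decoders of the cited studies), the sketch below fits a ridge-regression map from binned firing rates to a 2D velocity signal; the synthetic data, array shapes and regularization value are assumptions made only for this example.

```python
# Illustrative sketch only: ridge-regression decoding of 2D velocity from
# binned firing rates, in the spirit of linear BMI decoders. The synthetic
# data, shapes and regularization strength are assumptions for this example.
import numpy as np

rng = np.random.default_rng(0)
n_bins, n_units = 1000, 96                       # time bins x recorded units
true_w = rng.normal(size=(n_units, 2))           # hidden tuning used to simulate data
rates = rng.poisson(lam=5.0, size=(n_bins, n_units)).astype(float)
velocity = rates @ true_w + rng.normal(scale=2.0, size=(n_bins, 2))

def fit_ridge_decoder(X, Y, lam=1.0):
    """Closed-form ridge regression: W = (X^T X + lam * I)^(-1) X^T Y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

W = fit_ridge_decoder(rates, velocity)
predicted = rates @ W
for axis, name in enumerate(("x", "y")):         # report per-axis decoding accuracy
    r = np.corrcoef(velocity[:, axis], predicted[:, axis])[0, 1]
    print(f"decoded {name}-velocity correlation: {r:.3f}")
```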
The future of systems neuroscience is in understanding how these brain areas work in concert with one another and how the neural dynamics can be used for powering the next generation of prosthetic devices.KH, SB, YW and MH contributed to the preparation, writing and revising of this text.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Family members of stroke survivors experience high rates of depression and burden. The majority of stroke survivors return to their homes and need assistance to perform activities of daily living. These demands coupled with the lack of preparedness for their new roles lead to a high risk for developing depression and other negative outcomes among caregivers. Studies indicate that Hispanic caregivers report higher levels of depression compared to others. However, no interventions have focused on this population. Our objective is to develop culturally-relevant interventions to help reduce disparities Hispanic Veterans post-stroke and their caregivers. We tailored our Spanish RESCUE intervention for the Puerto Rican population. The goal of the problem-solving telephone support & educational intervention is to reduce caregiver burden and depressive symptoms by teaching them a creative and optimistic approach to solving caregiving related problems. The intervention was developed to reflect specific characteristics of the target population. To enhance the cultural relevance of the intervention, we used recommendations from Key Stakeholders and guidelines from authoritative sources such as: 1) involving persons from the target population in all phases of the project; 2) emphasizing themes valued by the PR culture; 3) assuring that the language and wording of the materials is at appropriate reading level; 4) using certified translators and Spanish-speaking experts, and 5) having Hispanic research members, fluent in Spanish, and knowledgeable about the PR culture conduct the intervention and assessments. The intervention is currently been tested in a RCT."} +{"text": "The aim of the study is to assess the degree of adherence of medical laboratories to Kidney Disease Improving Global Outcomes (KDIGO) 2012 Clinical Practice Guideline for the Evaluation and Management of Chronic Kidney Disease (CKD) in laboratory practice in Czechia and Slovakia.An electronic questionnaire on adherence to KDIGO 2012 guideline was designed by an external quality assessment (EQA) provider SEKK spol. s.r.o. The questionnaire was placed and distributed through website to all medical biochemistry laboratories in Czechia and Slovakia (N = 396).A total of 212 out of 396 laboratories responded to the questions, though some laboratories only answered some questions, those applicable to their practice. A total of 48 out of 212 laboratories adopted the KDIGO 2012 guideline in full extent. The metrological traceability of creatinine measurement to standard reference material of SRM 967 was declared by 180 out of 210 laboratories (two of the responding laboratories did not measure creatinine). Thirty laboratories are not well educated on traceability of creatinine measurement and seven laboratories do not calculate estimated glomerular filtration rate (eGFR). 
Both urinary albumin concentration and albumin to creatinine ratio are reported by 144 out of 175 laboratories .Majority of laboratories in Czechia and Slovakia adopted some parts of the KDIGO 2012 guideline in their practice, but only 23% of the laboratories apply them completely. Thus, further education and action should be conducted to improve its implementation. The questions of the questionnaire on adherence to KDIGO 2012 guideline in Czechia and Slovakia are shown in th to January 25th 2019. The entry of results could be monitored by a link to the website. Participation in the survey was voluntary. In addition to the questionnaire, data on the type of creatinine method used among laboratories were collected from the results of first basic clinical chemistry EQA scheme (the number of participants was 188). Information on the type of creatinine method was not derived from the survey. These data were collected as a creatinine enzymatic method is recommended by KDIGO 2012 guideline. In the end, 212 laboratories participated in the study. All data were collected and no laboratory response was excluded. We did not compare results among different kinds of laboratories. The ideal target is full guideline implementation. Full guideline implementation was defined as the adoption of the KDIGO 2012 guideline in full extent including cystatin C traceable method and eGFR. Partial guideline implementation was defined as creatinine measurement traceable to standard reference material SRM 967. After the survey had been completed, all participants received interpretative comments from the EQA supervisors, containing references to related pages of the KDIGO 2012 guideline. These comments were not part of the questionnaire.An electronic questionnaire on adherence to KDIGO 2012 guideline was designed by an EQA provider SEKK spol. s.r.o., with registered office in Pardubice, Czechia. The questionnaire was placed and distributed through EQA SEKK\u2019s website Data were collected to Microsoft Excel Office 2007 program . The total absolute number of specific responses of each specific question and their relative percentages compared to the total number of responses to the particular question were calculated. The denominator of the percentages for full and partial guideline implementation was the total number of participants of the survey (N = 212).The response rate to the questionnaire was 54% (212 out of 396 laboratories). The answers to the questionnaire by laboratory participants on adherence to KDIGO 2012 guideline are provided in et al. performed a similar study with the purpose to improve education of laboratories and harmonization of KDIGO 2012 guideline implementation in Croatia laboratories. Biljak et al. reported overestimation of glomerular filtration rate (GFR) by eGFRcrea compared to isotopic reference method in oncology patients before cisplatin treatment equation is the new alternative for estimation of GFR from standardized serum cystatin C concentration (The KDIGO 2012 guideline recommends confirmation of CKD by estimated glomerular filtration rate from serum cystatin C (eGFRcys) when eGFRcrea is below 60 mL/min/1.73met al. showed that eGFR based on CKD-EPI equations correlated significantly with endemic nephropathy (The MDRD study group which enrolled 1628 patients with CKD was used for the development of the MDRD eGFRcrea equation (et al. demonstrated that low number of patients with CKD had albuminuria measurement. They found the care gap among all patients with CKD. 
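For illustration of the eGFR reporting that the guideline expects laboratories to provide, the following sketch implements the commonly published 2009 CKD-EPI creatinine equation; the coefficients reproduced here should be verified against the KDIGO 2012 guideline before any real use, and the function is an example rather than the calculation mandated for the surveyed laboratories.

```python
# Minimal sketch of an eGFR calculation of the kind laboratories are expected
# to report. Coefficients follow the commonly published CKD-EPI 2009 creatinine
# equation; verify against the KDIGO 2012 guideline before any real use.
def egfr_ckd_epi_2009(creatinine_mg_dl, age_years, is_female, is_black=False):
    """Estimated GFR in mL/min/1.73 m^2 from standardized serum creatinine."""
    kappa = 0.7 if is_female else 0.9
    alpha = -0.329 if is_female else -0.411
    ratio = creatinine_mg_dl / kappa
    egfr = 141.0 * min(ratio, 1.0) ** alpha * max(ratio, 1.0) ** -1.209 * 0.993 ** age_years
    if is_female:
        egfr *= 1.018
    if is_black:
        egfr *= 1.159
    return egfr

# Example: a 60-year-old woman with a standardized serum creatinine of 1.1 mg/dL.
print(round(egfr_ckd_epi_2009(1.1, 60, is_female=True), 1))  # roughly 54-55 mL/min/1.73 m^2
```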
Authors suggest the measurement of albumin to creatinine ratio in patients at risk for CKD development as a quality indicator. Other authors reported that combining eGFR and albumin to creatinine ratio level was more accurate in predicting risk of cardiovascular disease and all-cause mortality. Serum creatinine with calculation of eGFRcrea and urinary albumin to creatinine ratio should be regularly monitored in diabetic patients. Early intervention to halt or even reverse the progression reduces the risk of cardiovascular disease and all-cause mortality. The findings call for more aggressive screening and intervention for albuminuria in diabetic patients. The work by Manns et al. reported 6 key factors for laboratories implementing the national guidelines for the diagnosis and management of CKD. The first factor is good communication between laboratory and clinicians. The number of decimal places in reporting serum creatinine and cystatin C concentrations is an issue of uncertainty of measurement. Each series of calibrator should be accompanied by information on its uncertainty. The limitations of the study are the low survey response rate and not asking about participants\u2019 creatinine method in the survey, but obtaining it in a different way instead. Further, some participants did not respond to all questions. Information from participants may not reflect the real situation. We were not able to compare results of hospital laboratories, specialized centre laboratories, and private laboratories. In summary, a majority of laboratories in Czechia and Slovakia adopted some parts of the KDIGO 2012 guideline in laboratory practice, but there is still a need for further education on traceability of measurement, the importance of eGFR calculation, and harmonization of reporting of results in some cases."} +{"text": "Concrete-filled steel tube (CFST) members have been widely employed as major structural members carrying axial or vertical loads, and the interface bond condition between steel tube and concrete core plays a key role in ensuring the confinement effect of steel tube on concrete core. An effective interface debonding defect detection approach for CFSTs is critical. In this paper, an active interface debonding detection approach using surface wave measurement with a piezoelectric lead zirconate titanate (PZT) patch as sensor mounted on the outer surface of the CFST member, excited with a PZT actuator mounted on the identical surface, is proposed in order to avoid embedding PZT-based smart aggregates (SAs) in concrete core. In order to validate the feasibility of the proposed approach and to investigate the effect of interface debonding defect on the surface wave measurement, two rectangular CFST specimens with different degrees of interface debonding defects on three internal surfaces are designed and experimentally studied. Surface stress waves excited by the PZT actuator and propagating along the steel tube of the specimens are measured by the PZT sensors with a pitch and catch pattern. Results show that the surface-mounted PZT sensor measurement is sensitive to the existence of interface debonding defect and the interface debonding defect leads to an increase in the voltage amplitude of surface wave measurement. A damage index defined with the surface wave measurement has a linear relationship with the heights of the interface debonding defects. 
With advanced structural performance including high load-carrying capacity, good ductility and energy dissipation capability under strong dynamic excitations, convenience and economy in construction, concrete-filled steel tubes (CFSTs) have been extensively employed as major vertical and/or axial load-carrying structural members in civil infrastructure such as long-span bridges, super high-rise buildings and off-shore platforms in harsh environments . MoreoveIn the last decades, various global structural identification approaches for different types of civil engineering structures have been investigated using structural dynamic characteristics such as frequencies, damping ratios and modal shapes extracted from structural dynamic response measurements ,5,6. UnfIn recent years, piezoelectric lead zirconate titanate (PZT)-based approaches have been widely recognized as one of the most promising active structural health monitoring (SHM) techniques for engineering structures using stress wave measurement and electromechanical impedance ,14,15,16PZT based stress wave measurement approaches also play active roles in the bonding condition monitoring between concrete and rebar in RC structures ,27. SharFor the interface debonding detection of CFSTs, Xu et al. firstly proposed a novel PZT based active approach, where PZT patches mounted on the outer surface of the steel tube or embedded in concrete core are used as actuator or sensors and the changes in the wavelet energy and wavelet energy spectrum of the PZT sensor measurements are employed to detect the interface debonding defects, and the feasibility of the proposed approaches was validated experimentally and numerically considering the piezoelectric effect of PZT materials and the coupling effect between PZT patches and CFST members ,35,36,37In fact, in the above interface debonding detection approaches, the embedded PZT sensor measures the bulk wave traveling across the steel tube and concrete core. The shortages of the defect detection approaches using bulk wave include the wave attenuation in concrete core and the inconvenience of the installation of embedded SAs in concrete core before concrete pouring. The wave attenuation in steel tubes of CFST members is smaller than that in concrete and it is more attractive to develop interface debonding approaches using surface wave measurement for CFST members . Schaal Most recently, a non-destructive early corrosion detection technique in steel tubes of CFST members using surface wave measurement was proposed and experimentally investigated . HoweverIn this study, to overcome the shortcomings of the current interface debonding detection approaches using embedded PZT patches, an active interface debonding detection approach using surface wave measurement with PZT patches mounted on the identical surface of steel tube of CFST member as actuator and sensors is proposed. In order to demonstrate the feasibility and the performance of the proposed approach, experimental studies on two rectangular CFST members with different interface debonding defect scenarios are carried out. The measurements of the surface waves propagating from the surface-mounted PZT actuator and passing through different interface debonding defects are compared. The relationship between the measurement and the interface debonding widths and heights is investigated. An evaluation index is defined based on the surface wave measurement and its linear relationship with the heights of the interface debonding defects is found. 
Experimental results show the proposed approach is efficient for interface debonding condition monitoring for CFST members with surface wave measurement.The interface debonding defect approach presented in this study uses surface wave measurement along the steel tube. a \u00d7 b = 400 mm \u00d7 400 mm and a height of 400 mm, as shown in t was 4.0 mm. The four sides of the specimen No. 1 were named Sides A, B, C and D, while those of the specimen No. 2 were labeled as Sides E, F, G and H, as illustrated in In this study, two rectangular CFST specimens with identical dimensions were designed. As shown in In this study, six interface debonding scenarios were designed and the corresponding dimensions of each mimicked debonding defect are shown in The arrangement of PZT actuators and sensors mounted on the identical outer surfaces of the two tested specimens, the embedded SAs and the location of the mimicked interface debonding defects shown as shaded regions are illustrated in detail in As shown in A continuous sinusoidal signal with a frequency of 10 kHz and an amplitude of 10 V generated by an arbitrary waveform/function generator was used to excite the PZT actuators on each side of the tested specimen. The output voltage signals of both PZT sensor and the embedded SA sensor were recorded using a high-frequency data acquisition system with a sampling frequency of 102.4 kHz. The interface debonding defect may attenuate or block the stress wave propagation from steel tube to concrete core, and then lead to the decrease in the measurement of SA sensors and the increase in the amplitudes of the voltage signals measured by PZT patches. Moreover, from the test measurements, it can be seen that the amplitude of the response voltage induced by the surface waves increased with the enlargement of the debonding height. The reduction of the voltage amplitude measured by SAs due to the existence of debonding defect met the findings from the collected literature ,34.To further demonstrate the efficiency of the proposed surface wave measurement based interface debonding detection approach, the measurement from three PZT sensors on an identical side of each specimen were compared and analyzed. As detailed in DI) based on the measured signal amplitude defined as follows was employed to reflect the degrees of interface debonding defects.Dn and H respectively denote the amplitude of the voltage signals measured by PZT patch n located in the areas with and without interface debonding. Here, the voltage amplitude of the output signals from the PZT sensors is employed to identify the existence and to evaluate the degrees of interface debonding. As the width and thickness of interface debonding were constant for each specimen, the relationship between the variance of voltage amplitude with the height of the defects was investigated. A damage index (DI(n) corresponding to PZT sensors in the second column of each side of the two tested specimens, which changed with the height of interface debonding defects, respectively. It can be seen that this damage index was sensitive to the existence of debonding defect located in the surface stress wave propagation path from the actuator to the sensor. Moreover, the proposed DI had a clear linear relationship with the change in debonding height no matter what the width of the interface debonding defect was. DI and the height of interface debonding, the height of interface debonding can be identified quantitatively, which is very meaningful in practice. 
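Since the exact expression of the damage index did not survive extraction here, the sketch below assumes a simple normalized amplitude change relative to a debonding-free reference path and fits the linear DI-versus-height trend described above; the voltage amplitudes and debonding heights are made-up numbers, not the measured data.

```python
# Hedged sketch of an amplitude-based damage index of the kind described above.
# The paper's exact DI formula is not reproduced here, so a normalized amplitude
# change relative to an intact (debonding-free) path is assumed; all numbers
# below are hypothetical, not the measured data.
import numpy as np

def damage_index(peak_amplitude, reference_amplitude):
    """Relative increase of peak sensor voltage over the intact-path value."""
    return (peak_amplitude - reference_amplitude) / reference_amplitude

reference_amplitude = 0.42                            # volts, well-bonded path
debond_height_mm = np.array([10.0, 20.0, 30.0, 40.0])
peak_amplitude_v = np.array([0.48, 0.55, 0.61, 0.68])

di = damage_index(peak_amplitude_v, reference_amplitude)
slope, intercept = np.polyfit(debond_height_mm, di, deg=1)   # linear DI-height trend
print("DI values:", np.round(di, 3))
print(f"fitted trend: DI = {slope:.4f} * height_mm + {intercept:.3f}")
```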
Here, the measurements of the PZT sensors on the first and the third columns on Sides C, D, G and H are compared. As shown in Based on the relationship between It is clear that the interface debonding detection approach using surface stress wave measurement with a pitch and catch pattern is capable of detecting interface debonding defects along the surface stress wave propagation path. However, the proposed method is insensitive to the interface debonding defects apart from the propagation path of surface stress wave. In practice, scanning the surface with a smaller interval is helpful to detect the width of interface debonding if a pitch and catch measurement pattern is employed.Interface debonding leads to an increase in the voltage amplitude measured by surface-mounted PZT patches and a reduction in the voltage amplitude measured by embedded SAs. Interface debonding blocks the propagation of stress wave from steel tube to the concrete core. Therefore, the increase in the voltage amplitude can be used to evaluate the existence of interface debondings;When the width of interface debonding is constant, the measured voltage amplitudes present an obvious increment with the enlargement of debonding defect heights. The amplitude of the voltage signal has a clear relationship with the change in debonding height. The defined index is efficient to identify the existence as well as the height of interface debonding;The PZT sensor measurement is sensitive to the existence of interface debonding defect along the stress wave propagation path from the actuator and sensor on the surface of CFST structures with a pitch and catch measurement pattern.This paper proposed a surface wave measurement based active interface debonding defect detection approach for CFST structures using PZT actuating and sensing technologies. This approach only uses the PZT patches mounted on the outside surface of the steel tubes and there is no need to embed PZT transducers in concrete core. To demonstrate the efficiency of the proposed method, two CFST specimens with different degrees of interface debonding defects are established for comparison. Besides the surface-mounted PZT patches, SAs embedded in concrete core are employed to investigate the effect of interface debonding on the wave propagation along the surface and into the concrete core of CFST excited by PZT actuators. A continuous sinusoidal signal is applied to excite PZT actuators and the response voltage signals measured by the surface-mounted PZT sensors are analyzed to demonstrate the efficiency of the proposed method in detecting the existence and degrees of debonding defects in CFST structures. Based on the experimental study, the following conclusions can be made:The surface wave measurement based debonding detection approach is convenient when compared with the bulk wave measurement-based approach where embedded SAs are required. Further experimental studies on the feasibility of the proposed approach considering the randomness and irregularity of interface debonding defects and numerical studies on the wave propagation mechanism will be carried out for practical application."} +{"text": "Although she regained some degree of motor function of the left upper and lower extremities, there was decreased strength with pronation, supination, and wrist and finger extension with significant wasting of the intrinsic muscles on the right side. 
Furthermore, she also experienced persistent loss of sensation along the distribution of the right tibial and medial and lateral plantar nerves and the right antebrachial cutaneous nerve.Treatment of cervical radiculopathy often involves conservative measures including steroid injections.Serial radiographic imaging showed intramedullary contrast extending from the occiput to C7 and extension into the medulla. A magnetic resonance imaging performed 14 months later demonstrated degenerative changes and myelomalacia extending to the right dorsal medulla and cervical cord (Fig. The patient presented more than 2 years after the initial injury and underwent exploration and decompression of the right carpal tunnel, cubital tunnel, and Guyon\u2019s canal and the right tarsal tunnel. Microneurolysis and epineurectomy was performed of the ulnar nerve of the arm and elbow and the nerve to the flexor carpi ulnaris. The tibial nerve, medial plantar nerve, and lateral plantar nerve were also serially released. The sural nerve was harvested from the bilateral lower extremities and used as grafts from the left greater auricular nerve and the left supraclavicular nerve to the medial cutaneous nerve of the right arm and forearm in an end-to-side fashion respectively. The medial cutaneous nerve of the arm was also neurotized into a partial neurotomy of the sensory component of the ulnar nerve.There were no postoperative complications and the patient reported improved sensation in the right hand and forearm after 4 weeks and increased grip strength accompanied by a positive Tinel sign at 10 weeks. Sensation in the right upper extremity continued to improve with an advancing Tinel sign across the chest, and the patient reported significant improvement in the sensation of the sole of foot. After 13 months, sensation returned to the tips of the fingers with 6\u20137\u2009mm 2-point discrimination. Even though intrinsic hand muscle weakness was still apparent, there was improved function of the flexi carpal ulnaris and the fourth and fifth flexor digitorum profundus muscles.The treatment of cervical radiculopathy is particularly challenging but becomes even more devastating in a young patient, especially when the symptomatology is compounded by myelomalacia due to an iatrogenic injury. Fortunately, exploration and decompression of the right upper extremity nerves at the level of the carpal tunnel, cubital tunnel, and Guyon\u2019s canal with bilateral sural nerve harvest and contralateral grafting was successful. Certainly, evaluation and treatment by a multidisciplinary team with experience in reconstructive surgery with peripheral nerves and the brachial plexus are paramount in achieving optimal outcomes."} +{"text": "Objective: The internationalization of teaching and studying as well as increasing numbers of students with increasingly heterogeneous educational biographies and lifestyles require universities to develop awareness of this diversity and the need for adequate diversity management. For some diversity criteria at least it has been proven that they can influence the individual study success of students. The Dean\u2019s Office of the Medical Faculty of the University of Cologne has empirically determined a stable prognosis parameter for study progression on the basis of selected criteria in order to enable early detection of students in need of guidance. This will then be used for targeted, diversity-oriented study guidance. 
On the one hand a correspondingly adapted guidance offer should take into account individual study progressions. On the other hand, measures to improve the equal opportunities of students with regard to their academic success can be discussed.Methodology: With the help of study progression analyses, study progress of cohorts can be recorded longitudinally. The study progression analysis implemented in the control of faculty teaching serves as a central forecasting and steering tool for the forthcoming concept of diversity-oriented study guidance. The significance measurement of the various features is determined using binary logistic regression analyses.Results: As part of the study progression analyses, the study success rate after the first semester has the strongest influence on the concordance with the minimum duration of study in the pre-clinical phase, followed by the characteristics age at commencement of studies and place of university entrance qualification. The school leaving grade only just misses the required significance level of p <0.05. As a predictor gender provides no explanatory contribution in the considered model.Conclusion: In order to do justice to the heterogeneity among the students, university administrators and lecturers should understand the recognition of diversity as a cross-cutting task and keep an eye on diversity-related aspects and discrimination-critical topics for different target groups as well as individual guidance services in the context of individual study guidance. Within the scope of this study, we were able to empirically prove the stable prognosis parameter study success rate after the first semester allows reliable detection of students in need of guidance. The explanatory contribution is larger than any of the individual criteria examined in this study. The specific causes that led to a delay in studying will be analyzed in the context of downstream and diversity-oriented study guidance. A follow-up study will deal with the question of whether the success of students requiring study guidance can be significantly improved by subsequent study guidance. For higher education, the question arises as to which diversity-oriented strategies and measures are needed in study guidance in order to enable students with different personal requirements to successfully complete their studies . S. Sage efplace of the university entrance qualification of the Leaving Certificate have shown that students with a German Abitur, which points to socialization within the German education system, study much faster and tend to comply with the minimum period of study more often than their fellow students who, having non-German Leaving Certificates and thus differing educational socialization, perform worse overall. As stated at the outset, this differentiation primarily aims to explain that the specific path to university admission and the preceding explicit and implicit educational socialization in relation to students with a migrant background should be examined with much greater attention to the heterogeneity and peculiarities of the migrant group than plain questions regarding nationality or origin allow Number of students at NRW universities continues to increase, 06.07.2016\u201d. The authors declare that they have no competing interests."} +{"text": "Cell encapsulation is a bioengineering technology that provides live allogeneic or xenogeneic cells packaged in a semipermeable immune-isolating membrane for therapeutic applications. 
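As a minimal sketch of the binary logistic regression analysis described in the study-progression section above, the example below regresses an on-time-completion indicator on a first-semester success rate and several socio-demographic predictors; the simulated data frame, variable names and coding are hypothetical placeholders, not the faculty's actual data.

```python
# Illustrative sketch of a binary logistic regression like the study-progression
# analysis described above. The simulated data, column names and coding are
# hypothetical placeholders, not the faculty's data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "success_rate_sem1": rng.uniform(0.2, 1.0, n),   # share of first-semester credits passed
    "age_at_start": rng.integers(18, 35, n),
    "german_abitur": rng.integers(0, 2, n),          # 1 = German university entrance qualification
    "school_grade": rng.uniform(1.0, 3.5, n),        # German scale, 1.0 = best
    "female": rng.integers(0, 2, n),
})
# Simulate the outcome so that the first-semester success rate dominates.
linpred = 6.0 * (df["success_rate_sem1"] - 0.6) - 0.1 * (df["age_at_start"] - 20)
df["on_time"] = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-linpred))).astype(int)

X = sm.add_constant(df.drop(columns="on_time"))
model = sm.Logit(df["on_time"], X).fit(disp=False)
print(model.params)    # fitted coefficients
print(model.pvalues)   # per-predictor significance (compare against p < 0.05)
```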
The concept of cell encapsulation was first proposed almost nine decades ago, however, and despite its potential, the technology has yet to deliver its promise. The few clinical trials based on cell encapsulation have not led to any licensed therapies. Progress in the field has been slow, in part due to the complexity of the technology, but also because of the difficulties encountered when trying to prevent the immune responses generated by the various microcapsule components, namely the polymer, the encapsulated cells, the therapeutic transgenes and the DNA vectors used to genetically engineer encapsulated cells. While the immune responses induced by polymers such as alginate can be minimized using highly purified materials, the need to cope with the immunogenicity of encapsulated cells is increasingly seen as key in preventing the immune rejection of microcapsules. The encapsulated cells are recognized by the host immune cells through a bidirectional exchange of immune mediators, which induce both the adaptive and innate immune responses against the engrafted capsules. The potential strategies to cope with the immunogenicity of encapsulated cells include the selective diffusion restriction of immune mediators through capsule pores and more recently inclusion in microcapsules of immune modulators such as CXCL12. Combining these strategies with the use of well-characterized cell lines harboring the immunomodulatory properties of stem cells should encourage the incorporation of cell encapsulation technology in state-of-the-art drug development. Additional diabetes studies followed and guluronic acid (G) subunits allows a more controlled pore size of the microcapsules in alginate preparations through the pattern recognition receptors (PRR) and biochemical characteristics of the molecules, such as the molecule's molecular weight, size, shape and presence of charged groups. While the weight of the molecule is only partially responsible for the molecule's ability to diffuse in and out of the capsules, it is a useful parameter when comparing the ability of different signaling agents to influence the immune response against the capsules . HoweverLive cells secrete numerous products of metabolism, some of which, such as advanced glycation end products (AGE) and uric acid can be recognized as damage-associated molecular patterns (DAMPs) by the host , which is considered a rather weak antigen. Compared with mice immunized with FIX protein in complete Freund's adjuvant (a standard for immunization), mice transplanted with microencapsulated cells had a much higher antibody titer to FIX ligand (CXCL12) into alginate by Alagpulinsa et al. remarkabin vivo implementation of the technology. Despite the challenges, the recent use of immune modulators to avoid fibrotic overgrowth is an exciting and potentially game-changing development. Together with the rigorous polymer purification protocols available today and the use of human stem cell lines it may provide the final missing element for successful cell encapsulation applications. These recent developments should encourage clinical trials with renewed hopes for the field of cell encapsulation.The concept of transplanting cells with therapeutic potential enclosed in polymeric microcapsules is highly relevant for the modern pharmaceutical industry. However, a major barrier to implementing cell encapsulation technology in the clinical setting is the immune response generated against the microcapsules and their contents. 
Therefore, a thorough characterization of the immune mechanisms involved in anti-capsular response is important for successful AA and GH conceived and wrote the first draft of the review. SY drew the figure, compiled the table, and wrote the immunology sections. AA and BN performed literature search and compiled references. All authors contributed to the writing of the manuscript, critically reviewed the manuscript draft, and approved the final version of the article.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "The development of smart cities calls for improved accuracy in navigation and positioning services; due to the effects of satellite orbit error, ionospheric error, poor quality of navigation signals and so on, it is difficult for existing navigation technology to achieve further improvements in positioning accuracy. Distributed cooperative positioning technology can further improve the accuracy of navigation and positioning with existing GNSS systems. However, the measured range error and the positioning error of the cooperative nodes exhibit larger reductions in positioning accuracy. In response to this question, this paper proposed a factor graph-aided distributed cooperative positioning algorithm. It establishes the confidence function of factor graphs theory with the ranging error and the positioning error of the coordinated nodes and then fuses the positioning information of the coordinated nodes by the confidence function. It can avoid the influence of positioning error and ranging error and improve the positioning accuracy of cooperative nodes. In the simulation part, the proposed algorithm is compared with a mainly coordinated positioning algorithm from four aspects: the measured range error, positioning error, convergence speed, and mutation error. The simulation results show that the proposed algorithm leads to a 30\u201360% improvement in positioning accuracy compared with other algorithms under the same measured range error and positioning error. The convergence rate and mutation error elimination times are only With the development of smart cities, navigation and positioning techniques are now more important in daily life. However, it is extremely difficult to improve the accuracy of navigation and position with existing satellite navigation systems, Inertial Navigation Systems (INS), and other navigation systems. The services required by smart cities, such as autonomous vehicle driving and unmanned aerial vehicles, require the support of high-precision navigation and positioning services. The existing navigation and positioning technology mainly improves the accuracy of navigation and positioning by improving the signal quality of satellite navigation systems , enhanciK, and the position of cooperative node k can be expressed ask. Then, the vector of the position of all cooperative nodes In distributed cooperative positioning, because of the large ranging error, the introduction of cooperative positioning creates more positioning errors . TherefoK sets of subgraphs of some cooperative nodes due to factor graph theory [k is set ask with the other cooperative nodes that have a communication link with cooperative node k. The range value between cooperative node k and cooperative node i can be expressed asThe network topology of distributed cooperative positioning h theory . 
The disk and any other two cooperative nodes The position of a cooperative node can be obtained by more than three groups of distance equations in a cooperative positioning network . HoweverCombining Equations and 4),,4), Equai adjacent to cooperative node k has the highest belief information among all cooperative nodes, the belief information of cooperative node i is set as the standard belief information for cooperative node k; then, the index of cooperative node i is set as k is the standard distance. Then, the distance difference between cooperative node j and k and cooperative node k is as follows:The belief information is constructed and is transferred between cooperating nodes to obtain the optimal position information of a cooperative node. Belief information is the information describing the mean and standard deviation of the range value between the cooperating nodes and the positioning error of a cooperative node. If cooperative node To obtain the position of the cooperative nodes, the aim function can be constructed by multiple sets of Equation , and theThe elements of matrix i is constructed as follows:K represents the total cooperative node, k and i, k and i, and k and i. The overall positioning optimal cost function S can be expressed as the sum of cooperative node cost function Due to the positioning information accuracy variation of cooperative nodes, it is difficult to obtain the optimal position by fixed cooperative nodes . To solvFactor graph theory has two types of nodes: variable nodes and function nodes. Each edge is connected with a variable node and a function node. In our proposed distributed cooperative position algorithm, the variable node represents the cooperative node, and the function node represents a factor graph local function and achieves nonlinear fusion of belief information in every computation cycle, so there is no link between different variable nodes. The factor graphs method can split a complex multivariate global function into the product of several simple local functions, so the optimal position is obtained by the local function instead of the optimal position of the global function. In product theory of the factor graph, the belief information is transferred between variable nodes and function nodes to obtain the optimal position information of a cooperative node. The structure of distributed cooperative position based on factor graph is shown in The belief information passed from the cooperative node to the function node is the product of the belief information of all the other neighbor function nodes arriving at the cooperative node. For example, the belief information based on the transfer from the cooperative node The belief information passed from cooperative node S.Because the maximum value of the belief information in the factor graph is 1, if the number of cooperative nodes is larger, the actual value of the belief information is different by triangulation and leads to larger calculation errors. The normalized weight factor is adopted in our proposed algorithm to abate calculation error and is expressed asith line of matrix ith row vector of ith line of matrix Combining Equations and 10)10), the ith row of the residual matrix. 
The optimal position of cooperative node i can be expressed as follows by decomposing Equation (To obtain the optimal position of cooperative nodes, Equation can be rDXest+\u0394DXThe optimization problem of Equation is a minTherefore, the optimization problem required by Equation can be eEquation can be rThe partial derivative of function By expanding Equation , the folThe optimal position of cooperative nodes Equation . When \u2016XIn the cooperative position system, positioning accuracy is mainly affected by the positioning error of the cooperative node and the ranging error, which is measured between cooperative nodes. In the first part, the ranging error is simulated with the ideal position condition, where the standard deviation of the positioning errors of cooperative nodes is 0 m. Our proposed algorithm is compared with the cooperative position method based on distance measurement assistance in , semi-deFrom From In the cooperative position system, in addition to the ranging error, the position error of cooperative nodes will have a significant impact on the accuracy of the cooperative position system. In the second part, the positioning errors of cooperative nodes are simulated with the ideal range condition, where the standard deviation of the ranging error is 0 m; our proposed algorithm is compared with the cooperative position method based on distance measurement assistance in , the semIn In In the cooperative position system, nodes have both moving and static states, so the convergence speed of the algorithm will affect the actual performance of the algorithm; a faster convergence rate will bring better positioning performance. Therefore, the STD of the ranging error and nodes positioning error is 1 m, a value that depends on the existing GNSS position accuracy of cooperative nodes. Our proposed algorithm is compared with the cooperative position algorithms proposed in ,6,7,9; tIn ithms in ignore tithms in take intIn the cooperative position system, the GNSS position result of the cooperative nodes depends on the quality of the GNSS signal. Satellite signal interference, signal scattering and deceptive signals will affect the GNSS signal and lead to mutation error, which will degrade the the position performance of cooperative nodes, so the cooperative position algorithm should reduce the influence of mutation error on the cooperative positioning system. Under a standard deviation of the ranging error and node positioning error of 1 m, three nodes in the cooperative position network are added to the mutation positioning error in the third moment after the cooperative positioning network is stable; the simulation result of 1000 Monte Carlo is shown in In In response to the bottleneck in improvements to positioning accuracy in the existing navigation and positioning technology, the cooperative position algorithm can further improve positioning accuracy based on the interactive position information of the cooperative node. However, the ranging error and node positioning error have a greater impact on the accuracy of a cooperative positioning network and even reduce positioning accuracy. Our proposed factor-graph-assisted distributed cooperative position algorithm establishes the corresponding belief information model of the cooperative node by the ranging error and node position error and combines the total least squares method to obtain the optimal position of the cooperative position network. 
It can effectively restrain the influence of ranging error and node positioning error on the whole cooperative positioning system. This paper compares our proposed algorithm with the existing algorithms in terms of ranging error, cooperative node positioning error, convergence rate, and mutation error; the simulation results show that the positioning accuracy of our proposed algorithm is improved by 30\u201360%. In terms of convergence rate, our proposed algorithm utilizes the sum-product principle of the factor graph to achieve faster convergence than the other cooperative position algorithms. Importantly, when the cooperative node itself has a mutation error, our algorithm can quickly eliminate the effect of the mutation error on the entire coordinated positioning network. Our method has good application value in the field of navigation and positioning."} +{"text": "Bioelectric oscillations occur throughout the nervous system of nearly all animals, revealed to play an important role in various aspects of cognitive activity such as information processing and feature binding. Modern research into this dynamic and intrinsic bioelectric activity of neural cells continues to raise questions regarding their role in consciousness and cognition. In this theoretical article, we assert a novel interpretation of the hierarchical nature of \u201cbrain waves\u201d by identifying that the superposition of multiple oscillations varying in frequency corresponds to the superimposing of the contents of consciousness and cognition. In order to describe this isomorphism, we present a layered model of the global functional oscillations of various frequencies which act as a part of a unified metastable continuum described by the Operational Architectonics theory and suggested to be responsible for the emergence of the phenomenal mind. We detail the purposes, functions, and origins of each layer while proposing our main theory that the superimposition of these oscillatory layers mirrors the superimposition of the components of the integrated phenomenal experience as well as of cognition. In contrast to the traditional view that localizations of high and low-frequency activity are spatially distinct, many authors have suggested a hierarchical nature to oscillations. Our theoretical interpretation is founded in four layers which correlate not only in frequency but in evolutionary development. As other authors have done, we explore how these layers correlate to the phenomenology of human experience. Special importance is placed on the most basal layer of slow oscillations in coordinating and grouping all of the other layers. By detailing the isomorphism between the phenomenal and physiologic aspects of how lower frequency layers provide a foundation for higher frequency layers to be organized upon, we provide a further means to elucidate physiological and cognitive mechanisms of mind and for the well-researched outcomes of certain voluntary breathing patterns and meditative practices which modulate the mind and have therapeutic effects for psychiatric and other disorders. Berger in the early 20th century revealing Alpha waves neural assemblies of the brain on an EEG are the ionic current producing membrane potential activities of individual neurons; the dendritic and postsynaptic potentials has been explored previously by ourselves and other authors. 
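As a simple analogue of the layered, frequency-based superposition discussed in this article, the sketch below splits a synthetic EEG-like signal into the conventional delta/theta/alpha/beta/gamma bands and sums the band-limited components back together; the band edges are conventional textbook choices and the signal is simulated, so this is only an illustration of the idea rather than an analysis from the cited studies.

```python
# Sketch only: decompose a synthetic EEG-like signal into conventional frequency
# bands and sum the components back, as a simple analogue of the layered
# superposition of oscillations discussed above. Band edges are conventional
# choices and the signal is simulated.
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 256.0                                            # sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(3)
signal = (np.sin(2 * np.pi * 2 * t) + 0.7 * np.sin(2 * np.pi * 10 * t)
          + 0.4 * np.sin(2 * np.pi * 40 * t) + 0.2 * rng.standard_normal(t.size))

bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 80)}
components = {}
for name, (lo, hi) in bands.items():
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    components[name] = sosfiltfilt(sos, signal)
    print(f"{name:5s} band power: {np.mean(components[name] ** 2):.4f}")

residual = signal - sum(components.values())          # what the five bands do not capture
print("residual RMS after summing the bands:",
      round(float(np.sqrt(np.mean(residual ** 2))), 4))
```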
Coordination and communication between the autonomic bodily system and the brain have been demonstrated in several studies .Current literature supports the assertion on a unified hierarchical nature to neural oscillations which have focused on spatial scales . Crucial for neocortical function, the membrane potential of nearly all neocortical neurons undergo 10\u201320 mV oscillations conscious cognitive function to screen internal and external stimuli for salient motivational cues that indicate a possible threat or reward Basar, . While tThe phenomenal aspect of the foundation of this layer, considering the role of the biological oscillations, consists of basic homeostatic drives functionally founded in the most basic division of self-vs-other. Emotional and primordial sensations are thought to have arisen on the back of a more general capacity for basic self-awareness connections between the limbic system and an array of neocortical areas (corticolimbic circuits) are key to higher-order processing of emotions throughout evolution may have developed to not only to serve communicative functions but facilitate a proper mental and behavioral response to solving a diverse range of adaptive problem domains that influence the chance of reproductive success and may be responsible for higher-order consciousness necessitated by the lower layers responsible for primitive consciousness. We suggest this layer is responsible for sensory consciousness, sensory qualia, and higher cognitive functions which we suggest are superimposed cognitively, phenomenally, and physiologically upon the respective components of the lower layers. For instance, sensory qualia are superimposed upon the sensory frameworks of which they correspond. Certain cognitive capabilities are coordinated by emotional activity or \u201caffect programs.\u201d In essence, these higher levels of cognition and sensory experience are dependent on the lower layers we have described to be part of a global metastable continuum.There is still debate among neuroscientists as to whether Gamma oscillations ( >30 Hz) play a functional role in consciousness and cognition, from being completely non-functional to being a neural correlate of consciousness literature in general support or view that lower frequency activity represents more basic and widespread activity while higher frequency activity is more localized and information-rich Orpwood, . While Bvia descending projections of the higher-frequency activity observed in the contrasting states of distress and higher awareness achieved through meditation are fundamentally different and this notion should be researched further. One likely difference is an asymmetry of oscillatory activity in the two hemispheres of the brain in the distressed state, with symmetrical sites of the cortex showing large differences in the frequency of oscillations present . We support the assertion that the integrated experience of consciousness is thus achieved when the myriad of distinct and highly-local high-frequency synchronies among neural assemblies are metastably synchronized together at a much more global, abstract, and complex scale. While the hypothesis we have presented is not based on our own empirical research, we believe the significance of our assertions may provide a deeper understanding on the nature of the previously proposed isomorphism between the integrated phenomenal experience and the global bioelectric architecture of the brain. 
Understanding the full nature of this isomorphism will be key in bridging the explanatory gap between the phenomenal mind and biological mechanisms of the brain and in developing theories on the biological and ultimately physical nature of consciousness.In this theoretical article, we have advanced a novel perspective on the hierarchical nature of biological oscillations which identifies an isomorphism among the frequency-based superimposition of neural oscillations and the superimposition of the contents of consciousness. The contents of consciousness have been previously proposed to consist of a superimposition of qualia upon objects and scenes further superimposed upon a 3D virtual coordinate matrix. We expand upon this assertion by including emotions and some aspects of cognition. This superimposition is identified Theory developed by RJ with some writing. CB wrote the majority of the manuscript with theoretical contributions. Images done by MJ.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "The vascular system develops in response to auxin flow as continuous strands of conducting tissues arranged in regular spatial patterns. However, a mechanism governing their regular and repetitive formation remains to be fully elucidated. A model system for studying the vascular pattern formation is the process of leaf vascularization in Arabidopsis. In this paper, we present current knowledge of important factors and their interactions in this process. Additionally, we propose the sequence of events leading to the emergence of continuous vascular strands and point to significant problems that need to be resolved in the future to gain a better understanding of the regulation of the vascular pattern development. The development of the continuous conductive system is one of the most important morphogenetic processes occurring in plants. Proper construction and functioning of this system are necessary for the transport of water and nutrients, as well as the movement of signaling molecules Dengler . VasculaCurrent knowledge about the process of vascularization is based mainly on studies of the venation pattern formation in the first vegetative leaf of Arabidopsis, one of the main models for research on vascular tissue development , MP (MONOPTEROS), ATHB and DOF (DNA-BINDING WITH ONE ZINC FINGER) and the precise definition of all stages of vascular tissue emergence is necessary for a more complete understanding of the mechanisms of venation pattern formation.The first strand of procambial cells, defined as elongated, but not yet longitudinally dividing cells, emerges in the central part of the leaf blade simultaneously along its entire length, and determines the location of the future primary vein Fig.\u00a0, 2008. DIt is believed that the manner of procambium formation is related to the level of endogenous auxin. Simultaneous emerging of procambium strands, probably reflects the effective flow of auxin during the initial stages of leaf development, resulting from inefficient biosynthesis of auxin and its low concentrations. 
On the other hand, nonsynchronous procambium differentiation, separately in marginal and lateral vein of the third loops, can be associated with higher levels of auxin, which is synthesized in adjacent hydathodes reaction\u2013diffusion prepattern and (2) canalized auxin flow (Nelson and Dengler Polar auxin transport (PAT) represents auxin movement from cell to cell, and involves the influx and efflux carriers located in the plasma membrane are expressed in the first leaf of Arabidopsis and are able to potentially regulate the direction of auxin transport during vascularization mutant gene encoding the Golgi-associated retrograde protein (GARP) complex has been identified as also involved in venation development , PHABULOSA (PHB), PHAVOLUTA (PHV), CORONA (CNA), and ATHB8, which bind to DNA, regulating of the gene expression in many developmental processes and the MP gene regulates expression of PIN1, because the expression domains of these genes partially overlap during leaf vascularization, and the mutation in the MP gene reduces the levels of PIN1 proteins MP proteins may occur at different concentrations in different regions of the leaf primordium and only the future procambial cells have these proteins in concentrations sufficient to activate ATHB8 gene expression proteins from the same family as ATHB8, which can bind to its promoter , fkd (forked), van3/sfc (scarface) (Hardtke and Berleth In the sequence of events outlined here, auxin and its polar transport play primary and central roles. However, despite substantial evidence about the association of auxin transport via polarized PIN1 proteins with the formation of vascular tissues, the mechanism of PAT-dependent vascularization is still not fully elucidated and remains very important goal for future research. Moreover, some studies have shown that vascular differentiation may occur without the involvement of PAT and in the absence of PIN1 proteins in plasma membranes, which strongly suggests the existence of additional mechanisms regulating vascular development Banasiak . It is hAB contributed conception and design of manuscript; MB collected and processed the data; AB and MB wrote the chapters of manuscript, read and approved the submitted version."} +{"text": "There are errors in the second and third sentences of the second paragraph of the Methods, as well as errors in the second and third equations. Please see the corrected sentences and equations here:xi can be calculated by x onward, lrx, can then be calculated by\u03c9 denotes the highest age attained. 
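The corrected equations themselves did not survive the extraction above. Purely as an illustration of the kind of expression being corrected, a remaining lifetime risk of this form is often written as age-specific onset probabilities weighted by the chance of reaching each age alive and healthy; the notation below (h_i for the healthy fraction at age i, q_i for the onset probability at age i) is assumed for the sketch and is not necessarily the authors' exact corrected formula.

```latex
% Illustrative general form only -- not the published correction.
% h_i    : fraction of the cohort alive and healthy at age i
% q_i    : probability of first disease onset at age i
% \omega : highest age attained
\mathrm{lr}_x \;=\; \frac{1}{h_x}\sum_{i=x}^{\omega} h_i\, q_i
```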
Hence, the fraction alive and healthy at age For simplicity, we will write The following changes occurred after the adjustment for the incorrect summation index:Myocardial infarction: slight increase in the lifetime risk estimatesColorectal cancer: slight increase in the lifetime risk estimates, the total change between the two lifetime risks compared, and the contributions of changes in disease incidenceHip fracture: slight increase in the lifetime risk estimates, the total change in the two lifetime risks compared, and the contribution of changes in survival; slight decline in the contribution of changes in disease incidenceThese changes are represented in an updated The incorrect summation index also appears in the S2 Appendix(PDF)Click here for additional data file."} +{"text": "The main objective of this paper is to present the data set which depicts faculty commitment and effectiveness of job responsibilities in a changing world and the moderating role of the university\u05f3s support system. The population of the study consisted all the 1912 Faculty members of six selected private universities in Southwest, Nigeria Specification TableValue of the data\u2022The data covers a representative sample of private universities in Nigeria, thus enhancing external validity of the findings.\u2022University management can leverage on the data, if analysed, for the purpose of decision making regarding employee commitment and job responsibilities\u2022The analysis of this data can give valuable insights into the roles which university support plays in enhancing job commitment and responsibilities. See \u2022The data can be used as a platform for further investigation by other researchers.\u2022The data provided here can be used for educational and change management purposes.\u2022The research instrument can be adopted or adapted for similar studies\u2022This data can be used to determine the unique dimensions of relationships and significant effects of faculty commitment, university support and effectiveness of job responsibilities of faculty members.1The data comprised raw statistical data on the influence of faculty commitment on the effectiveness of job responsibilities, with the role of the university system as a moderating factor. Descriptive design was adopted for this data set. Statistical Package for Social Sciences (SPSS) and AMOS 22. Structural Equation Modelling (SEM) was used to determine the strength of relationship and resultant effects of the observed variables and the latent constructs. 2n = the desired sample size to be determinedN = total populatione = accepted error limit (0.05) on the basis of 95% confidence level.In order to determine the sample size, The data presented above was based on the quantitative study. To investigate the effect of faculty commitment on the effectiveness of job responsibilities, survey research design was adopted. The best six private universities in Southwest Nigeria were sampled . The study population consisted all the ranks of the 1912 faculty members of the selected universities. Four hundred (400) of the faculty members were selected across all colleges to participate in this study with the use of a structured questionnaire. However, information on the details of the study population can equally be accessed in The data provides insight into the role institutional support plays in enhancing Faculty commitment and effectiveness of job responsibilities. 
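The sample-size expression referenced in this record, with n, N and e defined as above, matches the widely used Taro Yamane formula n = N / (1 + N e^2); that identification is an assumption, since the equation itself did not survive extraction. A minimal sketch of the calculation for the stated population of 1,912 faculty members:

```python
def yamane_sample_size(population: int, error: float = 0.05) -> int:
    """Taro Yamane sample-size formula: n = N / (1 + N * e**2).

    population -- total population N
    error      -- accepted error limit e (0.05 for a 95% confidence level)
    """
    return round(population / (1 + population * error ** 2))


# Study population of faculty members across the six selected universities.
print(yamane_sample_size(1912))  # ~331; the record reports 400 questionnaires administered
```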
The data presented in this article may assist the management of the institutions of higher learning to have deep insight and understanding into the significant role of institutional support in enhancing faculty commitment and effectiveness of job responsibilities. This suggests that management of the selected universities may leverage on the data for the purpose decision making, educational and change management purposes. The data presented could also be used for further investigation."} +{"text": "Such information would aid in the development of evidence-based heat therapy protocols for both sporting and clinical situations.MI, JP and SR conceived and contributed to writing the manuscript. All authors provided critical feedback approved the final version of the manuscript.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Bilateral ligamentum teres (BLT) hepatis is a very rare anomaly defined as the connection of the bilateral fetal umbilical veins to both sides of the paramedian trunk, and it has never been reported in the English literature.A 72-year-old man who presented with obstructive jaundice was referred to our hospital. Contrast-enhanced computed tomography revealed that the patient had right-sided ligamentum teres (RSLT) and left-sided ligamentum teres (LSLT). The umbilical portion of the left portal vein, which the LSLT connected, became relatively atrophic in this patient. The RSLT attached to the tip of the right anterior pedicle and formed the umbilical portion of the right portal vein. The patient was diagnosed with perihilar cholangiocarcinoma which had invaded the root of the posterior branch of the bile duct, LHD, and intrapancreatic bile duct. The central bisectionectomy, in which the liver parenchyma was resected along the RHV on the right side and the LSLT on the left side, and caudate lobectomy combined with pancreatoduodenectomy were performed.The presence of the patient with BLT is important for ascertaining the mechanism of the development of RSLT. Two umbilical veins are present initially during the embryonic stage. In general, the right-sided vein disappears, and the atrophic left-sided vein remains connected to the left portal vein originating from the vitelline vein. Several papers on the mechanism of the development of RSLT have been published. Some authors have mentioned that a residue of the right umbilical vein and the disappearance of the left umbilical vein are the causes of RSLT. On the other hand, some authors have asserted that RSLT is the result of atrophy of the medial liver area. The presence of BLT in patients indicates that the mechanism of the development of RSLT is characterized by a residue of the right umbilical vein and the disappearance of the left umbilical vein.The mechanism and origin of RSLT can be understood through cases of BLT, and surgeons must pay attention to anomalies of the portal and hepatic veins in patients with abnormal ligamentum teres. Bilateral ligamentum teres (BLT) hepatis is a very rare anomaly defined as the connection of the bilateral fetal umbilical veins to both sides of the paramedian trunk, and it has never been reported in the English literature.A 72-year-old man who presented with obstructive jaundice was referred to our hospital. 
Contrast-enhanced computed tomography (CT) revealed that the patient had right-sided ligamentum teres (RSLT) and left-sided ligamentum teres (LSLT) Fig. a. The umThe ligamentum teres hepatis is a remnant of the umbilical vein that exists in the embryonic stage. The ligamentum teres hepatis connects the LUP and the Arantius duct (ligamentum venosum), which is an important landmark during liver dissection. Only one ligamentum teres hepatis exists in most cases, but we experienced a case of two ligamentum teres hepatis and demonstrated some concerns raised during surgical dissection; we also ascertained the mechanism of the development of RSLT, which exhibits a reported prevalence of 0.1\u20131.2% .From the viewpoint of surgical resection, surgeons must pay attention to the anomalies of the portal and hepatic veins. In this case, the RSLT was attached to the tip of the right anterior pedicle and formed the RUP, and the LSLT formed the relatively atrophic LUP. The Arantius duct was continuous with the left portal vein. The middle hepatic vein (MHV) had shifted to the left side, and the branch of the MHV drained the anterior inferior and left paramedian sections as well as part of the left lateral inferior section. The artery and bile duct in this BLT patient ran along the portal vein similar to what is found in normal patients, though the anterior inferior branch and anterior superior branch of the artery and the bile duct ran beside the RUP. Preoperative simulation is important in patients with planned BLT hepatectomy, and surgeons should perform hepatectomy with consideration for these features.The presence of the patient with BLT is also important for ascertaining the mechanism of the development of RSLT. The mechanism of the development of RSLT has been discussed in recent decades. Two umbilical veins are present initially during the embryonic stage. In general, the right-sided vein disappears, and the atrophic left-sided vein remains connected to the left portal vein originating from the vitelline vein . SeveralThe mechanism and origin of RSLT can be understood through cases of BLT, and surgeons must pay attention to anomalies of the portal and hepatic veins in patients with abnormal ligamentum teres."} +{"text": "The functional roles of kainate receptors (KARs) are less well defined but they play a role in some forms of synaptic plasticity. Both receptor types have been shown to be highly developmentally and activity-dependently regulated and their functional synaptic expression is under tight cellular regulation. The molecular and cellular mechanisms that regulate the synaptic localisation and functional expression of AMPARs and KARs are objects of concerted research. There has been significant progress towards elucidating some of the processes involved with the discovery of an array of proteins that selectively interact with individual AMPAR and KAR subunits. These proteins have been implicated in, among other things, the regulation of post-translational modification, targeting and trafficking, surface expression, and anchoring. The aim of this review is to present an overview of the major interacting proteins and suggest how they may fit into the hierarchical series of events controlling the trafficking of AMPARs and KARs."} +{"text": "The global travel and tourism industry has been rapidly expanding in the past decades. The traditional focus on border screening, and by airline and cruise industries may be inadequate due to the incubation period of an infectious disease. 
This case study highlights the potential role of the hotel industry in epidemic preparedness and response.This case study focuses on the epidemic outbreaks of SARS in 2003 and H1N1 swine flu in 2009 in Hong Kong, and the subsequent guidelines published by the health authority in relation to the hotel industry in Hong Kong which provide the backbone for discussion.The Metropole Hotel hastened the international spread of the 2003 SARS outbreak by the index case infecting visitors from Singapore, Vietnam, Canada as well as local people via close contact with the index case and the environmental contamination. The one-week quarantine of more than 300 guests and staff at the Metropark Hotel during the 2009 H1N1 swine flu exposed gaps in the partnership with the hotel industry. The subsequent guidelines for the hotel industry from the Centre of Health Protection focused largely on the maintenance of hygiene within the hotel premises.Positive collaborations may bring about effective preparedness across the health and the tourism sectors for future epidemics. Regular hygiene surveillance at hotel facilities, and developing coordination mechanism for impending epidemics on the use of screening, swift reporting and isolation of infected persons may help mitigate the impact of future events. Preparedness and contingency plans for infectious disease control for the hotel industry requires continuous engagement and dialogue. The global travel and tourism industry has expanded rapidly in recent years. The global number of international tourist arrivals increased from approximately 541 million in 1995 to 1161 million in 2014 .The search identified 34 records from MEDLINE of which five were relevant ,A medical professor from Guangzhou in China arrived in Hong Kong on 21 February 2003 and checked into a room on the ninth floor of the Metropole Hotel in Kowloon . During On 8 March 2003, the Singapore Ministry of Health reported to the Hong Kong Department of Health that three patients who presented with pneumonia were admitted to hospital after returning to Singapore from Hong Kong. They had all stayed in the Metropole Hotel. During the conversation, laboratory investigations were pending and there was not sufficient evidence to suggest that their illnesses had been related to the Metropole Hotel .On 12 March 2003, the WHO issued a global alert about unusual cases of an acute respiratory syndrome. On 14 March 2003, the index case for the significant outbreak at the Prince of Wales Hospital was identified. It was not until 19 March 2003, after multiple enquiries, the patient revealed that he had visited the Metropole Hotel around that period as a visitor but not a guest . On the The Metropole Hotel exemplified the potential international spread of infectious diseases. The index cases in the Hong Kong, Toronto, Singapore and Hanoi outbreaks were all associated with the hotel. SARS patients in Ireland and United States had also visited the Metropole Hotel around the same time when there were other sick guests present in the hotel \u201314, 18. Little was known about the new disease SARS when the outbreak began at hotel and hospitals in February and early March. The WHO did not issue its first emergency travel advisory naming the illness as SARS until 15 March 2003 . There wA number of researchers estimated the basic reproduction number of SARS by fitting models to the initial growth of the epidemics in a number of countries . 
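A common way such estimates are produced is to fit the initial exponential growth rate of reported cases and then convert it to a reproduction number under an assumed generation-time model. The sketch below uses the simple SIR-type approximation R0 ≈ 1 + r·Tg with synthetic counts and an assumed generation time; it illustrates the idea only and is not the specific method of the studies cited.

```python
import numpy as np


def growth_rate(days: np.ndarray, cases: np.ndarray) -> float:
    """Initial exponential growth rate r (per day): slope of log(cases) against time."""
    slope, _intercept = np.polyfit(days, np.log(cases), 1)
    return slope


def r0_from_growth(r: float, generation_time_days: float) -> float:
    """SIR-type approximation: R0 ~ 1 + r * Tg."""
    return 1.0 + r * generation_time_days


# Hypothetical early-outbreak counts (not real SARS data); generation time assumed ~9 days.
days = np.arange(10)
cases = np.array([3, 4, 6, 8, 12, 16, 23, 31, 44, 60])
print(round(r0_from_growth(growth_rate(days, cases), 9.0), 2))
```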
ModelliA 25-year-old male from Mexico arrived in Hong Kong on 30 April 2009, and stayed at the Metropark Hotel. He attended hospital on the same evening where he was admitted to an isolation ward. He subsequently developed a fever and was confirmed to have swine flu on 1 May 2009 .Emergency Preparedness Plan for Influenza Pandemic. An \u2018Emergency Response Level Steering Committee on Human Swine Influenza (Flu A H1N1) Pandemic\u2019 was also established on the 1st May to formulate the overall disease control strategy [The Hong Kong Special Administrative Region (HKSAR) government raised the response level to \u2018Emergency\u2019 on the same day under the strategy .Prevention and Control of Disease Ordinance, the Director of Health ordered that all guests and staff at the Metropark Hotel should be quarantined on the evening of 1 May 2009 [Under the May 2009 . The quaThe quarantine ended 1 week later, which covered the incubation period of influenza of 1 to 7\u2009days. For those persons who had completed the quarantine period without showing symptoms of being infected, they were issued with Certificates of Conclusion of Quarantine.At the same time, the Centre for Health Protection (CHP), other government departments and relevant agencies conducted contact tracing starting on the 1st May 2019. Close and selected social contacts were prescribed chemoprophylaxis and put under medical surveillance. The hotel and nearby streets, as well as other public places, were cleansed and disinfected. Hygiene guidelines had been issued to all licensed hotels/guesthouses and rented rooms to encourage enhanced cleansing and improvement of hygiene. All industrial associations had been informed of the situation and reminded the need to take precautionary measures. The Occupational Safety and Health Council had organised public seminars to raise public awareness of preparedness for influenza in the workplace . Table\u00a02Both the Metropole Hotel at Kowloon (now renamed Metropark Hotel Kowloon) and the Metropark Hotel at Wan Chai (now renamed Kew Green Hotel) were four stars hotels situated at busy part of the city. Both hotels were managed by the same management group \u2013 the China Travel Service (Holdings) Hong Kong Limited. The two hotels were no different in terms of access to public health facilities and general standard of care. The difference in the timing of public health actions by the health authority was likely contributed by experience of SARS preceding swine flu.After the SARS outbreak in Hong Kong the health authority established the Guidelines for Hotels in Preventing Severe Acute Respiratory Syndrome (SARS) and GuidThe CHP organised Infection Control Seminars for the Hotel Industry on a regular basis. For instance, in response to the Ebola virus outbreak in West Africa in 2014\u20132016, the CHP provided advice for the local hotel industry on receiving guests with a travel history or residence in an Ebola virus disease affected area. The guideline stressed the importance of enquiring about the travel history of guests and outlined procedures on handling these guests who may feel unwell. The guideline reiterated the need to keep a record of staff and residents who had stayed in the hotel, with their personal and contact details, for possible future public health actions and contact tracing .Ideally, hotels should be setting their own standards of hygiene measures and providing training to staff before an outbreak occurs. 
Further roles and responsibilities included contingency arrangement, plan of acquisition of protective equipment, disease reporting and surveillance mechanism during outbreak period .From our online and database internet search, however, there is little mention of collaboration between the government and the hotel industry. No documentation was found on setting up of task forces or committees, or of invitation to hotel representatives to the working group advising on infection control guidelines in Hong Kong.SARS served as the classic example of how tourism and international travel can present challenges to the global health system. The spread of the illness within a single hotel and the subsequent international air travel of the victims contributed to and accelerated the speed of the spread of SARS across the globe.The experience from SARS in Hong Kong had a profound impact on the public health reform especially on the infectious disease surveillance and epidemic response . These iAccording to subsequently published literature, application of appropriate measures had likely reduced the number of people who were infected, requiring medical care and died during the influenza pandemic . It has The decision of quarantine created enormous tension between the government, guests and the hotel management. The decision of the need for quarantine and the scale of the quarantine needs to be scientifically justified. The negative effects overall of such a policy on the tourism attractiveness of a destination cannot be neglected. The quarantine at the Metropark Hotel during swine flu also highlighted the extensive assistance needed for the quarantined persons, and the cooperation necessary in the possible future need for a hotel quarantine. Pre-established partnerships and coordination between the government and the hotel industry is key in epidemic preparedness and response.Studies have shown that the psychological impacts of SARS and the government restrictions on travel, had a great impact on the travel industry far beyond the region of SARS hit areas . For theHotels are often the first point of contact for tourists arriving at a host country. Hotels could provide an additional line of defence beyond entry border screening, and they could offer another layer of protection against illnesses that border screening processes may have missed, for example in the situation where travel occurs during the incubation period of an infectious disease. In view of this, the capacity of hotels in the detection of potential illness and the launching of an initial response should be fully recognised and utilised.The WHO pandemic influenza risk management recommended involving civil society and the private business sector in pandemic preparedness planning and national committees . The casThe epidemic preparedness and infection control measures mounted against SARS and H1N1 swine flu demonstrated a role that needed to be filled by the hotel industry. During SARS, late recognition of the environmental contamination of hotel facilities and the failure of timely intervention on the hotel guests with close contact contributed to the spread of the disease internationally. While the appropriateness and best method of quarantine in future pandemic influenza warrants further research, the 2009 swine flu hotel quarantine exposed gaps in the partnership with hotel industry. 
Health authorities in Hong Kong had since provided guidelines mostly in the area of disinfection and hygiene, and focused on educating hotel workers on basic hygiene to prevent the spread of infectious diseases. The potential to establish traveller screening, timely reporting and isolation for the infected guests during epidemics could be explored. The capacity of the hotel industry in controlling infections should be recognised not only in Hong Kong but also in other parts of the world."} +{"text": "The composition of the terrestrial arthropod community of a tidal marsh islet in the Gulf of Gabes (Tunisia) was studied during two seasons . The study was conducted on a small islet located in an area where the highest tidal excursions of the Mediterranean occur. Standard trapping methods were used to evaluate specie richness and abundance in different areas of the islet. Diversity indices were calculated for coleopterans and isopods alone. The structure of the arthropod community varied a great deal from one season to the other and differences were found when seaward areas were compared with landward ones. El Bessila presented a particularly rich beetle community whereas only few isopod species occurred. The moderately high diversity levels found for the beetle indicate the influence of the high tidal excursions in modelling the structure of the community."} +{"text": "Population dynamics studies and harvesting strategies often take advantage of body size measurements. Selected elements of the skeletal system such as mandibles, are often used as retrospective indices to describe body size. The variation in mandibular measurements reflects the variation in the ecological context and hence the variation in animal performance. We investigated the length of the anterior and posterior sections of the mandible in relation to the conditions experienced by juveniles of 8\u201310 months of age during prenatal and early postnatal life and we evaluated these parameters as ecological indicators of juvenile condition as well as female reproductive condition in a roe deer population living in the southern part of the species range. We analyzed a sample of over 24,000 mandibles of roe deer shot in 22 hunting districts in the Arezzo province from 2005 to 2015 per age class. Mandible total length in juveniles is equal to 90% of total length in adults. In this stage of life the growing of the mandible\u2019s anterior section is already completed while that of the posterior section is still ongoing. Environmental conditions conveyed by forest productivity, agricultural land use, local population density and climate strongly affected the growth of the anterior and posterior sections of the mandibles. Conditions experienced both by pregnant females and offspring played an important role in shaping the length of the anterior section, while the size of the posterior section was found to be related to the conditions experienced by offspring. Temporal changes of the length of the anterior section are a particularly suitable index of growth constraints. Anterior section length in fact differs according to more or less advantageous conditions recorded not only in the year of birth, but also in the previous year. Similarly, the sexual size dimorphism of the anterior section of the roe deer mandible can be used to describe the quality of females above two years of age, as well as habitat value. 
Hence the anterior section length of the mandible and its sexual size dimorphism are indexes that can provide cues of population performance, because they capture the system\u2019s complexities, while remain simple enough to be easily and routinely used in the majority of European countries where roe deer hunting period extends from early autumn to late spring. Ovis aries, Gaillard et al. . See (DOCX)Click here for additional data file."} +{"text": "This commentary presents a summarized discussion of key findings and relevant ideas from previously published study, index analysis, and human health risk model application for evaluating ambient air-heavy metal contamination in Chemical Valley Sarnia (CVS). The CVS study provides previously unavailable data in the CVS area which evaluates the adverse effects on air quality due to nearby anthropogenic activities. The study provided an assessment of environmental pollutants, finding that carcinogenic and non-carcinogenic substances are present in trace quantities. The main findings of the study suggest that chronic exposure of humans to several contaminants identified in the area studied may lead to carcinogenic health effects, including cancer (such as nephroblastomatosis) as well as non-carcinogenic health effects, such as damage to the tracheobronchial tree. Children were found to have a significantly higher risk, that is, a higher hazard index: a value used to measure non-carcinogenic health risk from heavy metals identified in air samples collected during the research period from 2014 to 2017. This study quantified the influence of environmental contaminants, relative to human exposures and the consequence of developing nephroblastomatosis in the human population. Comment on: Olawoyin R, Schweitzer L, Zhang K, Okareh O, Slates K. Index analysis and human health risk model application for evaluating ambient air-heavy metal contamination in Chemical Valley Sarnia. Ecotoxicol Environ Saf. 2018;148:72-81. doi:10.1016/j.ecoenv.2017.09.069.The advent of technological advancement and innovation has yielded enormous benefits supporting industrialized economies and the development of the new industry known as \u201cIndustry 5.0\u201d\u2014the industry of the future with focus on personalization and customization of products and services for all. Industry 5.0 also increases the collaboration between machines and humans working side by side for optimal productivity.1 the vulnerability of children and families, and communities, to the psychological, social, economic, and ecological consequences of environmental degradation from anthropogenic activities was found to extend beyond the initial periods of the disasters. The strongest predictors of stress were family health concerns, commercial ties to renewable resources, and concern about economic future, economic loss, and exposure to the contaminants. Upsurge in the levels of contaminants generated and released into the environment from anthropogenic activities such as industrial or manufacturing processes is problematic and will continue to arouse public attention.The expansion in human activities has been the dominant influence on the environment, including the human environment, in a period known as the Anthropocene. Environmental pollutions affecting the ecosystems and human health indicate an increase in stress response among the individuals impacted. 
In an extant study,Environmental pollution in the Anthropocene has adverse toxic effects on human health depending on the concentration and dose of the chemicals of exposure relative to the location of the receptor. A conventional deterministic risk assessment method can be used to estimate the potential carcinogenic and non-carcinogenic risks from worst-case inhalation, ingestion, and dermal contact exposures to chemicals of potential concern (CoPC), in the air, food, water, and soil for susceptible population in polluted areas such as the Chemical Valley Sarnia (CVS).The route of entry of toxic pollutants into the human body can be through 3 main exposure pathways: (1) inhalation of particles present in the air, (2) ingestion, and (3) dermal absorption as shown in 2 Environmental research is effective in providing appropriate identification, assessment, and characterization of pollutants, which will in turn help protect the public from environmental pollution hazards, especially in the Anthropocene.Inhalation is a major exposure pathway for volatile organic compounds (VOCs) and particulate matters with impregnated toxic heavy metals. The presence of such pollutions near human habitation agitates the public due to the potentially hazardous nature of the pollutants. Both acute and chronic exposures to CoPC are critically important for the assessment of the potential impacts to human health. Unfortunately, public attention is often not garnered and maintained because of the intricacy of incremental degradation of human health from long-term chronic exposures. This subtle and potential source of exposures to familiar substances such as trace amounts of metals may pose significant public health hazards but too often remain largely unidentified.The CVS impact assessment provides benefits to the community by presenting the findings of the study to the public and provided significant conclusions that can help promote human health protection for the residents of the area. The study considered the following:Environmental pollution effects on human health from air quality degradation in the region;Assessment of human exposure (chronic and acute) to hazardous contaminants based on comprehensive assessment of exposure for both adults and children;Risk level assessment and characterization of the prevalence of nephroblastomatosis (Wilms kidney cancer) in the area;Risk assessment and determination of other carcinogenic and non-carcinogenic health effects based on the impacts of human exposure to the pollutants based on the quantities identified during the study.3Considering the significance of the area of research selected for the CVS study, it is also important to evaluate the heavy industrial activities on the waterfront of the Sarnia. Located at the mouth of Lake Huron, which provides significant coastlines of Michigan and the Ontario Province, Sarnia Canada also sits along the St Clair River which borders between Ontario and Michigan and flows into the nearby Lake St Clair.The river continues its flow out as the source of the Detroit River, feeding directly into Lake Erie which provides coastlines for Ohio, Pennsylvania, and New York. 
The CVS study provides Sarnia with important data to better understand anthropogenic sources of atmospheric pollutants generating hazardous air quality conditions which are certain to be a source of atmospheric deposition of these contaminants to surrounding populations.Sarnia is located at the top center position (upwind), with potential of generated pollutants reaching sufficiently high altitudes and dispersed downwind toward the Detroit Metropolitan Area see . When co4 This single chemical spill affected more than a dozen water plants, shown in 4The surrounding environment and populations sharing local waterways and proximity with Sarnia are where contaminants mostly accumulate with potentially adverse effects within aquatic biomes, including the ocean as the water of the Great Lakes terminates in the Atlantic. Direct contamination of waterways from industrial activities in Sarnia is a recurrent event. An example is the 2004 river contamination, when an estimated 39\u2009000\u2009gallons of methyl ethyl ketone and methyl isobutyl ketone were discharged into the St Clair River.5 In effect, anthropogenic pollutants released into the atmosphere in Sarnia only need to be transported short distances before they are compounded as they become available for deposit in Cleveland, Toledo, or at other surrounding suburbs.The CVS study is quintessential due to the proximity of the heavy industrial activities along Sarnia to multiple densely populated urban locations. Potential exposures to hazardous air conditions of suburban areas were considered in the study and the diminished air quality around these locations. Urban centers continually generate their own ongoing source of airborne contaminants including lead from sources such as leaded fuel and leaded paints or incinerated refuse.The travel time by which environmental airborne pollutants can be transferred across locations such as from Sarnia to Detroit is illustrated in A comparison of the wind and travel time is shown in 3 Thus, it is a noteworthy idea as a location of research, given the potential benefit to improve human health of the at-risk population in the CVS area. The research study provides crucial information from which to determine the sufficiency of air quality according to standards for harmful pollutants set by the US National Ambient Air Quality Standards (NAAQS) of the United States Environmental Protection Agency (US EPA).3 Based on the level of air quality as assessed during the air quality analysis study in the CVS area, the populations represented within the studied area and their municipalities can, for the first time, accurately begin to understand the human health risks posed by anthropogenic environmental contaminants, primarily from activities of the significant presence of the petrochemical industries located in Sarnia.An air quality study quantifying the concentration of anthropogenic pollutants in the region had not been provided until the CVS study.Time-weighted average dose was linked to the exposure simulation concentrations for heavy metals and this was used for the exposure analysis for the inhalable particulates. The assessment of the chronic and acute inhalation exposure risks considered the dose-response criteria of the US EPA and the carcinogenic and non-carcinogenic risks for all risk-posing CoPC were estimated for the different population groups. 
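In a conventional deterministic assessment of this kind, the non-carcinogenic hazard quotient is the estimated dose divided by a reference dose, and the incremental lifetime cancer risk is the chronic daily intake multiplied by a cancer slope factor. The sketch below shows that arithmetic for an inhalation pathway with hypothetical inputs; the metal-specific concentrations, reference doses, slope factors and receptor parameters used in the CVS study are not reproduced here.

```python
def inhalation_dose(conc, intake_rate, exposure_days, body_weight, averaging_days):
    """Average daily dose via inhalation (mg/kg-day): C * IR * ED / (BW * AT)."""
    return conc * intake_rate * exposure_days / (body_weight * averaging_days)


# Hypothetical child receptor and airborne metal concentration (illustrative values only).
C, IR, BW = 2e-5, 10.0, 15.0                               # mg/m3, m3/day, kg
ED, AT_noncancer, AT_cancer = 6 * 365, 6 * 365, 70 * 365   # days

hq = inhalation_dose(C, IR, ED, BW, AT_noncancer) / 1e-4   # dose / assumed reference dose; HQ > 1 is a concern
ilcr = inhalation_dose(C, IR, ED, BW, AT_cancer) * 8.4     # dose * assumed slope factor; > 1e-4 reads as definite risk
print(round(hq, 3), f"{ilcr:.1e}")
```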
These activities included collecting and analyzing air samples from multiple locations during a 3-year span of the CVS study, and the resulting data that were analyzed offer further benefits as a guide to organizing efforts tasked with mitigating hazardous exposure to environmental contaminants. By providing data on the distribution of contaminants at sampled locations, the CVS study contributes to the body of knowledge necessary for identifying and mitigating the pollution debacle in the region. Cleanup efforts which strategize hazard mitigation according to the available data maximize the health improvements which can be made on behalf of the at-risk population. In addition, the CVS study can be used as a tool for understanding the fate and transport of hazardous materials in the region and evaluate the rate of human exposures to the pollutants of concern.To achieve the study objectives, an ecological risk index assessment was conducted on the collected air samples to estimate the deposition fractions and deposition fluxes in the human respiratory system. The study assessed the effects of the trace-metal-bound particles and the penetration potentials through the head airways and possible damages to the tracheobronchial tree and the alveolar region, thus leading to the development of nephroblastomatosis, especially among children.The CVS study also calculated the contamination factors, enrichment factors, pollution load index, and contamination indices. Hazard quotient (HQ) is the ratio of exposure to the estimated daily exposure level at which no adverse health effects are likely to occur. Non-carcinogenic risk assessment of the CoPC was conducted and the HQs of the individual exposure from the heavy metals were determined. If HQ\u2009>\u20091, the receptor is at risk of non-carcinogenic effects. When HQ\u2009>\u20091, the risk of non-carcinogenic can result in adverse health damages to the human body. The HQs were calculated for non-carcinogenic parameters for inhalation pathways.\u22124, it is considered as \u201cdefinite risk,\u201d and the other risk levels are described in Incremental lifetime carcinogenic risks (ILCRs) from the heavy metals were also investigated. If the ILCR of the CoPC is more than 1\u00d7105 To establish an ecological risk index, the CVS study provided the following assessments:The study determined that the main route of entry of contaminants into the human body is through inhalation of respirable fractions of the contaminated-particulate-matter-bound heavy metals. 
These inhalable fractions of airborne particles are deposited in the respiratory tract along the airways during a single regular breath.Assessment of contamination factors to determine the extent of environmental pollution from industrial activities and the contamination of air quality in the CVS;Assessment of the pollution load index to estimate the contamination of the ambient environment for the use of comparing the assessed pollution levels between sample locations;Assessment of the enrichment factor to determine if the concentration of the enriched sample is from natural deposits by geogenic or anthropogenic (man-made) sources;Assessment of human health risk to determine exposure via inhalation of atmospheric particulate matter bound with heavy metals;Assessment of the ILCR to determine the incremental probability of an exposed person to carcinogenic substances developing cancer in his or her lifetime;3Assessment of non-carcinogenic health risks provided with use of the target HQ and hazard index used to measure health risks of heavy metals found in airborne particles.3The negative human health effects that may result from chronic exposures to the toxic metals selected for chemical and statistical analyses do include health risks which are important to consider. Through the application of the briefly described statistical methods, the CVS study concludes that anthropogenic heavy metals within sample particulate matter may be a potential source of the adverse effects leading to the concentration of the nephroblastoma disease in the area.3 From 2000 to 2009, approximately 72% of these nephroblastoma cases occurred in Marine City, MI. This community located within 20\u2009miles down the St Clair River from Sarnia, which was defined as a cancer cluster because of the high prevalence of cancer over time.6 The CVS study provides a characterization of nephroblastoma as a solid tumor of epithelial tissue formed from immature kidney cells which grows quickly on the exterior of the human kidney resulting in nephroblastomatosis.3Between 1990 and 2009, 63% of occurrences of nephroblastoma occurred among children under 5\u2009years of age.7 The prognosis of nephroblastoma is highly variable depending on its stage of development and has 2 histopathologic types, favorable and unfavorable, each having outcomes across the spectrum for each of these 2 categories.7 The cancer tumors are usually found in the adrenal gland, resulting in abdominal pain and distention. The tumors are mostly large in size. Ultrasound imaging procedures are an effective means of assessing internal tumors without subjecting children to harmful sources of radiation allowing medical staff to rule out occurrences of capsular or vascular invasion occurring in 4% to 10% of all patients.8Nephroblastoma or Wilms tumor is the most common renal tumor in children with peak instances occurring at the age of 3 to 4\u2009years.9 Other determinates for course of treatment include factors such as age, overall health, medical history, the extent of the disease, and tolerance to specific therapies. Common treatment also includes the surgical procedure of a radical nephrectomy which removes the kidney in its entirety, the surrounding tissues of the kidney, the ureter which drains urine from the kidney to the bladder, and the kidneys\u2019 corresponding adrenal gland which is located on the top of the kidney.9 When the tumor presents on both kidneys, surgical treatment consisting of a partial nephrectomy is recommended. 
This surgery is intended to remove the entirety of the tumor, leaving behind the maximum amount of kidney tissue unaltered by the tumor in an effort to spare as much of the kidney as possible. Approximately 5% to 8% of children diagnosed with capital nephroblastoma developed tumors in both kidneys.10 In cases where treatments are difficult or impossible, to remove tumors affecting both kidneys, which are too large to perform a partial nephrectomy, patients will be subjected to radiation therapy in an attempt to shrink the tumors prior to surgical removal.8Treatment of the condition varies depending on the size of the tumor developed at the time of treatment as well as other considerations such as the bilateral presence of tumors on both kidneys.9 Over time, the overall success rate of medical treatment and the patient experience of treatment side effects have both improved significantly. For the past 20 years, the 5-year survival rate among children diagnosed and living with nephroblastoma is approximately 86% and the overall longter survival rate is even higher (up to 90%).11In children, the prognosis for long-term viability of nephroblastoma survivors depends on many factors including the stage of the disease as well as the characteristics of the tumor cells, which may be assessed under the magnification microscope, the promptness of nephroblastoma identification and treatment, and the continued follow-up of medical care provided to the patient including ongoing adjustments to the course of treatment and continued nephroblastoma screenings.3Whereas the treatability and long-term survivability of the cancer may be a source of hope to patients and their health care providers, the dramatic courses of treatment required for nephroblastoma survivors merit a closer look into causative forces. The CVS study offers the populations affected and at risk of adverse health effects a clearer understanding of a potential source of pollutants resulting in the cancers prevalent around their communities. Among the results from the CVS study were the confirmation of the findings: (1) concentrations of airborne lead are present in air samples in excess of 350% above NAAQS air quality standards; (2) concentrations of chromium (VI), lead, and nickel are partly the result of anthropogenic activity; (3) ambient air is polluted in the study area with particle material with bound metals which may increase the human health risk of nephroblastoma, especially in children; (4) high risk of potential cancer affects children and adults in the area studied; (5) children are more likely to develop carcinogenic and non-carcinogenic health effects from exposures to elemental concentrations of airborne particulate matter.The public health benefits of the CVS study are quintessential to the proper identification of anthropogenic pollutant sources, the concentration in the environment, and the potential effects on receptors exposed to the contaminants. Ideally, the prevalence of childhood nephroblastomatosis combined with the results of this study will help decision-makers provide adequate care for the residents of the CVS area and other adjoining areas."} +{"text": "Data on the micrometeorological parameters and Energy Fluxes at an intertidal zone of a Tropical Coastal Ocean was carried out on an installed eddy covariance instruments at a Muka head station in the north-western end of the Pinang Island , Peninsula Malaysia. 
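As a rough illustration of what the eddy covariance instruments just mentioned compute, a turbulent flux is obtained from the covariance of high-frequency fluctuations over an averaging block; for sensible heat, H = ρ·cp·mean(w′T′) over, say, 30 minutes of 20 Hz data. The sketch below uses synthetic numbers and textbook constants, not the station's actual records or its full processing chain (coordinate rotation, despiking, corrections, and so on).

```python
import numpy as np

RHO_AIR = 1.2    # kg m-3, approximate air density
CP_AIR = 1005.0  # J kg-1 K-1, specific heat of air at constant pressure


def sensible_heat_flux(w: np.ndarray, t: np.ndarray) -> float:
    """H = rho * cp * mean(w' * T') for one averaging block (e.g. 30 min at 20 Hz)."""
    return RHO_AIR * CP_AIR * np.mean((w - w.mean()) * (t - t.mean()))


# Synthetic 30-minute block at 20 Hz (36,000 samples); not measured Muka Head data.
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.3, 36_000)                     # vertical wind, m s-1
t = 28.0 + 0.05 * w + rng.normal(0.0, 0.2, 36_000)   # air temperature, deg C
print(round(sensible_heat_flux(w, t), 1), "W m-2")
```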
The vast source of the supply of energy and heat to the hydrologic and earth\u05f3s energy cycles principally come from the oceans. The exchange of energies via air-sea interactions is crucial to the understanding of climate variability, energy, and water budget. The turbulent energy fluxes are primary mechanisms through which the ocean releases the heat absorbed from the solar radiations to the environment. The eddy covariance (EC) system is the direct technique of measuring the micrometeorological parameters which allow the measurement of these turbulent fluxes in the time scale of half-hourly basis at 20\u202fHz over a long period. The data being presented is the comparison of the two-year seasonality patterns of monsoons variability on the measured microclimate variables in the southern South China Sea coastal area. The data acquisition processes and instrumentations is reproduceable in any region of the world.1The data for the monsoonal variability in the tropical coast of Peninsula Malaysia on the micrometeorological parameters and Energy budget was observed based on the 2 years . The data recorded cut across two annual cycles of the Southeast Asia monsoon seasons. The data was collected to appreciate the climate variability on the overall variations of the meteorological parameters measured occasion by precipitation and temperature anomalies a broughtThe patterns of the micrometeorological data and other parameters were examined to understand the monsoon seasons on their distributions and variability . Tempera2The EC method used offers measurements of gas emission and ingestion that also allow measurements of energy exchanges in an area. The data collected is globally used in micrometeorological measurements in more than 3 decades. This has allowed it to grow into more advanced instrumentation and stronger practice that caused its usage across diverse disciplines and industries for environmental monitoring and inventory Eddy Covariance (EC) is a statistical tool to compute micrometeorological parameters, turbulent fluxes, and other useful parameters to define climate variability for environmental impact purposes"} +{"text": "Moreover, in favorable (won) matches the higher values of degree centrality of central defenders and defensive midfielders were also found in unbalanced scores. The comparisons between positions revealed that the highest and significant degree prestige levels were found in defensive midfielders in both close (12.10%) and unbalanced scores (10.95%). In conclusion, it is possible to observe that winning by an unbalanced score significantly increased the centrality levels of the wingers and forwards in comparison to close scores. Moreover, it was also found that independent of the final score or the unbalanced score level, the defensive midfielders were the most prominent or recruited players during the passing sequences.The purpose of this study is twofold: (i) analyze the variations of network centralities between close and unbalanced scores; and (ii) compare the centrality levels between playing positions. The passing sequences that occurred during the 64 matches played by the 32 national teams that participated in the 2018 FIFA World Cup were analyzed and coded. The network centralities of degree prestige and degree centrality were calculated based on the weighted adjacency matrices built from the passing sequences. 
The results reveal that higher degree centralities of midfielders occurred in unfavorable (lost) unbalanced scores ( The game of soccer allows the observation of two main types of relationships: (i) a network process between teammates aiming to synchronize the different individual behaviors and optimize the collective organization of the team; and (ii) a rapport of strength, that results from the interactions between two teams to beat each other in a dynamic system . Both reUsually in match analyses conducted on soccer, there is a strong tendency to code, collect and report evidence about the variation of different performance indicators between different contextual factors . HoweverIn a mixed approach, the social network analysis (SNA) applied to team sports has proposed a new way to interpret the outcomes of the match . The proUsing the pass as the linkage indicator in the network it has been possible to identify some evidence about the centrality levels of players during soccer matches . In the Considering the type of analysis, there is a lack of evidence about the influence of specific contextual factors in the studies that compared the network centrality levels of playing positions, namely considering the winning/losing factor and, most of all, the balance levels of the final scores. As one of the well-known contextual factors that may lead to different tactical behaviors and interactional processes between teammates, we hypothesize that network centralities may be different based on distinct contextual factors. Taking this rationale in mind, the unique study that tested the network levels between close scores and unbalanced scores revealed that the dyadic reciprocity levels of the players increased in winners during unbalanced matches and that total arcs and density were slightly greater in winners of close matches . Those mThis study coded the passing sequences of the 64 matches of the 2018 FIFA World Cup. Therefore, all the 32 national teams were observed and included in the analysis process. The passing sequences were coded and included in the analysis after testing the intra- and inter-reliability level of the expert observers. The weighted adjacency matrices built based on the passing sequences were then treated for the subsequent network analysis.The study was approved by the local ethical committee with the code IPVC-ESDL09052018. There is no contact or intervention with the players and the process was exclusively made based on observation.This study followed a cross-sectional observational design. All the passing sequences made by each team during each entire match of the 2018 FIFA World Cup were observed, coded and transformed in weighted adjacency matrices. Those matrices were then treated and the degree prestige and degree centrality (network measures) were calculated considering the playing positions of the players: (i) goalkeeper (GK); (ii) external defenders (ED), players that act as defenders in the side of the field; (iii) central defenders (CD), players that acts as defenders in the central region of the field; (iv) defensive midfielders (DMF), players that acts as midfielders in a region closer to the central defenders; (v) midfielders (MF), players that acts in the middle of the field linking the defensive and attacking players; (vi) wingers (W), players that act in forward and side regions; and (vii) forwards (FW), players that acts in the middle of attacking regions. 
The variations of degree prestige and centrality between playing positions were tested considering the unbalance level of the final score. Based on such options, we have excluded the draw situations that did not provide the same score (losing or winning) that can be comparable between unbalanced and balanced games. The matches with a final score difference of one goal were considered close scores and those with two goals or more of difference as unbalanced scores. The classification of balanced vs. unbalanced score was exclusively made considering the final score (end of the match). This definition of balanced vs. unbalanced score was used in a previous study on soccer .Two expert observers (sport scientists with more than 5 years of experience on soccer) were recruited to observe and code the passing sequences of all matches of the 2018 FIFA World Cup. All the successful passes between two teammates were considered to include in the sample. Those observers were tested for their reliability levels following a pre-post pilot study design using a total of seven matches of the competition . The pre-post analysis was interspaced by a 20-day interval period aiming to test the reliability level of the observers. The process was made before the full-data being collected, aiming to ensure the desirable level of reliability. The results obtained from the pilot study revealed an average of intra-class correlation level of 0.97 (excellent reliability) for the case of intra-observer analysis and an average of 0.91 (excellent reliability) for the case of inter-observer analysis. The values obtained revealed that the reliability level of the observers was enough to follow through with the data collection .After confirmation of the reliability level of the observers to code the passes between teammates, all the matches were observed, coded and treated following a network analysis process. The players were first coded by the playing position in the pitch and even in the case of replacements or changes during the match, the aim was to analyze the centrality levels of positions and not the specific players .Each passing sequence was converted in a weighted adjacency matrix . The pasThe weighted adjacency matrices of all matches were imported into Social Network Visualizer software . This free software allows converting weighted adjacency matrices into networks and calculating the centrality levels of the nodes (players). For this study, the calculation of the standardized degree prestige and degree centrality was made.This network measure quantifies the inbound links that a specific player received from his teammates and, foraji can be considered the elements of the weighted adjacency matrix of a G with a ni as vertex.in which The network measure of degree centrality represents the overall level of connection of a player with the teammates, considering the number of outbounds, thus a higher level of degree centrality suggests that the player contributed more often to the passing sequence by executing more passes . The staaji can be considered the elements of the weighted adjacency matrix of a G with a ni as vertexin which post hoc test after confirmation of the normality and homogeneity assumptions of the sample. The partial eta squared tested the effect size (ES) of the univariate MANOVA. The statistical procedures were executed in the SPSS for a p-value < 0.05. Moreover, the Cohen d tested the ES of the pairwise comparisons between playing positions. 
The following scale was used to determine the magnitude of ES for the case of Cohen d . The variations of centralities between playing positions and type of final score were tested with a univariate MANOVA and one-way ANOVA with Tukey HSD Cohen d : 0.0\u22120.2\u2217playing position and final score\u2217playing position for the case of degree prestige, however, no significant interaction were found in the pair difference of goals\u2217 final score . In the case of degree centrality were found significant interactions in the pairs difference of goals\u2217playing position and final score\u2217playing position , however, no significant interactions were found in the pair differences of goals\u2217final score .The univariate MANOVA tested the interactions between factors revealing significant interactions in the pair difference of goalsp = 0.046; ES = 0.472, small effect). In won matches it was observed that the degree centralities were higher in close scores for the cases of central defenders and defensive midfielders . Similar evidence was found for the case of degree prestige in central defenders and defensive midfielders . On the other hand, a significant increase of degree centralities in unbalanced scores among wingers and forwards was also observed in won matches. The degree prestige also increased in midfielders in the case unbalanced scores .Comparisons of network centralities within playing positions and between close and unbalanced scores can be observed in Descriptive statistics of network measures between playing positions in the case of close and unbalanced scores were tested .Comparisons between positions revealed that the highest degree prestige levels were found in defensive midfielders in both close (12.10%) and unbalanced scores (10.95%). On the other hand, the forwards presented the lowest degree prestige in both close (6.41%) and unbalanced scores (7.06%). However, as can be observed in The greatest degree of centrality levels were verified in defensive midfielders in both close (13.45%) and unbalanced matches (12.08%). On the other hand, the lowest degree centralities were observed in forwards in both close (4.60%) and unbalanced matches (5.12%). However, similarly to the case of degree prestige, the magnitude of changes between playing positions decreased from close to unbalanced matches .The network centralities allows to better identify the prominence of each player in the passing sequences of a team, namely considering the direction and weight of the passes . In the The first purpose of the study was to analyze the variations of centrality levels between favorable and unfavorable close and unbalanced scores within playing positions. The results revealed that in unfavorable results (lost) only the external defenders and defensive midfielders presented meaningful small increases of prominence levels in unbalanced matches, however, the remaining playing positions did not meaningfully change the prominence levels. In the other hand, in favorable (won) results there was two main type of evidence: (a) central defenders and defensive midfielders had meaningful greater values of degree prestige and degree centrality in close scores; (b) midfielders, wingers and forwards had meaningful greater values of degree centrality and degree prestige in unbalanced scores. 
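For the pairwise effect sizes reported here, Cohen's d can be computed as in the sketch below. The two groups of values are hypothetical, and the interpretation thresholds shown in the comment are one scale commonly used in sport-science work rather than a quotation of the scale applied in this study.

```python
import numpy as np

def cohens_d(x, y):
    """Cohen's d for two independent samples, using the pooled standard deviation."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    return (x.mean() - y.mean()) / np.sqrt(pooled_var)

# Hypothetical degree-centrality values (%) for one position in won close
# vs. won unbalanced matches.
close = [13.1, 12.7, 14.0, 13.6, 12.9, 13.3]
unbalanced = [11.8, 12.2, 11.5, 12.6, 11.9, 12.1]
d = cohens_d(close, unbalanced)

# One commonly used magnitude scale: <0.2 trivial, 0.2-0.6 small,
# 0.6-1.2 moderate, 1.2-2.0 large, >2.0 very large.
print(round(d, 2))
```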
Therefore, two main conclusions could be extracted from the results: (i) generally, independent from the differences of goals in unfavorable results, there are no meaningful differences in the degrees of centrality and prestige of the great majority of the playing positions; and (ii) won by one goal requires a meaningful greater participation of central defenders and defensive midfielders, however, won by an unbalanced score requires a meaningful increase in the centralities of the positions that occupy forward lines, namely midfielders, wingers and forwards.The fact of central defenders and defensive midfielders increasing their participation in favorable close scores may result from the team strategy to keep the ball in zones of low pressure and more security than in forward regions may be what increases the possibility of non-success . The styThe present study also tested the variations of network centralities between playing positions. In terms of close scores, it was found that the degree prestige (inbound) was largely greater for central defenders and defensive midfielders than for wingers and forwards. Interestingly, the magnitude of the differences decreased in the case of unbalanced scores, despite revealing the same tendencies. The degree prestige can be considered an indicator of the overall prominence level of a player to be recruited by his teammates . The resIn the case of degree centrality (outbound), larger differences were observed between playing positions than in the case of degree prestige, namely considering the comparisons of more defensive positions with forward players (wingers and the forwards). Forwards and wingers were clearly and meaningfully- less prominent than the remaining playing positions in both close and unbalanced scores. This suggests that the overall participation of these two positions in constructing the passing sequences and establishing relationships with their teammates is significantly smaller; however, this depends on the type of analysis. Naturally, in the case of counter-attacks or passing sequences that result in shots or goals, the rate of prominence may increase in forward players, based on previous research . HoweverThis study had some limitations. The passing sequences were not split by type of attack or final outcome and for that reason the results about the prominence level should be carefully interpreted. Moreover, the analysis of different team\u2019s formations was not considered. Also, there is no information about the tactical behavior that explains the prominence levels observed. Based on those limitations, it is important for future studies to split the passing sequences by type and final outcome and also add information about the pitch regions in which the passes occurred. The analysis of the formations and tactical behavior of the players should also be considered to provide a qualitative interpretation and to increase the holistic view about the dynamics that contribute to the final outcomes. Future studies should also consider other technical actions that may provide information about the interactions between team players. Moreover, an analysis that considers the spatio-temporal dimension should be considered to improve the understanding about the dynamics of the match. 
To do that, it may be important to add an analysis per period of time or per change at specific moments of the match. Despite its limitations, this study is, to the best of our knowledge, the first to compare the prominence levels of different playing positions in favorable and unfavorable close and unbalanced scores. The results allow coaches to recognize that favorable unbalanced scores increase the overall centrality levels of wingers and forwards, which may transfer to training scenarios or to decisions made during matches. Moreover, the evidence that defensive players are more prominent in building the attack across most passing sequences should encourage strategies that reduce the opponent's success in its zone of comfort. Considering the comparisons of network centralities between close and unbalanced scores, the main evidence revealed that defensive midfielders and central defenders presented meaningfully greater levels of centrality (inbound and outbound) in won close scores than in won unbalanced scores. On the other hand, won unbalanced matches meaningfully increased the centrality levels of wingers and forwards. Regarding the second purpose of this study \u2013 to compare the variations of network centralities between playing positions \u2013 defensive midfielders were both the most recruited and the largest contributors to the passing sequences, while forwards and wingers presented the lowest participation in both close and unbalanced matches. In conclusion, this study suggests that, independent of the magnitude of the difference in the final score, some playing positions (the midfielders) are relatively stable in their participation during passing sequences, whereas other positions (the forward players) increase their participation in favorable unbalanced scores. The datasets for this manuscript are not publicly available; data are available upon request. Requests to access the datasets should be directed to the first author FC: filipe.clemente5@gmail.com. The study was approved by the local ethical committee with the code IPVC-ESDL09052018. There was no contact or intervention with the players, and the process was based exclusively on observation. FC conceived the study. FC, HS, and GP designed the study. FC collected, analyzed, and interpreted the data. FC, HS, GP, PN, TR, and BK drafted, revised and approved the final version. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "The complexity of developing and applying increasingly sophisticated new medicinal products has led to the participation of many non-medically qualified scientists in multi-disciplinary non-clinical and clinical drug development teams world-wide. In this introductory paper to the \u201cIFAPP International Ethics Framework for Pharmaceutical Physicians and Medicines Development Scientists\u201d it is argued that all members of such multidisciplinary teams must share the scientific and ethical responsibilities, since they all influence, directly or indirectly, both the outcome of the various phases of the medicines development projects and the safety of the research subjects involved. 
The participating medical practitioner retains the overriding responsibility and the final decision to stop a trial if the well-being of the research subjects is seriously endangered. All the team members should follow the main ethical principles governing human research, the respect for autonomy, justice, beneficence and non-maleficence. Nevertheless, the weighing of these principles might be different under various conditions according to the specialty of the members. For hundreds of years, treatments based on experience formed a continuum with uncontrolled individual therapeutic trials performed by the treating physicians in the hope of helping their patients. The deep ethical concern of the practicing physicians is expressed with great clarity by William Withering who introduced digitalis into medical practice in the 18th century: \u201cAfter all, in spite of opinion, prejudice or error, Time will fix the real value upon this discovery, and determine whether I have imposed upon myself and others, or contributed to the benefit of science and mankind.\u201d , Reason, , and by Multidisciplinary teams gained broad acceptance in drug development when, beside the determination of clinical efficacy and safety, the correlations between the plasma level of the drugs and their pharmacodynamic effects also became the additional focus of clinical pharmacological investigations. Such cooperation is primarily characterized by the parallel work of the clinical and various non-medical experts who perform pharmacokinetic, biochemical, immunological and other investigations on human samples. The ethical problems of such cooperation are usually limited to the amount and frequency of the sampling of human materials needed for conducting the studies. The situations can be handled by finding a scientifically acceptable compromise which does not cause additional harm for the human subjects. A conceptually entirely different and much more sophisticated cooperation becomes necessary for investigating and applying advanced therapeutic products in patients.in vitro before re-transfusion for reaching the required number of modified T-cells for effective tumor kill. The production of the individually prepared targeted medicinal product is carried out under Good Manufacturing Practice (GMP) conditions by a multidisciplinary expert team specialized in immunology, cell and molecular biology cancer therapy. For this treatment the genes coding for the specific CAR-T receptor recognizing the cancer surface antigen(s) of the individual patients must be transferred into the harvested T-cells of the patients. The modified T-cells are then further incubated In such multidisciplinary teams the physician is only one member with a specific right to stop the intervention if the safety of the patient is endangered and the interruption of the therapy does not cause additional harm. It is not surprising that the FDA requires that the entire staff involved in this complex therapy should be specifically trained and certified (FDA News Release, The rapid progress of advanced therapies will further increase the need for including many different professionals into clinical teams. In addition, new scientific knowledge continuously generates unforeseen ethical problems. 
For successfully managing increasingly sophisticated ethical challenges IFAPP recommends and plans to contribute to the strengthening education of ethics at the under-graduate and post-graduate levels both for medical and other biomedical professionals.The aim of the linked IFAPP International Ethics Framework is to highlight the ethical issues relevant to the increasingly close cooperation of physicians and non-medically qualified experts in human drug development and application. All authors contributed both to the development of the ideas as well as to the writing of the paper and the linked The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Anterior tibial artery is a nonvital artery which is one of the three arteries of the leg. This artery has a short proximal l segment in the popliteal region and a long segment in the anterior compartment of the leg designated as distal segment. With consideration of the deep location of the proximal segment in the popliteal fossa, it is less susceptible to trauma and subsequent formation of an aneurysm. On the contrary, the superficial long distal segment is more susceptible to trauma with high chance of pseudoaneurysm formation at the site of unrecognized injury. In this article, a 38-year-old military man being manifested about a decade after a trivial missile fragment injury with progressive posterior tibial neuropathy is presented. A giant pseudoaneurysm arising from the proximal segment of the anterior tibial artery was confirmed with angiography and the exact size of this pathology was documented with contrasted computed tomographic scan. The aneurysmal sac removal was accomplished after ligation of the corresponding artery proximal and distal to the sac followed by tibial nerve neurolysis which result in full recovery. In careful review we found that neither pseudoaneurysm arising from the proximal tibial artery nor posterior tibial neuropathy due to the compressive effect of the aneurysmal sac of this segment has been reported previously. Our primary purpose for reporting this case is not to describe the rarity of pseudoaneurysm formation at proximal segment of this artery but rather to describe delayed-onset posterior tibial vascular compressive neuropathy due to such an aneurysm. Eventually due to the potential sequel of a pseudoaneurysm, it is important for the surgeons to have high index of suspicion to prevent a missed or delayed diagnosis. Arterial injuries are common event in military and civilian practice with iatrogenic one occurring in increasing frequency.If the initial injury, in particular in noncritical arteries, is left unrecognized or is regarded insignificant to seek medical advice, a traumatic aneurysm might develop.Traumatic aneurysms are true when the arterial wall injury occurs in intimal and medial layers of the artery with intact adventitia, and hence tend to be fusiform and their expansion is limited. A pseudoaneurysm or false aneurysm develops when all three layers of the artery are affected. In such circumstances, low-flow bleeding continues at the site of injury and this will gradually result in the reaction of the surrounding tissues with formation of a fibrous capsule around the hematoma.The leg arterial network is composed of three arteries: posterior tibial, peroneal, and anterior tibial. 
Anterior tibial artery is a nonvital artery of the leg with a short, but deeply located proximal and a long superficially located distal segment. Anterior tibial artery injury is one of the least among the arterial injuries of the extremities and account for approximately 2%, both in civilian and military experience.On the contrary, formation of an arterial pseudoaneurysm due to missile fragment in the lower extremities and in particular in the leg is a well-known pathology in military practice. Reviewing the military experience since World War II, we found that traumatic aneurysms of anterior tibial artery account from 0 to approximately 2% of all traumatic aneurysms of the extremities with antipersonnel mines standing at the top.Survey of civilian experience yielded the same result regarding the incidence.Recently, iatrogenic pseudoaneurysms of this artery due to orthopedic procedures are reported in increasing frequency.In careful review of the literature, we found that the injury of the proximal segment of the anterior tibial artery has been reported only in one rare occasion, where not a single case of pseudoaneurysm originating from this segment has been reported.In this article, we present the first example of a pseudoaneurysm of the proximal segment of the anterior tibial artery with compressive posterior tibial neuropathy which was diagnosed 12 years after being injured by a small missile fragment. This pathology was managed by exclusion of the pseudoaneurysm after ligation of the artery proximal and distal to the aneurysmal sac and subsequent neurolysis of the posterior tibial nerve.Furthermore, where the usual delay in the diagnosis of the posttraumatic pseudoaneurysms of the extremities may range from a few days to several weeks with a mean of 45 days, onset of the pathology after such a long delay following a penetrating injury makes this case more interesting.This 38-year-old military man was referred for the evaluation of radicular pain over the posterior aspect of his right leg and numbness at the planter aspect of his right foot for 3 weeks of duration. The patient had a history of being injured by several missile fragments 11 years before admission. With probable diagnosis of S1 root radiculopathy from L5\u2013S1 disc herniation, lumbar myelography in another institution was normal. With continuing discomfort, he was referred to our institution.His neurological exam revealed distal sciatica at the course of S1 root, with hypoesthesia of the right sole. Further examination and palpation revealed a painful and pulsatile mass in the popliteal region. A bruit was heard in auscultation. With the diagnosis of a pseudoaneurysm, selective angiography was done and this revealed a pseudoaneurysm arising from the proximal segment of the anterior tibial artery. The artery bowed because of the compressive effect of the pseudoaneurysms . With cS-shape posterolateral incision. The popliteal, tibiofibular trunk and the proximal part of anterior tibial artery were identified and exposed. The proximal anterior tibial artery was ligated. Then the aneurysmal sac with dense fibrous wall was incised. Large quantities of old and fresh bloods were evacuated. Then, the orifice of the artery distal to the aneurysm was identified through its faint retrograde bleeding and this was sutured and occluded. 
Subsequently, the aneurysmal sac was dissected from surrounding structures including the flattened discolored posterior tibial nerve and was totally removed, and the affected nerve was released separately from the adhesions using magnification. Postoperative course was uneventful. At 2-month follow-up, he was neurologically normal.The popliteal fossa was approached through a looseAnterior tibial artery injury is one of the three arteries of the leg which has a deep proximal segment and superficial distal segment. Vulnerability of this artery to trauma differs in these two segments. For better understanding of this difference, review of its anatomy is necessary. This artery is the first branch of the popliteal artery which after traversing a few centimeters in the popliteal fossa pierces the interosseous membrane and descends into the anterior compartment of the leg along the shin in close proximity to peroneal nerve. The popliteal artery, currently being called tibiofibular trunk, divides into two branches, posterior tibial and peroneal arteries. Expectedly, the superficial and unprotected distal segment of the anterior tibial artery allows access for wounds not strong enough to be able to traumatize the short proximal segment embedded deeply in a rather more protected location. Even among the arteries of the popliteal region, the distal anterior tibial artery is the least susceptible.With arterial injury left unrecognized, a pseudoaneurysm might develop. Especially if the initial trauma is regarded too insignificant to seek medical advice.With more susceptibility of the distal segment of this artery, the pseudoaneurysms of this part of anterior tibial artery are reported in increasing frequency. In the past only missile wounds, low-velocity bullet injuries, penetrating injuries such as stab wounds, and tibia-fibula closed fractures had been the major causes of the pseudoaneurysms formation.Before surveying the literature, with respect to the rarity of the distal segment arterial injury, a traumatic pseudoaneurysm developing at this site was expected to be an extremely rare event. Therefore, we were not surprised that not a single case of such pathology could be found in careful review.The timeframe for the development or detection of a pseudoaneurysm after a trauma is quite variable and ranges from a few days to a few months or even years. In our case, the depth of the affected artery and the size of the missile fragment may explain the rather slow process and the unusual delay in diagnosis.Pseudoaneurysm detection several years after injury has been reported only in six occasions including our case.Usually, a peripheral nerve accompanies the arteries of the extremities and with formation of an aneurysm, vascular compressive neuropathy might appear.However, compressive neuropathy due to a pseudoaneurysm of proximal segment is unlikely because of its far distance from the nearest nerve, which is posterior tibial nerve. 
Reasonably, if posterior tibial compressive neuropathy coinciding a traumatic aneurysm of the proximal anterior tibial artery occurs, it should be an exception and only occur when the corresponding pseudoaneurysm reach to a giant size.Historically, surgical strategy for pseudoaneurysm of less vital arteries had been ligation of the artery proximal and distal to the aneurysm followed by aneurysmal sac excision.With improvements in medical technology, treatment options for pseudoaneurysms of the peripheral arteries have evolved to less invasive methods.These new therapeutic techniques were started with ultrasonic-guided manual compression of the neck of the aneurysm. Later, echo-guided thrombin injection was introduced which has been tried successfully in a patient with anterior tibial artery pseudoaneurysms.Transcatheter coil embolization is another accepted treatment modality which offers many advantages including rapid safe occlusion and this had been successfully accomplished in the pseudoaneurysms of the anterior tibial artery as well.Recently, endovascular anatomic reconstruction of the arterial wall with covered stent has been developed.Obviously, none of these noninvasive options are recommended in the patients with aneurysmal compressive neuropathy where excision of the aneurysm and subsequent neurolysis of the affected nerve are the mainstay of surgery. In our patient, because of posterior tibial neuropathy, open surgical intervention was preferred. Furthermore, reconstruction of the artery was not indicated because of the integrity of tibioperoneal trunk and its two major branches.Duration and extent of compression have prognostic influence on associated neural recovery.In conclusion, we believe that this presentation can serve as a good reminder of another uncommon possible cause of posterior tibial neuropathy diagnosed with high index of suspicion. This report also emphasizes on the importance of taking into account the possibility of a pseudoaneurysm in differential diagnosis and a compressive neuropathy even years after trivial missile injuries."} +{"text": "This Special Issue of Frontiers in Biomolecular Sciences contains several articles in which different elements of environmental complexity are explored and discussed from theoretical and/or experimental points of view.Biochemical reactions have historically been studied in purified solutions that that are dilute in macromolecules and contained in large vessels. During the last several decades it has become increasingly clear that physiological media differ substantially from the solutions in which such studies are carried out. The fluid component of cells is distributed among numerous microenvironments that are highly heterogeneous in both composition and spatial organization. These factors have been shown to significantly influence the rates and equilibria governing many biochemical reactions. The results of recent efforts to characterize the behavior of individual proteins and other macromolecules within living cells are difficult to interpret unambiguously due to the complexity of the system. The study of biochemical kinetics and equilibria in well-characterized media designed to incorporate one or more elements of the complexity of living cells, termed cytomimetic media, has been advocated as a means for bridging the gap between traditional Nguemaha et al. 
have utilized atomically detailed computer simulations to estimate the concentration-dependent free energy of transferring each of eight dilute \u201ctest\u201d proteins from dilute solution to concentrated solutions of one of two concentrated \u201ccrowder\u201d proteins. The simulation allows decomposition of the total transfer free energy into contributions from \u201chard\u201d or steric repulsive interactions and \u201csoft\u201d or longer-ranged electrostatic or solvent-mediated interactions. Hoppe and Minton utilize an approximate analytical solution for the concentration-dependent transfer free energy of macromolecules interacting via a model square well-potential in order to calculate the effect of non-specific interactions on the concentration dependence of light scattering, sedimentation equilibrium, osmotic pressure, and liquid-liquid phase transitions of concentrated protein solutions. Their results show that this highly simplified interaction potential can recapitulate many experimentally observed phenomena as well as some key results of the atomistic simulations.One of the elements of complexity is the presence of locally high concentrations of multiple species of macromolecules, which may interact with each other both specifically and non-specifically. Such interactions affect both the state of association and the chemical potential, or reactivity, of individual macromolecular species. Nakashima et al. review theoretical models for the formation of incompatible phases and factors affecting their compositions, including salt concentration, pH, temperature, and the strength of intermolecular interactions. They further discuss factors regulating the partitioning of biomolecular reactants between the two phases, and the consequences of partitioning for reaction rates and equilibria in each of the phases.A second element of intracellular complexity is the presence of multiple microenvironments in which reactions may take place. Membraneless organelles, representing one class of these microenvironments, are thought to arise through liquid-liquid phase transitions. Gnutt et al. review the effects of induced differentiation of neuronal cells and induced proteostasis of HeLa cells on the thermal stabilities of a synthetic folding sensor and a labeled mutant of superoxide dismutase. They report that these changes result in destabilization of the labeled sensor species. Because the analysis of the data is based upon signals averaged over the whole cell, the authors recommend development of cytomimetic media for a more detailed analysis.A third element of complexity in the cellular environment is the dynamic nature of this environment, which is subject to changes with experimental conditions. Schavemaker et al. review a number of factors affecting the rate of diffusional transport of macromolecules, which include transient binding to other species as well as the necessity of tortuous trajectories to bypass obstacles. The authors present examples of diffusion-limited reactions in small prokaryotic cells and point out that the functioning of certain intracellular oscillatory dynamic systems depends crucially upon the maintenance of diffusional rates.A fourth element of complexity in the cellular interior is the presence of mobile and stationary obstacles to the free diffusion of macromolecular reactants. 
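For the diffusion-limited reactions mentioned above, a useful point of reference is the classical Smoluchowski estimate of the association rate constant for two freely diffusing spherical reactants; the expression and the illustrative numbers below are textbook values and assumed typical protein parameters, not figures taken from the reviewed articles.

```latex
k_{\mathrm{diff}} = 4\pi\,(D_A + D_B)\,(r_A + r_B)\,N_A
\approx 4\pi \times \left(2\times10^{-10}\,\mathrm{m^2\,s^{-1}}\right)
        \times \left(5\times10^{-9}\,\mathrm{m}\right)
        \times 6.02\times10^{23}\,\mathrm{mol^{-1}}
\approx 7.6\times10^{6}\ \mathrm{m^3\,mol^{-1}\,s^{-1}}
```

Converting units (1 m^3 = 10^3 L) gives roughly 7.6 x 10^9 M^-1 s^-1; rates of order 10^9-10^10 M^-1 s^-1 set the ceiling against which crowding- and obstacle-retarded association rates in the cytoplasm are usually judged.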
GR and AM wrote the manuscript. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Over the past 20 years, research on views on aging has substantiated their importance for successful development and sustained quality of life over the full length of the life span. However, a deep understanding of the origins of views on aging and the underlying processes of their lifespan development and manifestation is lacking. Since 2017, the scientific network \u201cImages of Aging\u201d, funded by the German Research Foundation, has engaged in empirical clarifications of both the distinctness and validity of the construct as well as in critically reviewing terminology and measures of views on aging. The network aims to help clarify the dynamic interplay of determinants and outcomes in the context of health and to disentangle intra- and intergenerational stereotypic perceptions. Both are understudied issues with highly practical implications for two of the largest demographic challenges: shaping the coexistence of generations and providing an adequate supply of health care. Integrating both pertinent theoretical approaches and empirical findings, the network regards views on aging from a lifespan perspective. Recently, it suggested three core principles of views on aging concerning lifelong bio-psycho-social development, their multidimensional nature, and their impact across life. These considerations provide a background for an integrative discussion of the symposium\u2019s contributions."} +{"text": "It is my pleasure to announce that two distinguished international scientists have joined the editorship of the Freshwater Systems domain of TheScientificWorldJOURNAL \u2014 Professor Brij Gopal of Jawaharlal Nehru University (India) and Dr. Manuel Gra\u00e7a of the University of Coimbra. Professor Gopal is the Secretary General of the National Institute of Ecology, Editor of the International Journal of Ecology & Environmental Science, and Chairman of the SIL Committee on Limnology in Developing Countries. His research interests include the ecology, biogeochemistry and biodiversity of wetland ecosystems, the management of wetlands as an integral part of the watershed, and wetland water policy\u2013related issues. Dr. Gra\u00e7a is a stream ecologist whose research interests include the two general areas of organic matter decomposition and biological monitoring. His specific areas of research focus include quantification of organic matter and other chemical changes in decomposing leaves, the ecology of aquatic hyphomycetes, and the ecology of animals feeding on detritus. His research dealing with biological monitoring is carried out in close cooperation with the paper and mining industries, facilitating the practical application of his work."} +{"text": "The assay of parathyroid hormone remains problematic as a result of the presence in the circulation of a variety of parathyroid hormone (PTH) peptides derived from secretion and from peripheral metabolism. The detection of these PTH fragments to varying degrees leads to widely differing results in the various assays used, particularly in the setting of chronic kidney disease, where PTH fragments accumulate as glomerular filtration rate (GFR) falls. 
The differing results not only lead to problems in comparing values from various laboratories but also limit efforts to develop useful clinical practice guidelines. At the same time, research into the precise identification of the PTH fragments which contribute to the assay problems has uncovered a relatively new area of parathyroid research that has pointed to potential biologic activity of PTH peptides previously thought to be biologically inactive and which may act on a novel PTH receptor. These issues have brought new focus to the difficulties in standardization of PTH assays and have provoked efforts to provide standards to help in the characterization of PTH assays and to facilitate the development of clinical practice guidelines. Assay of parathyroid hormone is extremely important in clinical medicine because of the major role of parathyroid hormone in the regulation of mineral metabolism and skeletal physiology. Since the introduction of radioimmunoassay for parathyroid hormone (PTH) in 1963 by Berson et al., the assay of parathyroid hormone has been problematic and remains so to the present day . Thus, iA major breakthrough in PTH assay occurred with the application of immunometric assay technology to the measurement of PTH , 9. WithFurther refinements in assay technique, by utilizing detection antibodies that had specificity for the first four amino-acids of PTH, allowed the measurement of true PTH (1\u201384) and facilitated detailed investigations of parathyroid hormone assay and parathyroid hormone physiology . AlthougThese observations have expanded our knowledge of parathyroid physiology, but, at the same time, they have focused attention on some significant clinical problems. One such problem is that even with current immunometric assay techniques, there is considerable heterogeneity in results obtained using different assays from different manufacturers . The widOn the other hand, these apparent problems in PTH assay also lead to a number of opportunities that may help the field move forward. First, the recognition of potential biologic activity for N-terminally truncated PTH fragments provides a stimulus to try and uncover and define the physiological importance of the biological role for such PTH fragments in humans. Some evidence in vitro and in animal experiments has been developed to indicate the likely presence of a specific receptor for such PTH fragments (C-PTH-R), and preliminary data have suggested that such receptors might exist particularly in cells of the osteocyte lineage, thereby opening a new field of parathyroid biology \u201319. ThesAn additional problem raised by these observations is the difficulty in trying to standardize PTH assay results from different laboratories or reagent suppliers using the various techniques and to achieve precise characterization of antisera used. This issue is not unique to the assay of PTH but is also relevant to other peptide hormone assays, such as the assay of growth hormone . The proIn summary, problems with the assay of parathyroid hormone continue but have uncovered new areas of parathyroid hormone physiology and opened a new area of investigation into potential biologic activity of PTH fragments. 
As these assay problems are solved, and efforts at standardization of PTH assays continue, it is likely that we will be able to understand the total spectrum of biologic activity of PTH and PTH fragments and incorporate the information into clinical practice to improve the assessment and management of abnormalities of mineral metabolism in a variety of clinical circumstances."} +{"text": "Posteromedial elbow instability has been described as an injury to the lateral ulnar collateral ligament (LUCL) and an anteromedial coronoid fracture, typically with absence of a radial head injury and mild incongruity of the elbow that can lead to a rapid onset of degenerative joint changes When assessing an elbow injury, we naturally interpret the available data to try to understand the mechanism of injury and it follows to assess the integrity of the typically associated injuries. However, the observation of a posterolateral elbow dislocation with absence of injury to the radial head may be interpreted as a simple elbow dislocation. The finding of a coronoid tip fracture may indicate a complex pattern of instability. The failure to adequately recognise the severity of coronoid fractures may lead to inadequate treatment.We present a case that presented in the emergency room (ER) with a radiographic posterolateral elbow dislocation and absence of radial head injury with an unrecognised fracture of the anteromedial coronoid, typically associated with a posteromedial pattern of instability treated like a posterolateral simple elbow dislocation and a similar case with adequate recognition of the pattern and severity of the coronoid fracture.Case\u00a01A 60-year-old bricklayer presented in the ER with a radiographic posterolateral dislocation after a fall on the outstretched hand from his own height. After an initial x-ray exam, his elbow was reduced and placed in a splint at 90\u00b0 of flexion after testing for stability in flexion and extension .. He wasCase\u00a02A 30-year-old man presented in the ER referring a fall on his outstretched hand and exhibiting pain and clinical signs of elbow dislocation. On his x-ray examination, a posterolateral elbow dislocation with a readily recognisable fracture of the anteromedial coronoid is apparent . The patAt 2-month follow-up, the patient showed the decrease of the ulnohumeral joint with full range of motion and slight pain when loading the elbow that madThe observation of a posterolateral elbow dislocation on a roentgenographic exam may lead the treating physician to consider the existence of a posterolateral mechanism of injury with injury to the lateral ligamentous complex of the elbow and may direct the attention for associated injuries including radial head fractures and fractures of the tip of the coronoid O'Driscoll described the pathomechanics of posteromedial and posterolateral dislocation and it is believed that the injuries to the different ligamentous structures are sequential The detection of associated injuries is critical to identify the type of elbow instability and to apply the correct treatment. The majority of posterolateral elbow dislocations presenting in the ER are due to a posterolateral pattern of injury, but these cases show that some of these may be produced following a posteromedial pattern of instability. 
Obviously, the detection of the anteromedial fragment is obvious in Simple elbow dislocations typically obtain good outcomes after reduction and a short period of immobilisation Evidence of a posterolateral elbow dislocation on a presenting x-ray exam in the ER does not equate to the assumption of a posterolateral elbow mechanism. In posterolateral elbow dislocations, the search for associated injuries should include the anteromedial coronoid fractures, specifically in cases with absence of fracture of the radial head or with an associated tip fracture of the coronoid and appropriate management should be instituted to avoid early degenerative changes of the joint.Posterolateral elbow dislocations are the most frequent type of elbow dislocation but may follow a posteromedial instability pattern. The failure to detect and treat correctly an associated anteromedial coronoid fracture may compromise the outcome. A high index of suspicion for the presence of an associated anteromedial coronoid fracture may be established when a posterolateral elbow dislocation presents with absence of a radial head fracture and appropriate imaging techniques utilised for its detection."} +{"text": "Hirschsprung's disease is characterized by the absence of ganglia in the distal colon, resulting in a functional obstruction. It is managed by excision of the aganglionic segment and anastomosis of the ganglionated bowel just above the dentate line. The level of aganglionosis is determined by performing multiple seromuscular biopsies and/or full thickness biopsy on the antimesenteric border of the bowel to determine the level of pullthrough. The transition zone is described as being irregular, and hence a doughnut biopsy is recommended so that the complete circumference can be assessed. Herein, we described a child in whom there was a selective absence of ganglion cells in 30% of the circumference of the bowel along the mesenteric border for most of the transverse colon. This case defies the known concept of neural migration in an intramural and transmesenteric fashion and emphasizes the importance of a doughnut biopsy of the pulled-down segment. Hirschsprung's disease (HSCR) is a disorder of migration of neural crest cells during embryonic development characterized by the absence of ganglia in the distal colon, resulting in a functional obstruction. The principles of management involve excision of the aganglionic segment and anastomosis of the ganglionated bowel above the dentate line. The level of aganglionosis is determined by performing multiple seromuscular biopsies on the antimesenteric border of the bowel. This practice is substantiated by Coventry's theory that vagal neural crest cells colonize the gut by intra- and extramural migration.A boy baby born at 37 weeks of gestation was transferred for specialist care for abdominal distension and delayed passage of meconium at 36\u2009hours of life with a significant maternal family history of HSCR. Plain abdominal X-ray showed free air . He undHSCR is thought to be the result of arrested enteric neural crest cell (ENCC) migration, which occurs rostrocaudally. The level of arrest determines the length of the aganglionic segment. 
Rare variations in the pattern of aganglionosis are well described and are categorized into \u201cskip\u201d lesions, in which there is a segment of ganglionated bowel surrounded proximally and distally by aganglionosis, occurring most commonly with total colonic aganglionosis.ENCC precursors originate in the vagal region (somites 1\u20137) and to a lesser extent from the thoracic and sacral regions of the neural tube.Failure of transmesenteric migration: in a mouse model, Druckenbrod and Epstein studied the transmesenteric migration of ENCCs from the midgut to the hindgut during the time these organs are transiently juxtaposed.Delayed migration: ENCC colonization has been shown to be a timed process.Genetic factors provide the framework for patterning and morphogenesis. However, it is now known that maternal and placental factors such as hypoxia, inflammation, drug intake, and nutrition can affect the survival of ENCC.The pattern of aganglionosis in the transition zone is unusual, and a proper doughnut biopsy will avoid a transition zone pullthrough."} +{"text": "Raman spectroscopy is a novel tool used in the on-line monitoring and control of bioprocesses, offering both quantitative and qualitative determination of key process variables through spectroscopic analysis. However, the wide-spread application of Raman spectroscopy analysers to industrial fermentation processes has been hindered by problems related to the high background fluorescence signal associated with the analysis of biological samples. To address this issue, we investigated the influence of fluorescence on the spectra collected from two Raman spectroscopic devices with different wavelengths and detectors in the analysis of the critical process parameters (CPPs) and critical quality attributes (CQAs) of a fungal fermentation process. The spectra collected using a Raman analyser with the shorter wavelength (903 nm) and a charged coupled device detector (CCD) was corrupted by high fluorescence and was therefore unusable in the prediction of these CPPs and CQAs. In contrast, the spectra collected using a Raman analyser with the longer wavelength (993 nm) and an indium gallium arsenide (InGaAs) detector was only moderately affected by fluorescence and enabled the generation of accurate estimates of the fermentation\u2019s critical variables. This novel work is the first direct comparison of two different Raman spectroscopy probes on the same process highlighting the significant detrimental effect caused by high fluorescence on spectra recorded throughout fermentation runs. Furthermore, this paper demonstrates the importance of correctly selecting both the incident wavelength and detector material type of the Raman spectroscopy devices to ensure corrupting fluorescence is minimised during bioprocess monitoring applications. Saccharomyces cerevisiae fermentations [Escherichia coli fermentation [Raman spectroscopy is a non-invasive, non-destructive spectroscopic technique that exploits molecular vibrations for the qualitative and quantitative analysis of molecules . It has ntations ,10 and nentation . More adentation , in addientation in the rIt is clear that Raman spectroscopy will play a pivotal role in the real-time monitoring and control of bioprocesses. 
However, a major hurdle hindering the wide-spread adoption of these process analysers relates to the high fluorescence observed during the analysis of biological molecules which often overlay the important Raman scattering bonds, diminishing the ability to estimate the material of interest ,15. TherTo address this issue and advance the use of this technology in fermentation applications, two Raman spectroscopic analysers were implemented on a highly fluorescence fungal fermentation process. One Raman analyser had an incident wavelength of 903 nm and used a silicon-based charged couple device (CCD) detector and the second device had a 993 nm wavelength with an indium gallium arsenide (InGaAs) array detector. Both analysers were implemented on a similar small-scale fungal fermentation process with the objective of estimating the critical process parameters (CPPs) and critical quality attributes (CQAs) of the fermentation. These have been previously identified for this process as the glucose and active pharmaceutical ingredient (API) concentration, respectively. The spectral data collected using the Raman device with the shorter wavelength and CCD detector was found to be significantly corrupted by a high background fluorescence signal in contrast to the 993 nm Raman device with the InGaAs detector which was only moderately affected by fluorescence. The spectra collected from both analysers was correlated with the off-line concentrations of both variables using partial least squares (PLS) modelling. Only the regression models generated using the spectra recorded on the 993 nm device enabled accurate predictions of both the glucose and API concentration. To the best of the authors\u2019 knowledge, this is the first direct comparison of two Raman spectroscopy devices with different incident wavelengths and detector material to monitor the same fermentation process. This work highlights the need to better understand the fundamental principles of fluorescence on recorded Raman spectra and demonstrates the importance of correct probe selection in future applications of this novel technology to the biotechnology sector.A proprietary fungus supplied by Pfizer was used to inoculate both fermentations that was propagated from the same thawed culture stock supplemented with a proprietary nutrient feed. The fungus produces a high concentration of a commercially available antibiotic, referenced as the active pharmaceutical ingredient (API) concentration.Two fed-batch fungal fermentations (referred to as Fermentation A and Fermentation B) were performed in a 5 L bioreactor with a working volume of approximately 3.6 L. Each bioreactor was set to have identical operating conditions, both equipped with thermometers, dissolved oxygen and pH probes. The temperatures of the reactors were kept at 28 In Fermentation A, a 993 nm Raman spectroscopy device with an indium gallium arsenide (InGaAs) detector array with a spectral range of 200\u20132400 cmYYThe spectra collected by each device was combined with the off-line glucose and API measurements and was used to generate two PLS models. In Fermentation A, ten off-line glucose samples were recorded and 18 off-line samples for Fermentation B. For each fermentation eight off-line samples of the API concentration were recorded. The off-line glucose samples were interpolated using a cubic spline approximation and a 30-min sampling rate resulting in 522 and 475 sample points for Fermentation A and B, respectively. 
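A minimal sketch of this alignment-and-regression step is given below, using synthetic numbers: sparse off-line glucose values are interpolated with a cubic spline onto a 30-min grid, and a PLS model is then fitted to stand-in spectra. scikit-learn's PLSRegression is used here as a generic substitute for the PLS implementation described in the paper, and all sample times, concentrations and spectra are invented for illustration.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from sklearn.cross_decomposition import PLSRegression

# Hypothetical off-line glucose samples: time (h) and concentration (g/L).
t_offline = np.array([0, 12, 24, 36, 48, 72, 96, 120, 144], dtype=float)
glucose = np.array([30, 27, 22, 18, 14, 10, 6, 4, 3], dtype=float)

# Interpolate onto a 30-min grid so every Raman spectrum has a reference value.
t_grid = np.arange(t_offline[0], t_offline[-1] + 0.5, 0.5)
y = CubicSpline(t_offline, glucose)(t_grid)

# Stand-in spectra: in practice X holds one preprocessed spectrum per time point.
rng = np.random.default_rng(0)
X = np.outer(y, rng.random(200)) + rng.normal(scale=0.1, size=(len(y), 200))

# Fit PLS on a calibration subset and report RMSEC / RMSEP.
n_cal = int(0.7 * len(y))
pls = PLSRegression(n_components=4).fit(X[:n_cal], y[:n_cal])
rmsec = np.sqrt(np.mean((pls.predict(X[:n_cal]).ravel() - y[:n_cal]) ** 2))
rmsep = np.sqrt(np.mean((pls.predict(X[n_cal:]).ravel() - y[n_cal:]) ** 2))
print(f"RMSEC = {rmsec:.3f} g/L, RMSEP = {rmsep:.3f} g/L")
```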
The off-line API concentrations were interpolated in a similar fashion, resulting in approximately 360 sample points. The 30-min sampling rate was chosen to match the sampling frequency of each Raman device, which was set up to produce a single spectrum every 30 min through adjustment of the number of averages and integration times. The preprocessing of the spectra utilised the de-spiking algorithm outlined in Mori et al. The wavelengths associated with glucose were identified through the analysis of aqueous calibration samples spiked with high concentrations of glucose. The PLS models were generated using the partial least squares algorithm as outlined in detail by Wold et al. The PLS model works iteratively for each latent variable: a vector of inner-relationships relates the scores of the X block to the Y block and, upon convergence, a matrix of regression coefficients B is generated from the X block taking R latent variables. The cumulative sum of the regression coefficients predicts the response variable. Model fit to the calibration data was assessed with the root mean square error of calibration, \mathrm{RMSEC}=\sqrt{\tfrac{1}{n}\sum_{i=1}^{n}(y_i-\hat{y}_i)^2}, and, for the validation data set, the analogous root mean square error of prediction (RMSEP) was used. The fundamental principles of Raman spectroscopy are as follows. Rayleigh scattering occurs when the light interacts with the molecules of the sample and the net exchange of energy is zero, i.e., the scattered light retains the energy of the incident light. Raman scattering, in contrast, involves a net exchange of energy between the photon and the molecule, and this net energy deviation results in characteristic peaks in the resultant spectra. The positions of these peaks are defined by the molecular structure of the sample and its chemical environment, allowing Raman spectroscopy to be used for chemical identification and classification. Furthermore, the peak heights (or areas) of the spectrum are assumed linearly proportional to the molecular concentration and consequently can be used to monitor the CPPs or CQAs of bioprocesses, provided the Raman analyser can detect the material of interest. The intensity of Raman scattering is inherently far weaker than that of Rayleigh scattering; it is therefore necessary to filter out the Rayleigh scattered light in order to detect the weak Raman scattering effect. In competition with the weak Raman scattering is fluorescence, a non-scattering process that occurs when the sample absorbs some energy from the light source and electrons are temporarily excited up to a higher quantum state (E\u2019). There are multiple higher quantum states that the excited electrons can attain, and this depends on the energy and wavelength of the external light source. The electrons in their excited state are unstable and, as they return to their respective ground state, they release light with energy equal to the difference between the excited and the ground state. Molecules that are susceptible to this fluorescence process when excited by visible, ultra-violet or near-infrared light are known as fluorophores or fluorescent molecules; these are typically polyaromatic hydrocarbons or heterocyclic molecules. Such fluorescence has been reported, for example, in Raman studies of the yeast Phaffia rhodozyma, in which some of the important Raman peaks could nevertheless still be observed. The spectra collected from the two Raman spectroscopic devices were analysed to estimate the glucose and API concentration, previously identified as the primary CPP and CQA of this small-scale fungal fermentation. The spectra collected on the 993 nm Raman device were moderately influenced by fluorescence, whereas the spectra recorded by the 903 nm Raman device were significantly affected by fluorescence. 
The spectral data of each device and the corresponding off-line glucose measurements were used to generate two separate PLS models as previously discussed. The number of components of each model was chosen based on the RMSEC and RMSEP of both PLS models as defined above. The PLS model predictions of the glucose concentration in Fermentation B using the spectra collected with the 903 nm Raman device were very poor when compared to the off-line values. The two main factors contributing to the large difference observed in the intensity of fluorescence affecting the spectra collected by the two Raman analysers are the incident wavelength of each device and the detector material used. The choice of the excitation wavelength can significantly impact the level of observed fluorescence. In Raman spectroscopy the intensity of the scattered light is proportional to the fourth power of the frequency of the light source and therefore inversely proportional to the fourth power of the excitation wavelength. Therefore the longer wavelength of the 993 nm Raman device results in a decrease in the energy of the light source compared to the 903 nm device, hence reducing the probability of fluorescence by lowering the energy available to excite the electrons of the sample up to their quantum states. Furthermore, the 903 nm Raman device uses a CCD detector, which was highlighted in previous work to have low quantum efficiency at wavelengths greater than 800 nm. The on-line prediction of the API concentration of this fermentation was also investigated. The PLS predictions of the API concentration obtained from the spectra collected with the 993 nm Raman device compared well with the off-line values. Fluorescence is a major problem experienced by many scientists and engineers implementing Raman spectroscopy to monitor and control biopharmaceutical processes. This paper is the first direct comparison of two different Raman spectroscopy devices on the same fermentation, highlighting the significant influence of incident wavelength and detector material on the fluorescence levels detected by each device. The spectra recorded by the Raman spectroscopy device with the 903 nm incident wavelength and a CCD detector were corrupted by high fluorescence, rendering the recorded spectra unusable for regression analysis. However, the spectra recorded by the Raman spectroscopy device with the 993 nm incident wavelength and an indium gallium arsenide (InGaAs) detector showed only moderate levels of fluorescence. The spectra recorded by this device enabled accurate estimations of both glucose and API concentrations through the generation of a PLS regression model. Therefore this work demonstrates that although a lower incident wavelength increases the Raman scattering effect, it can also increase the level of fluorescence, rendering the recorded spectra unusable. 
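The inverse fourth-power relation invoked above can be made concrete for the two excitation wavelengths used in this work; the comparison below is a back-of-the-envelope ratio that ignores detector response and fluorescence.

```latex
I_{\mathrm{Raman}} \propto \nu^{4} \propto \frac{1}{\lambda^{4}},
\qquad
\frac{I_{903\,\mathrm{nm}}}{I_{993\,\mathrm{nm}}}
\approx \left(\frac{993}{903}\right)^{4} \approx 1.46
```

All else being equal, the 903 nm excitation would therefore be expected to yield roughly 46% more Raman scattering than the 993 nm excitation, a gain that is clearly outweighed here by the accompanying increase in fluorescence and by the poor quantum efficiency of the CCD detector in this wavelength region.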
However, caution is advised in implementing this novel tool particularly in the choice of the appropriate incident wavelength of the analyser and the sensor detector material to ensure problems relating to high fluorescence do not impact on the quality of the recorded spectra."} +{"text": "Lumbricus terrestris. Body burden analysis was used to analyze the intrinsic toxicity of the six hydrocarbon mixtures. The major fractions of the whole mixtures, the saturate, and aromatic fractions had different intrinsic toxicities; the aromatics were more toxic than the saturates. The toxicity of the saturate and aromatic fractions also differed between the mixtures. The flare saturate mixture was more toxic than the crude saturate mixture, while the crude aromatic mixture was more toxic than the flare aromatic mixture. The most dramatic difference in toxicity of the two sources was between the flare whole and crude whole mixtures. The crude whole mixture was very toxic; the toxicity of this mixture reflected the toxicity of the crude aromatic fraction. However, the flare whole mixture was not toxic, due to a lack of partitioning from the whole mixture into the lipid membrane of the exposed worms. This lack of partitioning appears to be related to the relatively high concentrations of asphaltenes and polar compounds in the flare pit whole mixture.The toxicity of whole, saturate, and aromatic hydrocarbon mixtures from flare pit and crude oil sources were evaluated using"} +{"text": "Alec Coppen, a Honorary Fellow of the Royal College of Psychiatrists has acted as director of the Medical Research Council Unit for Neuropsychiatric Research in Epsom England until 1988. Among his international colleagues he was always highly respected as a pioneering psychiatrist and psychopharmacologist credited with the discovery of the role of serotonin in the pathogenesis of depression. His seminal paper in the Lancet 1963 showed that tryptophan markedly potentiated the antidepressant action of MAOIs. He pursued the serotonin theory conducting numerous studies including studies of serotonin metabolites in post-mortem brains of depressed suicides that culminated in his 1967 paper in the British Journal of Psychiatry on the biochemistry of affective disorders that became a citation classic and made him a much honoured member of the international group of psychopharmacologists and biological psychiatrists, the CINP . He acted as its president for the 17th annual CINP meeting in Kyoto in the year 1990. It is fair to state that the development of SSRI antidepressants have one origin in his work. He has published more than 500 papers and seven books on various studies of the biology of affective disorders. His other major contribution to the psychopharmacology of mood disorders was the establishment of the Lithium Clinic at West Park Hospital (in a chapter for a volume on the history of CINP he wrote: \u201cOne lesson we learned is that simply prescribing treatment is not effective\u201d). He conducted the first placebo controlled trial of the effectiveness of lithium salts in the prophylaxis of recurrent mood disorders culminating in the discovery of lithium\u2019s unique anti-suicidal effects concurrently with two German research groups in Berlin and Dresden. He conducted a trial of the optimum dosage of lithium establishing its minimum effective dose for prophylaxis. He studied its adverse effects on renal and thyroid functions to establish standards for safe medical practice. 
His last paper was a review of the effectiveness of lithium in the long-term treatment of unipolar (recurrent) depression in 2017. Alec often talked about the many letters he received from patients and their families thanking him for his good care years after he retired.Alec had been trained in psychiatry at the Maudsley Hospital where he started his research work in biological psychiatry and attained the MD in 1958. He established the MRC Neuropsychiatry Unit at West Park Hospital in 1959 which he directed until he retired in 1988.Alec Coppen received many honours such as the prestigious Anna Monica Prize in Biological Psychiatry Research in 1969. He was founding member of the British Association for Psychopharmacology (BAP) and was elected President 1976\u20131978. He was President of the Collegium Internationale Neuro-Psychopharmacologicum (CINP) 1988\u20131990 and was awarded the Honorary Fellowship of the Royal College of Psychiatrists in 1995. The BAP awarded him the first Gold Medal for lifetime achievement in psychopharmacology in 1998 and he was given the Pioneer of Psychopharmacology Award by the CINP in year 2000.Alec was well read, highly cultured and enjoyed opera and theatre. In the retirement he achieved his aim of seeing all 37 Shakespeare plays performed.He leaves his son, Michael and his grandchildren Daniel and Victoria."} +{"text": "We evaluated the impacts of entrainment and impingement at the Salem Generating Station on fish populations and communities in the Delaware Estuary. In the absence of an agreed-upon regulatory definition of \u201cadverse environmental impact\u201d (AEI), we developed three independent benchmarks of AEI based on observed or predicted changes that could threaten the sustainability of a population or the integrity of a community.Our benchmarks of AEI included: (1) disruption of the balanced indigenous community of fish in the vicinity of Salem ; (2) a continued downward trend in the abundance of one or more susceptible fish species ; and (3) occurrence of entrainment/impingement mortality sufficient, in combination with fishing mortality, to jeopardize the future sustainability of one or more populations .The BIC analysis utilized nearly 30 years of species presence/absence data collected in the immediate vicinity of Salem. The Trends analysis examined three independent data sets that document trends in the abundance of juvenile fish throughout the estuary over the past 20 years. The Stock Jeopardy analysis used two different assessment models to quantify potential long-term impacts of entrainment and impingement on susceptible fish populations. For one of these models, the compensatory capacities of the modeled species were quantified through meta-analysis of spawner-recruit data available for several hundred fish stocks.All three analyses indicated that the fish populations and communities of the Delaware Estuary are healthy and show no evidence of an adverse impact due to Salem. Although the specific models and analyses used at Salem are not applicable to every facility, we believe that a weight of evidence approach that evaluates multiple benchmarks of AEI using both retrospective and predictive methods is the best approach for assessing entrainment and impingement impacts at existing facilities."} +{"text": "Suspended particulate matter of samples of river water and waste water treatment plants was tested for genotoxicity and mutagenicity using the standardized umu assay and two versions of the Ames microsuspension assay. 
The study tries to determine the entire DNA-damaging potential of the water samples and the distribution of DNA-damaging substances among the liquid phase and solid phase. Responsiveness and sensitivity of the bioassays are compared."} +{"text": "This article describes radiolocation devices dedicated to the detection and tracking of small high-speed ballistic objects and multifunctional radars. This functionality is implemented by applying space search technology and adaptive algorithms for detection and tracking of air objects in parallel with classic search and tracking of objects in controlled airspace. This article presents examples of the construction of both types of devices produced by foreign companies and Polish industry. The following sections present methods for testing radars with the function of tracking small high-speed ballistic objects along with examples of results of observations of combat ammunition. We presented the issue of radars capable of detecting and tracking high-speed ballistic objects, as well as issues related to the specifics of research and testing of such devices at the conference Metro Aerospace 2019, in Turin. The presented issues were met with great interest from the conference participants and became a source of information for many people about radar devices manufactured by the Polish industry.This article expands the subject of the publication \"Radars with the function of detecting and tracking artillery shells\u2014selected methods of field testing\", published in Metro AeroSpace 2018 proceedings [Despite continuous technological development, the role of classic artillery in a contemporary battlefield is not diminishing. If we observe this in relation to the conflicts conducted in recent years, it is easy to notice that most activities of infantry units take place under covering fire from lighter or heavier artillery. Both cases, which aim to destroy a detected enemy and protect our positions, still require using sufficient fire power to quickly and effectively incapacitate the enemy. The key to effective use of artillery is to quickly and precisely determine the coordinates of the position of targets and to verify the accuracy and effectiveness of the conducted firing .In this area, modern technology offers the following two most popular solutions: Unmanned aerial vehicles (UAV) equipped with optoelectronic observation sensors and specialized radiolocation devices intended for detecting, tracking, and calculating the trajectory parameters of high-speed ballistic objects . Both meWe present research methods and example results related to Polish radars developed for the needs of the Polish Armed Forces. From a scientific point of view, it would be desirable to present many technical parameters that characterize individual sensors and compare the effects of different sensors depending on the technical solutions used in them, but this is impossible because the detailed technical parameters and test results of these devices are secret.The latter part of this paper, aside from a short characterization of radiolocation devices intended for detecting and tracking high-speed ballistic objects, includes a presentation of testing methods used at the Air Force Institute of Technology (AFIT) for the purpose of field testing to check the most important parameters of such devices. 
The most frequently used verifications are as follows: minimum and maximum distances, minimum and maximum height, maximum elevation angle of detecting for a given type of detected object, and the accuracies of determining the points of origin (POO) and points of impact (POI) . The issue of detecting and tracking high-speed ballistic objects has been a significant challenge for radiolocation. Because these objects achieve high velocities and move at relatively small heights above the horizon, the device that ensures their effective detection and tracking must quickly scan the horizon in order to immediately detect the flight object and track its flight path, i.e., observe space in a broad range of elevation angles with the radar beam . These capabilities have appeared along with the development of antennas with an electronically controlled pencil beam. Such antennas allow for very fast beam movement both in the azimuth and elevation planes. This searches the entire observation sector with an information refreshing time of less than 1 s. The algorithms for searching ballistic objects are usually optimized to search a narrow elevation sector right above the horizon. This allows for the optimal use of the radar\u2019s time budget. Application of this method is possible because after detecting the echo, the radar automatically switches to the detection mode using additional lightings with a pencil beam, which obtains information about the high-speed ballistic object\u2019s flight trajectory, optimized in terms of the ballistic calculations ,6. On thOne of the most popular examples of a radar intended for tracking high-speed ballistic objects and detecting launchers positions is ARTHUR developeThe radar is a device characterized by high mobility, able to quickly and effectively move in a combat operations area. It operates in the C band 4 to 8 GHz) and possesses a flat passive antenna with electronic beam scanning in both planes. In elevation, the antenna beam is controlled by changing the probing signal\u2019s frequency, whereas in the azimuth the control takes place using phase shifters [ GHz and The width of the azimuth sector in which the electronic beam scanning is conducted amounts to 90\u00b0. The radar\u2019s antenna setting towards the selected direction is executed by its mechanical rotation. The radar detections are analyzed by an advanced software that determines the POO and POI coordinates with consideration of the terrain\u2019s digital model and computation of commands for the co-operating military units. In order to execute the aforementioned functions, the radar must precisely determine its position in the terrain. This is done with the use of an advanced navigation system consisting of an inertial module and a GPS receiver .Other examples of radars intended for detecting high-speed ballistic objects and launcher positions include: AN/TPQ-37, AN/TPQ-47, and AN/TPQ-50 developed by Thales Raytheon Systems, COBRA (COunter Battery Radar) developeRadars produced by Polish companies also detect and track small high-speed ballistic objects and determine the POO and POI. An example of a device especially dedicated for such tasks is the Radiolocation Artillery Reconnaissance Unit LIWIEC Figure . 
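As a brief numerical aside on the electronically scanned antennas described above, the sketch below computes the per-element phase-shifter settings needed to steer a uniform linear array to a chosen azimuth angle; the element count, spacing and operating frequency are illustrative assumptions rather than parameters of any of the radars discussed here.

```python
# Minimal sketch of azimuth beam steering with per-element phase shifters for a
# uniform linear array; all values are assumed for illustration only.
import numpy as np

c = 3.0e8                      # speed of light, m/s
f = 5.6e9                      # an assumed C-band operating frequency, Hz
lam = c / f                    # wavelength, m
d = lam / 2                    # element spacing (half wavelength)
n_elements = 16
theta = np.deg2rad(30.0)       # desired steering angle measured from broadside

# Phase applied to element n so that all contributions add in phase at angle theta
n = np.arange(n_elements)
phase = (-2 * np.pi * n * d * np.sin(theta) / lam) % (2 * np.pi)
print(np.round(np.rad2deg(phase), 1))   # phase-shifter settings in degrees
```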
It is eautomatic detection and tracking of high-speed ballistic objects;determination of the coordinates of launcher positions (single and grouped);automatic classification of flight object type and launcher position and type;determination of the POI coordinates;transmission of information to automated command and control systems.The LIWIEC radar can be used to protect important objects and support artillery operation in the following scope:The main advantage of this radar is that its operator can adapt space scanning algorithms to current needs, and, in addition, the automatic tracking system of detected objects can force more frequent scanning of new objects, which greatly accelerates the calculation of the ballistic trajectory of the tracked object. On the basis of the calculated trajectory, the radar identifies the type of detected object (including its launcher type) and accurately determines the POO and POI before the detected object reaches its target.Aside from small high-speed ballistic objects, the LIWIEC radar detects and tracks aircrafts, helicopters, unmanned aerial vehicles, and land-based mechanical vehicles .A different perspective on the problem of detecting small high-speed ballistic objects and especially mortars is applied in the SO\u0141A and BYSTRA radars. The radars are intended for air defense forces to control airspace in the area of land troops operations. Both devices search the airspace by mechanically rotating the antenna in the azimuth plane. The application of relatively high antenna rotation speeds (30 and 60 rpm) guarantee the ability to detect and track small high-speed ballistic objects with sufficient accuracy for the ballistic calculation algorithm to estimate the complete object trajectory based on the sample of over a dozen detections, and therefore determines the POO and POI with satisfactory accuracies. Especially good results of such a method for detection and tracking are achieved in relation to mortar .The SO\u0141A redeployment-capable radiolocation station is a shoThe BYSTRA is a mulElectronic control of the position of the antenna beam in the azimuth plane allows the system to automatically track air objects and generate re-scanning requests for newly detected objects in the same antenna rotation. This is a very important property of radar, which very quickly eliminates false ballistic trajectories based on the detection of passive or active interference.The SO\u0141A and BYSTRA radars are perfectly matched for covering and protecting important objects and areas because, during normal operation of controlling the airspace around the object, they additionally ensure the execution of the helicopter detection function, including hovering helicopters, as well as the detection and tracking of small high-speed ballistic objects including the POO and POI coordinates\u2019 estimation. This functionality provides the personnel of the protected object early warning about the threat detected. expected values of the tested radar parameter;firing capabilities of the given launcher;available military trying area;safety zones military trying area;possibility of finding a proper place of radar operation in relation to launcher position and firing targets.The design and production of radars capable of detecting and tracking small high-speed ballistic objects and calculating their ballistic trajectory parameters requires developing specific research methods to objectively evaluate their technical parameters. 
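The ballistic trajectory calculation mentioned above lends itself to a simple illustration. The sketch below is only a drag-free simplification, not the method implemented in LIWIEC, SOŁA or BYSTRA: it fits a parabola to a short window of noisy simulated radar plots and extrapolates it to ground level to obtain POO and POI estimates. All numbers are invented for the example.

```python
# Illustrative sketch: estimate POO and POI by fitting a drag-free trajectory to a
# short series of noisy radar plots. Operational radars use far more complete
# ballistic models (drag, meteorological data, Earth curvature).
import numpy as np

g = 9.81
rng = np.random.default_rng(1)

# Simulated "true" mortar trajectory: launch at 60 deg elevation, 200 m/s, along x
v0, elev = 200.0, np.deg2rad(60.0)
t_obs = np.linspace(8.0, 12.0, 15)                      # radar observes a 4 s window
x_true = v0 * np.cos(elev) * t_obs
z_true = v0 * np.sin(elev) * t_obs - 0.5 * g * t_obs**2
x_obs = x_true + rng.normal(0, 5.0, t_obs.size)         # add measurement noise
z_obs = z_true + rng.normal(0, 5.0, t_obs.size)

# Without drag, height is a quadratic function of downrange distance: z = a*x^2 + b*x + c
a, b, c = np.polyfit(x_obs, z_obs, 2)

# Ground-level intersections of the fitted parabola give the POO and POI estimates
roots = np.roots([a, b, c])
poo_est, poi_est = sorted(roots.real)
print(f"Estimated POO at x = {poo_est:.0f} m, POI at x = {poi_est:.0f} m")
```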
Generally, such radars require conducting actual observations of shelling from many types of launchers. The planning of such tests requires simultaneous consideration of many various factors, including the following:In the case of conducting testing to verify a small high-speed ballistic object detection zone, mortar shelling is most often used. This means of launcher fires ballistic objects to both small and large heights, in a broad range of distances and does not require too large of a safety zone. The most difficult element in preparing the tests for the conditions of a specific trying area is most often finding a suitable place of operation for the tested radar. Usually, the area of firing means that location of launchers and field of fire on a trying area are imposed by the adopted safety zones, whereas the terrain outside of the tactical strips is covered by forest. For this reason, the location of equipment in the field can only approximately correspond to the theoretical assumptions. Prior to the small high-speed ballistic object detection, each radar undergoes testing with the use of unmanned or classical aircrafts equipped with GPS receivers that record their flight trajectory. The comparison of radar detections with the recorded flight trajectory determines the accuracy of estimation of the detected object\u2019s coordinates . The sequence of testing analyzes the small high-speed ballistic object detection with consideration of the earlier designated estimation errors.In the case of verification of the minimum distance and minimum height of small high-speed ballistic objects detection and tracking, the mortar\u2019s position must be located in the radar\u2019s dead zone. The ballistic objects must be fired into the radar\u2019s characteristic, at a small angle, so that the radar is capable of observing them continuously along the entire flight trajectory. Such a test requires realization of several dozen firings in order to evaluate the minimum ballistic objects detection distance and height in a statistical manner. The test sketch is presented below .The sketch of firings conducted for the purpose of this test is presented in The sketch of verification of the maximum height and distance of detection and tracking of mortars is presented in The firings conducted for the purpose of verifying the maximum detection height must feature a large angle of mortar barrel elevation, with simultaneous use of the maximum propelling charge. This type of firing does not ensure sufficient range. In practical testing, it is, therefore, necessary to conduct separate firings at the maximum flight altitude and at the maximum detection range. The last type of test requires setting the mortar\u2019s barrel at an angle near 45\u00b0.Verification of the accuracy of determining the coordinates of launchers positions (POO) and points of impact (POI) of ballistic objects when the radar observes objects that are moving away. This method controls the execution of the firing task at much greater distances than by optical observation instrumentation. A radar deployed on an observation position far from the enemy is also more difficult to destroy than an observation UAV (unmanned air vehicle), which in order to evaluate the firing effectiveness must fly near the area of the shelled target, and thus can be easily destroyed by the enemy. 
The sketch of the method of executing the test is presented in In order to evaluate the radar\u2019s POO estimation accuracy, the launcher position\u2019s coordinates are measured with the use of a surveying GPS receiver from Spectra Precision, type Epoch 50 . Measurements are conducted in the differential mode with the use of data derived from the ASG-EUPOS network\u2019s reference station. Such a measurement allows for achieving positioning accuracy of no less than 20 to 30 cm, and thereby assumes the measured coordinates as true coordinates. The coordinates obtained this way are then compared with the results of the POO estimation calculated by the tested radar. The estimation accuracy analysis is conducted in the UTM rectangular coordinates system.The analysis of the POI coordinates estimation accuracy of the tested radar is conducted similarly to the POO accuracy analysis. However, it features an essential difficulty in the form of the necessity to find the particular impact points after the firing and to measure their positional coordinates using the GPS receiver. The measurement itself is conducted similarly as in the case of measuring the firing mean position with the use of the surveying differential Epoch 50 receiver.In order to find and identify the point of impact of each of the fired object, the firings must be executed in a special manner, i.e., subsequent objects must be fired not at the same target, but with a slight displacement, for example, 20 to 30 m left or right. Additionally, it is necessary to note the results of localizations conducted by military observers with the use of laser rangefinders. Such tests are always conducted by the polygon services in order to control compliance with safety conditions. The practice of field testing also shows that if the tested radar allows for stable tracking of the ballistic objects, it is necessary to note the estimated coordinates of particular POIs and use them to find particular points of impact. Such data can be entered into the manual GPS receiver and used in combination with the GO TO function to find the firing area and particular impact points in a relatively short time. During the searching of impact points, it is necessary to take special care, comply with the safety principles in force on the training area, and not take any actions without making arrangements with persons responsible for safety during the executed firings. Verification of the accuracy of determining the coordinates of the launcher position (POO) and the points of impact (POI) of ballistic objects when the radar observes objects that are moving towards it. The scheme of such verifications is presented in The observation of firings executed in the direction of the radar requires particularly careful planning. On one hand, the radar must be placed in a suitable position, free of any terrain obstacles in the observation sector, at a distance ensuring the ability to detect flying objects. On the other hand, the test safety conditions must be respected, which means that the radar must be located at a suitable distance from the impact zone and the point of maximum theoretical range of the tested launcher. The reconciliation of these conditions is sometimes difficult and requires strict cooperation of the testing team with the trying area services.The presented results of field testing are illustrative, and their aim is only to illustrate the specifics of particular tests. 
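The accuracy analysis described above amounts to projecting the surveyed and radar-estimated positions onto the UTM grid and comparing them. A minimal sketch of that calculation is given below, assuming the pyproj package, UTM zone 34N and made-up coordinates; none of these values come from the actual trials.

```python
# Sketch of the POO estimation-accuracy analysis: project the surveyed (reference)
# and radar-estimated launcher positions onto a UTM grid and compute horizontal
# errors in metres. Zone and coordinates are assumptions for illustration only.
from pyproj import Transformer
import numpy as np

to_utm = Transformer.from_crs("EPSG:4326", "EPSG:32634", always_xy=True)  # WGS84 -> UTM 34N

# (lon, lat) pairs: GPS-surveyed reference position and radar POO estimates
reference = (21.000000, 52.200000)
estimates = [(21.000310, 52.200150), (20.999820, 52.200090), (21.000150, 52.199760)]

ref_e, ref_n = to_utm.transform(*reference)
errors = []
for lon, lat in estimates:
    e, n = to_utm.transform(lon, lat)
    errors.append(np.hypot(e - ref_e, n - ref_n))   # horizontal error, metres

errors = np.array(errors)
print(f"mean error {errors.mean():.1f} m, RMSE {np.sqrt((errors**2).mean()):.1f} m")
```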
The detailed results of verifications of the tactical and technical parameters of particular devices are confidential and will not be published.GPS L1/L2/L2C/L5;GLONASS L1/L2.The basic measurement tool used in the polygon testing of radars is the surveying Epoch 50 GNSS receiver from Spectra Precision. The receiver interoperates with the Nomad type field controller, including the installed Survey Pro field measurement software. The receiver features 220 receiver channels and can operate with the use of the following signals:The Epoch 50 can operate in the following modes: autonomous, code differential, and phase differential. The differential modes can be realized in real time with the use of differential corrections from the local reference station (radio modem) or with the use of corrections sent via the Internet, for example, from the networks of the ASG-EUPOS reference stations. During field testing practice, the most common solution is the ability to record the measurements with a single receiver and their latter specification in the Spectra Precision Survey Office software. Such a specification only requires downloading from the ASG-EUPOS network of the files including the data recorded by the reference station located nearest to the testing location. The scheme of distribution of the reference stations available in the ASG-EUPOS network is presented in Research and testing of modern radars are complicated processes, both in technical and organizational terms. One of the dominant trends in radiolocation is the development of multifunctional radars. Such radars can detect and identify the following: aircrafts, helicopters, unmanned flying objects, and small high-speed ballistic objects. They require comprehensive knowledge of various fields of technology and a constant quest for new research methods from the research teams . In partTechnological progress in the field of radiolocation contributes not only to the improvement of the basic technical parameters of radars, forcing an increase in the accuracy of existing research and testing methods, but also leads to the creation of completely new functionalities .In this presentation of parameters, we are aware that this article leaves unsatisfied the specific properties and test results of individual devices, but this is impossible due to their final military application."} +{"text": "Intermittent claudication (IC) is the most common symptom of peripheral arterial disease and is generally treated conservatively due to limited prognostic evidence to support early revascularisation in the individual patient. This approach may lead to the possible loss of opportunity of early revascularisation in patients who are more likely to deteriorate to critical limb ischaemia. The aim of this review is to evaluate the available literature related to the progression rate of symptomatic peripheral arterial disease.We conducted a systematic review of the literature in PubMed and MEDLINE, Cochrane library, Elsevier, Web of Science, CINAHL and Opengrey using relevant search terms to identify the progression rate of peripheral arterial disease in patients with claudication. Outcomes of interest were progression rate in terms of haemodynamic measurement and time to development of adverse outcomes. Two independent reviewers determined study eligibility and extracted descriptive, methodologic, and outcome data. 
Quality of evidence was evaluated using the Cochrane recommendations for assessing risk of bias and was reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines.Seven prospective cohort studies and one retrospective cohort study were identified and included in this review with the number of participants in each study ranging from 38 to 1244. Progression rate reports varied from a yearly decrease of 0.01 in ankle-brachial pressure index (ABPI) to a yearly decrease ABPI of 0.014 in 21% of participants. Quality of evidence ranged from low to moderate mostly due to limited allocation concealment at recruitment and survival selection bias.Progression of PAD in IC patients is probably underestimated in the literature due to study design issues. Predicting which patients with claudication are likely to deteriorate to critical limb ischaemia is difficult since there is a lack of evidence related to lower limb prognosis. Further research is required to enable early identification of patients at high risk of progressing to critical ischaemia and appropriate early revascularisation to reduce lower limb morbidity. Intermittent claudication (IC) is the first symptom of peripheral arterial disease (PAD) and is associated with significant functional impairment , 2. PatiSince it is expected that the majority of patients with IC will have a relatively benign lower limb prognosis, the recommended first-line treatment strategy is conservative treatment . This inIf those patients with claudication at high risk for deterioration to critical limb ischaemia could be identified before the onset of gangrene and tissue loss, early revascularisation could possibly reduce the risk of minor amputations, major amputations, local and systemic sepsis from ulcers and wet gangrene and mortality . InterveCurrently there are no predictive formulae that allow the clinician to estimate the level of risk of an individual patient with intermittent claudication to progress to critical limb ischaemia or the time scale in which this is likely to occur . For effIn order to develop predictive formulae for patient specific lower limb management for IC, detailed PAD progression data is crucial, however this is scarce, since the main focus of research has been coronary disease and stroke with less attention paid to the lower limb , 5.This paper evaluated the current evidence related to the progression rate of PAD in patients with IC which is essential for informed clinical decision making.This systematic review was conducted following recommendations from the Cochrane Collaboration . The stuThe search for potentially relevant articles was performed in PubMed and MEDLINE, Cochrane database of systematic reviews, Elsevier (Embase and Sciencedirect), Web of Science and CINAHL. Reference lists of retrieved full-text articles were also cross-checked and OpenGrey database was searched for any relevant grey literature. The searches were performed without restrictions on publication date, or publication status. Search results were downloaded into a bibliographic software Refworks (ProQuest LLC).The inclusion criteria for the search strategy consisted of studies on humans and written in the English language. The search terms used for this literature search were identified after reading several publications related to the subject area and through conducting scoping searches. The terms were formulated by three experienced reviewers who have an interest in the subject area. 
Choice of terms was done independently and was finalised by the main researcher who eliminated duplicates but retained all the identified key words. The literature search sought to identify studies reporting the progression of PAD in patients with IC. Search terms included free text terms and Medical Subject Heading (MeSH) terms related to intermitAs recommended in the PRISMA statement , before Eligible articles needed to report on the natural history of patients with IC as a symptom of PAD, also documenting progression rate of the disease. Disease progression has been previously suggested to be detectable after twelve months , therefoPrimary endpoints were progression rate in terms of haemodynamic parameters (expressed as time for change in ankle and / or toe pressures) and adverse lower limb events . Secondary endpoints were identification of prognostic factors for the development of adverse lower limb events and for the progression of PAD in patients with IC.While prospective observational longitudinal cohort studies have the most suitable design to investigate the natural history of events , in thisTitles and abstracts of studies identified by the search strategy were assessed in terms of relevance to the study topic. Additional relevant references identified from the bibliography of the reviewed articles and those retrieved from the grey literature search, were also assessed. Full texts of selected articles were retrieved if they fulfilled the inclusion criteria and were reviewed by two investigators independently . A meta-analysis was planned if clinical homogeneity was observed. The process was pilot-tested on a selection of studies and refined where required. Disagreement between reviewers regarding the article relevance, inclusion or quality was discussed until agreement was reached.Methodological quality of each trial was evaluated systematically with the aid of the Cochrane handbook and repoThe initial database search yielded a total of 793 potentially relevant papers and an additional paper was retrieved from grey literature search. These were processed as illustrated in Fig.\u00a0Eight full-text articles met the selection criteria reporting temporal progression of PAD in patients with IC , 24\u201330. We identified 8 studies which evaluated the progression of PAD in patients with IC. Only two studies , 25 repoIn summary, the results of this study demonstrate that yearly haemodynamic decline in ABPI was reported in two study by 0.014 and 0.01The prognostic value of the reports presented in the selected studies is limited due to the quality of evidence of each trial. Overall the reviewers rated the quality of evidence for progression rate of PAD in patients with IC as low mainly owing to the possibility of serious risk of bias in these studies Table\u00a0. For eviSelection bias, because of inadequate allocation concealment at recruitment stage was observed in four out of the seven studies reviewed , 25\u201327. Survival selection bias was evident in 6 \u201329 out oThe reporting of methodological detail about aspects that threaten internal validity such as measurement precision of the tools used were often reported. Having reliable and valid instruments is one of the best ways of reducing measurement bias in epidemiologic research. 
However, reports of measurement quality due to the possibility of arterial calcification and hence reporting, were also neglected, with only one article reportinThis systematic review is the first to evaluate the progression rate of PAD in individuals with IC in terms of haemodynamic assessments of the lower limb, since previous reviews largely focused on mortality or amputation risks . Data frThe underestimated risk to the limb in PAD patients has also been reported in a systematic review investigating the progression of PAD in both asymptomatic and symptomatic patients within the context of amputation and mortality risk , 41. TheCurrent level of knowledge precludes the development of robust predictive formulae to identify the risk of haemodynamic deterioration in an individual patient. It is extremely likely that accurate prediction of patient specific risk of deterioration would lead to a paradigm shift in the management of patients with intermittent claudication. Delaying intervention until the patient has already developed critical ischaemia almost invariably means that more extensive occlusive disease has developed. The more complex and the more extensive the disease the more difficult and the riskier the intervention is and the lower the likelihood of success and long-term patency . FurtherThe poor design and reporting in the selected studies may have introduced bias and reduced the robustness of data , 43. TheThe limitations of this review are mainly related to the significant heterogeneity of data among the studies due to different outcome measures and study cohorts. Therefore, meta-analyses of the data could not be performed and only descriptive analysis of the studies was presented. Methodological quality of the included studies was rigorously assessed.Although efforts were made to carry out a thorough search of the literature, some studies may have been overlooked during the process.This paper highlights the need for further research to evaluate the progression rate of PAD in patients with IC. Data from registries which include complete consecutive patient cohorts and are protected from selection bias, are important to support such knowledge. Future studies should conceal allocation and pursue a high rate of follow-up, with a maximum of twelve-month interval between reviews in order to capture any haemodynamic change and reduce the risk of survival selection bias observed in studies with long interval periods and high attrition rates. Due to the calcification risk and possibility of artefactually elevated ABPIs, future studies need to include Doppler waveform and toe-brachial pressure analysis since these assessment modalities are less susceptible to arterial calcification and more likely to provide reliable haemodynamic data.This review has shown that the existing knowledge on the natural progression of intermittent claudication is limited to a small number of studies providing mostly low-quality evidence related to measurable haemodynamic progression rate. The inherent difficulties associated with ABPI as a surrogate measure of peripheral perfusion in patients with medial arterial calcification and the probable underestimated rate of reported progression of PAD have been highlighted. Consequently, international guidelines on the management of PAD are necessarily generic. 
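To put the reported decline rates into perspective, the short calculation below projects how long an illustrative claudicant would take to fall from an assumed baseline ABPI to an assumed low-ABPI threshold; the baseline and threshold values are examples only and are not drawn from the included studies.

```python
# Purely illustrative arithmetic based on the annual ABPI declines reported above.
# ABPI = highest ankle systolic pressure / highest brachial systolic pressure.
baseline_abpi = 0.75          # a typical claudication-range value (assumed)
threshold = 0.50              # an often-cited cut-off for severe ischaemia (assumed)

for decline_per_year in (0.010, 0.014):   # rates reported in the included studies
    years = (baseline_abpi - threshold) / decline_per_year
    print(f"decline {decline_per_year}/yr -> ~{years:.0f} years to ABPI {threshold}")
```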
Further research into the natural progression of the disease is required to enable the development of predictive formulae to guide patient specific management of the condition."} +{"text": "Thanks to its excellent spatial resolution and dynamic aspect, ultrasound of the shoulder allows an optimal evaluation of tendon, muscle and nerve\u2019 structures in shoulder pain. Through this article and owing to inter-observer reproducibility, we will describe an ultrasound standardized protocol in basic first ultrasounds . Nowadays the ultrasound in musculoskeletal imaging is an essential tool in patients\u2019 diagnosis and therapeutic management. For years indeed spatial resolution of ultrasound has been increasing and to date dynamic ultrasound examination cannot be replaced by any type of imaging cross sections . Though At the posterior face of the sacpular region, the teres minor muscle, as the most distal part of the cuff is hardly isolately damaged. However it is often injured in massive rotator cuff tears and a key point in prosthetic replacement. The underlying teres major and latissimus dorsi muscles may be the site of post traumatic lesions in throwing sports. With the long head of the triceps, they can also cause pain in impingements of the Velpeau quadrilateral space, a crossing point of the axillary nerve.At the anterior face of the shoulder, uncommon lesions of the myotendinous junction of the supraspinatus, of the short head of the biceps and coracobrachialis mainly occur on violent trauma (skydiving). The musculocutaneous nerve can also be injured at its upper portion between the two sections of the coracobrachialis muscle as the pectoralis major that can be the site of multiple pathologies affecting both the enthesis and the musculotendinous junction.Over the cuff, the deltoid muscle is an significant abductor and shoulder stabilizer that may be injured with a large tendon rupture. It may also be a tendinopathy or myositis.In this article, we will thus successively observe the ultrasound methods of exploration of these less known musculotendinous shoulder structures, and then nervous structures such as axillary nerve and musculo cutaneous nerve also seldom explored.The aim of this issue is to try to standardize the ultrasound examination of the said-difficult shoulders to achieve a systematic review in case of basic first ultrasound examination.The teres minor inserts from the superior mid lateral scapula to the inferior posterior greater tubercle, just below the infraspinatus .The tendon is examined with the arm in abduction and external rotation (similar to the infraspinatus tendon examination). Analysis is performed arms along the body in internal rotation or hand onto the contralateral shoulder Figure . SagittaNo specific pathology of teres minor was reported though its precise exploration in posterior shoulder pain can give lots of information and helpful in case of pre prosthetic examination in extensive rotator cuff ruptures .The Teres Major originates from the anterolateral plane of the scapula with an oblique up, forward and outward direction. It lies into depth of the long head of the brachial triceps and is inserted onto the medial bicipital groove through a short tendon, common to the Latissimus Dorsi . At the Exploring the Teres Major can be with arm elevation, abduction and external rotation (like a waiter with his tray) to follow continuously from its origin to its scapular humeral insertion (without changing the exploration position so as to explore the axilla). 
However this position is uncomfortable and difficult to maintain for the patient. Two types of injury are mainly described: tendinous avulsions and acute or subacute (golf) injuries of the myotendinous junction in young athletes .Sectional explorations seem easier to us:Sagittal medial cross section performed next to the lateral edge of the tip of the scapula allowing to distinguish the Teres Major from the Latissimus Dorsi , in neutral or external rotation Figure .Intermediate axial cross section performed in the axis of the long head of the Triceps Brachii covering the Teres Major muscle Figure .Axial distal section of humeral insertion in external rotation of the thick portion of the Coracobrachialis, of the brachial artery below the sub scapularis and the Pectoralis Major. The myotendinous junction of the brachial biceps only needs to be located behind the insertion of the Pectoralis Major and the probe shift to the medial arm Figure .As adductor and medial rotator of the arm, it extends from the thoracolumbar spine T7 \u2013 L5) and the iliac crest twisting on itself to the bottom of the intertuberosity channel between the Pectoralis Major and Teres Major muscles and their equivalent function. Clinical implications are mostly minimal: the Latissimus Dorsi is used as transfer in reconstruction surgery without major clinical consequences .As in Teres Major, the humeral insertion is ultrasound-guided through an anterior approach and the arm in external rotation Figure . Yet theThe Quadrilateral Space Syndrome is limited:Up by Teres Minor muscleDown by Teres Major muscleInside by the long portion of the Triceps muscleOutside by the medial part of the proximal humeral diaphysis Figure .Both the posterior circumflex artery and the axillary nerve are held inside. The axillary nerve belongs to the posterior bundle of the Brachial Plexus and provides motor innervation of the Deltoid and Teres Minor and sensory innervation of the stump of the shoulder. It goes from the subscapular muscle and the axillary artery to the Velpeau Quadrilateral Space 78.7To analyze and locate the Velpeau Quadrilateral Space, a posterior axial cross section centered onto the Triceps Brachii tendon with the arm along the body in internal rotation is needed.The content analysis:A posterior sagittal cross section helps locating the infraspinatus and Teres Minor . The circumflex artery and the posterior axillary nerve are opposite the lower portion of Teres Minor in a small echoic fatty angle.Posterior axial section allows identifying the circumflex artery in an axial plane. Color Doppler can be useful to confirm the vascular origin Figure .The long head of the Triceps Brachii originates in the infra-glenoid fossa of a short tendon . The tenIn axial cross section, the tendon of the Triceps Brachii shows the lateral edge of the Velpeau Quadrilateral Space. Adjunct tendon bundles are not identifiable. The first layer shows the Teres Minor muscle and a small triangle with the fatty axillary nerve and posterior circumflex artery. The dynamic maneuvers with tricipital contraction make the difference between the long head of the Triceps Brachii muscle of the underlying Teres Major Figure .At the myotendinous junction of the supra spinatus , an anterior tendinous complex and a posterior aponeurotic portion. This was analyzed in a conventional position, hand on the buttock with shoulder on retro drive through inside-out sagittal cross sections with a \u201ccomma\u201d appearance. 
This may be the root of symptomatic myotendinous crack, responsible for posterior shoulder pain Figure 9]..9].The coracoid process (except in traumatic context of fracture in which ultrasound is in our experience a very simple and efficient examination) is the site of insertion on its lateral plane of the short head the biceps brachii and coracobrachialis with very few pathologies type tendinosis or crack. Calcifications with hydroxyapatite resorption can arise yet.On its medial plane the pectoralis minor inserts medially to the horizontal portion of the coracoid process, with no specific pathology.The ultrasound-guided exploration is performed in axial plane, arm in external rotation and then externally down to go on with the Biceps brachii and Coracobrachialis biceps muscles or internally to study the pectoralis minor Figures \u201313.The Musculocutaneous nerve originates from the lateral cord of the Brachial Plexus. It runs between the two heads of the Coracobrachialis muscle at the upper third of the arm. Then it arises front to the Brachialis muscle and back to the Biceps brachii muscle .Anteriorly to the elbow joint, it inclines laterally to the distal tendon of the Biceps Brachii with a semicircular-shape path. The nerve lies then on the deep slope of the cephalic vein.At the forearm, it becomes the lateral antebrachial cutaneous nerve and pierce deep to the cephalic vein. Anatomic variations are possible. Its lesion is responsible for hypoesthesia of the lateral plane of the forearm .Exploration is done arms along the body with axial cross sections according to the elevator\u2019s technique identifying the Coracobrachialis nerve that crosses back and forward Figure .As adductor, medial rotator and antepulsion of the arm, the Pectoralis Major consists of three portions converging to the bottom and side of the bicipital groove , sternal head (1/2 > sternum), Abdominal costo portion ..2].A lesion is observed in 0.3% to 9.2% of rotator cuff tears, usually in its middle portion. After (massive) cuff rupture, two mechanisms may occur: ascension of the humeral head with subacromial impingement possibly leading to avulsion; Lateral subluxation of the humeral head causing an impingement between the greater tubercle and the deep surface of the myotendinous junction .In case of sports injuries, most of the lesions on anterior and middle portions of the Deltoid Figure 13]. Ul. Ul13]. This standardized ultrasound-guided examination allows an optimal and reproducible study of resistant shoulder pain and first ultrasound-guided examinations unable to explain the patient\u2019s symptomatology.The authors declare that they have no competing interests."} +{"text": "The N to O photoisomerization pathways and absorption properties of the various stable and metastable species have been computed, providing a simple rationalization of the photoconversion trend in this series of complexes. The dramatic decrease of the N to O photoisomerization efficiency going from the first to the last complex is mainly attributed to an increase of the photoproduct absorption at the irradiation wavelength, rather than a change in the photoisomerization pathways.Ruthenium nitrosyl complexes are fascinating versatile photoactive molecules that can either undergo NO linkage photoisomerization or NO photorelease. 
The photochromic response of three ruthenium mononitrosyl complexes, Our results unambiguously point towards an increasing absorption of the isonitrosyl photoproduct at the irradiation wavelength used to trigger the N\u2192O linkage photoisomerization to explain the decrease of the photoconversion yield going from"} +{"text": "Action is needed to face the global threat arising from inconsistent rainfall, rise in temperature, and salinization of farm lands which may be the product of climate change. As crops are adversely affected, man and animals may face famine. Plants are severely affected by abiotic stress , which impairs yield and results in loss to farmers and to the nation at large. However, microbes have been shown to be of great help in the fight against abiotic stress, via their biological activities at the rhizosphere of plants. The external application of chemical substances such as glycine betaine, proline, and nutrients has helped in sustaining plant growth and productive ability. In this review, we tried to understand the part played by bioinoculants in aiding plants to resist the negative consequences arising from abiotic stress and to suggest better practices that will be of help in today\u2019s farming systems. The fact that absolute protection and sustainability of plant yield under stress challenges has not been achieved by microbes, nutrients, nor the addition of chemicals (osmo-protectants) alone suggests that studies should focus on the integration of these units into a strategy for achieving a complete tolerance to abiotic stress. Also, other species of microbes capable of shielding plant from stress, boosting yield and growth, providing nutrients, and protecting the plants from harmful invading pathogens should be sought. The unpredictability of the hydrological cycle has posed a serious challenge to farmers, horticulturists, and to the global community, concerning its effect in meeting food needs of mankind and animals. The number of people to be fed is constantly increasing and food supplies are not meeting the demand.In order to increase the quantity and quality of crops grown, agriculturists have intensified the use of open and ground water sources for irrigation purposes, which has a corresponding salinization implication.However, the use of bioinoculants (plant growth-promoting rhizobacteria) has been of great help in combating this abiotic-climate-induced change that limits the overall performance of plants under stress introduction of beneficial chemical substances as supplement has been used to improve resilience, yield, and tolerance of plants to the toxicity of these stress-imposed conditions, such as the application of caffeic acid oxide and increased atmospheric heat , and salinity issues. The biotechnological application of microbes may address the rising problem and ensure sustainability in the provision of food for all. In addition, the use of biofuels in automobiles as well as the practice of afforestation should be encouraged to help cut down on greenhouse gas accumulation for effective control of climate change. The use of electric automobiles should also be incentivized.Plant growth-promoting rhizobacteria are those indispensable microbes possessing the unique abilities of supporting directly and indirectly the wellbeing of plants. These microbes, in order to survive in the rhizosphere, expanded their biological activities that influence the survival and growth of plants Babalola .2\u2212 solubilizing enzyme in many parts of the world. 
The gradual delay or low level of rainfall due to the influence of climate adjustment to manmade activities is affecting the biotic aspect of the ecosystem particularly green plants which are the primary and major producers of food upon which all life forms depend for meeting their daily nutritional needs.The effect of drought is obvious, yet the microbes domiciled at the root zone of plants, known as plant growth-promoting rhizobacteria, have proven to contribute to the tolerance and quick adaptation and adjustment of plants to drought stress Table via suppThe survival and enhanced influence of endophytic microbes toward poplar adaptation to water stress, for example, revealed that the presence of these organisms aided the physiological wellbeing of the plant and enabled it to tolerate water scarcity in the soil. The result showed 28% increase in biomass (shoot and roots) as against the control , which stimulate growth and the resultant stress annulment for better performance of the plant.In line with the survival-induced strategy, the endophytes will produce a poly-sugar substance known as trehalose, capable of protecting biologically produced compounds and molecules from breakdown during water induced osmotic tension to tolerate drought as well as salt disturbances capable of destroying pathogen/invaders of plants and encourages plants to tolerate abiotic stresses as well as living organisms inducing stress on plants. It could be regarded as a plant growth-promoting bacteria and is widely studied by researchers for their nutrition which was able to hydroxylate the substrate putrescine in the presence of coenzyme (FAD and NADPH). The hydroxamate is a crucial part in siderophore synthesis. It is responsible for binding to irons and other metallic elements in the rhizosphere . The drought interfered with chlorophyll Aspalathus linearis) was subjected to drought condition via withholding of water supply to the plant. This led to an observable effect on the photosynthetic rate of the plant by 40% reduction as well as 61% reduction in stomatal conductance, as a consequences of continuous closure of stomata to maintain intracellular water content and cut down on water loss from the leaves. This is one of the strategies plants adopt to increase their efficiency in the use of water during drought stress periods. Drought-induced closure of stomata has a direct connection with the reduction in plant assimilation and fixation of carbon (IV) oxide from the atmosphere. Inadequate supply of water directly impedes the process of photosynthesis in plants. Preferential development and growth of underground plant parts (roots) to shoot parts of the plant undergoing stress was also observed, which enabled the plant to absorb more water from the inner layer of the earth . This consequently makes the plant more dependent on the available nitrogen in the soil could be a good measure in reducing water stress on rice plants. Potassium nutrient applied at a concentration of 120\u00a0kg per hectare was able to increase rice yield and the index of its harvest within 15\u00a0days of water scarcity greatly encouraged the resistance of soybean to drought stress and decreased its detrimental effect on the formation of nodules by the microbe at the 10th day to 15th day of stress, while the root biomass or dry weight/root growth increased (by 100.96%). 
This means that drought promotes the accumulation of nutrient and photosynthetic products at the root and stimulates its growth and development to maintain balance, adjustment, and sustainability of plant physiological adaptation to the stress problem. Soybean species (SJ-4) possessing unique genetic/physiological characteristics performed better than others under drought stress and should be the crop of choice for drought-predominant locations in the leaves of tomato plant and increased value of the relative water content of the leaves of the plant confer resistance to drought by tomato plant. Proline is responsible for sustaining the movement of water molecules from a region of higher concentration to that of a lower concentration in response to concentration gradient between the plant and its environment. It helps the plant maintain its turgor intracellular pressure by facilitating movement of available water from the soil into the root of the plant undergoing drought stress helped to offset the effect of drought on the plant by 20% compared to the untreated control which showed a decrease in dry matter of the plant by 54 and 56% on the photosynthesis retardation of the plant under drought stress for dissolution and absorption of solubilized POThe presence of mycorrhizal fungi adds further surface area for water and nutrient acquisition, thereby making the plant more resilient and tolerant to the climate change-induced stress. This notwithstanding, they also partake in structure building, arrangement, and improvement of soil for proper aeration and migration of water together with nutrients in the soil, making the soil healthy and fertile can produce organic acid and will be able to mineralize or solubilize P via ionic interaction of the charged group in these molecules. These solubilizers of P play a part in the enrichment of soil fertility and supporting plant nutritional requirements irrespective of the prevailing bioavailability or non-bioavailability of the nutrient in the soil. Though the level of available P can determine the function of the microbes as it relates to solubilization of P, the higher the available nutrients, the less the quest to solubilize already immobilized phosphorus and vice versa. Microbes are the key players in the replenishment of P in the soil , on the other hand, possess the ability of protecting a legume (Medicago truncatula) from senescence of the leaves under drought stress Keyvan . ObviousThe test plants survived the water stress by employing an adjustment in the osmotic behavior of their root systems leading to the accumulation of solutes such as proline in their cells to maintain the cell structure and function during water scarcity. This adjustment in osmosis to sustain the turgidity of the plant cells directly influences the escalation level of photosynthesis and tissue growth of plant during drought stress aided the tolerance of cowpea plant to water scarcity stress and boosted the quality of NO3\u2212 (nitrate) and amino acid (proline) in the inoculated plant and water-deficient (dry farming) systems with a mixture of biofertilizer and chemical fertilizer in an integrated fertilization practice encouraged the plant to efficiently adapt to the water-deficient condition and accumulate both macro- and micronutrients in the tissues of the plant compared to the use of chemical fertilizer alone. 
The application of bioinoculants and mineral fertilizer to a plant growing in a water-limiting environment is more Table effectivAzotobacter chroococcum and Pseudomonas fluorescence together with phosphorus fertilizer, and was able to boost the inorganic P content of the plant (soybean) growing under insufficient and abundant water content of the soil. The N content of the leaves and root of the soybean plant were increased by 6 and 8% under water stress condition. The absorption of phosphorus in the soybean plant under the influence of bioinoculant increased by 16% .The deleterious nature of sodium chloride and other salt compounds on the growth and development of plants particularly in inducing water limitation stress and uncontrollable negative stomata closure cannot be overemphasized. This necessitates the application of an attenuation strategy to cancel the effect of salt on crops.Pseudomonas and endomycorrhizae on cowpea plant undergoing salt-induced stress showed a decrease in the mycorrhizal infection of the plant. Treatment of 6000\u00a0ppm sodium chloride in the presence of the fungi and bacteria increased the carotenoid concentration (0.449\u00a0mg/g). Also, osmolytes (proline and sugars) increased in the presence of endomycorrhizae inoculation in the plant. Irrigation with tap water in the midst of endomycorrhizae and Pseudomonas fluorescens gave higher fresh and dry weight, pod length, seed number, and protein content of the cowpea plant , enhancing survival within the presence of these limited resources by aggregating in masses of cells at both living and non-living surfaces present at the root environment and contributing indirectly to tolerance ability of the plant to stress contributed to maize salt tolerance by enhancing its chlorophyll production level via colonizing and interacting with the plant roots. It also aids in the exclusion of sodium from the plant root, and stimulates the production of sugars and antioxidant within the tissues of the maize plant under the study at salt concentration as high as 6% and possess the capability of phytohormone (IAA) production together with fungicidal enzyme production that contributes to the growth, protection, and tolerance of the plant to biotic and abiotic stress on salt-stressed wheat plants has resulted in adequate adaptation of two wheat plant species to salt-influenced stress and improved its yield subjected to salt stress, the plant was able to maintain its chlorophyll composition and tissue components. Mo being a unique player (cofactor) in cellular enzyme biosynthesis and activities enhanced the formation of chlorophyll and maintenance of adequate cellular function as an additive to plants facing salinity stress. When molybdenum was applied to a bean plant from Kutch Desert has a unique strain of bacteria (Bacillus licheniformis strain A2), which solubilized phosphate, produced indole acetic acid, siderophore etc., and contributed to 31% groundnut height and 43% biomass increment of the plant and 24 and 28% rise when grown in 50\u00a0mM sodium chloride supplemented soil tolerate salt stress. The applied caffeic acid stimulated the NO (nitric oxide) composition of the nodules, which has a direct relationship with the induction of cyclic guanine monophosphate responsible for controlling reactive oxygen radicals produced in the stressed plant. 
Caffeic acid helps to shield nitrogenase enzyme and leghemoglobin of the soybean root nodules from the destructive effect of sodium and chloride ions, thereby adding to the growth of the plant by enabling the symbionts to fix nitrogen adequately to meet plant nitrogen requirements. Caffeic acid has also been implicated in chlorosis suppression of treated plants undergoing salinity-induced stress were also decreased by the inhibitory effect of the high salt concentration in the plant environment. Protein content of the plant was also affected as observed in the appearance and disappearance of protein bands in the plant samples analyzed capable of producing plant hormone (IAA) at the rhizosphere of inoculated cotton plant subjected to salt stress completely averted salinity stress on cotton plant that resulted in the increase in seed germination (15.40%), growth, and overall tolerance of cotton to salinity stress possessing the survival ability of changing its membrane phospholipid composition when subjected to stress condition and also producing indole acetic acid, siderophores, deaminase enzyme, and utilization of nitrate happens to improve the growth (root and shoot) of groundnut plant subjected to salt stress better than a well-known classified Bradyrhizobium sp. (C145) accepted as a suitable groundnut inoculant by the Argentine INTA organization. The organism also performed better in the production of biofilm than Bradyrhizobium and recorded high tolerance to 300\u00a0mmol sodium chloride and higher temperature , collectively regarded as bokashi, have proven to be effective in shielding Mandarin tree (citrus plant) from salt stress and improving its productivity, and the nutritional content of the fruit, and also in enhancing the rhizosphere microbial community count and population and a consortium of microbes known as \u201cKocuria erythromyxa EY43 and Staphylococcus kloosii EY37 were found to be very potent in increasing strawberry plant growth, chlorophyll, mineral content, and fruit yield. The organism was able to inhibit the absorption of toxic ions (sodium and chloride ions) from the plant rhizosphere making a salinity-sensitive strawberry plant insensitive and tolerant to the condition of salt stress with applied jasmonic acid synergistically improved the overall tolerance of tomato plants (wild type and mutant species) to salinity. The combined applied microbe\u2013jasmonic acid enhanced root and shoot growth of the plant, brought about proper regulation of glutathione content of the plant, and enabled it overcome the detrimental effects of salinity. It was noted that the microbial-jasmonate treatment lowered the intracellular abscisic acid level of the plants from salt stress better than the well-known microbes exhibiting excellent characteristics of plant growth promotion such as Microbacterium natoriense and Pseudomonas brassicacearum isolated from the plant (Moringa peregrina) bark possesses high indole acetic acid production capacity and ACC-deaminase was able to improve the growth and development of tomato plant seedlings responsible for degrading the plant product known as aminocyclopropane-1-carboxylate, the precursor of the plant hormone (ethylene) (Adams and Yang Mesorhizobium MBD26 and Rhizobacteria RHD 18) with an observed nodulation capacity of 49 nodules per plant, 201\u00a0mg weight of nodules, 12.28\u00a0mg per plant nitrogen, and a rise in 31.2% of the above the soil plant part dry weight (shoot dry weight). 
The result was further increased to 53 nodules and 116.9% grain produced, with N2 levels spanning between 9.59 and 27.36\u2009mg per plant investigated. The inoculants also degraded the precursor necessary for the production of ethylene; this helped to limit the action of this hormone. Carbonates such as sodium hydrogen carbonate (NaHCO3) also constitute a problem for crops. They are implicated in the formation of alkaline soil, which results in elevated soil pH and interferes with the bioavailability of phosphorus, iron, copper, manganese, and zinc, resulting in induced nutrient deficiency and osmotic stress capable of interfering with the proper biological function of the plant. Strategies that attenuate these stresses could aid greatly in ensuring continuous and efficient agricultural practices that manage the stress and boost the yield of crops."} +{"text": "The article presents survey data on work environment predictors and productivity of academic staff of selected public universities in Southern Nigeria. The study adopted a quantitative approach with a survey research design to establish the major determinants of work environments in the selected public universities. Data were analysed with the use of structural equation modelling, and the field data set is made widely accessible to enable critical or more comprehensive investigation. The findings identified meaningful work and growth opportunities as predictive factors for maximizing productivity in the sampled institutions. Specification table. Value of data: \u25cfThe data can be used by government and other stakeholders to make decisions that in the long run would lead to maximum productivity in tertiary institutions.\u25cfThe data can be used to advise government on the importance of healthy work environments and how they can be beneficial to the overall productivity of the institutions.\u25cfThe data provide information on how different work environment attributes can interact effectively to enhance productivity and sustain greater commitment. Creating a healthy work environment has become an important success factor in any competitive and demanding environment such as the educational sector. The study is quantitative in nature and data were retrieved from staff and management of the sampled institutions. A semi-structured questionnaire was adopted to elicit information from selected respondents. The use of a questionnaire was relevant because the sample was large enough to accommodate statistical analysis and integrate the socio-demographic and work environment variables. An extensive list of items in the questionnaire was developed to understand the nature and type of work environments provided by the sampled institutions. Work environment was measured using items adapted from previous studies. The variables considered include: the extent of meaningful work, degree of physical work-milieu, trust leadership, growth opportunities and the levels of supportive management. To support the measurement model, structural equation modelling (SEM) was adopted to explain the relationship between sets of observed and latent variables. A comparative analysis of the selected public (state) universities was also captured. Of the 250 copies of the questionnaire distributed, 224 responses were received, resulting in a response rate of 89.6%. Academic staff of five selected public universities in the South-west were represented in this study. 
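As a rough illustration of the kind of coding and summary step described for this survey, the sketch below assigns numerical values to Likert-style responses and produces per-construct descriptive statistics. It is a minimal, hypothetical example: the column names, response labels, and the use of pandas are assumptions for illustration, not the authors' actual SPSS/SEM workflow.

```python
# Minimal sketch (assumed column names and labels): numeric coding of Likert
# responses and per-construct descriptive statistics, analogous in spirit to
# the coding step described for the survey data.
import pandas as pd

# Hypothetical raw responses for two of the work-environment constructs.
raw = pd.DataFrame({
    "meaningful_work": ["Agree", "Strongly agree", "Neutral", "Agree"],
    "growth_opportunities": ["Strongly agree", "Agree", "Agree", "Disagree"],
})

# Assign numerical values to the response categories (1 = Strongly disagree ... 5 = Strongly agree).
likert = {"Strongly disagree": 1, "Disagree": 2, "Neutral": 3, "Agree": 4, "Strongly agree": 5}
coded = raw.replace(likert)

# Descriptive summary per construct (mean, std, etc.) as a first analysis step.
print(coded.describe())

# Response rate reported in the text: 224 returned out of 250 distributed.
print(f"Response rate: {224 / 250:.1%}")
```

The inferential step reported in the article (structural equation modelling) would follow on the coded data; the snippet only shows the preparatory coding and descriptives.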
A questionnaire was used to collect quantitative data on the assessment of work environments and productivity among university academic staff. Participants were requested to respond to items in self-administered, quick-answer, structured (close-ended) and unstructured (open-ended) copies of the questionnaire. Primary data were collected using the questionnaire. The choice of a questionnaire for collecting data from the cross-section of the sampled population depended on the variables that were measured, the source and the resources available. The collection of data for this study was achieved by requesting and obtaining relevant data provided directly by the staff of the sampled institutions to maximise timeliness and data accuracy. The researchers established and maintained good relationships with the sampled respondents in order to obtain a good response rate. In order to maximise return rates, the items in the questionnaire were designed to be as simple and clear as possible, with targeted sections and questions. The questionnaire contained structured questions (with multiple-choice and open-ended questions) which were used to encourage respondents to reply at length and choose their own focus to some extent. Data collection for this study involved a combination of different activities. The first step was to recruit and train field assistants to administer the questionnaire alongside the researchers. Five (5) people were recruited and trained on questionnaire administration and other social issues associated with it. The research assistants were also trained to understand the study questionnaire, the processes necessary for successful administration, their allotment in the administration, and how to solve probable challenges they might encounter. The training was to ensure that all assistants had a thorough understanding of the concept, the place, the people and the instrument before proceeding into the field. The responses from multiple-choice and open-ended questions were thematized to facilitate clearly identifiable database entry and analysis. The data collected were extracted and converted into electronic format and subsequently coded by assigning numerical values to the responses. The study indicated a meaningful relationship between work environments and productivity among academic staff of the selected institutions. The collected data were coded and analysed using SPSS version 22. Data were analysed applying descriptive (bar chart) and inferential statistical tests such as structural equation modelling (SEM). Importantly, the study participants were academic staff of the sampled public universities; participants had worked with the institution(s) for a minimum period of 3 years; and, finally, participants were accessible at the time of the survey and interviews. The researchers ensured that respondents were well informed about the background and purpose of this research and were kept abreast of the participation process. Respondents were offered the opportunity to stay anonymous and their responses were treated confidentially."} +{"text": "Local spread patterns of malignant tumors follow permissive tissue territories, i.e., cancer fields, as shown for cervical and vulvar carcinoma. The cancer fields are associated in reverse order with the mature derivatives of the morphogenetic fields instrumental in the stepwise development of the tissue from which the tumor arose. 
This suggests that cancer progression may be linked to morphogenesis by inversion of the cellular bauplan sequence. Successive attractor transitions caused by proliferation-associated constraints of topobiological information processing are proposed for both morphogenesis and cancer. In morphogenesis these transitions sequentially activate bauplans with increasing complexity at decreasing plasticity restricting the permissive territories of the progenitor cell populations. Somatic mutations leading to cell proliferation in domains normally reserved for differentiation trigger the inverse cascade of bauplan changes that increase topobiological plasticity at decreased complexity and stepwise enlarge the permissive territory of neoplastic cells consistent with the clinical manifestations of cancer. The order provided by the sequence of attractor transitions and the defined topography of the permissive territories can be exploited for more accurate tumor staging and for locoregional tumor treatment either by surgery or radiotherapy with higher curative potential. Most cancer researchers accept a model of oncogenesis that is based on the principle of random variation and selection. Following a neo-Darwinian concept random genomic mutations selected for reproductive fitness within the ecosystem of the cell's microenvironment transform normal cells into cancer cells. Sequential additional mutations drive their malignant progression by clonal expansion to the invasive and metastatic phenotypes . A stochin vitro , functional multipopulation units (organs) and, finally, organisms. All stable states, from the levels of living cell to living organism represent attractors in the corresponding state spaces of the cell's genetic regulatory network established by evolution , 16. Insbauplan, a layer of the cell's genetic regulatory network as self-organizing complex system stabilized by dynamical attractors. The German term bauplan was introduced into the field of embryology initially to describe archetypical body plans of different species and physical phenomena and to respond with programmed activities such as proliferation, migration, aggregation, differentiation, quiescence, apoptosis, etc. to assemble tissue structures. Simultaneously, the cells produce topobiological information by themselves generating collectively the morphogenetic field.We assume that morphogenesis is executed by cell populations with a hierarchical sequence of a common species . The bauormation presenteBesides the bauplan, the cell's genetic regulatory network determines at different other layers biological features such as metabolism and energy production which are also relevant for morphogenesis. All layers interact with each other but only the bauplan is considered here. Each bauplan can be characterized by two features: (i) complexity of topobiological information that is processed and (ii) plasticity enabling cells to adapt their bauplans to the topobiological information provided by the morphogenetic fields of abutting cell populations. Bauplan changes due to chromatin reorganization that enhance the complexity of topobiological information processing simultaneously decrease its plasticity or adaptability. A totipotent early blastomere cell at the lowest level of morphogenetic hierarchy is characterized by maximum bauplan plasticity, i.e., all potential bauplans inherent in the cell's genome can be realized by interaction with the corresponding microenvironments. 
Yet, the complexity of topobiological information processing of the totipotent cell is minimal involving mainly physical interactions . The aduDuring morphogenesis the proliferation of the cells increases the topobiological information to be processed by enhancing signals through an increase of cellular surfaces, extracellular matrix and gradients of soluble molecules as well as mechano-transduction that force the cells to respond and thus provides progressive constraints to the system driving its attractor toward instability. At the point of instability bifurcation into new attractors with higher complexity of information processing at decreased plasticity occurs , 22. Eacmorphogenetic metafield. Due to the plasticity of chromatin organization that enables embryonic cells to change their bauplan when facing the morphogenetic fields of the abutting populations, the interactive domain of an embryonic cell type includes the morphogenetic field of its own population and those of the abutting cell populations. This extended territory is termed here ontogenetic tree. The ontogenetic trees for the human cervix uteri (as subcompartment of the M\u00fcllerian system) and of the vulva from the phylotypic stage to maturity are shown in ontogenetic anatomic maps. The tissue structures established within the morphogenetic field of a reference cell population during development can be followed morphologically and placed hierarchically into an in situ. Distinct genetic alterations leading to a gain of function of oncogenes and loss of function of tumor suppressor genes are known for long as so called \u201cdriver\u201d mutations genetic alterations increasing overall cell proliferation and perturbing the terminal bauplan of the affected clonogenic cell enabling cell divisions at domains normally restricted to differentiation. For epithelial cancers this manifestation is microscopically observable in dysplasia/carcinoma utations . ContinuAs the mature tissue derivatives of the hierarchical morphogenetic (meta-)fields can be determined and expressed in ontogenetic anatomical maps for each tissue of interest, the cancer fields obeying to the reverse hierarchy are morphologically distinct and represent an element of order in the malignant process. The cancer fields for carcinoma of the cervix and the vulva (middle subcompartment) are demonstrated with Associations between morphogenesis and cancer spread have also been demonstrated for malignancies such as carcinoma of the rectum , pancreaThe association between development and malignant disease has been proposed for long dating back to the beginnings of cellular pathology in the nineteenth century and has given rise to many embryology-based cancer theories \u201337. Modein vivo or in vitro cells and in cancer cells of the corresponding ontogenetic stage. Although methods to study genome organization with high resolution have been developed this appClinically, the construction of ontogenetic trees guided by the morphology of human embryonic and fetal development and their translation into ontogenetic anatomical maps could be accomplished for any tissue from which cancers originate. Ontogenetic tumor staging and cancer field resections as successfully established for the treatment of colorectal, cervical, and vulvar cancer , 40, 41 MH developed the theory of cancer progression by inverse morphogenesis. MH and UB devised the scenario of epigenomic transitions from the scope of self-organizing complex systems. 
Both authors wrote the manuscript. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Recent interest in the generation of neural lineages by differentiation of embryonic stem cells arises from the opportunities represented by a developmentally normal, unlimited source of material that can be manipulated genetically with precision. Several experimental approaches, which differ conceptually in the route of differentiation and in the characteristics of the resulting cell population, have been reported. In this review we undertake a comparative analysis of these approaches and their suitability for experimental investigation or implantation."} +{"text": "Architects who specialize in designing cultural centers have often been accused of providing spaces that become obsolete in the coming years. This is because, as technology and times change, requirements also change, necessitating new arrangements of spaces. However, very few of the spaces provided in cultural centers can be adapted to other uses. This has affected the sustainability of those spaces. These data present the perceptions of users on the need for, and the features that enhance, flexibility in cultural centers. The data were obtained from a questionnaire survey of users of the three (3) cultural centers in Nigeria. The survey was conducted between October and November 2017. The data may facilitate evidence-based approaches to improving the built environment and will be useful to built environment professionals, policy makers and design researchers. Specifications Table. Value of the data: \u2022Descriptive statistics were used in the presentation of the dataset, which if analyzed will help reveal the factors that affect the use of cultural centers and the features that can enhance flexibility in the use of cultural center spaces.\u2022The data could be used in the development of design standards for cultural buildings.\u2022The data could be used as a basis for comparison of the flexibility of spaces in cultural centers in different countries.\u2022The data can directly help building design professionals take appropriate decisions in the design of cultural centers in Nigeria.\u2022The dataset can be useful to the government and private developers as a guide in addressing the issue of flexibility in the design of other cultural centers and similar buildings, taking into consideration the users\u2019 perception.\u2022The work is a major improvement over previous related work. The data were drawn from a survey of three (3) cultural centers in South West, Nigeria. The data instrument for the study is a questionnaire containing both open- and close-ended questions, with each variable measured on a Likert-like five-point scale. Forty-six of the fifty questionnaires administered to users of the selected cultural centers were returned. The data were collected between October and November 2017. The data collected were analyzed using the Statistical Package for Social Science (SPSS). Frequencies and mean score rankings were carried out. A survey of users of three (3) cultural centers was carried out in South West, Nigeria. The cultural centers investigated are the June 12 Cultural Center (Ogun State), the National Theatre (Lagos State) and the Oyo Cultural Center (Oyo State). 
A sample of the questionnaire used is presented as part of the dataset."} +{"text": "Mazzeo and colleagues from Sao Paulo, Brazil show in a very interesting paper the morphologic and structural changes of the renal parenchyma during clamping of the renal pedicle. Partial nephrectomy is considered the gold standard for treating localized renal tumors. When warm en bloc clamping was used, 30 minutes of warm ischemia caused a decrease in the number of glomeruli. Previous studies show that the swine is the most adequate model for comparison to human kidney anatomy and physiology. In the present paper the authors show that the number of renal parenchymal lesions derived from ischemia is associated with the duration of the insult, but an interesting result was the significant difference between the types of clamping: the group with clamping of artery and vein presented a lower frequency of injuries than the group with clamping of the renal artery only. According to the results of this experimental study, during a partial nephrectomy the en bloc clamping for warm ischemia should be favored over clamping of the renal artery alone to minimize renal injury after partial nephrectomies, but more studies will be necessary in the future to confirm these results."} +{"text": "The notion of default mode network (DMN) and the dual process theory of thought, topics within different cognitive neuroscience and psychology subfields, have attracted considerable attention and been extensively studied in the past decade. The former originated from experimental evidence on brain function obtained when an individual is not involved in a specific task, and recent research suggests that the DMN, which is spontaneously active during periods of \u201cpassivity\u201d, plays a role in mental and neurological disorders. Linking the two ideas, DMN function could be better identified using the well-structured corpus of the dual-process theory of thought, while the DMN could constitute a potential neural basis for use in the dual process theory, thus creating a bridge between the psychology of thinking and neuroscience. GG and FG equally contributed to all phases of the development of the manuscript including conception, literature review, and writing. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Hip fractures impact > 300,000 US older adults yearly, resulting in 70,000 deaths, and are expected to increase in number with the rising population. Residents of skilled nursing facilities are among those at the highest risk of sustaining hip fractures due to increased risk of falls and frailty. These most vulnerable residents stand to experience the most serious results of hip fracture, with > 50% resulting in total dependency and/or death. Care providers are focused on providing an environment of safety with implementation of traditionally utilized fall prevention measures. Unfortunately, the maintenance of safety in this high-risk population often comes at the price of limiting independent mobility. The utilization of passive hip protector padding for those recognized as being at high risk of hip fracture can decrease the risk of hip fracture by 82%; however, challenges to adherence of hip protectors limit the effectiveness of this widely utilized measure. Emerging technology in the form of a smart belt was evaluated in a skilled nursing setting to offer insight into efficacy and user adherence. 
The smart belt is capable of sensing when the wearer is experiencing a motion that would likely result in a fall onto the hip, deploying an anatomically conforming airbag, and alerting caregivers that a fall has occurred. The embedding of the hip protection technology into care planning led to daily patient utilization totaling over 3000 hours. Specific findings on the user-derived motion and experience will be articulated through case studies and illustration of the motion data captured by the technology."} +{"text": "Service-learning (SL) is a contested field of knowledge, and issues of sustainability and scholarship have been raised about it. The South African Higher Education Quality Committee (HEQC) has provided policy documents to guide higher education institutions (HEIs) in the facilitation of SL institutionalisation in their academic programmes. An implementation framework was therefore needed to institutionalise the necessary epistemological shifts advocated in the national SL policy guidelines. This article is based on the findings of a doctoral thesis that aimed at developing an SL implementation framework for the School of Nursing (SoN) at the University of the Western Cape (UWC). Mixed methods were used during the first four phases of the design and development intervention research model developed by Rothman and Thomas. The SL implementation framework that was developed during Phase 3 specified the intervention elements to address the gaps that had been identified by the core findings of Phases 1 and 2. Four intervention elements were specified for the SL implementation framework. The first intervention element focused on the assessment of readiness for SL institutionalisation. The development of SL capacity and SL scholarship was regarded as the pivotal intervention element for the other three elements: the development of a contextual SL definition, an SL pedagogical model, and a monitoring and evaluation system for SL institutionalisation. The SL implementation framework satisfies the goals of SL institutionalisation, namely to develop a common language and a set of principles to guide practice, and to ensure the allocation of resources in order to facilitate the SL teaching methodology. The contextualised SL definition that was formulated for the SoN contributes to the SL operationalisation discourse at the HEI. The institutionalisation of SL in the academic programmes of South African HEIs is guided by the policy documents of the Higher Education Quality Committee (HEQC:9\u201310). The latest institutional audit by the HEQC suggests that the strategic objectives of the UWC in relation to CE and SL implementation strategies need to be reviewed (CHE 2008:19). 
Likewise, the SoN at the UWC has had an obligation to formalise a framework for institutionalising SL in its nursing programme in order to encapsulate the mission of the UWC as an engaged institution and to facilitate the necessary epistemological amendments Julie :1832 advThe aim was to develop an SL implementation framework for the SoN at the UWC.Engagement: The partnership between the knowledge and resources of a university and the expertise of the public and private sectors enriches scholarship, research and creative activity; enhances the curriculum, teaching and learning; prepares educated, engaged citizens; strengthens democratic values and civic responsibility; addresses critical societal issues, and contributes to the public good has been conceptualised as an engaged pedagogy that integrates theory with relevant community service projects. The SL assignments and group discussions have been designed to facilitate a more reflective approach towards greater integration of the contents of the psychiatric mental health nursing and gender-based violence modules with social responsiveness within nursing as an overarching discipline integration of engagement into operations; (2) forging partnerships as the overarching framework; (3) renewing and redefining discovery and scholarship; (4) integrating engagement into teaching and learning; (5) recruiting and supporting new champions, and (6) creating radical institutional change develop a common language, (2) compile a set of principles to guide practice and (3) ensure the allocation of resources to facilitate the SL teaching methodology HEQC :138. TheThe intervention study met all the prescribed ethical procedures of the UWC and received ethical clearance from the Senate Ethics Committee, project registration number 11/1/37 Julie :1836.Limited success was achieved with the building of an authentic community of practice amongst the SL teaching team during the piloting phase of the SL pedagogical model.The developed implementation framework needs to be implemented and evaluated as the next steps to complete the Design & Development Intervention Research. The SL definition that was developed for the school should be regarded as a work in progress, since it was developed before the 11 nurse educators completed the accredited SL short course in 2013. Therefore, this preliminary SL definition will be further refined by a master's degree nurse educator student who is prepared to take up the challenge.This study addresses the gap identified that most HEIs in South Africa fail to establish a standard practice for SL within the formalised systems of their respective academic programmes. The SL implementation framework of this study specifies the intervention elements (change strategies) needed to bridge the gaps that have been identified by the core findings of Phases 1 and 2. 
The design phase thus includes change intervention elements aimed at bridging the prevailing theory-practice gap that emanated from the conceptual confusion relating to (1) differentiating between SL and other forms of community engagement curricular activities, (2) addressing the lack of knowledge of the national SL policy guidelines by involving the academic staff and clinical supervisors in SL capacity building and SL scholarship, (3) developing an SL pedagogical model for the SoN by providing concrete implementation guidelines to embed SL pedagogy in undergraduate nursing modules that are amenable to this pedagogy , and (4) formulating SL institutionalisation criteria for the nursing programme at the SoN in accordance with the SL quality indicators of the HEQC."} +{"text": "Age-dependent declines in muscle function are observed across species. The loss of mobility resulting from the decline in muscle function represents an important health issue and a key determinant of quality of life for the elderly. It is believed that changes in the structure and function of the neuromuscular junction are important contributors to the observed declines in motor function with increased age. Numerous studies indicate that the aging muscle is an important contributor to the deterioration of the neuromuscular junction but the cellular and molecular mechanisms driving the degeneration of the synapse remain incompletely described. Importantly, growing data from both animal models and humans indicate that exercise can rejuvenate the neuromuscular junction and improve motor function. In this review we will focus on the role of muscle-derived neurotrophin signaling in the rejuvenation of the aged neuromuscular junction in response to exercise. Research over the past two decades has uncovered novel roles of skeletal muscle beyond its contractile function. Today skeletal muscle is recognized as a major endocrine organ with the capacity to secrete signals and act on distal targets such as adipose tissue, liver, pancreas, brain and endothelium. In response to environmental or dietary challenges, as well as to organelle and metabolic dysfunctions, the skeletal muscle secretes signals in the form of myokines, myometabolites, neurotrophins and other muscle-derived signals to help maintain the metabolic and physiological homeostasis of the organism 1345Drosophila larval NMJ. During the initial ~100 hours of larval development, the surface area of the muscle increases nearly 100-fold, which results in a large reduction in the electrical resistance of the muscle making the muscle harder to depolarize. Despite this change in the muscle, the depolarization of the muscle by the motor neuron is precisely maintained throughout development to insure normal larval motility. To maintain the consistent depolarization of the muscle, the presynaptic nerve terminal releases increasing amounts of neurotransmitter. To facilitate the increase in neurotransmission, the larval NMJ grows by increasing the number of boutons at the synapse 10-fold during developmental growth (Note: Boutons are morphometric structures of the Drosophila NMJ that is commonly used as a quantitative measure of synapse size) During development the increase in muscle size due to fiber growth results in a change in the resistance of the growing muscle fiber to the depolarizing input of the motor neuron. 
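To make the scaling argument in the preceding paragraph concrete, here is a small illustrative calculation with invented numbers (not measurements from the cited studies): if input resistance falls roughly in proportion to the growing membrane surface area, the synaptic current needed to produce the same depolarization must rise correspondingly, per Ohm's law V = I·R.

```python
# Illustrative only: assumed values showing why a ~100-fold increase in muscle
# surface area (and hence a large drop in input resistance) demands much more
# synaptic current to reach the same depolarization, per Ohm's law V = I * R.
target_depolarization_mV = 20.0   # assumed depolarization to maintain
initial_resistance_Mohm = 100.0   # assumed input resistance of the small muscle
surface_area_growth = 100         # ~100-fold surface area growth during larval development

# Input resistance scales roughly inversely with membrane surface area.
final_resistance_Mohm = initial_resistance_Mohm / surface_area_growth

for label, r in [("early larva", initial_resistance_Mohm), ("late larva", final_resistance_Mohm)]:
    required_current_nA = target_depolarization_mV / r   # mV / MOhm = nA
    print(f"{label}: R = {r:.1f} MOhm -> synaptic current needed ~ {required_current_nA:.1f} nA")
```

The numbers are placeholders; the point is only that maintaining a fixed depolarization across a 100-fold resistance drop requires roughly 100-fold more synaptic current, which is why neurotransmitter release must scale up with muscle growth.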
This change in the electrical properties requires a precise concomitant change in neurotransmitter release from the presynaptic nerve terminal to maintain faithful depolarization of the muscle. The mechanism coupling muscle growth and synaptic input is the production of muscle-derived retrograde trophic signaling that supports the growth of the synaptic innervation. A beautiful example of this coupling is observed during the development of the The preciseness of the adjustments in neurotransmitter release during larval growth suggests that the presynaptic nerve terminal is being informed about the increasing size of the muscle. This possibility proposes a signal originating from the muscle that instructs the motor neuron, and the nerve terminal, about the size of the growing muscle. Support for this model was provided by Goodman and colleagues who showed in a series of experiments that muscle-derived bone morphogenic protein (BMP), Glass bottom boat (Gbb) and the presynaptic type II BMP receptor Wishful-thinking (Wit) function in a retrograde genetic pathway required for normal synapse growth; and the lack of either leads to a smaller than normal synapse with reduced neurotransmission 9At the mammalian NMJ, a number of neurotrophins (NTs) have been shown to be required for proper development of the NMJ 56121314151617181820In vitro and in vivo treatment with BDNF, GDNF, NT-3 and NT-4 potentiates both the spontaneous and evoked release of neurotransmitter at the NMJ 24252627252+ channel activation leading to enhanced Ca2+ influx into the nerve terminal 2+ influx by muscle-derived retrograde signaling has also been demonstrated at the Drosophila larval NMJ in response to genetic or pharmacologic reduction in the sensitivity of the muscle to neurotransmitter, supporting that calcium influx into the presynaptic nerve terminal represents a conserved mechanism of retrograde control of presynaptic neurotransmitter release 330In addition to effects on NMJ morphology, several studies have revealed potent effects of NTs on neurotransmission at the NMJ 622One of the most remarkable properties of the NMJ is its ability to maintain normal function in the face of physical stresses such as extended periods of increased physical activity. During increased physical activity, the release of neurotransmitter from the presynaptic nerve terminal must be sufficient for each contraction without exhausting the store of synaptic vesicles. A number of studies have demonstrated that both endurance and resistance training stimulate extensive morphological adaptations of the presynaptic nerve terminal of the NMJ. Examinations of the NMJ morphology of the soleus, extensor digitorum longus (EDL), plantaris and gluteus maximus muscles in both mice and rats revealed that strenuous physical training induces NMJ hypertrophy leading to an increase in the degree and length of nerve terminal branching 3233343534Figure 1B).The changes observed at the NMJ in response to increased physical activity suggest the existence of an exercise-induced muscle-derived retrograde signal(s) that can modify the morphological and functional properties of the NMJ to adapt to the demands on neurotransmitter release. 
To that end, several studies have demonstrated elevated BDNF, GDNF, NT-3, and NT-4 levels in skeletal muscles post involuntary and voluntary exercise 38394041424343Figure 1A) 464748495051525345555759th to the 10th decade of life The progressive declines in skeletal muscle mass, referred to as sarcopenia, and muscle strength are broadly observed in mammals and represents one of the first hallmarks of aging. Numerous studies in rodent models have demonstrated that the loss of motor function with age is accompanied with changes in the structural integrity of the NMJ Drosophila larval NMJ requires TOR signaling within the muscle, a known cellular signaling system important for adaptive responses to aging 7234556264747576Figure 1A). Thus, the critical signal for the homeostatic increase in neurotransmission is not observed during aging. Furthermore, synaptic homeostasis in Drosophila requires the presynaptic Ephexin receptor Drosophila reveals that mutations in the ephexin gene do not block the increase in neurotransmission observed during aging The observation of substantial NMJ remodeling during aging suggested the possibility that neurotransmission might be compromised during aging. But nearly all studies in both mice and flies have reported increased release of neurotransmitter from aged NMJs compared to young NMJs 345456626364656667686970Drosophila is one of 13 muscle groups required for the extension of the adult proboscis during feeding 80818283Another possibility is that the age-dependent increase in neurotransmission observed at the NMJ is in response to declining muscle function. The CM9 muscle group in NT-3, NT-4, and BDNF85868788GDNF increases expression during aging 90trkB gene (encoding the receptor for BDNF and NT-4) the NT-4 KO mouse 86Figure 1B) 4693Given the important role of muscle-derived neurotrophin signaling on NMJ structure and function, it seems possible that changes in muscle-derived neurotrophin signaling during aging are responsible for the changes in synapse function and morphology. Although there is paucity of data surrounding NT expression during aging, there are studies showing that neurotrophin gene expression in muscles declines with age, including Figure 1C). For example, the increase in muscle contraction is predicted to result in an increase in the ratios of AMP:ATP and ADP:ATP leading to the activation of AMP Kinase (AMPK) 96Figure 1C). It is currently unknown if this signaling system is responsible for the increases in NT expression in muscle observed after exercise. Interestingly, PGC-1\u03b1 has been shown to be required for the increase in expression of the myokine ENDC5, which is cleaved and secreted as irisin from the muscle after exercise Currently it is unknown how muscle activity might increase NT gene expression. It is established that increased physical activity activates a number of important signaling systems within the muscle resulting in a change in muscle metabolism and cellular physiology to meet the demands of increased activity 2+ levels. In the muscle, this increase in calcium is known to activate CamKII leading to the phosphorylation and activation of the CREB transcription factor 100102103104106107It is also known that the increase in contraction will lead to an increase in cytosolic CaIt\u2019s important to consider how aging affects these exercise-related signaling systems and whether this plays a role in the changes in NT expression with age. 
The effects of oxidative stress on cellular physiology have been extensively reviewed 11011111211111211311411511611784NT-3, NT-4, and BDNF genes all decline during aging, except for GDNF which has been reported to increase with age 858687888990Recall that expression of"} +{"text": "Objectives. This study investigates how social participation of the aging population is associated with the community capacity, measured by the number of amenities and organizations within the community. Method. Using nationally representative survey data from the China Health and Retirement Longitudinal Study (2011), this study examines the availability of community amenities and organizations in rural and urban areas, and investigates the associations between community capacity and social participation among the middle aged and older Chinese using multilevel analysis. Results. The results of this study indicate that both community amenities and community organizations are positively associated with the social participation of the middle-aged and older Chinese. Additionally, the association between community organizations and the frequency of formal social participation is stronger among urban communities than rural ones, even after controlling for the individual-level socioeconomic status and health conditions. Conclusion. This study highlights the importance of building the community capacity by developing community-based grassroots organizations to promote the social engagement and participation of the aging population."} +{"text": "The pyramidal lobe is a thyroid tissue of embryologic origin. It is situated in the pretracheal region between the isthmus and the hyoid bone. The high incidence of 65.7% of patients undergoing thyroidectomy suggests that pyramidal lobe is a common component of the thyroid gland rather than an uncommon anatomic variation . Double A 58-year-old woman presented to our clinic with signs and symptoms of hyperthyroidism. Biochemical analysis confirmed thyrotoxicosis. Ultrasonographic examination and thyroid nuclear scan revealed solid, hot, autonomous, hyperactive multiple nodules in both lobes. The diagnosis was toxic multinodular goiter. Before the surgery, written informed consent was obtained from the patient for both surgical management and scientific publication. The patient underwent total thyroidectomy. Full dissection of the lateral lobes was performed, and both the lateral lobes were mobilized medially after completing the surgical dissection. The anterior cervical region between the isthmus and the hyoid bone was completely dissected for identifying the presence of any thyroid tissue. Two different pyramidal lobes, rare anatomic variations that originated from the junction points of the isthmus with the right and the left lobes of the gland, were observed. Both the pyramidal lobes were completely dissected from the isthmus up to the hyoid bone. The thyroid gland, including the two pyramidal lobes, was completely excised to achieve total thyroidectomy. Pathological examination of the thyroidectomy specimen revealed two large pyramidal lobes . WrittenCompleteness of thyroidectomy has great relevance for both autoimmune and malignant diseases. Remnant tissue after surgical operation may complicate the proper treatment of such diseases and the sensitive postoperative follow-up of patients. 
Based on its high incidence, the pyramidal lobe is considered as a normal component of the thyroid gland that may be affected by the diseases that affect the rest of the thyroid parenchyma and uncommonly harbor malignant disease . AnatomiThe incidence of pyramidal lobe is significantly higher in thyroidectomy cases. In fact, surgeons generally find a single pyramidal lobe. Conversely, the presence of double pyramidal lobes is extremely rare, and we could find only three previous cases in the English literature ,4,5. AllThe presence of pyramidal lobe is a typical example of an anatomic variation of the thyroid. The presence of two pyramidal lobes is an extremely rare occurrence that may affect the completeness of thyroidectomy. Various locations of the base of pyramidal lobe generally require careful dissection of both the pretracheal and the prelaryngeal regions from the upper border of the isthmus up to the upper border of the thyroid cartilage in most of the patients and sometimes up to the hyoid bone. Therefore, the anterior cervical region has to be dissected carefully during surgery so that no residual thyroid tissue remains."} +{"text": "The control of translation in the course of gene expression regulation plays a crucial role in plants\u2019 cellular events and, particularly, in responses to environmental factors. The paradox of the great variance between levels of mRNAs and their protein products in eukaryotic cells, including plants, requires thorough investigation of the regulatory mechanisms of translation. A wide and amazingly complex network of mechanisms decoding the plant genome into proteome challenges researchers to design new methods for genome-wide analysis of translational control, develop computational algorithms detecting regulatory mRNA contexts, and to establish rules underlying differential translation. The aims of this review are to (i) describe the experimental approaches for investigation of differential translation in plants on a genome-wide scale; (ii) summarize the current data on computational algorithms for detection of specific structure\u2013function features and key determinants in plant mRNAs and their correlation with translation efficiency; (iii) highlight the methods for experimental verification of existed and theoretically predicted features within plant mRNAs important for their differential translation; and finally (iv) to discuss the perspectives of discovering the specific structural features of plant mRNA that mediate differential translation control by the combination of computational and experimental approaches. The genomic information in plants, similar to other eukaryotes, is implemented via a successive series of biological processes, including transcription and translation as the key events. The current experimental omics tools for genomic monitoring of plant gene expression allow tracking the flow of genetic information from genome to proteome and to metabolome. New experimental approaches, for example, RNA-Seq and DNA microarrays, have given insight into many key mechanisms involved in transcription regulation in plants: the first stage of gene expression and the easiest to study in terms of experimental methodology. The studies of transcriptomes, i.e., the qualitative and quantitative estimation of expression of the entire gene pool on a genome-wide scale, have given convincing evidence of dynamic changes in the transcriptomes of various plant species in both growth and development processes and the impact of environmental factors. 
Comparative omics studies in plants clearly demonstrate a very modest correlation between the levels of transcription and translation (the levels of the corresponding proteins in the proteome). Of note, the observed fluctuations in the levels of a transcript do not always lead to changes in the levels of the corresponding protein . This suTranslation is a complex biological process with numerous players, including mRNAs, tRNAs, ribosomes, and manifold protein factors. Undoubtedly, each is important for efficient translation. The mRNAs themselves comprise different regions, namely, the 5\u2019 untranslated region (5\u2019UTR) and coding region (CDS) and 3\u2019 untranslated region (3\u2019UTR), which modulate translation at a number of \u201ccheckpoints\u201d: translation initiation, elongation, and termination. In the current view, numerous regulatory elements may be concealed in the nucleotide contexts of these mRNA regions and each of them individually or in combination can determine the development of any transcript in translational process .The paradox of misfit between the levels of mRNAs and their protein products observable in different plant species at all stages of their growth and development as well as upon the impact of various environmental factors focuses the attention of researchers on two key problems, namely (i) detection of the specific sets of differentially-translated transcripts, i.e., the sets of transcripts that are efficiently translated under certain conditions, and the sets of transcripts with repressed or unchanged translation under the same conditions and (ii) clarification of the particular regions or specific structural features of the mRNA nucleotide composition that mediate this differential translational control.This review focuses on the experimental methods for genome-wide analysis of translational control, computational algorithms to search and analyze various regulatory contexts within mRNAs, and approaches for subsequent experimental verification of their correlation with mRNA translation in plants. Currently, we cannot refer to deficiency in publications comprehensively reporting the basic protocols of various methods for genome-wide analyses of translational control in general, including the methods applicable to plant objects. However, reviews that consider and discuss the three key components of the general strategy for identification of regulatory contexts in mRNA that may play a key role in differential translation are still absent in the scientific literature. Our goals here are (i) to consider the experimental approaches aiming to clarify differential translation on a plant genome-wide scale; (ii) to summarize the current data on the computational algorithms used for detection of the specific structural and functional features of key determinants within plant mRNAs and their interrelation with the translation efficiency; (iii) to highlight the methods for experimental verification of existed data and theoretical predictions of the intrinsic features of plant mRNAs important for their differential translation; and (iv) to discuss the ways of decoding the specific structural features of plant mRNA that mediate differential translational control by combining computational and experimental approaches. 
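One common way to flag candidate differentially translated transcripts from such genome-wide data is to compare ribosome-associated (polysomal or footprint) abundance with total mRNA abundance for each transcript. The sketch below computes such a ratio for invented counts; it is only a simplified illustration of the idea, not a substitute for the normalization and statistical testing used by dedicated tools in the field.

```python
# Simplified sketch with invented counts: a per-transcript "translational status"
# ratio comparing ribosome-associated abundance to total mRNA abundance.
import math

# Hypothetical normalized abundances (e.g., reads per million) per transcript.
total_mrna = {"geneA": 120.0, "geneB": 80.0, "geneC": 200.0}
polysomal  = {"geneA": 300.0, "geneB": 20.0, "geneC": 190.0}

for gene in total_mrna:
    ratio = polysomal[gene] / total_mrna[gene]
    log2_ratio = math.log2(ratio)
    # Simple two-fold thresholds stand in for proper statistical testing.
    status = "up" if log2_ratio > 1 else "down" if log2_ratio < -1 else "unchanged"
    print(f"{gene}: log2(polysomal/total) = {log2_ratio:+.2f} -> translation {status}")
```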
In general, this review discusses the main and critical steps for each method in this general strategy, areas of their application, and the main results obtained using plant objects and their contribution to our knowledge about the fine mechanisms of translation in plants.Initially, proteomics methods were used to identify the correlation between the observed fluctuations in the expression of a transcript and the actual level of peptides in plants . HoweverArabidopsis thaliana, Nicotiana benthamiana, Solanum lycopersicum, and Oryza sativa, as well as for individual plant tissues ,4140,41]ditions) .A. thaliana protein-coding genes contain uORFs in their mRNA 5\u2019UTRs [Alternative ORFs are among the most abundant regulatory elements in mRNAs; they are frequently present in the 5\u2019 leader regions of eukaryotic mRNAs (designated uORFs). Such uORFs may negatively modulate the translation efficiency of the downstream main ORF. According to the current estimate, approximately 20% of the A 5\u2019UTRs . InitialA 5\u2019UTRs or experA 5\u2019UTRs ,60,61. TA. thaliana for assessing a sequence-dependent effect of each CPuORF on expression of the main ORF. A comparative analysis of the reporter protein activity of the variants when the translation is controlled by either native CPuORFs or the CPuORFs with introduced frameshift mutations has identified five novel CPuORFs that repress the expression of the main ORF in a sequence-dependent manner. Moreover, it has been convincingly demonstrated that the C-terminal regions of four of these CPuORF-encoded peptides play a crucial role in repressing the translation of the main ORF [A. thaliana CPuORFs in arresting ribosomes during translation was tested in another study. This mechanism of CPuORF action was clarified using toeprinting analysis and the additional experimental evidence was obtained by constructing the following three types of reporter constructs. (i) With the CPuORF initiation codon removed from each reporter construct of the native CPuORF by replacing AUG with AAG; (ii) with frameshift only mutations, introduced to the CPuORF sequences; and (iii) with both removed initiation codon and frameshift mutations in CPuORF sequences. A comparative testing of all types of reporter constructs has shown that removal of the initiation codon from CPuORFs considerably increases the reporter gene expression; the frameshift mutations in CPuORFs also efficiently increase the reporter gene expression, although to a lower degree as compared with the removal of initiation codon; while the simultaneous presence of frameshift mutations and absence of the initiation codon have almost no effect on the reporter gene expression. These results clearly demonstrate that (i) the peptide sequences are partially responsible for strong repressive effects of these CPuORFs on the main ORF expression; (ii) repression of the main ORF expression depends on CPuORF translation; and (iii) these CPuORFs induce ribosomal arrest and thereby considerably inhibit expression of the main ORF [The approach of frameshift mutations utilizes concurrent introduction of deletions and insertions at \u22121 and +1 positions; this procedure changes only the amino acid composition of a peptide sequence coded for in CPuORFs but retains the presence and unchanged length of the overlapping CPuORFs d. This mmain ORF . The funmain ORF . 
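The uORF analyses summarized above ultimately depend on locating upstream AUG codons and the reading frames they open within a 5'UTR. A minimal scanner of that kind is sketched below; the example sequence and the minimum-length cutoff are invented for illustration, and real pipelines additionally consider Kozak context, overlap with the main ORF, and conservation (e.g., for CPuORFs).

```python
# Minimal sketch: find putative uORFs (AUG ... in-frame stop) within a 5'UTR.
# The example sequence and the minimum length are assumptions for illustration.
STOPS = {"UAA", "UAG", "UGA"}

def find_uorfs(utr5: str, min_codons: int = 2):
    """Return (start, end, length_in_codons) for each AUG-initiated ORF
    that terminates at an in-frame stop codon inside the 5'UTR."""
    utr5 = utr5.upper().replace("T", "U")
    uorfs = []
    for start in range(len(utr5) - 2):
        if utr5[start:start + 3] != "AUG":
            continue
        for pos in range(start + 3, len(utr5) - 2, 3):
            if utr5[pos:pos + 3] in STOPS:
                n_codons = (pos - start) // 3  # codons preceding the stop
                if n_codons >= min_codons:
                    uorfs.append((start, pos + 3, n_codons))
                break
    return uorfs

# Hypothetical 5'UTR sequence containing one short uORF.
example_utr = "GCUAUGGCAUCAACGUGAAGGCACC"
print(find_uorfs(example_utr))  # -> [(3, 18, 4)]
```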
Thus, iIt should be emphasized that the selection of a method for assembling reporter constructs is of highest importance, since it is necessary to clone the target regulatory sequences with a reporter gene without the introduction of additional nucleotides, which may influence the translation modulation. The classical restriction\u2013ligation cloning method does not allow generation of such constructs and requires several cloning stages. Several more economical and efficient technologies facilitating and accelerating the design of such constructs have been recently proposed. These technologies make it possible to produce seamless fusions of a \u201cregulatory sequence\u2013reporter gene\u201d. Most of them utilize the recombination between homologous sequences residing at the ends of the DNA fragments to be assembled . For exaThe principle underlying the CPEC method utilizes the polymerase extension mechanism and one DNA polymerase for the in vitro assembly and cloning of sequences in any vector in a single-stage reaction. CPEC allows for an integrated, combinatorial, or multifragment assembly of sequences as well as for routine cloning procedures . Thus, tN. tabacum L. cv. BY2, lettuce, or A. thaliana are used as well as agroinfiltration of leaves or plant cell suspension culture [As for the functional assessment of the constructs carrying a target regulatory sequence fused with a reporter gene, two basic experimental approaches are used: they utilized (i) a transient (temporary) and/or (ii) stable (constant) expression of reporter constructs in plants ,71. In tana T87) ,71,72 (Sana T87) ,73.A stable expression of reporter constructs in plants requires production of transgenic plants or transgenic plant cell suspension cultures. This makes it possible to solve increasingly more complex problems of translational control, such as translation regulation under different growth and stress conditions or long-term physiological effects of certain changes in a sequence that modulate translation. In particular, polysome fractionation of control and transgenic plants makes it possible to confirm that the transcript of a reporter gene controlled by a tested regulatory sequence is actually associated with the polysome fraction. Thus, it is possible to assess the translational status of the mRNA of a reporter gene fused with the tested regulatory sequence, including under the effect of stress factors .When using the methods involving both stable and transient expressions, the choice of an adequate control is of a paramount importance to ascertain that the change in expression of the reporter protein is actually associated with the change in translation ,51.According to the available experimental data, the strategy of reporter systems is in demand for a wide range of studies into individual regulatory sequences or their nucleotide contexts. The use of this strategy gives the unique data on the functional role of target sequences in translation efficiency; however, this strategy is, as a rule, supplemented with other methods. 
The choice of a particular method depends on the regulatory sequence or context to be studied, be it a full-sized 5\u2019UTR or its regulatory elements, such as RNA G-quadruplexes, IRESs, uORFs, and so on, which is in part summarized above and is comprehensively described in the corresponding publications. Translation plays a key role in the overall implementation of genetic information, and new knowledge about the intricate and multilevel information encoded in the mRNA sequence is of paramount importance. The research into translation has revealed many new and interesting facts about the structural and functional role of the mRNA regulatory sequences that mediate differential translation. In particular, this success has been determined by the use of state-of-the-art technologies for assessing the translational statuses of individual mRNA species on a genome-wide scale in combination with computational algorithms and the methods for experimental verification summarized here. The knowledge on the roles of regulatory contexts in mRNA for translation efficiency, as well as the combinations of these contexts, will require improvement of both experimental approaches and theoretical algorithms. Researchers must have the opportunity not only to precisely determine the correlation between the observed fluctuations in expression of a transcript and the actual content of the corresponding protein in plants, but also to precisely define and estimate the contributions of individual regulatory contexts and their combinations within mRNAs. Correspondingly, the need for development of an integrated information resource for this purpose looks very reasonable. This resource would comprise information about (i) the experimental methods for assessing the changes in translation on a genome-wide scale, including their modifications and applicability to different plant species; (ii) the resources for analyzing, interpreting, and visualizing the polysome and ribosome profiling data; (iii) the resources for constructing the target sequences of plant transcripts and predicting their characteristics; (iv) the methods for experimental verification of the regulatory codes of the plant transcripts involved in translation modulation; and so on. This will form the background for coordination of the numerous studies and for insight into the fine mechanisms underlying the control of biological processes at the point of translation in plants. It will also expand the capabilities for future studies and the potential applications of the mRNA regulatory contexts, including their use in engineering novel plant genotypes carrying the best combinations of the corresponding alleles and the generation of new transgenes, including through the use of genome editing technologies."} +{"text": "Background: The effects of publishing case reports on journal impact factor and their impact on future research in pediatric dentistry have not yet been clearly evaluated. Aim: To assess the relevance and role of case reports in pediatric dentistry. Methods: A systematic review (PROSPERO registration number: CRD42018108621) of all case reports published between 2011 and 2012 in the three major pediatric dentistry journals was performed manually. Data regarding citations of each report were acquired from the Institute for Scientific Information database available online. The authors analyzed information regarding the citations received by each case report and considered their relation with the 2013 journal impact factor. 
Results: Case reports accounted for almost sixteen per cent of all articles published between 2011 and 2012. The citation rate of case reports was generally low and the highest mean citation was 0.5. This review revealed that 6 (9.52%) case reports had at least 5 citations and that the majority of the citing articles were also case reports (27.78%) or narrative reviews (25%). Conclusions: The publication of case reports affected the journal impact factor in a negative way; this influence is closely related to the percentage of case reports published. Case reports about innovative topics, describing rare diseases, syndromes, and pathologies were more frequently cited. In medicine, a case report (CR) in the scientific literature is the detailed report of the symptoms, signs, diagnosis, treatment, and follow-up of a single clinical observation. Since CRs are characterized by methodological standards that could be assessed as limitations, they are normally considered the base of the pyramid of clinical evidence. However, CRs are frequently rejected by journals, and the only evident reason for this rejection is the preconception that the publication of CRs lowers the journal\u2019s impact factor (IF). The IF of a journal is the number of citations, received in a year, of articles published in that journal in the previous two years, divided by the total number of articles published in the same time-frame. As already reported by Nabil and Samman, there are two issues regarding CRs to be investigated: firstly, the number of citations of CRs in high-ranking journals, as an indicator of the influence of CRs on the IF; secondly, the impact of CRs on future research. The present review aims to analyze both the abovementioned issues in order to understand the significance and role of CRs in the field of PD. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) checklist was used as a guideline for conducting and reporting the present systematic review [13]. The protocol for this systematic review was registered on PROSPERO (CRD42018108621). All CRs published in major English-language PD journals from January 2011 to December 2012 were considered. Studies were included if they were case reports published in English in a major PD journal. Excluded articles included: non-English publications; reports of in vitro or animal experiments; reports of more than 5 patients; articles without a complete description of the demographic and clinical data of every patient; articles not citable according to the Institute for Scientific Information Web of Science (ISI WOS); and grey literature. Two reviewers (RP and ES) screened and evaluated the titles and abstracts of the retrieved studies independently and in duplicate using the abovementioned inclusion criteria. The assessment of full-text articles was also conducted by the same reviewers. The supervising author (PG) was involved in resolving any disagreements through discussion. All included CRs were further divided into rare disease or pathology (RDP) or new treatment or diagnostic method (NTD). All information regarding the citations was retrieved from ISI online databases and collected during the third week of August 2018, since the authors expected only minimal changes to the number of citations over the time during which the study was conducted. 
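For clarity, the two-year impact factor referred to throughout this review can be written out explicitly; the formula below simply restates the standard definition given in the text, with year labels matching the 2013 IF used here:

\[
\mathrm{IF}_{2013} \;=\; \frac{\text{citations received in 2013 by items published in 2011 and 2012}}{\text{number of citable items published in 2011 and 2012}}.
\]

Under this definition every CR published in 2011 and 2012 enters the denominator, so CRs that attract fewer citations than the journal average necessarily pull the IF down, which is the effect quantified in this review.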
Reviewers extracted two data items regarding citations: the total number of citations received in 2013, in order to answer the first question of this review, and the total number of citations received from the date the CR was citable to the date of data collection, in order to answer the second question of the review (the impact of CRs on PD). The screening and data-collecting processes were carried out using specially designed data extraction forms. For studies apparently fitting the inclusion criteria, or in case the title and abstract did not provide sufficient information for a clear decision, the reviewers analyzed the full report. The studies excluded after full-text evaluation, along with the reasons for exclusion, were recorded. The primary outcome was the influence of the publication of CRs on journal IF; to pursue this purpose, the authors evaluated the number/percentage of CRs present in each PD journal and their relation to the journal IF. With the aim of evaluating the primary outcome in depth for every CR published in the selected journals, the authors retrieved the following elements: the total number and mean citations, and the influence of the publication topic (RDP or NTD) and of the possible inclusion of a literature review on the citation rate. The secondary outcome was the impact of the publication of CRs on future research in PD; this objective was pursued by retrieving data on the frequency of CR citations from the day they were citable to the third week of August 2018. The authors recorded the number of citations for each included CR and, besides that, retrieved all the non-cited CRs and all CRs with at least 5 citations, along with all the citing articles. Non-cited CRs and CRs with at least 5 citations were divided according to their topic (RDP or NTD), whereas the citing articles were analyzed by collecting data about the citing journals' IF and the type of study (randomized controlled trial (RCT), systematic review (SR), narrative review, CR, prospective study, retrospective study, cross-sectional study, editorial) through analysis of their titles and abstracts. In the case of insufficient data to categorize the citing articles, the full text was obtained. The final search of the ISI WOS citable CRs published from January 2011 to December 2012 in major English-language PD journals retrieved 66 results: 8 published in the International Journal of Paediatric Dentistry (IJPD); 24 published by the European Journal of Paediatric Dentistry (EJPD) and 9 by the Journal of Clinical Pediatric Dentistry (JCPD). Among these results, 2 CRs were excluded since they reported data regarding more than 5 patients, and an additional CR was excluded because of a lack of complete demographic and clinical data regarding the patient. The flow chart of the search strategy, along with references to the three excluded CRs, is also provided. In the biennium between January 2011 and December 2012, the JCPD published the highest number of CRs, followed by the EJPD and the IJPD; this order remains unchanged even when considering the percentage of CRs published relative to the total number of articles. The authors, moreover, considered the category of CRs described and highlighted that CRs describing RDP irrefutably have more citations in all of the analyzed journals. Of all the CRs analyzed, only 6 obtained at least 5 citations considering the time-frame between the availability of the article on ISI WOS and the date of data collection (third week of August 2018). 
These 6 CRs were equally distributed among the 3 journals, and no proportionality between this value and the number of CRs published within each journal was noted. Among the 6 highly cited CRs, 5 described RDP and only 1 described NTD. Evidence-based medicine (EBM) is defined as the conscientious, explicit, and judicious use of current best evidence for health care. As regards the biennium 2011\u20132012, it emerges that the percentage of CRs (16%) published in pediatric dentistry journals is twice the percentage of CRs (7%) published in general medical journals. Although this review has analyzed the methodological strictness of the articles citing the CRs, it is not prudent to take at face value the data showing a relatively low rate of citation of CRs by SRs and RCTs (16.67% and 2.78%, respectively) and to construe them as a lack of ability to stimulate high-level evidence, because the rare nature of some diseases makes it impossible to reach an adequate sample size for high-level evidence studies [34,35,36]. In conclusion, the publication of CRs remains a relevant focal point for scientific knowledge and it should be continued. Authors and editors should spend more time carefully considering the information enclosed in submitted CRs in order to improve their usefulness. When writing a CR on PD topics, authors should consider whether sharing their own results brings real benefits to the PD community, and editors should select articles that deal with recent topics and describe RDP, since these will receive a higher number of citations and will influence future research. This review confirms that, also in PD journals, the publication of CRs has a negative effect on the IF, proportionally to the percentage of CRs published. Attempts to modify the journal IF could reduce the effect of publishing CRs in it, but this action remains doubtful and should be discussed in future publications."} +{"text": "Retinopathy of prematurity (ROP) is an ocular disorder which affects infants born before 34 weeks of gestation and/or with a birth weight of less than 2000 grams. If not detected on time and appropriately managed, it can lead to irreversible blindness. The retinal blood vessels first appear between 15\u201318 weeks of gestation. These vessels grow outwards from the central part of the retina and extend towards the retinal periphery. The nasal part of the retina is fully vascularised by 36 weeks of gestation, followed by the temporal retina, which is completely vascularised between 36\u201340 weeks of gestational age. The development of ROP can be divided into two phases: an initial phase of delayed growth of retinal vessels followed by a second phase of retinal vessel proliferation. At birth the lungs of an infant born preterm are immature, placing him/her at a high risk of developing abnormally low levels of oxygen in arterial blood. To overcome this, the newborn infant is given supplemental oxygen in the NICU. Prior to 32 weeks of gestation, the retina is very immature and the retinal metabolic demand is low. This excess oxygen creates retinal hyperoxia and oxygen toxicity, inhibiting the production of VEGF. This is followed by a temporary arrest of normal retinal growth and constriction of new immature vessels. As a result, there is a reduction of blood supply to retinal tissue and a shortage of oxygen needed for metabolism. With increasing age of the preterm infant the retina matures. 
There is an increase in metabolic demand and oxygen consumption by the retina, creating a relative decrease in oxygen level. This promotes increase in the level of vascular endothelial growth factor (VEGF), triggering the formation of new blood vessels along the inner retinal surface. A demarcation ridge develops along the retina that separates the central vascularised retina from the peripheral avascular retina .The growth of retinal blood vessels at this stage may restart normally or may progress to significant ROP as seen by an abnormal growth of retinal vessels into the vitreous and over the surface of the retina. These new vessels are weak and underdeveloped failing to fulfill the oxygen demand of retinal tissue resulting in continuous growth of abnormal vessels. There is leakage of fluid or blood from these weak blood vessels. If not treated on time this can result in scarring or traction of the retina leading to retinal detachment and blindness ."} +{"text": "The cost-effective production of chemicals in electrolytic cells and the conversion of the radiation energy into electrical energy in photoelectrochemical cells (PECs) require the use of electrodes with large surface area, which possess either electrocatalytic or photoelectrocatalytic properties. In this context nanostructured semiconductors are electrodic materials of great relevance because of the possibility of varying their photoelectrocatalytic properties in a controlled fashion via doping, dye-sensitization or modification of the conditions of deposition. Among semiconductors for electrolysers and PECs the class of the transition metal oxides (TMOs) with a particular focus on NiO interests for the chemical-physical inertness in ambient conditions and the intrinsic electroactivity in the solid state. The latter aspect implies the existence of capacitive properties in TMO and NiO electrodes which thus act as charge storage systems. After a comparative analysis of the (photo)electrochemical properties of nanostructured TMO electrodes in the configuration of thin film the use of NiO and analogs for the specific applications of water photoelectrolysis and, secondly, photoelectrochemical conversion of carbon dioxide will be discussed. Vphoto) could be generated by an electrochemical cell employing either p- or n-type Ge electrodes , upon anchoring of an opportune sensitizer due to the very large surface concentration of active sites on which sensitization takes place have been considered as electrode materials since the late fifties when a photopotential (DSC), for which the present record is ca. 14% , it appears important to analyse and review the recent developments in the materials science behind the design, production and characterization of semiconducting electrodes with characteristic size of 10\u22129 m. In particular, the present contribution will focus on the analysis of the electrochemical and photoelectrochemical properties of nanostructured SCs made of transition metal oxides (TMOs) with particular attention to NiO , is the possibility of varying electrochemically/photoelectrochemically the redox states of their constituting units, i.e., metal centers and/or oxygen anions . 
This is due to the richness of the electrochemical and photoelectrochemical behavior of nanostructured NiO as proved by its direct participation in reversible processes of oxidation and reduction of various nature.Nanostructured SC electrodes made of TMOs with one in non-aqueous electrolytes and in anhydrous/anaerobic atmosphere generates the voltammetric profile of Figure x in which x accounts for the presence of Ni(III) site in the matrix of the nickel oxide. The latter became an interesting material of research due to its low cost and excellent ion storage property. For example, NiO nanostructures are p-type semiconductors with peculiar magnetic and electric behavior depending on the particle size. A comprehensive analyses of the different nanostructured morphologies, in which NiO could be obtained, falls outside the purpose of the present review work. Furthermore, a recent review paper by Bonomo properly faced this topic \u2192 Ni(III) and Ni(III) \u2192 Ni(IV), with the first occurring at lower potential values .The oxidation processes verified at i Figure are ascri Figure (1)NiO+mEonset = 2.5 V vs. Li+/Li, Figure \u2212 (Equations 1 and 2) from the electrolyte to the electrode. This kind of process is generally accompanied by the production of mechanical stress in the electrode centers can be already present in the pristine nanostructured oxide Hagfeldt . In Equa\u2212 to I\u2212 . The process of Equation 5 corresponds to the electrochemical n-doping of NiO. It has been verified that electrochemical reduction of NiO affects its electrical conductivity, optical absorption, ionic conduction and magnetic properties through sensitization with a molecular light absorber which upon optical excitation transfers an electron to a moiety acting as electrocatalytic center for H2 formation . At a large scale the conduction of the latter process through the photoelectrochemical approach would ideally diminish the request of fossil fuels extracted from fields. It has been recognized that the photoelectrochemical reduction of CO2 proceeds efficaciously through two paths and its successive reaction with CO2 to give selectively hydrogenated products with high energy density or synthetic usefulness; (b) by means of the direct photoelectrochemical reduction of CO2 to the electrocatalytic moiety (Cat) interacting directly with CO2, implies the realization of the following sequence of elementary steps: to CO2;Step of p-SC , for the neutralization/regeneration of the PS moiety and/or the Cat unit that resulted temporarily oxidized for the occurrence of the previous steps (b) and (c).Uptake of one or more electron from p-type for the efficacious photoelectrochemical reduction of CO2 are the combination of Cu2O/CuO in the shape of nanorods with the adoption of dye-sensitized nanostructured NiO of p-type as photoelectroactive component. Different to DSC, it has been early recognized that the sole light-absorbing unit in the immobilized state is not sufficient to accomplish the photoelectrochemical reduction of H+ for the successive formation of H2 during water splitting (or the reduction of CO2) due to the complexity and the number of chemical reactions that follow the starting electrochemical step of electron transfer. For this reason, the successive development of catalytic units was necessary for the full realization of the wanted photoreduction process at modified NiO. 
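As a purely schematic illustration of the Ni(II)/Ni(III)/Ni(IV) oxidation steps and of the charge-compensating ion exchange invoked above (this is a hedged, generic reconstruction and not a reproduction of the article's own Equations (1), (2) and (5)), the oxidation (p-doping) and reduction (n-doping) of a nanostructured NiO electrode in an electrolyte containing anions A\u2212 and cations M+ are commonly written as

\[
\mathrm{NiO} + x\,\mathrm{A}^- \rightleftharpoons \mathrm{NiO}^{x+}(\mathrm{A}^-)_x + x\,e^- \qquad (\mathrm{Ni^{II}} \rightarrow \mathrm{Ni^{III}})
\]
\[
\mathrm{NiO}^{x+}(\mathrm{A}^-)_x + y\,\mathrm{A}^- \rightleftharpoons \mathrm{NiO}^{(x+y)+}(\mathrm{A}^-)_{x+y} + y\,e^- \qquad (\mathrm{Ni^{III}} \rightarrow \mathrm{Ni^{IV}})
\]
\[
\mathrm{NiO} + z\,\mathrm{M}^+ + z\,e^- \rightleftharpoons \mathrm{M}_z\mathrm{NiO} \qquad (\text{n-doping, e.g. } \mathrm{M}^+ = \mathrm{Li}^+)
\]

In each case the uptake of electrolyte ions compensates the charge injected into the solid, which is consistent with the charge-storage (capacitive) behavior of NiO electrodes described above.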
At the present stage the progress on the photoelectrolysis cells for the realization of photoelectrocatalytic reduction processes depends mainly on the nature of the dye-sensitizer and the catalytic units combined in the multifunctional supramolecular assembly PS-Cat (either bridged or separated) rather than on semiconducting NiO cathode., The special attention given by the present review to the analysis of the electrochemical and photoelectrochemical properties of p-type NiO is due to the variety of the (photo)electrochemical processes occurring in NiO electrodes, and the complexity of the kinetics of NiO redox processes. Both characteristics certainly render this system a paradigmatic example for the class of semiconducting TMO electrodes.This review has given an overview on the electrochemical and photoelectrochemical behavior of semiconducting electrodes made of nanostructured transition metal oxides (TMOs) like NiO in the configuration of thin films. The interest in NiO resides primarily in the chemical-physical stability which is imparted by bonds having mixed covalent and ionic character. Such a combination of characters generates an electronic structure characterized by the presence of energy bands and partially delocalized states at the valence level with impartment of semiconducting properties. TMO based semiconductors like NiO are photoactive since such electrode materials can transfer electrons in the desired direction provided that a radiation of opportune energy is absorbed by the TMOs for the primary realization of the hole-electron separation. Unlike the semiconductors based on Si, NiO undergoes reversible electrochemical redox processes in the solid state (either in the dark or illuminated states), and as such represent electroactive species. The occurrence of a NiO-based redox process leads to the doping of the oxide and it is accompanied by charge storage. The latter process is also of great utility for the development of batteries and primary sources of electrical energy based on NiO electrodes. Nanostructuring of NiO is an efficacious tool for modulating of the electrochemical properties, the optical absorption and the characteristics of charge transport. Therefore, the preparation of nanosized NiO gives further opportunities for employing NiO as photoelectroactive materials for the finalities of solar energy conversion , solar fuels generation during water photoelectrolysis, and the photoelectrochemical reduction of COAll authors contributed equally in the compilation of the text, the bibliographic analysis and the preparation of figures and tables. The content of the present contribution has been approved by all authors.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. The reviewers TS, IP and handling Editor declared their shared affiliation."} +{"text": "Once considered anomalous structures, transforaminal ligaments are not widely known and the criteria for identifying and classifying them are not universal. They are, however, of potential importance during neurological procedures, as their entrapment might lead to radicular pain.Transforaminal ligaments are not present in all patients, but when they are, the incidence of all types of ligaments is significantly higher, with the most common type being the superior corporotransverse ligament. 
Because they diminish the overall amount of space available for the spinal nerve to pass, many early studies concluded that transforaminal ligaments were the cause of nerve root entrapment, resulting in radicular pain. However, more recent studies have claimed that the ligaments do not cause radicular pain but rather serve to protect nerves and vessels. The contribution of transforaminal ligaments to radicular pain is still a topic of debate, but their role in the protection of nerves and vessels is certain. The clinician who performs interventional procedures directed toward the intervertebral foramen and the surgeon operating in this region should have a good working knowledge of the anatomy and proposed functions of the transforaminal ligaments. Once considered anomalous structures, transforaminal ligaments are not widely known and the criteria for identifying and classifying them are not universal. There are five major classifications of transforaminal ligaments: the superior and inferior corporotransverse ligaments, the superior and inferior transforaminal ligaments, and the mid-transforaminal ligament [3]. The intervertebral foramina allow for the passage of numerous structures, including the root of each spinal nerve, segmental spinal arteries and veins, lymphatics, and recurrent meningeal nerves [4]. Two broad varieties of ligaments have been found in association with the intervertebral foramina: radiating and transforaminal ligaments. Within the boundary of each intervertebral foramen is a network of ligaments dividing the outlet into multiple subcompartments that contain their own specific anatomical structures. Upon examination of 10 cadaveric lumbar spines, Golub and Silverman reported the presence of such ligaments. The transforaminal ligaments are not widely known and were once considered anomalous structures after they had been described in the anatomical study by Golub and Silverman [10]. In their pioneering study on these ligaments in 1969, Golub and Silverman described these structures. The two corporotransverse ligaments are mostly distributed in the L5-S1 intervertebral foramen: the superior corporotransverse ligament attaches from the posterolateral corner of the vertebral body to the accessory process of the transverse process of the same vertebra; the inferior corporotransverse ligament connects the same posterolateral corner of a vertebral body to the transverse process below. It must be emphasized that transforaminal ligaments are not always present, but when they are, the overall incidence of all types of these ligaments is approximately 47%. The anatomical location of the transforaminal ligaments led many early studies to conclude that these condensations of fascia were the cause of nerve root entrapment resulting in radicular pain, since they diminished the space available for the spinal nerve to pass [10,12]. This was a comprehensive literature review conducted on the transforaminal ligaments regarding their anatomy and function. Although there have been limited studies of their form and function, the transforaminal ligaments are believed by some to be protective of nerves and vessels traversing the intervertebral foramen, while their role as a cause of radicular pain is still a matter of debate."} +{"text": "The keyhole digging process associated with variable polarity plasma arc (VPPA) welding remains unclear, resulting in poor control of welding stability. 
The VPPA pressure directly determines the dynamics of the keyhole and weld pool in the digging process. Here, using a high speed camera, a high frequency pulsed diode laser light source and an X-ray transmission imaging system, we reveal the underlying physical behavior of the keyhole weld pool. The keyhole depth changes periodically, corresponding to the polarity conversion period, if the current is the same in the electrode negative (EN) phase and the electrode positive (EP) phase. There exist three distinct regimes of keyhole and weld pool behavior over the whole digging process, due to arc pressure attenuation and the energy accumulation effect. The pressure in the EP phase is smaller than that of the EN phase, causing fluctuation of the weld pool free surface. Based on the influence mechanism of energy and momentum transfer, the arc pressure output is balanced by separately adjusting the current in each polarity. Finally, the keyhole fluctuation during the digging process is successfully reduced and welding stability is well controlled. VPPA (Variable Polarity Plasma Arc) keyhole welding is an ideal method to achieve joints in medium-thickness aluminum with high quality and efficiency. As a unique characteristic of plasma arc welding, keyhole behavior determines the process stability of plasma arc welding. Keyhole detection research has been carried out for DC-PAW (Direct Current Plasma Arc Welding) of steel, for example by Liu et al. [6,7]. The above research mainly focused on the keyhole weld pool evolution in a welding quasi-steady state rather than on the keyhole digging process. Here, we have observed the fluctuation of the molten pool surface in the digging process by a high speed camera with a high frequency pulsed diode laser light source system. The keyhole boundary was also obtained in real time by an X-ray transmission imaging system. In order to analyze the factors influencing keyhole stability, the plasma arc pressure is measured by a pressure transducer. Combining the energy and momentum balance between the electrodes and the arc, the physical mechanism of the plasma arc pressure was obtained, based on which we optimized the pressure output. Finally, the optimized parameters were verified by the weld formation and the weld pool free surface fluctuation. A5052P aluminum alloy is adopted as the work piece, and its chemical composition is given in the accompanying table. The setup for observing the weld pool free surface is shown schematically in the corresponding figure. In order to clearly observe the weld pool free surface, a telephoto micro lens with a 640 nm band-pass filter is used to filter out the arc light, a high-frequency pulsed diode laser light source system to illuminate the weld pool, and a high speed video camera (HSVC) to record the images. The frame rate of the HSVC is 2000 fps. 
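Before the pressure analysis that follows, it is useful to recall a standard magnetic-pinch estimate of the axial arc pressure; this textbook relation is given only as a hedged illustration consistent with the current and current-density dependences discussed below, and it is not necessarily identical to Equations (2) and (3) of this work. For a cylindrical arc of radius r carrying current I with uniform current density j = I/(\pi r^2), the stagnation pressure on the axis is approximately

\[
P_0 \;\approx\; \frac{\mu_0 I^2}{4\pi^2 r^2} \;=\; \frac{\mu_0\, j^2 r^2}{4}.
\]

Thus, at fixed arc radius the pressure scales with the square of the current, while written in terms of current density it scales with the square of j and the square of r. A reduction of current density in the EP phase (hotter tungsten tip and cathodic cleaning of the oxide layer on the base metal) therefore lowers the arc pressure even when the current is unchanged, which motivates raising the EP current to balance the pressure output.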
The HSVC sends out an electrical pulse signal to the laser light generator at the moment the shutter opens, ensuring that each picture is clear and reducing the effect of laser heat on the weld pool. To understand the plasma arc pressure, and thus to analyze its influence on the digging process, a dedicated measurement platform is established. In the digging process of VPPA welding, the variation of the free surface of the weld pool within one current period is analyzed. The keyhole boundary is obtained by image edge extraction based on the X-ray results, the keyhole depth over time in the whole digging process is recorded, and the keyhole and weld pool during the digging process are represented schematically in the corresponding figures. In addition, the unmelted height becomes smaller as the melting depth increases, which makes heat loss at the weld pool bottom more difficult. Heat gradually accumulates in the unmelted region, the melting speed becomes faster, and the keyhole digging speed also increases. Predictably, as the keyhole depth increases, the thermal accumulation also increases. The keyhole digging process and thermal accumulation are mutually reinforcing, resulting in the blasting-type penetration in the last stage. According to the above analysis, the keyhole instability during the digging process mainly occurs in the first two stages. Adjusting the output of the plasma arc pressure to stabilize the keyhole state in the EN and EP phases is crucial to realizing stable penetration. In order to understand the evolution mechanism of the VPPA pressure and to adjust the pressure output reasonably, the energy and momentum balance between the electrodes and the plasma arc in the different polarities is analyzed. Another reason for the pressure reduction in the EP phase is the different physical process of interaction between the plasma arc and the base metal in the EN and EP phases, as pointed out by Tashiro et al. Moreover, Basins et al. give a relation for the arc pressure in which I is the current and r is the radius of the plasma arc; from this, the difference in pressure between the EN phase and the EP phase can be obtained and, expressed in terms of current density, Equation (2) yields Equation (3). Through the analysis of the momentum and energy balance at the interface between the electrodes and the plasma arc, it can be seen that the increase in tungsten temperature and the cleaning of the oxide layer on the surface of the base metal during the EP phase both result in a decrease of the current density. Thus, the arc pressure depends not only on the square of the current but also on the square of the arc radius, based on Equations (2) and (3). In this section, the current of the EP phase is adjusted to balance the pressure output. The variation of the plasma arc pressure with the change in current can be clearly presented by the measured pressure distribution contours, and the weld pool free surface is compared for different EP currents. Through a large number of welding experiments, it is found that the success rate of forming a sound weld bead is low if the keyhole weld pool fluctuates, and cutting easily occurs. The main conclusions are as follows: (1) The keyhole and weld pool fluctuate with the periodic change of the plasma arc state during the digging process of VPPA welding, rather than the keyhole depth increasing continuously over time. By observing the keyhole boundary of the digging process in real time, it is found that the keyhole depth increases rapidly in the EN phase but decreases gradually in the EP phase. (2) Three stages of the keyhole digging process are identified, namely regular periodic fluctuation, violent fluctuation, and blasting penetration. 
The fluctuation in the second stage is caused by the attenuation of the plasma arc pressure as the keyhole depth increases. The positive feedback between thermal accumulation and the continuous increase in the depth of the keyhole leads to the rapid penetration in the third stage. (3) In order to accurately balance the plasma arc pressure output, the influence mechanism of the energy and momentum balance on the pressure is analyzed. The difference between the plasma arc pressure of the EP and EN phases can be effectively reduced if the EP current is 30 A to 50 A larger than that of the EN phase. The balanced pressure output can reduce the keyhole weld pool fluctuation in the digging process and improve the welding success rate. This work mainly investigates the stability of the keyhole digging process in VPPA welding. Using comprehensive experimental measurements of the weld pool free surface, the keyhole boundary and the plasma arc pressure, the evolution of the keyhole weld pool and the influence mechanism of the plasma arc pressure are obtained, leading to the specific conclusions listed above."} +{"text": "Evolutionary Applications includes eleven papers covering a wide range of perspectives and methodologies relevant to understanding genomic variation under domestication. Domestication has been of major interest to biologists for centuries, whether for creating new plant and animal types or more formally exploring the principles of evolution. Such studies have long used combinations of phenotypic and genetic evidence. Recently, the advent of a large number of genomes and genomic tools across a wide array of domesticated plant and animal species has reinvigorated the study of domestication. These genomic data, which can be easily generated for nearly any species, often provide great insight with or without a reference genome. 
The comparison of genome wide data from domestic and wild species has ignited a wave of insight into human, plant, and animal history with a new range of questions becoming accessible. With this in mind, this issue of This issue has five broad topics, with the first focused on deleterious mutations in the context of domestication , allow us to learn ever more about the human condition. Together, these articles demonstrate the emergence of a new understanding of how genomewide effects may differ from those at specific loci and how these effects can change dramatically over relatively fast time scales. It also shows how studies of the process of domestication provide tremendous insight into the change in fitness in populations and can be used to improve plant and animal breeding populations and inform other areas of evolutionary biology such as conservation and adaptation in the wild.None declared."} +{"text": "Objective: To report a clinical case of macular retinopathy after laser light exposition.Methods: We described a case of reversible maculopathy in a 29-year-old woman. Retinography and Optical Coherence Tomography were performed.Discussion: Retinopathy due to laser light is an increasingly frequent pathology because of its improper use and the massive sale not regulated by the Internet. These lesions can vary from mild disruptions in the pigmentary epithelium to retinal hemorrhages, retinal ruptures, or macular holes. The depth of the lesion and the involvement of the photoreceptors layer will confer a worse visual prognosis. Conclusion: A correct control of the sale and consumption of these devices are necessary to suppress this completely avoidable pathology. Laser light retinopathy is an increasingly common pathology due to the spread of Internet sales and less control of the safety of laser light devices. Laser lights are often mis-labeled and mis-marketed as toys and they are sold to all kinds of consumers, even children. However, even so-called safe lower-power lasers may produce retinal damage and permanent visual loss [2].The human eye suffers damage as a result of ionization, thermal and photochemical mechanisms when it is exposed to laser light of more than 500 mW of power and between 400 and 1400 nm of wavelength [3]. There is a classification of laser devices according to their power. They are classified from I to IV, those labeled as category IIIb and IV being unfit for consumption .Laser lesions in the posterior pole can range from minimal hypertrophy of the pigment epithelium to the appearance of central serous chorioretinopathy, hemorrhage, choroidal neovascularization, or macular hole [5]. Although the use of low powered lasers is widespread, fearful children may report symptoms only if bilateral visual impairment has occurred [6].Bilateral retinopathy has been reported previously and may be more likely to occur in cases involving mirrors or beam splitters or in patients with alternating fixation . A greater regulation of Internet sale, age restriction of use, and education are essential to avoid the damage of these dangerous devices. An adequate control of the sale to the public of high-power laser devices is necessary to avoid these pathologies either by direct exposure or by the glare of the light [Conflict of interestThere is no conflict of interest. All authors agree with the publication of this manuscript. 
This clinical investigation was approved by the Ethics Committee of the Hospital Regional of Malaga and adheres to the principles of the Declaration of Helsinki.The patient was informed and signed the informed consent to participate in this study."} +{"text": "The formation of self-generated gradients of iodixanol from a solution of uniform concentration requires the use of vertical or near-vertical rotors. The density profile that is generated depends upon the sedimentation path length of the rotor, centrifugation time, RCF and temperature. Modulation of the starting concentration changes the density range of the gradient. This Protocol Article illustrates the effect of these parameters on gradient shape in a few selected rotors. Because the gradients are formed by the centrifugal field, they are highly reproducible and easy to execute."} +{"text": "Biological signals are the reflection of accumulated action potentials of subdermal tissues of a living being. Its presence signifies the ionic and electrical activities of the muscular and the neural cells in a synchronized manner. Being a mosaic model of a living architecture, the resultant vectors of biological signals have temporal as well as spatial representation. These signals are stochastic in nature. Medical diagnostic tools are prevalent using the support of medical signals. In the course of time, a significant amount of progress has been achieved in the field of medical signal processing for the improvement of the signal-to-noise ratio, extraction of features from those filtered signals, and classification of the extracted signals for clinical applications. This special issue emphasized the recent development of medical signal processing, improvement of algorithms, and wider clinical applications. Entropy-based kernel extraction technique is being used for the analysis of the nonlinear and nonstationary epoch signals. This kind of approach shows robustness in noise reduction. Machine learning algorithms are also being used for real-time feature extraction (pattern extraction) from tympanic temperature profiles. Quadratic support vector machine algorithms were also found to enhance the accuracy of the detection mechanism.Regulation of rehabilitation devices and protocols are also governed by the processing of medical signals. Reference techniques are often used for acquiring surface EMG signals for the activation of the rehabilitation actuators. Consecutive placement of stimulator-detector arrays influences the spatial acquisition of the functional electrical stimulus (FES) and volitional sEMG which can be used for controlling EMG-driven FES neuroprosthesis. Dynamic models are also being used for analyzing and detecting features from sEMG of deltoid muscles, present in the upper arm. The analysis of the signals can be used for the evaluation of the differently abled persons to predict the treatment outcomes. Health monitoring can also be achieved by processing of time-domain biological signals of sub-nanosecond durations. It can even help in predicting the state of the human heart by correlating the activity of the heart muscles and the blood pumping process.Deep recurrent neural networks are also found to be useful tools for predicting features related to the adverse effect of drugs on the human body. 
In addition to this technique, conditional random fields are also implemented by many researchers for identifying the biological signals from significantly highly correlated background noise due to the subsequent physiological modification after the introduction of the foreign particles within the body. Similarly, the assessment of the trace elements in the human body can also be achieved by extracting features of the biological signals and biomarkers, and subsequently analyzing the features. A neural network can play an important role towards the identification of the continuously adaptable trace elements within the physiological fluids.Biomedical signaling has significant outreach in its clinical application domain. Cerebral arterial stenoses can also be predicted by a nonradioactive technique, namely, photoplethysmography. It involves the employment of an optical detection technique, which is a wavelength-specific process. Corresponding photodetectors are being used to measure the wavelength of the reflected light. Cerebral disorder can also be detected from EEG-fMRI recordings. It involves a rigorous filtering process, which employs a comb filter, followed by a moving-averaged filter for the elimination of the random noises.In this special issue, all the above topics are discussed with their recent state of the art and their corresponding clinical applications.Kunal PalKunal MitraArindam BitSaugat BhattacharyyaAnilesh Dey"} +{"text": "It is projected that by 2030, 6 percent of Nigeria\u2019s present population of 180 million will be 60 years and above. However, the extent to which the traditional systems of family support and security can manage the care of the increasing number of older people in the country is not clear as limited studies are available in the country regarding the health burden and Socioeconomic costs of caring for dependent older people. This study is therefore aimed at assessing the health burden and costs of caring for dependent older people in Nsukka, Nigeria. This cross sectional survey involved 1030 randomly selected elderly persons in Nsukka, Nigeria . Structured questionnaire and Focus Group Discussion Guide (FGD) provided the data for the study. The qualitative data were analyzed with descriptive statistics, while regression analysis formed the basis for predicting effects of the variables in the study. The qualitative data from the FGD were analyzed thematically. The findings show that the Nigeria government was largely uninvolved in the care and support for older dependent people; leaving families to negotiate a \u2018journey without maps\u2019. Families carried the health burden of care for the elderly with attendant socioeconomic costs. The traditional role of female relatives as caregivers was beginning to give way to paid caregivers. An innovative policy frame work targeted at the needs of older persons in health care, social protection and other forms of intergenerational support is required to supplement inputs from families of the aged in Nigeria."} +{"text": "The increasing number of existing and new chemicals demands ecotoxicological data as well as toxicological data for pre- and postmarketing risk assessments. Although human health has been the major concern in Japanese environmental management, ecosystem health is becoming the big issue as the need for preserving the diversity of ecosystems has been recognized. 
This recognition is changing the regulatory framework in Japan, resulting in new actions toward establishment of water-quality standards for aquatic organisms and ecotoxicological assessment of existing chemicals. At the same time, the need to assess complex liquids that contain several kinds of chemicals is increasing. The ecotoxicological study of Japanese effluents shows that the present chemical-specific standards are not enough to protect aquatic ecosystems. These two factors encourage the application of ecotoxicological tests as well as the toxicological data."} +{"text": "The field of craniofacial biology has greatly benefited from the progress made the last decades in developmental biology, molecular biology, stem cell biology, and genomics. A detailed description of the action of signaling pathways that are involved in developmental processes has significantly increased our comprehension of the organization and function of molecular networks that are involved in the formation of the craniofacial complex . The ion channel protein TRPM7 mediates the mineralization process of craniofacial hard tissues . The influence of photobiomodulation on the capacity of bone marrow cells to form alveolar and craniofacial bones is also discussed . Calvaria formation is a complex process that necessitates the coordinated action of osteogenic stem cell populations , controlled by specific signaling molecules such as Indian Hedgehog and the transcription factor Gli3 . Palatogenesis is an equally complex process orchestrated by cytoskeleton and extracellular matrix rearrangement and expression of specific signaling molecules .The influence of systemic glucocorticoid administration in the composition of bone structures of the jaw of mini pigs is reported .The identification of stem cell compartments in conjunction with the discovery of the complex ensemble of signals that creates their microenvironment is essential for a successful regenerative approach. Therefore, the identification and characterization of stem cell niches within dental pulp and the analysis of Notch signals during tooth repair is of prime importance . Similarly, Wnt signaling , VEGF and Hypoxia Inducible Factor and Syndecan-1 might affect alveolar bone remodeling and periodontal ligament homeostasis.Loss of the alveolar bone that surrounds teeth could be an actor of periodontal pathology, and this process could be controlled via the Chang et al.), Fibroblast Growth Factor 10 , MORN5 and the microRNA Mir23b and Mir133b are involved in the physiopathology of a variety of tissues and organs of the craniofacial complex. The Wnt pathway controls the growth of dental lamina from where the future teeth will develop. Retinoic acid , steroids , and Na:K:2Cl Cotransporter-1 control the formation of enamel. Generation of different dental epithelial spheres that contain stem cells might enhance our understanding of enamel formation . Dentin sialophosphoprotein (DSPP) is also very important for proper dentin formation and mutations in the DSPP gene lead to dentinogenesis imperfecta . The role of Nerve Growth Factor pathway is similarly important during human tooth development and repair (Mitsiadis and Pagella). 
Finally, the Hippo-YAP1/TAZ cascade is important for pituitary stem cell development .Defined molecular mechanisms such as GLI-mediated transcription (The increased knowledge on the development, pathology and regeneration of tissues and organs of the craniofacial complex will most certainly orchestrate a significant shift toward novel diagnostic and therapeutic approaches. Although many questions concerning the mechanisms involved in craniofacial tissues development and regeneration have not yet been resolved, modern imaging tools, mathematics, bioinformatics, and genomics will help to elaborate new concepts and models that will change drastically this field. Further progress in treatments concerning craniofacial tissues and organs depends upon active and vigorous research programs.All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "During the development of the cortex distinct populations of Neural Stem Cells (NSCs) are defined by differences in their cell cycle duration, self-renewal capacity and transcriptional profile. A key difference across the distinct populations of NSCs is the length of G1 phase, where the licensing of the DNA replication origins takes place by the assembly of a pre-replicative complex. Licensing of DNA replication is a process that is adapted accordingly to the cell cycle length of NSCs to secure the timed duplication of the genome. Moreover, DNA replication should be efficiently coordinated with ongoing transcription for the prevention of conflicts that would impede the progression of both processes, compromising the normal course of development. In the present review we discuss how the differential regulation of the licensing and initiation of DNA replication in different cortical NSCs populations is integrated with the properties of these stem cells populations. Moreover, we examine the implication of the initial steps of DNA replication in the pathogenetic mechanisms of neurodevelopmental defects and Zika virus-related microcephaly, highlighting the significance of the differential regulation of DNA replication during brain development. The neocortex is a complicated brain region characterized by excessive cell diversity as it is composed by multiple types of neuronal and glial cells assembled in networks that regulate the higher order functions. The cortex derives from the dorsal telencephalon located in the most anterior part of the neural tube through a strictly regulated process that involves diverse types of neural stem cells (NSCs) and more committed neural progenitors (NPCs) organized in discrete zones . ExcitatCoordinated action of morphogens and intrinsic signaling cues leads to the successive generation of the distinct types of NSCs and NPCs during cortical development. These populations differ on their self-renewal ability and differentiation potential. Numerous studies have identified distinct morphological and molecular features that describe the diversity among NPCs , while at the onset of neurogenesis NECs are replaced by apical radial glial cells (aRGs) that remain in contact with the apical membrane and form the ventricular zone (VZ), one of the main proliferating zones of the cortex. 
During mid neurogenesis a second proliferating zone is established basally of the VZ by intermediate progenitors, which delaminate from the apical membrane and are translocated to the subventricular zone (SVZ). Despite these primary populations, other minor groups of progenitors are also generated over the course of development like the short neural precursors (SNPs) that appear in the VZ and the outer RGs (oRGs) located in the upper boundaries of the SVZ. Here, we will focus on the populations of the apical NSCs that prevail the VZ during the initial stages of cortical generation . A more Neuroepithelial cells consist the first population that reside in the walls of the neural tube. These cells are highly polarized, possessing a basal process that is attached to the basal lamina and an apical process attached to the apical surface in contact with the neural tube lumen . NECs seUpon the onset of neurogenesis at embryonic day (E)10.5 in murine development and until E12.5, NECs are gradually transformed to aRGs, which constitute the main population of NSCs during cortical development. aRGs are also polarized cells exhibiting an apical and a basal process that span the radial axis of the developing cortex and maintain their primary cilium that projects into the newly formed telencephalic vesicles . SimilarDuring cortical development, extrinsic signaling cues derived from the local environment coordinate the proliferation and fate commitment of NSCs. Various fate determinants and signaling pathways impact not only in the genetic program of NSCs by regulating the expression of specific factors, but also in their intrinsic features like their cell cycle and chromatin dynamics . ActivatWithin a complete cell cycle the genetic information must be faithfully duplicated and pass intact to the progeny. Moreover, in the level of multicellular organisms, replication must be coordinated with cell fate decisions that are mediated by complicated transcriptional programs. DNA replication is organized at multiple levels to ensure that genome duplication will be completed within the available time and over- or under- duplication that threatens genome integrity will be avoided. In eukaryotic cells, DNA replication initiates from multiple sites along the genome known as replication origins. Initiation of replication is strictly regulated by a two-step process that involves the formation of a multiprotein complex termed pre-replicative complex (Pre-RC) on the potential origins and is called origins\u2019 licensing, followed by the sequential activation of a subset of origins known as origins\u2019 firing , as both populations have similar cell cycle duration. It is also established that a short G1 phase is an intrinsic property of multipotency as both in ESCs and NECs G1 length increases upon fate commitment . The potlication . Moreovelication .These observations suggest that NECs might also require similar adaptations in the licensing of DNA replication due to their shortened G1 phase, while these features are probably absent from more committed NPCs defined by a longer G1 challenges the successful completion of DNA replication compromising genome stability . NPCs th neurons . Accordi neurons . It thusDuring the expansion phase of the developing cortex, NSCs utilize a high number of origins during DNA replication due to their short cell cycle . 
Howeverin vitro systems that recapitulate the progressive fate commitment of mouse and human ESCs confirmed the relation between replication timing and gene expression and showed that changes in the timing of replication coordinate with transcriptional activation is of the main syndromes that has set the link between licensing of DNA replication and brain development. MGS is an autosomal recessive type of primordial dwarfism due to developmental retardation, characterized by proportionate growth deficits, incomplete patellae formation and typical facial features . Severitpatients . The imppatients . Studiespatients . Similarpatients .The microcephaly observed in MGS patients emphasizes the role of proper licensing of DNA replication during brain development . It is eToxoplasma gondii have been also associated with microcephaly is one of the most known aetiologies of infectious microcephaly, however, congenital infections including the cytomegalovirus or the parasite ocephaly . The impocephaly . Initialocephaly . Interesocephaly . Moreoveocephaly . In lineocephaly . The posDuring the last decade the importance of the dynamic regulation of DNA replication during organismal development has been established . In the Moreover, adaptations in the number of licensed origins and in the replication timing profile permit the integration of the active transcriptional program that operates in NSCs during cortical development. How the plasticity of DNA replication initiation is regulated within the complicated environment of the developing brain and whether it is determined by signaling cues or it is an inherent feature of NSCs remain important questions.Mutations in the genes that express licensing factors have been associated with developmental retardation characterized by microcephaly, stressing the significance of efficient licensing and accurate initiation of DNA replication during brain development. A future direction will be the establishment of animal models for impaired-licensing syndromes, that will permit the detailed analysis of brain development under these conditions. Further insights into the response of NSCs upon DNA replication stress will be critical to our understanding of the pathophysiology of neurodevelopmental syndromes. Accordingly, the possible effects of ZKV in the normal progression of DNA replication is a very intriguing scenario that could explain the preference of the virus in early proliferating NSCs. Deciphering the role of the differential regulation of DNA replication will provide new grounds for research on the mechanisms leading to brain malformations like the hereditary orinfectious microcephaly.AK provided the study material and wrote the manuscript. ZL performed proofreading of the manuscript and approved final version of the manuscript. ST wrote the manuscript, approved final version of the manuscript, and provided financial support.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "This session will address the evolving work of Geriatric Workforce Enhancement Programs (GWEP) to educate and train the primary care workforce and partner with community based organizations to address gaps in healthcare. The inaugural GWEP awardees were announced at the 2015 White House Conference on Aging. In November 2018 the Health Resources and Services Administration (HRSA), released the second GWEP Notice of Funding Opportunity (NoFO). 
The new funding opportunity reflects current policy priorities. In addition, for the first time in 9 years a Notice of Funding Opportunity was released for the Geriatric Academic Career Award (GACA) program. The return of the GACA program indicates the success of advocacy efforts to increase programs to support geriatric training and the development of geriatric academic professionals. Individual symposiums will explore the policy priorities reflected in the GWEP NoFO including the use of technology for training and care delivery, the age-friendly healthcare and dementia-friendly community initiatives, and value-based care including improving partnership with community based organizations. Additionally, an individual symposium will present the GACA program and discuss the advocacy process leading to the reintroduction of this program. During the last congress advocates of the GWEP and GACA programs supported authorization and funding of geriatric programs. Geriatrics programs describing the work of the GWEPs and GACAs were reinserted into the legislative process, although this reauthorization was not completed. This advocacy process garnered strong bipartisan and agency support. The session will describe these efforts and will conclude with an update on current authorization and appropriation status."} +{"text": "Background: The SAFE strategy is the World Health Organization (WHO) recommended guideline for the elimination of blindness by trachoma by the year 2020.Objective: While evaluations on the implementation of the SAFE strategy have been done, systematic reviews on the factors that have shaped implementation are lacking. This review sought to identify these factors.Methods: We searched PUBMED, Google Scholar, CINAHL and Cochrane Collaboration to identify studies that had implemented SAFE interventions. The Consolidated Framework for Implementation Research (CFIR) guided development of the data extraction guide and data analysis.Results: One hundred and thirty-seven studies were identified and only 10 papers fulfilled the eligibility criteria. Characteristics of the innovation \u2013 such as adaptation of the SAFE interventions to suit the setting and observability of positive health outcomes from pilots \u2013 increased local adoption. Characteristics of outer setting \u2013 which included strong multisectoral collaboration \u2013 were found to enhance implementation through the provision of resources necessary for programme activities. When community needs and resources were unaccounted for there was poor compatibility with local settings. Characteristics of the inner setting \u2013 such as poor staffing, high labour turnovers and lack of ongoing training \u2013 affected health workers\u2019 implementation behaviour. Implementation climate within provider organisations was shaped by availability of resources. Characteristics of individuals \u2013 which included low knowledge levels \u2013 affected the acceptability of SAFE programmes; however, early adopters could be used as change agents. Finally, the use of engagement strategies tailored towards promoting community participation and stakeholder involvement during the implementation process facilitated adoption process.Conclusion: We found CFIR to be a robust framework capable of identifying different implementation determinants in low resource settings. However, there is a need for more research on the organisational, provider and implementation process related factors for trachoma as most studies focused on the outer setting. 
Trachoma is the leading infectious cause of preventable blindness and is thought to be a public health issue in 42 countries globally. It is caused by recurring infection by the bacterium Chlamydia trachomatis. Trachoma is characterised by multiple stages classified according to whether signs are associated with (i) active infection commonly observed in children \u2013 trachomatous inflammation-follicular (TF) and trachomatous inflammation-intense (TI) or (ii) with corneal scarring observed in older children and adults \u2013 trachomatous scarring (TS), trachomatous trichiasis (TT) and corneal opacity (CO). Almost 200 million people are thought to be living in trachoma-endemic regions, with 1.9 million people suffering from blindness and visual impairment as a result of trachomatous trichiasis and resultant corneal opacities. A resolution passed by the World Health Assembly in 1998 called for the elimination of blinding trachoma by the year 2020. The World Health Organization (WHO), through the Global Alliance for the Elimination of Trachoma by the year 2020 (GET 2020), advocates the implementation of the full SAFE strategy, denoting surgery for trichiasis, antibiotic distribution, facial cleanliness and environmental improvement, in countries rolling out national trachoma control programmes. There has been a marked increase in the implementation of the SAFE strategy in endemic countries, which can be attributed to increased political commitment towards neglected tropical disease elimination programmes and to global efforts to map the burden of trachoma, resulting in the availability of accurate data that guide control efforts. The aim of this review was to apply the Consolidated Framework for Implementation Research (CFIR) to identify factors acting as barriers and facilitators to the implementation of the SAFE strategy. The selection of CFIR was based on its ability to be applied in conducting comprehensive context assessment across multiple levels of the healthcare system. In conducting the review we identified primary research that reported on influencing factors and used CFIR\u2019s five major domains \u2013 inner setting, outer setting, process, characteristics of the intervention and characteristics of the individuals \u2013 to analyse the implementation efforts of the SAFE strategy across multiple settings. We conducted an initial scoping literature search to select the most appropriate key terms and search strategies that would meet the review\u2019s objective. A major consideration when developing the final strategy was to make sure that it was highly sensitive, so as to identify all existing literature on perceived barriers and facilitators to SAFE implementation. In our preliminary search we found that the assessment of trachoma interventions was sometimes conducted alongside that of other ocular diseases, and that perceived determinants of implementation were not always the primary outcomes of interest in studies evaluating SAFE interventions. These studies reported on factors affecting different dimensions of implementation without necessarily referring to them as acceptability, adoption, appropriateness, cost, fidelity, penetration and sustainability. Primary research studies that had been published in peer-reviewed journals were included in the review regardless of study design. The articles should have been published between 1998, when the SAFE strategy was adopted as the main WHO guideline for trachoma control, and January 2017.
The studies must have reported on perceived barriers and facilitators during any of the different phases of implementation. Facilitators and barriers were defined as any factors that promoted or hindered the implementation of any of the SAFE interventions \u2013surgery, antibiotic administration, facial cleanliness or environmental sanitation.The search was restricted to peer-reviewed articles published in English. Editorial and personal opinion articles were excluded. Studies that measure changes in trachoma health literacy, knowledge, awareness and attitudes but do not mention the implementation of SAFE were also excluded from the review.PM conducted the database searches and imported eligible studies into EndNote for reviewer access. The selection of studies was conducted using the Preferred Reporting Items for Systematics Reviews and Meta-Analyses (PRISMA) guidelines. In accordance with the guidelines from the initial 117 studies identified, 20 duplicates were excluded. PM, CJ and JMZ independently screened the titles and abstracts of the articles using the inclusion/exclusion criteria. Wherever discrepancies arose they were discussed and decisions on inclusions made jointly. All the articles that were excluded based on their titles (80) either measured health literacy in general or health literacy, knowledge, awareness and attitudes for specific ocular diseases that were out of the scope of this review such as glaucoma, diabetic retinopathy and age-related macular degeneration. The reference lists of the remaining articles were checked to identify any relevant articles that may not have arisen in the electronic search. A total of 37 abstracts were screened. Given that implementation determinants were not always the primary focus of the studies, abstracts that did not provide a lot of information but were felt to potentially meet the inclusion eligibility criteria were added to those going through a full text review. The full text review of 20 studies was focused mainly on (i) statistical analyses from quantitative findings, (ii) participant quotations in the results section and (iii) interpretive descriptions in the methods and discussion sections. Ten studies were found suitable for inclusion in the review as shown in The heterogeneous nature of the studies that were included in the review called for the use of the Critical Checklist for Public Health. The quality of the studies was gauged based on the validity, credibility and completeness of information presented as it related to (i) the study question; (ii) the study design, sampling, exposures, outcomes, confounders and other aspects of internal validity; (iii) interpretation and population relevance of the results; and (iv) the implications for implementation in their own population and public health practice . Risk ofGuidance on the Conduct of Narrative synthesis in Systematic Reviews was used as the guiding framework for the data extraction and synthesis. The narrative synthesis approach which uses qualitative text to summarise and explore the relationships between data from qualitative and quantitative data was considered appropriate for this review due to the heterogeneity of the studies identified . A good http://www.cfirguide.org/tools.html and used for coding and data extraction. PM, CJ and JMZ independently identified and extracted data and coded it based on the CFIR constructs they were judged to represent. Codes thought to occur in multiple domains were double coded. 
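A minimal sketch of the screening arithmetic reported above may be helpful before turning to the coding workflow. All counts are taken from the text; the variable names and consistency checks are illustrative assumptions, and we assume the 20 reference-list records entered at the abstract-screening stage, which is consistent with the stated totals.

```python
# Illustrative tally of the PRISMA-style screening flow described above.
# Numbers come from the text; names and checks are ours, not part of the review.

database_records = 117       # PUBMED, Google Scholar, CINAHL, Cochrane
reference_list_records = 20  # identified from reference lists
duplicates_removed = 20
title_excluded = 80
abstracts_screened = 37
full_text_assessed = 20
included = 10

# Records remaining after de-duplication and title screening of database hits
after_deduplication = database_records - duplicates_removed   # 97
after_title_screen = after_deduplication - title_excluded     # 17

# Reference-list records are assumed to join before abstract screening
assert after_title_screen + reference_list_records == abstracts_screened  # 17 + 20 = 37
assert abstracts_screened - full_text_assessed == 17  # excluded at abstract stage
assert full_text_assessed - included == 10            # excluded at full-text stage

print(f"Total identified: {database_records + reference_list_records}")  # 137
print(f"Included in the narrative synthesis: {included}")
```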
Comparisons on the coding was done by all the authors and wherever discrepancies arose they were discussed and final codes agreed to by consensus. Using the memo templates textual descriptions and summaries (narratives) with supporting information were recorded for each of the studies. All the authors reviewed the summary memos.A deductive approach using CFIR as the coding framework was used during the preliminary synthesis to map the data. Extracted data included authors, year of publication, study objectives and design, characteristics of the study population, analysis, results and interpretive summaries. Codebook and memo templates with guidance on construct definition were obtained online at Grouping and clustering of the studies was done to allow cross literary comparisons through pattern identification across studies. Thereafter thematic analysis was used to identify the key determinants (themes) within the five CFIR domains. The process of translating the data made it possible for us to explore the differences and similarities in factors identified in the studies. The final stage of the narrative synthesis was the evaluation of the robustness of the analysis process which was done by looking at whether or not the factors identified were credible and comparable to the available implementation science literature as well their appropriateness in answering the review question.Our search yielded 137 articles: 117 from the different search engines and 20 from the reference lists. PRISMA guidelines were used to identify studies that met our specific research objectives . Only 10None of the studies specifically listed implementation outcomes as defined by Proctor et al. as their outcomes of interest but reported on aspects related to their measurement as shown in Two studies reported on varying participation rates (coverage) that are an indication of the feasibility of the implementation efforts ,8. PenetThe different implementation strategies were categorised using the framework described by Wang et al. . VariousThe implementation determinants were grouped into the five major domains of CFIR.The SAFE strategy is a guideline that was developed to address the problem of trachoma on a global/macro level for United Nations member states by the WHO through the Alliance for the Global Elimination of Trachoma by the year 2020. All the studies identified in this review made mention of national trachoma control programmes or other implementing bodies that had adapted the SAFE strategy for their specific contexts. Wherever development and implementation of SAFE programmes was seen as locally driven, such as through the involvement of community organisations, there were higher levels of acceptability. This could be due to an increased sense of ownership and commitment ,18. UtilObserved positive results or stakeholders\u2019 belief in the strength of the evidence base to justify the implementation and subsequent use of SAFE interventions were shown to affect the way certain aspects of the strategy were implemented. In Zambia the decision to use roxithromycin instead of azithromycin for mass drug administration was based on the clinical experience of one of the investigators who had been successfully using it to treat trachoma . The incThe delivery of the SAFE strategy was done using a variety of implementation strategies. 
Social marketing communication approaches that took into account the unique features of the local communities were found to motivate change by making it easier to discuss changes in hygiene practices ,16,18,20The feasibility of scaling up of the implementation of SAFE was explored in one study which provided strong evidence that the different interventions were effective in reducing the prevalence of active trachoma in a hyper endemic region. Consequently, the Government of Zambia used these findings to begin a national trachoma initiative as a preamble to the creation of a national trachoma eradication programme .An understanding of the needs and resources of communities who would be the recipients of the intervention was found important in ensuring appropriate alignment of SAFE related activities. Key needs of the patients included inadequate water supply and poor sanitation. This was addressed through drilling of wells and building latrines to encourage uptake of better hygiene practices ,15. IntrStrong collaboration between different government agencies and non-governmental organisations was found to be important for the provision of technical and financial support to national trachoma control programmes . DiffereIn areas where all the components of the SAFE strategy had been implemented but there was lack of continuity of care due to poorly developed healthcare systems, trachoma control was affected especially if services such as trichiasis surgery and antibiotics could only be obtained from health facilities . High laImplementation climate was measured by how receptive individuals within implementing agencies were to the SAFE intervention and how the intervention activities were aligned with the routine practices of the organisation. Lack of confidence in the quality of training in trachoma control affected how able the health workers felt they were in implementing the SAFE strategy . TeacherFinancial and technical resources from non-governmental organisations and international donor organisations channelled through government agencies specifically for trachoma control programmes made it possible for successful roll-out of the SAFE strategy \u201320,22,24The attitudes of targeted populations to the different components of the SAFE strategy and their understanding of the underlying mechanisms through which control strategies work affected effective adoption. In some Gambian communities trachoma was not thought to be infectious and trichiasis was taken to be a symptom of other illnesses, which led to lack of skilled use of SAFE interventions . The norIndividuals who were found to be more confident in their skills and understanding of trachoma control measures are more likely to have better health-seeking behaviour and take up good hygiene practices . Poor awImplementation progress of SAFE interventions was tracked by determining an individual\u2019s proficiency in utilising the different interventions. In one case study participants stopped applying ointment for the prevention of active inflammatory infection as soon as they felt that their symptoms had improved, increasing their chances of reinfection .Developing a course of action to guide the implementation of SAFE activities is thought to increase implementation effectiveness. 
As a tool for helping to identify how the different components of the strategy would be put in place, all the studies identified in this review used epidemiological data on prevalence rates ,15\u201322,24Choosing the most appropriate individuals to implement and use an intervention was shown to influence how well it would be received. In the Gambia elderly women who are the custodians of community practice and recipients of lid surgery for trichiasis were found to be instrumental in mobilising young mother to take up SAFE interventions . Their pEvaluating implementation efforts allows the involved stakeholders to learn from their experiences while at the same time mapping the progress they have made towards their implementation goals. In Mali and Guinea Bissau community members were found to have incomplete understanding of certain aspects of trachoma and its spread and control even after SAFE had been implemented, pointing towards the possibility that packaging and design of the health messages was not inappropriate ,21. The The SAFE strategy is a multifaceted intervention which combines multiple implementation strategies to address specific stages of trachoma . The hetintervention source, adaptability, cosmopolitanism, readiness for implementation, planning and engaging. These facilitators mainly fell under characteristic of intervention, outer setting and process domains of CFIR, whereas key barriers included structural characteristics, knowledge and belief about the intervention, self-efficacy and implementation climate representing two domains: inner setting and characteristics of the individual. All five domains of CFIR were found to be of importance during implementation. However, of the 39 constructs some were not identified during the review, including relative advantage, cost, culture, individual identification with organisation, networks and communication. Most of these constructs fell within the inner setting and characteristics of individual domains. This could be due to the nature of the\u00a0 studies included in the review rather than the role\u00a0 these two domains play in endemic regions. Future research could explore the use of determinant frameworks such as CFIR and the Theoretical Domains Framework to explore implementation of context-specific determinants.The key facilitators of implementation identified included The use of CFIR also made it possible to see the relationship between implementation strategies, the barriers and facilitators identified and their bearing on intervention and implementation outcomes. Implementation strategies used during the implementation of SAFE in the studies varied but they were tailored to the particular settings in which they were being introduced. Policy-driven strategies enhanced the implementation process by encouraging cosmopolitanism and promoting favourable implementation climate thus making the SAFE interventions appropriate to these settings \u201320,22,24Overall, our study identified salient factors across multiple settings that are necessary for successful implementation of the SAFE strategy such as community engagement and a desire for change. 
However, there is a need for more literature characterising the SAFE strategy itself, as well as the organisational and provider-level factors influencing implementation efforts, such as leadership engagement, design of the intervention and its quality, peer pressure, external policies, incentives for implementation and the effect of networks and communication."} +{"text": "Since the publication of the first thematic issue on mechanochemistry in 2017, the Beilstein Journal of Organic Chemistry continues to contribute to the dissemination of mechanochemistry in the field of organic chemistry through a new thematic issue, Mechanochemistry II. In this new collection of works, the readership of the Beilstein Journal of Organic Chemistry will find contributions from renowned global experts in the field of mechanochemistry, spanning areas from organic mechanochemistry and supramolecular mechanochemistry to polymer mechanochemistry. Moreover, findings reported in this current thematic issue also contribute to the expansion of synthetic chemistry methodology by mechanochemistry. Importantly, such rapid advancement in applied mechanochemistry is supported by investigations focused on better understanding the fundamental aspects governing mechanochemical transformations. Therefore, I hope the joint efforts made by the authors of these contributions could augment the consolidation of mechanochemistry as an alternative to continue with the development of chemical processes in a more efficient manner. Jos\u00e9 G. Hern\u00e1ndez, Aachen, June 2019"} +{"text": "To evaluate the inflammatory reaction and measure the content of mucins in the colonic mucosa without fecal stream submitted to intervention with mesalazine. Twenty-four rats were submitted to a left colostomy and a distal mucous fistula and divided into two groups according to whether euthanasia was to be performed at two or four weeks. Each group was divided into two subgroups according to the daily application of enemas containing saline or mesalazine at 1.0 g/kg/day. Colitis was diagnosed by histological analysis and the inflammatory reaction by a validated score. Acidic mucins and neutral mucins were determined with the alcian blue and periodic acid-Schiff techniques, respectively. Sulfomucins and sialomucins were identified by the high iron diamine-alcian blue technique. The tissue contents of mucins were quantified by computer-assisted image analysis. The Mann-Whitney test was used to analyze the results, establishing a significance level of 5%. Enemas with mesalazine in colonic segments without fecal stream decreased the inflammation score and increased the tissue content of all subtypes of mucins. The increase in the tissue content of neutral, acid and sulfomucins was related to the time of intervention. Mesalazine enemas reduce the inflammatory process and preserve the content of mucins in colonic mucosa devoid of fecal stream. Twenty-four male Wistar rats (300-350 g) were obtained from ANILAB, kept in a barrier facility, maintained on 12-hour light/dark cycles, and fed a standard rodent chow diet. They were deprived of food, but not water, for 12 h prior to the surgical procedure. Diversion of the fecal stream was performed in all animals as previously described. Twenty-four animals were divided into two experimental groups of 12 animals each according to whether euthanasia was performed after 2 or 4 weeks of treatment.
These two experimental groups were divided into four subgroups with six animals each according to the intervention solution employed and time of intervention. In the first and second subgroups, six animals received daily rectal enemas containing 20 ml of saline (control subgroup) at 37\u00baC, and six received daily rectal enemas containing 20 ml of MEZ at concentration of 100 mg/kg for 2 weeks. In the 12 remaining animals of the second group, six rats received daily rectal enemas with saline and six with MEZ at same concentration for 4 weeks. In order to standardize the speed and time of application, the enemas were administered in all animals with an infusion pump whose speed was standardized at 5/ml/min. Upon completion of the pre-determined irrigation period, the animals were anesthetized with the same technique used to diversion of the fecal stream, and the midline incision was opened again. In both groups, specimens were taken from the intra-abdominal part of the excluded colon . The removed specimens of the colon without fecal stream, measuring approximately 6.0 cm each, was longitudinally opened through the anti-mesenteric border fixed in a piece of cork and referred to histological and histochemical analysis. Fragments prepared for histological analysis were immersed in 10% neutral formalin buffer for 24 h, dehydrated by exposure to increasing ethanol concentrations and embedded in paraffin. Thereafter, sections of tissue were cut at 5 \u00b5m on a rotary microtome , mounted on a glass slide, cleared, hydrated and stained with hematoxylin-eosin (HE) for evaluation of the presence of colitis. Slide analysis was performed with optical microscope with final magnification of x200. To establish the diagnosis of colitis, as well as the degree of inflammation, the histological slides were analyzed by two blinded observers. The diagnosis of colitis was made based on presence of five independent histological parameters: reduction of the crypt length, number of goblet cells, crypt abscess, intensity of neutrophil infiltration of the mucosa, and epithelial loss. These variables were stratified as crosses, according to the degree of each, as follows: 0: absent or no alterations; (+): when intensity was mild; (++): moderate and, (+++): intense. For all variables analyzed, the final value considered for each animal was the mean value after quantification of three distinct histological fields. The inflammatory score for each animal was obtained from the sum of the five variables analyzed. The tissue expression of the neutral and acid mucins was determined by histochemical technique of Periodic Acid Schiff (PAS) and Alcian-blue (AB), respectively, as previous report The neutral and acid mucin content, as well the tissue content of sulfomucins and sialomucins, was quantified by means of computer-assisted image processing and was always performed in a focal field in which there were at least three complete and contiguous colonic glands, at a magnification of x200. A pathologist with experience of IBD, who was unaware of the origin of the material and the objectives of the study, evaluated the content of tissue expression of all types and subtypes of mucins. The images selected were captured on a video camera that had been coupled to an optical microscope. These images were processed and analyzed using the NIS-Elements 3.1 software . 
By means of colored histograms in the red, green and blue (RGB) system, the software determined the color intensity and the number of pixels in each selected field, and the final data were transformed into the percentage expression of mucins per field analyzed (%/field). The final value measured for each section was the mean of the values found from evaluating three different fields. The tissue content of sulfomucins and sialomucins was quantified in the same colonic glands. The results obtained for the tissue contents of all types and subtypes of mucins were always described by the mean value with the respective standard error. A significance level of 5% (p<0.05) was adopted for all tests. The Mann-Whitney test was used to compare the degree of inflammation and the tissue content of all types of mucins between animals from the control and experimental (MEZ) groups, and to compare colon segments submitted to intervention with saline or MEZ for two or four weeks. BioStat software (version 5.1) was used for the statistical analysis. Significant values found when colon segments without fecal stream irrigated with saline were compared with those irrigated with MEZ were marked with one asterisk (*) when this level was less than 5% or with two asterisks (**) when it was less than 1%. Similarly, significant values found when animals submitted to the intervention with saline or MEZ for two weeks were compared with those treated for four weeks were marked with one ticket (\u2022) when this level was less than 5% or two tickets (\u2022\u2022) when it was less than 1%. The mean values, with the respective standard error, for the inflammatory score and the tissue content of neutral mucins, acid mucins, sulfomucins and sialomucins in colon segments without fecal stream, for the animals submitted to intervention with saline or MEZ for two or four weeks, are shown in the corresponding figures. The colonic mucosal barrier is made up of epithelial and immune cells which together form a barrier to harmful agents. The colonic epithelium is covered by a thick layer of mucus that serves as a first line of the colonic epithelial defense system. Short-chain fatty acids (SCFAs) have been reported to increase MUC2 gene expression and stimulate mucin synthesis, thereby increasing the thickness of the mucus layer that covers the colonic epithelium. There is considerable evidence suggesting that SCFAs reinforce the colonic defense barrier mainly by stimulating the synthesis of mucins. A deficiency of SCFAs in colonic mucosa cells could potentially increase the production of reactive oxygen species (ROS), resulting in the onset of mucosal inflammation. Experimental studies have evaluated the effects of oxidative stress, and of substances with antioxidant properties, on the tissue content of neutral and acidic mucins in the colonic mucosa with and without fecal stream, including after ex vivo challenge with H2O2. The results of the present study seem to confirm the antioxidant action of MEZ in protecting the colonic mucosa devoid of the fecal stream. Enemas with MEZ were able to reduce mucosal inflammation and maintain the tissue content of neutral and acid mucins despite the lack of supply of SCFAs. Regardless of the time of intervention adopted, the tissue content of neutral and acidic mucins was always higher in animals treated with MEZ than in those treated with saline. There was an increase in the tissue content of acid mucins with the intervention time. Animals treated with MEZ also had an improvement in the inflammatory score when compared to those treated with saline. It is possible that these findings are related to the reduction of the levels of oxidative damage.
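To make the quantification and comparison steps described in the methods concrete, the sketch below shows one way the analysis could be reproduced: a composite inflammation score as the sum of five parameters (mapping the 0/+/++/+++ grades to 0-3 and averaging three fields per animal), a percent-positive-pixel estimate of mucin content per field, and a Mann-Whitney comparison at the 5% level. The function names, the per-pixel classification rule and the example data are illustrative assumptions, not the NIS-Elements or BioStat procedures actually used.

```python
# Illustrative re-implementation of the quantification and statistics described above.
# Thresholds, names and example data are assumptions; the original work used
# NIS-Elements 3.1 for image analysis and BioStat 5.1 for statistics.
import numpy as np
from scipy.stats import mannwhitneyu

def inflammation_score(fields):
    """fields: list of dicts grading five parameters 0-3 for each histological field.
    The per-animal score is the sum of the five parameters, each averaged over
    three fields, as described in the methods."""
    params = ["crypt_shortening", "goblet_cell_loss", "crypt_abscess",
              "neutrophil_infiltrate", "epithelial_loss"]
    return sum(np.mean([f[p] for f in fields]) for p in params)

def percent_positive_pixels(rgb_field, is_positive):
    """Percentage of stained (mucin-positive) pixels in one RGB field (%/field).
    `is_positive` is an assumed per-pixel rule standing in for the RGB-histogram
    segmentation performed by the imaging software."""
    mask = np.apply_along_axis(is_positive, 2, rgb_field)
    return 100.0 * mask.sum() / mask.size

# Example: percentage of alcian-blue-stained (acid mucin) pixels per animal,
# saline vs. mesalazine (MEZ) subgroups -- purely invented numbers.
saline = [12.1, 10.4, 9.8, 11.5, 13.0, 10.9]
mez    = [18.7, 21.3, 19.9, 22.4, 20.1, 17.8]

stat, p = mannwhitneyu(saline, mez, alternative="two-sided")
marker = "**" if p < 0.01 else "*" if p < 0.05 else "ns"
print(f"Mann-Whitney U = {stat:.1f}, p = {p:.4f} ({marker})")
```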
In a previous study, utilizing the same group of animals, we showed that the application of enemas with MEZ was able to reduce the levels of oxidative damage to the DNA of the isolate cells obtained from colonic segments devoid of fecal transit, even after et al.,,- Keli , The results of this study seems to confirm the action of MEZ in maintain the tissue content of sulfomucins and sialomucins in colonic mucosa devoid of the fecal stream. Regardless of the time of intervention adopted, the tissue content of sulfomucins and sialomucins mucins was always higher in animals treated with MEZ when compared with those treated with saline. There was an increase in the tissue content only of sulfomucins with the intervention time. Although the sialomucins are the subtype of acid mucin that presents the greatest reduction of its contents in colonic mucosa excluded from fecal stream, the tissue content of sialomucins was always higher in animals treated with MEZ when compared with those treated with saline. These same results were found by other authors who used enemas with antioxidant substances in the mucosa devoid of fecal stream,,,,,, These results reinforce the possibility that oxidative stress caused by overproduction of ROS by epithelial cells devoid of fecal stream may be related to the reduction of the mucins contents. The data confirm the benefits of MEZ in the treatment of DC, as demonstrated by its therapeutic effects at increasing the production of the all types and subtypes of mucins studied. Previous experimental studies showed that the use of enemas containing MEZ could reduce levels of oxidative damage in cells of chronically inflamed colonic mucosa, and may be this one of the most important mechanism related to reduction of the mucins content Daily enemas with MEZ can improve the inflammatory process and increase the tissue content of mucins in colonic mucosa devoid of the fecal stream in an experimental model of diversion colitis."} +{"text": "Until recently, trophoblast invasion during human placentation was characterized by and restricted to invasion into uterine connective tissues and the uterine spiral arteries. The latter was explained to connect the arteries to the intervillous space of the placenta and to guarantee the blood supply of the mother to the placenta. Today, this picture has dramatically changed. Invasion of endoglandular trophoblast into uterine glands, already starting at the time of implantation, enables histiotrophic nutrition of the embryo prior to perfusion of the placenta with maternal blood. This is followed by invasion of endovenous trophoblasts into uterine veins to guarantee the drainage of fluids from the placenta back into the maternal circulation throughout pregnancy. In addition, invasion of endolymphatic trophoblasts into the lymph vessels of the uterus has been described. Only then, invasion of endoarterial trophoblasts into spiral arteries takes place, enabling hemotrophic nutrition of the fetus starting with the second trimester of pregnancy. This new knowledge paves the way to identify changes that may occur in pathological pregnancies, from tubal pregnancies to recurrent spontaneous abortions. The last years have seen a massive evolution of omics and sequencing technologies that finally allow single-cell sequencing and profiling. Single-nucleus RNA sequencing per droplet (DroNc-Seq) has been proposed in 2002 . 
The disadvantage of using a cytokeratin antibody is that it also identifies other epithelial cells such as the epithelium of uterine glands. Hence, a missing differentiation between invading trophoblasts and glandular epithelial cells may have misled scientists , but also differs in a variety of features (B\u2013E):(A)Interstitial trophoblasts invade through the uterine interstitium, finally reaching the outer aspects of the arterial wall. Endoarterial trophoblasts percolate through the vessel wall, restructure the arterial media, displace the arterial endothelium and finally migrate into the lumen of the arteries During the first trimester, the endoarterial trophoblasts do not only reach and line the inner wall of the arteries, but also fill the lumen by building plugs of endoarterial trophoblasts Different from veins and glands, arterial invasion by endoarterial trophoblasts leads to transformation of the vessel walls resulting in large conduits that are no longer under vasomotor control of the mother.(D)Different from veins and glands, flow of maternal blood through arteries into the intervillous space of the placenta only starts at the beginning of the second trimester Different from veins and glands, endoarterial trophoblasts also invade into the deeper regions of the arterial walls, finally reaching the portions of the arteries within the inner third of the myometrium. Veins and glands only need to be connected to the intervillous space of the placenta, while arteries need to be connected plus transformed and widened to enable a slow inflow of maternal blood into the intervillous space from the subclavian veins in the junction with the jugular veins . As described above, extravillous trophoblasts are known to invade uterine spiral arteries starting at around mid-first trimester and lymphatics (endolymphatic trophoblast) and may well be washed into the maternal blood stream. Thus, they could be responsible for the presence of fetal cells and DNA in maternal blood from week 6 onward is defined by the occurrence of three or more consecutive miscarriages before 20\u00a0weeks of gestation. In cases of idiopathic RSA, the role of arterial trophoblast invasion has historical roots but still remains controversial than healthy controls (Windsperger et al. Implantation sites within the uterus and the fallopian tube are similar in composition and cell density of the tissues comprising the surface epithelium and the lamina propria of the tubal or uterine wall containing connective tissue, glandular and surface epithelium, trophoblast cells and lymphocytes (Kemp et al. Extravillous trophoblasts invade into much more luminal structures in the maternal decidua than was known until today. It seems as if extravillous trophoblasts invade into each luminal structure that is present along their invasive pathway. Hence, it seems as if there is no restriction of trophoblast invasion in terms of specificity of invaded structures. The elucidation of alterations of trophoblast invasion in pregnancy pathologies has only just begun. The following years will tell how many of the alterations have been missed so far and may explain pathologies such as recurrent spontaneous abortions and fetal growth restriction."} +{"text": "The Kentucky (KY) Rural & Underserved Geriatric Interprofessional Education Program (KRUGIEP) participated in a unique innovative approach to the implementation of the Medicare Annual Wellness visit in collaboration with the Dartmouth GWEP. 
The model focused on the integration of students into the process of conducting the Health Risk Assessment through community based home visits. This talk will focus on this unique program and the participation in a multi-GWEP learning collaborative."} +{"text": "The facial artery is the main artery of the face and variations in its origin and its branching pattern have been documented. We report herein multiple facial artery branch variations in the face. A large posterior (premasseteric) branch originated from the left facial artery and coursed upwards behind the main trunk of the facial artery. This artery presented with a straight course and was closely related to the anterior border of the masseter. The branch then terminated by supplying the adjacent connective tissue below the parotid duct. It was also observed that the facial artery was very thick and tortuous and terminated as the superior labial artery. Knowledge of this variation is of great clinical significance in facial operations, especially for maxillofacial surgeons and plastic surgeons, because it forms the anatomical basis for the facial artery musculo-mucosal flap. The face is supplied by branches of the facial artery and the superficial temporal arteries. The facial artery originates in the neck, arising from the external carotid artery, and terminates at the medial angle of the eye. This artery\u2019s tortuosity allows it to stretch during various movements of the jaw. The branches of the facial artery in the face are the superior labial artery, inferior labial artery, and the lateral nasal artery, which supply the muscles and skin of the face. It also gives off a few small unnamed branches posteriorly.The facial artery presents variations with regards to its origin, termination, course, and branching pattern. We report herein a variant termination of the facial artery and the occurrence of a large posterior branch of the facial artery known as the premasseteric artery. This is a minor inconsistent branch,During dissections for undergraduate medical students, we found unusual variations in the branching pattern and termination of the left facial artery in the face. These variations were observed in a male cadaver aged about 65-70 years. The left facial artery presented a large posterior branch (premasseteric branch) which coursed upwards parallel to the facial artery . After iVariations related to the facial artery or its branches have two aspects of interest: In operations of the face and lip, during reconstructive and reparative procedures, and in radiologic anatomy, in the field of malignancy for the treatment of some facial tumors by embolization. There are reports in the literature of variations in the origin, termination, and branches of the facial artery. The premasseteric branch is an uncommon branch that was first described by Adachi in 1928.Bayram et al. has studied variations of the facial artery in fetuses.The face is richly vascularized and so construction of several facial flaps is possible. Reconstruction of lip defects using procedures like the Abbe flap and other lip flap procedures involves surgical manipulation of one of the major facial artery branches, mainly the superior labial artery.Maxillofacial and plastic surgeons must have detailed knowledge of such variations before deciding on any grafts or surgical interventions on the face. 
Knowledge regarding such variations is also imperative for general surgical practitioners and specialists, for the effective management of any injuries, or correction of congenital anomalies like cleft lip and palate. Such anomalous variations therefore warrant use of a noninvasive in vivo technique for evaluation of the facial artery anatomy in order to facilitate preoperative planning in complex facial reconstructions."} +{"text": "This review describes the preparation of the hSV as a bypass conduit in CABG and its performance compared with the ITA as well as how and why its patency might be improved by harvesting with minimal trauma in a way that preserves an intact vasa vasorum.The saphenous vein (SV) is the most commonly used conduit for revascularization in patients undergoing coronary artery bypass surgery (CABG). The patency rate of this vessel is inferior to the internal thoracic artery (ITA). In the majority of CABG procedures the ITA is removed with its outer pedicle intact whereas the (human) SV (hSV) is harvested with pedicle removed. The vasa vasorum, a microvessel network providing the adventitia and media with oxygen and nutrients, is more pronounced and penetrates deeper towards the lumen in veins than in arteries. When prepared in conventional CABG the vascular trauma caused when removing the hSV pedicle damages the vasa vasorum, a situation affecting transmural flow potentially impacting on graft performance. In patients, where the hSV is harvested with pedicle intact, the vasa vasorum is preserved and transmural blood flow restored at graft insertion and completion of CABG. By maintaining blood supply to the hSV wall, apart from oxygen and nutrients, the vasa vasorum may also transport factors potentially beneficial to graft performance. Studies, using either corrosion casts or India ink, have shown the course of vasa vasorum in animal SV as well as in hSV. In addition, there is some evidence that vasa vasorum of hSV terminate in the vessel lumen based on In a recent issue of Nature Reviews in Cardiology de Vries and colleagues discuss the use of the saphenous vein (SV) as a conduit for surgical revascularization in patients undergoing coronary artery bypass graft surgery (CABG) endothelial integrity is maintained even in the absence of intraluminal blood flow . However, it is important to stress that at present little is known regarding the luminal termination of vasa in hSV used for CABG. The most compelling evidence has been previously described in canine SV, where it is known that such terminations of vasa at the SV lumen can be opened by a complex structure \u2013 the agger, which is sensitive to changes in blood turbulence and conditions affecting flow through vasa microvessels blood. The vasa vasorum of veins are classically described as being situated within the adventitia and the media Ham with somThe study by Loop et al. showed tThe advantages of the NTSVG technique is becoming recognised and the technique adopted by an increasing number of cardiac centres worldwide. Indeed, the recent review by deVries and colleagues may be limited and may not reflect the true physiological events that occur in vivo hSV when used in CABG patients.Whereas the majority of studies relating to the effect of hypoxia in vessel pathology have been mostly on arteries, some have also included hSV material. 
The importance of maintaining blood and oxygen supply to all layers of the vessel wall has recently been supported by an elegant A porcine model of SV into artery interposition grafting has been described where the effect of external synthetic stents and sheaths on vein graft remodelling and thickening have been studied (Violaris et al. While a considerable amount of data has been published regarding the role of the vasa vasorum in pathological conditions in human arteries, in particular the epicardial coronary arteries, there is less information available on the SV. Reducing medial blood flow and oxygen supply by occluding the vasa vasorum in arteries causes neointima formation and atherosclerosis via the action of a vast number of hypoxia-induced factors. Since the role of the vasa vasorum in providing oxygen to the wall of the SV is more important than in arteries it seems reasonable to assume that anything reducing blood flow through this microvessel network will produce similar, or more serious, consequences. The considerable damage caused to the vasa vasorum when using CT harvesting described in earlier sections of this review will reduce medial blood flow stimulating a plethora of hypoxia-induced factors involved in vein graft failure Fig.\u00a0. If the The vasa vasorum of the hSV plays a more important role in providing oxygen and nutrients to the vessel wall than in arterial bypass grafts, the ITA and RA. Consequently, this microvascular network is more prolific in the SV than the ITA or RA and extends deeper in the media with some evidence of terminations in the lumen. Furthermore, the connection of vasa vasorum to capillaries within the PVAT surrounding the hSV suggests that, apart from supplying blood, this microvascular network may potentially transport tissue- and/or cell-derived factors from the outermost to the innermost layers of this blood vessel or vice versa. The damage caused to the vasa vasorum of the hSV during conventional harvesting in CABG reduces transmural blood supply, a situation that promotes neointimal hyperplasia and atheroma formation, features associated with mid- and long-term graft patency. If the hSV is harvested with minimal vascular trauma, the vasa vasorum remains intact and arterial blood flow is restored at completion of graft insertion. In addition, an intact vasa vasorum may provide a system for the transport of factors and cell to cell communication across the hSV graft wall that are beneficial to improved graft performance."}